Increasing Transparency into What It Takes to Achieve Performance Gains of Machine Learning Algorithms

The computation required for deep learning research has been doubling every few months, resulting in an estimated 300,000x increase from 2012 to 2018. According to one article [1], AI could account for as much as one-tenth of the world’s electricity use by 2025.

AI papers tend to target accuracy rather than efficiency. The following figure shows the proportion of papers that target accuracy, efficiency, both, or other goals, based on a sample of 60 papers from top AI conferences [2].

[Figure: proportion of sampled papers targeting accuracy vs. efficiency; image from the Green AI paper [2].]

The Allen Institute for Artificial Intelligence (AI2) has proposed a new way to incentivize energy-efficient machine learning and to mitigate this trend: its researchers recommend that AI researchers always publish the financial and computational costs of training their models along with their performance results [2]. In their work, they propose the following Red AI equation:

Cost(R) ∝ E·D·H

The cost of an AI (R)esult grows linearly with the cost of processing a single (E)xample, the size of the training (D)ataset and the number of (H)yperparameter experiments.

Even though this equation ignores other factors, it highlights three quantities, each of which is an important factor in the total cost of generating a result.
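To make the relation concrete, here is a tiny illustrative sketch in Python; the function and all of its numbers are made up for the example:

```python
# Illustrative sketch of the Red AI relation Cost(R) ∝ E · D · H.
# The function and every number here are hypothetical.
def red_ai_cost(example_cost, dataset_size, hyperparam_runs):
    """Relative cost of producing a result, in arbitrary units."""
    return example_cost * dataset_size * hyperparam_runs

baseline = red_ai_cost(1.0, 1e6, 10)
# Doubling the dataset AND the number of tuning runs quadruples the cost:
scaled = red_ai_cost(1.0, 2e6, 20)
print(scaled / baseline)  # -> 4.0
```

Because the three factors multiply, a saving on any one of them (cheaper examples, a smaller dataset, or fewer tuning runs) cuts the total cost proportionally.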

The authors hope that increasing transparency into what it takes to achieve performance gains will motivate more investment in the development of efficient machine-learning algorithms [3].

The vision of Green AI raises many exciting research directions that could help overcome the inclusiveness challenges of Red AI. Progress will reduce computational expense with a minimal reduction in performance, or even improve performance as more efficient methods are discovered.

Please find more details in the references.

References:

  1. Is AI the next big climate-change threat?
  2. Green AI
  3. AI researchers need to stop hiding the climate toll of their work

Just Got a Reviewer Certificate from WIREs Data Mining and Knowledge Discovery

Thanks to the editors and board of WIREs for supporting me. As an independent reviewer, I will be fair to everyone and will never give in to the “scientific mafia” or “citation cartels”.

WIREs Data Mining and Knowledge Discovery (Impact Factor: 2.541)

[Image: WIREs reviewer certificate]

A Taste of TensorFlow on My Android Phone

If you like Google’s open-source machine learning framework, TensorFlow, do not miss “TensorFlow for Poets”. I went through the tutorial this afternoon and found it super awesome. See the photos below: I first tested it on the coffee mug from Aurecon Group, the company where I am interning. I used a Nexus 5X virtual device from Android Studio 3.0.1 on a MacBook Air 11″ (do not do this unless you have enough SSD space 😛).
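If you want to peek under the hood, here is a minimal sketch of how the retrained graph from the tutorial can be used to classify an image in Python. It assumes the TensorFlow 1.x API and the retrained_graph.pb / retrained_labels.txt files the tutorial produces by default; the image path is a hypothetical example:

```python
# Minimal sketch: classify an image with the graph retrained by the
# "TensorFlow for Poets" tutorial (TensorFlow 1.x API assumed).
import numpy as np
import tensorflow as tf

# Load the retrained graph (default output name from the tutorial)
with tf.gfile.GFile("retrained_graph.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
tf.import_graph_def(graph_def, name="")

# Load the labels, one per line
labels = [line.strip() for line in open("retrained_labels.txt")]

# Raw JPEG bytes; "coffee_mug.jpg" is a hypothetical example image
image_data = tf.gfile.GFile("coffee_mug.jpg", "rb").read()

with tf.Session() as sess:
    # "DecodeJpeg/contents:0" and "final_result:0" are the tensor names
    # used by the tutorial's retrained Inception graph
    preds = sess.run("final_result:0",
                     {"DecodeJpeg/contents:0": image_data})
    for i in np.argsort(preds[0])[::-1][:3]:
        print(labels[i], preds[0][i])
```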

 

[Slideshow: photos of the test on the Nexus 5X virtual device]

Then, I successfully installed the compiled app (TF_Classify) on my Xiaomi Mi 4c (MIUI 9.0, Android 7.0) and tested it on my coffee mug at home.
You can download and install it on your own Android devices from the following link:

Continue reading “A Taste of TensorFlow on My Android Phone”

Starting My First Internship in Melbourne, Australia Tomorrow

Dear All,

I am using the song above to thank you all for your help and support in the past. As you know, I have spent the last three years (2014-2017) pursuing my Ph.D. in Computer Science, and I plan to graduate in 2018.

Continue reading “Starting My First Internship in Melbourne, Australia Tomorrow”

Building ConvNets on the MNIST Dataset with TensorFlow and the New Windows 10 GPU Monitor

A few days ago, I updated my Windows 10 to version 1709 and found that Microsoft has added a GPU monitor to the Task Manager, which I think is awesome for ML developers and researchers.

Here is a screen capture of the official MNIST example running on TensorFlow-GPU on my desktop. It clearly shows that the GTX 960 uses about 3.5 GB of its 4.0 GB of memory to train the ConvNets, which is much faster than training on the CPU.

[Screenshot: Task Manager GPU monitor while training the MNIST ConvNets]

You can find more models in the TensorFlow Models repository, which contains a number of different models implemented in TensorFlow.
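If you just want a quick taste without cloning the repository, here is a minimal ConvNet on MNIST written with tf.keras; it is a simplified stand-in for the official example, not the actual code from the models repository:

```python
# Minimal ConvNet on MNIST with tf.keras -- a simplified stand-in for
# the official example, not the models-repository code.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0  # shape (60000, 28, 28, 1)
x_test = x_test[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 5, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 5, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1024, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=128, epochs=5,
          validation_data=(x_test, y_test))
```

With a CUDA build of TensorFlow installed, the same script picks up the GPU automatically, which is what the Task Manager capture above shows.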

 

What do I think about PyTorch and TensorFlow?

As we all know, TensorFlow is a very powerful and mature deep learning library with strong visualization capabilities and several options for high-level model development. PyTorch is still a young framework, but it is gaining momentum fast.

I strongly suggest CS and IT researchers/engineers learn both of them.

TensorFlow is a good option if you are developing models for production or for mobile platforms, and perhaps in the future for large-scale distributed model training. Because it has good community support and comprehensive documentation, it is easier to find answers and get help online.

Well, PyTorch is a good fit if you are doing research or if your production requirements are not very demanding.

Personally, I think PyTorch offers a better development and debugging experience.
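A small sketch of what I mean: PyTorch builds the graph as your Python code runs, so ordinary printing and debugging work in the middle of a forward pass. The toy network below is purely illustrative:

```python
# Toy example: PyTorch's define-by-run execution lets you inspect
# intermediate tensors with plain Python while the model runs.
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        h = self.fc(x)
        # Ordinary Python works mid-forward (print, pdb, asserts, ...):
        print("hidden mean/std:", h.mean().item(), h.std().item())
        return torch.relu(h)

net = TinyNet()
out = net(torch.randn(3, 4))  # prints as it executes
```

In graph-mode TensorFlow you would instead build the whole graph first and inspect tensors through a session, which is exactly why I find PyTorch friendlier to debug.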

Continue reading “What do I think about PyTorch and TensorFlow?”

TensorFlow Neural Network Playground in Matlab

The amazing website http://playground.tensorflow.org lets you play with a neural network right in your web browser. The GUI is mind-blowing, and you can download all the code to study it or to build your own project.

Now, the good news! Amro and Ray Phan have created a MATLAB version of the NN Playground, and it looks just like the GUI of the TensorFlow version. However, it is not TensorFlow-based; it is built on MATLAB’s Neural Networks Toolbox (>R2009b). The authors say they were inspired by the TensorFlow Neural Network Playground interface readily available online, so they created a MATLAB implementation of the same interface for using artificial neural networks for regression and classification of highly nonlinear data. Continue reading “TensorFlow Neural Network Playground in Matlab”