What do I think about PyTorch and TensorFlow?

As we all know, TensorFlow is a powerful and mature deep learning library with strong visualization capabilities and several options for high-level model development. PyTorch is still a young framework, but it is gaining momentum fast.

I strongly suggest CS and IT researchers/engineers learn both of them.

TensorFlow is a good option if you are developing models for production or for mobile platforms, and perhaps in the future for large-scale distributed model training. Because it has good community support and comprehensive documentation, it is easier to find answers and get help online.

PyTorch, on the other hand, is a good fit if you are doing research or if your production requirements are not very demanding.

Personally, I think PyTorch offers a better development and debugging experience.
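To illustrate what I mean by a better debugging experience, here is a plain-Python sketch (no frameworks required, and not PyTorch itself) of the define-by-run style that PyTorch allows: ordinary control flow and print/pdb inspection of intermediate values, which is much harder in a static-graph framework.

```python
# A plain-Python illustration of eager, define-by-run execution:
# you can branch on data and inspect values mid-computation, which is
# exactly the debugging style PyTorch supports out of the box.
def forward(x, w):
    h = x * w
    if h < 0:               # data-dependent branching, written as normal Python
        h = -h
    print("intermediate value:", h)  # inspect mid-computation, as with pdb
    return h * 2

result = forward(-3.0, 2.0)
```

In a static-graph framework, the same branch and print would require special graph operations; here they are just Python.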

Continue reading “What do I think about PyTorch and TensorFlow?”

TensorFlow Neural Network Playground in Matlab

The amazing website http://playground.tensorflow.org lets you run a neural network right in your web browser. The GUI is mind-blowing, and you can download all the code to study it or to build your own project.

Now, the good news! Amro and Ray Phan have created a MATLAB version of the NN playground, and it looks just like the GUI of the TensorFlow version. However, it is not TensorFlow-based; it is built on MATLAB's Neural Networks Toolbox (>R2009b). The authors say they were inspired by the [TensorFlow Neural Networks Playground] interface readily available online, so they created a MATLAB implementation of the same interface for using artificial neural networks for regression and classification of highly nonlinear data. Continue reading “TensorFlow Neural Network Playground in Matlab”

Python For Data Science (Cheat Sheets Collections)

PDF Download: Python For Data Science

Here is the list:

  1. Numpy
  2. Matplotlib
  3. Scikit-Learn
  4. SciPy-Linear Algebra
  5. Pandas
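To give a taste of what these cheat sheets cover, here is a tiny sketch of the kind of array operations the NumPy sheet summarizes (the other libraries follow a similar "build, transform, aggregate" pattern):

```python
import numpy as np

# A few of the operations the NumPy cheat sheet covers:
a = np.arange(6).reshape(2, 3)   # build a 2x3 array [[0, 1, 2], [3, 4, 5]]
col_means = a.mean(axis=0)       # column-wise mean
doubled = a * 2                  # elementwise broadcasting
total = doubled.sum()            # 2 * (0+1+2+3+4+5) = 30
```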

These are very helpful when you are programming, cheers 🙂


Continue reading “Python For Data Science (Cheat Sheets Collections)”

Image Style Transfer Using ConvNets by TensorFlow (Windows)

This post explains how to set up a basic development environment for Google’s TensorFlow on Windows 10 and how to run the awesome application called “image style transfer”, which uses convolutional neural networks to create artistic images based on a content image and a style image provided by the user.

The original research paper is “A Neural Algorithm of Artistic Style” by Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge on arXiv. You can also try their web application, DeepArt, where there are a lot of amazing images uploaded by people all over the world. DeepArt has apps on Google Play and the App Store, but I suggest you try the much faster app Prisma, which is as awesome as DeepArt! [More to read: Why-does-the-Prisma-app-run-so-fast].

Moreover, I strongly recommend the formal paper “Image Style Transfer Using Convolutional Neural Networks” by the same authors, published at CVPR 2016. This paper gives more details about how the method works. In short, the following two figures cover the idea behind the magic.
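At the heart of the method are two losses computed on CNN feature maps: a content loss and a style loss built from Gram matrices. Here is a minimal NumPy sketch of those two losses as I understand them from the Gatys et al. papers; the feature maps are assumed to be reshaped to (channels, height*width) matrices taken from some CNN layer.

```python
import numpy as np

def content_loss(f_content, f_generated):
    # Squared error between feature maps of the content and generated images.
    return 0.5 * np.sum((f_generated - f_content) ** 2)

def gram_matrix(f):
    # Correlations between feature channels; this is what captures "style".
    return f @ f.T

def style_loss(f_style, f_generated):
    n, m = f_style.shape              # n channels, m spatial positions
    g_style = gram_matrix(f_style)
    g_gen = gram_matrix(f_generated)
    return np.sum((g_gen - g_style) ** 2) / (4.0 * n ** 2 * m ** 2)
```

The generated image is then optimized to minimize a weighted sum of the two, roughly alpha * content_loss + beta * style_loss, with gradients flowing back to the pixels.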

Continue reading “Image Style Transfer Using ConvNets by TensorFlow (Windows)”

Talking Machine: An Insightful Podcast

I strongly recommend this podcast. The hosts and guests share deep understanding of topics in machine learning and artificial intelligence, which I believe is very helpful for clarifying basic concepts and eliminating misleading ideas.

The latest episode covers the basics of dropout in deep learning and the differing opinions on inference among research groups. It raises a very interesting question: does inference equal prediction? 🙂
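For readers new to dropout, here is a minimal NumPy sketch of the common "inverted dropout" formulation: randomly zero units during training and rescale the survivors, so the network needs no change at test time. This is just an illustration, not code from the episode.

```python
import numpy as np

def dropout(x, p_drop=0.5, train=True):
    # Inverted dropout: zero each unit with probability p_drop and rescale
    # survivors by 1/(1 - p_drop), keeping the expected activation unchanged.
    if not train:
        return x                      # no-op at test time
    mask = (np.random.rand(*x.shape) >= p_drop) / (1.0 - p_drop)
    return x * mask
```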

Here are some episodes:

Hope you enjoy it like I do.

Continue reading “Talking Machine: An Insightful Podcast”

Machine Learning on Google Cloud Platform

Google Cloud Platform is a cloud computing service by Google that offers hosting on the same supporting infrastructure that Google uses internally for end-user products like Google Search and YouTube. Cloud Platform provides developer products to build a range of programs, from simple websites to complex applications.

Google Cloud Platform is part of a suite of enterprise services from Google Cloud and provides a set of modular cloud-based services with a host of development tools: for example, hosting and computing, cloud storage, data storage, translation APIs, and prediction APIs.


Just like Amazon and Microsoft, Google has started its own cloud computing platform, and the first 12 months are free for new users, with limited credits. You are probably used to running your machine learning algorithms on your local machine, but I believe you know that cloud computing is the future! It is much cheaper and more convenient to use.

While reviewing CS231n: Convolutional Neural Networks for Visual Recognition (2017), I found that they have extended the lab notes for the assignments as follows.

Students can now work on the assignments in one of two ways: locally on their own machine, or on a virtual machine on Google Cloud.

That is so cool, right? If you are new to machine learning or just want to try different tools, this assignment is definitely good practice. Do it yourself and you will make progress!


Continue reading “Machine Learning on Google Cloud Platform”

Another Android Phone? But it is designed by Andy Rubin

The Essential Phone, brought to us by the person who created Android, is finally ready for the spotlight. It’s an incredibly audacious and ambitious project, with an outlandish screen and the beginnings of a modular ecosystem.


Continue reading “Another Android Phone? But it is designed by Andy Rubin”

Sharing the opinion about Generative Adversarial Networks (GAN)

Generative models, like Generative Adversarial Networks (GANs), are a rapidly advancing research area in computer science and machine intelligence. It is hard to keep track of them all, not to mention the incredibly creative applications researchers have achieved and are still working on.

The following figures demonstrate some results of current work (images from https://blog.openai.com/generative-models/).

GAN learning to generate images (linear time)
VAE learning to generate images (log time)

I think it is necessary to understand the basic pros and cons of GANs, and doing so may be very helpful to your own research. I have not fully reviewed the theory, but after skimming a few papers, I got the impression that the training process of GAN models is very tricky, even compared with other neural network models. Thus, there must be a lot of room for improvement.
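Part of why training is tricky is visible in the objective itself: the discriminator and generator pull in opposite directions and must be kept in balance. Here is a minimal NumPy sketch of the standard losses (using the common non-saturating generator loss); the discriminator outputs are assumed to be probabilities in (0, 1).

```python
import numpy as np

# The GAN objective in miniature: D maximizes log D(x) + log(1 - D(G(z))),
# while G (in the non-saturating form) minimizes -log D(G(z)).

def discriminator_loss(d_real, d_fake):
    # d_real: D's outputs on real samples; d_fake: on generated samples.
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def generator_loss(d_fake):
    # G wants D to assign high probability to its fakes.
    return -np.mean(np.log(d_fake))
```

Note that whatever lowers one loss tends to raise the other; alternating the two updates without letting either player dominate is the delicate part of GAN training.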

Thanks to the internet, there are papers and code everywhere, and nobody will be left behind these days unless they want to be. So work hard and be a better man (or woman, or anything good for humanity), cheers!

Here are some papers and blogs that summarized the literature very well.

Here are my old group meeting slides and download links.


Extra Source:

Continue reading “Sharing the opinion about Generative Adversarial Networks (GAN)”