Python For Data Science (Cheat Sheets Collections)

PDF Download: Python For Data Science

Here is the list:

  1. Numpy
  2. Matplotlib
  3. Scikit-Learn
  4. SciPy-Linear Algebra
  5. Pandas

These are very helpful when you are doing the programming, cheers 🙂


Continue reading “Python For Data Science (Cheat Sheets Collections)”

Image Style Transfer Using ConvNets by TensorFlow (Windows)

This post describes how to set up a basic development environment for Google’s TensorFlow on Windows 10 and run the awesome application called “image style transfer”, which uses convolutional neural networks to create artistic images from a content image and a style image provided by the user.

The early research paper is “A Neural Algorithm of Artistic Style” by Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge on arXiv. You could also try their web application on DeepArt, where there are a lot of amazing images uploaded by people all over the world. DeepArt has apps on Google Play and the App Store, but I suggest you use Prisma, a much faster app that is just as awesome as DeepArt! [More to read: Why-does-the-Prisma-app-run-so-fast].

Moreover, I strongly recommend the formal paper Image Style Transfer Using Convolutional Neural Networks by the same authors, published at CVPR 2016. This paper gives more details about how the method works. In short, the following two figures cover the idea behind the magic.
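In brief, the method optimizes the generated image to minimize a weighted sum of two losses: a content loss that compares raw CNN feature maps, and a style loss that compares Gram matrices (channel correlations) of those feature maps. Here is a minimal numpy sketch of the two terms, using toy arrays in place of real VGG activations — the function names and shapes are my own illustration, not the authors’ code:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a feature map with shape (channels, height*width).

    Correlations between channel activations capture the "style" of an
    image, independent of where features appear spatially.
    """
    return features @ features.T

def style_loss(gen_features, style_features):
    """Squared Frobenius distance between Gram matrices for one layer,
    with the normalization used by Gatys et al."""
    c, m = gen_features.shape
    G = gram_matrix(gen_features)
    A = gram_matrix(style_features)
    return np.sum((G - A) ** 2) / (4.0 * c ** 2 * m ** 2)

def content_loss(gen_features, content_features):
    """Squared error between raw feature maps (content is position-dependent)."""
    return 0.5 * np.sum((gen_features - content_features) ** 2)

# Toy feature maps standing in for VGG activations: 8 channels, 4x4 spatial
# map flattened to 16 positions.
rng = np.random.default_rng(0)
f_gen = rng.normal(size=(8, 16))
f_style = rng.normal(size=(8, 16))

# The generated image trades off matching the content against matching the
# style; here the "generated" features equal the content features exactly.
total = content_loss(f_gen, f_gen) + style_loss(f_gen, f_style)
```

In the real method, these losses are evaluated at several VGG layers and their gradients with respect to the image pixels drive the optimization.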

Continue reading “Image Style Transfer Using ConvNets by TensorFlow (Windows)”

Talking Machine: An Insightful Podcast

I strongly recommend this podcast; the hosts and guests share a deep understanding of topics in Machine Learning and Artificial Intelligence, which I believe is very helpful for clarifying basic concepts and eliminating misleading ideas.

The latest episode covers the basics of dropout in Deep Learning and various opinions on inference among different research groups. It raises a very interesting question: does inference equal prediction? 🙂

Here are some episodes:

Hope you enjoy them as much as I do.

Continue reading “Talking Machine: An Insightful Podcast”

Machine Learning on Google Cloud Platform

Google Cloud Platform is a cloud computing service by Google that offers hosting on the same supporting infrastructure that Google uses internally for end-user products like Google Search and YouTube.[1] Cloud Platform provides developer products to build a range of programs from simple websites to complex applications.[2][3]

Google Cloud Platform is a part of a suite of enterprise services from Google Cloud and provides a set of modular cloud-based services with a host of development tools. For example, hosting and computing, cloud storage, data storage, translations APIs and prediction APIs.[2]

— Wikipedia

Just like Amazon and Microsoft, Google has started its own cloud computing platform, and the first 12 months are free for new users, with limited credits. You are probably used to running your machine learning algorithms on your local machine, but I believe you know that cloud computing is the future! It is much cheaper and more convenient to use.

While reviewing CS231n: Convolutional Neural Networks for Visual Recognition (2017), I found that they have extended the lab notes for the assignment as follows.

The students now can work on the assignment in one of two ways: locally on their own machine, or on a virtual machine on Google Cloud.

That is so cool, right? If you are new to machine learning or just want to try different tools, this assignment is definitely good practice. Do it yourself and you will make progress!

Cheers!

Continue reading “Machine Learning on Google Cloud Platform”

Another Android Phone? But It Is Designed by Andy Rubin

The Essential Phone, brought to us by the person who created Android, is finally ready for the spotlight. It is an incredibly audacious and ambitious project, with an outlandish screen and the beginnings of a modular ecosystem.


Continue reading “Another Android Phone? But It Is Designed by Andy Rubin”

Sharing the opinion about Generative Adversarial Networks (GAN)

Generative models, like Generative Adversarial Networks (GAN), are a rapidly advancing research area in computer science and machine intelligence. It is hard to keep track of them all, not to mention the incredibly creative ways in which researchers are applying them.

The following figures demonstrate some results of current work (images from https://blog.openai.com/generative-models/).

GAN learning to generate images (linear time)
VAE learning to generate images (log time)

As a Ph.D. student in CS, I think it is necessary to understand the basic pros and cons of these models, and doing so may be very helpful to your own research. I have not fully reviewed the theory and papers, but after skimming a few of them, I got the impression that training GAN models is very tricky, as with any neural network model. Thus, there must be huge room for improvement.
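Part of why the training is tricky is the adversarial setup itself: a discriminator learns to tell real samples from generated ones while a generator learns to fool it, and the two objectives pull against each other. A minimal numpy sketch of the two loss terms, using toy discriminator outputs in place of a real network (the names and numbers here are my own illustration, not from any particular paper’s code):

```python
import numpy as np

def d_loss(d_real, d_fake):
    """Discriminator loss: maximize log D(x) + log(1 - D(G(z))),
    written as a minimization via a sign flip."""
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def g_loss(d_fake):
    """Non-saturating generator loss: maximize log D(G(z)),
    i.e. push the discriminator's output on fakes toward 1."""
    return -np.mean(np.log(d_fake))

# Toy discriminator outputs: probabilities that a sample is real.
d_real = np.array([0.9, 0.8, 0.95])   # confident on real data
d_fake = np.array([0.1, 0.2, 0.05])   # confident the fakes are fake
```

Training alternates gradient steps on these two losses; the fragile balance between them (e.g. a discriminator that wins too quickly starves the generator of gradient signal) is one reason GAN training is so finicky in practice.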

Thanks to the internet! There are papers and code everywhere, and nobody will be left behind these days unless he/she wants to be. So work hard and be a better man (or woman, or anything good for humanity), cheers!

Here are some papers and blog posts that summarize the literature very well.

Continue reading “Sharing the opinion about Generative Adversarial Networks (GAN)”