Fast Neural Style Transfer by PyTorch (Mac OS)

2021-Jan-31: The Git repo has been upgraded from PyTorch-0.3.0 to PyTorch-1.7.0 with Python 3.8.3.


Continuing from my last post, Image Style Transfer Using ConvNets by TensorFlow (Windows), this article introduces Fast Neural Style Transfer with PyTorch on macOS.

The original program is written in Python and uses PyTorch and SciPy. A GPU is not necessary, but it can provide a significant speedup, especially when training a new model. Regular-sized images can be styled on a laptop or desktop using saved models.
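As a rough sketch of what styling an image with a saved model looks like (the class name `TransformerNet`, the module it is imported from, and the checkpoint path are assumptions about the repo's layout, not guaranteed to match it exactly):

```python
import torch
from PIL import Image
from torchvision import transforms

# Assumed module/class name; adjust to the repo's actual network definition.
from transformer_net import TransformerNet

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load and preprocess the content image (many reference implementations work in the 0-255 range).
content = Image.open("content.jpg").convert("RGB")
to_tensor = transforms.Compose([
    transforms.ToTensor(),
    transforms.Lambda(lambda x: x.mul(255)),
])
content = to_tensor(content).unsqueeze(0).to(device)

# Load a saved style model and stylize the image in one forward pass.
model = TransformerNet()
model.load_state_dict(torch.load("saved_models/mosaic.pth", map_location=device))
model.to(device).eval()

with torch.no_grad():
    output = model(content).cpu()

# Convert back to an 8-bit image and save it.
img = output[0].clamp(0, 255).permute(1, 2, 0).numpy().astype("uint8")
Image.fromarray(img).save("output.jpg")
```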

More details about the algorithm can be found in the following papers:

  1. Perceptual Losses for Real-Time Style Transfer and Super-Resolution (2016).
  2. Instance Normalization: The Missing Ingredient for Fast Stylization (2017).


If you cannot download them, here are the Papers.

You can find all the source code and images (updated in 2021) in my GitHub repo: fast_neural_style.

Continue reading “Fast Neural Style Transfer by PyTorch (Mac OS)”

Building ConvNets on MNIST dataset by TensorFlow with the new WIN10 GPU Monitor

A few days ago, I updated my Windows 10 to version 1709 and found that Microsoft has added a GPU monitor to the Task Manager, which I think is awesome for ML developers and researchers.

Here is a screen capture of the official MNIST code running TensorFlow-GPU on my desktop. You can see that the GTX 960 uses about 3.5 GB of its 4.0 GB memory to train the ConvNets, which is much faster than training on the CPU.

[Screenshot: Task Manager GPU monitor during MNIST training]
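The run above used the official MNIST model from the tensorflow/models repository; as a minimal sketch of the same idea (a small ConvNet trained on MNIST, written here against the Keras API rather than the original code):

```python
import tensorflow as tf

# Load MNIST and scale pixel values to [0, 1]; add a channel dimension.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

# A small ConvNet: two conv/pool blocks followed by a dense classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 5, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 5, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1024, activation="relu"),
    tf.keras.layers.Dropout(0.4),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# With tensorflow-gpu installed, training runs on the GPU automatically,
# and the new Task Manager GPU monitor shows the memory usage.
model.fit(x_train, y_train, epochs=5, batch_size=100,
          validation_data=(x_test, y_test))
```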

You can find more models in the TensorFlow Models repository, which contains a number of different models implemented in TensorFlow.


What do I think about PyTorch and TensorFlow?

As we all know, TensorFlow is a very powerful and mature deep learning library with strong visualization capabilities and several options for high-level model development. PyTorch is still a young framework, but it is gaining momentum fast.

I strongly suggest CS and IT researchers/engineers learn both of them.

TensorFlow is a good option if you are developing models for production or for mobile platforms, and perhaps, in the future, for large-scale distributed model training. Because it has good community support and comprehensive documentation, it is easier to find answers and get help online.

Well, PyTorch is a good fit if you are doing research or if your production requirements are not very demanding.

Personally, I think PyTorch offers a better development and debugging experience.
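For example, because PyTorch executes eagerly, intermediate values inside the forward pass are ordinary tensors, so you can print them or drop a debugger breakpoint right where the computation happens (a toy sketch):

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(4, 8)
        self.fc2 = nn.Linear(8, 2)

    def forward(self, x):
        h = torch.relu(self.fc1(x))
        # Eager execution: inspect intermediate activations directly,
        # or set a pdb breakpoint here while debugging.
        print("hidden activations:", h.shape, h.mean().item())
        return self.fc2(h)

net = TinyNet()
out = net(torch.randn(3, 4))
print(out)
```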

Continue reading “What do I think about PyTorch and TensorFlow?”
