Fast Neural Style Transfer by PyTorch (Mac OS)

Continuing my last post, Image Style Transfer Using ConvNets by TensorFlow (Windows), this article introduces Fast Neural Style Transfer with PyTorch on macOS.

The original program is written in Python and uses [PyTorch] and [SciPy]. A GPU is not necessary, but it can provide a significant speedup, especially when training a new model. Regular-sized images can be styled on a laptop or desktop using the saved models.

More details about the algorithm can be found in the following papers:

  1. Perceptual Losses for Real-Time Style Transfer and Super-Resolution;
  2. Instance Normalization: The Missing Ingredient for Fast Stylization.

If you cannot download the papers from the publishers, copies are available here as well.

1. Install PyTorch on your MacBook.


```
conda install pytorch torchvision -c soumith
```

I suggest you install PyTorch with conda. If you are able to compile from source, you can choose that route instead. Check your Python version before installation. Here I assume you already know how to use conda to add the related packages.
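To confirm the installation worked before moving on, you can check that the packages are importable. A minimal sketch (the module names `torch` and `torchvision` are simply the ones the conda command above installs):

```python
import importlib.util

def missing_modules(names):
    """Return the subset of module names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# After a successful conda install this should print an empty list.
print(missing_modules(["torch", "torchvision"]))
```

If the list is non-empty, re-run the conda command in the same environment you are using to run Python.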

2. Download the project from my Modified Version or the Original Version on GitHub.

Unzip and open the project folder, and you will see the following files:

(Screenshot: contents of the project folder)

The images folder contains the input images, the output images, and the style images (the latter only needed if you want to train a new model); here I used the pre-trained models in saved_models.


## Usage
Stylize image
```
python neural_style/neural_style.py eval --content-image </path/to/content/image> --model </path/to/saved/model> --output-image </path/to/output/image> --cuda 0
```
* `--content-image`: path to the content image you want to stylize.
* `--model`: saved model to be used for stylizing the image (e.g. `mosaic.pth`).
* `--output-image`: path for saving the output image.
* `--content-scale`: factor for scaling down the content image if memory is an issue (e.g. a value of 2 will halve the height and width of the content image).
* `--cuda`: set it to 1 to run on the GPU, 0 for the CPU.
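These flags follow the standard argparse pattern. As a rough sketch of how the `eval` subcommand might parse them (a simplified illustration, not the project's actual code):

```python
import argparse

# Hypothetical, simplified version of the script's argument parsing.
parser = argparse.ArgumentParser(description="fast neural style")
subparsers = parser.add_subparsers(dest="subcommand")

eval_parser = subparsers.add_parser("eval", help="stylize an image")
eval_parser.add_argument("--content-image", required=True)
eval_parser.add_argument("--content-scale", type=float, default=None)
eval_parser.add_argument("--model", required=True)
eval_parser.add_argument("--output-image", required=True)
eval_parser.add_argument("--cuda", type=int, required=True)

args = parser.parse_args([
    "eval",
    "--content-image", "images/content-images/latrobe.jpg",
    "--model", "saved_models/udnie.pth",
    "--output-image", "images/output-images/latrobe-udnie.jpg",
    "--cuda", "0",
])
print(args.model, args.cuda)  # saved_models/udnie.pth 0
```

Note that argparse converts `--content-image` into the attribute `args.content_image`, and `--cuda` is parsed as an integer so it can be compared against 0 or 1.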

For example,

```
python neural_style/neural_style.py eval --content-image images/content-images/latrobe.jpg --model saved_models/udnie.pth --output-image images/output-images/latrobe-udnie.jpg --cuda 0
```

Test.sh is a bash script I wrote. Run it directly in your terminal and you will get results similar to the following (or modify it for your own needs):
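If you prefer Python over bash, the same batch run can be sketched as a loop that builds one `eval` command per saved model. The model names below are the pre-trained models shipped with the project (an assumption on my part for `candy.pth` and `rain_princess.pth`; `mosaic.pth` and `udnie.pth` appear above):

```python
import os

# Hypothetical batch runner: build one eval command per saved model.
content = "images/content-images/latrobe.jpg"
models = ["candy.pth", "mosaic.pth", "rain_princess.pth", "udnie.pth"]

commands = []
for model in models:
    stem = os.path.splitext(model)[0]
    output = f"images/output-images/latrobe-{stem}.jpg"
    commands.append(
        "python neural_style/neural_style.py eval"
        f" --content-image {content}"
        f" --model saved_models/{model}"
        f" --output-image {output}"
        " --cuda 0"
    )

for cmd in commands:
    print(cmd)  # pass each to os.system(cmd) to actually run it
```

This produces one stylized output image per model, named after the model that generated it.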

(Screenshot: terminal output of Test.sh)

Then, in images/output-images, you will find your amazing neural-style-transferred images.

The following capture shows the main difference between the original code and mine: small modifications applied just to get rid of deprecation warnings, such as this one:

```
nn.UpsamplingNearest2d is deprecated. Use nn.Upsample instead.
```
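The fix is cosmetic: `nn.Upsample(scale_factor=2, mode='nearest')` performs the same nearest-neighbour upsampling that `nn.UpsamplingNearest2d(scale_factor=2)` did. To make the operation itself concrete, here is the same idea in plain Python on a tiny 2-D grid (a sketch of the math, not PyTorch code):

```python
def upsample_nearest(grid, scale=2):
    """Nearest-neighbour upsampling: each cell becomes a scale x scale block."""
    out = []
    for row in grid:
        # Repeat each value `scale` times horizontally...
        stretched = [v for v in row for _ in range(scale)]
        # ...then repeat the stretched row `scale` times vertically.
        out.extend([stretched[:] for _ in range(scale)])
    return out

small = [[1, 2],
         [3, 4]]
print(upsample_nearest(small))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

In the transformer network this is used to grow feature maps back to the input resolution after the downsampling convolutions.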

(Screenshot: diff between the original code and the modified version)

Here is what I got using a photo of the LIMS building at La Trobe University, Melbourne, Australia.

(Content image: content.jpg)

Here are the style images and the output images:


Author: Caihao (Chris) Cui

Digital Scientist & Advisor: translating modern machine learning, deep learning, and computer vision techniques into engineering, and bringing ideas to life to design a better future.
