#!/usr/bin/env bash
# Assuming OS X Yosemite 10.10.4
# Install XCode and command line tools
# See https://itunes.apple.com/us/app/xcode/id497799835?mt=12#
# See https://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man1/xcode-select.1.html
xcode-select --install
one <- seq(1:10)
two <- rnorm(10)
three <- runif(10, 1, 2)
four <- -10:-1
df <- data.frame(one, two, three)
df2 <- data.frame(one, two, three, four)
str(df)
The following instructions are for creating your own animations using the style transfer technique described by Gatys, Ecker, and Bethge, and implemented by Justin Johnson. For an example of such an animation, see this video of Alice in Wonderland re-styled by 17 paintings.
The easiest way to set up the environment is to load Samim's pre-built Terminal.com snap, or to use another cloud service like Amazon EC2. Unfortunately, g2.2xlarge GPU instances cost $0.99 per hour, and depending on the parameters selected it may take 10-15 minutes to produce a 512px-wide image, so it can cost $2-3 to generate 1 second of video at 12 fps.
If you do load the
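The $2-3 figure above follows directly from the quoted rate and render times; a quick sanity check of the arithmetic (all numbers taken from the paragraph above):

```python
RATE_PER_HOUR = 0.99  # g2.2xlarge on-demand price quoted above
FPS = 12              # frame rate of the target animation

def cost_per_second(minutes_per_frame, fps=FPS, rate=RATE_PER_HOUR):
    """Dollar cost of rendering one second of video."""
    hours = fps * minutes_per_frame / 60
    return hours * rate

# 10-15 minutes per 512px frame gives roughly $1.98-$2.97 per second
low, high = cost_per_second(10), cost_per_second(15)
```

At 24 fps, or at larger image widths, the per-second cost scales up proportionally.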
import tensorflow as tf
import numpy as np
import time

N = 10000
K = 4
MAX_ITERS = 1000

start = time.time()
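The TensorFlow snippet is truncated after its setup constants. For reference, the k-means iteration those constants parameterize — N points, K clusters, up to MAX_ITERS assignment/update steps — can be sketched in plain NumPy (synthetic 2-D data here is an assumption; the original presumably builds the same loop as a TF graph):

```python
import numpy as np

N, K, MAX_ITERS = 10000, 4, 1000

rng = np.random.default_rng(0)
points = rng.standard_normal((N, 2))
# initialize centroids from randomly chosen data points
centroids = points[rng.choice(N, K, replace=False)]

for _ in range(MAX_ITERS):
    # assignment step: label each point with its nearest centroid
    dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # update step: move each centroid to the mean of its cluster
    new_centroids = np.array(
        [points[labels == k].mean(axis=0) for k in range(K)]
    )
    if np.allclose(new_centroids, centroids):
        break
    centroids = new_centroids
```

The graph-based TF version computes the same two steps per iteration; timing it against this loop is presumably what the `time.time()` call above is for.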
| """ | |
| Reimplementation of nanonet using keras. | |
| Follow the instructions at | |
| https://www.tensorflow.org/install/install_linux | |
| to setup an NVIDIA GPU with CUDA8.0 and cuDNN v5.1. | |
| virtualenv venv --python=python3 | |
| . venv/bin/activate | |
| pip install numpy |
import s3fs
import pickle
import json
import numpy as np

BUCKET_NAME = "my-bucket"

# definitions, keras/tf/... imports...

if __name__ == "__main__":
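The snippet above cuts off at the `__main__` guard. A minimal sketch of the pattern its imports suggest — pickling an object and shipping the bytes to S3 via s3fs (the bucket name and object key here are hypothetical, and the S3 calls are shown commented out since they need AWS credentials):

```python
import pickle

import numpy as np

BUCKET_NAME = "my-bucket"  # hypothetical, as in the snippet above


def serialize(obj):
    """Pickle an object to bytes, ready to write to a file-like target."""
    return pickle.dumps(obj)


def deserialize(data):
    """Inverse of serialize."""
    return pickle.loads(data)


# With s3fs, the same bytes go straight to S3 through a file-like object:
#     import s3fs
#     fs = s3fs.S3FileSystem()
#     with fs.open(f"{BUCKET_NAME}/weights.pkl", "wb") as f:
#         f.write(serialize(weights))

# local round-trip, no S3 needed
weights = np.arange(6).reshape(2, 3)
restored = deserialize(serialize(weights))
```

`s3fs.S3FileSystem.open` returns an ordinary file-like object, so `pickle.dump`/`json.dump` can also write to it directly instead of going through bytes.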
# launch your own Gradio web demo of Arcane style transfer by following the steps below
# open a jupyter notebook, code editor (vs code etc), or google colab
# pip install gradio
# copy the code below into a file or cell in a python notebook and run it
# that's it, a web demo will appear in your python notebook or web browser
# github: https://github.com/jjeamin/anime_style_transfer_pytorch
# HF blog: https://huggingface.co/blog/gradio-spaces
import gradio as gr
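The snippet stops at the gradio import. A minimal sketch of how such a demo is typically wired up, with a hypothetical placeholder `stylize` function standing in for the actual anime_style_transfer_pytorch model:

```python
import numpy as np


def stylize(img):
    # placeholder for the style-transfer model: invert the colours
    # so the demo has a visible effect without loading any weights
    return 255 - img


if __name__ == "__main__":
    import gradio as gr

    # gr.Interface wires the function to an image-in / image-out web UI;
    # launch() serves it locally (and in-notebook on Colab/Jupyter)
    demo = gr.Interface(fn=stylize, inputs="image", outputs="image")
    demo.launch()
```

Swapping `stylize` for the real model's inference function is all the Gradio side requires; Gradio passes the uploaded image in as a NumPy array and renders whatever array the function returns.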