From 8d8d90f0e01c5ec6110245892268ee8eb2debac7 Mon Sep 17 00:00:00 2001
From: Santiago Castro
Date: Tue, 18 Apr 2017 05:24:57 -0300
Subject: [PATCH] Fix broken Markdown headings

---
 README.md | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/README.md b/README.md
index 5045964..e439bb4 100644
--- a/README.md
+++ b/README.md
@@ -18,11 +18,11 @@ All single processed + unprocessed frames are also at [github](https://github.co

Advice is also at https://github.com/graphific/DeepDreamVideo/wiki

-##INSTALL Dependencies
+## INSTALL Dependencies

A good overview (constantly being updated) of which software libraries to install, plus a list of web resources and howtos, is at reddit:
https://www.reddit.com/r/deepdream/comments/3cawxb/what_are_deepdream_images_how_do_i_make_my_own/

-##On using a CPU as opposed to GPU
+## On using a CPU as opposed to GPU

As there's been a lot of interest in using this code, and deepdream in general, on machines without a decent graphics card (GPU), here's a minor benchmark to let you decide if it's worth the time on your PC:
(note that the timing also depends on how far down the layers of the network you want to go: the deeper, the longer it takes)
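(The layer is selected with the `-l` option used in the commands below. As a hypothetical comparison, assuming the GoogLeNet layer names this README uses elsewhere: `-l inception_3a/output` sits much shallower, and so runs faster, than `-l inception_5b/output`.)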

@@ -34,7 +34,7 @@ CPU (amazon ec2 g2.2xlarge, Intel Xeon E5-2670 (Sandy Bridge) Processor, 8 core

1 picture, 540x360px = 45 seconds = 1d 21h for 2 min video (3600 frames/framerate 30)
1 picture, 1024x768px = 144 seconds = 6d for 2 min video (3600 frames/framerate 30)
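As a quick sanity check on these numbers (arithmetic added here, not part of the original benchmark): 2 min × 60 s × 30 fps = 3600 frames; 3600 × 45 s = 162,000 s ≈ 1 day 21 hours, and 3600 × 144 s = 518,400 s = exactly 6 days.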

-##Usage:
+## Usage:

Extract frames from the source movie in the selected format (png or jpg).

@@ -132,7 +132,7 @@ Once enough frames are processed (the script will cut the audio to the needed le

`./3_frames2movie.sh [ffmpeg / avconv / mplayer] [processed_frames_dir] [original_video] [png / jpg]`
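For illustration, a concrete reassembly call matching the template above might be `./3_frames2movie.sh ffmpeg processed_frames movie.mp4 png` (file and directory names hypothetical). The extraction step at the top of this section can likewise be done with plain ffmpeg if you prefer: `mkdir -p frames && ffmpeg -i movie.mp4 frames/%08d.png`.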

-##Guided Dreaming
+## Guided Dreaming

@@ -144,11 +144,11 @@ or

`python 2_dreaming_time.py -i frames_directory -o processed_frames_dir -l inception_4c/output --guide-image image_file.jpg`

if you're running CPU mode

-##Batch Processing with different parameters
+## Batch Processing with different parameters

`python 2_dreaming_time.py -i frames -o processed -l inception_4c/output --guide-image flower.jpg --gpu 0 --start-frame 1 --end-frame 100; python 2_dreaming_time.py -i frames -o processed -l inception_4b/output --guide-image disco.jpg --gpu 0 --start-frame 101 --end-frame 200`
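The chaining can also be scripted; a minimal bash sketch of the same two segments (same hypothetical guide images and frame ranges as above): `for seg in 'inception_4c/output flower.jpg 1 100' 'inception_4b/output disco.jpg 101 200'; do set -- $seg; python 2_dreaming_time.py -i frames -o processed -l $1 --guide-image $2 --gpu 0 --start-frame $3 --end-frame $4; done`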

-##Blending Options
+## Blending Options

The best results come from a well-selected blending factor, used to blend each frame into the next: enough to keep consistency between the frames and the dreamed-up artefacts, but without the added artefacts overruling the original scene or, in the opposite case, switching too rapidly. Blending is set with the `--blend` option
and can be a float (default 0.5); "random", a random float between 0.5 and 1.0, where 1.0 means disregarding all info from the old frame and dreaming up artefacts from scratch; or "loop", which loops back and forth from 0.5 to 1.0, as originally done in the Fear and Loathing clip.
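For example (values illustrative): `python 2_dreaming_time.py -i frames -o processed -l inception_4c/output --blend 0.7` for a fixed factor, or `--blend loop` for the back-and-forth behaviour described above.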

@@ -172,7 +172,7 @@ Random:

-##More information:
+## More information:

This repo implements a deep neural network hallucinating Fear & Loathing in Las Vegas. Visualizing the internals of a deep net, we let it develop further what it thinks it sees.