Images aren't automatically being processed #7
Comments
You should run … Finally, the command …
I just did …
Yes, same issue here. However, I can see a python process using almost 100% of the CPU; I guess we don't see outputs because it's still slowly processing on the CPU?
So the script …
If you want to see what's happening, there is a log file written to log.html. This will contain the outputs of the python program. In addition, you can see which image is currently being processed by visiting …. And yes, process_images.sh must be run inside the Docker container. If you look at start.sh you'll see a line which looks like this:
This will start a new Docker container from the visionai/clouddream image. It will take the /deepdream folder from the host machine and mount it inside the container as /opt/deepdream. Finally, it will cd into this directory and call process_images.sh. I also made some minor changes, so it's a good idea to do a …
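For readers following along, here is a sketch of what that start.sh line boils down to; the image name and mount points come from this thread, but the exact flags in the real start.sh may differ:

```sh
# Run the clouddream image in the background, mount the host's /deepdream
# folder at /opt/deepdream inside the container, then cd there and start
# the processing loop. (Flags are an approximation, not copied from start.sh.)
docker run -d \
  -v /deepdream:/opt/deepdream \
  visionai/clouddream \
  /bin/bash -c "cd /opt/deepdream && ./process_images.sh"
```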
Makes sense, the log.html is being updated. I'm getting the … error code 137, do you know what it can be?
It seems something is wrong with the python script. This is usually a problem with the image. Try making sure the file is called something like "PhotoDeepdream.jpg" and not "Photo Deepdream" with spaces. If that works, then it might be an issue with spaces in the filenames.
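If spaces do turn out to be the culprit, here is a quick, hypothetical cleanup to run before trying again; the inputs path is an assumption based on the repo layout:

```sh
# Replace spaces with underscores in every input image filename.
cd /deepdream/inputs
for f in *\ *; do
  [ -e "$f" ] || continue   # skip if nothing matched the glob
  mv -- "$f" "${f// /_}"
done
```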
I did …
What happens if you use the …? On one of my images I get: …
In my case …
What happens if you do: …
You will now get a more thorough log of what might be happening. In my example, the input image is bad and I get the following at the end of my log: …
If I do this with a successful image, it starts to process the image and shows output like: …
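If you want to watch this yourself rather than wait for log.html to refresh, you can open a shell in the running container and follow the log as the Python script writes it; the container lookup is plain Docker, and the log path is an assumption based on the mount point described above:

```sh
# Find the running clouddream container, then open a shell inside it.
docker ps                          # note the ID of the visionai/clouddream container
docker exec -it <container-id> /bin/bash

# Inside the container, follow the log as it is written.
tail -f /opt/deepdream/log.html
```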
I was playing inside the Docker container and tried the script in python. It seems like the image is fine, but then I got this error: … So I don't think it's generating the output.jpg, and then the error code is 137.
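For what it's worth, exit code 137 is 128 + 9, i.e. the process was killed with SIGKILL; on a small droplet that is almost always the kernel's out-of-memory killer. You can confirm it on the host with standard commands, nothing specific to this repo:

```sh
# Look for evidence that the kernel killed the python process for using too
# much memory, and check how much RAM the droplet actually has.
dmesg | grep -iE 'out of memory|killed process'
free -m
```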
Well, I'm glad this was resolved as a memory issue. I'm playing with a 4GB droplet and it works. I'll run some tests to see what the minimum droplet size is, but for now 512MB is not enough.
Yes, it's processing now, so it needs more than the tiny 512MB of the basic droplets!!
I just added a script called …
Amazing!!
I increased my RAM to 1GB and was finally able to get 1 image output! Now looking into how I can get bigger resolution outputs. Edit: just found it in the README. Thanks for the feedback guys!
Just FYI, 1GB is also too little if you increase the maxwidth to 600px, for example.
You need 2GB of RAM to process an output image of 1000px width.
Yeah.. I just ran into that problem unfortunately.
Hi all, when trying to use any model with rectified linear unit (relu) layers I receive the following error: KeyError: u'conv2/relu_3x3_reduce'. Last part of the log: … I have changed the picture size, down to 50x50 pixels in several steps, and the memory message stays the same. My main question is: has anybody got relu working on CPU? Thanks!
Can you paste your settings.json file?
Sure, I have tested this with { … } and { … }. When I change it to a non-relu value it works perfectly :)
Ok thanks. I took a deeper look and it seems that the relu layers aren't valid, just like you mentioned.
In addition, the dropout layers are also not valid. I will update the README to list only the valid layers. Thanks for catching this!
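For reference, here is a sketch of a settings.json that sticks to a layer that does carry data; the key names, the file path, and the chosen layer are assumptions on my part, so treat the updated README as authoritative:

```sh
# Write a settings.json that points at a data-carrying GoogLeNet layer instead
# of a relu/dropout layer. Key names and values here are illustrative only.
cat > deepdream/settings.json <<'EOF'
{
  "maxwidth": 400,
  "layer": "inception_4c/output"
}
EOF
```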
Thank you for putting together an easy Docker image! :) I bet you have more experience with neural networks than I do; any idea what can cause this? It looks like ReLU layers are supported on CPU: http://caffe.berkeleyvision.org/tutorial/layers.html
The ReLU layer is there, but the Caffe layers description for ReLU states: … Additionally, if you peek inside the tmp.prototxt file for the GoogleNet model, you'll see: …
This confirms that the bottom and top layers are the same. This just makes the network more memory efficient. I suspect it could be possible to re-train the entire network, but I doubt you'll get a lot of value from doing this.
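If you want to verify this yourself, you can grep the generated prototxt inside the container; the file location is an assumption, and the point is simply that the matching layer block lists the same blob name on its bottom and top lines:

```sh
# Show the ReLU layer definition named in the KeyError. For an in-place layer,
# the "bottom:" and "top:" lines name the same blob, which is why there is no
# separate relu blob for deepdream to latch onto.
grep -B 2 -A 4 'relu_3x3_reduce' tmp.prototxt   # adjust the path to wherever tmp.prototxt is generated
```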
Learning every minute! :) Browsing the tmp.prototxt now and it makes sense :) Thanks a million!
You're welcome. I updated the README so it now only lists the layers which have actual data. I still haven't tested all of the layers listed, but at least dropout and relu are gone from the list.
I followed your directions for Digital Ocean (Ubuntu with Docker installed). I scp'ed the image to my server but noticed that nothing was displaying, so I manually ran process_images.sh and got this error: …
Is there something I'm doing wrong here? Sorry, I'm new to Linux, Python and such.