interactive-deep-colorization
Please guide me through the install step by step.
I downloaded Caffe and the interactive-deep-colorization zip, but I can't run them.
I use macOS; please guide me step by step.
Thanks
You have to be more clear -- what did you do? What interactive zip? Did you follow the instructions on the Caffe website?
Hi @sg-s, I'm not a coder. Can you make a video guide for installing the software? Thanks so much :)
It's fine if you're not a coder -- but please be clear about what you did and what error you got, otherwise no one can help you effectively.
@sg-s, I'm not the person who opened this, but there are multiple requests for a proper install guide, a pre-configured VM, or help with the requirements (#14, #3, #7, and #6), and they have not been answered.
There are a large number of dependencies for this project (the short list in the readme grows considerably once you include the requirements for Caffe and OpenCV), and those requirements are poorly stated in the readme. For one, there is an incomplete prerequisites section AND a requirements section; however, the real problem is Caffe, which has a real mess of an installation guide. Instead of "Install Caffe and Python libraries (OpenCV)" in the getting started section, it would be nice if someone who has successfully installed all of the prerequisites from scratch would list out the whole process, for example:
- Install x, y, z (Debian/Ubuntu): sudo apt-get install x y z
- Download x: wget x
- Unzip x: unzip x
- ... run make/config ... etc. etc. etc.
Literal step-by-step instructions, so that someone starting without Python or any of the other dependencies could be up and running with nothing but the readme. Not just "install Caffe" but "Install Caffe following these steps: 1, 2, 3, 4, 5." It's also not clear how to switch to CPU mode and other config-type things. Granted, it's complicated due to the different package managers and Linux versions, but I think an Ubuntu install guide would be a safe choice to cover most people.
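Something along these lines, just to illustrate the level of detail I mean -- an untested sketch of a from-scratch CPU-only Caffe build on Ubuntu (package names differ between releases):
# dependencies for a CPU-only Caffe build
sudo apt-get install build-essential git libprotobuf-dev protobuf-compiler libboost-all-dev \
    libgflags-dev libgoogle-glog-dev libhdf5-serial-dev libleveldb-dev liblmdb-dev \
    libsnappy-dev libopencv-dev libatlas-base-dev python-dev python-numpy
# fetch and configure Caffe
git clone https://github.com/BVLC/caffe.git
cd caffe
cp Makefile.config.example Makefile.config
# edit Makefile.config and uncomment "CPU_ONLY := 1" to skip CUDA
make all -j"$(nproc)"
make pycaffe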
@omniomi you should direct this towards @junyanz
In case it helps someone, here is what I did to get it running on Debian Stretch with Python 3 (Debian's repository only has the Caffe Python module for Python 3). If you want to use the notebooks you need some other stuff, but ideepcolor.py should work with this.
Install Caffe
Make sure that your /etc/apt/sources.list contains the contrib and non-free sections if you want to install the CUDA version, for instance: deb http://ftp2.cn.debian.org/debian sid main contrib non-free. Then update the APT cache and install Caffe directly. Note: the CPU version and the CUDA version cannot coexist.
sudo apt update
sudo apt install [ caffe-cpu | caffe-cuda ]
Install PyQt, sklearn, and skimage
sudo apt-get install python3-pyqt4 python3-sklearn python3-skimage
In case you don't have git, pip3 or wget already installed:
sudo apt-get install git python3-pip wget
Install qdarkstyle package
sudo pip3 install qdarkstyle
Install OpenCV
sudo pip3 install opencv-python
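At this point it doesn't hurt to check that all the Python 3 modules are importable before going further; something like this should print the OpenCV, scikit-learn, and scikit-image versions without an ImportError:
# quick sanity check of the Python 3 dependencies
python3 -c "import caffe, cv2, sklearn, skimage; print(cv2.__version__, sklearn.__version__, skimage.__version__)"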
Download and start ideepcolor
mkdir ideepcolor
cd ideepcolor
# Use a patched version compatible with python3
git clone -b python3 https://github.com/SleepProgger/interactive-deep-colorization
cd interactive-deep-colorization/
# Fetch models
sh models/fetch_models.sh
Start ideepcolor.py for CPU usage
python3 ideepcolor.py --cpu_mode
For GPU usage you probably don't need any parameter at all.
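If you want to be explicit about the device, the --gpu option (visible in the option dump further down) selects the card id; for example:
# explicit GPU usage on the first CUDA device (id 0)
python3 ideepcolor.py --gpu 0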
If you are running the caffe-cuda version and see an error complaining about TypeError: 'float' object cannot be interpreted as an integer, you might need to downgrade your numpy version. I am not sure whether the cause of this is in this project or in sklearn, but so far I was too lazy to check.
pip3 install -U numpy==1.11.0
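To confirm that python3 actually picks up the downgraded version afterwards:
# print the active numpy version
python3 -c "import numpy; print(numpy.__version__)"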
If you see an out-of-memory error like this:
F0601 05:42:15.847261 12169 syncedmem.cpp:71] Check failed: error == cudaSuccess (2 vs. 0) out of memory
your graphics card does not have enough memory.
There might be a proper way to solve this (use smaller batches somehow), but what I did, and what worked for me, is:
Set --load_size to something smaller. This will lead to worse results, but at least it will work and run reasonably fast (compared to using the CPU).
You will need to set the same value in the input layer of the models/.../.prototxt files.
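As a rough illustration (the 128 below is an arbitrary choice; the default load size is 256):
# run with a smaller network input (worse results, but less GPU memory needed)
python3 ideepcolor.py --load_size 128
# locate the matching input dimensions in the deploy prototxt files so they can be edited to the same value
grep -n 256 models/reference_model/deploy_nodist.prototxt models/reference_model/deploy_nopred.prototxt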
A (really) dirty script which handles this for you is located here.
Put it in the same place as the ideepcolor.py file and run it like sh ./run_ideepcolor.py.
Set the XLEN variable inside to the highest value which doesn't produce the memory error.
sidorencu@sidorencu-K52F:~/ideepcolor/interactive-deep-colorization$ python3 ideepcolor.py
Traceback (most recent call last):
File "ideepcolor.py", line 12, in
Something else seems to have installed skimage in my container.
Install it with:
sudo apt-get install python3-skimage
I have updated the post above.
sidorencu@sidorencu-K52F:~/ideepcolor/interactive-deep-colorization$ python3 ideepcolor.py --cpu_mode
[dist_prototxt] = ./models/reference_model/deploy_nopred.prototxt
[dist_caffemodel] = ./models/reference_model/model.caffemodel
[color_caffemodel] = ./models/reference_model/model.caffemodel
[ui_time] = 60
[cpu_mode] = True
[image_file] = test_imgs/mortar_pestle.jpg
[load_size] = 256
[win_size] = 512
[no_dist] = False
[user_study] = False
[color_prototxt] = ./models/reference_model/deploy_nodist.prototxt
[gpu] = 0
ColorizeImageCaffe instantiated
gpu_id = -1, net_path = ./models/reference_model/deploy_nodist.prototxt, model_path = ./models/reference_model/model.caffemodel
Traceback (most recent call last):
File "ideepcolor.py", line 57, in
I've set up a Docker image with ideepcolor, building on https://github.com/floydhub/dl-docker; maybe that will be helpful for some of you.
GUI version
docker run -ti --rm \
  -e DISPLAY=unix$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  swallner/ideepcolor /bin/sh -c 'cd ideepcolor; python ideepcolor.py --cpu_mode'
This only works if you've run xhost +local:root beforehand.
I've kind of tested this, but it's terribly slow and basically unusable on my laptop, so I cannot guarantee that everything works the way it should.
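Putting the X11 permission handling around the command, the whole thing would look roughly like this (revoking the access again afterwards):
# allow the container's root user to use the local X server, run the GUI, then revoke the permission
xhost +local:root
docker run -ti --rm \
  -e DISPLAY=unix$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  swallner/ideepcolor /bin/sh -c 'cd ideepcolor; python ideepcolor.py --cpu_mode'
xhost -local:root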
Notebook
docker run -it -p 8888:8888 -p 6006:6006 swallner/ideepcolor jupyter notebook
Access the notebook in your browser at http://localhost:8888. I haven't really tested the notebook apart from checking that it's accessible, so I cannot guarantee that it works; it might still need some adjustments.
Based on @SleepProgger 's instructions, I've also built a Docker container which runs ideepcolor on Debian Stretch. You can run the GUI version with the following command:
docker run -ti --rm -e DISPLAY=unix$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix swallner/ideepcolor_debian /bin/sh -c 'cd ideepcolor/interactive-deep-colorization; python3 ideepcolor.py --cpu_mode'
With this, I unfortunately see this issue: https://github.com/junyanz/interactive-deep-colorization/issues/2
Thanks to @SleepProgger for providing the installation guidelines. I updated the installation instructions with more details. Let me know if I missed something.
@SleepProgger that is a nice step-by-step guide for Ubuntu. How about the same for OSX? I am stuck on this error:
MBP:ideepcolor2 vb$ python ideepcolor.py --cpu_mode
Traceback (most recent call last):
File "ideepcolor.py", line 8, in
I have run 'pip install --user scikit-image' successfully and am still getting this error.
Thank you
@vbisbest Are you sure you installed scikit-image for the correct Python version (Python 3 in the case of my "tutorial")? Otherwise, try to install it with the pip3 command.
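Something along these lines should do it (assuming pip3 points at the same Python 3 the script uses):
# install scikit-image for Python 3 specifically
pip3 install --user scikit-image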
@SleepProgger I did get it installed and it fixed that issue. Now I am stuck on another one, I opened a new issue for that.
@sabrinawallner I sort of have the Docker image running on OSX. I am not getting a GUI, though. The last few lines of output I get are:
Setting ab cluster centers in layer: pred_ab
Setting upsampling layer kernel: pred_313_us
test_imgs/mortar_pestle.jpg
scale = 2.000000
Killed
And that's it. Is there supposed to be a GUI? Did a file get output somewhere? Thanks for your help.
EDIT: I found the issue. I need to give Docker 8 GB of RAM and then the GUI pops up. Now the issue is that nothing happens when the cup is loaded. Clicking on colors puts a point on the "Drawing Pad" but the "Result" stays gray. Not sure what the issue is here.
That is some serious hardware required... has anyone tried Quadro cards?
I'm getting this error when I click on one of the Suggested Colors:
Traceback (most recent call last):
File "/home/user/interactive-deep-colorization/ui/gui_palette.py", line 85, in mousePressEvent
self.update_ui(color_id)
File "/home/user/interactive-deep-colorization/ui/gui_palette.py", line 78, in update_ui
color = self.colors[color_id]
IndexError: only integers, slices (:), ellipsis (...), numpy.newaxis (None) and integer or boolean arrays are valid indices
The color selection doesn't work and only gives an error. All the other functions work.
Which color did you try to select? Could you share a screenshot of the UI?
It happens with all the colors; it doesn't matter what color I select, the error is always the same.
I cannot reproduce your bug. The current system works for me. My Python is Python 2.7. I added a debugging line that might help you find the issue.
For anyone it may help in the future:
Arch Linux 64-bit, Python 3, NVIDIA card. Some dependencies may be missing from this list if I already had them installed.
Install dependencies
From the official repos:
sudo pacman -S python-scikit-learn opencv
From the AUR:
- Install python-scikit-image, python-qdarkstyle, and pyqt4
- Install either caffe-cuda or python-pytorch-cuda (or both)
Download ideepcolor
git clone https://github.com/junyanz/interactive-deep-colorization.git
Fetch models
Execute in the main git directory; requires wget.
bash ./models/fetch_models.sh
Fix error
This was needed in my case but may not be needed in yours. Solution found in #40: edit ui/gui_draw.py and remove ".encode('utf-8')" from the method init_result.
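A one-liner that should do the same edit (untested sketch; it keeps a backup of the original file):
# strip the .encode('utf-8') call from ui/gui_draw.py, keeping a backup as gui_draw.py.bak
sed -i.bak "s/\.encode('utf-8')//g" ui/gui_draw.py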
Start script
python ideepcolor.py
This failed for me with an out-of-memory error and only worked in CPU mode. Possibly an issue with hybrid laptop cards.
or
python ideepcolor.py --backend pytorch
No issues with GPU mode or memory, but it seems to be missing intelligence. Training instructions are below, but they don't seem to help much.
Tip
You can use nvidia-smi to check what the GPU id is if you have multiple cards.
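For example, to run on the second card reported by nvidia-smi (id 1), the --gpu option selects the device:
# pick GPU id 1 instead of the default id 0
python ideepcolor.py --backend pytorch --gpu 1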
In addition to the above: note that I have not found it to be much better than without; possibly the same model is already included in the original. Instructions for training your own are on the colorization-pytorch page.
Training for PyTorch
Install from the AUR:
python-torchvision-git
Install training script
git clone https://github.com/richzhang/colorization-pytorch.git
cd colorization-pytorch
Install python dependencies
pip install -r requirements.txt
Download pretrained model
bash pretrained_models/download_siggraph_model.sh
Remember where this is downloaded (the output will mention it) and substitute it below for [[PTH/TO/MODEL]]. In my case: colorization-pytorch/checkpoints/siggraph_pretrained/latest_net_G.pth
Execute main script with downloaded model
(In the install directory of the main script; see the post above.)
python ideepcolor.py --backend pytorch --color_model [[PTH/TO/MODEL]] --dist_model [[PTH/TO/MODEL]]
In case it helps someone, here is what I did to get it running on Debian Stretch with Python 3 (Debian's repository only has the Caffe Python module for Python 3).
Hi, and thank you for your help. I've tried your method on my Debian Stretch VM. The installation went seemingly well. However, when trying to start ideepcolor.py I get a lot of output in the terminal, but no GUI:
WARNING: Logging before InitGoogleLogging() is written to STDERR
W1216 13:50:06.872355 1010 _caffe.cpp:139] DEPRECATION WARNING - deprecated use of Python interface
W1216 13:50:06.872525 1010 _caffe.cpp:140] Use this instead (with the named "weights" parameter):
W1216 13:50:06.872543 1010 _caffe.cpp:142] Net('./models/reference_model/deploy_nodist.prototxt', 1, weights='./models/reference_model/model.caffemodel')
I1216 13:50:06.904345 1010 upgrade_proto.cpp:79] Attempting to upgrade batch norm layers using deprecated params: ./models/reference_model/deploy_nodist.prototxt
I1216 13:50:06.904379 1010 upgrade_proto.cpp:82] Successfully upgraded batch norm layers using deprecated params.
I1216 13:50:06.90447 1010 net.cpp:53] Initializing net from parameters:
state {
phase: TEST
level: 0
}
layer {
[Insert loads and loads of layer definitions]
}
ICE default IO error handler doing an exit(), pid = 1010, errno = 32
Hello @SleepProgger, first of all, thanks for the detailed step-by-step guide.
I am sorry for digging up this old discussion but I think this is probably the best place to ask for help.
I am getting this error:
Models already available
[win_size] = 512
[image_file] = test_imgs/mortar_pestle.jpg
[gpu] = 0
[cpu_mode] = False
[color_prototxt] = ./models/reference_model/deploy_nodist.prototxt_136
[color_caffemodel] = ./models/reference_model/model.caffemodel
[dist_prototxt] = ./models/reference_model/deploy_nopred.prototxt_136
[dist_caffemodel] = ./models/reference_model/model.caffemodel
[no_dist] = False
[load_size] = 136
[ui_time] = 60
[user_study] = False
ColorizeImageCaffe instantiated
gpu_id = 0, net_path = ./models/reference_model/deploy_nodist.prototxt_136, model_path = ./models/reference_model/model.caffemodel
WARNING: Logging before InitGoogleLogging() is written to STDERR
F0329 22:12:21.984036 32036 common.cpp:152] Check failed: error == cudaSuccess (30 vs. 0) unknown error
*** Check failure stack trace: ***
Aborted (core dumped)
The gpu = 0 part concerns me. Is something wrong with my drivers, maybe?
Apart from that, I also tried using the script you provided, but even setting XLEN to 1 didn't solve the issue.
I am running an NVIDIA GTX 1050 Ti; maybe 4 GB just isn't enough memory?
I am using driver 390.
I would really greatly appreciate any help! Thanks!
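If it helps with diagnosis, here is the kind of sanity check I assume would show whether Caffe can reach the GPU at all (using the standard pycaffe calls):
# does the driver see the card?
nvidia-smi
# can pycaffe initialise device 0?
python3 -c "import caffe; caffe.set_mode_gpu(); caffe.set_device(0); print('GPU init OK')"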
Hi - below is a working environment.yml for the GUI, installed on Xubuntu 18.04 with Miniconda.
name: ideepcolor
channels:
- pytorch
- conda-forge
- anaconda
- defaults
dependencies:
- _libgcc_mutex=0.1=main
- blas=1.0=mkl
- ca-certificates=2019.10.16=0
- certifi=2019.9.11=py27_0
- cffi=1.13.2=py27h2e261b9_0
- cudatoolkit=10.1.243=h6bb024c_0
- dbus=1.13.6=he372182_0
- expat=2.2.6=he6710b0_0
- fontconfig=2.13.1=he4413a7_1000
- freetype=2.9.1=h8a8886c_1
- future=0.18.2=py27_0
- gettext=0.19.8.1=hc5be6a0_1002
- glib=2.58.3=py27h6f030ca_1002
- gst-plugins-base=1.14.5=h0935bb2_0
- gstreamer=1.14.5=h36ae1b5_0
- icu=58.2=hf484d3e_1000
- intel-openmp=2019.4=243
- jpeg=9b=h024ee3a_2
- libedit=3.1.20181209=hc058e9b_0
- libffi=3.2.1=hd88cf55_4
- libgcc-ng=9.1.0=hdf63c60_0
- libgfortran-ng=7.3.0=hdf63c60_0
- libiconv=1.15=h516909a_1005
- libpng=1.6.37=hbc83047_0
- libstdcxx-ng=9.1.0=hdf63c60_0
- libtiff=4.1.0=h2733197_0
- libuuid=2.32.1=h14c3975_1000
- libxcb=1.13=h14c3975_1002
- libxml2=2.9.9=hea5a465_1
- mkl=2019.4=243
- mkl-service=2.3.0=py27he904b0f_0
- mkl_fft=1.0.15=py27ha843d7b_0
- mkl_random=1.1.0=py27hd6b4f25_0
- ncurses=6.1=he6710b0_1
- ninja=1.9.0=py27hfd86e86_0
- numpy=1.16.5=py27h7e9f1db_0
- numpy-base=1.16.5=py27hde5b4d6_0
- olefile=0.46=py27_0
- openssl=1.1.1=h7b6447c_0
- pcre=8.43=he1b5a44_0
- pillow=6.2.1=py27h34e0f95_0
- pip=19.3.1=py27_0
- pthread-stubs=0.4=h14c3975_1001
- pycparser=2.19=py27_0
- pyqt=4.11.4=py27_4
- python=2.7.17=h9bab390_0
- pytorch=1.3.1=py2.7_cuda10.1.243_cudnn7.6.3_0
- qt=4.8.7=2
- readline=7.0=h7b6447c_5
- setuptools=42.0.1=py27_0
- sip=4.18=py27_0
- six=1.13.0=py27_0
- sqlite=3.30.1=h7b6447c_0
- tk=8.6.8=hbc83047_0
- torchvision=0.4.2=py27_cu101
- typing=3.7.4.1=py27_0
- wheel=0.33.6=py27_0
- xorg-libxau=1.0.9=h14c3975_0
- xorg-libxdmcp=1.1.3=h516909a_0
- xz=5.2.4=h14c3975_4
- zlib=1.2.11=h7b6447c_3
- zstd=1.3.7=h0b5b093_0
- pip:
- backports-functools-lru-cache==1.6.1
- cloudpickle==1.2.2
- configparser==4.0.2
- contextlib2==0.6.0.post1
- cycler==0.10.0
- decorator==4.4.1
- helpdev==0.6.10
- importlib-metadata==1.1.0
- kiwisolver==1.1.0
- matplotlib==2.2.4
- more-itertools==5.0.0
- networkx==2.2
- opencv-python==4.1.2.30
- pathlib2==2.3.5
- psutil==5.6.7
- pyparsing==2.4.5
- python-dateutil==2.8.1
- python-qt==0.50
- pytz==2019.3
- pywavelets==1.0.3
- qdarkstyle==2.7
- scandir==1.10.0
- scikit-image==0.14.5
- scikit-learn==0.20.4
- scipy==1.2.2
- subprocess32==3.5.4
- zipp==0.6.0
prefix: /home/ac/miniconda3/envs/ideepcolor
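If you haven't used conda environment files before, the environment can be created from this file and activated with:
# create the environment described above and switch to it
conda env create -f environment.yml
conda activate ideepcolor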
I was able to install this on Windows with Python 3.6. I updated the repository in this forked version.
Technology based on this is also now in Photoshop Elements 2020.