OpenVoice
feat: docker improvements
Features:
- Includes CUDA 12.1 and cuDNN 8.9.7, along with the CUDA/cuDNN installation process. Fixes #215 and #225.
- Optimized Docker layer caching for faster builds.
- Added the ability to download only the necessary checkpoints, saving download time and disk space.
- Switched base image to python:3.10-slim.
This setup has been thoroughly tested to ensure stability and performance.
Prerequisites:
Join the NVIDIA Developer Program:
- Go to the NVIDIA Developer Program.
- Sign up for an account if you don't already have one.
- Once you have an account, log in to the NVIDIA Developer website.
Download cuDNN:
- Navigate to the cuDNN Archive.
- Select the version you need (cuDNN 8.9.7 for CUDA 12.1).
- Download the appropriate file for Linux (it should be named something like cudnn-linux-x86_64-8.9.7.29_cuda12-archive.tar.xz).
- Place the file in the root of the repository directory.
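Before building, it may help to confirm the archive is actually in the build context root, since the build can only copy it from there. A minimal shell check (the filename is the one named above):

```shell
#!/bin/sh
# Quick pre-build sanity check: the Dockerfile can only COPY the cuDNN
# archive if it sits in the root of the build context.
ARCHIVE="cudnn-linux-x86_64-8.9.7.29_cuda12-archive.tar.xz"
if [ -f "$ARCHIVE" ]; then
  STATUS="found"
else
  STATUS="missing"
fi
echo "cuDNN archive: $STATUS"
```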
Run:
docker build -t openvoice .
then
docker run --gpus all -p 8888:8888 openvoice v2
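If the container starts but CUDA errors persist, it is worth first confirming that Docker can see the GPU at all. This is a generic check (it assumes the NVIDIA Container Toolkit is installed on the host) and is not specific to this image:

```shell
# Should print the usual nvidia-smi table; if this fails, the problem is the
# host's NVIDIA runtime setup, not the OpenVoice image.
docker run --rm --gpus all nvidia/cuda:12.1.0-base-ubuntu22.04 nvidia-smi
```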
tl;dr
Hey everyone,
I've been working on improving the Docker setup for OpenVoice, and I think these changes will make it much easier to run in a containerized environment.
The main issue I've seen is with CUDA and cuDNN versions not matching up, causing errors. In this Dockerfile, I've included CUDA 12.1 and cuDNN 8.9.7, which work well with the latest PyTorch that supports CUDA 12. This should help eliminate those errors.
Another improvement is the entrypoint shell script: it now downloads only the checkpoints for the version you specify, saving time and bandwidth.
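As a rough sketch of what such version-aware checkpoint selection can look like (the archive names here are illustrative placeholders, not necessarily the exact ones the entrypoint uses):

```shell
#!/bin/sh
# Illustrative version-aware checkpoint selection, in the spirit of the
# entrypoint script; archive names are placeholders.
VERSION="${OPENVOICE_VERSION:-v2}"
case "$VERSION" in
  v1) CKPT="checkpoints_1226.zip" ;;
  v2) CKPT="checkpoints_v2_0417.zip" ;;
  *)  echo "unknown version: $VERSION" >&2; exit 1 ;;
esac
echo "selected checkpoint archive: $CKPT"
```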
I've also optimized the Docker layer cache. I rearranged some commands so that if only the local files change, Docker can reuse the base layers that have all the lengthy installations. This should speed up your builds when you're making changes to your local setup.
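The ordering principle is the usual one: slow, rarely-changing steps first, local files last. An illustrative sketch of the idea (not the exact Dockerfile from this PR):

```dockerfile
FROM python:3.10-slim

# 1. Rarely-changing, expensive layers first: system packages, CUDA/cuDNN,
#    Python dependencies. These stay cached across local-file edits.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# 2. Frequently-changing local files last, so an edit only rebuilds from here.
COPY . /workspace
WORKDIR /workspace
```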
In summary: builds are smoother, faster, and less prone to errors. It's now easier to spin up different versions and notebooks without CUDA issues or long installations.
This setup has been thoroughly tested to ensure stability and performance.
Give it a try and let me know how it goes! I'm always happy to hear feedback and suggestions. I think this will be a big improvement for the OpenVoice experience.
Happy Dockerizing! 🐳 Vlad
Screenshots:
Running:
Results:
If you want to implement a similar setup on Windows, follow:
- https://github.com/myshell-ai/OpenVoice/issues/215#issuecomment-2153388034
- https://github.com/myshell-ai/OpenVoice/issues/215#issuecomment-2153417117
@wl-zhao, @yuxumin, @Zengyi-Qin, could you take a look, please? I wasn't able to assign a reviewer; that option seems to be disabled in this repo. Thank you!
@oldmanjk Thank you for reviewing the pull and bringing this up, but "kernel died" can have various causes; this error message can result from anything from lack of memory to missing libraries.
First issue you are having
Based on your logs, the first issue you are facing is with moving the extracted checkpoint files: the directories /workspace/checkpoints_v2/base_speakers and /workspace/checkpoints_v2/converter are not empty, which prevents the extracted files from being moved. Since this is a Docker container, those folders should not exist or be populated before the extraction process. In my testing environment, I built the image from scratch and it works without any issues.
I don't know what your build context is, but here are some possibilities:
- Are you using any persistent volumes or bind mounts when running the container? If so, try running without them to see if the issue persists. If you previously ran the container with volumes or bind mounts attached to those specific directories, the folders can outlive the container and conflict with the next build; leftover files can also remain if earlier container instances or volumes weren't cleaned up. Please make sure any previous containers and associated volumes are removed before building the image again.
- Are there any additional files or directories in your build context that might be causing the folders to be populated?
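If stale volumes turn out to be the cause, a cleanup along these lines before rebuilding can help (destructive: `volume prune` removes every volume not attached to a container; the container id is a placeholder):

```shell
docker ps -a                 # find old OpenVoice containers
docker rm <container-id>     # remove them
docker volume ls             # inspect leftover volumes
docker volume prune          # remove dangling volumes
docker build --no-cache -t openvoice .
```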
Second issue
For the second one: it looks like the LSP extension tries to autodetect and start language servers, and it looks for the Node.js executable path. I have the container running right now, here:
The only mention of jupyterlab-lsp, which requires Node, is your comment in this pull, so I assume this is related to your particular setup.
@oldmanjk, could you please check if you have any user-specific Jupyter configurations, additional Jupyter extensions, or dev environment settings that might be enabling or interacting with the LSP extension? If so, try disabling or removing them and rebuilding the container to see if the errors persist.
If the issue still persists after considering these, I'd be happy to work with you to investigate further and find a solution. We can explore additional steps.
Thanks for the fast and thorough response. Unfortunately, I have deleted everything and moved on. Good luck though!
@vladlearns I have been working in parallel on a fix for the Dockerfile suited to CPU setups, particularly Mac M-series and similar systems, and I have finally found a solution. Considering your work on this, perhaps we can combine our efforts: we could develop specialized Dockerfiles, one for CUDA and another for CPU, with corresponding docker-compose files (docker-compose.cuda.yml and docker-compose.cpu.yml). What do you think?
My work: https://github.com/npjonath/OpenVoice/pull/1
Note: this PR also includes the fix from @Afnanksalal, as it is a requirement to run this project on CPU-based architectures. (https://github.com/myshell-ai/OpenVoice/pull/262)
OpenVoice V1 works correctly on my setup. V2 is still not working because of an issue in MeloTTS.
Issue: https://github.com/myshell-ai/MeloTTS/issues/167. A possible solution is running a specific version of MeloTTS: https://github.com/Meiye-lj/Dockerfiles/blob/76c88309a4bb7b7070441bed3b4b72231f5349b8/MeloTTS/Dockerfile
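A CUDA-specific compose file along the lines proposed above might look like this (the service name, port mapping, and Dockerfile name are illustrative assumptions, not settled decisions):

```yaml
# docker-compose.cuda.yml - illustrative sketch only
services:
  openvoice:
    build:
      context: .
      dockerfile: Dockerfile     # the CUDA Dockerfile from this PR
    ports:
      - "8888:8888"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```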
I don't use this project anymore, so I probably shouldn't be a requested reviewer
@oldgithubman You added yourself by approving the PR and then dismissing the review because of your environment. Later, you decided to leave without providing any details. Now, when I ask for a review, you are automatically added, and there is no way to remove you.
@npjonath Hey! So, you just want me to rename the file?
@vladlearns No, I just wanted to discuss this with you. You can leave the naming as is; I guess GPU usage is the default. I will add the docker-compose files and Dockerfile.cpu separately to extend your implementation.
Ok. I don't really know what I'm doing. I'll just approve it so you can move on
@npjonath Sure. So far, I've tested my setup on multiple environments, and it works for multiple people as well, but it seems they don't merge pull requests into the main branch. Instead, they ask contributors to fork the repository and point to the fork in the documentation.