dc_tts
What are the memory requirements to run the model?
I can see that on synthesize.py my GTX 1080 runs out of memory, and my GTX 1070 Ti has enough to load the graph, but as soon as the synthesis loop starts I can't get through even a single iteration of the for loop. What kind of systems is anyone using to successfully run synthesize.py or train.py?
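(In case it helps anyone hitting the same OOM: the standard TF 1.x trick is to stop TensorFlow from grabbing the whole card up front. This is just a sketch, not something from the repo; the session body is a placeholder for whatever synthesize.py actually builds and runs.)

import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # allocate GPU memory on demand instead of all at once
# config.gpu_options.per_process_gpu_memory_fraction = 0.9  # or cap the fraction explicitly

with tf.Session(config=config) as sess:
    ...  # restore the checkpoint and run the synthesis loop here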
I am running this with TensorFlow 1.11 and Python 3.6. Interestingly, I made a separate conda environment with Python 2.7 and TensorFlow 1.3. Using the same files, synthesize.py runs fine, and train.py 1 is currently training from scratch without issue so far.
Did you get it working under Python 3?
@gitrodg1 I was able to train this model up to 100k steps on Google Colab (n1-highmem-2 instance), but change batch_size to 8/16. Google Colab specs: 2 vCPU @ 2.2 GHz, 13 GB RAM, Tesla K80/T4.
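For reference, the batch size lives in hyperparams.py (assuming your copy matches the upstream repo, where it is the "B" hyperparameter); the whole change amounts to something like:

B = 16  # batch size; drop to 8 if the K80 still runs out of memory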
@Energyanalyst I had the same issue on Linux with Python 3. In my case, it comes from data_load.py (https://github.com/Kyubyong/dc_tts/blob/master/data_load.py).
The error under Python 3 is TypeError: a bytes-like object is required, not 'str' (https://stackoverflow.com/questions/606191/convert-bytes-to-a-string). Here is a solution for it. I replaced:
mel = "mels/{}".format(fname.replace("wav", "npy"))
mag = "mags/{}".format(fname.replace("wav", "npy"))
by
mel = "mels/{}".format(fname.decode("utf-8").replace("wav", "npy"))
mag = "mags/{}".format(fname.decode("utf-8").replace("wav", "npy"))
And it seems to work.
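A slightly more defensive variant of the same fix (my own sketch, not from the repo) decodes only when fname actually arrives as bytes, so the same data_load.py keeps working under both Python 2 and Python 3:

if isinstance(fname, bytes):
    fname = fname.decode("utf-8")  # TFRecord/queue pipelines hand back bytes under Python 3
mel = "mels/{}".format(fname.replace("wav", "npy"))
mag = "mags/{}".format(fname.replace("wav", "npy"))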
I love you so much that you cannot even imagine.
Hello, I am having trouble running the model on my Ubuntu computer with Python 3 and 8 GB RAM. I have been looking at Google Colab, but it only allows .ipynb files and I don't know how to turn all of this into one ipynb. Someone please help; I am just trying to get the model to run.
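You don't need to convert the scripts into a notebook; a Colab notebook can run them as shell commands from a single cell. Roughly (the package line is an assumption, check the repo's requirements, and point the data path in hyperparams.py at your dataset first):

!git clone https://github.com/Kyubyong/dc_tts.git
%cd dc_tts
# !pip install librosa tqdm   # only if Colab's preinstalled packages are missing these
!python prepro.py             # builds mels/ and mags/ from the dataset
!python train.py 1            # Text2Mel; run "train.py 2" afterwards for SSRN

Under Python 3 you may also need the data_load.py decode fix described above.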
I was able to run synthesize.py but not train.py because I only have 8 GB of RAM, so how much RAM is required? It completely maxed out my RAM, to the point where even moving the mouse lagged my PC, and I had to hard reset.
@Traincraft101 it will depend on the dataset. I'd suggest at least 12 GB of RAM.
@Traincraft101 it will depend on the dataset. I'd suggest at least 12 GB of RAM.
Thanks! I only intend to use about 200-300 audio samples per dataset, but I'll get 16 or maybe 32 GB of RAM to be safe.
Hi guys, I am running training on an AWS VM with 16 cores and 30 GB of memory. The "train 1" step seems to take some time; is there any way to see how far along in the training process I am? It's been a few hours now, and all I see is the bar loading and then starting all over again...
@dan3333 I'd suggest you use a GPU for training.
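One rough way to see progress in the meantime (just a sketch; "logdir" below is an assumption, use whatever log directory hyperparams.py actually sets): the checkpoints train.py writes can be queried for the newest one, and if they follow the repo's naming the filename ends in the global step reached so far.

import tensorflow as tf

ckpt = tf.train.latest_checkpoint("logdir")   # replace with the logdir from hyperparams.py
print(ckpt or "no checkpoint written yet")    # e.g. a name ending in the current global step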