Jonathan Shen
Sorry for not being clear, you need to actually post the contents of tf_env.txt. But yes, it seems that for some reason the environment is still not set up...
that should be the case.
That looks right. Now you can try building the GPU docker image and running on GPU for faster training.
Should take a couple of days total. The warnings are fine.
The segfault can be fixed by my comment in the other issue https://github.com/tensorflow/lingvo/issues/136#issuecomment-520066943
See e.g. https://www.digitalocean.com/community/tutorials/how-to-share-data-between-the-docker-container-and-the-host. You should download the dataset outside of docker, then link it into the docker instance with -v, so the dataset doesn't get removed when you quit docker.
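A minimal sketch of what that looks like (the host path and image tag here are assumptions, substitute your own):

```shell
# Dataset lives on the host, outside docker, so it survives container exits.
DATASET_DIR="$HOME/librispeech"
mkdir -p "$DATASET_DIR"

# -v HOST_PATH:CONTAINER_PATH bind-mounts the host directory into the
# container; anything the container writes there persists on the host.
# (Image tag tensorflow:lingvo and mount point /tmp/librispeech are
# placeholders for illustration.)
CMD="docker run --rm -it -v $DATASET_DIR:/tmp/librispeech tensorflow:lingvo"
echo "$CMD"
```

Inside the container, the dataset then appears under /tmp/librispeech, and deleting the container leaves the host copy untouched.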
Hm, sorry I've never actually tried the librispeech processing scripts myself :( I think if you create an empty directory and then link it into docker with -v then put...
The -v flag goes in the docker run command, as described in https://www.digitalocean.com/community/tutorials/how-to-share-data-between-the-docker-container-and-the-host
Thank you for opening this issue. Is this a feature request or are you planning to contribute this feature? This feature is not currently on our horizon.
IIRC there was some problem with 2.4.1 at the time. The pip package is quite old now, let me see if we can build a new one for 2.5.0 which...