audio_adversarial_examples
Something wrong with [pip3 install $(python3 util/taskcluster.py --decoder)]
There's another issue for this (https://github.com/carlini/audio_adversarial_examples/issues/33). The script you're executing tries to download the ds-ctcdecoder package, which is no longer available on the server. Rogthlu in issue #33 seems to have found the .whl file that the script tries to download and installed it manually. Alternatively, you could use the fixed code I provide in this repo: https://github.com/tom-doerr/audio_adversarial_examples
Thanks a lot! Did you push your Docker images to Docker Hub? If so, I'd like to pull the images directly.
The .whl can be built from the local DeepSpeech 0.4.1 repository:

make -C native_client/ctcdecode
https://discourse.mozilla.org/t/could-not-install-requirement-ds-ctcdecoder-0-4-1/47013/5
Just pushed them. For the GPU version run:
docker run --gpus all -it --mount src=$(pwd),target=/audio_adversarial_examples,type=bind -w /audio_adversarial_examples tomdoerr/aae_deepspeech_041_gpu
For the CPU-only version:
docker run -it --mount src=$(pwd),target=/audio_adversarial_examples,type=bind -w /audio_adversarial_examples tomdoerr/aae_deepspeech_041_cpu
@dijksterhuis by running this command, I am getting this error:
Traceback (most recent call last):
File "./setup.py", line 53, in
Can you please tell me how I can resolve this?
You could just extract the .whl file from one of the Docker images in case you don't want to use them.
@JeetShah10 You're probably better off asking in the DeepSpeech Discourse forum tbh.
It's either one of two things:
- the fact you're trying to use Windows
- problems with make finding files

The first is probably the most likely.
Windows

This thread seems to suggest Windows support is patchy for v0.4.1. IIRC I don't think there were any decoder wheels produced for Windows? native_client/definitions.mk only contains build definitions for OS X and Linux.

If this is the issue, then you're probably better off using the Docker images provided by @tom-doerr.
make finding files

make calls setup.py on two different occasions, each time calling build_common.py:
\build_common.py", line 67, in build_common
build_common.py looks for a bunch of common files, including *.cpp extensions, to compile the package:
cmd = '{cc} -fPIC -c {cflags} {args} {includes} {infile} -o {outfile}'.format(
    cc=compiler,
    cflags=cflags,
    args=' '.join(ARGS),
    includes=' '.join('-I' + i for i in INCLUDES),
    infile=file,
    outfile=outfile,
)
print(cmd)
subprocess.check_call(shlex.split(cmd))
That last line is the one that is failing. I think it's because it can't find some file it needs to run the string cmd on your system. make should print the command cmd it was going to run just before the traceback. Check the files that are listed and see if they exist on your system somewhere?
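One way to act on that suggestion is to paste the printed cmd into a small shell loop that tests each path-like token for existence. A minimal sketch; the command string below is a made-up example, not the real output from your build:

```shell
# Hypothetical compile command as printed by make before the traceback
cmd='g++ -fPIC -c -O3 -I../kenlm swigwrapper.cpp -o swigwrapper.o'

# Report every source/object/header token that does not exist on disk
for tok in $cmd; do
  case "$tok" in
    -*) ;;                                # skip compiler flags
    *.cpp|*.cc|*.h|*.o)
      [ -e "$tok" ] || echo "missing: $tok" ;;
  esac
done
```

Any "missing:" line points at a file the compiler was asked for but couldn't find.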
@JeetShah10 I just uploaded the ctcdecoder .whl file: https://github.com/tom-doerr/audio_adversarial_examples/blob/master/ds_ctcdecoder-0.4.1-cp35-cp35m-linux_x86_64.whl
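Note the cp35 and linux_x86_64 tags in that filename: pip will only install the wheel on CPython 3.5 on 64-bit Linux, and on any other interpreter it reports "not a supported wheel on this platform". A quick check of which interpreter version you're running:

```shell
# Print the interpreter's major.minor version; the wheel above needs 3.5
python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])'
```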
Hi Tom, I am trying to reproduce the code with your GPU image, but cannot figure out the right path to DeepSpeech in classify.py. Many thanks.
File "classify.py", line 20, in
@kkfkk Sorry, I didn't see your comment. In case you still want to get it running: there were some issues with the Docker images due to a numpy update. I fixed the Dockerfiles, so you shouldn't have any issues with the newest versions.