Varun Gujjar
@daanzu Sorry, a very noob question... but I'm just new to this :) I see the package kaldi_active_grammar-1.5.0-py2.py3-none-manylinux2010_x86_64.whl. Will this run on armv7l or aarch64, or do I have to...
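(In case it helps anyone reading along: a quick, generic way to see why a manylinux2010_x86_64 wheel may or may not match your Pi is to print the machine architecture. This is just a standard-library check, not anything specific to kaldi_active_grammar.)
```
import platform

# A manylinux2010_x86_64 wheel only matches x86_64 machines; a Raspberry Pi
# will typically report 'armv7l' (32-bit OS) or 'aarch64' (64-bit OS) here.
print(platform.machine())
print(platform.architecture())
```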
@daanzu Sure :D let me know how that goes. Also, I just managed to try the new Pi 4 64-bit OS. Maybe I can compile one for aarch64 and share the package... if...
Hey, sorry, I've been busy with my daily tasks. I did manage to run it on the RPi 64-bit aarch64... It's been a while since I touched it, so I'll manage to collect the installation...
You can run the following Python code to see your exact device index:
```
import pyaudio

p = pyaudio.PyAudio()
info = p.get_host_api_info_by_index(0)
numdevices = info.get('deviceCount')
for i in range(0, numdevices):
    device = p.get_device_info_by_host_api_device_index(0, i)
    # only list devices that can actually capture audio
    if device.get('maxInputChannels') > 0:
        print("Input Device id", i, "-", device.get('name'))
```
> You can hardcode `input_device_index=MY_INDEX` in [right here](https://github.com/MycroftAI/mycroft-precise/blob/dev/runner/precise_runner/runner.py#L197). Alternatively, if you'd like to submit a pull request, you'd want to pass a custom `stream=stream` into [here](https://github.com/MycroftAI/mycroft-precise/blob/dev/precise/scripts/listen.py#L61) where the custom stream...
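For reference, the "custom stream" could look something like the sketch below, with `MY_INDEX` set to whatever index the device-listing snippet above printed. The format, sample rate, and buffer size here are assumptions on my part (Precise's own defaults may differ), so treat this as a rough starting point rather than the project's actual code:
```
import pyaudio

MY_INDEX = 2  # hypothetical: replace with the index printed by the listing snippet

p = pyaudio.PyAudio()
# 16 kHz mono 16-bit input is assumed; adjust to whatever the runner expects
stream = p.open(format=pyaudio.paInt16,
                channels=1,
                rate=16000,
                input=True,
                input_device_index=MY_INDEX,
                frames_per_buffer=2048)
```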
OK, so I upgraded my TensorFlow from 1.13 to 1.14 and also upgraded my Keras to 2.3.1, and now it works fine.
@alokprasad I made the changes as per https://github.com/alokprasad/binaries/blob/master/squeezewave.diff and removed all references to CUDA... however, I'm still unable to run the model using this command: python inference.py -f
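(For what it's worth, in these SqueezeWave/WaveGlow-style inference scripts "removing the CUDA references" usually comes down to loading the checkpoint on CPU and dropping the `.cuda()`/`.half()` calls. The checkpoint filename and the 'model' key below are assumptions for illustration, not the repo's actual names:)
```
import torch

# Hypothetical checkpoint name; substitute the file the repo actually ships
ckpt = torch.load("squeezewave.pt", map_location="cpu")  # load on CPU instead of .cuda()

# These checkpoints often store the module under a 'model' key; adjust if yours differs
model = ckpt["model"] if isinstance(ckpt, dict) and "model" in ckpt else ckpt
model = model.float().eval()  # stay in fp32 on CPU rather than calling .half()
```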
@alokprasad Great... I also just referred to your page. I'll be trying this on an RPi 4 4GB and get back to you with the timing :)
@sujeendran I'm trying to run this on a Raspberry Pi 4 and do some tests... However, I just need some help: which TTS engine did you manage to synthesize audio from text with?...
@alokprasad Thanks a lot for the beginners :D However, looking at your FastSpeech repo, it says you need CUDA... has it also been modified for CPU support as well?