Dominik
Did a quick test running the TFLite model (`output_graph_from_koh-osug.tflite`) with coqui-stt (v1.0.0) - works fine...
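For reference, a minimal sketch of how such a test could look with the Coqui STT Python bindings (`pip install stt`); the WAV file name and the scorer path are just placeholders, not part of the original test:

```python
# Minimal sketch: run a TFLite acoustic model on a 16 kHz, mono, 16-bit WAV file.
import wave
import numpy as np
from stt import Model  # Coqui STT v1.0.0 Python package

model = Model("output_graph_from_koh-osug.tflite")
# model.enableExternalScorer("kenlm.scorer")  # optional language model scorer

with wave.open("test.wav", "rb") as wav:  # "test.wav" is a placeholder
    assert wav.getframerate() == model.sampleRate()  # model expects 16 kHz audio
    audio = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)

print(model.stt(audio))
```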
For some reason the mycroft-chat skill hasn't been listed in the Marketplace/mycroft-skills repo since 20.02. So here is an upgrade that runs with 20.08, including a minor fix for `service.entity` and updated...
> in order to hopefully solve the unpickling error mentioned above, maybe you could let me know how you pickled the VITS model and which Python version was used? I...
The link to the BOM was most likely deleted because Adham's repo is outdated - I would recommend using the new repo instead: https://github.com/OpenQuadruped/spot_mini_mini The link to the latest BOM can...
There is a calibration mode that sets the servos to aligned positions; see the calibration guide for details: https://github.com/OpenQuadruped/spot_mini_mini/blob/spot/spot_real/Calibration.md
You probably need to set the parameter `sampling_rate` in the data section of the config file during training. Anyway, the human speaking voice has no relevant information above 8 kHz, so in my...
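As a rough sketch (assuming a VITS-style `config.json` with a `data` section; the file name and the 22050 Hz value are only examples and should match your training data):

```python
# Sketch: adjust the sampling rate in a VITS-style config.json before training.
import json

with open("config.json") as f:  # path is an example
    config = json.load(f)

# Must match the sample rate of the training audio (e.g. 22050 Hz or 16000 Hz).
config["data"]["sampling_rate"] = 22050

with open("config.json", "w") as f:
    json.dump(config, f, indent=2)
```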
Hi, if I remember correctly we trained our Coqui-VITS model up to nearly 1000k steps, but there weren't any improvements in quality, neither audible nor technical (MOSNET, DNSMOS, SRMR)...
Technically it works for me on a MacBook Pro M1 (macOS Monterey 12.6.5, Python 3.11.3, PyTorch nightly build via Conda install), but there is only noise output. Quick performance test with the following prompt...
> Anyone else getting this? Yes. You need to set `PYTORCH_ENABLE_MPS_FALLBACK=1`. Either run `export PYTORCH_ENABLE_MPS_FALLBACK=1` in your console session or prefix your Python call, e.g. `PYTORCH_ENABLE_MPS_FALLBACK=1 python myscript.py`
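Another option, if you'd rather not touch the shell environment, is a sketch like this inside the script itself; as far as I know the variable has to be set before `torch` is imported, otherwise it has no effect:

```python
# Sketch: enable the MPS CPU fallback from within the script.
import os
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"  # must come before importing torch

import torch  # imported only after the environment variable is set

print(torch.backends.mps.is_available())  # sanity check that MPS is usable
```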
Runs on macOS Monterey and Python 3.10.