WeNet 3.0 Roadmap
If you are interested in WeNet 3.0, please see our roadmap at https://github.com/wenet-e2e/wenet/blob/main/ROADMAP.md and discuss it here.
WeNet is a community-driven project and we love your feedback and proposals on where we should be heading. Feel free to volunteer if you are interested in trying out some items (they do not have to be on the list).
Do we have a plan to enhance post-processing, like punctuation restoration?
Yes, ITN and punctuation restoration are in our plan, and the solution should be simple and elegant.
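As a toy illustration of what ITN (inverse text normalization) does, here is a dict-based sketch that rewrites spoken-form numbers into written form. Real ITN systems typically use WFST grammars; nothing here reflects WeNet's actual solution, and the word tables are illustrative only.

```python
# Toy inverse text normalization (ITN): spoken-form number words -> digits.
UNITS = {"zero": 0, "one": 1, "two": 2, "three": 3, "four": 4,
         "five": 5, "six": 6, "seven": 7, "eight": 8, "nine": 9}
TENS = {"twenty": 20, "thirty": 30, "forty": 40, "fifty": 50,
        "sixty": 60, "seventy": 70, "eighty": 80, "ninety": 90}

def itn(text: str) -> str:
    out = []
    for word in text.split():
        if word in TENS:
            out.append(str(TENS[word]))
        elif word in UNITS and out and out[-1].isdigit() and out[-1].endswith("0"):
            # Merge "twenty three" -> "23"
            out[-1] = str(int(out[-1]) + UNITS[word])
        elif word in UNITS:
            out.append(str(UNITS[word]))
        else:
            out.append(word)
    return " ".join(out)

print(itn("i am twenty three years old"))  # -> i am 23 years old
```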
Do you have a plan to implement text-to-speech models?
I found this: https://github.com/wenet-e2e/wetts
For bindings, there are 3 questions:
1. Get a model by language type: can we supply a small and a big model for each language? (The small model could be trained with knowledge distillation.)
2. Shrink libtorch: it is currently a little big, but libtorch has many backends like MKL or OpenMP, so it is not easy to make it small only by passing compile arguments. It seems we need to open a repo to do this?
3. For advanced usage, should we open more APIs for developers in other languages, like an ONNX model?
Hello, any plan for a VAD in the future?
Hi, do you have plans to introduce some text-only domain adaptation methods? Or do you have any suggestions on the topic?
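One common text-only adaptation approach is to rescore ASR n-best hypotheses with an external language model trained on in-domain text (no audio needed). A minimal sketch; the hypotheses, scores, and weight are made up, and this is not a WeNet API:

```python
def rescore_nbest(nbest, lm_logprob, lm_weight=0.5):
    """Pick the best hypothesis after adding a domain LM score.
    nbest: list of (text, acoustic_score) pairs from beam search.
    lm_logprob: text -> log-probability under an in-domain LM."""
    return max(nbest, key=lambda h: h[1] + lm_weight * lm_logprob(h[0]))

# Toy example: the domain LM prefers the in-domain spelling.
lm = {"wenet is great": -2.0, "we net is grate": -9.0}.get
nbest = [("we net is grate", -5.0), ("wenet is great", -5.5)]
best = rescore_nbest(nbest, lambda t: lm(t, -20.0))
print(best[0])  # -> wenet is great
```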
Hello, any plan for a VAD in the future?
Under testing...
New architecture? Any paper for reference?
The server-side VAD is the old architecture with a smaller acoustic unit, but we don't need the forced alignment. The idea is the same as the endpoint detection in the wenet runtime.
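The trailing-silence endpoint idea can be sketched as counting consecutive blank (silence) CTC frames after some speech has been decoded. The 50-frame threshold below is an assumed illustrative value, not wenet's actual default:

```python
def detect_endpoint(frame_is_blank, decoded_something,
                    max_trailing_blanks=50):
    """Fire an endpoint when enough consecutive blank CTC frames
    follow some decoded speech.
    frame_is_blank: iterable of booleans, one per CTC output frame.
    decoded_something: whether any non-blank token was decoded yet."""
    trailing = 0
    for blank in frame_is_blank:
        trailing = trailing + 1 if blank else 0
        if decoded_something and trailing >= max_trailing_blanks:
            return True
    return False
```

Speech followed by 60 blank frames triggers the endpoint, while pure silence with nothing decoded does not.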
From @Mddct: please see "An End-to-End Architecture for Keyword Spotting and Voice Activity Detection".
The network outputs characters in the alphabet directly, including the blank and space characters.
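Greedy decoding of such a character-level CTC output takes the argmax per frame, collapses repeated labels, and drops blanks. A minimal numpy sketch; the alphabet mapping is illustrative:

```python
import numpy as np

def ctc_greedy_decode(logits: np.ndarray, alphabet: str, blank: int = 0) -> str:
    """Greedy CTC decoding: argmax per frame, collapse repeated labels,
    then drop blanks. Non-blank ids 1..N map to alphabet[0..N-1]."""
    out, prev = [], blank
    for i in logits.argmax(axis=-1):
        if i != blank and i != prev:
            out.append(alphabet[i - 1])
        prev = i
    return "".join(out)

# Frames whose argmax ids are: a, a, blank, b, space, space, a
frames = np.eye(4)[[1, 1, 0, 2, 3, 3, 1]]
print(ctc_greedy_decode(frames, "ab "))  # -> ab a
```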
@robin1001 Hi, the build fails on macOS M1.
More data augmentation, like RIR.
https://github.com/pytorch/audio/issues/2624
torchaudio will add multi-channel RIR based on pyroomacoustics.
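Whichever library generates the impulse responses, the core RIR augmentation step is convolving dry speech with the RIR. A minimal numpy sketch; the normalization choices here are illustrative, not what torchaudio or WeNet actually do:

```python
import numpy as np

def apply_rir(speech: np.ndarray, rir: np.ndarray) -> np.ndarray:
    """Reverberate dry speech by convolving it with a room impulse
    response (RIR), keeping the original length and peak level."""
    rir = rir / (np.abs(rir).max() + 1e-8)          # normalize the RIR
    wet = np.convolve(speech, rir)[: len(speech)]   # keep original length
    peak = np.abs(speech).max()
    if peak > 0:                                    # restore peak level
        wet = wet * (peak / (np.abs(wet).max() + 1e-8))
    return wet
```

With a unit impulse as the RIR the signal passes through unchanged; a real RIR smears each sample over the room's decay tail.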
Great!
What about adding a timestamp per word? The current implementation does not seem accurate with a fixed 100 ms width.
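The natural frame width comes from the encoder's frame shift and subsampling: with a typical 10 ms shift and 4x subsampling, each output frame covers 40 ms, which a fixed 100 ms word width cannot reflect. A small sketch; the defaults are typical values, not guaranteed for every model:

```python
def frame_to_time(frame_index: int, frame_shift_ms: int = 10,
                  subsampling: int = 4) -> float:
    """Convert an encoder output frame index to seconds. Each output
    frame covers subsampling * frame_shift_ms milliseconds."""
    return frame_index * subsampling * frame_shift_ms / 1000.0

print(frame_to_time(25))  # -> 1.0
```

Per-word timestamps then follow from the first and last CTC frames aligned to each word.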
Really exciting to see the Raspberry Pi on the 3.0 roadmap, but there are now so many great AArch64 platforms, often with friendlier OpenCL-capable GPUs and NPUs, that maybe the focus should not be on the Raspberry Pi alone; with current stock availability, things are not looking good.
It would be great to see a loosely coupled system where you can mix and match any vendor's modules to make up a voice system. I saw what you did with websockets and gRPC; on the output of each module, all that is required is a simple queue and a route to the next module, keeping any framework specifics between them to an absolute minimum.
Linux needs standalone, standard conf-driven modules that are weakly linked by queues.
There is a natural serial queue to voice processing:
1. Mic/KWS input
2. Mic/KWS & speech enhancement server
3. ASR
4. NLP skill router
5. TTS
6. Audio server
7. Skill server(s)
Stages 2–5 can work in a really simplistic manner where either audio & metadata or text & metadata are queued until the process in front is clear and then sent.
That's it in a nutshell: a native Linux voice system can be that simple, as it is just a series of queues. Keeping it simple with native Linux methods rather than embedded programming means it scales up to the complex.
Each Mic/KWS unit is allocated to a zone (room) and channel, which should remain in an /etc conf file that likely mirrors the zone and channel of the audio system outputs. Distributed Mic/KWS units connect to the Mic/KWS & speech enhancement server, and on a KW hit the best stream of that zone (the KWS argmax) is selected. The Mic/KWS & speech enhancement server receives both audio and metadata; it processes the audio but merely passes the metadata on to a skill router. The skill router connects to skill servers to collect simple entity data by basic NLP matching of predicate and subject, routing to a skill server while again purely forwarding the metadata. The skill router will also accept text back from skill servers, returned with metadata, so the TTS will forward audio to the correct zone and channel. On completion, the calling skill server is added to the metadata and forwarded back to the Mic/KWS & speech enhancement server to initiate a non-KWS mic broadcast. The chain then starts again, and because the initiating skill server's metadata is included, the skill server knows the transcription dialog's destination. You can add multiple routes at any stage to multiple instances so that it scales.
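The queue-and-route design described above can be sketched with Python's standard queue module. The stage functions and message fields below are hypothetical placeholders, not any real WeNet interface:

```python
import queue
import threading

def stage(name, fn, q_in, q_out):
    """One pipeline stage: pull a message, process it, push it on."""
    def run():
        while True:
            msg = q_in.get()
            if msg is None:          # shutdown sentinel, pass it along
                q_out.put(None)
                break
            q_out.put(fn(msg))
    t = threading.Thread(target=run, name=name, daemon=True)
    t.start()
    return t

# Wire three stages (ASR -> skill router -> TTS) with plain queues.
q_asr, q_nlp, q_tts, q_audio = (queue.Queue() for _ in range(4))
stage("asr", lambda m: {**m, "text": "turn on the light"}, q_asr, q_nlp)
stage("nlp", lambda m: {**m, "skill": "lights"}, q_nlp, q_tts)
stage("tts", lambda m: {**m, "reply": "ok"}, q_tts, q_audio)

q_asr.put({"zone": "kitchen"})   # a mic/KWS hit with its zone metadata
q_asr.put(None)                  # shut the pipeline down
result = q_audio.get()           # the fully processed message
```

Each stage only sees its input queue and output queue, so any stage can be swapped for another vendor's module that speaks the same message shape.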
Sorry, I don't get the point. wenet focuses on ASR. It should be easy to integrate wenet into your system if ASR is required.
You have wekws & wetts as well?
Yes.
Is it convenient for you to provide the code for knowledge distillation based on wenet?
Update: for question 2, we now support the onnxruntime (ort) backend in wenetruntime:
https://github.com/wenet-e2e/wenet/pull/1708