
WeNet 3.0 Roadmap

Open robin1001 opened this issue 2 years ago • 20 comments

If you are interested in WeNet 3.0, please see our roadmap https://github.com/wenet-e2e/wenet/blob/main/ROADMAP.md, and discuss here.

WeNet is a community-driven project and we love your feedback and proposals on where we should be heading. Feel free to volunteer yourself if you are interested in trying out some items (they do not have to be on the list).

robin1001 avatar Jun 02 '22 14:06 robin1001

Do we have a plan to enhance post-processing, like punctuation restoration?

Mddct avatar Jun 10 '22 01:06 Mddct

Yes, ITN and punctuation restoration are in our plan, and the solution should be simple and elegant.
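As a rough illustration of what ITN post-processing does (this is not WeNet's actual implementation, just a toy rule-based sketch mapping a few English spoken number words to digits; production systems typically use WFST-based grammars):

```python
import re

# Toy inverse text normalization (ITN): map spoken number words to digits.
WORD2DIGIT = {
    "zero": "0", "one": "1", "two": "2", "three": "3", "four": "4",
    "five": "5", "six": "6", "seven": "7", "eight": "8", "nine": "9",
}

def itn(text: str) -> str:
    # Replace standalone number words with their digit form.
    pattern = r"\b(" + "|".join(WORD2DIGIT) + r")\b"
    return re.sub(pattern, lambda m: WORD2DIGIT[m.group(0)], text)

print(itn("call me at five five five one two three four"))
# -> call me at 5 5 5 1 2 3 4
```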

robin1001 avatar Jun 10 '22 01:06 robin1001

Do you have a plan to implement text-to-speech models?

icyda17 avatar Jun 13 '22 02:06 icyda17

Do you have a plan to implement text-to-speech models?

I found this: https://github.com/wenet-e2e/wetts

Mddct avatar Jun 13 '22 03:06 Mddct

For bindings, there are 3 questions:

1. Get a model by language: could we supply a small and a big model for each language? (The small model could be trained with knowledge distillation.)

2. Shrink libtorch: it is currently a little big, but libtorch has many backends like MKL or OpenMP, and it is not easy to make it small only by passing compile arguments. It seems we need to open a repo to do this?

3. For advanced usage, should we open more APIs for developers in other languages, e.g. for ONNX models?
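Point 1 could be sketched as a simple model registry keyed by language and size. Everything below (registry names, URLs) is hypothetical, just to illustrate the idea of a small/big model pair per language:

```python
# Hypothetical model registry for a binding: each language maps to a
# small (e.g. distilled) and a big model. URLs are made up for illustration.
MODEL_REGISTRY = {
    "chs": {
        "small": "https://example.com/models/chs_small.onnx",
        "big": "https://example.com/models/chs_big.onnx",
    },
    "en": {
        "small": "https://example.com/models/en_small.onnx",
        "big": "https://example.com/models/en_big.onnx",
    },
}

def get_model_url(lang: str, size: str = "small") -> str:
    """Resolve a model download URL by language and size."""
    try:
        return MODEL_REGISTRY[lang][size]
    except KeyError:
        raise ValueError(f"no {size} model registered for language {lang!r}")

print(get_model_url("en", "big"))
# -> https://example.com/models/en_big.onnx
```

Defaulting to the small model keeps the common case lightweight, while power users can opt into the big model explicitly.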

Mddct avatar Jun 18 '22 08:06 Mddct

Hello, any plan for a VAD in the future?

lubacien avatar Jun 21 '22 09:06 lubacien

Hi, do you have plans to introduce some text only domain adaptation methods? Or do you have any suggestions on the topic?

pehonnet avatar Jul 19 '22 15:07 pehonnet

Hello, any plan for a VAD in the future?

Under testing...

pengzhendong avatar Jul 20 '22 02:07 pengzhendong

Hello, any plan for a VAD in the future?

Under testing...

New architecture? Any paper for reference?

fengshi-cherish avatar Jul 26 '22 08:07 fengshi-cherish

New architecture? Any paper for reference?

The server side uses the old architecture with a smaller acoustic unit, but we don't need forced alignment. The idea is the same as the endpointing in the wenet runtime.

From @Mddct: Plz see: An End-to-End Architecture for Keyword Spotting and Voice Activity Detection

The network outputs directly to characters in the alphabet including the blank and space characters.
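The endpointing idea can be sketched as VAD on top of CTC posteriors: a frame counts as speech when its blank probability is low. The probabilities and the 40 ms frame shift (10 ms features with 4x subsampling) below are assumptions for illustration, not WeNet's actual code:

```python
# Toy VAD from per-frame CTC blank probabilities: low blank prob == speech.
FRAME_SHIFT_MS = 40  # assumed: 10 ms features, 4x subsampling

def vad_segments(blank_probs, threshold=0.5):
    """Return (start_ms, end_ms) speech segments from per-frame blank probs."""
    segments, start = [], None
    for i, p in enumerate(blank_probs):
        speech = p < threshold
        if speech and start is None:
            start = i                       # segment opens
        elif not speech and start is not None:
            segments.append((start * FRAME_SHIFT_MS, i * FRAME_SHIFT_MS))
            start = None                    # segment closes
    if start is not None:                   # segment still open at the end
        segments.append((start * FRAME_SHIFT_MS, len(blank_probs) * FRAME_SHIFT_MS))
    return segments

probs = [0.9, 0.9, 0.2, 0.1, 0.3, 0.9, 0.9, 0.1, 0.2, 0.9]
print(vad_segments(probs))
# -> [(80, 200), (280, 360)]
```

A real system would additionally smooth the decision over a window so that single noisy frames do not split segments.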

pengzhendong avatar Jul 26 '22 08:07 pengzhendong

@robin1001 Hi, it cannot be built on macOS M1.

lucasjinreal avatar Aug 02 '22 07:08 lucasjinreal

More data augmentation, like RIR.

https://github.com/pytorch/audio/issues/2624

torchaudio will add multi-channel RIR based on pyroomacoustics.

Mddct avatar Aug 17 '22 02:08 Mddct

More data augmentation, like RIR.

pytorch/audio#2624

torchaudio will add multi-channel RIR based on pyroomacoustics.

Great!

robin1001 avatar Aug 17 '22 07:08 robin1001

What about adding a timestamp per word? The current implementation does not seem accurate with the fixed 100 ms width.
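One common approach is to derive per-word timestamps from CTC peak frames: each decoded token has a peak frame index, which is multiplied by the frame shift. The 40 ms shift below (10 ms features, 4x subsampling) is an assumption, and it is also why such timestamps are quantized to a coarse granularity:

```python
# Sketch: per-word timestamps from CTC peak frame indices.
FRAME_SHIFT_MS = 40  # assumed: 10 ms features, 4x subsampling

def word_timestamps(tokens_with_frames):
    """tokens_with_frames: list of (token, peak_frame) pairs -> (token, ms)."""
    return [(tok, frame * FRAME_SHIFT_MS) for tok, frame in tokens_with_frames]

peaks = [("hello", 3), ("world", 12)]
print(word_timestamps(peaks))
# -> [('hello', 120), ('world', 480)]
```

Because CTC peaks tend to fire near the end of a token rather than at its acoustic onset, timestamps derived this way are approximate; more accurate timings need an explicit alignment pass.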

BSen007 avatar Oct 11 '22 05:10 BSen007

Really exciting to see the Raspberry Pi on the 3.0 roadmap, but there are now so many great AArch64 platforms, often with friendlier OpenCL-capable GPUs and NPUs, that maybe the focus should not be on the Raspberry Pi alone; with current stock availability, things are not looking good.

It would be great to see a loosely coupled system where you can mix and match any vendor's modules to make up a voice system. I saw what you did with websockets and gRPC; on the output of each module, all that is required is a simple queue and a route to the next module, keeping any framework specifics between them to an absolute minimum.

Linux needs standalone, standard-conf modules that are weakly linked by queues.

There is a natural serial pipeline to voice processing:

1. Mic/KWS input
2. Mic/KWS & speech enhancement server
3. ASR
4. NLP skill router
5. TTS
6. Audio server
7. Skill server(s)

Stages 2-5 can work in a really simplistic manner, where either audio & metadata or text & metadata are queued until the process in front is clear and then sent.

That's it in a nutshell: a native Linux voice system can be that simple, as it is just a series of queues. Keeping it simple with native Linux methods rather than embedded programming means it scales to the complex.

Each Mic/KWS is allocated to a zone (room) and channel which should remain /etc/conf linux file system that likely mirrors the zone & channel of the audio system outputs As distributed mic/kws can connect to a Mic/kws & speech enhancement server and on KW hit the best stream of that zone of the KWS argmax is selected. The Mic/kws & speech enhancement server receives both audio and metadata transcribes to audio but merely passes on the metadata to a skill router. A Skill router connects to skill servers to collect simple entity data by basic NLP matching of predicate and subject to route to a skill server again purely forwarding metadata The Skill router will also accept text from skill servers that return metadata so the TTS will forward audio to the correct zone & channel also on completion the calling skill server is added to the metadata and forwarded back to the Mic/kws speech enhancement server to initiate a non kws mic broadcast. Again the chain starts again and because the initiate skill server metadata is included the skill server knows that transcription dialog destination. Thats it and you can add multiple routes at any stage to multiple instances so that it scales.

StuartIanNaylor avatar Nov 26 '22 01:11 StuartIanNaylor

Sorry, I don't get the point. wenet focuses on ASR. It should be easy to integrate wenet into your system if ASR is required.

robin1001 avatar Nov 26 '22 03:11 robin1001

Sorry, I don't get the point. wenet focuses on ASR. It should be easy to integrate wenet into your system if ASR is required.

You have wekws & wetts as well?

StuartIanNaylor avatar Nov 26 '22 05:11 StuartIanNaylor

Sorry, I don't get the point. wenet focuses on ASR. It should be easy to integrate wenet into your system if ASR is required.

You have wekws & wetts as well?

Yes.

robin1001 avatar Nov 26 '22 06:11 robin1001

For bindings, there are 3 questions:

1. Get a model by language: could we supply a small and a big model for each language? (The small model could be trained with knowledge distillation.)

2. Shrink libtorch: it is currently a little big, but libtorch has many backends like MKL or OpenMP, and it is not easy to make it small only by passing compile arguments. It seems we need to open a repo to do this?

3. For advanced usage, should we open more APIs for developers in other languages, e.g. for ONNX models?

Would it be convenient for you to provide the code for knowledge distillation based on wenet?

rookie0607 avatar Mar 10 '23 10:03 rookie0607

For bindings, there are 3 questions:

1. Get a model by language: could we supply a small and a big model for each language? (The small model could be trained with knowledge distillation.)

2. Shrink libtorch: it is currently a little big, but libtorch has many backends like MKL or OpenMP, and it is not easy to make it small only by passing compile arguments. It seems we need to open a repo to do this?

3. For advanced usage, should we open more APIs for developers in other languages, e.g. for ONNX models?

Update: for 2, we now support the ONNX Runtime (ort) backend in wenetruntime:

https://github.com/wenet-e2e/wenet/pull/1708

xingchensong avatar Mar 10 '23 10:03 xingchensong