
Improve LPCNet to make it run on CPU without any GPU?

Open stayforapple opened this issue 3 years ago • 4 comments

Actually, LPCNet is designed only for CPU inference. It's only the training that takes a GPU.
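For context, decoding in the reference implementation is just an ordinary C call that runs on the CPU. The sketch below is a minimal decode loop based on the public API declared in include/lpcnet.h (lpcnet_decoder_create, lpcnet_decode, lpcnet_decoder_destroy and the LPCNET_COMPRESSED_SIZE / LPCNET_PACKET_SAMPLES constants are assumed from that header; verify the exact names against your checkout):

```c
/* Minimal CPU-only decode loop -- a sketch, assuming the public API declared
 * in include/lpcnet.h (lpcnet_decoder_create / lpcnet_decode /
 * lpcnet_decoder_destroy and the LPCNET_COMPRESSED_SIZE /
 * LPCNET_PACKET_SAMPLES constants); check the header in your checkout for
 * the exact names. No GPU is used anywhere. */
#include <stdio.h>
#include "lpcnet.h"

int main(int argc, char **argv) {
    if (argc != 3) {
        fprintf(stderr, "usage: %s <compressed.bin> <output.pcm>\n", argv[0]);
        return 1;
    }
    FILE *fin = fopen(argv[1], "rb");
    FILE *fout = fopen(argv[2], "wb");
    if (!fin || !fout) {
        fprintf(stderr, "failed to open input/output files\n");
        return 1;
    }

    LPCNetDecState *dec = lpcnet_decoder_create();
    unsigned char packet[LPCNET_COMPRESSED_SIZE];  /* one compressed packet (8 bytes) */
    short pcm[LPCNET_PACKET_SAMPLES];              /* 640 samples of 16 kHz, 16-bit PCM */

    /* Read one packet at a time and synthesize it entirely on the CPU. */
    while (fread(packet, sizeof(packet), 1, fin) == 1) {
        lpcnet_decode(dec, packet, pcm);
        fwrite(pcm, sizeof(short), LPCNET_PACKET_SAMPLES, fout);
    }

    lpcnet_decoder_destroy(dec);
    fclose(fin);
    fclose(fout);
    return 0;
}
```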

jmvalin avatar Feb 24 '22 07:02 jmvalin

Thanks.

But could you please make LPCNet run on a conventional (and cheap) MCU at 1 GHz?

stayforapple avatar Feb 24 '22 09:02 stayforapple

Or, could you please develop an open-source ASIC (Application-Specific Integrated Circuit) just for LPCNet? Cortex-A76 or Cortex-A75 is expensive and power-hungry.

stayforapple avatar Feb 24 '22 09:02 stayforapple

Full-Band LPCNet: A Real-Time Neural Vocoder for 48 kHz Audio With a CPU

GPUs are expensive and power-hungry.

Pretty bold claim there: "The results of these experiments demonstrate that full-band LPCNet is the only neural vocoder that can synthesize higher-quality 48 kHz speech waveforms in real-time with a CPU".

At least at 44.1 kHz, MelGAN (and most extended versions such as Fre-GAN and HiFi-GAN) easily runs 3-4 times faster than real time on a generic AMD CPU, and roughly 2 times faster than real time on current Android Snapdragon CPUs. Generally the only modification needed is the upsampling factor (see for example https://github.com/NVIDIA/NeMo/blob/main/examples/tts/conf/hifigan/hifigan_44100.yaml), as in the sketch below.
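To make the "only the upsampling factor changes" point concrete, here is a small sanity-check sketch; the upsample rates and hop length below are illustrative assumptions, not values copied from the NeMo config:

```c
/* Sanity check when retargeting a MelGAN/HiFi-GAN-style vocoder to 44.1 kHz:
 * the product of the generator's upsample rates must equal the STFT hop
 * length, so that (mel frames) * (hop length) == output samples.
 * The numbers below are illustrative assumptions, not the NeMo values. */
#include <stdio.h>

int main(void) {
    const int upsample_rates[] = {8, 8, 4, 2};  /* hypothetical 44.1 kHz setting */
    const int hop_length = 512;                 /* hypothetical matching hop size */
    const int sample_rate = 44100;

    int total = 1;
    for (size_t i = 0; i < sizeof(upsample_rates) / sizeof(upsample_rates[0]); i++)
        total *= upsample_rates[i];

    printf("total upsampling: %d (must equal hop_length = %d)\n", total, hop_length);
    printf("mel frames per second: %.2f\n", (double)sample_rate / hop_length);
    return total == hop_length ? 0 : 1;
}
```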

m-toman avatar May 02 '22 05:05 m-toman