Maciej Flak
I avoid writing anything more complex than a script without a strong static type system. I feel most productive in Rust, but I'd be happy to assist/review someone who...
Hello y'all! I've been looking into #26 and wanted to test the search using the transcription. I wanted to post my findings here. I've previously worked with DeepSpeech from...
Over the weekend I've added VAD splitting and instructions on how to run [the inference](https://github.com/FlakM/jupiter-search#running-inference-of-some-audio). I've also created a couple of issues in the repository if someone wants to help. The...
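To give an idea of what "VAD splitting" means here, below is a minimal, self-contained Rust sketch that cuts audio into voiced chunks before transcription. It uses a naive energy threshold as a stand-in for a real voice activity detector; the function name, frame size and threshold are illustrative assumptions, not the project's actual code.

```rust
/// Split 16 kHz mono PCM samples into voiced chunks separated by silence.
/// This is a toy energy-based detector, not the VAD used in jupiter-search.
fn split_on_silence(samples: &[i16], frame_len: usize, threshold: f64) -> Vec<Vec<i16>> {
    let mut chunks = Vec::new();
    let mut current = Vec::new();

    for frame in samples.chunks(frame_len) {
        // Root-mean-square energy of the frame.
        let rms = (frame.iter().map(|&s| (s as f64).powi(2)).sum::<f64>()
            / frame.len() as f64)
            .sqrt();

        if rms >= threshold {
            // Voiced frame: keep accumulating the current chunk.
            current.extend_from_slice(frame);
        } else if !current.is_empty() {
            // Silence after speech: close the chunk and start a new one.
            chunks.push(std::mem::take(&mut current));
        }
    }
    if !current.is_empty() {
        chunks.push(current);
    }
    chunks
}

fn main() {
    // Fake input: 1 s of silence, 1 s of "speech", 1 s of silence at 16 kHz.
    let mut samples = vec![0i16; 16_000];
    samples.extend(std::iter::repeat(5_000).take(16_000));
    samples.extend(std::iter::repeat(0).take(16_000));

    // 30 ms frames (480 samples at 16 kHz), arbitrary energy threshold.
    let chunks = split_on_silence(&samples, 480, 500.0);
    println!("got {} voiced chunk(s)", chunks.len());
}
```

The point of splitting first is that each chunk can then be transcribed independently, which keeps memory per inference bounded and makes it easy to parallelise the work.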
@gerbrent @pagdot precisely! I've got to admit that I was left speechless by the transcription quality (pun intended). I have even more good news! I've managed to compile the code...
@pagdot this sounds like a great plan. Also, no pressure on the timeline; it is a perfectly separate feature :+1: Working without motivation and free time sucks... Take good care...
That's good to hear; you never know on the Internet.
@pagdot you can find the output that `jupiter-search` is currently able to generate (tiny model on a laptop) here: [BlueIsTheNewRedCoderRadio331.log](https://github.com/JupiterBroadcasting/jupiterbroadcasting.com/files/10016352/BlueIsTheNewRedCoderRadio331.log). Please review it and let me know if you think I should add something. Otherwise, I'll...
@pagdot this looks very cool. Did I understand correctly that the model is not able to classify specific speakers using some kind of transfer learning and downstream task tuning? So we...
@pagdot same here... Sharing the same data between different inferences must involve synchronized access to some shared state, so it will definitely limit the parallelism. But maybe it...
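To make that concern concrete, here is a toy Rust sketch (not project code; `SharedCache` and `run_inference` are invented for illustration) showing how a single locked structure makes otherwise independent workers contend with each other:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Stand-in for whatever state would be shared between inferences.
type SharedCache = Vec<String>;

fn run_inference(id: usize) -> String {
    // Pretend this is the expensive transcription work.
    format!("transcript from worker {id}")
}

fn main() {
    let cache: Arc<Mutex<SharedCache>> = Arc::new(Mutex::new(Vec::new()));

    let handles: Vec<_> = (0..4)
        .map(|id| {
            let cache = Arc::clone(&cache);
            thread::spawn(move || {
                let result = run_inference(id);
                // Every worker contends for the same lock here; the more
                // often this happens, the less parallelism we actually get.
                cache.lock().unwrap().push(result);
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }
    println!("{} results collected", cache.lock().unwrap().len());
}
```

In this toy case the lock is only taken once per worker, so it barely matters; the problem shows up when the shared data has to be touched throughout each inference.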
@pagdot I'm building a new PC next week and will probably also come back to this topic to test the new hardware. Could you please tell me what the use case...