Georgi Gerganov
For 128K, you can help by summarizing and providing references for what needs to be implemented
@luke-jr I'm not familiar with POWER9, but from a quick ChatGPT search it seems this CPU has a RISC architecture. Currently, `whisper.cpp` supports only x86 and ARM architectures. By...
For example, on my `Ryzen 9 5950X`, if I remove the `-mavx -mavx2 -mfma -mf16c` flags, I observe about 50x slower computation with the `bench` tool. Removing those flags is...
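To make the effect concrete, here is a rough sketch (not the actual `ggml` code) of the kind of inner loop those flags enable: with `-mavx2 -mfma` the compiler defines `__AVX2__` and `__FMA__`, so the intrinsic path below compiles; without them only the scalar loop remains.

```cpp
// Sketch only: a dot product with an AVX2/FMA fast path, falling back to
// scalar code when the -mavx2 -mfma flags (and thus __AVX2__/__FMA__) are absent.
#include <cstddef>
#if defined(__AVX2__) && defined(__FMA__)
#include <immintrin.h>
#endif

float dot_f32(const float * x, const float * y, size_t n) {
    float sum = 0.0f;
    size_t i = 0;
#if defined(__AVX2__) && defined(__FMA__)
    __m256 acc = _mm256_setzero_ps();
    for (; i + 8 <= n; i += 8) {
        // 8 fused multiply-adds per iteration
        acc = _mm256_fmadd_ps(_mm256_loadu_ps(x + i), _mm256_loadu_ps(y + i), acc);
    }
    // horizontal sum of the 8 accumulated lanes
    float tmp[8];
    _mm256_storeu_ps(tmp, acc);
    for (int k = 0; k < 8; ++k) sum += tmp[k];
#endif
    for (; i < n; ++i) sum += x[i] * y[i]; // scalar tail / fallback
    return sum;
}
```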
@fitzsim @luke-jr I am planning to merge a refactored version of the SIMD routines in `ggml` which I think will make things easier to maintain in the future. The PR...
The steps are like this:

```bash
# we need this for the f32 conversion
git clone https://github.com/openai/whisper

# create f32 ggml model (assumes you have ~/.cache/whisper/base.en.pt downloaded from original repo)
...
```
@fitzsim Great work! Will take a look at the PRs in the following days and merge after I make sure the other platforms work correctly.
Can you demonstrate the Event-based Windows implementation? I tried waiting on `condition_variable` instead of spin locks, but it wasn't more efficient. Maybe I missed something.
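For context, this is roughly what I tried (a minimal sketch, not the actual `ggml` threading code; all names here are illustrative): workers sleep on a `std::condition_variable` and are woken when work is submitted, instead of spinning on an atomic flag.

```cpp
// Sketch: a worker blocks on a condition_variable until new work is published,
// instead of busy-waiting. Names are made up for illustration.
#include <condition_variable>
#include <functional>
#include <mutex>

struct work_queue {
    std::mutex              mtx;
    std::condition_variable cv;
    std::function<void()>   task;         // pending task (empty when idle)
    bool                    stop = false;

    // run on a worker thread, e.g. std::thread t([&] { q.worker(); });
    void worker() {
        for (;;) {
            std::function<void()> t;
            {
                std::unique_lock<std::mutex> lock(mtx);
                cv.wait(lock, [&] { return stop || task; }); // sleep until notified
                if (stop) return;
                t    = std::move(task);
                task = nullptr;
            }
            t(); // run the task outside the lock
        }
    }

    void submit(std::function<void()> t) {
        {
            std::lock_guard<std::mutex> lock(mtx);
            task = std::move(t);
        }
        cv.notify_one(); // wake one sleeping worker
    }

    void shutdown() {
        {
            std::lock_guard<std::mutex> lock(mtx);
            stop = true;
        }
        cv.notify_all();
    }
};
```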
@fitzsim We just merged an FP16 lookup table (#368) that is used when F16C intrinsics are not available. I believe this will lead to a significant improvement on POWER9 platforms using...
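The idea behind the lookup table, sketched here from memory rather than copied from #368, is to precompute the FP32 value for each of the 65536 possible FP16 bit patterns once, so conversion becomes a single table load even without F16C support.

```cpp
// Sketch of a software FP16 -> FP32 conversion plus a full lookup table.
#include <cstdint>
#include <cstring>
#include <vector>

// Bit-level half -> float conversion (handles zero, subnormals, inf/NaN).
static float fp16_to_fp32(uint16_t h) {
    uint32_t sign = (uint32_t)(h & 0x8000) << 16;
    uint32_t exp  = (h >> 10) & 0x1F;
    uint32_t mant = h & 0x3FF;
    uint32_t bits;
    if (exp == 0) {
        if (mant == 0) {
            bits = sign;                              // +/- zero
        } else {
            int e = -1;                               // subnormal: normalize mantissa
            do { mant <<= 1; e++; } while ((mant & 0x400) == 0);
            mant &= 0x3FF;
            bits = sign | ((uint32_t)(127 - 15 - e) << 23) | (mant << 13);
        }
    } else if (exp == 0x1F) {
        bits = sign | 0x7F800000 | (mant << 13);      // inf / NaN
    } else {
        bits = sign | ((exp + (127 - 15)) << 23) | (mant << 13);
    }
    float f;
    std::memcpy(&f, &bits, sizeof(f));
    return f;
}

// Precompute all 65536 conversions once at startup.
static std::vector<float> build_fp16_table() {
    std::vector<float> table(1 << 16);
    for (uint32_t i = 0; i < (1u << 16); ++i) {
        table[i] = fp16_to_fp32((uint16_t) i);
    }
    return table;
}

// Conversion then becomes a single indexed load:
//   float f = fp16_table[half_bits];
```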
@DontEatOreo On the command line, you still have to specify the non-coreml model: `models/ggml-base.en.bin`. The code will automatically also load the `models/ggml-base.en.mlmodelc` if it is present in the same folder.