Berkeley Malagon

Results: 6 comments by Berkeley Malagon

For context, here are the params during inference, in case there's anything obviously wrong with them:

```
model.params: {'batch_size': 16, 'learning_rate': 0.0002, 'max_grad_norm': None, 'sample_rate': 44100, 'n_mels': 80, 'n_fft': 1024,...
```
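The truncated dict above can be restated as a minimal sketch for inspection — only the keys visible in the snippet are included (the remaining entries are unknown), and the sanity check on the STFT settings is just one plausible thing to look at, not a known diagnosis:

```python
# Hypothetical sketch: the inference params quoted above, restated as a dict.
# Only the keys visible in the truncated comment are listed here.
params = {
    'batch_size': 16,
    'learning_rate': 0.0002,
    'max_grad_norm': None,   # None suggests gradient clipping is disabled
    'sample_rate': 44100,
    'n_mels': 80,
    'n_fft': 1024,
}

# One quick sanity check: the FFT window at this sample rate covers
# n_fft / sample_rate seconds of audio.
window_seconds = params['n_fft'] / params['sample_rate']
print(f"FFT window: {window_seconds * 1000:.1f} ms")
```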

I'm very new to the project (a few hours in), so first of all, thank you for this great work! +1 on this request — this is the main point of...

Thanks for the context. And yes, this is what I see:

```
> aider --model gpt-4-32k
Model: gpt-4-32k
Git repo: .git
Repo-map: universal-ctags using 1024 tokens
```

Ooooh, that's good to know! I'll try that ASAP and see how the behavior changes. Thanks for the tip! How can I best get involved and help the project?...