Aidan Do
@heyjustinai Sweet, that's good to hear 👍. I'm keen to also add:
- Full integration with GitHub, e.g., a `@llama-agent solve` command on GitHub issues
- Integration with 405b (currently...
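A minimal sketch of how a `@llama-agent solve` trigger on GitHub issues could be wired up, assuming a GitHub Actions workflow. The workflow name, job name, and agent invocation below are hypothetical placeholders; only the `issue_comment` event, the `contains()` expression, and `actions/checkout` are standard GitHub Actions features:

```yaml
# .github/workflows/llama-agent.yml  (hypothetical file name)
name: llama-agent

on:
  issue_comment:
    types: [created]

jobs:
  solve:
    # Only run when the comment contains the trigger phrase
    if: contains(github.event.comment.body, '@llama-agent solve')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Placeholder step: the real agent entry point would come from
      # llama-stack-apps and is not specified in this thread
      - run: echo "would invoke the agent on issue #${{ github.event.issue.number }}"
```

This only shows the trigger plumbing; how the agent itself is invoked is left open, since that depends on the llama-stack-apps discussion linked below.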
Hey @heyjustinai, thanks for the review. I will address your comments but thought I would wait on the outcome of this discussion: https://github.com/meta-llama/llama-stack-apps/pull/150#discussion_r1911932384 before going ahead.
Update: synced with @heyjustinai
> Thanks for providing more context on the features needed on llama-stack for swe bench, and have briefly discussed with the team
> Given that there...
Oh nice, thanks for that. Yeah, I'll give that a go - let me just finish what I'm working on.
Hmm, I'm getting this error:
```
(axolotl-env-4) aidan@acb95752c5d2:~/hello-vlm-finetune/vlm-finetuning-phi-lora$ CUDA_VISIBLE_DEVICES=5,6 axolotl train lora-3.5.yaml
The following values were not passed to `accelerate launch` and had defaults used instead:
`--num_processes` was set to...
```
Thanks for fixing that. Running into this now:
```
(axolotl-env-5) aidan@acb95752c5d2:~/hello-vlm-finetune/vlm-finetuning-phi-lora$ CUDA_VISIBLE_DEVICES=5,6 axolotl train lora-3.5.yaml
The following values were not passed to `accelerate launch` and had defaults used instead:
`--num_processes`...
```