Cal Mitchell

Results: 36 comments of Cal Mitchell

Hi @marketneutral and @ssanderson, I have been using the code in this thread as a starting point to include fundamental data in a backtest algorithm. Hopefully you can help me...

Thank you Scott, that worked. For anyone wondering what I did, here is a brief summary of the changes I made to load external data, using the `run_algorithm()` function, in...

I would be happy to give it a shot, will follow up early next week.

Hi, I am having this same issue. However, the code in question is in the Huggingface Transformers lib, not in llama-recipes. [I've opened an issue there](https://github.com/huggingface/transformers/issues/30388).

Ok, should I bother putting together a PR?

@RdoubleA, thanks for the fast response. One good example might be [The Stack V1](https://huggingface.co/datasets/bigcode/the-stack), as it is a helpful starting point when training coding assistants. I know a few people...

Great, I will make a PR in the next few days.

Hi, I believe this recipe has similar functionality to what you're asking for: https://github.com/pytorch/torchtune/blob/main/recipes/lora_finetune_distributed.py

You will just need to figure out how to pass in the proper config, for which there is a good amount of documentation.
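For anyone landing here later, the recipe is usually launched through torchtune's `tune` CLI. The config name below is only an assumption on my part, so check the output of `tune ls` for the recipes and configs your installed version actually ships:

```shell
# List the recipes and bundled configs available in your install.
tune ls

# Launch the distributed LoRA recipe on 2 GPUs. "llama2/7B_lora" is a
# placeholder config name -- substitute one from the `tune ls` output,
# or pass your own YAML file via --config.
tune run --nproc_per_node 2 lora_finetune_distributed --config llama2/7B_lora
```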

@RdoubleA this looks great. Your implementation of the packing logic is much cleaner than mine :-) One thing you might want to add is a test that validates how individual...