David Rose

6 comments by David Rose

I am having the same issues as @daranable. The chat model was working for me last week but seems to be broken after updating all my libraries, I specifically get...

I am encountering this as well. In [BaseLLM](https://github.com/hwchase17/langchain/blob/e6c8cce05040fd5b45aa25cbf03d660a95c78294/langchain/llms/base.py#L133) I see the call to `on_llm_start` made inside `generate`, while in [BaseChatModel](https://github.com/hwchase17/langchain/blob/e6c8cce05040fd5b45aa25cbf03d660a95c78294/langchain/chat_models/base.py#L81) it sits outside of `generate` and is made in `generate_prompt` instead. So when I just...
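To see where the callback actually fires, here is a minimal sketch of a logging handler. The class is hypothetical and only assumes the `BaseCallbackHandler` interface; depending on your langchain version you may need to implement the other callback methods as well:

```python
from langchain.callbacks.base import BaseCallbackHandler

class OnLLMStartLogger(BaseCallbackHandler):
    """Hypothetical handler that just reports when on_llm_start fires."""

    def on_llm_start(self, serialized, prompts, **kwargs):
        # BaseLLM fires this from inside generate(); BaseChatModel (at the
        # commit linked above) fires it from generate_prompt() instead.
        print(f"on_llm_start called with {len(prompts)} prompt(s)")
```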

> So I'm having the same issue -- memory consumption is constant in general but after about 50 steps an OOM is raised. I logged the sequence length and in...

> Should this be staying under 48GB VRAM usage when we run the command below at head?
>
> ```
> python finetune/adapter.py \
>   --data_dir data/alpaca \
>   --checkpoint_dir...
> ```

Looking through the guide here: [https://github.com/Lightning-AI/lit-gpt/blob/main/howto/finetune_adapter.md#tune-on-your-dataset](https://github.com/Lightning-AI/lit-gpt/blob/main/howto/finetune_adapter.md#tune-on-your-dataset), does each of the JSON blobs in your file have the three required keys: `instruction`, `input`, and `output`?
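If it helps, here is a minimal sketch of a check for that format. The file name is just a placeholder, and it assumes the dataset is a single JSON array of records:

```python
import json

required = {"instruction", "input", "output"}

# Placeholder path -- point this at your own dataset file.
with open("my_dataset.json") as f:
    records = json.load(f)

for i, record in enumerate(records):
    missing = required - record.keys()
    if missing:
        print(f"record {i} is missing keys: {missing}")
```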

For anyone else reading this, here is a small script I wrote to reformat my langchain LLM logs into this required format. Keep in mind I am using json-lines because...
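A rough sketch of that kind of conversion, assuming JSONL log records with hypothetical `prompt` and `response` fields (adjust the field names and paths to whatever your logs actually contain):

```python
import json

records = []
with open("llm_logs.jsonl") as f:  # placeholder path to the json-lines log file
    for line in f:
        log = json.loads(line)
        records.append({
            "instruction": log["prompt"],    # assumed log field
            "input": "",                     # no separate input in these logs
            "output": log["response"],       # assumed log field
        })

# Write out a single JSON array in the instruction/input/output format.
with open("my_dataset.json", "w") as f:
    json.dump(records, f, indent=2)
```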