Hao Zhang
@RaiAmanRai Long-context LLMs are a hard research problem. We'll investigate this, but we're unable to provide a significantly longer context in a short time window in this repo. Closing...
I have no idea what caused this issue. I do not think we have a dependency on the `prompt_toolkit` package? CC @merrymercy
Seems the issue is fixed. Closing. Re-open if it happens again.
You certainly can choose to add an evaluation step in the middle of training. Besides the reason @merrymercy mentioned, another reason we didn't add an eval step is that...
It seems this is because your Python version is too high. Please try downgrading. Meanwhile, we should specify the supported Python versions in our toml.
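For reference, a minimal sketch of what that constraint could look like in the project's `pyproject.toml` -- the package name and the exact version bounds below are placeholders, not the project's actual requirement:

```toml
[project]
name = "example-project"         # placeholder name, not the actual package
# Upper bound excludes the newer interpreter versions that currently break the install.
requires-python = ">=3.8,<3.11"
```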
@ethanyanjiali we have an internal developer working on this as well. Do you want me to connect you to him?
all supported.
@kungfu-eric Yep -- hope @kungfu-eric's solution helps. Feel free to re-open if there is still an issue.
Good suggestion. We need to add documentation to the `data` folder.
@Hangzhi Yeah, contributions are welcome! Thanks