nebuly
[Chatllama] Error when loading dataset while using DeepSpeed
Hi, when I use DeepSpeed, I encounter this error:
```
[2023-03-09 10:46:33,647] [INFO] [logging.py:77:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False
Traceback (most recent call last):
  File "/datahdd/nhanv/Projects/NLP/chatllama/artifacts/main.py", line 50, in
```
I got this bug too. Has anyone debugged it?
@bino282 thank you for reaching out. We know that we currently have some issues with DeepSpeed, and we are already working to fix them. Could you please share your current setup with us?
@PierpaoloSorbellini The setup is as follows:

```python
from pathlib import Path
from setuptools import setup, find_packages

REQUIREMENTS = [
    "beartype",
    "deepspeed",
    "einops",
    "fairscale",
    "langchain>=0.0.103",
    "torch",
    "tqdm",
    "transformers",
    "datasets",
    "openai",
]

this_directory = Path(__file__).parent
long_description = (this_directory / "README.md").read_text(encoding="utf8")

setup(
    name="chatllama-py",
    version="0.0.2",
    packages=find_packages(),
    install_requires=REQUIREMENTS,
    long_description=long_description,
    include_package_data=True,
    long_description_content_type="text/markdown",
)
```
I was able to fix the "Training data must be a torch Dataset" problem. The `training_data` parameter of `deepspeed.initialize` must be changed to `training_data=self.train_dataset`. I changed it in actor.py and reward.py, and then DeepSpeed worked for me. Hopefully this information helps.
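For anyone hitting the same error, here is a minimal sketch of the pattern described above. The toy model, dataset, and config are illustrative assumptions, not the actual chatllama actor.py/reward.py code; the point is that the `training_data` argument of `deepspeed.initialize` must be a `torch.utils.data.Dataset` (run the script under the `deepspeed` launcher so distributed setup is handled):

```python
import deepspeed
import torch
from torch.utils.data import Dataset


class ToyDataset(Dataset):
    """Stand-in dataset; deepspeed.initialize expects a torch Dataset here."""

    def __init__(self, n: int = 8):
        self.samples = torch.randn(n, 4)

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        return self.samples[idx]


model = torch.nn.Linear(4, 2)
ds_config = {
    "train_batch_size": 4,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-3}},
}

# Passing the Dataset itself (not a DataLoader or a custom wrapper)
# is what avoids the "Training data must be a torch Dataset" assertion.
model_engine, optimizer, train_dataloader, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    training_data=ToyDataset(),
    config=ds_config,
)
```

DeepSpeed builds the distributed DataLoader internally from the Dataset (it is returned as the third element of the tuple), which is why handing the call anything other than a plain `torch.utils.data.Dataset` triggers the assertion.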
Hi @phste @Xuan-ZW @bino282
With PR #306 soon to be merged, most of the DeepSpeed problems should be addressed!