train.py enhanced checkpoint resuming
- Adds automatic checkpoint-resuming logic to `train.py`
- Updates the training progress bar to include the epoch and `train_loss`
- Adds an `__init__.py` to make it easier to invoke gpt4all code from other Python wrappers; there is a larger discussion to be had about whether the Python files should be moved into a subfolder to make the project a proper Python package
- Configuration files remain backwards compatible
To start a new training session, leave the checkpoint unset (`~` is YAML's null):

```yaml
checkpoint: ~
```
As before, to force resumption from a specific checkpoint:

```yaml
checkpoint: "foo/gpt4all-7b-hf-lora/step_135000"
train_args:
  resume_from_checkpoint: "step_135000"
```
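For backwards compatibility, the resolved resume target could be chosen by preferring an explicit `train_args.resume_from_checkpoint` and falling back to the top-level `checkpoint` key. The helper below is a hypothetical sketch of that precedence, operating on the parsed config as a plain dict:

```python
def resolve_resume_target(config):
    """Return the checkpoint to resume from, or None for a fresh run.

    Precedence (an assumption, mirroring the config examples above):
    1. train_args.resume_from_checkpoint, if set
    2. the top-level checkpoint key
    """
    train_args = config.get("train_args") or {}
    return train_args.get("resume_from_checkpoint") or config.get("checkpoint")


# Example: the second configuration shown above, as a parsed dict.
config = {
    "checkpoint": "foo/gpt4all-7b-hf-lora/step_135000",
    "train_args": {"resume_from_checkpoint": "step_135000"},
}
```

A config with `checkpoint: ~` and no `train_args` yields `None`, so older configuration files keep starting fresh runs unchanged.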