llm-foundry
Feature/peft compatible models
Edits needed to support a combination of Composer with HF/peft.
The pipeline is:
- load an HF model, e.g. mpt-7b
- use HF/peft to add LoRA modules or adapter modules
- convert that peft model (already loaded in Python) into a Composer model (use my new function for this)
- train in Composer (this required adding the `inputs_embeds` arg to `model.forward()`); a rough sketch of the whole pipeline follows this list
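For concreteness, here is a minimal sketch of that pipeline using the standard HF and peft APIs. The conversion helper name and the MPT `target_modules` value are assumptions for illustration; the real conversion function is the one this PR adds.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# 1. Load an HF model, e.g. mpt-7b.
name = 'mosaicml/mpt-7b'
model = AutoModelForCausalLM.from_pretrained(name, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(name)

# 2. Use HF/peft to add LoRA modules.
#    target_modules=['Wqkv'] is an assumption about MPT's fused attention
#    projection name.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=['Wqkv'],
    task_type='CAUSAL_LM',
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# 3. Convert the peft model into a Composer model.
#    `convert_to_composer_model` is a hypothetical name standing in for the
#    converter function added by this PR.
# composer_model = convert_to_composer_model(model, tokenizer)

# 4. Train in Composer as usual.
# from composer import Trainer
# trainer = Trainer(model=composer_model, train_dataloader=..., max_duration='1ep')
# trainer.fit()
```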
Refactored the HF converter to a single function as suggested by @dakinggg. Tested it on my end and ran pre-commit successfully. I want to move forward and push the code updates to the hub.
Tests are failing with
___________________ ERROR collecting tests/test_training.py ____________________
ImportError while importing test module '/llm-foundry/tests/test_training.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
llmfoundry/__init__.py:8: in <module>
from llmfoundry.data import (ConcatTokensDataset,
llmfoundry/data/__init__.py:5: in <module>
from llmfoundry.data.denoising import (MixtureOfDenoisersCollator,
llmfoundry/data/denoising.py:20: in <module>
from llmfoundry.models import utils
llmfoundry/models/__init__.py:4: in <module>
from llmfoundry.models.hf import (ComposerHFCausalLM, ComposerHFPrefixLM,
llmfoundry/models/hf/__init__.py:4: in <module>
from llmfoundry.models.hf.hf_causal_lm import (ComposerHFCausalLM,
llmfoundry/models/hf/hf_causal_lm.py:10: in <module>
import peft
E ModuleNotFoundError: No module named 'peft'
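The CI error just means `peft` is not installed in the test environment. A common way to keep the package importable without it (an illustrative sketch, not necessarily the fix this PR adopts) is to guard the import at the top of `hf_causal_lm.py` and/or list `peft` as an optional extra:

```python
# Guard the optional dependency so importing llmfoundry does not require
# peft to be installed (illustrative sketch only).
try:
    import peft
    _peft_installed = True
except ImportError:
    peft = None
    _peft_installed = False
```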
Hello @danbider, could you share your YAMLs for MPT peft/LoRA training? Thanks.
Is there an example of how to fine-tune with this?
@stoperro according to https://github.com/mosaicml/llm-foundry/pull/416, just use the ordinary peft code (Hugging Face has ready-to-go PEFT notebooks), or with llm-foundry add the LoRA settings to your training YAML; see the sketch below.
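As a rough illustration only (the exact keys are defined in PR 416, so treat these names as placeholders), the addition to the training YAML looks something like:

```yaml
lora:
  args:
    r: 8
    lora_alpha: 16
    lora_dropout: 0.05
    target_modules: ['Wqkv']
```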
Hey @chris-aeviator, I noticed that in the repository, LoRA currently only supports MPT models. Can we perform LoRA fine-tuning on other models such as LLaMA?
@palash04 this is getting fixed in https://github.com/mosaicml/llm-foundry/pull/435