
Feature/peft compatible models

Open danbider opened this issue 2 years ago • 3 comments

Edits needed to support a combination of Composer with HF/PEFT.

Pipeline is:

  1. load an HF model, e.g., mpt-7b
  2. use hf/peft to add LoRA modules or adapter modules
  3. convert that PEFT model (already loaded into Python) into a Composer model (use my new function for this)
  4. train in Composer (required adding the `inputs_embeds` arg to `model.forward()`)
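
Step 4 above can be sketched in plain Python. The class and field names here are illustrative stand-ins, not llm-foundry's actual code: `DummyLM` plays the role of the wrapped HF model, and the point is only that the Composer-facing `forward()` must accept and pass through `inputs_embeds`, which some PEFT adapters supply instead of token ids.

```python
# Illustrative sketch of the forward-signature change in step 4.
# DummyLM stands in for the wrapped HF model; a real HF model would
# embed `input_ids` itself unless `inputs_embeds` is already provided.
class DummyLM:
    def __call__(self, input_ids=None, inputs_embeds=None, attention_mask=None):
        return {"used_embeds": inputs_embeds is not None}


class ComposerWrapper:
    """Hypothetical Composer-style wrapper around an HF model."""

    def __init__(self, hf_model):
        self.model = hf_model

    def forward(self, batch):
        # Forward either token ids or precomputed embeddings untouched,
        # so PEFT adapters that inject embeddings keep working.
        return self.model(
            input_ids=batch.get("input_ids"),
            inputs_embeds=batch.get("inputs_embeds"),
            attention_mask=batch.get("attention_mask"),
        )


wrapper = ComposerWrapper(DummyLM())
out = wrapper.forward({"inputs_embeds": [[0.1, 0.2]]})
```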

danbider avatar Jun 20 '23 18:06 danbider

Refactored the HF converter to a single function as suggested by @dakinggg. Tested it on my end and ran pre-commit successfully. I want to move forward and push the code updates to the hub.

danbider avatar Jun 23 '23 18:06 danbider

Tests are failing with

```
___________________ ERROR collecting tests/test_training.py ____________________
ImportError while importing test module '/llm-foundry/tests/test_training.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
llmfoundry/__init__.py:8: in <module>
    from llmfoundry.data import (ConcatTokensDataset,
llmfoundry/data/__init__.py:5: in <module>
    from llmfoundry.data.denoising import (MixtureOfDenoisersCollator,
llmfoundry/data/denoising.py:20: in <module>
    from llmfoundry.models import utils
llmfoundry/models/__init__.py:4: in <module>
    from llmfoundry.models.hf import (ComposerHFCausalLM, ComposerHFPrefixLM,
llmfoundry/models/hf/__init__.py:4: in <module>
    from llmfoundry.models.hf.hf_causal_lm import (ComposerHFCausalLM,
llmfoundry/models/hf/hf_causal_lm.py:10: in <module>
    import peft
E   ModuleNotFoundError: No module named 'peft'
```

samhavens avatar Jun 23 '23 21:06 samhavens

Hello @danbider , could you share your yamls for MPT peft/lora training? Thanks.

c9o avatar Jun 24 '23 15:06 c9o

Is there an example on how to fine-tune with this?

stoperro avatar Jun 29 '23 08:06 stoperro

@stoperro according to https://github.com/mosaicml/llm-foundry/pull/416, just use the ordinary PEFT code (Hugging Face has ready-to-go PEFT notebooks), or with llm-foundry add the settings shown in the attached screenshot.

chris-aeviator avatar Jul 17 '23 08:07 chris-aeviator
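
For readers without the screenshot, a hypothetical sketch of what a LoRA section in a training yaml might look like; every field name here is an illustrative assumption, not llm-foundry's actual schema, so check the linked PR for the real format:

```yaml
# Hypothetical LoRA config fragment; field names are illustrative.
lora:
  args:
    r: 8
    lora_alpha: 32
    lora_dropout: 0.05
    target_modules:
      - Wqkv   # MPT's fused attention projection
```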

Hey @chris-aeviator, I noticed that in the repository LoRA currently only supports MPT models. Can we perform LoRA fine-tuning on other models such as LLaMA?

palash04 avatar Jul 23 '23 16:07 palash04

@palash04 this is getting fixed in https://github.com/mosaicml/llm-foundry/pull/435

dakinggg avatar Jul 25 '23 02:07 dakinggg