
Fine-tuning neuralchat-7b using Intel(R) Extension for Transformers and the Workflow Interface

kta-intel opened this issue 1 year ago • 0 comments

Description: This PR adds an example of fine-tuning neuralchat-7b on a medical QA dataset using the experimental Workflow Interface and Intel(R) Extension for Transformers.

Objectives:

  1. Demonstrate OpenFL support for fine-tuning LLMs in a federated learning workflow and provide an example users can follow (a minimal sketch follows this list)
  2. Demonstrate OpenFL support for Intel(R) Extension for Transformers by fine-tuning the Intel neuralchat-7b model
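
For context, below is a minimal sketch of what such a federated fine-tuning flow can look like with the experimental Workflow Interface. It is illustrative only, not the notebook's actual code: the import paths follow the other Workflow Interface tutorials, and the model name, the `local_dataset` private attribute, the Hugging Face Trainer usage, and the plain weight averaging at the aggregator are all assumptions.

```python
# Illustrative sketch only -- not the code from Workflow_Interface_NeuralChat.ipynb.
# Assumptions: import paths as in the other Workflow Interface tutorials, Hugging
# Face Trainer for local fine-tuning, a `local_dataset` private attribute on each
# collaborator, and plain weight averaging at the aggregator.
from openfl.experimental.interface import FLSpec
from openfl.experimental.placement import aggregator, collaborator
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments


class NeuralChatFinetuneFlow(FLSpec):
    def __init__(self, model_name="Intel/neural-chat-7b-v3-1", rounds=3, **kwargs):
        super().__init__(**kwargs)
        self.model_name = model_name
        self.rounds = rounds

    @aggregator
    def start(self):
        # Load the base model once and fan out to every collaborator.
        self.model = AutoModelForCausalLM.from_pretrained(self.model_name)
        self.collaborators = self.runtime.collaborators
        self.current_round = 0
        self.next(self.train, foreach="collaborators")

    @collaborator
    def train(self):
        # Fine-tune on this collaborator's local shard of the preprocessed MedQuAD data.
        tokenizer = AutoTokenizer.from_pretrained(self.model_name)
        trainer = Trainer(
            model=self.model,
            args=TrainingArguments(output_dir="finetune_out", num_train_epochs=1),
            train_dataset=self.local_dataset,  # assumed private attribute holding the shard
            tokenizer=tokenizer,
        )
        trainer.train()
        self.next(self.join)

    @aggregator
    def join(self, inputs):
        # Average the collaborators' returned weights (plain FedAvg, for illustration).
        state_dicts = [collab.model.state_dict() for collab in inputs]
        averaged = {
            key: sum(sd[key] for sd in state_dicts) / len(state_dicts)
            for key in state_dicts[0]
        }
        self.model.load_state_dict(averaged)
        self.current_round += 1
        if self.current_round < self.rounds:
            self.next(self.train, foreach="collaborators")
        else:
            self.next(self.end)

    @aggregator
    def end(self):
        print("Federated fine-tuning complete.")
```

In the notebook, a flow like this would typically be executed with a LocalRuntime whose Collaborator objects carry their preprocessed data shards as private attributes.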

Changes:

  (+) preprocess_dataset.py: preprocesses the MedQuAD dataset into a form ingestible by the model and workflow (an illustrative sketch follows this list)
  (+) Workflow_Interface_NeuralChat.ipynb: tutorial notebook
  (+) requirements.txt
  (mod) stream_redirect.py: resolves AttributeError: 'RedirectStdStream' object has no attribute 'flush', caused by the Trainer (sketch of the fix after this list)
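
For reference, here is a rough sketch of the kind of preprocessing preprocess_dataset.py performs; the field names and the prompt template are assumptions, not the script's actual contents.

```python
# Rough sketch (assumed, not the PR's actual script): turn MedQuAD
# question/answer pairs into single-text instruction records for fine-tuning.
import json

# The prompt template is an assumption about the instruction format used for fine-tuning.
PROMPT = (
    "### System:\nYou are a helpful medical assistant.\n"
    "### User:\n{question}\n"
    "### Assistant:\n{answer}"
)


def to_instruction_records(qa_pairs):
    """Map (question, answer) pairs to records with a single 'text' field."""
    return [{"text": PROMPT.format(question=q, answer=a)} for q, a in qa_pairs]


if __name__ == "__main__":
    pairs = [
        ("What is glaucoma?",
         "Glaucoma is a group of eye diseases that damage the optic nerve."),
    ]
    with open("medquad_processed.json", "w") as f:
        json.dump(to_instruction_records(pairs), f, indent=2)
```

And a sketch of the kind of fix the stream_redirect.py change implies: the Trainer calls flush() on whatever stream stdout/stderr has been redirected to, so the redirecting wrapper needs to expose one. The class below is illustrative, not OpenFL's actual RedirectStdStream implementation.

```python
# Illustrative only -- not OpenFL's actual RedirectStdStream implementation.
class RedirectStdStream:
    """Minimal stand-in for a stream wrapper that redirects writes elsewhere."""

    def __init__(self, target_stream):
        self._target = target_stream

    def write(self, message):
        self._target.write(message)

    def flush(self):
        # Adding a flush() that delegates to the wrapped stream prevents the
        # AttributeError raised when the Trainer flushes the redirected stream.
        self._target.flush()
```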

kta-intel Jan 08 '24 17:01