Fine-tuning NeuralChat-7B using Intel(R) Extension for Transformers and the Workflow Interface
Description: this PR adds an example of fine-tuning NeuralChat-7B on a medical QA dataset (MedQuAD) using the experimental Workflow Interface and Intel(R) Extension for Transformers.
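For context, the preprocessing step turns question/answer pairs into instruction-style prompts the model can train on. The sketch below illustrates that idea only; the field names, prompt template, and output path are assumptions, not the actual logic in preprocess_dataset.py.

```python
# Illustrative sketch only: field names, template, and paths are assumptions,
# not the actual contents of preprocess_dataset.py.
import json

PROMPT_TEMPLATE = (
    "Below is a question from a patient. Write a response that "
    "appropriately answers the question.\n\n"
    "### Question:\n{question}\n\n### Response:\n{answer}"
)

def build_records(qa_pairs):
    """Turn raw (question, answer) pairs into instruction-style training records."""
    return [
        {"text": PROMPT_TEMPLATE.format(question=q, answer=a)}
        for q, a in qa_pairs
        if q and a  # drop incomplete pairs
    ]

if __name__ == "__main__":
    pairs = [("What causes glaucoma?", "Glaucoma is usually caused by increased eye pressure ...")]
    with open("medquad_processed.jsonl", "w") as f:
        for rec in build_records(pairs):
            f.write(json.dumps(rec) + "\n")
```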
Objectives:
- Demonstrate OpenFL support for fine-tuning LLMs in a federated learning workflow and provide an example that users may follow (see the sketch after this list)
- Demonstrate OpenFL support for Intel(R) Extension for Transformers by fine-tuning the Intel NeuralChat-7B model
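The following is a minimal sketch of how a fine-tuning flow can be expressed with the experimental Workflow Interface, assuming the FLSpec class and the @aggregator/@collaborator placement decorators shown in OpenFL's Workflow Interface tutorials (import paths may differ across OpenFL versions). The fine_tune and average_weights helpers and the train_loader attribute are hypothetical placeholders, not the notebook's actual code.

```python
from openfl.experimental.interface import FLSpec
from openfl.experimental.placement import aggregator, collaborator


def fine_tune(model, data):
    # Hypothetical placeholder for the notebook's Trainer-based fine-tuning step.
    return model

def average_weights(models):
    # Hypothetical placeholder for FedAvg-style averaging of model weights.
    return models[0]


class FederatedFineTuningFlow(FLSpec):
    def __init__(self, model=None, rounds=3, **kwargs):
        super().__init__(**kwargs)
        self.model = model
        self.rounds = rounds

    @aggregator
    def start(self):
        self.current_round = 0
        self.collaborators = self.runtime.collaborators
        self.next(self.local_fine_tune, foreach="collaborators")

    @collaborator
    def local_fine_tune(self):
        # Each collaborator fine-tunes on its private shard of the preprocessed data.
        self.model = fine_tune(self.model, getattr(self, "train_loader", None))
        self.next(self.join)

    @aggregator
    def join(self, inputs):
        # Aggregate the collaborators' updated models.
        self.model = average_weights([i.model for i in inputs])
        self.current_round += 1
        if self.current_round < self.rounds:
            self.next(self.local_fine_tune, foreach="collaborators")
        else:
            self.next(self.end)

    @aggregator
    def end(self):
        print("Federated fine-tuning complete")
```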
Changes:
- (new) preprocess_dataset.py: preprocesses the MedQuAD dataset so it can be ingested by the model and workflow
- (new) Workflow_Interface_NeuralChat.ipynb: tutorial notebook
- (new) requirements.txt
- (modified) stream_redirect.py: resolves AttributeError: 'RedirectStdStream' object has no attribute 'flush', raised by the Trainer (see the sketch below)
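For reference, the error occurs because the Trainer calls flush() on the redirected standard streams. The sketch below shows a representative fix of that kind, assuming a simple write-duplicating wrapper; attribute and class internals are assumptions and may differ from the actual stream_redirect.py.

```python
# Representative sketch: the Trainer flushes sys.stdout/sys.stderr, so the
# redirecting wrapper must expose a flush() method. Names are assumptions,
# not the exact stream_redirect.py implementation.
class RedirectStdStream:
    """Duplicates writes to the original stream and a capture buffer."""

    def __init__(self, stream, buffer):
        self._stream = stream
        self._buffer = buffer

    def write(self, data):
        self._stream.write(data)
        self._buffer.write(data)

    def flush(self):
        # Previously missing; without it the Trainer's logging raises AttributeError.
        self._stream.flush()
```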