
Fine-Tuning a model with SageMaker

greenpau opened this issue 1 year ago · 1 comment

❓ Question

I want to fine-tune the model with SageMaker. Is there a guide on how to do it?

I have a dataset that I want to fine-tune the model on. I was reading https://github.com/mosaicml/llm-foundry/tree/main/scripts/train/finetune_example, but it is unclear how to run that workflow on SageMaker.
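
For reference, the rough approach I was imagining is to run the llm-foundry fine-tuning script as a SageMaker training job via the HuggingFace estimator, along the lines of the sketch below. This is untested; the wrapper entry point, config path, data location variable, and version pins are my own guesses rather than anything documented in llm-foundry.

from sagemaker.huggingface import HuggingFace

hf_estimator = HuggingFace(
    entry_point="launch_finetune.py",   # hypothetical wrapper that runs
                                        # `composer scripts/train/train.py <config.yaml>`
    source_dir="llm-foundry",           # local clone of the repo, uploaded by SageMaker
    role=role,                          # IAM role with permissions to run training jobs
    instance_type="ml.g5.12xlarge",
    instance_count=1,
    transformers_version="4.26",
    pytorch_version="1.13",
    py_version="py39",
    hyperparameters={
        # hypothetical config path based on the finetune_example directory
        "config": "scripts/train/finetune_example/finetune.yaml",
    },
)

# the fine-tuning dataset staged in S3 is mounted at /opt/ml/input/data/train
hf_estimator.fit({"train": s3_train_data_location})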

Additional context

At the moment, I am using the model in the following way.

from sagemaker.huggingface.model import HuggingFaceModel

hf_model = HuggingFaceModel(
    model_data=s3_model_location,     # path to your model and script
    role=role,                        # IAM role with permissions to create an endpoint
    transformers_version="4.26",      # transformers version used
    pytorch_version="1.13",           # pytorch version used
    py_version="py39",                # python version used
    model_server_workers=1,
)

predictor = hf_model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.12xlarge",
    endpoint_name=model_endpoint_name,
    update_endpoint=True,
)

prompt = "Write a short story about a robot that has a nice day."
data = {
    "inputs": prompt,
    "temperature": 0.5,
    "top_p": 0.92,
    "top_k": 0,
    "max_length": 1024,
    "use_cache": True,
    "do_sample": True,
    "repetition_penalty" : 1.1
}

res = predictor.predict(data=data)
print(res)
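
One thing I am not sure about: I believe the SageMaker HuggingFace inference toolkit usually expects generation arguments nested under a "parameters" key rather than at the top level of the request, so the payload may need to look more like the sketch below (untested, same values as above):

data = {
    "inputs": prompt,
    "parameters": {                  # generation kwargs nested under "parameters"
        "temperature": 0.5,
        "top_p": 0.92,
        "top_k": 0,
        "max_length": 1024,
        "use_cache": True,
        "do_sample": True,
        "repetition_penalty": 1.1,
    },
}

res = predictor.predict(data=data)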

greenpau · Jun 05 '23 17:06

What's your question?

Einengutenmorgen · Jun 06 '23 06:06

Sorry, we don't currently have any guides on fine-tuning with SageMaker, but we're happy to answer specific questions about how our tools work!

growlix · Jun 07 '23 22:06