notebooks
Notebooks using the Hugging Face libraries 🤗
# What does this PR do?

Fixes a tiny error in the SAM notebooks introduced in https://github.com/huggingface/transformers/pull/23295.

Do not merge until https://github.com/huggingface/transformers/pull/23376 is approved and merged.
Hi, I plan to test combining parts from SageMaker examples 24 and 25, but are there any existing notebook examples with FSDP + LoRA?
Hi, notebook `sagemaker/25_pytorch_fsdp_model_parallelism` states, "Here we will follow the instructions on the [Model card](https://huggingface.co/togethercomputer/GPT-NeoXT-Chat-Base-20B#strengths-of-the-model) and add the EOS token." What does this refer to? I can't seem to find anything...
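If the model card's instruction is simply to append the EOS token to the end of each training example, a minimal sketch could look like the following (the default `eos_token` string below is a placeholder; in the notebook you would pass `tokenizer.eos_token` from the actual tokenizer):

```python
def append_eos(example, eos_token="<|endoftext|>"):
    # Placeholder EOS string; in practice use `tokenizer.eos_token`
    # from the model's tokenizer so the right special token is appended.
    example["text"] = example["text"] + eos_token
    return example


# Typically applied per example, e.g. via `dataset.map(append_eos)`.
sample = {"text": "Hello, how can I help you?"}
print(append_eos(sample)["text"])  # Hello, how can I help you?<|endoftext|>
```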
question_answering_ort.ipynb

```text
╭─────────────── Traceback (most recent call last) ───────────────╮
│ in <cell>:2                                                     │
│   1 model_name = model_checkpoint.split("/")[-1]                │
│ ❱ 2 args = ORTTrainingArguments(                                │
│   3 │...
```
# What does this PR do?

This PR shows how to add AWS infrastructure parameters (subnets, security groups, KMS keys, internet proxies) via the env_var_dict/env parameters.

Fixes # (issue)

## Who can...
@Rocketknight1 @NielsRogge @NimaBoscarino I ran into an issue with this cell:

```python
val_transforms = Compose(
    [
        Resize(feature_extractor.size),
        CenterCrop(feature_extractor.size),
        ToTensor(),
        normalize,
    ]
)
```

I get the following error: `TypeError: Size should be int or...`
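One common cause of this `TypeError`: newer `transformers` versions return `feature_extractor.size` as a dict rather than a plain int, which torchvision's `Resize`/`CenterCrop` reject. A minimal sketch of a helper that normalizes both formats (the dict keys below are assumptions based on common image-processor configs, e.g. `{"height": ..., "width": ...}` or `{"shortest_edge": ...}`):

```python
def normalize_size(size):
    """Convert feature_extractor.size to a value torchvision transforms accept.

    Older transformers versions stored a plain int; newer ones store a dict
    such as {"height": 224, "width": 224} or {"shortest_edge": 224}.
    """
    if isinstance(size, int):
        return size
    if "shortest_edge" in size:
        return size["shortest_edge"]
    return (size["height"], size["width"])


# Usage sketch: Resize(normalize_size(feature_extractor.size))
print(normalize_size(224))                            # 224
print(normalize_size({"height": 224, "width": 224}))  # (224, 224)
print(normalize_size({"shortest_edge": 256}))         # 256
```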
I tried **to fine-tune mT5 for English->Myanmar translation** on the [Tatoeba-Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge) dataset. I followed [this notebook example](https://github.com/huggingface/notebooks/blob/master/examples/translation.ipynb) for en-ro translation and used "google/mt5-small" as `model_checkpoint`. I tested 1~4...
When I run image_classification.ipynb to fine-tune the model, it raises the following error after running:

```python
train_ds.set_transform(preprocess_train)
val_ds.set_transform(preprocess_val)
train_ds[0]
```

```text
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
/tmp/ipykernel_26181/576563551.py in ---->...
```
@sgugger @lewtun Why are the inputs truncated at 128 tokens, although the model can take 512 tokens?

```python
max_input_length = 128
model_inputs = tokenizer(inputs, max_length=max_input_length, truncation=True)
```

And this is...
The cell on Google colab: ```python from datasets import load_dataset # load a custom dataset from local/remote files or folders using the ImageFolder feature # option 1: local/remote files (supporting...