Ahmed Elnaggar

Results: 24 issues by Ahmed Elnaggar

I have a machine with 6 Titan GPUs, each with 12 GB of memory, and I changed the code to add my own dataset. However, I always get CUDA out of memory: ``` Run training......

Hello, I have a few questions regarding adding and training a new corpus: **_Adding New Corpus:_** 1. The second parameter of the function "vocab.encode_file" is "ordered"; what is the purpose of this parameter, and when...

### 🐛 Describe the bug PyTorch Compile using the OnnxRt backend works with T5 models only on CPU, not on CUDA. Complete reproducible results can be seen here: https://colab.research.google.com/drive/1xSRiz91hNTCDkiuMnsQym1bYYb1atmCj?usp=sharing ###...

module: onnx
oncall: pt2

### System Info - `transformers` version: 4.28.0.dev0 - Platform: Linux-5.10.147+-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.13.3 - PyTorch version (GPU?): 2.0.0+cu117 (False) - Tensorflow version (GPU?): 2.11.0 (False)...

Hello, I have a standard PyTorch model that doesn't exist in HuggingFace. Is there any way to use the library to merge a normal PyTorch model?

## Expected Behavior The search should work the same way as easy-search for the protein sequence FASTA file ## Current Behavior Only easy-search is working for the protein sequence FASTA file ## Steps...

**Describe the tutorial you would like to see here** All the current tutorials assume the document store will not change over time. For example, assume we have a single elastic...

new tutorial

Hello, Both evaluation and prediction are currently not working with the aligned model "Bert Style". I have fixed this issue by adding a new if statement in "transformer/utils.py": ``` elif mode...

Hi, I am using the Google T5 library, which is based on TensorFlow Mesh, to train a non-autoregressive model like BERT. The training runs without a problem, but both the prediction...

Hello, I have successfully converted the bloomz 176B model to fp16. However, the quantization doesn't work and throws an error: ``` ./quantize ./models/ggml-model-bloomz-f16.bin ./models/ggml-model-bloomz-f16-q4_0.bin 2 bloom_model_quantize: loading model from './models/ggml-model-bloomz-f16.bin'...