brand17
I am running the standard example https://github.com/jrowberg/i2cdevlib/blob/master/ESP32_ESP-IDF/main/example.cpp. I only had to swap PIN_SDA and PIN_CLK to make it work (based on https://www.instructables.com/ESP32-Internal-Details-and-Pinout/). The code doesn't enter the following block...
I see strange definitions on lines 109-113 of the header https://github.com/jrowberg/i2cdevlib/blob/master/ESP32_ESP-IDF/components/MPU6050/MPU6050_6Axis_MotionApps20.h:
```
#define HEX "f"
#define DEBUG_PRINT(x) printf("%d", x)
#define DEBUG_PRINTF(x, y) Serial.print(x, y)
#define DEBUG_PRINTLN(x) printf("%s\n", x)
#define...
```
### Steps to reproduce

1. Create a file and save it to the parent directory (outside of the project folder):
```
class my_class():
    def my_method(self):
        self.var = 1
```
2....
As far as I know, byte pair encoding is used in the original transformer model. Why don't you use it?
I am trying:
```
from xlsx2csv import Xlsx2csv
Xlsx2csv('Book1.xlsx', sheet_name='Sheet2').convert('myfile.csv')
```
But it always saves the first worksheet. I tried versions `0.8.0` and `0.7.8`. My Python version is `3.10.5 64-bit`.
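For reference, a minimal sketch of how sheet selection is usually done with xlsx2csv, assuming the 1-based `sheetid` argument of `convert()` is available in the installed version (this argument and the fix are not part of the original post):

```python
from xlsx2csv import Xlsx2csv

# Assumption: the worksheet is selected in convert(), not in the constructor.
# sheetid is 1-based, so sheetid=2 should pick "Sheet2" if it is the second sheet.
Xlsx2csv("Book1.xlsx").convert("myfile.csv", sheetid=2)
```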
I tried to train Mistral with this example: https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook As far as I understand, it utilizes only one GPU. Is it possible to utilize both GPUs?
I am trying the example: [Google Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_#scrollTo=95_Nn-89DhsL) The only thing I did was add a `data_collator`:
```python
from transformers import DataCollatorWithPadding

data_collator = DataCollatorWithPadding(tokenizer=tokenizer)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    data_collator=data_collator,
    ...
```
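For context, a self-contained sketch of how such a collator would typically plug into `trl`'s `SFTTrainer`; the toy dataset, `max_seq_length`, and `TrainingArguments` values are illustrative assumptions rather than values from the notebook, and depending on the `trl` version some arguments may need to move into an `SFTConfig` instead:

```python
from datasets import Dataset
from transformers import DataCollatorWithPadding, TrainingArguments
from trl import SFTTrainer

# Illustrative toy dataset; the notebook uses its own data.
dataset = Dataset.from_dict({"text": ["hello world", "another example"]})

# model and tokenizer are assumed to be loaded already, as in the notebook.
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",      # column holding the raw text
    max_seq_length=2048,            # assumed value
    data_collator=data_collator,    # the only change relative to the notebook
    args=TrainingArguments(output_dir="outputs", per_device_train_batch_size=2),
)
trainer.train()
```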
I tried your example `Kaggle Mistral 7b Unsloth notebook`. The only thing I changed was `False` to `True` in this line: `if True: model.save_pretrained_merged("model", tokenizer, save_method = "merged_16bit",)`...
I am getting this error at the inference stage with llama-3-8b if I set `load_in_4bit = False`: https://colab.research.google.com/drive/1RUzN1ZNjuDi4y-HyT2W9v7ePM3tLI-Lo#scrollTo=QmUBVEnvCDJv
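For reference, a minimal sketch of the load step being described, assuming the notebook uses Unsloth's `FastLanguageModel.from_pretrained`; the model id and `max_seq_length` below are illustrative, not taken from the notebook:

```python
from unsloth import FastLanguageModel

# Assumed load call; only load_in_4bit differs from the notebook's default.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # illustrative model id
    max_seq_length=2048,                       # assumed value
    dtype=None,                                # let Unsloth pick bf16/fp16
    load_in_4bit=False,                        # the setting that triggers the error
)

# Switch to inference mode before generating.
FastLanguageModel.for_inference(model)
```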