Witold Gren

Results 24 comments of Witold Gren

A similar issue exists when we have a response from `HassTimerStatus.yaml`. We can use one parameter, `{{ next_timer.area }}`, to return the area name. But when we use some variations of the name...
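For context, Home Assistant renders response templates like `{{ next_timer.area }}` with Jinja2. A tiny pure-Python stand-in (not the real Jinja2 engine; the `next_timer` context below is just illustrative) shows the substitution idea:

```python
import re

def render(template: str, context: dict) -> str:
    """Tiny stand-in for Jinja2 variable substitution: replaces
    {{ dotted.path }} with the value looked up in a nested dict."""
    def lookup(match: re.Match) -> str:
        value = context
        for part in match.group(1).strip().split("."):
            value = value[part]  # walk the nested dict, e.g. next_timer -> area
        return str(value)
    return re.sub(r"\{\{([^}]+)\}\}", lookup, template)

context = {"next_timer": {"area": "Kitchen"}}
print(render("Timer is running in the {{ next_timer.area }}.", context))
# -> Timer is running in the Kitchen.
```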

Thanks for the explanation, but from a practical side, what should I change? 😀 I'm not that proficient with model training tools yet... How did you recognise those values for this specific model?...

Hm... in your message it sounds like something very simple 😀 but can you tell me how to determine those values? I think I will be able to add...

I updated the script to the latest version. I also updated the way I run the main train script:

```
LLM_MODEL_NAME="home-llama-3.1-8b-polish"
LLM_MODEL="NousResearch/Meta-Llama-3.1-8B-Instruct"
# Example models for train:
# meta-llama/Meta-Llama-3.1-8B-Instruct
# speakleash/Bielik-7B-Instruct-v0.1...
```

Thank you for the very good explanation. Now I understand the problem more broadly. Also, many thanks for creating the `find_split.py` script, which certainly simplifies the search for the correct values....

BTW, @acon96, your knowledge is amazing, so I took the liberty of updating the documentation to be helpful for people like me who are learning this whole process. I hope...

It is very strange... because even if I set the prefix and suffix properly, the script still shows me the "no assistant response" info. In the example below I checked and...
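The prefix/suffix matching that leads to the "no assistant response" warning can be sketched roughly like this (a minimal illustration, not the actual `find_split.py` code; the Llama-3-style markers and the function name are assumptions for illustration only):

```python
from typing import Optional, Tuple

# Assumed Llama-3-style markers delimiting the assistant turn.
PREFIX = "<|start_header_id|>assistant<|end_header_id|>\n\n"
SUFFIX = "<|eot_id|>"

def find_assistant_span(formatted: str) -> Optional[Tuple[int, int]]:
    """Return (start, end) character offsets of the assistant response
    inside a fully formatted chat string, or None if the prefix/suffix
    pair does not match (the 'no assistant response' case)."""
    start = formatted.find(PREFIX)
    if start == -1:
        return None
    start += len(PREFIX)
    end = formatted.find(SUFFIX, start)
    if end == -1:
        return None
    return start, end

example = (
    "<|start_header_id|>user<|end_header_id|>\n\nTurn on the light<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\nDone!<|eot_id|>"
)
span = find_assistant_span(example)
print(example[span[0]:span[1]])  # -> Done!
```

If the configured prefix or suffix does not match the tokenizer's actual chat template output character-for-character, the lookup fails and every sample is reported as having no assistant response.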

Thank you for finding and fixing the problem. I tried to run the script with `python3 find_split.py NousResearch/Meta-Llama-3.1-8B-Instruct` and it works properly... (BTW, I have fine-tuned this model but I...

I understand. If possible, I would be very grateful if you could add such a support option, because these are the largest Polish models and it seems to me that they would...

@acon96 I made a minor correction and now it looks like the templates are working. You can find the code changes below:

```
...
assistant_prompt = tokenizer.apply_chat_template(
    conversation=[
        {"role": "user",...
```