hasan9090
I have the exact same problem. Is there still no fix for this? I also tried OpenEXR 1.2.0, but it does not build under Python 3. The version from pip...
Thanks for the info. I will check these out. One question still, though: does the problem seem to be connected only with the Python environment on macOS, or purely with...
> You'd have to collect training data similar to the data used for this project (http://www.fki.inf.unibe.ch/databases/iam-on-line-handwriting-database) and retrain the model.
>
> That said, how well it would work will...
> Hi @sjvasquez, nice one.
>
> Can we do handwriting synthesis for other languages? I am interested in implementing it for the Tamil language, [Tamil](https://en.wikipedia.org/wiki/Tamil_script). If possible kindly let us know. have...
@cg123 How is the example TIES config possible then? AFAIK it merges Orca Mini, which is Llama-2-based, with WizardMath (Mistral-based). Similarly, I don't understand why, from this page, the...
I don't think the articles are all wrong. I found that there is a hard restriction in the linear merge method's code requiring the same tensor sizes, so architectures have...
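To illustrate the restriction mentioned above, here is a minimal sketch of a linear (weighted-average) merge that fails when the two checkpoints disagree on tensor shapes. This is not mergekit's actual code; the function name and data layout are hypothetical, and tensors are plain Python lists for simplicity.

```python
# Hypothetical sketch of a linear merge over two flat state dicts.
# The shape check below is the kind of hard restriction that makes a
# Llama-2-based and a Mistral-based checkpoint incompatible for this method.

def linear_merge(state_a, state_b, weight_a=0.5):
    """Elementwise weighted average of two state dicts; shapes must match."""
    merged = {}
    for name, tensor_a in state_a.items():
        tensor_b = state_b[name]
        if len(tensor_a) != len(tensor_b):
            # Mismatched shapes cannot be averaged elementwise.
            raise ValueError(f"shape mismatch for {name}")
        merged[name] = [
            weight_a * a + (1 - weight_a) * b
            for a, b in zip(tensor_a, tensor_b)
        ]
    return merged

# Same shapes: the merge succeeds.
print(linear_merge({"w": [1.0, 3.0]}, {"w": [3.0, 5.0]}))  # {'w': [2.0, 4.0]}

# Mismatched shapes: the merge is rejected.
try:
    linear_merge({"w": [1.0, 3.0]}, {"w": [1.0, 2.0, 3.0]})
except ValueError as e:
    print(e)  # shape mismatch for w
```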
@hiyouga In this thread, https://github.com/hiyouga/LLaMA-Factory/issues/2684, you mention that additional LoRA target parameters need to be used for retraining in order for the ChatML template to work. Isn't that perhaps...
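For reference, a sketch of the LoRA target / ChatML template combination being discussed. The flag names reflect my understanding of LLaMA-Factory's `train_bash.py` around the time of issue #2684 and may differ in other versions; the model, dataset, and target-module values are placeholders, not a recommendation.

```shell
# Hedged sketch, not a verified recipe -- check `--help` for your version.
python src/train_bash.py \
    --stage sft \
    --model_name_or_path mistralai/Mixtral-8x7B-v0.1 \
    --finetuning_type lora \
    --lora_target q_proj,v_proj \
    --template chatml \
    --dataset ultrachat \
    --output_dir ./mixtral-chatml-lora
```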
I am also trying the same with Mixtral on UltraChat with the ChatML template. However, the training procedure breaks at 18%, after the dataset preparation and tokenization steps, with the...