LoopControl
I thought something was wrong with the server I was on until I saw this issue here too. This behavior absolutely needs to be fixed, as others have mentioned above.
Searching through the archives led me to issue https://github.com/mastodon/mastodon/issues/34, which has been open since `Sep 10, 2016`. Literally one of the earliest issues filed, and it's still a problem.
> does P40 support FP16? will that be the issue? P40 supports FP16 but the performance is very slow. Transformers has a `use_cuda_fp16 = False` flag that massively speeds up...
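For anyone who wants to try it, here's a rough sketch of what I mean. I'm assuming the flag is exposed through transformers' `GPTQConfig`, and the model id is just a placeholder:

```python
# Sketch (assumes transformers' GPTQConfig exposes `use_cuda_fp16`): disable the
# FP16 CUDA kernel so Pascal cards like the P40 avoid their very slow half-precision path.
from transformers import AutoModelForCausalLM, GPTQConfig

quant_config = GPTQConfig(bits=4, use_cuda_fp16=False)

model = AutoModelForCausalLM.from_pretrained(
    "some-gptq-quantized-model",   # placeholder model id
    device_map="auto",
    quantization_config=quant_config,
)
```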
> I recently bought a P40 and I plan to optimize performance for it, but I'll first need to investigate the bottlenecks. That's great to hear, thanks for looking into...
@oobabooga I don't think this works with OPT models, unfortunately. When I tried this modification, it only worked for me with GPT-Neo models, of which there's a 2.7B model (which...
This also happens for me -- the model doesn't get unloaded when loading a new model. Even choosing the "No AI" model doesn't make a previously-loaded model unload. I have...
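Until that's fixed, the manual workaround I've been using is roughly this (a sketch; `model` stands in for whatever variable holds the previously-loaded model):

```python
import gc
import torch

# Workaround sketch: drop every reference to the old model, then force garbage
# collection and flush the CUDA allocator cache so the VRAM is actually released.
model = None
gc.collect()
if torch.cuda.is_available():
    torch.cuda.empty_cache()
```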
Is there a way to train from long-form text/articles/stories without using the question-answer format? Can I just feed in one full article's text at a time through the `generate_prompt` function...
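Something like this is what I have in mind; just a sketch, and the `text` field is my own assumption about how the articles would be stored in the data points:

```python
# Sketch: swap the instruction/response template for raw article text.
def generate_prompt(data_point):
    # The original builds an "### Instruction / ### Response" prompt here;
    # for plain long-form training, just return the article body as-is.
    return data_point["text"]
```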
> I tried to tune down all of the parameters like MICRO_BATCH_SIZE, BATCH_SIZE , EPOCH, but none seems help. I wonder what else I can do if I want to...
@olihough86 I basically just did exactly what @collant suggested above. I used the `lengths.ipynb` file to generate snippets of the training data (I just took around 1600-character snippets randomly...
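In case it helps, here's roughly the kind of snippet extraction I mean. This is a sketch, not the notebook itself, and the file names are placeholders; only the ~1600-character cutoff mirrors what I described above:

```python
import json
import random

# Pull random ~1600-character chunks out of a long text file and save them
# as a JSON list of {"text": ...} records for training.
SNIPPET_LEN = 1600
NUM_SNIPPETS = 500

with open("training_text.txt", encoding="utf-8") as f:
    text = f.read()

snippets = []
for _ in range(NUM_SNIPPETS):
    start = random.randint(0, max(0, len(text) - SNIPPET_LEN))
    snippets.append({"text": text[start:start + SNIPPET_LEN]})

with open("snippets.json", "w", encoding="utf-8") as f:
    json.dump(snippets, f, ensure_ascii=False, indent=2)
```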
Don’t rename the adapter files - put them back to their original names. Set Base model to the original model it was trained on top of. Only LoRA_weights should point to...
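In other words, something like this (a sketch using peft; the model id and path are placeholders):

```python
# Load the original base model, then point peft at the LoRA directory, which must
# still contain the un-renamed adapter_config.json / adapter_model.bin files.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("original-base-model")       # the model the LoRA was trained on
model = PeftModel.from_pretrained(base, "path/to/lora_weights")          # folder with the original adapter files
```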