Teknium
Also @nlpxucan, do you have a Twitter? Can we get in touch?
> @artyemk Thanks for your kind suggestions. We are also focusing on improving data quality now, and will update the next version of WizardLM after significant improvement, your discovery and...
> here you go https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered I am working on training a model with this dataset. It should be more cooperative.

Was this made by simply removing entries with alignment or...
After inspecting, it seems you removed around 20,000 entries. In my search for "aligned" responses, as I said, I only found about ~5,300. Could I ask about your filtering method/the way you...
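For context, here is a minimal sketch of the kind of keyword-based filtering being discussed. The actual method used for the unfiltered dataset is not confirmed in this thread; the phrase list, field name (`output`), and file name below are all illustrative assumptions:

```python
# Hypothetical sketch of keyword-based "alignment" filtering; the real
# filter used for the unfiltered dataset is not described in this thread.
import json

# Illustrative refusal/alignment phrases, not the actual filter list.
ALIGNMENT_PHRASES = [
    "as an ai language model",
    "i cannot",
    "i'm sorry, but",
    "it is not appropriate",
]

def is_aligned(entry: dict) -> bool:
    """Return True if the response contains a known refusal phrase."""
    text = entry.get("output", "").lower()
    return any(phrase in text for phrase in ALIGNMENT_PHRASES)

# Assumed local copy of the original 70k dataset as a JSON list of entries.
with open("WizardLM_alpaca_evol_instruct_70k.json") as f:
    data = json.load(f)

filtered = [e for e in data if not is_aligned(e)]
print(f"kept {len(filtered)} of {len(data)} entries")
```

Depending on how broad the phrase list is, a filter like this could easily account for the gap between the ~5,300 entries found by a narrow search and the ~20,000 actually removed.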
I find the model to be the SOTA 7B model there is, so I don't agree that it is overhyped.
Random side note, somewhat relevant, https://www.youtube.com/watch?v=SaJ8wyKMBds
Yes (and it doesn't seem to make anything worse)
Why can't HF get their stuff together lmao, wasting my money with their issues
Can you do a 13B? The full 7B is already released ;o