Samuel Galanakis
```
import lmql
from transformers import AutoTokenizer

tokenizer_string = "HuggingFaceH4/zephyr-7b-beta"
lmql_model = lmql.model(
    "local:gpt2",
    tokenizer=tokenizer_string,
    cuda=True,
)
tokenizer = AutoTokenizer.from_pretrained(tokenizer_string)
dialogue = [
    {"role": "system", "content": "You are a...
```
I see. We could track the previous role as well and pass it in with another dummy text, but parsing the start and end from the resulting string would be very tricky....
Any plans to support vision transformers from Hugging Face / timm? There are a lot of potential use cases there for deploying many classifiers. If not, what would that entail? Would be open...
@tgaddair OK, that's clear. I've joined the Discord and will look out for it!
Hi @jkhenning this is the pip freeze [requirements.txt](https://github.com/user-attachments/files/15898802/requirements.txt)
Also, I see that it does store the main data but fails on some metadata / auxiliary files.
```
RequestId:48002e24-301e-005a-022e-c20c19000000
Time:2024-06-19T09:52:41.5150691Z
Authentication scheme Bearer is not supported in this version.)
2024-06-19...
```
@jkhenning Any update on this?
I see "The backward pass is implemented only for src.shape == index.shape." in the native torch implementation, and I believe the backward pass is implemented here for all cases.
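As a minimal sketch of the constraint quoted above, this shows `torch.Tensor.scatter` with `src.shape == index.shape`, the case for which the native backward pass is documented to work. The shapes and values are illustrative, not from the original discussion:

```
import torch

# src and index share the same shape, satisfying the documented
# requirement for the native scatter backward pass.
src = torch.randn(2, 3, requires_grad=True)
index = torch.randint(0, 2, (2, 3))  # same shape as src

# Scatter src into a zero tensor along dim 0; this op is
# differentiable with respect to src.
out = torch.zeros(2, 3).scatter(0, index, src)

out.sum().backward()
# Gradients flow back to src with matching shape.
print(tuple(src.grad.shape))
```

When the shapes diverge (e.g. a broadcast-style `src`), the native implementation's autograd support is not guaranteed, which is the gap the comment refers to.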