Imran Akbar
Before writing any code, I wanted to clarify the requirements, as my understanding of the sliding window technique in this context isn't clear. I've looked at a few implementations in: -...
> cc @rmasiso since they were looking into it too [here](https://github.com/huggingface/diffusers/pull/5928#issuecomment-1848846131). > > I will be referring to [this](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/blob/d2bf367d41836dd9204497e32dd30cd1e81ae54d/animatediff/sampling.py#L618C60-L618C60) implementation in this comment. Other implementations are the same or similar. The overall...
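To make the question concrete, here is a minimal sketch of how I currently understand the windowing: split the frame indices into fixed-size overlapping windows, with the last window clamped so the tail frames are still covered. The function name and the clamped (non-wrapping) variant are my assumptions, not taken from the linked implementation.

```python
def sliding_windows(num_frames, window_size, stride):
    """Yield overlapping windows of frame indices.

    Assumes a clamped variant: the final window is shifted back so it
    ends exactly at the last frame, rather than wrapping around.
    """
    if num_frames <= window_size:
        # Everything fits in one window.
        yield list(range(num_frames))
        return
    for start in range(0, num_frames - window_size + 1, stride):
        yield list(range(start, start + window_size))
    # Cover any tail frames the stride skipped.
    if (num_frames - window_size) % stride != 0:
        yield list(range(num_frames - window_size, num_frames))
```

For example, `sliding_windows(10, 4, 3)` yields `[0..3]`, `[3..6]`, `[6..9]`, so adjacent windows share one frame that can be blended.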
I am using the sample Electron code as well. I cannot get Google logins to work. After I authorize Google, I get redirected to a page like this: My settings...
This is a great package. I'd like to use this for the Korean language - do you have any training code I could use with a custom dataset? Korean is...
Thanks. I am actually using the `openai` prefix, like this: ```model = dspy.LM(f"openai/{self.huggingface_model_id}", api_base="http://localhost:8000/v1", api_key=VLLM_OPENAI_SERVER_API_KEY)``` That works fine with a model like Llama 3, but with Gemma 2 it returns...
Thanks Prof. @okhat! Here's what I'm going to test out:

```
import dspy

class GemmaChatAdapter(dspy.ChatAdapter):
    def format(self, signature, demos, inputs):
        messages = super().format(signature, demos, inputs)
        if messages and messages[0]["role"] ==...
```
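The core of the adapter idea above is just a message rewrite: Gemma's chat template rejects the `system` role, so a leading system message has to be folded into the first user turn. A minimal sketch of that rewrite, independent of dspy (the function name and exact merge format are my assumptions):

```python
def merge_system_into_user(messages):
    """Fold a leading "system" message into the first user turn.

    Gemma's chat template rejects the "system" role, so adapters for it
    typically prepend the system text to the first user message instead.
    """
    if messages and messages[0]["role"] == "system":
        system, rest = messages[0], messages[1:]
        if rest and rest[0]["role"] == "user":
            # Prepend the system text to the first user message.
            rest[0] = {
                "role": "user",
                "content": system["content"] + "\n\n" + rest[0]["content"],
            }
            return rest
        # No user turn to merge into: re-emit the system text as a user turn.
        return [{"role": "user", "content": system["content"]}] + rest
    return messages
```

A `ChatAdapter` subclass would call this on the output of `super().format(...)` before the messages are sent to the server.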
Yeah, it worked fine! But then I ran into [this issue](https://github.com/stanfordnlp/dspy/issues/1132), as vLLM only supports a 4k context length with Gemma :(. So now I'm trying it out with TGI's OpenAI server.