Anindyadeep
> I still don't have access to either Llama 2 or some of the Mistral models, but when I tried with Phi 2 everything worked fine. Here is a code snippet...
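(The original snippet is truncated above; as a stand-in, here is a minimal sketch of the kind of check described, using plain `transformers` to confirm that `microsoft/phi-2` loads and generates. The prompt and generation settings are illustrative assumptions, and `device_map="auto"` assumes `accelerate` is installed.)

```python
# Hedged sketch, not the original snippet: load Phi 2 and run a single generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("What does a retrieval-augmented pipeline do?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```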
Hi @deshraj, it would be great if we could start the review process for this so that I can iterate on it. Thanks, I appreciate it. ps: the code was very readable,...
Hi @deshraj, just checking if we can move on to the review process, since this PR has been open for quite a while now. Thanks
Also, RAG has evolved a lot over time; see, for example, this [blog](https://www.anyscale.com/blog/a-comprehensive-guide-for-building-rag-based-llm-applications-part-1)
Hi, I would love to get your thoughts on this so that I can move forward based on the feedback. Thanks. cc: @arnavsinghvi11
> Current status: WIP > > The motivation for this PR started from this issue: #1018 > > ### What does this PR do? > We have an existing HuggingFace model integration on...
> Hi @Anindyadeep , thanks for this PR refactor of HF! This looks great! > > I anticipate this PR will eventually replace [`hf.py`](https://github.com/stanfordnlp/dspy/blob/main/dsp/modules/hf.py) so it would be great to...
> I wonder how much of this we can delegate to an opinionated tool like axolotl or torchtune or something, btw? Well, it depends. I mean, sure, we can go...
> My only goal is to support the most useful models & quantization settings & loss functions (DPO etc.) with as little code as possible. Do you think HF is still...
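(As a rough sketch of what "delegating quantization settings and DPO to the HF stack" could look like in practice, here is a minimal QLoRA-style DPO setup built on `transformers` + `bitsandbytes` + `peft` + TRL. The model name, dataset, LoRA hyperparameters, and exact TRL keyword names are assumptions for illustration, not code from this PR.)

```python
# Hedged sketch: 4-bit quantized base model + LoRA adapters trained with TRL's DPOTrainer.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import DPOConfig, DPOTrainer

model_id = "microsoft/phi-2"  # placeholder; any HF causal LM

# Quantization is handled entirely by bitsandbytes via transformers.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Preference data with "prompt"/"chosen"/"rejected" columns, as DPOTrainer expects.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

# LoRA adapters so the quantized base model can actually be trained (QLoRA-style).
peft_config = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="dpo-out", per_device_train_batch_size=1),
    train_dataset=train_dataset,
    processing_class=tokenizer,  # older TRL versions name this argument `tokenizer`
    peft_config=peft_config,
)
trainer.train()
```

The point of the sketch is that model loading, quantization, and the DPO loss all stay inside libraries that are already maintained upstream, which keeps the amount of code owned here small.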