nshoman
I also had the same problem, but the `conda init bash` fix worked for me
I actually think what I posted is just a workaround; the real solution is passing the proper LLM config into the helpers. If that's agreeable I can prepare a...
@macromeer I think you need to prefix the model name with the provider (based on the endpoint) in your case. So if it's an OpenAI-compliant endpoint you'd have "openai/my-llm"....
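A minimal sketch of what that provider-prefixed model name might look like in a config dict; the endpoint URL, model name, and key fields here are placeholders, not values from the original thread:

```python
# Hypothetical config for an OpenAI-compliant local endpoint.
# The provider ("openai/") is prefixed onto the model name so the
# client library knows which API format to speak.
llm_config = {
    "model": "openai/my-llm",                 # provider prefix + model name
    "base_url": "http://localhost:8000/v1",   # placeholder local endpoint
    "api_key": "not-needed",                  # many local servers ignore this
}

provider, model_name = llm_config["model"].split("/", 1)
print(provider)    # "openai"
print(model_name)  # "my-llm"
```

The split on the first `/` is how the provider is typically recovered from the combined string, which is why the prefix matters.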
> Hi - we're working on getting this documentation improved. In the meantime - I believe you'll get better results without using an agent from a local model and instead...
Reading through the paper, the intended application of DV4 is quite different from that of prior models, so it might actually not be appropriate for this repo.
I'm not sure that DV4 belongs here in sheeprl -- I was surprised to see that it's a fairly significant departure from DV3. I might have some support this year...