Behrooz Azarkhalili

Results: 10 comments by Behrooz Azarkhalili

> There is no need to clean up the GPU cache in d3rlpy. Currently, d3rlpy does not support multi-GPU training. But you can switch which GPU to use by specifying the...
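For context on the quoted (and truncated) reply, a generic way to pin any PyTorch-based library, including d3rlpy, to a single GPU is the `CUDA_VISIBLE_DEVICES` environment variable. The sketch below is only an illustration of that generic route, not the d3rlpy-specific argument the maintainer refers to; the device index `"1"` is an arbitrary example.

```python
import os

# Minimal sketch: restrict this process to one GPU before any CUDA context is created.
# This is the generic CUDA/PyTorch mechanism, not a d3rlpy-specific API; the index "1"
# is just an example for a machine with at least two GPUs.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import d3rlpy  # imported after setting the variable so the restriction takes effect
```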

> What's the shape of the observation here? I believe this is not an issue with d3rlpy. I dramatically reduced the `batch_size` to 8; however, the issue still exists. Its shape...

> Hi @behroozazarkhalili, > > I will work on this, aiming to get back in 1-2 weeks. > > Thanks. Thank you for your time and consideration.

@dxli94 Hi. Any update on this?

@mgomes @amerine @dxli94 Hi everyone. Could you please answer this important question?

> Hi, Are you referring to open-source LLMs or Hugging Face transformer models such as BERT, etc.? Yes, I agree with @Wauplin that supporting backends other than OpenAI would be...

> Drafted a PR #13 for it. Needs more refinement (docs, examples, ...) but feel free to check the [notebook example](https://github.com/promptslab/Promptify/blob/cff210b473f071d9072408ce1ce82b600e1d1736/examples/classification_hf_hub.ipynb) and give some feedback. Hi @Wauplin, great work, and you're...

> > Where should you define the model to use? In the HubModel() module? > > At the moment I copied the OpenAI model that has a `model_name` argument in the...

> > I think the second one (the approach consistent with the HF model) is much better. > > @behroozazarkhalili (cc @monk1337) I have updated my PR (#13) accordingly and...
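For readers following the thread, here is a rough usage sketch of how the Hugging Face Hub backend drafted in PR #13 might look once it exposes a `model_name` argument consistent with the OpenAI model. The import path, template name, and `fit` arguments below are assumptions based on the discussion and the linked notebook, not the final merged API.

```python
# Hypothetical sketch based on the PR #13 discussion; names and signatures may differ
# from the final API. Assumes Promptify exposes a Hub-backed model next to the OpenAI one.
from promptify import Prompter
from promptify import HubModel  # assumed import location for the backend drafted in PR #13

model = HubModel(model_name="google/flan-t5-xl")  # mirrors the OpenAI backend's `model_name`
prompter = Prompter(model)

# Classification prompt, analogous to the linked classification_hf_hub.ipynb example.
result = prompter.fit(
    "binary_classification.jinja",   # assumed template name
    text_input="I loved this movie!",
    labels=["positive", "negative"],
)
print(result)
```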

@Wauplin @monk1337 Thank you so much. Could you please add the HF example to the package's documentation?