deltawi
Any updates on this? I am trying to do a simple operation on the testnet after generating the key/secret here: [https://testnet.binance.vision/](https://testnet.binance.vision/), with this code: ``` # Initialize Binance client...
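The snippet above is cut off, so as a general sketch (not the original script): Binance signed endpoints expect each request's query string to be HMAC-SHA256-signed with the API secret, and testnet keys only work against the testnet base URL. The key/secret values and parameter names below are placeholders, not taken from the comment.

```python
import hashlib
import hmac
from urllib.parse import urlencode

# Placeholder secret -- in real code, use the secret generated at
# https://testnet.binance.vision/ (never commit it to source control).
API_SECRET = "your-testnet-secret"

def sign_params(params: dict, secret: str) -> str:
    """Return the hex HMAC-SHA256 signature Binance expects.

    The signature is computed over the urlencoded query string and sent
    as an extra `signature` parameter on the request.
    """
    query = urlencode(params)
    return hmac.new(secret.encode(), query.encode(), hashlib.sha256).hexdigest()

# Example: signing the parameters of a signed request.
params = {"timestamp": 1700000000000, "recvWindow": 5000}
signature = sign_params(params, API_SECRET)
print(len(signature))  # 64 hex characters
```

If you are using the `python-binance` package, its `Client` also accepts a `testnet=True` flag, which points the client at the testnet base URL so that testnet keys are accepted.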
@DS-mehul try the `--no-cache-dir` option of pip install.
I was able to make it work with the same script as 1.0, adapted to 1.5. Please advise whether it's correct: https://github.com/deltawi/Qwentuning/blob/main/finetune.py
I don't get it: why does it have to plot while training? Can we deactivate this?
> I am currently facing a similar issue, even when I try to evaluate the performance of the TFT model.
>
> ```python
> predictions = best_tft.predict(val_dataloader, return_y=True, trainer_kwargs=dict(accelerator="cpu"))
> MAE()(predictions.output, predictions.y)
> ```
> ...
Any news on this one? Can we do an 80/20 split? How do we do that?
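The surrounding context is truncated, but a plain 80/20 train/validation split can be sketched in a few lines of Python; the dataset and variable names here are illustrative, not from the original thread:

```python
# Hypothetical ordered dataset: any sequence of samples works the same way.
data = list(range(100))

# 80/20 split point. For time series, keep chronological order rather than
# shuffling, so the validation set lies strictly "after" the training set.
split = int(len(data) * 0.8)
train, val = data[:split], data[split:]

print(len(train), len(val))  # 80 20
```

For shuffled (non-temporal) data, `sklearn.model_selection.train_test_split(data, test_size=0.2)` does the same job with randomization built in.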
Hey team, I am facing the same issue on `Ubuntu 22.04` with an `RTX A5000` GPU. I am trying `mixtral:8x7b-instruct-v0.1-q4_0`. I ran:

```bash
ollama run mixtral:8x7b-instruct-v0.1-q4_0
```
I am facing the same issue installing this on my local Kubernetes cluster.
Can't this be done through the `Custom Endpoint Config`? I tried this:

```yaml
llm:
  provider: huggingface
  config:
    endpoint: https://api.endpoints.anyscale.com/v1
    model_kwargs:
      model: mistralai/Mixtral-8x7B-Instruct-v0.1
```

Starting the app with...
> I will check the error. May I ask you to let me know your OpenAI Gym version?

```python
>>> import gym
>>> gym.__version__
'0.21.0'
```