Cristian Gutiérrez

Results: 10 comments by Cristian Gutiérrez

Google Colab redirects `localhost` to the VM backend; using `127.0.0.1` solves the issue:

```bash
!python3 -m fastchat.serve.controller --host 127.0.0.1 --port 8000
```
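For context, a minimal sketch of launching the rest of the FastChat stack the same way from a Colab cell, everything bound to `127.0.0.1`. The ports, model path, and sleep intervals are illustrative assumptions, not values from the original notebook:

```python
# Sketch: launch the FastChat controller, a model worker, and the
# OpenAI-compatible API server as background processes from a Colab cell,
# binding everything to 127.0.0.1. Ports and model path are assumptions.
import subprocess
import time

HOST = "127.0.0.1"

controller = subprocess.Popen(
    ["python3", "-m", "fastchat.serve.controller",
     "--host", HOST, "--port", "21001"]
)
time.sleep(10)  # give the controller time to come up

worker = subprocess.Popen(
    ["python3", "-m", "fastchat.serve.model_worker",
     "--model-path", "lmsys/vicuna-7b-v1.5",  # placeholder model
     "--controller-address", f"http://{HOST}:21001",
     "--worker-address", f"http://{HOST}:21002",
     "--host", HOST, "--port", "21002"]
)
time.sleep(60)  # wait for the worker to load weights and register

api_server = subprocess.Popen(
    ["python3", "-m", "fastchat.serve.openai_api_server",
     "--controller-address", f"http://{HOST}:21001",
     "--host", HOST, "--port", "8000"]
)
```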

Hi @merrymercy! I have a notebook in which I am able to run the FastChat API on the Google Colab free tier. As examples, I've included the code snippets for...
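Once a stack like the one above is serving, the API can be queried like any OpenAI-compatible endpoint. A sketch, with the port and model name assumed from the launch commands above rather than taken from the notebook:

```python
# Sketch of querying FastChat's OpenAI-compatible API; the port (8000) and
# model name are assumptions carried over from the launch sketch above.
import requests

resp = requests.post(
    "http://127.0.0.1:8000/v1/chat/completions",
    json={
        "model": "vicuna-7b-v1.5",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```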

This keeps happening even though I am passing the 'cpu' parameter: `device = torch.device('cpu')`. Is there a way to avoid this error popping up in stderr?

Move the model to CUDA; you are probably using `float16`, which is only implemented for GPUs.
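A minimal sketch of the two fixes, assuming a Hugging Face causal LM; `"gpt2"` is a placeholder model name:

```python
# Sketch of both fixes for "not implemented for 'CPU'"-style errors when a
# model was loaded in half precision. "gpt2" is a placeholder model name.
import torch
from transformers import AutoModelForCausalLM

if torch.cuda.is_available():
    # Option 1: keep float16 but run on a GPU, where half-precision kernels exist.
    model = AutoModelForCausalLM.from_pretrained(
        "gpt2", torch_dtype=torch.float16
    ).to("cuda")
else:
    # Option 2: stay on CPU but load the weights in full precision instead.
    model = AutoModelForCausalLM.from_pretrained(
        "gpt2", torch_dtype=torch.float32
    ).to("cpu")
```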

Will be huge for sure, thanks for the work!

`pip install protobuf` and you will be able to run it fine.

Agree. The dependency comes from the `transformers` library, not LLaVA. https://github.com/huggingface/transformers/issues/24533

@LucaBernecker Hi, have you found the problem? It must be something related to 'mps', since it performs inference correctly when I run it with 'cpu'.
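For anyone hitting the same thing, a generic workaround sketch (not a fix from this thread): run on MPS when available and let unimplemented ops fall back to CPU via PyTorch's `PYTORCH_ENABLE_MPS_FALLBACK` environment variable:

```python
# Generic workaround sketch: prefer the MPS backend when available, letting
# ops it doesn't implement fall back to CPU. The environment variable must
# be set before torch is imported.
import os
os.environ.setdefault("PYTORCH_ENABLE_MPS_FALLBACK", "1")

import torch

device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
x = torch.randn(2, 3, device=device)
print(x.device)  # confirms which backend was selected
```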