PITTI

6 comments by PITTI

Hi there, I'm facing this exact issue at the moment. I thought about disallowing local models and I am glad to see it suggested here, but it does not work for...

Thanks, I came to the same conclusion: it only works if you force the download from the remote URL. I could not get to the bottom of it. I...

I don't think it is only tangentially related; I think it is the same bug everywhere: before downloading the tokenizer from HF, transformers.js checks whether it exists locally or...
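The local-first resolution order described above can be sketched language-agnostically. This is a hypothetical illustration, not the actual transformers.js code; the function and path names are assumptions. The point is that forcing the remote download bypasses the local probe entirely, which is why that workaround avoids the bug:

```python
# Sketch of the assumed resolution order: probe a local path first,
# fall back to the Hub URL. Names and paths here are hypothetical.
from pathlib import Path

def resolve_tokenizer(model_id: str, local_root: str, allow_local: bool = True):
    """Return ('local', path) if a local copy exists and local models are
    allowed; otherwise ('remote', url) via the fallback."""
    local_path = Path(local_root) / model_id / "tokenizer.json"
    if allow_local and local_path.exists():
        return ("local", str(local_path))
    # With allow_local=False the local probe is skipped entirely,
    # mirroring the "force the download from the remote URL" workaround.
    url = f"https://huggingface.co/{model_id}/resolve/main/tokenizer.json"
    return ("remote", url)

print(resolve_tokenizer("some-org/some-model", "/nonexistent", allow_local=False))
```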

Are you all saying that browser caching actually works in a production build? I obviously didn't think to try, given that it crashed all the time in dev.

For that one project, it was indeed Vite... Thanks for the solution (and lol for the rant).

I've done it here, if helpful: https://github.com/pappitti/mlx-vlm/blob/main/mlx_vlm/server.py I didn't aim for OpenAI compatibility though, just dynamic loading and unloading of models. Caching (one at a time) when the server...
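The "one model cached at a time" idea can be sketched as a single-slot cache: requesting a new model evicts the previous one before loading. This is a minimal sketch under assumed names; the loader callable stands in for whatever model-loading function the server uses, and is not the actual mlx_vlm API:

```python
# Single-slot model cache: keeps at most one loaded model in memory.
# The `loader` callable is a stand-in (hypothetical) for real model loading.
class SingleModelCache:
    def __init__(self, loader):
        self._loader = loader
        self._key = None
        self._model = None

    def get(self, model_id: str):
        if self._key != model_id:
            # Drop the reference to the old model first so it can be
            # garbage-collected before the new one is loaded.
            self._model = None
            self._model = self._loader(model_id)
            self._key = model_id
        return self._model

# Usage: a counting loader shows that repeated requests for the same
# model hit the cache, while a new model triggers a reload.
loads = []
cache = SingleModelCache(loader=lambda mid: loads.append(mid) or f"<model {mid}>")
cache.get("a")
cache.get("a")  # cached, no reload
cache.get("b")  # evicts "a", loads "b"
print(loads)
```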