
Results 26 comments of steveepreston

Agree with @snakeying! > The installation process can be a nightmare. Even with detailed instructions, after becoming familiar with this package and not finding an installer in the releases section, I...

Thank you @zhouxihong1! I downloaded and started your version successfully via `start_open_webui serve`. I wasn't sure where to add models, so I went to the Models section at `localhost:8080/admin/settings`...

Thank you man! Worked like a charm! I suggest enabling it by default in the next releases:

```env
ENABLE_OLLAMA_API=true
OLLAMA_BASE_URL='http://localhost:11434'
```

@zhouxihong1 Since 0.3.12 has been released in the org repo and development is continuing, could you please create an automation script/action for building the exe version and PR it here? That way everyone can make sure it's...

I fixed it by adding `client_max_body_size 100M;` to my nginx config:

```nginx
server {
    listen 80;
    server_name your_domain.com;

    location ^~ /mlflow/api/ {
        client_max_body_size 100M;  # Increase limit here
        proxy_pass http://127.0.0.1:5000/api/;...
```

Thank you for your attention @Gopi-Uppari. Yes, `gemma` executed successfully in my test too (although `gemma-2-9b-it` threw an OOM on TPU). The problem is with the `llama` model. OK, I will try to create...

The problem is not resolved, and I've moved to PyTorch. Maybe I'll come back to follow up and solve this in the future. There is still no example for `Llama3CausalLM` + `XLA` on the web.

@SamanehSaadat Thanks for the explanation; it sounds fine. Just to confirm: is `get_layout_map()` currently available only for `LlamaBackbone` and `GemmaBackbone`? What should we do for other models, such as `distil_bert`...
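For readers unfamiliar with what `get_layout_map()` returns: conceptually, a layout map pairs regex patterns over model variable paths with sharding specs. The sketch below is plain Python, not the actual Keras distribution API; the patterns, paths, and axis names are made-up illustrations of the idea.

```python
import re

# Illustrative rules: regex over a variable's path -> per-dimension sharding
# spec ("model" = shard along the model-parallel mesh axis, None = replicate).
# These patterns and axis labels are assumptions, not real Keras defaults.
LAYOUT_RULES = {
    r".*token_embedding.*": ("model", None),
    r".*attention.*(query|key|value).*kernel": (None, "model", None),
    r".*ffw_linear.*kernel": (None, "model"),
}

def resolve_layout(variable_path):
    """Return the first sharding spec whose pattern fully matches the path,
    or None for variables that stay fully replicated."""
    for pattern, spec in LAYOUT_RULES.items():
        if re.fullmatch(pattern, variable_path):
            return spec
    return None

# Example lookups against hypothetical variable paths:
print(resolve_layout("decoder_block_0/attention/query/kernel"))
print(resolve_layout("layer_norm/scale"))
```

Under this view, a model without a prebuilt `get_layout_map()` could in principle be covered by writing such pattern rules for its own variable paths by hand.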