LocalAI-frontend

webui model select list is empty

Dev-Wiki opened this issue 1 year ago • 8 comments

I installed LocalAI and the web UI with docker compose:

version: '3.6'

services:
  api:
    image: quay.io/go-skynet/local-ai:latest
    ports:
      - 8080:8080
    env_file:
      - .env
    volumes:
      - ./models:/models:cached
    command: ["/usr/bin/local-ai" ]

  frontend:
    image: quay.io/go-skynet/localai-frontend:master
    ports:
      - 3000:3000

build result:

$ docker-compose up -d --pull always
[+] Running 2/2
 ✔ frontend Pulled                                                                                                 3.2s
 ✔ api Pulled                                                                                                      3.2s
[+] Building 0.0s (0/0)
[+] Running 2/0
 ✔ Container localai-frontend-1  Running                                                                           0.0s
 ✔ Container localai-api-1       Running                                                                           0.0s


$ curl http://localhost:8080/v1/models
{"object":"list","data":[{"id":"ggml-gpt4all-j","object":"model"}]}

But in the webui, the model select list is empty (screenshot attached).

Dev-Wiki · Jun 21 '23

I think I found the issue (or maybe it's a new one :smile:). Screenshot attached; the error shown is:

Cross-Origin Resource Sharing error: MissingAllowOriginHeader
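
That error means the browser blocked the frontend's request because the API response carried no Access-Control-Allow-Origin header, even though the API itself answers fine over plain curl. A quick way to check from the command line (a sketch, not part of the original report; the Origin value is just an example matching the frontend's port):

```bash
# Simulate a cross-origin request from the frontend and dump the response headers.
curl -s -D - -o /dev/null \
  -H "Origin: http://localhost:3000" \
  http://localhost:8080/v1/models
# If no Access-Control-Allow-Origin header appears in the output, browsers will
# block the fetch and report a CORS error like the one above.
```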

weselben · Jul 22 '23

Same for me. The model list is empty even though I have models.

blablazzz · Aug 10 '23

Hi! I've set the parameters in the .env file and added: REACT_APP_API_HOST=172.16.1.94:8081

This points to the backend. Sadly, I still have the same issue, and I don't understand why it does not work.
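
For what it's worth, a later comment edits `const host = process.env.REACT_APP_API_HOST;` in ChatGptInterface.js, which suggests the value is used verbatim as the fetch base URL, so it probably needs a scheme. A hedged sketch of the .env entry, reusing the IP and port from the comment above:

```bash
# Hypothetical .env entry for the frontend; the http:// scheme is included
# because the value appears to be used directly as the fetch base URL.
REACT_APP_API_HOST=http://172.16.1.94:8081
```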

mate1213 · Dec 30 '23

Same problem here. Inspecting the container files in Docker Desktop, I can see that the model files are loaded in the models folder in the api container. I inspected the .js file in the frontend container but couldn't make sense of it. I did find a reference to a /v1/models path and changed it to just /models, but that didn't change anything. I haven't seen any responses to any issues in this repo, so I'm not holding out for a fix anytime soon. :(

partisansb · Jan 20 '24

When I run `npm start`, the browser opens and the interface pops up, but still no models. I edited the line `const host = process.env.REACT_APP_API_HOST;` to read `const host = "http://127.0.0.1:8080";` in ChatGptInterface.js.

In the console I have this message:

`Compiled successfully!

You can now view chat-gpt-interface in the browser.

  Local:            http://localhost:3000
  On Your Network:  http://192.168.0.13:3000

Note that the development build is not optimized.
To create a production build, use npm run build.

webpack compiled successfully`

However, `./local-ai` is running on http://127.0.0.1:8080 (bound on host 0.0.0.0 and port 8080), so I don't think they can see each other. How do I resolve this?
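
The webpack dev-server output above suggests this is a standard Create React App project, in which case REACT_APP_* variables can be supplied when starting the dev server instead of editing ChatGptInterface.js. A minimal sketch, assuming the dev server and local-ai run on the same machine:

```bash
# Point the dev server at the locally running API without touching the source;
# Create React App reads REACT_APP_* variables when the dev server starts.
REACT_APP_API_HOST=http://127.0.0.1:8080 npm start
```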

I had the same issue when running docker-compose: no models in the dropdown list, even though inspecting the api container's files I could see the models in the models folder. Any ideas?

partisansb · Jan 21 '24

I believe the issue can be resolved by having the API include the missing Access-Control-Allow-Origin header in its response to the model request (that is what the "MissingAllowOriginHeader" error refers to). However, I'm unsure how to implement this in the project. Perhaps someone else could test and confirm that it works.

Edit: What about this #7?
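
If the fix belongs on the API side, LocalAI's example .env lists CORS settings that should make the server emit that header. A sketch of the relevant lines, assuming the local-ai image in use honours them:

```bash
# Hypothetical additions to the api service's .env file: enable CORS so
# responses include Access-Control-Allow-Origin for the frontend's origin.
CORS=true
CORS_ALLOW_ORIGINS=*
```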

weselben · Jan 22 '24

> I believe the issue can be resolved by having the API include the missing Access-Control-Allow-Origin header in its response to the model request (that is what the "MissingAllowOriginHeader" error refers to). However, I'm unsure how to implement this in the project. Perhaps someone else could test and confirm that it works.
>
> Edit: What about this #7?

Thanks for bringing that to my notice, I'll test and merge it if it works without issues!

Dhruvgera · Jan 22 '24

> I believe the issue can be resolved by having the API include the missing Access-Control-Allow-Origin header in its response to the model request (that is what the "MissingAllowOriginHeader" error refers to). However, I'm unsure how to implement this in the project. Perhaps someone else could test and confirm that it works. Edit: What about this #7?
>
> Thanks for bringing that to my notice, I'll test and merge it if it works without issues!

Did it work? I have the same problem as you, but their solution didn't work for me...

LeGrandMonoss · Apr 24 '24