
WebUI enhancements

mudler opened this issue 1 year ago · 14 comments

There are many parts of the WebUI that can be improved. I'm creating a tracker here to collect thoughts and areas that need improvement, for instance:

  • [x] model card description (support markdown, fixed size, visual improvements)
  • [x] delete models (ability to uninstall a model after installation)
    • [x] https://github.com/mudler/LocalAI/pull/2173
  • [ ] cancel installation (abort an installation which is in progress)
  • [x] chat with a model (a simple, no-frills chat interface, just for the sake of debugging and experimenting)
  • [x] Show the number of models available in the gallery
  • [x] Filter installed/not installed
  • [x] Pagination
  • [x] https://github.com/mudler/LocalAI/pull/4537
  • [x] delete all external dependencies
    • [x] https://github.com/mudler/LocalAI/pull/2348
  • [x] Add image gen interface
  • [x] Add TTS interface
  • [ ] Add transcription interface
  • [ ] Add model manually from interface mask (ask for URL to feed the gallery service) and show installation progress
  • [ ] Add embeddings
  • [ ] Add rerankings (?)
  • [x] multimodal chat
  • [x] Make possible to access the WebUI without any browser extensions if an API_KEY is specified (#2227)
  • [ ] https://github.com/mudler/LocalAI/issues/2763
  • [x] #3381
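For reference, the install/uninstall items above map onto LocalAI's gallery HTTP API. A minimal sketch in Python, assuming the documented `POST /models/apply` endpoint and the `/models/jobs/<uuid>` status endpoint (error handling omitted; the helper names are made up):

```python
import json
import urllib.request


def build_apply_request(base_url: str, model_id: str):
    """Build the (url, body) pair for a gallery install request."""
    payload = {"id": model_id}  # e.g. "model-gallery@bert-embeddings"
    return f"{base_url}/models/apply", json.dumps(payload).encode()


def install_model(base_url: str, model_id: str) -> str:
    """Kick off an install; returns the job uuid to poll at /models/jobs/<uuid>."""
    url, body = build_apply_request(base_url, model_id)
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["uuid"]
```

Polling that job uuid is also what a "show installation progress" or "cancel installation" UI would hang off of.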

mudler · Apr 27 '24 16:04

Hello! I'm a bot, an experiment of @mudler. Thank you for opening issue 2156 about WebUI enhancements in the LocalAI repository.

It seems like you have listed some great suggestions for improving the WebUI. Here's a brief summary of the requested features:

  • Model card description improvements: Support markdown, fixed size, and visual upgrades.
  • Ability to delete models (uninstallation).
  • Ability to cancel installation (abort progress).
  • A simple, no-frills chat interface for debugging and experimentation.

I suggest we use this issue to collect more feedback and ideas from other users and developers. Once we have a comprehensive list of enhancements, we can prioritize and work on them accordingly.

Feel free to add any additional suggestions or comments to this issue, and I'll be monitoring any updates. Let's make the WebUI even better together!

localai-bot · Apr 27 '24 16:04

I'm having pretty bad performance issues on the /browse endpoint. Guess it might be too many repos/JS? It kinda kills my Firefox and Chromium. :(

Besides that, I love the UI so far :+1:

//Edit: nvm, for now I just reverted to the default galleries, so it's usable now. :)

Nold360 · Apr 28 '24 19:04

How about:

  • [ ] resume downloading of partially downloaded models?
  • [ ] delete all external dependencies, so it can be run completely offline?

lessneek · May 02 '24 08:05

> How about:
>
> • [ ] resume downloading of partially downloaded models?
> • [ ] delete all external dependencies, so it can be run completely offline?

Good points, adding them to the ticket :+1:

mudler · May 03 '24 07:05

> I'm having pretty bad performance issues on the /browse endpoint. Guess it might be too many repos/JS? It kinda kills my Firefox and Chromium. :(
>
> Besides that, I love the UI so far 👍

I'm also noticing heavy lag and extreme memory usage in the chat interface. When large blocks of text are printed repeatedly, Firefox's memory use can grow past 16 GB, and the lag triggers a lot of "slow tab" and "slow script" warnings. It's probably fine for a small handful of back-and-forth messages, but asking a model to print a 100-line C++ code block can crash my laptop (assuming the model doesn't cut off the reply mid-file for no reason :sweat: )

bunder2015 · May 19 '24 14:05

A way to export and import conversations. On lower-end CPUs it can take a long time to process a prompt, and I don't want to redo entire character-exploring conversations every time I reboot my PC.

Just an idea, no idea if it's even feasible.

maxvaneck · Aug 26 '24 11:08
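Export/import could be as simple as round-tripping the message list through JSON. A rough sketch, assuming the OpenAI-style `role`/`content` message shape that LocalAI's chat endpoint uses (the function names and file format here are made up for illustration):

```python
import json


def export_conversation(messages, path):
    """Write an OpenAI-style message list to a JSON file."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump({"version": 1, "messages": messages}, f, indent=2)


def import_conversation(path):
    """Read the message list back so a session can resume without re-prompting."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)["messages"]
```

Note that re-importing only restores the visible history; on a CPU-only box the prompt still has to be re-processed unless the server also persists its KV cache.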

One feature that might be nice is the ability to regenerate a response (in case the LLM goes off the rails and strays from its prompt), or to rewind the chat to a previous user message (to regenerate the assistant response) or a previous assistant message (to give the user a chance to change their reply)...

bunder2015 · Sep 19 '24 23:09
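Since the WebUI chat sits on top of LocalAI's OpenAI-compatible `/v1/chat/completions` endpoint, both "regenerate" and "rewind" boil down to truncating the message history before re-sending it. A sketch of that history manipulation, with hypothetical helper names:

```python
def rewind_for_regenerate(messages):
    """Drop the trailing assistant reply so the same prompt can be re-sent."""
    if messages and messages[-1]["role"] == "assistant":
        return messages[:-1]
    return list(messages)


def rewind_to_user_turn(messages, turn_index):
    """Truncate history just before the nth user message (0-based),
    letting the user edit that turn and continue from there."""
    seen = 0
    for i, msg in enumerate(messages):
        if msg["role"] == "user":
            if seen == turn_index:
                return messages[:i]
            seen += 1
    return list(messages)
```

The truncated list would then be posted back to the chat endpoint as the `messages` field of a new completion request.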