
Q4 2023 "State-of-the-TavernAI glasnost."


Issues prior to this point have been archived to give us an epoch(ⁿ) from which to better address the expectations/truths with the community. To put it one way: we're taking a "mulligan" on the support queue.

Once 1.6.0 hits... do feel free to resubmit/reopen the feature-request/change issues I closed today that are still relevant, pending what gets added, improved, fixed, etc. in 1.6.0 itself.

I promise to keep the issue ticket queue much more relevant and honest going forward. We're at the beginning of Q4 2023, as good a time as any to start fresh. This time we can do better: not let it get out of hand and/or fail to give it the attention we should be giving it. (Some of those tickets were still from the 1.10 release, and while technically never fixed, due to external factors they are no longer actual issues at this time.)

Everything from this point forward is more or less an open letter to the community, to give more insight into the current state of things; the past not-quite-a-year since the first release of TavernAI; why we are where we are now and where we are trying to go; and what you can do to help.

All ~~TavernAI Manifesto~~ jokes aside, here's what you (the community, the users, the creators, and the mildly curious) at the very least deserve to know. Statements and opinions are my own. I do not speak for Humi; the future of TavernAI is ultimately his choice, since he created it. This is me attempting to translate the current zeitgeist around where we find ourselves at v1.5.1 of the TavernAI project and our communication with its users.

As to 1.6.0? The only "secrets" I will disclose: rooms actually work, and there's no more "endpoint IP" localtunnel rigmarole on the Colab. Cloudflared handles both TavernAI and KoboldAI, and localtunnel isn't even a visible option. (It can be enabled, e.g. if Cloudflare goes down for an extended period, but it's not a button on the menu to fiddle with and confuse users.)

Bugs got fixed, for sure, but you (the users and AI community members) are the ones who are going to help us find the (inevitable) new and unique bugs we managed to include when we added features. And the support queue will actually get more than just "put out fires" levels of response.

Let's end 2023 with a positive can-do optimism for the near and long-term future, and I will try to set expectations for you.

For mobile-device-related/centric things, "the UI renders in a functional state" is, for the midterm, the best that we will commit to providing.

Sorry to any PhoneBros reading this, but the basic mobile phone and tablet compatibility you have right now is not going to vastly improve for some time. We'll continue to support being run on Colab, and you will be able to use us, but the experience TavernAI provides targets the desktop/laptop user with both hands on a keyboard+mouse.

You'll get the new features in the interface, but they may not be optimized for a tiny touchscreen and a software keyboard.

Mobile TavernAI will happen when the time is appropriate, so we're not abandoning you. Just be prepared for some time to go by before such things are actually on the road map.

Regarding the mulligan on support queue...

Effective going forward:

It will get an honest "wontfix" label if the true answer to a support ticket is "Not going to happen unless someone does it ~~for us~~ themselves and submits a proof of concept (or better, a pull request) from their working fork of TavernAI; we have neither the time nor the requisite knowledge to implement it, or simply have no desire/motivation to do it..."

"wishlist" if we may get around to it before 2030.

"help wanted" if we're not sure how to pull it off ourselves but are interested in the community helping us out. We'll give things a couple of months and reevaluate based on whether anything has been put forward by you at large, or whether there wasn't much interest. No ETA implied.

"enhancement" means it is something that you hopefully will see in a release within a 6-12 month time frame. We can and want to do it. If you want it faster, help us out...(¹)

"bug" will be on things that you can expect in the "7 days to 4 weeks" time frame. (Things that break functionality completely will get ASAP repo commits; things that are merely inconvenient will be rolled up with current internal code changes and pushed with whatever commit we're working on (1.x.x), or, as a worst case, in the next major version milestone (1.x) release...)

That's the direction things are (intended to be) going. We're going to have better transparency about what we will and won't do. If someone wants to do most of the work on something, we'll work with them to upstream their contributions. If we're totally not knowledgeable (mobile devices, for example), anyone who is can do it and show us how it would be done; this may not be the same person who opened the ticket. It might be whoever thinks to themselves "I can do that," does enough for a proof of concept, tags the wishlist issue in a PR, and declares interest in seeing it through to integration along with us. And we'll be upfront and frank with the "You're dreaming/You're bonkers/You're wrong" tickets: a firm and timely wontfix.

The more involved and helpful (read: contributing) the community is, the faster we get to better places. Don't be afraid to contribute. If you have a good idea but no actual coding experience, and you want it sooner rather than later, I would encourage you to find someone who does have the necessary skills and knowledge and work with them on giving us proof-of-concept code to work with.

So all hatchets on either side of the fence, prior to today, have now hopefully been buried sufficiently deep... Thank you for understanding.

Let's start fresh and strive for the next 10 months to be much better than our first 10 months turned out.

We're just a couple of folks doing what we can...

We can do more if there are more like us, working with us, sharing our interests in LLM AI chatbots, our desire to make TavernAI the best that it can possibly be, and our commitment to seeing things through, even if the path to them is treacherous or unknown.

If you think that you can do more than just drop an idea in the suggestion box or file comprehensive "when/what/where (is it doing or not doing the thing, and how do you encounter it or make it occur)" support tickets(⁰)... please do what you can. Anything you contribute to the code and the project is more than we would have without your help...

TODO of sorts. (TavernAI Talent Search?)

Currently we could totally use someone who knows TavernAI from the user side like the back of their hand, who maybe has no coding/development interest or skill but is good with written documentation. We have heard from a few, but we need someone who could literally write the "TavernAI for Dummies(tm)" comprehensive deep dive: "this is what a token is, this is what the prompt does, this is what rep pen actually is and why it has sliders for range and slope," etc...

We also need people who speak additional languages to help translate the UI, the documentation we have now, and the documents in the works... As long as you can take things in English and convey the same thing effectively to someone in another language, you're useful. Localization is a nuanced human thing, GTranslate doesn't cut it. Can you?

Even if you have no idea how this whole thing works and you just really enjoy TavernAI: no programming experience necessary. If you use TavernAI enough to know your way around it and the terminology used, you're someone who can help bring TavernAI to those who would appreciate a native-language UI, even if they're using ESL to interact with the bot. The future where multilingual models are common is coming; a quality-of-life addition now for the users who speak a language will be a necessity once models are trained in it. Having the UI in that language ahead of the rush would be useful...

~~_Can you clean windows and mop floors?_~~ Be thankful we're a digital tavern, or I'd likely be saying something similar... :)

Let's do cool things together in 2024.

Tell me (us) what you think that you could do to make the TavernAI user experience better. The entire experience. Soup to nuts.

From the very first awkward questions ("what's an AI?"/"what's a token?"/"what do you mean, model?") through those who actually "remember what they took from us" back in Q4 2022, when LLMs were just emerging as a thing a person could play around with one-on-one.

I was there. I remember. I was in on the ground floor from not long after the start of this field... Now, 11 months later, I know something about many things I didn't a year ago, and have expertise in certain things, but I still have a lot to learn. We all do.

Share what you know and help everyone learn.

Become "the" expert on something, and help share it out.

Do you have the beginnings of (or the entirety of) a guide on "Choosing the right model for the things I want to do"? Get on the Discord and make a post about whatever you have expertise in that is even tangentially relevant to the quality of a TavernAI user's experience in getting the most out of TavernAI itself.

Not just a: "Do this/Use that one/Here's a link." please...

Curate your expertise for the user. Explain why this is "the right way"; why X is good at Y, but if you want Z, X is not going to cooperate and you should choose something from the list of things intended for Z that someone else has curated at (link). Etc... Start a thread in #support, then bring it to my attention, and I'll add it to our collection of places we can authoritatively point to when asked to explain the complicated things, lay out the options the user has to choose from, or provide the walk-through for doing a thing.

We can all be useful together in 2024.

--FunkEngine

(ⁿ):

Epoch. https://youtu.be/K3m3_7RoGZk

(⁰): Props to many of you; good work on giving us what we needed. To those of you who just posted "x doesn't work" or "TavernAI stopped working..." tickets: thanks for taking the time to at least let us know about the issue, even if we had to work out the actual (if any) issue you were having for ourselves; we may or may not have already been aware of it. Just letting us know is more than most ever do. When the best you can offer is "something is wrong," you still made the effort, and we appreciate it. Not as useful as some others, but still undeniably useful.

(¹): Ideas guys are handy, but having an idea for a whole new feature that would be nice for Tavern to have vs. having an idea plus a way to implement it in the project are two different sides of the coin. If you have a working proof of concept, and the thing it does (or problem it eliminates) is useful and valid, your efforts tend to get the recognition they deserve.

Aibo and I are examples of handing over functional, if possibly ugly, code that fixed existing issues properly, or proved that the functionality could be integrated into TavernAI needing only some finesse before going out as part of a TavernAI release. We didn't just have an idea and ask if it could happen. We popped the hood and set about making our copy of TavernAI do the new thing, or do the thing it had been doing wrong the right way, and be better than it was before we tinkered...

Humi saw what we achieved ourselves, found our improvements useful, and included our changes with the next release of TavernAI.

Aibo and I were approached by Humi himself around that same time with the opportunity to become contributors and community-relations assistants for the TavernAI project, because we cared enough to contribute something useful to TavernAI. ~~Probably also because we weren't completely unhinged.~~

FunkEngine2023, Oct 06 '23 19:10