
added text-generation-webui inference support

Open danikhan632 opened this issue 1 year ago • 3 comments

Adds support for running "Local LLM" via text-generation-webui, which has built-in multi-GPU support, consistently supports new models shortly after release, provides mechanisms for using larger models (such as memory and disk offloading), and could let guidance run on systems with fewer resources in an easy-to-set-up way.

Note: this is still a WIP but is under active development. It is a separate class and will not break any other guidance functionality.

Corresponds with this PR: https://github.com/oobabooga/text-generation-webui/pull/2637

danikhan632 avatar Jun 12 '23 02:06 danikhan632

@microsoft-github-policy-service agree

danikhan632 avatar Jun 12 '23 02:06 danikhan632

https://github.com/oobabooga/text-generation-webui/pull/2709

updated

danikhan632 avatar Jun 16 '23 00:06 danikhan632

Thanks! Two questions:

  1. Could you add a basic unit test file? (It is fine if it only runs locally when a webui server is available.)
  2. Do you have an insight into whether acceleration and token healing are possible with the webui API? (I have not looked deeply yet)

slundberg avatar Jun 20 '23 23:06 slundberg

I can add unit testing ASAP. Token healing does work, but acceleration does not; I can implement that, along with the testing, by tomorrow.

danikhan632 avatar Jun 21 '23 00:06 danikhan632

Updated unit tests and improved functionality have been added.

danikhan632 avatar Jun 22 '23 23:06 danikhan632

@slundberg is this going to be merged?

jediknight813 avatar Jun 28 '23 20:06 jediknight813

@slundberg is this going to be merged?

I'd like feedback on any concerns with the pull request, but I haven't heard much. Maybe I should open a new one?

danikhan632 avatar Jun 29 '23 00:06 danikhan632

Thank you for submitting this. I was thinking of writing it myself but wanted to check whether someone had already sent a PR before I did.

I have one question and one piece of feedback about this PR.


Question: Is TGWUI's GET /api/v1/model response different than expected?

Steps to recreate:

  1. Launch text-generation-webui like this: python server.py --api --model=name/model --loader=ExLlama
  2. Use this PR like this: guidance.llm = guidance.llms.TGWUI('http://localhost:5000', chat_mode=False)

This results in the following error:

  File "/Users/chrisle/code/kbot/.venv/lib/python3.9/site-packages/guidance/llms/_tgwui.py", line 34, in getModelInfo
    resp=response.json()["results"]
KeyError: 'results'

Not sure if it was me or if I'm using a different version of the WebUI's API, but I wasn't able to use this PR. (I checked that I was using the most recent pull from the text-generation-webui main branch.)

Tracking this down, I'm finding this in this PR: source code

def getModelInfo(self):        
    response = requests.get(self.base_url+'/api/v1/model')
    # Expect response to be { "results": <any> }
    resp=response.json()["results"] # Error: KeyError: 'results'
    return resp

# ....

# Called in `__init__`
self.model_info= self.getModelInfo()
# Expect response to be { "results": { "model_name": "name/model" }}
self.model_name = self.model_info["model_name"] # This will also fail. See below.

Compare to the API's route: source code

# See: https://github.com/oobabooga/text-generation-webui/blob/333075e726e8216ed35c6dd17a479c44fb4d8691/extensions/api/blocking_api.py#L28

def do_GET(self):
    if self.path == '/api/v1/model':
        self.send_response(200)
        self.end_headers()
        response = json.dumps({
            'result': shared.model_name # <= Returning { "result": "name/model" }
        })
    # ...

Am I somehow using a different web API than what others are using?
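For what it's worth, the mismatch above reduces to the key name and value shape. A self-contained illustration (not the PR's actual code) of an accessor that tolerates both shapes and fails loudly otherwise:

```python
def get_model_name(data: dict) -> str:
    """Extract the model name from an /api/v1/model JSON response."""
    if "result" in data:
        # Shape actually returned by blocking_api.py: {"result": "name/model"}
        return data["result"]
    if "results" in data:
        # Shape the PR assumed: {"results": {"model_name": "name/model"}}
        return data["results"]["model_name"]
    raise ValueError(f"Unexpected /api/v1/model response: {data}")

print(get_model_name({"result": "name/model"}))  # -> name/model
```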


Feedback: Consider adding some asserts and exception handling around HTTP requests.

You can't always trust that an HTTP request will succeed: lost connections, timeouts, or, in this case, a malformed response from the API.

Consider adding some asserts or try/except blocks in case things go wrong. This would help produce helpful error messages when debugging failures in the API itself.

Example:

import logging

import requests


def getModelInfo(self):
    data = self._make_request(self.base_url + '/api/v1/model')
    assert 'result' in data, f"Expected /api/v1/model to respond with a result. Got: {data}"
    return data['result']


def _make_request(self, uri: str):
    """Make a GET request to TGWUI.

    Args:
        uri: URI to fetch.

    Returns:
        Data as response.json()
    """
    try:
        response = requests.get(uri)
        return response.json()
    except requests.exceptions.RequestException as e:
        logging.critical(f'TGWUI API request failed: {e}')
        raise

chrisle avatar Jul 04 '23 22:07 chrisle

Thanks for the feedback. I should have put guards on the API routes, and I should pull the latest changes from TGWUI. I'm a little confused about getting initial feedback from the maintainers but then nothing after I updated. I'm honestly thinking about closing this PR, revising the code, and then making a new one. Does that sound like a good plan?

danikhan632 avatar Jul 04 '23 23:07 danikhan632

Totally up to you. It's your PR :)

Personally? I'd keep it. With a little extra time to button things up, you'll have a solid PR.

Just a passing thought on PRs, nothing to do with this one in particular: a quick Google search suggests the owner is a Senior Researcher at MS. That probably means they're stuck in lots of boring "impact assessment committee meetings" and unfortunately don't have enough time to do the full-on code reviews they'd like to. Reviewers can rest easy when they glance at the code and see that the contributor has handled exceptions and edge cases, the code is well documented, the style looks clean, and, bonus points, there's test coverage. Then it looks rock solid and ready to ship.

chrisle avatar Jul 05 '23 09:07 chrisle

Sounds good. I'll add further unit tests, documentation, error handling, etc. I might open a new PR just because I've changed so much code between commits. Expect an update very soon.

danikhan632 avatar Jul 05 '23 14:07 danikhan632

Will this be merged, or is there now existing functionality within Guidance to support text-generation-webui or external LLMs via API requests?

jordan-barrett-jm avatar Nov 17 '23 04:11 jordan-barrett-jm

Will this be merged, or is there now existing functionality within Guidance to support text-generation-webui or external LLMs via API requests?

I plan on pushing these changes to litellm

danikhan632 avatar Nov 17 '23 18:11 danikhan632