
Setup web API Path that runs a prompt against a live model and returns the results

fozziethebeat opened this issue 2 years ago • 3 comments

This requires:

  • [ ] A new web API path created in website/src/pages/api/prompt_model.ts (or something similarly named). It should accept a few POST body fields: a single prompt string and a model_id string.
  • [ ] Given the prompt and model_id, it should run the prompt against the requested model and then send the generated response to the user.
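The two requirements above could be sketched roughly as follows. This is only an illustration, not the project's real API: the function and field names are hypothetical, the model registry is stubbed, and the HTTP wiring is omitted so the validation logic stands alone.

```typescript
// Hypothetical shape of the proposed prompt_model endpoint logic.
// Assumption: the handler validates the POST body, looks up the model,
// and returns either the model's text or an error status.

interface PromptModelBody {
  prompt?: unknown;
  model_id?: unknown;
}

type PromptModelResult =
  | { status: 200; text: string }
  | { status: 400 | 404; error: string };

// Stubbed registry; a real version would call the inference backend.
const MODELS: Record<string, (prompt: string) => string> = {
  model1: (p: string): string => `model1_output(${p})`,
};

export function handlePromptModel(body: PromptModelBody): PromptModelResult {
  if (typeof body.prompt !== "string" || typeof body.model_id !== "string") {
    return { status: 400, error: "prompt and model_id must be strings" };
  }
  const model = MODELS[body.model_id];
  if (!model) {
    return { status: 404, error: `unknown model_id: ${body.model_id}` };
  }
  return { status: 200, text: model(body.prompt) };
}
```

In a Next.js API route, a thin wrapper would read `req.body`, call this function, and mirror the `status` field onto the HTTP response.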

fozziethebeat avatar Jan 06 '23 05:01 fozziethebeat

The actual prompt handling can be done with fake models for now that just make clear something is happening. We don't need to query real models just yet.

fozziethebeat avatar Jan 06 '23 05:01 fozziethebeat

I'll take this on if it's still available, just need to know if you had any 'fake model' in mind.

othrayte avatar Jan 09 '23 11:01 othrayte

Before moving too far, let's discuss this one in the web team meeting. It depends on how fast the ML team has gotten a model up and running.

For pure testing purposes, though, I think it's fair to use a model that just wraps the prompt text with a marker.

For example:

  • input: "Name a cat for me"
  • output: "model1_output(Name a cat for me)"

fozziethebeat avatar Jan 09 '23 11:01 fozziethebeat

Check the discord or ping me if you want to do some testing with an early api and model version.

Rallio67 avatar Jan 11 '23 01:01 Rallio67

Going to close this. We decided that the client side will talk directly to the inference server and authenticate via JWT (once I get that working).

fozziethebeat avatar Jan 24 '23 07:01 fozziethebeat