Open-Assistant
Setup web API Path that runs a prompt against a live model and returns the results
This requires:

- [ ] A new web API path created in `website/src/pages/api/prompt_model.ts` or something similarly named. It should take a few POST body fields: a single `prompt` string and a `model_id` string.
- [ ] Given the `prompt` and `model_id`, it should run the prompt against the requested model and send the response to the user.
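A minimal, framework-agnostic sketch of what that endpoint's logic might look like. The field names `prompt` and `model_id` come from the issue; the function name, validation details, and response shape are assumptions for illustration, and the model call is just a placeholder:

```typescript
// Hypothetical shape of the POST body for prompt_model.ts.
interface PromptRequestBody {
  prompt?: unknown;
  model_id?: unknown;
}

// Simplified result type standing in for an HTTP response.
interface PromptResult {
  status: number;
  body: { text?: string; error?: string };
}

// Validates the body, then runs the prompt against a placeholder model.
export function handlePromptRequest(body: PromptRequestBody): PromptResult {
  const { prompt, model_id } = body;
  if (typeof prompt !== "string" || typeof model_id !== "string") {
    return {
      status: 400,
      body: { error: "prompt and model_id must be strings" },
    };
  }
  // Placeholder "model": wrap the prompt so it is obvious the path works.
  const text = `${model_id}_output(${prompt})`;
  return { status: 200, body: { text } };
}
```

In a real Next.js route this would be wrapped in the usual `(req, res)` handler, but the validation and fake-model plumbing would stay the same.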
The actual prompt handling can be done with fake models for now that just make clear something is happening. We don't need to query real models just yet.
I'll take this on if it's still available, just need to know if you had any 'fake model' in mind.
Before moving too far, let's discuss this one in the web team meeting. It depends on how fast the ML team has gotten a model up and running.
For pure testing purposes though, I think it's fair to use a model that just wraps the text with something.
For example:
- input: "Name a cat for me"
- output: "model1_output(Name a cat for me)"
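The wrapping behavior above could be sketched as a small registry of fake models keyed by `model_id`. The registry shape and function names here are assumptions, not anything agreed on in the thread:

```typescript
// Hypothetical fake-model registry: each entry just wraps the prompt
// so the round trip through the API is visible during testing.
const fakeModels: Record<string, (prompt: string) => string> = {
  model1: (prompt) => `model1_output(${prompt})`,
  model2: (prompt) => `model2_output(${prompt})`,
};

// Looks up the requested fake model and runs the prompt through it.
export function runFakeModel(modelId: string, prompt: string): string {
  const model = fakeModels[modelId];
  if (!model) {
    throw new Error(`Unknown model_id: ${modelId}`);
  }
  return model(prompt);
}
```

Swapping in a real model later would then only mean replacing the registry entries, not the endpoint code.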
Check the discord or ping me if you want to do some testing with an early api and model version.
Going to close this. We decided that the client side will talk directly to the inference server and verify via JWT (when I get that working).