ArxivDigest
Can we support other LLM models?
I recently published a package, llm-client, that can be very helpful in enabling support for other LLM models, including OpenAI, Google, AI21, HuggingfaceHub, Aleph Alpha, Anthropic, and local models with transformers.
So there are three things that need to be done to incorporate other models:
- Get the appropriate auth/api tokens to the code
- Change the calls in https://github.com/AutoLLM/ArxivDigest/blob/5c7340d79ba21b3c6e77510d5ca54b78dfbb0d02/src/utils.py#L108 to use the new model
- We need to parse the output, so the model must always generate responses in a fixed format: `{"Relevancy score": "an integer score out of 10", "Reasons for match": "1-2 sentence short reasonings"}`. We've only tested this with gpt-3.5-turbo; before we officially support any other LLM, we need to make sure it follows that format given the prompt in src/relevancy_prompt.txt
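As a rough sketch of the first two steps, the model call could be hidden behind a small provider registry so that adding a new backend only means registering one callable; everything else (prompting, parsing) stays unchanged. The names below are illustrative, not the actual `utils.py` API, and the fake provider stands in for a real client call:

```python
# Hypothetical provider registry (illustrative; not the real utils.py code).
# Each provider is a callable: prompt -> raw response string.
PROVIDERS = {}

def register(name):
    """Decorator that records a provider under a short name."""
    def wrap(fn):
        PROVIDERS[name] = fn
        return fn
    return wrap

@register("fake")
def fake_completion(prompt: str) -> str:
    # Stand-in for a real client call (e.g. an OpenAI or Anthropic SDK call).
    return '{"Relevancy score": "7", "Reasons for match": "Example only."}'

def call_llm(prompt: str, provider: str = "fake") -> str:
    """Dispatch the prompt to the configured provider."""
    return PROVIDERS[provider](prompt)
```

Swapping in a real model would mean registering another function that reads its API token from the environment and calls the vendor SDK.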
The last point is the main pain point, and is generally independent of the exact library we use to call the model.
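One way to make that pain point concrete: validate each response against the required JSON shape before using it, and treat anything else as a failure for that model. This is a minimal sketch (function and key names taken from the format above, the rest is an assumption about how validation might look):

```python
import json

# Keys required by the response format in src/relevancy_prompt.txt.
REQUIRED_KEYS = {"Relevancy score", "Reasons for match"}

def parse_response(text: str):
    """Return the parsed dict if `text` follows the required JSON
    format, or None if the model deviated from it."""
    try:
        data = json.loads(text)
    except json.JSONDecodeError:
        return None
    if not isinstance(data, dict) or not REQUIRED_KEYS <= data.keys():
        return None
    return data
```

A candidate model could then be smoke-tested by running the prompt over a batch of papers and measuring how often `parse_response` returns None.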