
Using Moonshot for local models instead of hosted models via API endpoints

imda-junyoung opened this issue 1 year ago · 0 comments

For locally developed models, or most models downloaded from the Hugging Face Hub, setting up an API endpoint for the model can be troublesome. Libraries such as vLLM extend support only to models of certain architectures, and alternatives like transformers-openai-api are not actively maintained. This makes it even harder to benchmark custom models via Moonshot, which only supports model calls through API endpoints.

It would be a useful feature for researchers and developers to be able to use Moonshot as a library for benchmarking locally running models, rather than only hosted models behind API endpoints, provided the model can handle appropriately crafted prompts and return a parseable text output.
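As a sketch of what such a hook could look like (all names below are hypothetical and not part of Moonshot's current API), the requirement in this issue amounts to accepting any local object that maps a prompt string to a text completion, instead of requiring an HTTP endpoint:

```python
from typing import Protocol


class LocalModel(Protocol):
    """Hypothetical contract: anything that turns a prompt into text."""

    def generate(self, prompt: str) -> str: ...


class EchoModel:
    """Stand-in for a locally loaded model (e.g. a Hugging Face
    transformers pipeline wrapped to return its generated text)."""

    def generate(self, prompt: str) -> str:
        return f"echo: {prompt}"


def run_benchmark(model: LocalModel, prompts: list[str]) -> list[str]:
    """Feed each benchmark prompt to the local model and collect the
    raw text outputs, ready for downstream parsing and scoring."""
    return [model.generate(p) for p in prompts]


outputs = run_benchmark(EchoModel(), ["What is 2 + 2?"])
print(outputs[0])
```

A real integration would replace `EchoModel` with a wrapper around the user's locally loaded model; the benchmark harness only needs the prompt-in, text-out contract described above.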

imda-junyoung · Feb 29 '24 06:02