MMTrustEval
Regarding testing my own model
Dear authors, does your method support testing models that have been fine-tuned and saved locally? If so, how should I proceed with this?
Sure. You can integrate your own model into the framework by following these steps:
- Define a subclass of `BaseChat` for your own model.
- Implement the `chat` method to support multimodal and text-only inference.
- Set up the model-id and register it to the model registry via `@registry.register_chatmodel()`.
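The steps above can be sketched roughly as follows. Note this is only an illustration: the real `BaseChat` and `registry` should be imported from the framework, and the minimal stand-ins defined here (as well as the exact `chat` signature and message format) are assumptions so the sketch is self-contained.

```python
# Minimal stand-ins for the framework's registry and BaseChat
# (in the real codebase, import these from mmte instead).
class Registry:
    def __init__(self):
        self.chatmodels = {}

    def register_chatmodel(self):
        def wrap(cls):
            # Register the class under every model-id it declares.
            for model_id in cls.model_family:
                self.chatmodels[model_id] = cls
            return cls
        return wrap

registry = Registry()

class BaseChat:
    model_family = []

    def __init__(self, model_id):
        self.model_id = model_id

@registry.register_chatmodel()
class MyChat(BaseChat):
    # Model-ids this class serves (hypothetical name for illustration).
    model_family = ["my-finetuned-model-v1"]

    def __init__(self, model_id):
        super().__init__(model_id)
        # Load your fine-tuned weights from their local path here.

    def chat(self, messages, **generation_kwargs):
        # Assumed message format: a list of {"role": ..., "content": ...};
        # "content" is a string for text-only turns, or a dict with
        # "text" and "image_path" keys for multimodal turns.
        last = messages[-1]["content"]
        if isinstance(last, dict):  # multimodal inference
            return f"[reply to '{last['text']}' about {last['image_path']}]"
        return f"[reply to '{last}']"  # text-only inference
```

Once registered, the framework can look the class up by its model-id, e.g. `registry.chatmodels["my-finetuned-model-v1"]`.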
If your model is fine-tuned from an existing model family like LLaVA, you can use the pre-defined `LLaVAChat` class (`mmte/models/llava_chat.py`) by changing the model-id and the corresponding config (e.g., `model-path`).
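For the LLaVA case, the change amounts to mapping a new model-id to your local checkpoint. The dict layout below is only an approximation of how such a config might look, not the framework's actual config format:

```python
# Hypothetical config sketch: map model-ids to model paths.
# The real mapping lives alongside LLaVAChat in mmte/models/llava_chat.py.
MODEL_CONFIG = {
    "llava-v1.5-7b": {"model_path": "liuhaotian/llava-v1.5-7b"},
}

# Add your fine-tune under a new model-id pointing at its local save dir
# (placeholder path for illustration).
MODEL_CONFIG["my-llava-finetune"] = {
    "model_path": "/path/to/local/checkpoint",
}
```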
Your model can be tested as long as it can be correctly loaded in `__init__` and supports multimodal and text-only inference in `chat`.
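Before running the full benchmarks, a quick smoke test along these lines can confirm both requirements; the message format and the trivial `EchoChat` stand-in are assumptions for illustration only:

```python
def smoke_test(chat_model):
    """Check a chat model answers both text-only and multimodal turns."""
    text_reply = chat_model.chat(
        [{"role": "user", "content": "Describe an apple."}]
    )
    mm_reply = chat_model.chat(
        [{"role": "user",
          "content": {"text": "What is shown?", "image_path": "test.jpg"}}]
    )
    assert isinstance(text_reply, str) and text_reply
    assert isinstance(mm_reply, str) and mm_reply
    return True

# Trivial stand-in model used only to exercise the helper.
class EchoChat:
    def chat(self, messages, **kwargs):
        content = messages[-1]["content"]
        return content["text"] if isinstance(content, dict) else str(content)
```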