feat(LM Studio): Add response_format param for LM Studio to config
Description
Thank you for reviewing this PR! It adds a config parameter (`lmstudio_response_format`) so the response_format sent to LM Studio can be configured as desired.
Background: LM Studio 0.3.14, mem0 0.1.81
I got the following error when using mem0 with LM Studio. To avoid it, we would like to be able to specify an arbitrary response_format in the LLM config.
```
Traceback (most recent call last):
File "/Users/*************/workspace/py_prj/agent_practice/mem.py", line 77, in <module>
m.add(messages, user_id="alice123", metadata={"category": "movies"})
File "/Users/*************/workspace/py_prj/agent_practice/.venv/lib/python3.12/site-packages/mem0/memory/main.py", line 158, in add
vector_store_result = future1.result()
^^^^^^^^^^^^^^^^
File "/Users/*************/.local/share/uv/python/cpython-3.12.7-macos-aarch64-none/lib/python3.12/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/Users/*************/.local/share/uv/python/cpython-3.12.7-macos-aarch64-none/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/Users/*************/.local/share/uv/python/cpython-3.12.7-macos-aarch64-none/lib/python3.12/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/*************/workspace/py_prj/agent_practice/.venv/lib/python3.12/site-packages/mem0/memory/main.py", line 197, in _add_to_vector_store
response = self.llm.generate_response(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/*************/workspace/py_prj/agent_practice/.venv/lib/python3.12/site-packages/mem0/llms/lmstudio.py", line 50, in generate_response
response = self.client.chat.completions.create(**params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/*************/workspace/py_prj/agent_practice/.venv/lib/python3.12/site-packages/openai/_utils/_utils.py", line 279, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/*************/workspace/py_prj/agent_practice/.venv/lib/python3.12/site-packages/openai/resources/chat/completions/completions.py", line 914, in create
return self._post(
^^^^^^^^^^^
File "/Users/*************/workspace/py_prj/agent_practice/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1242, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/*************/workspace/py_prj/agent_practice/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 919, in request
return self._request(
^^^^^^^^^^^^^^
File "/Users/*************/workspace/py_prj/agent_practice/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1023, in _request
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': "'response_format.type' must be 'json_schema'"}
```
We would like to be able to specify it in the following way:
```python
config = {
"llm": {
"provider": "lmstudio",
"config": {
"model": "meta-llama-3.1-70b-instruct",
"temperature": 0.2,
"max_tokens": 2000,
"lmstudio_base_url": "http://127.0.0.1:1234/v1", # default LM Studio API URL
"lmstudio_response_format": {"type": "json_schema", "json_schema": {"type": "object", "schema": {}}},
}
},
}
```
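For context, here is a rough sketch (not the exact mem0 code) of how the configured value could flow into the request inside `generate_response` in `mem0/llms/lmstudio.py`. The attribute name `self.config.lmstudio_response_format` and the method signature are assumptions based on the config keys above.

```python
# Rough sketch only: the attribute names and method signature are assumptions
# based on the config keys above, not the exact mem0 implementation.
def generate_response(self, messages, response_format=None, tools=None, tool_choice="auto"):
    params = {
        "model": self.config.model,
        "messages": messages,
        "temperature": self.config.temperature,
        "max_tokens": self.config.max_tokens,
    }
    # Prefer the explicitly configured format (LM Studio expects a
    # json_schema-style payload); otherwise fall back to the caller's value.
    configured = getattr(self.config, "lmstudio_response_format", None)
    if configured is not None:
        params["response_format"] = configured
    elif response_format is not None:
        params["response_format"] = response_format

    response = self.client.chat.completions.create(**params)
    return response.choices[0].message.content
```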
Thank you so much for considering this PR.
P.S. We were not sure where the documentation for this change lives. Could you tell us where and how to update it?
Fixes # (issue)
Type of change
Please delete options that are not relevant.
- [ ] Bug fix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Refactor (does not change functionality, e.g. code style improvements, linting)
- [ ] Documentation update
How Has This Been Tested?
Added test_generate_response_specifying_response_format, a test case that verifies the specified response_format is reflected in the request params. The existing tests were also run to confirm that, when response_format is not set in the config, the same params as before are used. A rough sketch of what such a test could look like is included after the list below.
- [x] Unit Test
- [x] Test Script (please provide)
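For illustration, here is a minimal sketch of what such a test could look like. The import path, class name, constructor signature, and mock setup are assumptions, not the exact test added in this PR.

```python
# Minimal sketch of the new test; the class name, import path, and constructor
# signature are assumptions, not the exact code added in this PR.
from unittest.mock import Mock

from mem0.llms.lmstudio import LMStudioLLM  # assumed class name / import path


def test_generate_response_specifying_response_format():
    response_format = {"type": "json_schema", "json_schema": {"type": "object", "schema": {}}}
    config = Mock(
        model="meta-llama-3.1-70b-instruct",
        temperature=0.2,
        max_tokens=2000,
        lmstudio_base_url="http://127.0.0.1:1234/v1",
        lmstudio_response_format=response_format,
    )

    llm = LMStudioLLM(config)
    # Swap in a mock client so no request reaches LM Studio.
    llm.client = Mock()
    llm.client.chat.completions.create.return_value = Mock(
        choices=[Mock(message=Mock(content="ok"))]
    )

    llm.generate_response(messages=[{"role": "user", "content": "Hello"}])

    # The configured response_format should be forwarded to the OpenAI-compatible client.
    _, kwargs = llm.client.chat.completions.create.call_args
    assert kwargs["response_format"] == response_format
```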
Checklist:
- [x] My code follows the style guidelines of this project
- [x] I have performed a self-review of my own code
- [x] I have commented my code, particularly in hard-to-understand areas
- [x] I have made corresponding changes to the documentation
- [x] My changes generate no new warnings
- [x] I have added tests that prove my fix is effective or that my feature works
- [x] New and existing unit tests pass locally with my changes
- [x] Any dependent changes have been merged and published in downstream modules
- [x] I have checked my code and corrected any misspellings
Maintainer Checklist
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] Made sure Checks passed
Hey @siroa Can you please resolve the merge conflicts?
@Dev-Khant Thank you for your feedback. I have now resolved the merge conflicts.
@Dev-Khant Apologies, I requested a review by pressing the button by mistake. Please disregard it.
@Dev-Khant I've updated the doc. Please let me know if everything looks good on your end.
Hey @siroa Looks good to me. Thanks for the contribution!