
Server webmethod endpoint and llama-stack-spec.yaml file mismatch

Open · cheesecake100201 opened this issue 1 year ago · 0 comments

System Info

  • PyTorch version: 2.2.2

  • Is debug build: False

  • CUDA used to build PyTorch: None

  • ROCM used to build PyTorch: N/A

  • OS: macOS 13.6.6 (x86_64)

  • GCC version: Could not collect

  • Clang version: 14.0.3 (clang-1403.0.22.14.1)

  • CMake version: Could not collect

  • Libc version: N/A

  • Python version: 3.10.14 (main, May 6 2024, 14:47:20) [Clang 14.0.6 ] (64-bit runtime)

  • Python platform: macOS-10.16-x86_64-i386-64bit

  • Is CUDA available: False

  • CUDA runtime version: No CUDA

  • CUDA_MODULE_LOADING set to: N/A

  • GPU models and configuration: No CUDA

  • Nvidia driver version: No CUDA

  • cuDNN version: No CUDA

  • HIP runtime version: N/A

  • MIOpen runtime version: N/A

  • Is XNNPACK available: True

  • CPU: Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz

  • Versions of relevant libraries:

  • [pip3] numpy==1.26.4

  • [pip3] onnxruntime==1.19.2

  • [pip3] torch==2.2.2

  • [conda] numpy 1.26.4 pypi_0 pypi

  • [conda] torch 2.2.2 pypi_0 pypi

Information

  • [ ] The official example scripts
  • [ ] My own modified scripts

🐛 Describe the bug

The llama-stack-spec.yaml file is used to generate the llama-stack-client SDK. In the .yaml file, some endpoints such as /agents/turn/get and /agents/session/get are declared as GET methods, but in the agents.py file inside apis/agents/ they are registered as POST methods. On top of that, the client sends some parameters as query_params while the server expects them in the request body. The result is a mismatch between the server-side implementation and the generated SDK, which surfaces as a Pydantic validation error.
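To make the mismatch concrete, here is a minimal, self-contained sketch (the method name get_agents_session is approximated from the traceback below, and webmethod here is a stand-in for llama-stack's own decorator, not the real import):

from typing import List, Optional


def webmethod(route: str):
    # Stand-in for llama-stack's @webmethod decorator, which attaches the
    # route to an API method so the server can register it.
    def wrap(fn):
        fn.route = route
        return fn
    return wrap


class Agents:
    # Registered as a POST handler on the server, so agent_id/session_id
    # are expected in the request *body* ...
    @webmethod(route="/agents/session/get")
    async def get_agents_session(
        self, agent_id: str, session_id: str, turn_ids: Optional[List[str]] = None
    ): ...

# ... while llama-stack-spec.yaml declares the same route as a GET with
# agent_id/session_id as *query* parameters, which is what the generated
# SDK sends (see the traceback below).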

Code:

session_info = client.agents.session.retrieve(session_id=<>, agent_id=<>)

Error logs

BadRequestError                           Traceback (most recent call last)
Cell In[58], line 1
----> 1 session_info = client_sync.agents.session.retrieve(agent_id=agent_sync.agent_id, session_id=session_sync.session_id)

File /usr/local/Cellar/jupyterlab/4.2.5_1/libexec/lib/python3.12/site-packages/llama_stack_client/resources/agents/session.py:121, in SessionResource.retrieve(self, agent_id, session_id, turn_ids, x_llama_stack_provider_data, extra_headers, extra_query, extra_body, timeout)
    107 """
    108 Args:
    109   extra_headers: Send extra headers
   (...)
    115   timeout: Override the client-level default timeout for this request, in seconds
    116 """
    117 extra_headers = {
    118     **strip_not_given({"X-LlamaStack-ProviderData": x_llama_stack_provider_data}),
    119     **(extra_headers or {}),
    120 }
--> 121 return self._post(
    122     "/agents/session/get",
    123     body=maybe_transform({"turn_ids": turn_ids}, session_retrieve_params.SessionRetrieveParams),
    124     options=make_request_options(
    125         extra_headers=extra_headers,
    126         extra_query=extra_query,
    127         extra_body=extra_body,
    128         timeout=timeout,
    129         query=maybe_transform(
    130             {
    131                 "agent_id": agent_id,
    132                 "session_id": session_id,
    133             },
    134             session_retrieve_params.SessionRetrieveParams,
    135         ),
    136     ),
    137     cast_to=Session,
    138 )

File /usr/local/Cellar/jupyterlab/4.2.5_1/libexec/lib/python3.12/site-packages/llama_stack_client/_base_client.py:1261, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls)
   1247 def post(
   1248     self,
   1249     path: str,
   (...)
   1256     stream_cls: type[_StreamT] | None = None,
   1257 ) -> ResponseT | _StreamT:
   1258     opts = FinalRequestOptions.construct(
   1259         method="post", url=path, json_data=body, files=to_httpx_files(files), **options
   1260     )
-> 1261     return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))

File /usr/local/Cellar/jupyterlab/4.2.5_1/libexec/lib/python3.12/site-packages/llama_stack_client/_base_client.py:953, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls)
    950 else:
    951     retries_taken = 0
--> 953 return self._request(
    954     cast_to=cast_to,
    955     options=options,
    956     stream=stream,
    957     stream_cls=stream_cls,
    958     retries_taken=retries_taken,
    959 )

File /usr/local/Cellar/jupyterlab/4.2.5_1/libexec/lib/python3.12/site-packages/llama_stack_client/_base_client.py:1056, in SyncAPIClient._request(self, cast_to, options, retries_taken, stream, stream_cls)
   1053         err.response.read()
   1055     log.debug("Re-raising status error")
-> 1056     raise self._make_status_error_from_response(err.response) from None
   1058 return self._process_response(
   1059     cast_to=cast_to,
   1060     options=options,
   (...)
   1064     retries_taken=retries_taken,
   1065 )

BadRequestError: Error code: 400 - {'error': {'detail': {'errors': [{'loc': ['body', 'agent_id'], 'msg': 'Field required', 'type': 'missing'}, {'loc': ['body', 'session_id'], 'msg': 'Field required', 'type': 'missing'}]}}}
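
The 400 above is the server's Pydantic validation failing: the handler parses agent_id and session_id out of the request body, but the SDK put them in the query string. As a quick manual check (a sketch only; host, port, and IDs are placeholders), POSTing them in the JSON body satisfies what the server expects:

import httpx

# Placeholders: substitute a real server address and real IDs.
resp = httpx.post(
    "http://localhost:5000/agents/session/get",
    json={"agent_id": "<agent_id>", "session_id": "<session_id>"},
)
resp.raise_for_status()
print(resp.json())  # the Session object, given the handler reads POST body params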

Expected behavior

Return the Session object with the following fields:

session_id: str
session_name: str
turns: List[Turn]
started_at: datetime

memory_bank: Optional[MemoryBankDef] = None
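
For reference, the fields above correspond to a Pydantic model roughly like this (a sketch; Turn and MemoryBankDef are defined elsewhere in the Agents API and stubbed as Any here):

from datetime import datetime
from typing import Any, List, Optional

from pydantic import BaseModel


class Session(BaseModel):
    session_id: str
    session_name: str
    turns: List[Any]  # List[Turn] in the real API
    started_at: datetime
    memory_bank: Optional[Any] = None  # Optional[MemoryBankDef] in the real API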

cheesecake100201 · Oct 25 '24 11:10