
Improve errors in client when there are server errors

Open · raghotham opened this issue · 2 comments

System Info

Python 3.11, CUDA

Information

  • [ ] The official example scripts
  • [ ] My own modified scripts

🐛 Describe the bug

Server errors need to be propagated to the client and logged there. Today the real failure only shows up in the server log, while the client dies with an unrelated `AttributeError`.
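
One possible shape for the fix on the server side: `sse_generator` (the frame at the top of the server traceback below) could catch exceptions raised by the event generator and emit a structured error event before the stream ends, so the client has something to log. A minimal sketch, assuming a hypothetical `{"error": ...}` payload shape; the real llama-stack wire format may differ:

import json
import traceback

async def sse_generator(event_gen):
    try:
        async for item in event_gen:
            # Assumes items are JSON-serializable; the real server
            # encodes its own event models here.
            yield f"data: {json.dumps(item)}\n\n"
    except Exception as exc:
        # Hypothetical error payload: propagate the failure to the
        # client instead of silently terminating the stream.
        yield "data: " + json.dumps({
            "error": {
                "type": type(exc).__name__,
                "message": str(exc),
                "traceback": traceback.format_exc(),
            }
        }) + "\n\n"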

Error logs

Client error:

$ WOLFRAM_ALPHA_API_KEY=XXX BRAVE_SEARCH_API_KEY=XXX with-proxy python -m examples.agents.inflation localhost 5001
Created session_id=f179010c-ba26-4591-ab93-5589c03db291 for Agent(734c3530-a059-4eeb-95dd-e87a07bb8dff)
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/home/rsm/llama-stack-apps/examples/agents/inflation.py", line 95, in <module>
    fire.Fire(main)
  File "/home/rsm/.conda/envs/vllm/lib/python3.11/site-packages/fire/core.py", line 135, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rsm/.conda/envs/vllm/lib/python3.11/site-packages/fire/core.py", line 468, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
                                ^^^^^^^^^^^^^^^^^^^^
  File "/home/rsm/.conda/envs/vllm/lib/python3.11/site-packages/fire/core.py", line 684, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
                ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rsm/llama-stack-apps/examples/agents/inflation.py", line 91, in main
    asyncio.run(run_main(host, port, disable_safety))
  File "/home/rsm/.conda/envs/vllm/lib/python3.11/asyncio/runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/home/rsm/.conda/envs/vllm/lib/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rsm/.conda/envs/vllm/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/home/rsm/llama-stack-apps/examples/agents/inflation.py", line 86, in run_main
    async for log in EventLogger().log(response):
  File "/home/rsm/.conda/envs/vllm/lib/python3.11/site-packages/llama_stack_client/lib/agents/event_logger.py", line 55, in log
    async for chunk in event_generator:
  File "/home/rsm/.conda/envs/vllm/lib/python3.11/site-packages/llama_stack_client/lib/agents/agent.py", line 54, in create_turn
    if chunk.event.payload.event_type != "turn_complete":
       ^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'payload'
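
The `AttributeError` above is the client tripping over a response that does not deserialize into a turn event: `chunk.event` comes back as `None`, and the unguarded `.payload` access crashes. A defensive guard in the `create_turn` loop, sketched here with assumed names (only the `None` check is new; the rest mirrors the traceback), would at least fail with a readable message:

async def create_turn(self, event_generator):
    async for chunk in event_generator:
        if chunk.event is None:
            # Not a turn event -- most likely an error payload from the
            # server. Fail loudly instead of with an opaque AttributeError.
            raise RuntimeError(f"unexpected chunk from server: {chunk}")
        if chunk.event.payload.event_type != "turn_complete":
            yield chunk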

Server error:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/llama_stack/distribution/server/server.py", line 208, in sse_generator
    async for item in event_gen:
  File "/usr/local/lib/python3.10/site-packages/llama_stack/providers/inline/agents/meta_reference/agents.py", line 138, in _create_agent_turn_streaming
    async for event in agent.create_and_execute_turn(request):
  File "/usr/local/lib/python3.10/site-packages/llama_stack/providers/inline/agents/meta_reference/agent_instance.py", line 179, in create_and_execute_turn
    async for chunk in self.run(
  File "/usr/local/lib/python3.10/site-packages/llama_stack/providers/inline/agents/meta_reference/agent_instance.py", line 244, in run
    async for res in self.run_multiple_shields_wrapper(
  File "/usr/local/lib/python3.10/site-packages/llama_stack/providers/inline/agents/meta_reference/agent_instance.py", line 299, in run_multiple_shields_wrapper
    await self.run_multiple_shields(messages, shields)
  File "/usr/local/lib/python3.10/site-packages/llama_stack/providers/inline/agents/meta_reference/safety.py", line 37, in run_multiple_shields
    responses = await asyncio.gather(
  File "/usr/local/lib/python3.10/site-packages/llama_stack/distribution/routers/routers.py", line 189, in run_shield
    return await self.routing_table.get_provider_impl(shield_id).run_shield(
  File "/usr/local/lib/python3.10/site-packages/llama_stack/distribution/routers/routing_tables.py", line 149, in get_provider_impl
    raise ValueError(
ValueError: Shield `llama_guard` not served by provider: `llama-guard`. Make sure there is an Safety provider serving this shield.

Expected behavior

The client should surface the underlying server error, i.e. something like:

ValueError: Shield `llama_guard` not served by provider: `llama-guard`. Make sure there is an Safety provider serving this shield.
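
For that, the client also needs to recognize the error event and log or re-raise it. A hedged sketch of the consuming loop, assuming the `{"error": ...}` payload shape from the server sketch above (the `error` attribute name is an assumption, not the actual client API):

async def log_stream(response):
    async for chunk in response:
        error = getattr(chunk, "error", None)
        if error is not None:
            # Log at the client, as requested in this issue, then propagate.
            print(f"Server error: {error['message']}")
            raise RuntimeError(error["message"])
        print(chunk)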

raghotham · Nov 13 '24