llama-stack-apps

Bug?: Embedded and Malicious Probabilities (index 2 is out of bounds for dimension 1 with size 2)

Open mburges-cvl opened this issue 6 months ago • 3 comments

Hello, when running the model locally as described in the repo, I get the following error:

index 2 is out of bounds for dimension 1 with size 2

with this traceback:

/python3.10/site-packages/mesop/server/server.py:153 | generate_data
 for _ in result: 
/python3.10/site-packages/mesop/runtime/context.py:161 | run_event_handler
 yield from result 
./llama-agentic-system/app/utils/chat.py:228 | on_input_enter
         state = me.state(State) 
         state.input = e.value 
         yield from submit() 
     def submit(): 
         state = me.state(State) 
         if state.in_progress or not state.input: 
./llama-agentic-system/app/utils/chat.py:270 | submit
         cur_uuids = set(state.output) 
         for op_uuid, op in transform(content): 
             KEY_TO_OUTPUTS[op_uuid] = op 
             if op_uuid not in cur_uuids: 
                 output.append(op_uuid) 
                 cur_uuids.add(op_uuid) 
./llama-agentic-system/app/utils/transform.py:39 | transform
     generator = sync_generator(EVENT_LOOP, client.run([input_message])) 
     for chunk in generator: 
         if not hasattr(chunk, "event"): 
             # Need to check for custom tool first 
             # since it does not produce event but instead 
             # a Message 
./llama-agentic-system/app/utils/common.py:36 | generator
         while True: 
             try: 
                 yield loop.run_until_complete(async_generator.__anext__()) 
             except StopAsyncIteration: 
                 break 
     return generator() 
/python3.10/asyncio/base_events.py:649 | run_until_complete
 return future.result() 
./llama-agentic-system/llama_agentic_system/utils.py:73 | run
 async for chunk in execute_with_custom_tools( 
./llama-agentic-system/llama_agentic_system/client.py:122 | execute_with_custom_tools
 async for chunk in system.create_agentic_system_turn(request): 
./llama-agentic-system/llama_agentic_system/agentic_system.py:763 | create_agentic_system_turn
 async for event in agent.create_and_execute_turn(request): 
./llama-agentic-system/llama_agentic_system/agentic_system.py:270 | create_and_execute_turn
 async for chunk in self.run( 
./llama-agentic-system/llama_agentic_system/agentic_system.py:396 | run
 async for res in self.run_shields_wrapper( 
./llama-agentic-system/llama_agentic_system/agentic_system.py:341 | run_shields_wrapper
 await self.run_shields(messages, shields) 
/python3.10/site-packages/llama_toolchain/safety/shields/shield_runner.py:41 | run_shields
 results = await asyncio.gather(*[s.run(messages) for s in shields]) 
/python3.10/site-packages/llama_toolchain/safety/shields/base.py:55 | run
 return await self.run_impl(text) 
/python3.10/site-packages/llama_toolchain/safety/shields/prompt_guard.py:96 | run_impl
 score_malicious = probabilities[0, 2].item() 
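The failing frame indexes class 2 of the softmax output, but the classifier head that got loaded evidently produces only two labels, so dimension 1 has size 2. Here is a minimal sketch that reproduces the same IndexError, assuming a Hugging Face sequence classifier with a two-label head (the model name below is only illustrative, not the checkpoint the shield actually loads):

    # Minimal sketch reproducing the error. Assumes a Hugging Face
    # sequence classifier with a two-label head; the model name is
    # illustrative only.
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    name = "distilbert-base-uncased-finetuned-sst-2-english"  # 2-label head
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name)

    inputs = tokenizer("some prompt text", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits              # shape: [1, 2]
    probabilities = torch.softmax(logits, dim=-1)    # shape: [1, 2]

    probabilities[0, 1].item()  # fine: last valid class index
    probabilities[0, 2].item()  # IndexError: index 2 is out of bounds
                                # for dimension 1 with size 2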

To fix it, I changed:

        score_embedded = probabilities[0, 1].item()
        score_malicious = probabilities[0, 2].item()

to

        score_embedded = probabilities[0, 0].item()
        score_malicious = probabilities[0, 1].item()

in llama_toolchain/safety/shields/prompt_guard.py, PromptGuardShield.run_impl (lines 95/96).

I'm not sure if that is correct, but the model works for me now.
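
If the underlying cause is that a two-label checkpoint was loaded where the shield expects three labels, hard-coding the swapped indices will break again as soon as a three-label checkpoint is used. A sketch of a more defensive variant, reusing the shield's existing model and probabilities; the label names here are guesses on my part, not taken from llama_toolchain:

    # Sketch of a less hard-coded variant. Assumes the checkpoint's
    # config carries id2label entries; the label names used below are
    # assumptions, not verified against the shield's checkpoint.
    label2id = {v.lower(): k for k, v in model.config.id2label.items()}

    embedded_idx = label2id.get("injection", 0)    # assumed label name
    malicious_idx = label2id.get("jailbreak", 1)   # assumed label name

    num_labels = probabilities.shape[-1]
    if max(embedded_idx, malicious_idx) >= num_labels:
        raise ValueError(f"classifier head has only {num_labels} labels")

    score_embedded = probabilities[0, embedded_idx].item()
    score_malicious = probabilities[0, malicious_idx].item()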

mburges-cvl · Jul 30 '24 09:07