
Add debug logging for RBAC access control decisions

Open · derekhiggins opened this issue 1 month ago · 1 comment

Refactor is_action_allowed() to track the decision outcome, the index of the matched rule, and the reason for the decision, and emit a structured debug log line to aid in troubleshooting access control.
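A minimal sketch of the idea, not the actual llama-stack implementation: the real is_action_allowed() signature and rule model differ, and the Rule shape below is hypothetical. The point is that the decision, the matched rule index, and the reason are captured together and emitted as one comma-separated debug line matching the example output format.

```python
import logging
from dataclasses import dataclass, field

logger = logging.getLogger("core::auth")


@dataclass
class Rule:
    """Hypothetical rule shape for illustration only."""
    permit: bool                      # True = allow, False = deny
    actions: list = field(default_factory=list)
    description: str = ""


def is_action_allowed(rules, principal, resource, action):
    """Evaluate rules in order and log a structured ACL decision line.

    Sketch under assumed semantics: the first rule mentioning the action
    decides the outcome; otherwise access is denied by default.
    """
    allowed, rule_index, reason = False, -1, "no matching rule (default deny)"
    for i, rule in enumerate(rules):
        if action in rule.actions:
            allowed, rule_index, reason = rule.permit, i, rule.description
            break
    # One CSV-style line per decision, mirroring the example output:
    # ACL,<principal>,<resource>,<action>,<allowed>,<rule_index>,<reason>
    logger.debug("ACL,%s,%s,%s,%s,%s,%s",
                 principal, resource, action, allowed, rule_index, reason)
    return allowed
```

Keeping the outcome, rule index, and reason in one line makes it easy to see *which* rule granted or denied a request without re-reading the policy file.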

derekhiggins avatar Nov 28 '25 14:11 derekhiggins

Example output:

DEBUG    2025-11-28 14:18:57,828 llama_stack.core.server.auth:149 core::auth: Authentication successful: e91dfa83-0d63-4457-8a9d-562696c9b65d with 2 attributes                                                                            
DEBUG    2025-11-28 14:18:57,830 llama_stack.core.server.server:230 core::server: Incoming raw request body for POST /v1/chat/completions:                                                                                                 
         {                                                                                                                                                                                                                                 
             'messages': [{'role': 'user', 'content': 'Say Hi!!!'}],                                                                                                                                                                       
             'model': 'vllm-inference/llama-3-2-3b',                                                                                                                                                                                       
             'max_tokens': 10                                                                                                                                                                                                              
         }                                                                                                                                                                                                                                 
DEBUG    2025-11-28 14:18:57,831 llama_stack.core.routers.inference:215 core::routers: InferenceRouter.openai_chat_completion: model=vllm-inference/llama-3-2-3b, stream=None, messages=[OpenAIUserMessageParam(role='user', content='Say  
         Hi!!!', name=None)]                                                                                                                                                                                                               
DEBUG    2025-11-28 14:18:57,833 llama_stack.core.access_control.access_control:128 core::auth: ACL,e91dfa83-0d63-4457-8a9d-562696c9b65d,model::vllm-inference/llama-3-2-3b,read,True,0,Users can read vLLM Llama model                    
DEBUG    2025-11-28 14:18:57,834 llama_stack.core.access_control.access_control:128 core::auth: ACL,e91dfa83-0d63-4457-8a9d-562696c9b65d,model::vllm-inference/llama-3-2-3b,read,True,0,Users can read vLLM Llama model                    
INFO     2025-11-28 14:18:58,438 uvicorn.access:473 uncategorized: ::1:51018 - "POST /v1/chat/completions HTTP/1.1" 200                                                                                                                    
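Since the ACL debug line is comma-separated with the free-text reason as the last field, it can be pulled apart with a bounded split; the helper below is a hypothetical post-processing snippet, not part of the proposed change. Field order is taken from the example output above: tag, principal, resource, action, allowed, matched rule index, reason.

```python
def parse_acl_line(message):
    """Split an ACL debug message into its seven fields.

    Limiting the split to 6 keeps any commas inside the trailing
    reason text from being treated as field separators.
    """
    tag, principal, resource, action, allowed, rule_index, reason = message.split(",", 6)
    return {
        "principal": principal,
        "resource": resource,
        "action": action,
        "allowed": allowed == "True",
        "rule_index": int(rule_index),
        "reason": reason,
    }
```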
