[Initiative]:Cloud Native AI Security Whitepaper
Name
Cloud Native AI Security Whitepaper
Short description
This whitepaper discusses securing AI workloads and AI systems in cloud-native environments
Responsible group
TOC
Does the initiative belong to a subproject?
Yes
Subproject name
Cloud Native AI Working Group
Primary contact
@dehatideep
Additional contacts
@zanetworker @ronaldpetty @raravena80
Initiative description
Cloud Native AI Security Whitepaper
The increasing adoption of AI in cloud-native environments presents a compelling case for prioritizing AI security. As AI systems become integral to decision-making and automation, the potential impact of security breaches becomes a critical concern. Compromised AI models can lead to incorrect predictions, manipulated outcomes, and even the theft of sensitive intellectual property. Moreover, regulatory compliance and customer trust are at stake when AI systems are not adequately secured. This paper aims to address these concerns by providing a guide to securing AI in cloud-native environments, offering practical solutions and strategies to mitigate risks and ensure the integrity of AI-powered applications. Along these lines, here are some rough goals/ideas:
- Analyze security risks unique to cloud-native AI deployments and the potential impact of breaches.
- Explore how the existing cloud-native security tooling/landscape can make AI workloads more secure, and where it falls short, how to close the gaps.
- Draft design considerations for securing AI workloads, data, and infrastructure in cloud-native environments, including Kubernetes security best practices.
- Provide actionable guidance on securing AI models, data pipelines, and infrastructure, along with recommendations for secure CI/CD pipelines and vulnerability management.
- Explore emerging trends such as confidential computing, homomorphic encryption, and AI-powered threat detection for cloud-native AI.
AI WG issue link: https://github.com/cncf/tag-runtime/issues/177
Deliverable(s) or exit criteria
A fully reviewed document, with all comments from the stakeholders (AI WG and STAG) incorporated. This was completed on May 21, 2025; the document link is provided in the description and it is fully ready. All meeting minutes and research/resources docs are listed on the first page in the 'Metadata' section. The project actively started in October 2024, and the finished whitepaper was delivered on May 21, 2025.
TOC members - This whitepaper is ready to be published. Work has been ongoing since October 2024 under the AI WG initiative https://github.com/cncf/tag-runtime/issues/177, and all review cycles, doc changes, and comments from AI WG and STAG folks have been incorporated. Please reach out to me or the folks mentioned in additional contacts if you have questions. Thank you.
** This whitepaper covers a lot of security ground and is therefore somewhat voluminous (50 pages), so I am writing a summary doc (3-4 pages) that will refer to the detailed whitepaper. Just FYI: the summary doc can be used/published as a blog post referring to the main whitepaper.
@riaankleinhans @mrbobbytables @angellk This is ready to be published. What's the process to get it published by the CNCF? Servicedesk? Thanks!
Hi Ricardo - this needs to be submitted for TOC review.
Also, the TOC still needs to review all initiatives and approve.
Sounds good @angellk, thanks! Is there a timeline for getting the initiative approved? Also, how do we officially submit it to the TOC?
@vikas-agarwal76 -- what's the overlap between this and #1671 ?
@evankanderson This one (#1718) covers cloud-native AI security issues alone, from a technical standpoint: it discusses issues, mitigation mechanisms, and available tools, and refers to regulations only to emphasize that a security issue may lead to non-compliance. #1671, on the other hand, covers the compliance frameworks themselves and the benchmarks available to corroborate compliance. For example, #1718 will discuss digital identity, cryptography, and cryptographic verification, how these can solve certain technical security challenges, and what new security problems they may create; it will not focus on compliance at all. #1671 will not discuss technical challenges or the technologies that solve them; instead it will quantify the challenges and provide a framework to ensure best practices/solutions are followed.
Hope this helps. @vikas-agarwal76, please add to this if needed.
@dehatideep Yes, I agree. #1671 is about the various compliance frameworks in the AI domain, such as the NIST AI RMF and the EU AI Act, and the best practices and benchmarks needed to support those compliance requirements. #1718, by contrast, is purely focused on AI security issues and their mitigation (tools and technologies) and does not directly delve into compliance aspects, which are much broader than security alone.
@angellk Checking in on this. Do you know when we can have a vote/approval to publish? Thanks.
cc: @dehatideep @joshhalley
@angellk Any chance this could be reviewed by the TOC anytime soon? It has been ready since May, but we have not been able to make any progress on TOC review and approval. I am not sure what else to do other than raising it here. Please let us know. @raravena80, @joshhalley, @nimishamehta5
Thread in Slack https://cloud-native.slack.com/archives/C08Q78J65A7/p1754899438623219?thread_ts=1747609724.013299&cid=C08Q78J65A7