We are thrilled to announce the publication of the “LLM Taxonomy” paper by the AI Controls working group of the Cloud Security Alliance (CSA). Released on June 11, this groundbreaking paper features contributions from our Bosch colleague, Dr. Jesus Luna, on behalf of COBALT.
This publication is part of the CSA AI Safety Initiative and aims to establish a common taxonomy and definitions for key terms related to risk scenarios and threats to Large Language Models (LLMs). By defining the essential terms needed to discuss LLM risks, the taxonomy helps unify the industry’s understanding of them.
By creating a common language, the document reduces ambiguity, enhances communication between diverse groups, and facilitates more accurate discussions related to AI risk assessment, control measures, and governance.
Key Takeaways from the “CSA Large Language Model (LLM) Threats Taxonomy”
- Critical Asset Definition: The paper defines the critical assets involved in implementing and managing LLM/AI systems.
- Lifecycle Phases: It outlines the phases of the LLM lifecycle, from development through deployment and operation.
- Risk Categorization: Potential LLM risks are categorized comprehensively.
- Impact Categories: The document defines the impact categories of these risks.
This document underscores CSA’s commitment to promoting secure practices in cloud computing and to leveraging cloud technologies to enhance overall computing security.
For those interested, the paper can be downloaded here.