Title | Type | Date | Comment |
---|---|---|---|
AI-Exploits | code | | A collection of real-world AI/ML exploits for responsibly disclosed vulnerabilities |
LLM-Guard | code | | The security toolkit for LLM interactions |
Garak | code | | LLM vulnerability scanner |
NIST AI RMF Playbook | doc | | NIST AI Risk Management Framework Playbook |
MITRE ATLAS | doc | | Adversarial Threat Landscape for Artificial-Intelligence Systems |
NIST AI 100-2e2023 | doc | | Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations |
OWASP ML Top 10 | doc | | Top 10 security issues of machine learning systems |
Deploying AI Systems Securely | doc | Apr 2024 | Best Practices for Deploying Secure and Resilient AI Systems, From NCSC |
LLM Top 10 | doc | | OWASP Top 10 for LLM Applications |
Google SAIF | doc | | Google Secure AI Framework |
Stealing Part of a Production Language Model | paper | Mar 11, 2024 | model-stealing attack that extracts precise, nontrivial information from black-box production language models |
Leaky Language Models | blog | Dec 1, 2023 | Privado: "So, do we all use Leaky Language Models (LLMs) now?" |
Leveraging an AI Security Framework | blog | Apr 11, 2024 | Resilient Cyber, Chris Hughes |
OpenLIT | code | | Open-source GenAI and LLM observability platform |
openllmetry | code | | Open-source observability for your LLM application |
Usage Panda LLM Proxy | code | | Security and compliance proxy for LLM APIs |
Phoenix | code | | AI observability & evaluation |
Prompt Injection Attacks and Defenses in LLM-Integrated Applications | paper | | Prompt injection attacks & defenses |
Daxa | code | | 2-in-1 data classifier for sensitive inputs/datasets and access management for RAG; Pebblo enables developers to safely load data and promote their Gen AI app to deployment |
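Several of the tools above (e.g. LLM-Guard, Usage Panda LLM Proxy) scan user prompts for injection attempts before they reach the model. A minimal, self-contained sketch of that input-scanning idea is below; the patterns and scoring are illustrative assumptions, not any listed tool's actual detection logic:

```python
import re

# Toy heuristic prompt-injection scanner. These patterns and the risk
# scoring are illustrative placeholders, not a real tool's rule set.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"disregard (the )?(system|previous) prompt",
    r"you are now (DAN|in developer mode)",
    r"reveal (your|the) (system prompt|instructions)",
]

def scan_prompt(prompt: str) -> tuple[bool, float]:
    """Return (is_suspicious, risk_score in [0, 1]) for a user prompt."""
    hits = sum(
        1 for pat in INJECTION_PATTERNS
        if re.search(pat, prompt, re.IGNORECASE)
    )
    risk = min(1.0, hits / 2)  # crude score: two or more pattern hits -> max risk
    return hits > 0, risk
```

Production scanners layer ML classifiers and semantic checks on top of such rules; a regex-only filter like this is trivially bypassed by paraphrasing.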
Company | Link |
---|---|
HiddenLayer | |
Bosch AIShield | |
Microsoft | https://www.youtube.com/watch?v=f0MDjS9-dNw |
Palo Alto | https://www.youtube.com/watch?v=9RCq-vN3jzA |
AWS | https://www.youtube.com/watch?v=eOmgoNIC7a0 |