3 changes: 3 additions & 0 deletions docs/README.skills.md
@@ -28,6 +28,8 @@ See [CONTRIBUTING.md](../CONTRIBUTING.md#adding-skills) for guidelines on how to
| ---- | ----------- | -------------- |
| [add-educational-comments](../skills/add-educational-comments/SKILL.md) | Add educational comments to the file specified, or prompt asking for file to comment if one is not provided. | None |
| [agent-governance](../skills/agent-governance/SKILL.md) | Patterns and techniques for adding governance, safety, and trust controls to AI agent systems. Use this skill when:<br />- Building AI agents that call external tools (APIs, databases, file systems)<br />- Implementing policy-based access controls for agent tool usage<br />- Adding semantic intent classification to detect dangerous prompts<br />- Creating trust scoring systems for multi-agent workflows<br />- Building audit trails for agent actions and decisions<br />- Enforcing rate limits, content filters, or tool restrictions on agents<br />- Working with any agent framework (PydanticAI, CrewAI, OpenAI Agents, LangChain, AutoGen) | None |
| [agent-owasp-compliance](../skills/agent-owasp-compliance/SKILL.md) | Check AI agent systems against the OWASP Agentic Security Initiative (ASI) Top 10 risks. Evaluates all 10 controls (prompt injection, tool governance, excessive agency, escalation, trust boundaries, logging, identity, policy integrity, supply chain, behavioral anomaly) and generates an X/10 compliance report. | None |
| [agent-supply-chain](../skills/agent-supply-chain/SKILL.md) | Verify supply chain integrity for AI agent plugins and tools. Generate SHA-256 integrity manifests, verify installed plugins match published manifests, detect tampered files, audit dependency version pinning, and gate plugin promotion from dev to production. | None |
| [agentic-eval](../skills/agentic-eval/SKILL.md) | Patterns and techniques for evaluating and improving AI agent outputs. Use this skill when:<br />- Implementing self-critique and reflection loops<br />- Building evaluator-optimizer pipelines for quality-critical generation<br />- Creating test-driven code refinement workflows<br />- Designing rubric-based or LLM-as-judge evaluation systems<br />- Adding iterative improvement to agent outputs (code, reports, analysis)<br />- Measuring and improving agent response quality | None |
| [ai-prompt-engineering-safety-review](../skills/ai-prompt-engineering-safety-review/SKILL.md) | Comprehensive AI prompt engineering safety review and improvement prompt. Analyzes prompts for safety, bias, security vulnerabilities, and effectiveness while providing detailed improvement recommendations with extensive frameworks, testing methodologies, and educational content. | None |
| [appinsights-instrumentation](../skills/appinsights-instrumentation/SKILL.md) | Instrument a webapp to send useful telemetry data to Azure App Insights | `LICENSE.txt`<br />`examples`<br />`references/ASPNETCORE.md`<br />`references/AUTO.md`<br />`references/NODEJS.md`<br />`references/PYTHON.md`<br />`scripts/appinsights.ps1` |
@@ -183,6 +185,7 @@ See [CONTRIBUTING.md](../CONTRIBUTING.md#adding-skills) for guidelines on how to
| [markdown-to-html](../skills/markdown-to-html/SKILL.md) | Convert Markdown files to HTML similar to `marked.js`, `pandoc`, `gomarkdown/markdown`, or similar tools; or writing custom script to convert markdown to html and/or working on web template systems like `jekyll/jekyll`, `gohugoio/hugo`, or similar web templating systems that utilize markdown documents, converting them to html. Use when asked to "convert markdown to html", "transform md to html", "render markdown", "generate html from markdown", or when working with .md files and/or a web templating system that converts markdown to HTML output. Supports CLI and Node.js workflows with GFM, CommonMark, and standard Markdown flavors. | `references/basic-markdown-to-html.md`<br />`references/basic-markdown.md`<br />`references/code-blocks-to-html.md`<br />`references/code-blocks.md`<br />`references/collapsed-sections-to-html.md`<br />`references/collapsed-sections.md`<br />`references/gomarkdown.md`<br />`references/hugo.md`<br />`references/jekyll.md`<br />`references/marked.md`<br />`references/pandoc.md`<br />`references/tables-to-html.md`<br />`references/tables.md`<br />`references/writing-mathematical-expressions-to-html.md`<br />`references/writing-mathematical-expressions.md` |
| [mcp-cli](../skills/mcp-cli/SKILL.md) | Interface for MCP (Model Context Protocol) servers via CLI. Use when you need to interact with external tools, APIs, or data sources through MCP servers, list available MCP servers/tools, or call MCP tools from command line. | None |
| [mcp-copilot-studio-server-generator](../skills/mcp-copilot-studio-server-generator/SKILL.md) | Generate a complete MCP server implementation optimized for Copilot Studio integration with proper schema constraints and streamable HTTP support | None |
| [mcp-security-audit](../skills/mcp-security-audit/SKILL.md) | Audit MCP server configurations in .mcp.json files for security issues including secrets exposure, shell injection patterns, unpinned dependencies, and dangerous command patterns. | None |
| [mcp-create-adaptive-cards](../skills/mcp-create-adaptive-cards/SKILL.md) | Skill converted from mcp-create-adaptive-cards.prompt.md | None |
| [mcp-create-declarative-agent](../skills/mcp-create-declarative-agent/SKILL.md) | Skill converted from mcp-create-declarative-agent.prompt.md | None |
| [mcp-deploy-manage-agents](../skills/mcp-deploy-manage-agents/SKILL.md) | Skill converted from mcp-deploy-manage-agents.prompt.md | None |
323 changes: 323 additions & 0 deletions skills/agent-owasp-compliance/SKILL.md
@@ -0,0 +1,323 @@
---
name: agent-owasp-compliance
description: |
Check any AI agent codebase against the OWASP Agentic Security Initiative (ASI) Top 10 risks.
Use this skill when:
- Evaluating an agent system's security posture before production deployment
- Running a compliance check against OWASP ASI 2026 standards
- Mapping existing security controls to the 10 agentic risks
- Generating a compliance report for security review or audit
- Comparing agent framework security features against the standard
- Any request like "is my agent OWASP compliant?", "check ASI compliance", or "agentic security audit"
---

# Agent OWASP ASI Compliance Check

Evaluate AI agent systems against the OWASP Agentic Security Initiative (ASI) Top 10 — the industry standard for agent security posture.

## Overview

The OWASP ASI Top 10 defines the critical security risks specific to autonomous AI agents — not LLMs, not chatbots, but agents that call tools, access systems, and act on behalf of users. This skill checks whether your agent implementation addresses each risk.

```
Codebase → Scan for each ASI control:
ASI-01: Prompt Injection Protection
ASI-02: Tool Use Governance
ASI-03: Agency Boundaries
ASI-04: Escalation Controls
ASI-05: Trust Boundary Enforcement
ASI-06: Logging & Audit
ASI-07: Identity Management
ASI-08: Policy Integrity
ASI-09: Supply Chain Verification
ASI-10: Behavioral Monitoring
→ Generate Compliance Report (X/10 covered)
```

## The 10 Risks

| Risk | Name | What to Look For |
|------|------|-----------------|
| ASI-01 | Prompt Injection | Input validation before tool calls, not just LLM output filtering |
| ASI-02 | Insecure Tool Use | Tool allowlists, argument validation, no raw shell execution |
| ASI-03 | Excessive Agency | Capability boundaries, scope limits, principle of least privilege |
| ASI-04 | Unauthorized Escalation | Privilege checks before sensitive operations, no self-promotion |
| ASI-05 | Trust Boundary Violation | Trust verification between agents, signed credentials, no blind trust |
| ASI-06 | Insufficient Logging | Structured audit trail for all tool calls, tamper-evident logs |
| ASI-07 | Insecure Identity | Cryptographic agent identity, not just string names |
| ASI-08 | Policy Bypass | Deterministic policy enforcement, no LLM-based permission checks |
| ASI-09 | Supply Chain Integrity | Signed plugins/tools, integrity verification, dependency auditing |
| ASI-10 | Behavioral Anomaly | Drift detection, circuit breakers, kill switch capability |

---

## Check ASI-01: Prompt Injection Protection

Look for input validation that runs **before** tool execution, not after LLM generation.

```python
import re
from pathlib import Path

def check_asi_01(project_path: str) -> dict:
    """ASI-01: Is user input validated before reaching tool execution?"""
    positive_patterns = [
        "input_validation", "validate_input", "sanitize",
        "classify_intent", "prompt_injection", "threat_detect",
        "PolicyEvaluator", "PolicyEngine", "check_content",
    ]
    negative_patterns = [
        r"eval\(", r"exec\(", r"subprocess\.run\(.*shell=True",
        r"os\.system\(",
    ]

    # Scan Python files for signals
    root = Path(project_path)
    positive_matches = []
    negative_matches = []

    for py_file in root.rglob("*.py"):
        content = py_file.read_text(errors="ignore")
        for pattern in positive_patterns:
            if pattern in content:
                positive_matches.append(f"{py_file.name}: {pattern}")
        for pattern in negative_patterns:
            if re.search(pattern, content):
                negative_matches.append(f"{py_file.name}: {pattern}")

    positive_found = len(positive_matches) > 0
    negative_found = len(negative_matches) > 0

    return {
        "risk": "ASI-01",
        "name": "Prompt Injection",
        "status": "pass" if positive_found and not negative_found else "fail",
        "controls_found": positive_matches,
        "vulnerabilities": negative_matches,
        "recommendation": "Add input validation before tool execution, not just output filtering",
    }
```

**What passing looks like:**
```python
# GOOD: Validate before tool execution
result = policy_engine.evaluate(user_input)
if result.action == "deny":
    return "Request blocked by policy"
tool_result = await execute_tool(validated_input)
```

**What failing looks like:**
```python
# BAD: User input goes directly to tool
tool_result = await execute_tool(user_input) # No validation
```

---

## Check ASI-02: Insecure Tool Use

Verify tools have allowlists, argument validation, and no unrestricted execution.

**What to search for:**
- Tool registration with explicit allowlists (not open-ended)
- Argument validation before tool execution
- No `subprocess.run(shell=True)` with user-controlled input
- No `eval()` or `exec()` on agent-generated code without sandbox

**Passing example:**
```python
ALLOWED_TOOLS = {"search", "read_file", "create_ticket"}

def execute_tool(name: str, args: dict):
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"Tool '{name}' not in allowlist")
    validated_args = validate_args(name, args)  # hypothetical per-tool schema check
    return tools[name](**validated_args)
```

---

## Check ASI-03: Excessive Agency

Verify agent capabilities are bounded — not open-ended.

**What to search for:**
- Explicit capability lists or execution rings
- Scope limits on what the agent can access
- Principle of least privilege applied to tool access

**Failing:** Agent has access to all tools by default.
**Passing:** Agent capabilities defined as a fixed allowlist, unknown tools denied.
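
A passing setup can be sketched in a few lines. The `AgentProfile` name is illustrative, not from any specific framework; the point is deny-by-default capability scoping:

```python
# Hypothetical capability profile: each agent gets a fixed allowlist,
# and anything not explicitly granted is denied (least privilege).
from dataclasses import dataclass, field


@dataclass(frozen=True)
class AgentProfile:
    name: str
    allowed_tools: frozenset = field(default_factory=frozenset)

    def can_use(self, tool: str) -> bool:
        # Deny by default: unknown tools are never permitted.
        return tool in self.allowed_tools


support_agent = AgentProfile("support-bot", frozenset({"search", "create_ticket"}))

assert support_agent.can_use("search")
assert not support_agent.can_use("delete_database")
```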

---

## Check ASI-04: Unauthorized Escalation

Verify agents cannot promote their own privileges.

**What to search for:**
- Privilege level checks before sensitive operations
- No self-promotion patterns (agent changing its own trust score or role)
- Escalation requires external attestation (human or SRE witness)

**Failing:** Agent can modify its own configuration or permissions.
**Passing:** Privilege changes require out-of-band approval (e.g., Ring 0 requires SRE attestation).
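
One way to express "escalation requires external attestation" in code — a minimal sketch in which promotion to a more privileged ring needs a token only the out-of-band approver can mint (HMAC here stands in for whatever attestation scheme the system actually uses; all names are illustrative):

```python
# Hypothetical escalation gate: an agent cannot raise its own privilege ring;
# promotion requires an attestation token issued by an out-of-band approver.
import hashlib
import hmac

APPROVER_KEY = b"demo-approver-secret"  # in practice: held by the approval service only


def attest(agent_id: str, target_ring: int) -> str:
    """Issued by a human/SRE approval flow, never callable by the agent itself."""
    msg = f"{agent_id}:{target_ring}".encode()
    return hmac.new(APPROVER_KEY, msg, hashlib.sha256).hexdigest()


def promote(agent_id: str, current_ring: int, target_ring: int, attestation: str) -> int:
    # Lower ring number = more privilege, so target < current is an escalation.
    if target_ring < current_ring:
        expected = attest(agent_id, target_ring)
        if not hmac.compare_digest(expected, attestation):
            raise PermissionError("Escalation denied: missing valid attestation")
    return target_ring
```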

---

## Check ASI-05: Trust Boundary Violation

In multi-agent systems, verify that agents verify each other's identity before accepting instructions.

**What to search for:**
- Agent identity verification (DIDs, signed tokens, API keys)
- Trust score checks before accepting delegated tasks
- No blind trust of inter-agent messages
- Delegation narrowing (child scope <= parent scope)

**Passing example:**
```python
def accept_task(sender_id: str, task: dict):
    trust = trust_registry.get_trust(sender_id)
    if not trust.meets_threshold(0.7):
        raise PermissionError(f"Agent {sender_id} trust too low: {trust.current()}")
    if not verify_signature(task, sender_id):
        raise SecurityError("Task signature verification failed")
    return process_task(task)
```

---

## Check ASI-06: Insufficient Logging

Verify all agent actions produce structured, tamper-evident audit entries.

**What to search for:**
- Structured logging for every tool call (not just print statements)
- Audit entries include: timestamp, agent ID, tool name, args, result, policy decision
- Append-only or hash-chained log format
- Logs stored separately from agent-writable directories

**Failing:** Agent actions logged via `print()` or not logged at all.
**Passing:** Structured JSONL audit trail with chain hashes, exported to secure storage.
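
A hash-chained audit entry can be sketched with the standard library alone (field names here are illustrative; real systems would also export the log off-box):

```python
# Minimal hash-chained audit trail: each record embeds the SHA-256 of the
# previous record, so any in-place edit breaks verification.
import hashlib
import json
import time


def append_audit(log: list, agent_id: str, tool: str, args: dict, decision: str) -> dict:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "ts": time.time(),
        "agent": agent_id,
        "tool": tool,
        "args": args,
        "decision": decision,
        "prev": prev_hash,
    }
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)
    return entry


def verify_chain(log: list) -> bool:
    prev = "0" * 64
    for e in log:
        if e["prev"] != prev:
            return False
        body = {k: v for k, v in e.items() if k != "hash"}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
            return False
        prev = e["hash"]
    return True
```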

---

## Check ASI-07: Insecure Identity

Verify agents have cryptographic identity, not just string names.

**Failing indicators:**
- Agent identified by `agent_name = "my-agent"` (string only)
- No authentication between agents
- Shared credentials across agents

**Passing indicators:**
- DID-based identity (`did:web:`, `did:key:`)
- Ed25519 or similar cryptographic signing
- Per-agent credentials with rotation
- Identity bound to specific capabilities
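
As a stdlib-only sketch of per-agent credentials, the example below uses HMAC with a per-agent secret as a stand-in for asymmetric Ed25519 signing (a real implementation would use key pairs, e.g. via the `cryptography` package, so that verifiers never hold the signing key):

```python
# Sketch: each agent holds a unique secret and signs its messages.
# HMAC here is a symmetric stand-in for asymmetric signing.
import hashlib
import hmac
import secrets


class AgentIdentity:
    def __init__(self, agent_id: str):
        self.agent_id = agent_id
        self._secret = secrets.token_bytes(32)  # never shared across agents

    def sign(self, message: bytes) -> str:
        return hmac.new(self._secret, message, hashlib.sha256).hexdigest()

    def verify(self, message: bytes, signature: str) -> bool:
        return hmac.compare_digest(self.sign(message), signature)


alice = AgentIdentity("did:example:alice")
sig = alice.sign(b"delegate:create_ticket")
assert alice.verify(b"delegate:create_ticket", sig)
assert not alice.verify(b"delegate:delete_db", sig)
```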

---

## Check ASI-08: Policy Bypass

Verify policy enforcement is deterministic — not LLM-based.

**What to search for:**
- Policy evaluation uses deterministic logic (YAML rules, code predicates)
- No LLM calls in the enforcement path
- Policy checks cannot be skipped or overridden by the agent
- Fail-closed behavior (if policy check errors, action is denied)

**Failing:** Agent decides its own permissions via prompt ("Am I allowed to...?").
**Passing:** PolicyEvaluator.evaluate() returns allow/deny in <0.1ms, no LLM involved.
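
A deterministic, fail-closed evaluator can be as simple as pattern rules over data (rule contents below are illustrative):

```python
# Deterministic, fail-closed policy check: plain data rules, no LLM anywhere
# in the enforcement path.
import fnmatch

RULES = [
    {"tool": "delete_*", "action": "deny"},
    {"tool": "read_file", "action": "allow"},
    {"tool": "search", "action": "allow"},
]


def evaluate(tool: str) -> str:
    try:
        for rule in RULES:
            if fnmatch.fnmatch(tool, rule["tool"]):
                return rule["action"]
        return "deny"  # default deny: unknown tools are blocked
    except Exception:
        return "deny"  # fail closed: an evaluator error never grants access


assert evaluate("search") == "allow"
assert evaluate("delete_user") == "deny"
assert evaluate("unknown_tool") == "deny"
```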

---

## Check ASI-09: Supply Chain Integrity

Verify agent plugins and tools have integrity verification.

**What to search for:**
- `INTEGRITY.json` or manifest files with SHA-256 hashes
- Signature verification on plugin installation
- Dependency pinning (no `@latest`, `>=` without upper bound)
- SBOM generation
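
The manifest side of this check can be sketched directly — generate SHA-256 hashes for every file in a plugin directory, then diff against them at verification time (the manifest schema below is illustrative, not a fixed format):

```python
# Sketch: build and verify a SHA-256 integrity manifest for a plugin directory.
import hashlib
import json
from pathlib import Path


def build_manifest(plugin_dir: str) -> dict:
    files = {}
    for f in sorted(Path(plugin_dir).rglob("*")):
        if f.is_file():
            files[str(f.relative_to(plugin_dir))] = hashlib.sha256(f.read_bytes()).hexdigest()
    return {"files": files}


def verify_manifest(plugin_dir: str, manifest: dict) -> list:
    """Return the relative paths that are missing or tampered."""
    bad = []
    for rel, expected in manifest["files"].items():
        path = Path(plugin_dir) / rel
        if not path.is_file() or hashlib.sha256(path.read_bytes()).hexdigest() != expected:
            bad.append(rel)
    return bad
```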

---

## Check ASI-10: Behavioral Anomaly

Verify the system can detect and respond to agent behavioral drift.

**What to search for:**
- Circuit breakers that trip on repeated failures
- Trust score decay over time (temporal decay)
- Kill switch or emergency stop capability
- Anomaly detection on tool call patterns (frequency, targets, timing)

**Failing:** No mechanism to stop a misbehaving agent automatically.
**Passing:** Circuit breaker trips after N failures, trust decays without activity, kill switch available.
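
The circuit-breaker half of a passing implementation can be sketched in a few lines (thresholds and names are illustrative; a real breaker would usually also support a half-open state for recovery):

```python
# Minimal circuit breaker: trips open after N consecutive failures so a
# misbehaving agent is halted automatically.
class CircuitBreaker:
    def __init__(self, max_failures: int = 3):
        self.max_failures = max_failures
        self.failures = 0
        self.open = False

    def record(self, success: bool) -> None:
        if success:
            self.failures = 0  # any success resets the streak
        else:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.open = True  # tripped: all further calls blocked

    def allow(self) -> bool:
        return not self.open


breaker = CircuitBreaker(max_failures=3)
for _ in range(3):
    breaker.record(success=False)
assert not breaker.allow()
```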

---

## Compliance Report Format

```markdown
# OWASP ASI Compliance Report
Generated: 2026-04-01
Project: my-agent-system

## Summary: 7/10 Controls Covered

| Risk | Status | Finding |
|------|--------|---------|
| ASI-01 Prompt Injection | PASS | PolicyEngine validates input before tool calls |
| ASI-02 Insecure Tool Use | PASS | Tool allowlist enforced in governance.py |
| ASI-03 Excessive Agency | PASS | Execution rings limit capabilities |
| ASI-04 Unauthorized Escalation | PASS | Ring promotion requires attestation |
| ASI-05 Trust Boundary | FAIL | No identity verification between agents |
| ASI-06 Insufficient Logging | PASS | AuditChain with SHA-256 chain hashes |
| ASI-07 Insecure Identity | FAIL | Agents use string names, no crypto identity |
| ASI-08 Policy Bypass | PASS | Deterministic PolicyEvaluator, no LLM in path |
| ASI-09 Supply Chain | FAIL | No integrity manifests or plugin signing |
| ASI-10 Behavioral Anomaly | PASS | Circuit breakers and trust decay active |

## Critical Gaps
- ASI-05: Add agent identity verification using DIDs or signed tokens
- ASI-07: Replace string agent names with cryptographic identity
- ASI-09: Generate INTEGRITY.json manifests for all plugins

## Recommendation
Install agent-governance-toolkit for reference implementations of all 10 controls:
pip install agent-governance-toolkit
```

---

## Quick Assessment Questions

Use these to rapidly assess an agent system:

1. **Does user input pass through validation before reaching any tool?** (ASI-01)
2. **Is there an explicit list of what tools the agent can call?** (ASI-02)
3. **Can the agent do anything, or are its capabilities bounded?** (ASI-03)
4. **Can the agent promote its own privileges?** (ASI-04)
5. **Do agents verify each other's identity before accepting tasks?** (ASI-05)
6. **Is every tool call logged with enough detail to replay it?** (ASI-06)
7. **Does each agent have a unique cryptographic identity?** (ASI-07)
8. **Is policy enforcement deterministic (not LLM-based)?** (ASI-08)
9. **Are plugins/tools integrity-verified before use?** (ASI-09)
10. **Is there a circuit breaker or kill switch?** (ASI-10)

If you answer "no" to any of these, that's a gap to address.

---

## Related Resources

- [OWASP Agentic AI Threats](https://owasp.org/www-project-agentic-ai-threats/)
- [Agent Governance Toolkit](https://github.com/microsoft/agent-governance-toolkit) — Reference implementation covering 10/10 ASI controls
- [agent-governance skill](https://github.com/github/awesome-copilot/tree/main/skills/agent-governance) — Governance patterns for agent systems