Integrations

Real adapters for real frameworks.

Every recipe on this page is built against a framework whose format we independently verified against its upstream documentation (linked inline). The SKILL.md exporter itself is unit-tested in tests/unit/test_skill_export.py, and the contract-enforcement loop those recipes rely on is covered by tests/integration/test_agent_loop.py. If we couldn't verify a project actually exists in the form claimed, it's not on this page.

SKILL.md export — Anthropic Claude Skills

Anthropic Claude Skills package a capability as a folder with a SKILL.md at the root. The frontmatter requires name (≤64 chars, kebab-case) and description (≤1024 chars), with optional allowed-tools and disable-model-invocation for Claude Code. The runtime uses progressive disclosure: metadata is loaded at session start, the body is loaded when the skill matches user intent, and bundled scripts/, references/, assets/ are loaded on demand.
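On disk, a skill is just a folder following that model (names here are illustrative):

my-agent/
├── SKILL.md        # frontmatter + instructions, body loaded on intent match
├── scripts/        # executable helpers, loaded on demand
├── references/     # extra documentation, loaded on demand
└── assets/         # templates and other bundled files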

employee.md ships a converter so the same contract can power a Claude Skill:

import pathlib

from runtime import Employee
from runtime.skill_export import to_skill_md

employee = Employee.from_file("employee.md")
skill_md = to_skill_md(employee)

# Write to your Claude Skill directory (Claude Code discovers personal skills here)
skill_dir = pathlib.Path("~/.claude/skills/my-agent").expanduser()
skill_dir.mkdir(parents=True, exist_ok=True)
(skill_dir / "SKILL.md").write_text(skill_md)
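The emitted file leads with the required frontmatter. The values below are illustrative; the exact rendering is pinned by tests/unit/test_skill_export.py:

---
name: my-agent
description: What the agent does and when to invoke it, rendered from the contract's mission (capped at 1024 chars).
---

...instructions rendered from the contract body...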

Source: runtime/skill_export.py · Tests: tests/unit/test_skill_export.py

CrewAI — agents.yaml

CrewAI agents are defined in src/<project>/config/agents.yaml with required fields role, goal, backstory and optional llm, tools, allow_delegation. Map these from employee.md:

import yaml
from runtime import Employee

emp = Employee.from_file("employee.md")
d = emp.data

agents_yaml = {
    emp.data["identity"]["agent_id"]: {
        "role":      d["role"]["title"],
        "goal":      d["mission"]["purpose"],
        "backstory": "\n".join(d["mission"].get("objectives", [])),
        "tools":     d.get("permissions", {}).get("tool_access", []),
        "allow_delegation": bool(
            d.get("delegation", {}).get("sub_delegation", False)
        ),
    }
}

with open("src/myproject/config/agents.yaml", "w") as f:
    yaml.safe_dump(agents_yaml, f, sort_keys=False)
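With illustrative contract values (agent_id research-analyst, one objective, one tool), the dump produces something like:

research-analyst:
  role: Research Analyst
  goal: Keep stakeholders current on the competitive landscape
  backstory: Track competitor launches and summarize findings weekly
  tools:
  - web_search
  allow_delegation: false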

Wire CrewAI to refuse out-of-scope tasks by checking employee.is_in_scope() in your before_kickoff_callbacks hook.
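A minimal guard, assuming the hook receives the kickoff inputs as a dict (the 'task' key and the callback shape vary by CrewAI version, so adapt as needed):

from runtime import Employee, ContractError

employee = Employee.from_file("employee.md")

def refuse_out_of_scope(inputs: dict) -> dict:
    # 'task' is an assumed key; use whatever your kickoff inputs actually carry
    task = inputs.get("task", "")
    if not employee.is_in_scope(task):
        raise ContractError(f"Task out of contract scope: {task!r}")
    return inputs

# crew = Crew(..., before_kickoff_callbacks=[refuse_out_of_scope])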

Reference: docs.crewai.com/concepts/agents

LangGraph — state initialization + tool gating

LangGraph nodes operate on a typed state object. Initialize the state from your contract and gate tool calls against the contract's permissions:

from typing import TypedDict, List
from langgraph.graph import END, StateGraph
from runtime import Employee, ContractError

employee = Employee.from_file("employee.md")

class AgentState(TypedDict):
    messages: List[dict]
    spent_usd: float

def tool_node(state: AgentState, tool_name: str, cost: float):
    # Register via functools.partial(tool_node, tool_name=..., cost=...) so the
    # node matches LangGraph's state-only call signature.
    # Refuse anything outside the contract's tool_access list
    if not employee.is_action_allowed(tool_name):
        raise ContractError(f"Tool '{tool_name}' not in permissions.tool_access")
    # Refuse anything that would exceed the per-task spend cap
    cap = employee.data.get("guardrails", {}).get("max_spend_per_task")
    if cap is not None and cost > cap:   # 'is not None' so a cap of 0 still binds
        raise ContractError(f"Cost {cost} exceeds max_spend_per_task={cap}")
    state["spent_usd"] += cost
    # ... invoke the tool, append to messages ...
    return state

graph = StateGraph(AgentState)
graph.add_node("system_prompt",
               lambda s: {**s, "messages": [{"role": "system", "content": employee.system_prompt()}]})
graph.set_entry_point("system_prompt")
graph.add_edge("system_prompt", END)   # splice your tool nodes in between
app = graph.compile()

AutoGen — AssistantAgent role config

Microsoft AutoGen's AssistantAgent (the classic v0.2 API) takes name, system_message, and llm_config. Drive all three from the contract:

from autogen import AssistantAgent
from runtime import Employee

employee = Employee.from_file("employee.md")
ai = employee.data.get("ai_settings", {})

agent = AssistantAgent(
    name=employee.data["identity"]["agent_id"],
    system_message=employee.system_prompt(),
    llm_config={
        "model": ai.get("model_preference", "gpt-4"),
        "temperature": ai.get("generation_params", {}).get("temperature", 0.0),
        "max_tokens": ai.get("token_limits", {}).get("max_tokens", 4096),
    },
)
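To exercise the agent, pair it with a UserProxyAgent, the standard AutoGen two-agent pattern (the message here is illustrative):

from autogen import UserProxyAgent

user = UserProxyAgent(
    name="user",
    human_input_mode="NEVER",     # fully automated run
    code_execution_config=False,  # no local code execution
)
user.initiate_chat(agent, message="Summarize your contract's mission.")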

Model Context Protocol — server prompt

Expose the rendered system prompt as an MCP prompts resource so any MCP-aware client can pick up the contract:

import mcp.types as types
from mcp.server import Server
from runtime import Employee

employee = Employee.from_file("employee.md")
server = Server("employee-md")

@server.list_prompts()
async def list_prompts() -> list[types.Prompt]:
    return [types.Prompt(
        name="employee_contract",
        description=f"Active contract for {employee.data['identity']['agent_id']}",
    )]

@server.get_prompt()
async def get_prompt(name: str, arguments: dict | None) -> types.GetPromptResult:
    if name != "employee_contract":
        raise ValueError(f"Unknown prompt: {name}")
    # MCP prompt messages are limited to user/assistant roles; the client
    # decides how to slot the contract text into its own system prompt.
    return types.GetPromptResult(messages=[
        types.PromptMessage(
            role="user",
            content=types.TextContent(type="text", text=employee.system_prompt()),
        )
    ])
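The server still needs a transport; the stdio wiring below follows the Python MCP SDK's low-level server pattern:

import asyncio

import mcp.server.stdio

async def main():
    async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
        await server.run(read_stream, write_stream,
                         server.create_initialization_options())

asyncio.run(main())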

Plain Python — works everywhere

The runtime SDK has zero framework dependencies. The simplest possible loop:

from runtime import Employee, ContractError

employee = Employee.from_file("employee.md")

def run_action(name: str, cost: float = 0.0):
    if not employee.is_action_allowed(name):
        return {"ok": False, "reason": f"action '{name}' is prohibited"}
    cap = employee.data.get("guardrails", {}).get("max_spend_per_task")
    if cap is not None and cost > cap:
        return {"ok": False, "reason": f"cost {cost} > max_spend_per_task {cap}"}
    return {"ok": True}

print(run_action("deploy_to_production", cost=0))   # {'ok': False, 'reason': ...}
print(run_action("write_unit_test",      cost=10))  # {'ok': True}

The full end-to-end loop (load → render → enforce → loop) is exercised in tests/integration/test_agent_loop.py; that test, not the prompt text alone, is the evidence that an agent actually stays inside the contract.

A note on integrations we don't list

Earlier drafts of this doc named "OpenClaw" and "HermesAgent" as integration targets. When we tried to verify those projects against primary sources for v1.0.0, we couldn't find independently confirmable evidence they exist in the form previously described — the search results we got back were inconsistent (future-dated commits, implausible star counts) and the URLs didn't resolve to the canonical projects we'd hoped for. Rather than pretend, we removed them from this page. If either project ships a real, public spec we can verify, we'll add it. The forward-looking placeholder shapes still live in INTEGRATION.md under "Experimental / Planned" with explicit caveats.