# Agent integration

Martha is designed to be operated by both human developers and LLM agents. The CLI is the primary surface for both — it ships with a structured JSON output mode, predictable exit codes, and a bundled skill reference describing every command.

## The skill bundle

@aiaiai-pt/martha-cli ships with skills/martha-cli/SKILL.md — a self-contained operational reference covering tenant/agent/function/task primitives, every command, and the JSON contracts the CLI produces. Pull it directly into your agent's prompt context:

```bash
# Print to stdout
martha skill

# Pipe the first 200 lines into the prompt
npx -y @aiaiai-pt/martha-cli@latest skill | head -200
```

The skill is keyword-tagged for tools like Claude Code that auto-discover skills under ~/.claude/skills/ — drop the file there and it activates automatically.
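For the auto-discovery path, installation is a two-line copy — a sketch, assuming only that `martha skill` prints the bundled reference to stdout (the destination mirrors the package's skills/martha-cli/SKILL.md layout):

```bash
# Copy the bundled skill to where Claude Code auto-discovers skills.
mkdir -p ~/.claude/skills/martha-cli
martha skill > ~/.claude/skills/martha-cli/SKILL.md
```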

## JSON output discipline

Every command supports --json. Use it whenever you're piping to anything that parses output:

```bash
martha agents list --json | jq '.[] | {name, status}'
martha tasks list --status open --json | jq '.[] | .id'
```

Without --json, output is human-formatted: it carries color codes and may include VitePress-style hint decorators that won't survive machine parsing.

## Stability

0.x releases may change CLI flags or JSON output between minor versions. Pin a version in agent runtimes:

```bash
npx -y @aiaiai-pt/martha-cli@0.3.0 ...
```

A schema_version envelope in --json output is on the roadmap; until then, treat the contract as best-effort and rebuild your parser on each minor bump.

## Exit codes

| Code | Meaning |
| ---- | ------- |
| 0 | Success |
| 1 | Generic error |
| 2 | Auth failure (token missing, expired, or rejected) |
| 3 | Not found (404) |
| 4 | Validation error (400/422) |
| 5 | Conflict (409) |

Agent runtimes should map 2 → "re-authenticate", 3 → "the named entity doesn't exist", 4 → "your input was malformed" rather than treating non-zero as undifferentiated failure.
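That mapping can live in a small helper — a plain-shell sketch; the message strings are illustrative, not part of the CLI contract:

```bash
# Translate a martha exit code into an agent-facing action.
map_exit() {
  case "$1" in
    0) echo "success" ;;
    1) echo "generic error: inspect stderr" ;;
    2) echo "re-authenticate" ;;
    3) echo "the named entity doesn't exist" ;;
    4) echo "your input was malformed" ;;
    5) echo "conflict: re-read current state before retrying" ;;
    *) echo "unknown exit code $1" ;;
  esac
}

# Usage: martha tasks list --json; map_exit $?
```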

## Auth modes for agent runtimes

The cleanest fit is service-account credentials — no browser flow, env-based, auto-refreshing:

```bash
export MARTHA_CLIENT_ID="martha-tenant-acme-client"
export MARTHA_CLIENT_SECRET="..."
martha auth login --service-account
```

After login, every subsequent CLI invocation uses the cached service-account token, refreshing automatically before expiry.

For a one-shot agent run that doesn't need persistence:

```bash
MARTHA_TOKEN="$(curl -sf -X POST https://auth.../token ...)" \
  npx -y @aiaiai-pt/martha-cli@0.3.0 agents list --json
```

## Diagnosing problems

martha doctor runs five checks (API health, version skew, Keycloak reach, token validity, authenticated request) and prints a specific remedy for each failure. Run it as the first step when an agent run is misbehaving — it will surface configuration or auth issues before you spend tokens on the actual task.
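One way to wire this in is a preflight gate that aborts the run when the checks fail — a minimal sketch, assuming only that martha doctor exits non-zero on failure:

```bash
# Run a diagnostic command before the real task; abort early on failure.
preflight() {
  if ! "$@"; then
    echo "preflight failed: $*" >&2
    return 1
  fi
}

# Gate the agent run on the environment check:
# preflight martha doctor && martha tasks list --json
```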

## Tested integrations

The CLI is exercised via Claude Code's skill mechanism today. Other harnesses that have been tested ad hoc:

  • CrewAI — shell out via subprocess.run(["martha", ...], capture_output=True)
  • Pydantic AI — wrap the CLI in a Tool that returns parsed --json output
  • ork — direct shell exec; the skill has a section dedicated to ork's task-claim/heartbeat/complete pattern
  • Plain Bash agents — martha skill | head -200 to seed prompt context, then --json | jq for everything else
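The plain-Bash pattern can be wrapped in a tiny helper so the same id-extraction works against any command that emits a JSON array — a sketch assuming jq is on PATH (list_ids is a hypothetical name, not a CLI feature):

```bash
# Run any command that prints a JSON array of objects; extract the ids.
list_ids() {
  "$@" | jq -r '.[].id'
}

# Usage with the real CLI:
# list_ids martha tasks list --status open --json
```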

If you've integrated the CLI with a different harness, let us know on the GitHub issue tracker.

Martha is built by aiaiai-pt.