# Tool Index — Cornerstone Template

**Agents:** Read this file before writing any new tool. If the capability you need already exists, extend it. If you must create a new tool, add an entry here as part of the same commit (Tool Writer Phase 3).
Paths below are relative to the template root (`cookiecutter-agentic-ci/`). Tools inside `{{cookiecutter.project_slug}}/tools/` are scaffolded into generated projects with cookiecutter variables resolved.
## Schema Reference

Each entry below follows this schema:

```yaml
# Required fields
path:          # path relative to repository root
domain:        # top-level domain bucket (ci | software/discovery | hardware/systems | ...)
status:        # stable | experimental | deprecated
output:        # where final payload goes: json-stdout | markdown-file | sqlite-db | text-stdout
dependencies:  # Python packages or external binaries required beyond stdlib
invoked_by:    # agent skills or hooks that call this tool
```
## Domain: ci — CI/CD Gate & Telemetry
### check_adr_gate

```yaml
path: "{{cookiecutter.project_slug}}/tools/check_adr_gate.py"
domain: ci
status: stable
output: text-stdout   # human-readable; exits 0 (pass) or 1 (fail)
dependencies: [stdlib]
invoked_by: [.github/workflows/ci.yml, .claude/settings.json PreToolUse hook]
```
**How to Use**

```bash
# Pipe changed-file list from git
git diff --name-only HEAD~1 | python tools/check_adr_gate.py \
  --new-files "$(git diff --name-only --diff-filter=A HEAD~1)" \
  --commit-message "feat: add new feature"

# Explicit file lists
python tools/check_adr_gate.py \
  --changed-files "library/my_package/module.py" \
  --new-files "docs/adr/ADR-0001-new-decision.md"

# Bypass for trivial fixes
python tools/check_adr_gate.py --skip-adr
# OR: include [skip-adr] in the commit message
```
**When to Use**

- Automatically in CI (`adr-gate` job). Must pass before the `test` job runs.
- As a local pre-push check before opening a PR.
- The `PreToolUse` hook in `.claude/settings.json` handles real-time enforcement during development.
**Constraints**

- `GUARDED_PATTERNS` and `ADR_PATTERN` are placeholder strings (`__LIBRARY_PATH__`, `__ADR_PATH__`) in the template; patched by `hooks/post_gen_project.py` at generation time.
- `--skip-adr` is for trivial fixes only. Misuse defeats the ADR mandate.
**Example**

```bash
# In CI, triggered from ci.yml
git diff --name-only $BEFORE $SHA | python tools/check_adr_gate.py \
  --new-files "$(git diff --name-only --diff-filter=A $BEFORE $SHA)" \
  --commit-message "$COMMIT_MESSAGE"
```
### install_hooks

```yaml
path: "{{cookiecutter.project_slug}}/tools/install_hooks.sh"
domain: ci
status: stable
output: text-stdout   # confirmation; exits 0 on success
dependencies: [bash, git]
invoked_by: [hooks/post_gen_project.py (auto), manual]
```
**How to Use**

```bash
bash tools/install_hooks.sh
```
**When to Use**

- Automatically called by `post_gen_project.py` at project generation time.
- Call manually after cloning on a new machine or after `.git/hooks/` is lost.
**Constraints**

- Requires a `.git/` directory. Overwrites any existing `pre-commit` hook without backup.
- The installed hook calls `python tools/check_adr_gate.py` — Python must be on PATH.
### emit_ci_event

```yaml
path: "{{cookiecutter.project_slug}}/tools/emit_ci_event.py"
domain: ci
status: stable
output: text-stdout   # silent on success; warnings to stderr
dependencies: [stdlib]   # no third-party imports — runs before pip install
invoked_by: [.github/workflows/ci.yml]
```
**How to Use**

```bash
python tools/emit_ci_event.py \
  --adr-gate-passed true \
  --tests-passed true \
  --lint-passed true \
  --run-id "$GITHUB_RUN_ID" \
  --ref "$GITHUB_REF" \
  --commit-sha "$GITHUB_SHA" \
  --duration-ms 12000
```
**When to Use**

- At the end of the `test` job in CI to push a `ci.run` event to the Observability Service.
- No-op when `AGENTIC_TELEMETRY_URL` is unset — safe to include unconditionally.
**Constraints**

- Fire-and-forget over HTTP in a background thread; waits at most 4 s.
- Template contains `__AGENTIC_TELEMETRY_URL__` as a placeholder; patched at generation time.
- Must not import project packages — designed to run before `pip install -e .`.
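The fire-and-forget constraint can be sketched in stdlib-only Python. This is an illustrative pattern, not the tool's actual implementation; `post_event` and the payload shape are hypothetical names:

```python
import json
import threading
import urllib.request


def post_event(url: str, payload: dict, timeout: float = 4.0) -> None:
    """Send a telemetry event without blocking the caller for more than `timeout` s."""
    def _send():
        try:
            req = urllib.request.Request(
                url,
                data=json.dumps(payload).encode(),
                headers={"Content-Type": "application/json"},
            )
            urllib.request.urlopen(req, timeout=timeout)
        except Exception:
            pass  # telemetry must never fail the build

    t = threading.Thread(target=_send, daemon=True)
    t.start()
    t.join(timeout)  # wait at most `timeout` seconds, then let CI continue
```

The daemon thread plus bounded `join` is what makes the call safe to include unconditionally: an unreachable endpoint costs at most the timeout, never a hung job.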
### sync_agents

```yaml
path: "{{cookiecutter.project_slug}}/tools/sync_agents.sh"
domain: ci
status: stable
output: text-stderr   # progress/warnings; always exits 0
dependencies: [gh CLI (authenticated)]
invoked_by: [.claude/settings.json UserPromptSubmit hook]
```
**How to Use**

```bash
# Normal sync (throttled to once per 24 h)
bash tools/sync_agents.sh

# Force sync regardless of timestamp
bash tools/sync_agents.sh --force
```
**When to Use**

- Automatically at session start via the `UserPromptSubmit` hook. Do not call manually unless forcing a refresh.
- Use `--force` immediately after a Cornerstone template release.
**Constraints**

- Requires the `gh` CLI installed and authenticated (`gh auth login`).
- Silent skip when `gh` is missing — never blocks the session.
- The template placeholder `{{cookiecutter.project_slug}}` is used to locate skills within the cloned upstream repo.
- Overwrites `.agents/skills/` content from upstream; local-only additions may be lost on the next sync.
### cornerstone eval

```yaml
path: "tools/cornerstone.py"   # orchestrates tools/run_swarm_eval.py
domain: ci
status: experimental
output: text-stdout   # CLI progress and final Swarm report
dependencies: [stdlib, cookiecutter]
invoked_by: [manual, technical committee]
```
**How to Use**

```bash
# Safely verify sandbox scaffolding
python tools/cornerstone.py eval --dry-run --model gemini

# Execute full Swarm evaluation
python tools/cornerstone.py eval --model gemini
```
**When to Use**

- When proposing significant changes to `.agents/skills` or core template architecture.
- Evaluates the "butterfly effect" of prompt changes on the Swarm's collaborative output by dropping the AIs inside a polyglot test maze.
## Domain: software/discovery — Codebase Analysis

All tools in this domain are invoked by the `software-archeologist` and `retro-engineer` skills. Stdout carries the final payload (JSON/Markdown/DOT); all diagnostic output goes to stderr. Tools are stateless unless they write to a SQLite database.
### retro-engineer (main.py)

```yaml
path: "{{cookiecutter.project_slug}}/tools/software/discovery/main.py"
domain: software/discovery
status: stable
output: markdown-file | json-stdout | dot-stdout
dependencies: [stdlib]
invoked_by: [software-archeologist, retro-engineer]
```
**How to Use**

```text
python tools/software/discovery/main.py [PATH] [OPTIONS]

  --output, -o   Output file (default: output/retro-report.md)
  --format, -f   markdown|json|dot (default: markdown)
  --depth, -d    Max call tree depth (default: 6)
  --json         Also emit .json alongside markdown
```
**When to Use**

- Entry point for full-codebase archaeology. Start here when a new codebase arrives.
- Orchestrates: `language_detector` → `structure_mapper` → `call_tree` → `api_mapper` → `decision_extractor` → `reporter`.
**Constraints**

- Does not parse compiled binaries; run `decompiler_manager` first.
- SQL-heavy codebases: use the SQL chain (`sql_procedure_analyzer` → `sql_topology` → `sql_logic_parser`) for richer detail.
### language_detector

```yaml
path: "{{cookiecutter.project_slug}}/tools/software/discovery/language_detector.py"
domain: software/discovery
status: stable
output: json-stdout   # dict: {primary_language, languages, build_system, ...}
dependencies: [stdlib]
invoked_by: [retro-engineer (main.py)]
```

**When to Use** — First step in any archaeology pipeline. Returns a `tech_stack` dict consumed by all downstream tools.

**Constraints** — Heuristic detection (extensions + filenames); may misidentify heavily polyglot projects.
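The heuristic can be sketched in a few lines of stdlib Python. The extension and build-file tables here are illustrative stand-ins, far smaller than the tool's real ones:

```python
from collections import Counter
from pathlib import Path

# Illustrative lookup tables — the real tool's tables are much larger
EXT_LANGS = {".py": "python", ".cs": "csharp", ".java": "java", ".sql": "sql"}
BUILD_FILES = {"pyproject.toml": "python", "pom.xml": "maven", "package.json": "npm"}


def detect(paths):
    """Count recognized extensions and spot a build-system marker file."""
    paths = [Path(p) for p in paths]
    langs = Counter(EXT_LANGS[p.suffix] for p in paths if p.suffix in EXT_LANGS)
    build = next((BUILD_FILES[p.name] for p in paths if p.name in BUILD_FILES), None)
    primary = langs.most_common(1)[0][0] if langs else None
    return {"primary_language": primary, "languages": dict(langs), "build_system": build}


tech_stack = detect(["src/a.py", "src/b.py", "Main.java", "pom.xml"])
print(tech_stack)  # primary_language: "python" (2 .py files vs 1 .java)
```

Counting extensions explains the stated weakness: a polyglot repo with many generated files of one language can outvote the language that actually matters.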
### structure_mapper

```yaml
path: "{{cookiecutter.project_slug}}/tools/software/discovery/structure_mapper.py"
domain: software/discovery
status: stable
output: json-stdout   # {tree, modules, entry_points, public_api}
dependencies: [stdlib]
invoked_by: [retro-engineer (main.py)]
```

**When to Use** — After `language_detector`. Produces the module/class map consumed by `call_tree` and `reporter`.

**Constraints** — Python uses AST (precise); C/C#/Java uses regex (approximate). Syntax errors in target files cause a silent skip.
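The AST-based (precise) path works roughly like this minimal sketch, assuming the stdlib `ast` module; the sample source and output shape are illustrative:

```python
import ast

SOURCE = """
class CardReader:
    def connect(self): ...

def main():
    CardReader().connect()
"""

tree = ast.parse(SOURCE)  # raises SyntaxError on broken files — hence the silent-skip behavior
classes = [n.name for n in ast.walk(tree) if isinstance(n, ast.ClassDef)]
top_level_funcs = [n.name for n in tree.body if isinstance(n, ast.FunctionDef)]
print({"classes": classes, "functions": top_level_funcs})
```

Because `ast.parse` fails hard on invalid syntax, a mapper that wraps it in `try/except` and moves on produces exactly the silent skips the constraint warns about.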
### call_tree

```yaml
path: "{{cookiecutter.project_slug}}/tools/software/discovery/call_tree.py"
domain: software/discovery
status: stable
output: json-stdout   # {edges, graph, dot, entry_flows, external_calls}
dependencies: [stdlib]
invoked_by: [retro-engineer (main.py)]
```

**When to Use** — After `structure_mapper`. Reveals execution flow and hardware/API boundaries (`external_calls`).

**Constraints** — Static analysis only; dynamic dispatch and decorators are not resolved.
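Static call-edge extraction can be illustrated with the stdlib `ast` module (a sketch, not the tool's actual logic):

```python
import ast

SOURCE = """
def tokenize(text): ...

def parse(text):
    return tokenize(text)

def main():
    parse("input")
"""

tree = ast.parse(SOURCE)
edges = []
for fn in (n for n in tree.body if isinstance(n, ast.FunctionDef)):
    for node in ast.walk(fn):
        # Only direct name calls resolve — method calls, dynamic dispatch,
        # and decorated indirection are invisible to this pass.
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            edges.append((fn.name, node.func.id))
print(edges)  # caller → callee pairs
```

The `isinstance(node.func, ast.Name)` filter is the whole story behind the constraint: anything not called by a plain name never becomes an edge.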
### api_mapper

```yaml
path: "{{cookiecutter.project_slug}}/tools/software/discovery/api_mapper.py"
domain: software/discovery
status: stable
output: json-stdout   # dict keyed by API group (winscard, libnfc, pyserial, ...)
dependencies: [stdlib]
invoked_by: [retro-engineer (main.py)]
```

**When to Use** — To inventory which external system APIs a codebase calls and from where.

**Constraints** — Regex-based. Extend `API_GROUPS` in source (with an ADR) for APIs not in the default list.
### decision_extractor

```yaml
path: "{{cookiecutter.project_slug}}/tools/software/discovery/decision_extractor.py"
domain: software/discovery
status: stable
output: json-stdout   # [{kind, description, file, line, snippet, rationale}]
dependencies: [stdlib]
invoked_by: [retro-engineer (main.py), decision-logger skill]
```

**When to Use** — When the decision-logger skill needs to surface implicit architectural choices. Log all findings to `output/findings/FINDINGS.md` (F-XXX format).

**Constraints** — Detects only statically visible decisions. Runtime-computed constants are missed.
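One statically visible "decision" is a hard-coded constant. A minimal sketch of how such findings might be extracted (the regex, sample source, and finding shape are illustrative, not the tool's actual patterns):

```python
import re

SOURCE = """TIMEOUT_MS = 4000  # vendor caps the handshake at 4 seconds
MAX_RETRIES = 3
buffer = []
"""

# Illustrative pattern: ALL_CAPS assignments at line start
CONST = re.compile(r"^(?P<name>[A-Z][A-Z0-9_]*)\s*=\s*(?P<value>\S+)", re.M)
findings = [
    {
        "kind": "magic-constant",
        "description": f"{m['name']} = {m['value']}",
        "line": SOURCE[: m.start()].count("\n") + 1,
    }
    for m in CONST.finditer(SOURCE)
]
print(findings)
```

A constant computed at runtime (`TIMEOUT_MS = base * factor` read from config) never matches a literal pattern like this, which is the limitation the constraint describes.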
### reporter

```yaml
path: "{{cookiecutter.project_slug}}/tools/software/discovery/reporter.py"
domain: software/discovery
status: stable
output: markdown-file | json-stdout | dot-stdout
dependencies: [stdlib]
invoked_by: [retro-engineer (main.py)]
```

**When to Use** — Final step in the retro-engineer pipeline. Do not call directly; invoke `main.py` instead.
### sql_procedure_analyzer

```yaml
path: "{{cookiecutter.project_slug}}/tools/software/discovery/sql_procedure_analyzer.py"
domain: software/discovery
status: stable
output: sqlite-db   # output/analysis_dbs/codebase_index.db
dependencies: [stdlib]
invoked_by: [software-archeologist]
```

**How to Use**

```bash
python tools/software/discovery/sql_procedure_analyzer.py \
  --path /path/to/sql/procedures \
  --db output/analysis_dbs/codebase_index.db
```

**When to Use** — SQL Server–heavy codebases. Run before `sql_topology`, `query_trace`, and `kedro_lineage_builder`. Start from leaf procedures (bottom-up protocol).

**Constraints** — T-SQL only. PL/pgSQL or MySQL require parser extension.
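The core of a dependency pass over T-SQL is spotting `EXEC` calls. A rough stdlib sketch (regex and sample are illustrative, not the analyzer's actual parser):

```python
import re

TSQL = """
CREATE PROCEDURE dbo.usp_Outer AS
BEGIN
    EXEC usp_Leaf;
    EXEC dbo.usp_Helper @x = 1;
END
"""

# Illustrative: match EXEC/EXECUTE followed by an optionally schema-qualified name
calls = re.findall(r"\bEXEC(?:UTE)?\s+(?:dbo\.)?(\w+)", TSQL, re.IGNORECASE)
print(calls)  # callees of usp_Outer
```

A procedure whose body yields no such matches is a leaf — exactly the bottom-up starting points `sql_topology` later surfaces.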
### sql_topology

```yaml
path: "{{cookiecutter.project_slug}}/tools/software/discovery/sql_topology.py"
domain: software/discovery
status: stable
output: text-stdout   # tabular: procedure, complexity, line count
dependencies: [stdlib]
invoked_by: [software-archeologist]
```

**How to Use**

```bash
python tools/software/discovery/sql_topology.py \
  --db output/analysis_dbs/codebase_index.db \
  --limit 20 --max-lines 200
```

**When to Use** — To identify leaf procedures (starting points for bottom-up archaeology). Higher complexity = analyze first.
### sql_logic_parser

```yaml
path: "{{cookiecutter.project_slug}}/tools/software/discovery/sql_logic_parser.py"
domain: software/discovery
status: stable
output: markdown-file | json-stdout
dependencies: [stdlib]
invoked_by: [software-archeologist]
```

**How to Use**

```bash
python tools/software/discovery/sql_logic_parser.py \
  --file /path/to/procedure.sql \
  --output output/specs/procedure_spec.md
```

**When to Use** — On individual T-SQL procedures. Required by the Traceability Mandate: every transformation must be documented before re-implementation.

**Constraints** — Regex-based; complex nested CTEs or dynamic SQL may need manual supplement.
### query_trace

```yaml
path: "{{cookiecutter.project_slug}}/tools/software/discovery/query_trace.py"
domain: software/discovery
status: stable
output: text-stdout   # tabular match list with metrics
dependencies: [stdlib]
invoked_by: [software-archeologist]
```

**How to Use**

```bash
python tools/software/discovery/query_trace.py \
  --db output/analysis_dbs/codebase_index.db \
  --proc "usp_CalculateWPC"
```

**When to Use** — To retrieve metrics for a known procedure name. Uses LIKE matching — narrow the name if results are noisy.
### squit_client

```yaml
path: "{{cookiecutter.project_slug}}/tools/software/discovery/squit_client.py"
domain: software/discovery
status: stable
output: json-stdout
dependencies: [requests]
invoked_by: [software-archeologist, squit skill]
```

**How to Use**

```bash
export SQUIT_API_KEY="your-key-here"
python tools/software/discovery/squit_client.py \
  --query "WPC calculation stored procedures" \
  --limit 10
```

**When to Use** — Semantic search over DeAcero's 5.7 M legacy SQL objects when the exact name is unknown. Consult `.agents/skills/squit.md` for the full workflow.

**Constraints** — Requires `SQUIT_API_KEY` (see `.env.example`) and network access to `squit-mcp.deacero.us`. Not usable in air-gapped environments.
### cluster_deps

```yaml
path: "{{cookiecutter.project_slug}}/tools/software/discovery/cluster_deps.py"
domain: software/discovery
status: stable
output: json-stdout   # {file: [dependency_names]} by manifest type
dependencies: [stdlib]
invoked_by: [software-archeologist, retro-engineer]
```

**When to Use** — To discover and group external package dependencies across multiple manifests. Useful for migration cost assessment.

**Constraints** — Reads manifest files only; does not resolve transitive dependencies.
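Manifest-only dependency grouping can be sketched with stdlib parsing. The manifest contents and the `parse_manifest` helper are illustrative, not the tool's actual code:

```python
import json
import re

# Illustrative in-memory manifests keyed by path
MANIFESTS = {
    "service-a/package.json": '{"dependencies": {"express": "^4.18.0", "pg": "^8.11.0"}}',
    "service-b/requirements.txt": "pandas==2.1.0\nrequests>=2.31\n",
}


def parse_manifest(path, text):
    if path.endswith("package.json"):
        return sorted(json.loads(text).get("dependencies", {}))
    if path.endswith("requirements.txt"):
        # Strip version specifiers, extras, and markers to keep only the name
        return [re.split(r"[=<>!~\[;]", line, maxsplit=1)[0].strip()
                for line in text.splitlines() if line.strip()]
    return []


deps = {path: parse_manifest(path, text) for path, text in MANIFESTS.items()}
print(deps)  # {file: [dependency_names]} by manifest type
```

Nothing here contacts a registry, which is why transitive dependencies (what `express` itself pulls in) stay invisible — the constraint above.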
### dedup_minhash

```yaml
path: "{{cookiecutter.project_slug}}/tools/software/discovery/dedup_minhash.py"
domain: software/discovery
status: stable
output: json-stdout   # [{files: [...], similarity: 0.94}]
dependencies: [datasketch]
invoked_by: [software-archeologist]
```

**How to Use**

```bash
python tools/software/discovery/dedup_minhash.py \
  --path /path/to/codebase \
  --threshold 0.9 --num-perm 128
```

**When to Use** — To identify near-duplicate files before re-implementation. Run after `structure_mapper` when copy-paste proliferation is suspected.

**Constraints** — `datasketch` required. Large codebases (> 50 000 files) may need significant memory.
### code_indexer

```yaml
path: "{{cookiecutter.project_slug}}/tools/software/discovery/code_indexer.py"
domain: software/discovery
status: stable
output: sqlite-db   # output/analysis_dbs/codebase_index.db
dependencies: [datasketch]
invoked_by: [software-archeologist]
```

**When to Use** — Builds the SQLite index (file hashes + MinHash signatures) consumed by `dedup_minhash`, `query_trace`, and `sql_topology`. Run once per archaeology session.

**Constraints** — Shares the DB path with `sql_procedure_analyzer`. Both write to the same DB; run sequentially.
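The file-hash half of the index is simple to sketch with stdlib SQLite (table name and sample files are illustrative, and the real index also stores MinHash signatures):

```python
import hashlib
import sqlite3

# Illustrative in-memory file set — two files share identical content
FILES = {"a.py": b"print('hello')", "copy_of_a.py": b"print('hello')", "b.py": b"pass"}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (path TEXT PRIMARY KEY, sha256 TEXT NOT NULL)")
conn.executemany(
    "INSERT INTO files VALUES (?, ?)",
    [(path, hashlib.sha256(body).hexdigest()) for path, body in FILES.items()],
)

# Exact duplicates fall out of a GROUP BY on the content hash
dupes = conn.execute(
    "SELECT sha256, COUNT(*) FROM files GROUP BY sha256 HAVING COUNT(*) > 1"
).fetchall()
print(dupes)
```

Exact duplicates come free from the hash column; the MinHash signatures handle the *near*-duplicate cases that `dedup_minhash` reports.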
### decompiler_manager

```yaml
path: "{{cookiecutter.project_slug}}/tools/software/discovery/decompiler_manager.py"
domain: software/discovery
status: stable
output: files   # output/decompiled/dotnet/<name>/ or output/decompiled/java/<name>/
dependencies: [ilspycmd (.NET), cfr-cli (Java)]
invoked_by: [software-archeologist]
```

**When to Use** — When source code is unavailable and only `.dll`, `.exe`, `.jar`, or `.class` files exist. Run before retro-engineer.

**Constraints**

- ilspycmd: `dotnet tool install -g ilspycmd`
- cfr: download `cfr-<version>.jar`; expose as `cfr` on PATH
- Decompiled output is approximate — treat as a starting point, not ground truth.
### dll_unpacker

```yaml
path: "{{cookiecutter.project_slug}}/tools/software/discovery/dll_unpacker.py"
domain: software/discovery
status: stable
output: files   # output/unpacked_dll/<name>/ — version info, resources, symbols
dependencies: [pefile]
invoked_by: [software-archeologist]
```

**When to Use** — To extract metadata, import/export tables, and resources from Windows PE files without full decompilation. Use before `decompiler_manager` for a quick API surface scan.

**Constraints** — `pefile` required. Windows PE format only (no ELF or Mach-O).
### kedro_lineage_builder

```yaml
path: "{{cookiecutter.project_slug}}/tools/software/discovery/kedro_lineage_builder.py"
domain: software/discovery
status: stable
output: files   # YAML pipeline definitions + Markdown lineage docs
dependencies: [stdlib, pyyaml]
invoked_by: [software-archeologist, retro-engineer]
```

**When to Use** — After `sql_procedure_analyzer` populates the DB. Derives Kedro-compatible pipeline YAML from the SQL dependency graph. Use when the modernization target is a Kedro data pipeline (standard for DeAcero GCP migrations).

**Constraints** — Requires the `sql_metrics` table. `pyyaml` required. The SQL summary heuristic is pattern-based — verify generated semantics manually.
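For orientation, the kind of fragment such a builder might emit could look like this — node, dataset, and tag names are entirely hypothetical and must be verified against the actual generated output:

```yaml
# Hypothetical fragment — names are illustrative, not real builder output
nodes:
  - name: usp_calculate_wpc
    inputs: [raw.wpc_inputs]
    outputs: [intermediate.wpc_scores]
    tags: [migrated_from_tsql]
```

Each SQL procedure maps to one node; the procedure's read/write tables become the node's inputs and outputs, which is how the SQL dependency graph carries over as pipeline lineage.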
## Domain: hardware/* — Hardware Interfacing

Per the New Stack Mandate: hardware tool directories are intentionally empty. The Tool Writer Agent generates tools on demand. Do not create hardware tools directly.

| Subdirectory | Purpose | Status |
|---|---|---|
| `{{cookiecutter.project_slug}}/tools/hardware/systems/` | Embedded systems, smart card interfacing | Empty — generated by Tool Writer |
| `{{cookiecutter.project_slug}}/tools/hardware/wireless/` | BLE, NFC/RFID interfacing | Empty — generated by Tool Writer |
## Domain: observability — Telemetry & Reporting

The Observability Service (`services/observability/`) is a standalone microservice, not a CLI tool. Interact with it via the `cornerstone` CLI or the HTTP API. It is documented separately in `services/observability/README.md`.

| Component | Path | Purpose |
|---|---|---|
| FastAPI app | `services/observability/app/` | Ingest and query telemetry events |
| Dashboard | `services/observability/dashboard/` | Visual summary of cross-team metrics |
| CLI | `tools/cornerstone.py status` | System health check and diagnostic tool |
| CLI | `cornerstone report` | Query summary, cost, and event data |
### cornerstone.py status

```yaml
path: "{{cookiecutter.project_slug}}/tools/cornerstone.py"
domain: observability
status: stable
output: text-stdout | json-stdout
dependencies: [stdlib]
invoked_by: [manual, CI]
```

**How to Use**

```bash
python tools/cornerstone.py status
python tools/cornerstone.py status --json
```

**When to Use**

- To perform real-time health checks of the project's agentic infrastructure (observability connection, `.env` file, skills freshness, `gh` auth).

**Constraints**

- Requires `AGENTIC_TELEMETRY_URL` and `SQUIT_API_KEY` defined in `.env` to pass all checks.
## Adding a New Tool

1. Invoke the Tool Writer Agent (`.agents/skills/core/tool-writer/SKILL.md`).
2. Tool Writer confirms no existing tool covers the need (deduplication check against this index).
3. Tool Writer writes the tool to `{{cookiecutter.project_slug}}/tools/<domain>/<subdomain>/<tool-name>/`.
4. Tool Writer writes an ADR: `docs/adr/ADR-NNNN-<decision-title>.md`.
5. Tool Writer adds an entry to this file following the schema at the top.
6. All files (tool + ADR + index update) are committed together.

Commit message convention: `feat(tools): add <tool-name> for <purpose>`