feat: add spec-review skill and ai-tools-doctor docs

spec-review: Multi-model review of spec-kit artifacts using orch
- SKILL.md with progressive disclosure pattern
- Review processes: spec, plan, tasks, gate-check
- Prompts for critique, review, and go/no-go decisions

ai-tools-doctor: RFC and implementation report for diagnostics skill

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
dan 2025-12-15 00:43:52 -08:00
parent 90e72f1095
commit 437265b916
15 changed files with 852 additions and 0 deletions

@@ -1,18 +1,28 @@
{"id":"skills-0og","title":"spec-review: Define output capture and audit trail","description":"Reviews happen in terminal then disappear. No audit trail, no diffable history.\n\nAdd:\n- Guidance to tee output to review file (e.g., specs/{branch}/review.md)\n- Standard location for gate check results\n- Template for recording decisions and rationale","status":"open","priority":2,"issue_type":"task","created_at":"2025-12-15T00:23:23.705164812-08:00","updated_at":"2025-12-15T00:23:23.705164812-08:00"}
{"id":"skills-1ig","title":"Brainstorm agent-friendly doc conventions","description":"# Agent-Friendly Doc Conventions - Hybrid Architecture\n\n## FINAL ARCHITECTURE: Vale + LLM Hybrid\n\n### Insight\n\u003e \"Good old deterministic testing (dumb robots) is the best way to keep in check LLMs (smart robots) at volume.\"\n\n### Split by Tool\n\n| Category | Rubrics | Tool |\n|----------|---------|------|\n| Vale-only | Format Integrity, Deterministic Instructions, Terminology Strictness, Token Efficiency | Fast, deterministic, CI-friendly |\n| Vale + LLM | Semantic Headings, Configuration Precision, Security Boundaries | Vale flags, LLM suggests fixes |\n| LLM-only | Contextual Independence, Code Executability, Execution Verification | Semantic understanding required |\n\n### Pipeline\n\n```\n┌─────────────────────────────────────────────────────────────┐\n│ Stage 1: Vale (deterministic, fast, free) │\n│ - Runs in CI on every commit │\n│ - Catches 40% of issues instantly │\n│ - No LLM cost for clean docs │\n└─────────────────────┬───────────────────────────────────────┘\n │ only if Vale passes\n ▼\n┌─────────────────────────────────────────────────────────────┐\n│ Stage 2: LLM Triage (cheap model) │\n│ - Evaluates 3 semantic rubrics │\n│ - Identifies which need patches │\n└─────────────────────┬───────────────────────────────────────┘\n │ only if issues found\n ▼\n┌─────────────────────────────────────────────────────────────┐\n│ Stage 3: LLM Specialists (capable model) │\n│ - One agent per failed rubric │\n│ - Generates patches │\n└─────────────────────────────────────────────────────────────┘\n```\n\n### Why This Works\n- Vale is battle-tested, fast, CI-native\n- LLM only fires when needed (adaptive cost)\n- Deterministic rules catch predictable issues\n- LLM handles semantic/contextual issues\n\n---\n\n## Vale Rules Needed\n\n### Format Integrity\n- Existence: code blocks without language tags\n- Regex for unclosed fences\n\n### Deterministic Instructions \n- Existence: hedging words (\"might\", \"may want to\", \"consider\", \"you could\")\n\n### Terminology Strictness\n- Consistency: flag term variations\n\n### Token Efficiency\n- Existence: filler phrases (\"In this section we will...\", \"As you may know...\")\n\n### Semantic Headings (partial)\n- Existence: banned headings (\"Overview\", \"Introduction\", \"Getting Started\")\n\n### Configuration Precision (partial)\n- Existence: vague versions (\"Python 3.x\", \"recent version\")\n\n### Security Boundaries (partial)\n- Existence: hardcoded API key patterns\n\n---\n\n## NEXT STEPS\n\n1. Create Vale style for doc-review rubrics\n2. Test Vale on sample docs\n3. Design LLM prompts for semantic rubrics only\n4. Wire into orch or standalone","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-04T14:02:04.898026177-08:00","updated_at":"2025-12-04T16:43:53.0608948-08:00","closed_at":"2025-12-04T16:43:53.0608948-08:00"}
{"id":"skills-20s","title":"Compare BOUNDARIES.md with upstream","description":"","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-03T20:15:53.585115099-08:00","updated_at":"2025-12-03T20:19:28.442646801-08:00","closed_at":"2025-12-03T20:19:28.442646801-08:00","dependencies":[{"issue_id":"skills-20s","depends_on_id":"skills-ebh","type":"discovered-from","created_at":"2025-12-03T20:15:53.586442134-08:00","created_by":"daemon"}]}
{"id":"skills-25l","title":"Create orch skill for multi-model consensus","description":"Build a skill that exposes orch CLI capabilities to agents for querying multiple AI models","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-11-30T15:43:49.209528963-08:00","updated_at":"2025-11-30T15:47:36.608887453-08:00","closed_at":"2025-11-30T15:47:36.608887453-08:00"}
{"id":"skills-2xo","title":"Add README.md for web-search skill","description":"web-search skill has SKILL.md and scripts but no README.md. AGENTS.md says README.md is for humans, contains installation instructions, usage examples, prerequisites.","status":"open","priority":2,"issue_type":"task","created_at":"2025-11-30T11:58:14.26066025-08:00","updated_at":"2025-11-30T12:00:25.561281052-08:00","dependencies":[{"issue_id":"skills-2xo","depends_on_id":"skills-vb5","type":"blocks","created_at":"2025-11-30T12:01:30.240439018-08:00","created_by":"daemon"}]}
{"id":"skills-39g","title":"RFC: .skills manifest pattern for per-repo skill deployment","description":"Document the .skills file pattern where projects declare skills in a manifest, .envrc reads it, and agents can query/edit it.","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-11-30T12:37:50.106992381-08:00","updated_at":"2025-11-30T12:43:04.155161727-08:00","closed_at":"2025-11-30T12:43:04.155161727-08:00"}
{"id":"skills-3o7","title":"Fix ai-skills.nix missing sha256 hash","description":"modules/ai-skills.nix:16 has empty sha256 placeholder for opencode-skills npm package. Either get actual hash or remove/comment out the incomplete fetchFromNpm approach.","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-11-30T11:58:24.404929863-08:00","updated_at":"2025-11-30T12:12:39.372107348-08:00","closed_at":"2025-11-30T12:12:39.372107348-08:00"}
{"id":"skills-4pw","title":"spec-review: Expand NFR checklist in prompts","description":"Current prompts mention 'performance, security, accessibility' but miss many critical NFRs.\n\nExpand to include:\n- Security (authn/authz, secrets, threat model)\n- Privacy/compliance (GDPR, PII)\n- Observability (logging, metrics, tracing)\n- Reliability (SLOs, failure modes)\n- Rollout/rollback strategy\n- Migration/backfill\n- Data retention/lifecycle\n- Cost constraints","status":"open","priority":2,"issue_type":"task","created_at":"2025-12-15T00:23:24.485420922-08:00","updated_at":"2025-12-15T00:23:24.485420922-08:00"}
{"id":"skills-4yn","title":"Decide on screenshot-latest skill deployment","description":"DEPLOYED.md shows screenshot-latest as 'Not yet deployed - Pending decision'. Low risk skill that finds existing files. Need to decide whether to deploy or archive.","status":"open","priority":2,"issue_type":"task","created_at":"2025-11-30T11:58:33.099790809-08:00","updated_at":"2025-11-30T11:58:33.099790809-08:00"}
{"id":"skills-53k","title":"Design graph-based doc discovery","description":"How does doc-review find and traverse documentation?\n\nApproach: Start from README.md or AGENTS.md, graph out from there.\n\nDesign questions:\n- Parse markdown links to find related docs?\n- Follow only relative links or also section references?\n- How to handle circular references?\n- Depth limit or exhaustive traversal?\n- What about orphan docs not linked from root?\n- How to represent the graph for chunking decisions?\n\nConsiderations:\n- Large repos may have hundreds of markdown files\n- Not all .md files are \"documentation\" (changelogs, templates, etc.)\n- Some docs are generated and shouldn't be patched\n\nDeliverable: Algorithm/pseudocode for doc discovery + chunking strategy.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-04T14:02:13.316843518-08:00","updated_at":"2025-12-04T16:43:58.277061015-08:00","closed_at":"2025-12-04T16:43:58.277061015-08:00"}
{"id":"skills-5hb","title":"spec-review: Add Prerequisites section documenting dependencies","description":"SKILL.md and process docs assume orch is installed, prompt files exist, models are available, but none of this is documented.\n\nAdd:\n- orch install instructions/link\n- Required env vars and model availability\n- Prompt file locations\n- Expected repo structure (specs/ convention)\n- Troubleshooting section for common failures","status":"open","priority":2,"issue_type":"task","created_at":"2025-12-15T00:23:23.030537501-08:00","updated_at":"2025-12-15T00:23:23.030537501-08:00"}
{"id":"skills-5v8","title":"Replace SKILL.md with upstream version","description":"Upstream has 644 lines vs our 122. Missing: self-test questions, notes quality checks, token checkpointing, database selection, field usage table, lifecycle workflow, common patterns, troubleshooting","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-03T20:15:53.025829293-08:00","updated_at":"2025-12-03T20:16:20.470185004-08:00","closed_at":"2025-12-03T20:16:20.470185004-08:00","dependencies":[{"issue_id":"skills-5v8","depends_on_id":"skills-ebh","type":"discovered-from","created_at":"2025-12-03T20:15:53.027601712-08:00","created_by":"daemon"}]}
{"id":"skills-5vg","title":"spec-review: Add context/assumptions step to prompts","description":"Reviews can become speculative without establishing context first.\n\nAdd to prompts:\n- List assumptions being made\n- Distinguish: missing from doc vs implied vs out of scope\n- Ask clarifying questions if critical context missing","status":"open","priority":3,"issue_type":"task","created_at":"2025-12-15T00:23:25.681448596-08:00","updated_at":"2025-12-15T00:23:25.681448596-08:00"}
{"id":"skills-6jw","title":"spec-review: Add severity labeling to prompts and reviews","description":"Reviews produce flat lists mixing blockers with minor nits. Hard to make decisions.\n\nAdd to prompts:\n- Require severity labels: Blocker / High / Medium / Low\n- Sort output by severity\n- Include impact and likelihood for each issue","status":"open","priority":2,"issue_type":"task","created_at":"2025-12-15T00:23:23.334156366-08:00","updated_at":"2025-12-15T00:23:23.334156366-08:00"}
{"id":"skills-7s0","title":"Compare STATIC_DATA.md with upstream","description":"","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-03T20:15:55.193704589-08:00","updated_at":"2025-12-03T20:19:29.659256809-08:00","closed_at":"2025-12-03T20:19:29.659256809-08:00","dependencies":[{"issue_id":"skills-7s0","depends_on_id":"skills-ebh","type":"discovered-from","created_at":"2025-12-03T20:15:55.195160705-08:00","created_by":"daemon"}]}
{"id":"skills-7sh","title":"Set up bd-issue-tracking Claude Code skill from beads repo","description":"Install the beads Claude Code skill from https://github.com/steveyegge/beads/tree/main/examples/claude-code-skill\n\nThis skill teaches Claude how to effectively use beads for issue tracking across multi-session coding workflows. It provides strategic guidance on when/how to use beads, not just command syntax.\n\nFiles to install to ~/.claude/skills/bd-issue-tracking/:\n- SKILL.md - Core workflow patterns and decision criteria\n- BOUNDARIES.md - When to use beads vs markdown alternatives\n- CLI_REFERENCE.md - Complete command documentation\n- DEPENDENCIES.md - Relationship types and patterns\n- WORKFLOWS.md - Step-by-step procedures\n- ISSUE_CREATION.md - Quality guidelines\n- RESUMABILITY.md - Making work resumable across sessions\n- STATIC_DATA.md - Using beads as reference databases\n\nCan symlink or copy the files. Restart Claude Code after install.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-03T17:53:43.254007992-08:00","updated_at":"2025-12-03T20:04:53.416579381-08:00","closed_at":"2025-12-03T20:04:53.416579381-08:00"}
{"id":"skills-8d4","title":"Compare CLI_REFERENCE.md with upstream","description":"","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-03T20:15:53.268324087-08:00","updated_at":"2025-12-03T20:17:26.552616779-08:00","closed_at":"2025-12-03T20:17:26.552616779-08:00","dependencies":[{"issue_id":"skills-8d4","depends_on_id":"skills-ebh","type":"discovered-from","created_at":"2025-12-03T20:15:53.27265681-08:00","created_by":"daemon"}]}
{"id":"skills-9af","title":"spec-review: Add spike/research task handling","description":"Tasks like 'Investigate X' can linger without clear outcomes.\n\nAdd to REVIEW_TASKS:\n- Flag research/spike tasks\n- Require timebox and concrete outputs (decision record, prototype, risks)\n- Pattern for handling unknowns","status":"open","priority":3,"issue_type":"task","created_at":"2025-12-15T00:23:26.887719136-08:00","updated_at":"2025-12-15T00:23:26.887719136-08:00"}
{"id":"skills-a0x","title":"spec-review: Add traceability requirements across artifacts","description":"Prompts don't enforce spec → plan → tasks linkage. Drift can occur without detection.\n\nAdd:\n- Require trace matrix or linkage in reviews\n- Each plan item should reference spec requirement\n- Each task should reference plan item\n- Flag unmapped items and extra scope","status":"open","priority":3,"issue_type":"task","created_at":"2025-12-15T00:23:25.270581198-08:00","updated_at":"2025-12-15T00:23:25.270581198-08:00"}
{"id":"skills-a23","title":"Update main README to list all 9 skills","description":"Main README.md 'Skills Included' section only lists worklog and update-spec-kit. Repo actually has 9 skills: template, worklog, update-spec-kit, screenshot-latest, niri-window-capture, tufte-press, update-opencode, web-research, web-search.","status":"open","priority":2,"issue_type":"task","created_at":"2025-11-30T11:58:14.042397754-08:00","updated_at":"2025-11-30T12:00:18.916270858-08:00","dependencies":[{"issue_id":"skills-a23","depends_on_id":"skills-4yn","type":"blocks","created_at":"2025-11-30T12:01:30.306742184-08:00","created_by":"daemon"}]}
{"id":"skills-al5","title":"Consider repo-setup-verification skill","description":"The dotfiles repo has a repo-setup-prompt.md verification checklist that could become a skill.\n\n**Source**: ~/proj/dotfiles/docs/repo-setup-prompt.md\n\n**What it does**:\n- Verifies .envrc has use_api_keys and skills loading\n- Checks .skills manifest exists with appropriate skills\n- Optionally checks beads setup\n- Verifies API keys are loaded\n\n**As a skill it could**:\n- Be invoked to audit any repo's agent setup\n- Offer to fix missing pieces\n- Provide consistent onboarding for new repos\n\n**Questions**:\n- Is this better as a skill vs a slash command?\n- Should it auto-fix or just report?\n- Does it belong in skills repo or dotfiles?","status":"open","priority":2,"issue_type":"task","created_at":"2025-12-06T12:38:32.561337354-08:00","updated_at":"2025-12-06T12:38:32.561337354-08:00"}
{"id":"skills-bcu","title":"Design doc-review skill","description":"# doc-review skill\n\nFight documentation drift with a non-interactive review process that generates patchfiles for human review.\n\n## Problem\n- No consistent documentation system across repos\n- Stale content accumulates\n- Structural inconsistencies (docs not optimized for agents)\n\n## Envisioned Workflow\n\n```bash\n# Phase 1: Generate patches (non-interactive, use spare credits, test models)\ndoc-review scan ~/proj/foo --model claude-sonnet --output /tmp/foo-patches/\n\n# Phase 2: Review patches (interactive session)\ncd ~/proj/foo\nclaude # human reviews patches, applies selectively\n```\n\n## Design Decisions Made\n\n- **Trigger**: Manual invocation (not CI). Use case includes burning extra LLM credits, testing models repeatably.\n- **Source of truth**: Style guide embedded in prompt template. Blessed defaults, overridable per-repo.\n- **Output**: Patchfiles for human review in interactive Claude session.\n- **Chunking**: Based on absolute size, not file count. Logical chunks easy for Claude to review.\n- **Scope detection**: Graph-based discovery starting from README.md or AGENTS.md, not glob-all-markdown.\n\n## Open Design Work\n\n### Agent-Friendly Doc Conventions (needs brainstorming)\nWhat makes docs agent-readable?\n- Explicit context (no \"as mentioned above\")\n- Clear section headers for navigation\n- Self-contained sections\n- Consistent terminology\n- Front-loaded summaries\n- ???\n\n### Prompt Content\nFull design round needed on:\n- What conventions to enforce\n- How to express them in prompt\n- Examples of \"good\" vs \"bad\"\n\n### Graph-Based Discovery\nHow does traversal work?\n- Parse links from README/AGENTS.md?\n- Follow relative markdown links?\n- Depth limit?\n\n## Skill Structure (tentative)\n```\nskills/doc-review/\n├── prompt.md # Core review instructions + style guide\n├── scan.sh # Orchestrates: find docs → invoke claude → emit patches\n└── README.md\n```\n\n## Out of Scope (for now)\n- Cross-repo standardization (broader than skills repo)\n- CI integration\n- Auto-apply without human review","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-12-04T14:01:43.305653729-08:00","updated_at":"2025-12-04T16:44:03.468118288-08:00","closed_at":"2025-12-04T16:44:03.468118288-08:00","dependencies":[{"issue_id":"skills-bcu","depends_on_id":"skills-1ig","type":"blocks","created_at":"2025-12-04T14:02:17.144414636-08:00","created_by":"daemon"},{"issue_id":"skills-bcu","depends_on_id":"skills-53k","type":"blocks","created_at":"2025-12-04T14:02:17.164968463-08:00","created_by":"daemon"}]}
{"id":"skills-bvz","title":"spec-review: Add Definition of Ready checklists for each phase","description":"'Ready for /speckit.plan' and similar are underspecified.\n\nAdd concrete checklists:\n- Spec ready for planning: problem statement, goals, constraints, acceptance criteria, etc.\n- Plan ready for tasks: milestones, risks, dependencies, test strategy, etc.\n- Tasks ready for bd: each task has acceptance criteria, dependencies explicit, etc.","status":"open","priority":2,"issue_type":"task","created_at":"2025-12-15T00:23:24.877531852-08:00","updated_at":"2025-12-15T00:23:24.877531852-08:00"}
{"id":"skills-cc0","title":"spec-review: Add anti-hallucination constraints to prompts","description":"Models may paraphrase and present as quotes, or invent requirements/risks not in the doc.\n\nAdd:\n- 'Quotes must be verbatim'\n- 'Do not assume technologies/constraints not stated'\n- 'If missing info, list as open questions rather than speculating'","status":"open","priority":3,"issue_type":"task","created_at":"2025-12-15T00:23:26.045478292-08:00","updated_at":"2025-12-15T00:23:26.045478292-08:00"}
{"id":"skills-cjx","title":"Create spec-review skill for orch + spec-kit integration","description":"A new skill that integrates orch multi-model consensus with spec-kit workflows.\n\n**Purpose**: Use different models/temps/stances to review spec-kit artifacts before phase transitions.\n\n**Proposed commands**:\n- /spec-review.spec - Critique current spec for completeness, ambiguity, gaps\n- /spec-review.plan - Evaluate architecture decisions in plan\n- /spec-review.gate - Go/no-go consensus before phase transition\n\n**Structure**:\n```\nskills/spec-review/\n├── SKILL.md\n├── commands/\n│ ├── spec.md\n│ ├── plan.md\n│ └── gate.md\n└── prompts/\n └── ...\n```\n\n**Key design points**:\n- Finds spec/plan files from current branch or specs/ directory\n- Invokes orch with appropriate prompt, models, stances\n- Presents consensus/critique results\n- AI reviewing AI is valuable redundancy (different models/temps/stances)\n\n**Dependencies**:\n- orch CLI must be available (blocked on dotfiles-3to)\n- spec-kit project structure conventions","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-12-14T17:50:13.22879874-08:00","updated_at":"2025-12-15T00:10:23.122342449-08:00","closed_at":"2025-12-15T00:10:23.122342449-08:00"}
{"id":"skills-cnc","title":"Add direnv helper for per-repo skill deployment","description":"Create sourceable helper script and documentation for the standard per-repo skill deployment pattern using direnv + nix build.","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-11-30T12:19:20.71056749-08:00","updated_at":"2025-11-30T12:37:47.22638278-08:00","closed_at":"2025-11-30T12:37:47.22638278-08:00"}
{"id":"skills-czz","title":"Research OpenCode agents for skill integration","description":"DEPLOYMENT.md:218 has TODO to research OpenCode agents. Need to understand how Build/Plan/custom agents work and whether skills need agent-specific handling.","status":"open","priority":2,"issue_type":"task","created_at":"2025-11-30T11:58:24.855701141-08:00","updated_at":"2025-11-30T11:58:24.855701141-08:00"}
{"id":"skills-d6r","title":"Design: orch as local agent framework","description":"# Orch Evolution: From Consensus Tool to Agent Framework\n\n## Current State\n- `orch consensus` - multi-model queries\n- `orch chat` - single model queries\n- No state, no pipelines, no retries\n\n## Proposed Extensions\n\n### Pipeline Mode\n```bash\norch pipeline config.yaml\n```\nWhere config.yaml defines:\n- Stages (triage → specialists → verify)\n- Routing logic (if triage finds X, run specialist Y)\n- Retry policy\n\n### Evaluate Mode (doc-review specific)\n```bash\norch evaluate doc.md --rubrics=1,4,7 --output=patches/\n```\n- Applies specific rubrics to document\n- Outputs JSON or patches\n\n### Parallel Mode\n```bash\norch parallel --fan-out=5 --template=\"evaluate {rubric}\" rubrics.txt\n```\n- Fan-out to multiple parallel calls\n- Aggregate results\n\n## Open Questions\n1. Does this belong in orch or a separate tool?\n2. Should orch pipelines be YAML-defined or code-defined?\n3. How does this relate to Claude Code Task subagents?\n4. What's the minimal viable extension?\n\n## Context\nEmerged from doc-review skill design - need multi-pass evaluation but don't want to adopt heavy framework (LangGraph, etc.)","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-04T16:06:56.681282678-08:00","updated_at":"2025-12-04T16:44:08.652185174-08:00","closed_at":"2025-12-04T16:44:08.652185174-08:00"}
@@ -21,10 +31,13 @@
{"id":"skills-ebh","title":"Compare bd-issue-tracking skill files with upstream","description":"Fetch upstream beads skill files and compare with our condensed versions to identify differences","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-03T20:14:07.886535859-08:00","updated_at":"2025-12-03T20:19:37.579815337-08:00","closed_at":"2025-12-03T20:19:37.579815337-08:00"}
{"id":"skills-fo3","title":"Compare WORKFLOWS.md with upstream","description":"","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-03T20:15:54.283175561-08:00","updated_at":"2025-12-03T20:19:28.897037199-08:00","closed_at":"2025-12-03T20:19:28.897037199-08:00","dependencies":[{"issue_id":"skills-fo3","depends_on_id":"skills-ebh","type":"discovered-from","created_at":"2025-12-03T20:15:54.286009672-08:00","created_by":"daemon"}]}
{"id":"skills-fvx","title":"use-skills.sh: stderr from nix build corrupts symlinks when repo is dirty","description":"In use-skills.sh, the line:\n\n```bash\nout=$(nix build --print-out-paths --no-link \"${SKILLS_REPO}#${skill}\" 2\u003e\u00261) || {\n```\n\nThe `2\u003e\u00261` merges stderr into stdout. When the skills repo is dirty, nix emits a warning to stderr which gets captured into $out and used as the symlink target.\n\nResult: symlinks like:\n```\norch -\u003e warning: Git tree '/home/dan/proj/skills' is dirty\n/nix/store/j952hgxixifscafb42vmw9vgdphi1djs-ai-skill-orch\n```\n\nFix: redirect stderr to /dev/null or filter it out before creating symlink.","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-12-14T11:54:03.06502295-08:00","updated_at":"2025-12-14T11:59:25.472044754-08:00","closed_at":"2025-12-14T11:59:25.472044754-08:00"}
{"id":"skills-gas","title":"spec-review: File discovery is brittle, can pick wrong file silently","description":"The fallback `find ... | head -1` is non-deterministic and can select wrong spec/plan/tasks file without user noticing. Branch names with `/` also break path construction.\n\nFixes:\n- Fail fast if expected file missing\n- Print chosen file path before proceeding\n- Require explicit confirmation if falling back\n- Handle branch names with slashes","status":"open","priority":2,"issue_type":"bug","created_at":"2025-12-15T00:23:22.762045913-08:00","updated_at":"2025-12-15T00:23:22.762045913-08:00"}
{"id":"skills-h9f","title":"spec-review: Balance negativity bias in prompts","description":"'Be critical' and 'devil's advocate' can bias toward over-flagging without acknowledging what's good.\n\nAdd:\n- 'List top 3 strongest parts of the document'\n- 'Call out where document is sufficiently clear/testable'\n- Categorize :against concerns as confirmed/plausible/rejected","status":"open","priority":3,"issue_type":"task","created_at":"2025-12-15T00:23:26.418087998-08:00","updated_at":"2025-12-15T00:23:26.418087998-08:00"}
{"id":"skills-kmj","title":"Orch skill: document or handle orch not in PATH","description":"Skill docs show 'orch consensus' but orch requires 'uv run' from ~/proj/orch. Either update skill to invoke correctly or document installation requirement.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-01T17:29:48.844997238-08:00","updated_at":"2025-12-01T18:28:11.374048504-08:00","closed_at":"2025-12-01T18:28:11.374048504-08:00"}
{"id":"skills-lie","title":"Compare DEPENDENCIES.md with upstream","description":"","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-03T20:15:53.925914243-08:00","updated_at":"2025-12-03T20:19:28.665641809-08:00","closed_at":"2025-12-03T20:19:28.665641809-08:00","dependencies":[{"issue_id":"skills-lie","depends_on_id":"skills-ebh","type":"discovered-from","created_at":"2025-12-03T20:15:53.9275694-08:00","created_by":"daemon"}]}
{"id":"skills-lvg","title":"Compare ISSUE_CREATION.md with upstream","description":"","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-03T20:15:54.609282051-08:00","updated_at":"2025-12-03T20:19:29.134966356-08:00","closed_at":"2025-12-03T20:19:29.134966356-08:00","dependencies":[{"issue_id":"skills-lvg","depends_on_id":"skills-ebh","type":"discovered-from","created_at":"2025-12-03T20:15:54.610717055-08:00","created_by":"daemon"}]}
{"id":"skills-m21","title":"Apply niri-window-capture code review recommendations","description":"CODE-REVIEW-niri-window-capture.md identifies action items: add dependency checks to scripts, improve error handling for niri failures, add screenshot directory validation, implement rate limiting. See High/Medium priority sections.","status":"open","priority":2,"issue_type":"task","created_at":"2025-11-30T11:58:24.648846875-08:00","updated_at":"2025-11-30T11:58:24.648846875-08:00"}
{"id":"skills-mx3","title":"spec-review: Define consensus thresholds and decision rules","description":"'Use judgment' for mixed results leads to inconsistent decisions.\n\nDefine:\n- What constitutes consensus (2/3? unanimous?)\n- How to handle NEUTRAL votes\n- Tie-break rules\n- When human override is acceptable and how to document it","status":"open","priority":2,"issue_type":"task","created_at":"2025-12-15T00:23:24.121175736-08:00","updated_at":"2025-12-15T00:23:24.121175736-08:00"}
{"id":"skills-pu4","title":"Clean up stale beads.left.jsonl merge artifact","description":"bd doctor flagged multiple JSONL files. beads.left.jsonl is empty merge artifact that should be removed: git rm .beads/beads.left.jsonl","status":"closed","priority":2,"issue_type":"task","created_at":"2025-11-30T11:58:33.292221449-08:00","updated_at":"2025-11-30T12:37:49.916795223-08:00","closed_at":"2025-11-30T12:37:49.916795223-08:00"}
{"id":"skills-qeh","title":"Add README.md for web-research skill","description":"web-research skill has SKILL.md and scripts but no README.md. AGENTS.md says README.md is for humans, contains installation instructions, usage examples, prerequisites.","status":"open","priority":2,"issue_type":"task","created_at":"2025-11-30T11:58:14.475647113-08:00","updated_at":"2025-11-30T12:00:30.309340468-08:00","dependencies":[{"issue_id":"skills-qeh","depends_on_id":"skills-vb5","type":"blocks","created_at":"2025-11-30T12:01:30.278784381-08:00","created_by":"daemon"}]}
{"id":"skills-uz4","title":"Compare RESUMABILITY.md with upstream","description":"","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-03T20:15:54.897754095-08:00","updated_at":"2025-12-03T20:19:29.384645842-08:00","closed_at":"2025-12-03T20:19:29.384645842-08:00","dependencies":[{"issue_id":"skills-uz4","depends_on_id":"skills-ebh","type":"discovered-from","created_at":"2025-12-03T20:15:54.899671178-08:00","created_by":"daemon"}]}

@@ -0,0 +1,83 @@
# AI Tools Doctor Implementation Report
**Date**: 2025-12-01
**Status**: MVP Complete
**RFC**: docs/rfc-ai-tools-doctor.md
## Summary
Implemented the `ai-tools-doctor` CLI with `check` and `sync` commands per the RFC. MVP scope: no `bump` command yet.
## What Was Built
### dotfiles repo
**`config/ai-tools/tools.json`** - Updated manifest
- Added `source` field (npm/nix) to each tool
- Pinned versions (was "latest")
- Added nix tools (opencode, beads)
**`bin/ai-tools-doctor`** - New CLI (212 lines bash)
- `check` - Compare installed vs declared versions
- `check --json` - Machine-readable output
- `check --quiet` - Exit code only
- `sync` - Install/update npm tools to declared versions
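For npm tools, `sync` amounts to a pinned `npm install` into the tool's install directory. A minimal sketch of that step, using the manifest fields from the RFC example (the variable values are illustrative, not the script's verbatim code):

```shell
#!/usr/bin/env bash
# Sketch: build the pinned-install command for one npm tool from its
# manifest fields (npm_package, version, install_dir).
pkg="@anthropic-ai/claude-code"
ver="2.0.55"
dir="$HOME/.local/share/claude-code"

cmd="npm install --prefix $dir ${pkg}@${ver}"
echo "$cmd"
```

Pinning the version in the install command (rather than `@latest`) is what keeps npm tools from drifting between syncs.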
### skills repo
**`skills/ai-tools-doctor/`** - Skill for Claude Code / OpenCode
- SKILL.md - Agent instructions
- README.md - Human documentation
## Deviations from RFC
| RFC | Implementation | Reason |
|-----|----------------|--------|
| Exit code 1 for "updates available" | Exit 0 for success, 1 for errors only | Non-zero exit codes confuse many callers |
| `expected_version` for nix | `version` for all | Consistency |
| opencode version 0.2.1 | 1.0.119 | RFC had wrong version |
## Current Tool Versions
```
claude-code 2.0.55 (npm)
openai-codex 0.63.0 (npm)
opencode 1.0.119 (nix)
beads 0.26.0 (nix)
```
## Still TODO
1. **Delete old script** - `bin/ai-tools-sync.sh` still exists and can be removed
2. **Home Manager integration** - Symlink manifest to `~/.config/ai-tools/`
3. **Session hook** - Optional: run check on agent session start
4. **Delete obsolete skill** - `.claude/skills/update-claude/` per RFC
## Testing
```bash
# All pass
ai-tools-doctor check # Human output
ai-tools-doctor check --json # JSON output
ai-tools-doctor check --quiet # Exit code only
ai-tools-doctor sync # Syncs npm tools
```
## Files Changed
```
dotfiles/
├── config/ai-tools/tools.json (modified)
└── bin/ai-tools-doctor (new)
skills/
└── skills/ai-tools-doctor/
├── SKILL.md (new)
└── README.md (new)
```
## Next Steps
1. Review and commit dotfiles changes
2. Add to Home Manager if needed
3. Consider a `bump` command later (fetch latest versions from the npm registry)

docs/rfc-ai-tools-doctor.md

@@ -0,0 +1,168 @@
# RFC: AI Tools Doctor - Unified Version Management
**Status**: Draft
**Author**: dotfiles team
**Date**: 2025-12-01
**Related**: dotfiles-6l8
## Problem
Multiple AI coding tools (claude-code, codex, opencode, beads) are managed inconsistently:
- NPM tools want to self-update, drift from known state
- Nix tools are declarative but version info scattered
- No unified view for agents to report tool status
- Current `ai-tools-sync.sh` only handles npm tools
## Goals
1. **Unified view** - Single tool to check all AI tools regardless of source
2. **Declarative** - Pin versions in manifest, updates are explicit
3. **Agent-first** - JSON output, exit codes, fast for hooks
4. **Nix-ian** - Reproducible, version-controlled, git-trackable
## Design
### Manifest: `config/ai-tools/tools.json`
```json
{
"tools": {
"claude-code": {
"source": "npm",
"npm_package": "@anthropic-ai/claude-code",
"version": "2.0.55",
"install_dir": "~/.local/share/claude-code",
"binary": "claude"
},
"openai-codex": {
"source": "npm",
"npm_package": "@openai/codex",
"version": "0.63.0",
"install_dir": "~/.local/share/openai-codex",
"binary": "codex"
},
"opencode": {
"source": "nix",
"expected_version": "0.2.1",
"binary": "opencode"
},
"beads": {
"source": "nix",
"expected_version": "0.26.0",
"binary": "bd"
}
}
}
```
### CLI: `ai-tools-doctor`
```bash
# Check all tools (human-readable)
ai-tools-doctor check
# Check all tools (machine-readable for agents)
ai-tools-doctor check --json
# Exit code only (for hooks)
ai-tools-doctor check --quiet
# Sync npm tools to declared versions
ai-tools-doctor sync
# Update manifest to latest, then sync
ai-tools-doctor bump [tool]
ai-tools-doctor bump --all
```
### Exit Codes
- `0` - All tools match declared versions
- `1` - Updates available (informational, not error)
- `2` - Error (missing tools, network failure, etc.)
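A hook consuming these codes might look like the following sketch (exit-code meanings as proposed in this RFC, which differ from what eventually shipped):

```shell
# Hook sketch: stay quiet on 0, surface 1, complain on 2
rc=0
ai-tools-doctor check --quiet || rc=$?
case "$rc" in
  0) : ;;                                   # all tools match, say nothing
  1) echo "ai-tools: updates available" ;;
  2) echo "ai-tools: check failed" >&2 ;;
  *) echo "ai-tools: unexpected exit $rc" >&2 ;;
esac
```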
### JSON Output Schema
```json
{
"status": "updates_available",
"updates_count": 1,
"tools": {
"claude-code": {
"source": "npm",
"installed": "2.0.55",
"declared": "2.0.55",
"available": "2.0.56",
"status": "update_available"
},
"beads": {
"source": "nix",
"installed": "0.26.0",
"declared": "0.26.0",
"status": "ok"
}
}
}
```
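Agents can slice this schema with `jq`. A self-contained sketch, with the sample document above inlined (in practice, pipe `ai-tools-doctor check --json` instead):

```shell
# List tools whose status is not "ok", using the schema fields above
json='{"status":"updates_available","updates_count":1,"tools":{"claude-code":{"source":"npm","installed":"2.0.55","declared":"2.0.55","available":"2.0.56","status":"update_available"},"beads":{"source":"nix","installed":"0.26.0","declared":"0.26.0","status":"ok"}}}'
echo "$json" | jq -r '
  .tools | to_entries[]
  | select(.value.status != "ok")
  | "\(.key): \(.value.installed) -> \(.value.available // .value.declared)"'
# prints: claude-code: 2.0.55 -> 2.0.56
```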
### Shared Skill: `claude/skills/ai-tools-doctor/`
Following the existing skills pattern, this is deployed to both Claude and OpenCode:
```
claude/skills/ai-tools-doctor/
├── SKILL.md # Agent invocation description
├── README.md # Documentation
└── scripts/
└── check.sh # Wrapper calling bin/ai-tools-doctor.sh
```
**Replaces**: `.claude/skills/update-claude/` (Claude-only, obsolete)
### Agent Integration
Session start hook or skill can run:
```bash
result=$(ai-tools-doctor check --json 2>/dev/null)
if [[ $? -eq 1 ]]; then
    # Agent parses the JSON in $result and mentions available updates to the user
    :
fi
```
## Key Decisions
### Version Pinning (chosen over "latest")
**Why**: Aligns with Nix philosophy. Updates are intentional, tracked in git. Can review changelogs before bumping. Reproducible: same manifest = same tools.
**Tradeoff**: Requires a manual `bump` command. Mitigated by agents surfacing available updates to the user.
### Nix Tools as Read-Only
**Why**: Nix tools are managed by `nix flake update`, not this tool. Doctor reports their status but doesn't manage them.
**Behavior**: Shows installed vs declared. No "available" check - that's nix territory.
### Shared Skill (not Claude-only)
**Why**: Both Claude and OpenCode use these tools. Unified skill works for both agents via existing Home Manager deployment pattern.
## Files to Modify
1. `config/ai-tools/tools.json` - Add source types, pin versions, add nix tools
2. `bin/ai-tools-sync.sh` → `bin/ai-tools-doctor.sh` - Rename, add JSON/exit codes
3. `claude/skills/ai-tools-doctor/` - New shared skill
4. `home/claude.nix` - Deploy shared skill
5. Delete `.claude/skills/update-claude/` - Replaced
## Open Questions
1. Cache npm registry responses for faster checks?
2. Session start hook integration - automatic or manual trigger?
3. Should `bump` auto-commit the manifest change?
## Out of Scope
- Managing nix tool versions (use `nix flake update`)
- Auto-updating during sessions (agent reports, human decides)
- Other non-AI tools


@ -17,6 +17,7 @@
"niri-window-capture"
"orch"
"screenshot-latest"
"spec-review"
"tufte-press"
"worklog"
"update-spec-kit"


@ -0,0 +1,52 @@
# AI Tools Doctor Skill
Skill for checking and syncing AI coding tool versions.
## Purpose
Enables agents to:
- Report tool version status to users
- Detect version mismatches
- Trigger npm tool sync when needed
## Quick Start
```bash
# Check all tools
ai-tools-doctor check
# JSON output for parsing
ai-tools-doctor check --json
# Sync npm tools to pinned versions
ai-tools-doctor sync
```
## Files
- `SKILL.md` - Agent instructions (loaded by Claude Code/OpenCode)
- `README.md` - Human documentation (this file)
## Prerequisites
- `ai-tools-doctor` CLI (from dotfiles `bin/`)
- `jq` for JSON parsing
- `npm` for syncing npm tools
## Manifest
Tools are declared in `~/.config/ai-tools/tools.json`:
```json
{
"tools": {
"claude-code": {
"source": "npm",
"package": "@anthropic-ai/claude-code",
"version": "2.0.55",
"install_dir": "~/.local/share/claude-code",
"binary": "claude"
}
}
}
```
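With `jq` (already a prerequisite), pinned versions can be read straight from the manifest. A sketch with the manifest inlined for illustration:

```shell
# Query a pinned version; in practice read ~/.config/ai-tools/tools.json directly
manifest='{"tools":{"claude-code":{"source":"npm","package":"@anthropic-ai/claude-code","version":"2.0.55"}}}'
echo "$manifest" | jq -r '.tools["claude-code"].version'
# prints: 2.0.55
```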


@ -0,0 +1,82 @@
---
name: ai-tools-doctor
description: Check and sync AI coding tool versions against declared manifest
---
# AI Tools Doctor
Check installed AI tools against declared versions and sync npm tools to pinned versions.
## When to Use
- At session start to verify tool versions
- When user asks about AI tool status or versions
- Before/after updating tools
- When troubleshooting tool issues
## Commands
```bash
# Check all tools (human-readable)
ai-tools-doctor check
# Check all tools (machine-readable for parsing)
ai-tools-doctor check --json
# Exit code only (for scripts/hooks)
ai-tools-doctor check --quiet
# Sync npm tools to declared versions
ai-tools-doctor sync
```
## Managed Tools
| Tool | Source | Binary |
|------|--------|--------|
| claude-code | npm | `claude` |
| openai-codex | npm | `codex` |
| opencode | nix | `opencode` |
| beads | nix | `bd` |
## Output
### Human-readable (default)
```
beads (nix)
✓ 0.26.0
claude-code (npm)
✓ 2.0.55
```
### JSON (--json)
```json
{
"status": "ok",
"tools": {
"claude-code": {
"source": "npm",
"installed": "2.0.55",
"declared": "2.0.55",
"status": "ok"
}
}
}
```
## Status Values
- `ok` - Installed version matches declared
- `version_mismatch` - Installed differs from declared
- `not_installed` - Tool not found
## Exit Codes
- `0` - All tools match declared versions
- `1` - Mismatch or error
## Notes
- Nix tools are read-only (reports status, doesn't manage)
- Use `sync` to install/update npm tools to declared versions
- Manifest location: `~/.config/ai-tools/tools.json`


@ -0,0 +1,66 @@
# Gate Check Process
Go/no-go consensus before phase transitions.
## When to Use
- After spec review, before planning
- After plan review, before implementation
- Any time you want a sanity check
## 1. Determine the Phase
```bash
BRANCH=$(git branch --show-current)
# What are we gating?
if [[ -f "specs/${BRANCH}/plan.md" ]]; then
PHASE="plan"
FILE="specs/${BRANCH}/plan.md"
else
PHASE="spec"
FILE="specs/${BRANCH}/spec.md"
fi
```
## 2. Run the Gate Check
```bash
orch consensus --mode vote --temperature 0.5 \
--file "$FILE" \
"$(cat ~/.claude/skills/spec-review/prompts/gate-check.txt)" \
flash deepseek gpt
```
**Why these settings**:
- `--mode vote` - Structured Support/Oppose/Neutral verdicts
- `--temperature 0.5` - More deterministic, less creative
- Fast models for quick sanity check
## 3. Interpret Verdicts
**Vote outcomes**:
- `SUPPORT` - Ready to proceed
- `OPPOSE` - Has concerns, should not proceed
- `NEUTRAL` - On the fence, needs more info
**Consensus thresholds**:
- All support → proceed
- Any oppose → review concerns before proceeding
- Mixed → use judgment, consider concerns
## 4. Handle Blockers
If models oppose, they should provide:
- Specific blocker (quote the problematic text)
- Why it's a blocker
- What would resolve it
Address blockers, re-run gate check.
## 5. Record Decision
Whether proceeding or not, note:
- Gate check result
- Any concerns raised
- Decision and rationale
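One lightweight convention for this is appending to a review file next to the spec (a sketch; the `specs/{branch}/review.md` location and record format are suggestions, not part of the skill):

```shell
# Append a gate-check record so decisions leave a diffable audit trail
BRANCH=$(git branch --show-current 2>/dev/null || echo detached)
REVIEW="specs/${BRANCH}/review.md"
mkdir -p "$(dirname "$REVIEW")"
{
  echo "## Gate check: ${PHASE:-spec} ($(date +%F))"
  echo "- Verdicts: <per-model SUPPORT/OPPOSE/NEUTRAL>"
  echo "- Concerns: <summary of issues raised>"
  echo "- Decision: <proceed|hold> - <rationale>"
} >> "$REVIEW"
```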


@ -0,0 +1,68 @@
# Plan Review Process
Detailed process for evaluating architecture and technology decisions before implementation.
## 1. Locate the Files
```bash
BRANCH=$(git branch --show-current)
PLAN_FILE="specs/${BRANCH}/plan.md"
SPEC_FILE="specs/${BRANCH}/spec.md"
# Fallback
if [[ ! -f "$PLAN_FILE" ]]; then
PLAN_FILE=$(find specs -name "plan.md" -type f 2>/dev/null | head -1)
fi
```
If no plan is found, the user needs to run `/speckit.plan` first.
## 2. Run Devil's Advocate Review
```bash
orch consensus --mode open --temperature 1.0 \
--file "$PLAN_FILE" \
--file "$SPEC_FILE" \
"$(cat ~/.claude/skills/spec-review/prompts/plan-review.txt)" \
flash:for deepseek:against gpt:neutral
```
**Why these settings**:
- `--mode open` - Freeform responses, not structured verdicts
- `--temperature 1.0` - Maximum divergent thinking
- **Stances**:
- `:for` - Argues in favor of the plan
- `:against` - Actively looks for problems
- `:neutral` - Balanced assessment
## 3. Interpret the Stances
**From the :for model**:
- What's good about this approach
- Why the choices make sense
- Strengths to preserve
**From the :against model** (most valuable):
- Risks and concerns
- Alternative approaches
- What could go wrong
- Hidden complexity
**From the :neutral model**:
- Balanced tradeoff assessment
- Overall recommendation
- Key decision points
## 4. Synthesize
- Where models agree → high confidence
- Where models disagree → needs decision
- :against concerns that :for can't rebut → real issues
## 5. Next Steps
| Result | Action |
|--------|--------|
| Significant concerns | Revise plan |
| Minor risks noted | Document risks, proceed |
| Clean | Ready for `/speckit.tasks` |


@ -0,0 +1,57 @@
# Spec Review Process
Detailed process for reviewing a spec-kit specification before moving to planning.
## 1. Locate the Spec
```bash
BRANCH=$(git branch --show-current)
SPEC_FILE="specs/${BRANCH}/spec.md"
# Fallback if not on feature branch
if [[ ! -f "$SPEC_FILE" ]]; then
SPEC_FILE=$(find specs -name "spec.md" -type f 2>/dev/null | head -1)
fi
```
If no spec is found, the user needs to run `/speckit.specify` first.
## 2. Run the Critique
```bash
orch consensus --mode critique --temperature 0.8 \
--file "$SPEC_FILE" \
"$(cat ~/.claude/skills/spec-review/prompts/spec-critique.txt)" \
flash deepseek gpt
```
**Why these settings**:
- `--mode critique` - Models actively look for problems
- `--temperature 0.8` - Encourages finding edge cases
- Three diverse models catch different blind spots
## 3. Interpret Results
Look for:
**Consensus issues** (multiple models flagged):
- These are high-confidence problems
- Address before proceeding
**Single-model concerns**:
- May be false positives or edge cases
- Evaluate on merit
**Categories of issues**:
- Requirements clarity - ambiguous language
- Completeness - missing edge cases
- Scope - feature creep or unbounded
- Feasibility - contradictions, risks
## 4. Next Steps
| Result | Action |
|--------|--------|
| Blockers found | Fix spec, re-run review |
| Minor issues | Consider fixing, can proceed |
| Clean | Ready for `/speckit.plan` |


@ -0,0 +1,74 @@
# Tasks Review Process
Review generated task list before converting to bd issues.
## When to Use
After `/speckit.tasks` generates tasks, before `bd create`. Once tasks become issues, you're committed to the breakdown.
## 1. Locate the Tasks
```bash
BRANCH=$(git branch --show-current)
TASKS_FILE="specs/${BRANCH}/tasks.md"
if [[ ! -f "$TASKS_FILE" ]]; then
TASKS_FILE=$(find specs -name "tasks.md" -type f 2>/dev/null | head -1)
fi
```
Also load plan for context:
```bash
PLAN_FILE="specs/${BRANCH}/plan.md"
```
## 2. Run the Review
```bash
orch consensus --mode critique --temperature 0.7 \
--file "$TASKS_FILE" \
--file "$PLAN_FILE" \
"$(cat ~/.claude/skills/spec-review/prompts/tasks-review.txt)" \
flash deepseek gpt
```
**Why these settings**:
- `--mode critique` - Find problems with the breakdown
- `--temperature 0.7` - Balanced creativity
- Include plan for context on what tasks should accomplish
## 3. What to Look For
**Granularity issues**:
- Tasks too large (should be split)
- Tasks too small (should be combined)
- Missing tasks (gaps in coverage)
**Dependency issues**:
- Wrong ordering
- Missing dependencies
- Circular dependencies
**Scope issues**:
- Tasks that don't trace to plan
- Gold-plating (unnecessary tasks)
- Missing edge case handling
**Clarity issues**:
- Vague task descriptions
- Unclear acceptance criteria
- Ambiguous ownership
## 4. Next Steps
| Result | Action |
|--------|--------|
| Major issues | Revise tasks, re-run review |
| Minor tweaks | Fix before bd create |
| Clean | Ready for `bd create` |
## Why Review Before bd create?
- Easier to restructure tasks in markdown than in issue tracker
- bd issues carry history - better to get it right first
- Dependencies in bd are harder to refactor than in a task list


@ -0,0 +1,80 @@
---
name: spec-review
description: Review spec-kit specifications and plans using multi-model AI consensus (orch) before phase transitions. Use when working with spec-kit projects and need to validate specs, evaluate architecture decisions, or gate phase transitions.
---
# Spec Review
Multi-model review of spec-kit artifacts. Uses orch to get diverse AI perspectives that catch blind spots a single model might miss.
## When to Use
- Before `/speckit.plan` - review the spec for completeness
- Before `/speckit.tasks` - evaluate architecture decisions in the plan
- Before `bd create` - review task breakdown before committing to issues
- At any phase transition - go/no-go gate check
## Quick Start
**Review a spec**:
```bash
orch consensus --mode critique --temperature 0.8 \
--file specs/{branch}/spec.md \
"$(cat ~/.claude/skills/spec-review/prompts/spec-critique.txt)" \
flash deepseek gpt
```
**Review a plan** (devil's advocate):
```bash
orch consensus --mode open --temperature 1.0 \
--file specs/{branch}/plan.md \
"$(cat ~/.claude/skills/spec-review/prompts/plan-review.txt)" \
flash:for deepseek:against gpt:neutral
```
**Review tasks** (before bd create):
```bash
orch consensus --mode critique --temperature 0.7 \
--file specs/{branch}/tasks.md \
"$(cat ~/.claude/skills/spec-review/prompts/tasks-review.txt)" \
flash deepseek gpt
```
**Gate check**:
```bash
orch consensus --mode vote --temperature 0.5 \
--file specs/{branch}/spec.md \
"$(cat ~/.claude/skills/spec-review/prompts/gate-check.txt)" \
flash deepseek gpt
```
## Detailed Processes
- [REVIEW_SPEC.md](REVIEW_SPEC.md) - Full spec review process
- [REVIEW_PLAN.md](REVIEW_PLAN.md) - Plan evaluation with stances
- [REVIEW_TASKS.md](REVIEW_TASKS.md) - Task breakdown review before bd
- [GATE_CHECK.md](GATE_CHECK.md) - Go/no-go consensus
## Model Selection
**Default (fast, cheap, diverse)**:
- `flash` - Gemini 2.5 Flash
- `deepseek` - DeepSeek v3
- `gpt` - GPT 5.2
**Thorough review**:
- `gemini` - Gemini 3 Pro
- `r1` - DeepSeek R1 (reasoning)
## Why Multi-Model?
Different models catch different issues:
- Different training data → different blind spots
- Stances (for/against/neutral) force opposing viewpoints
- Higher temperature → more divergent thinking
## Requirements
- `orch` CLI in PATH
- API keys: GEMINI_API_KEY, OPENAI_API_KEY, OPENROUTER_KEY
- Working in a spec-kit project (has `specs/` directory)
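A quick preflight for these requirements (sketch; only checks presence, and uses `printenv` to stay portable):

```shell
# Warn about missing prerequisites before kicking off a review
command -v orch >/dev/null 2>&1 || echo "orch not found in PATH"
for v in GEMINI_API_KEY OPENAI_API_KEY OPENROUTER_KEY; do
  [ -n "$(printenv "$v")" ] || echo "$v is not set"
done
[ -d specs ] || echo "no specs/ directory - is this a spec-kit project?"
```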


@ -0,0 +1,15 @@
This is a go/no-go gate check before proceeding to the next phase.
Review this document and vote:
- SUPPORT: Ready to proceed, no blockers
- OPPOSE: Has issues that must be resolved first
- NEUTRAL: Concerns but not blocking
If you vote OPPOSE, you MUST provide:
1. The specific blocker (quote the problematic text)
2. Why it's a blocker (not just a nice-to-have)
3. What would resolve it
If you vote SUPPORT with reservations, note any risks to monitor.
Be decisive. The goal is a clear go/no-go signal.


@ -0,0 +1,27 @@
Evaluate this implementation plan for a software feature.
The plan was created from a specification. Analyze:
1. ARCHITECTURE DECISIONS
- Are the technology choices appropriate for the requirements?
- Is the architecture over-engineered or under-engineered?
- Are there simpler alternatives that weren't considered?
2. RISK ASSESSMENT
- What could go wrong with this approach?
- Scalability concerns?
- Security vulnerabilities?
- Maintenance burden?
3. SPEC ALIGNMENT
- Does the plan actually address all spec requirements?
- Any requirements that got lost or misinterpreted?
- Does the plan introduce scope not in the spec?
4. IMPLEMENTATION CONCERNS
- Are there tricky parts that need more detail?
- Missing error handling or edge cases?
- Testing strategy adequate?
Be specific. Quote the plan when identifying issues.
Suggest concrete alternatives where you see problems.


@ -0,0 +1,30 @@
Review this software specification for completeness and quality.
Critique the following aspects:
1. REQUIREMENTS CLARITY
- Are requirements specific and testable?
- Any ambiguous language that could be interpreted multiple ways?
- Are acceptance criteria measurable?
2. COMPLETENESS
- Missing edge cases or error scenarios?
- Are all user types/personas covered?
- Missing non-functional requirements (performance, security, accessibility)?
3. SCOPE
- Is the scope clearly bounded?
- Any feature creep or unnecessary complexity?
- Dependencies clearly identified?
4. FEASIBILITY
- Any requirements that seem technically risky?
- Contradictions between requirements?
- Unrealistic success criteria?
For each issue found, provide:
- The specific problematic text (quote it)
- Why it's a problem
- A suggested improvement
Be critical. The goal is to catch issues NOW before they become costly during implementation.


@ -0,0 +1,36 @@
Review this task breakdown for a software feature before it becomes tracked issues.
The tasks were generated from an implementation plan. Analyze:
1. GRANULARITY
- Are tasks appropriately sized? (Not too big, not too small)
- Can each task be completed in a reasonable time?
- Should any tasks be split or combined?
2. COMPLETENESS
- Do tasks cover everything in the plan?
- Any gaps in coverage?
- Missing tasks for error handling, testing, documentation?
3. DEPENDENCIES
- Is the ordering logical?
- Are dependencies between tasks clear?
- Any tasks that should be parallelizable but aren't?
- Circular dependencies?
4. CLARITY
- Is each task description clear and actionable?
- Would someone else understand what "done" means?
- Any vague or ambiguous tasks?
5. SCOPE
- Do all tasks trace back to the plan?
- Any gold-plating or unnecessary tasks?
- Any tasks that belong in a different feature?
For each issue found:
- Quote the problematic task
- Explain the problem
- Suggest how to fix it
These tasks are about to become tracked issues. Catch problems now.