
---
name: code-review
description: Run multi-lens code review on target files. Analyzes for bloat, smells, dead-code, and redundancy. Interactive - asks before filing issues.
---

# Code Review Skill

Run focused code analysis using multiple review lenses. Findings are synthesized and presented for your approval before any issues are filed.

## When to Use

Invoke this skill when:

- "Review this code"
- "Run code review on src/"
- "Check this file for issues"
- "Analyze the codebase"
- `/code-review`

## Arguments

The skill accepts an optional target:

- `/code-review` - reviews recently changed files (`git diff`)
- `/code-review src/` - reviews a specific directory
- `/code-review src/main.py` - reviews a specific file
- `/code-review --diff HEAD~5` - reviews changes in the last 5 commits
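As a sketch, the optional target could be resolved into a file-listing command roughly like this (the `resolve_target` helper and its defaults are illustrative assumptions, not part of the skill):

```shell
# Illustrative only: map the /code-review argument to a command that
# lists the files to review. Names and defaults are assumptions.
resolve_target() {
  case "$1" in
    "")      echo "git diff --name-only HEAD" ;;  # default: uncommitted changes
    --diff)  echo "git diff --name-only $2" ;;    # e.g. --diff HEAD~5
    *)       echo "find $1 -type f" ;;            # a specific file or directory
  esac
}

resolve_target --diff HEAD~5  # → git diff --name-only HEAD~5
```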

## Available Lenses

Lenses are focused review prompts located in `~/.config/lenses/`:

| Lens | Focus |
|------|-------|
| `bloat.md` | File size, function length, complexity, SRP violations |
| `smells.md` | Code smells, naming, control flow, readability |
| `dead-code.md` | Unused exports, zombie code, unreachable paths |
| `redundancy.md` | Duplication, parallel systems, YAGNI violations |

## Workflow

### Phase 1: Target Selection

  1. Parse the target argument (default: `git diff` of uncommitted changes)
  2. Identify files to review
  3. Show file list to user for confirmation

### Phase 2: Lens Execution

For each lens, analyze the target files:

  1. Read the lens prompt from `~/.config/lenses/{lens}.md`
  2. Apply the lens to the target code
  3. Collect findings in structured format

**Finding Format:**

```
[TAG] <severity:HIGH|MED|LOW> <file:line>
Issue: <one-line description>
Suggest: <actionable fix>
Evidence: <why this matters>
```

### Phase 3: Synthesis

After all lenses complete:

  1. Deduplicate overlapping findings
  2. Group related issues
  3. Rank by severity and confidence
  4. Generate summary report
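Deduplication and ranking can be sketched as a small pipeline. This is illustrative only; it assumes one finding per line with the severity as the second field, matching the finding format above:

```shell
# Sketch: drop exact duplicates, then sort findings HIGH → MED → LOW.
# Assumes one finding per line with severity in field 2.
rank_findings() {
  sort -u | awk '{ s = ($2 == "HIGH") ? 0 : ($2 == "MED") ? 1 : 2
                   print s "\t" $0 }' | sort -k1,1n | cut -f2-
}

# Demo: the duplicate HIGH finding collapses, and HIGH sorts first.
printf '%s\n' \
  '[SMELL] MED src/cli.py:89 magic number 3600' \
  '[BLOAT] HIGH src/cli.py:145 handle_request is 135 lines' \
  '[BLOAT] HIGH src/cli.py:145 handle_request is 135 lines' \
  '[DEAD] LOW src/cli.py:12 unused import' | rank_findings
```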

Optional: if the user requests consensus (via `--orch` or by asking for it), run:

```
orch consensus "<findings summary>" gpt gemini
```

Use this to filter out false positives and prioritize findings.

### Phase 4: Interactive Review

Present findings to user:

  1. Show executive summary (counts by severity)
  2. List top issues with details
  3. Ask: "Which findings should I file as issues?"

User can respond:

- "File all" - creates beads issues for everything
- "File HIGH only" - filters by severity
- "File 1, 3, 5" - specific findings
- "None" - just keep the report
- "Let me review first" - show full details
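Selecting specific findings by number ("File 1, 3, 5"-style replies) can be sketched as a short filter; the `select_findings` helper is hypothetical, not the skill's actual mechanism:

```shell
# Sketch: keep only findings whose 1-based line number appears in a
# comma-separated list such as "1,3". Hypothetical helper.
select_findings() {
  awk -v keep=",$1," 'index(keep, "," NR ",")'
}

printf '%s\n' 'finding one' 'finding two' 'finding three' | select_findings "1,3"
# → finding one
# → finding three
```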

### Phase 5: Issue Filing (if requested)

For approved findings:

  1. Create beads issues with `bd create`
  2. Include lens tag, severity, file location
  3. Link related issues if applicable

## Output

The skill produces:

  1. Console summary - immediate feedback
  2. Beads issues - if user approves filing

## Example Session

**User:** `/code-review src/cli.py`

**Agent:**

```
I'll review src/cli.py with 4 lenses (bloat, smells, dead-code, redundancy).

[Running bloat lens...]
[Running smells lens...]
[Running dead-code lens...]
[Running redundancy lens...]

## Review Summary: src/cli.py

| Severity | Count |
|----------|-------|
| HIGH     | 1     |
| MED      | 3     |
| LOW      | 2     |

### Top Issues

1. [BLOAT] HIGH src/cli.py:145-280
   Issue: Function `handle_request` is 135 lines
   Suggest: Extract into smaller functions by responsibility

2. [SMELL] MED src/cli.py:89
   Issue: Magic number 3600 without explanation
   Suggest: Extract to named constant SECONDS_PER_HOUR

3. [DEAD] MED src/cli.py:12
   Issue: Import `unused_module` has no references
   Suggest: Remove unused import

Would you like me to file any of these as beads issues?
Options: all, HIGH only, specific numbers (1,2,3), or none
```

## Configuration

The skill respects `.code-review.yml` in the repo root if present:

```yaml
# Optional configuration
ignore_paths:
  - vendor/
  - node_modules/
  - "*.generated.*"

severity_defaults:
  bloat: MED
  dead-code: LOW

max_file_size_kb: 500  # Skip files larger than this
```
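As an illustration of the `ignore_paths` setting, the filtering step could look like this (patterns are hardcoded here; a real implementation would read them from the YAML):

```shell
# Sketch only: drop files matching the ignore patterns.
filter_ignored() {
  grep -v -e '^vendor/' -e '^node_modules/' -e '\.generated\.'
}

printf '%s\n' 'src/cli.py' 'vendor/lib.py' 'api.generated.ts' | filter_ignored
# → src/cli.py
```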

## Guidelines

  1. **Be Thorough But Focused** - each lens checks one concern deeply
  2. **Evidence Over Opinion** - cite specific lines and patterns
  3. **Actionable Suggestions** - every finding needs a clear fix
  4. **Respect User Time** - summarize first, details on request
  5. **No Spam** - don't file issues without explicit approval

## Process Checklist

  1. Parse target (files/directory/diff)
  2. Confirm scope with user if large (>10 files)
  3. Run each lens, collecting findings
  4. Deduplicate and rank findings
  5. Present summary to user
  6. Ask which findings to file
  7. Create beads issues for approved findings
  8. Report issue IDs created

## Integration

- **Lenses:** read from `~/.config/lenses/*.md`
- **Issue tracking:** uses `bd create` for beads issues
- **Orch:** optional consensus filtering via `orch consensus`