feat: convert all skills to dual-publish pattern
Add Claude plugin structure (.claude-plugin/plugin.json) and auto-discovery
(skills/<name>.md) to 15 skills. orch was already converted.

Skills converted:
- ai-tools-doctor, bd-issue-tracking, code-review, doc-review
- niri-window-capture, ops-review, playwright-visit, screenshot-latest
- spec-review, tufte-press, update-opencode, update-spec-kit
- web-research, web-search, worklog

Marketplace now lists all 16 skills for /plugin install.

Closes: skills-1ks

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
This commit is contained in:
parent
f6ec653a83
commit
5a7891656a
@@ -3,7 +3,7 @@
{"id":"skills-0og","title":"spec-review: Define output capture and audit trail","description":"Reviews happen in terminal then disappear. No audit trail, no diffable history.\n\nAdd:\n- Guidance to tee output to review file (e.g., specs/{branch}/review.md)\n- Standard location for gate check results\n- Template for recording decisions and rationale","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-15T00:23:23.705164812-08:00","updated_at":"2025-12-15T13:02:32.313084337-08:00","closed_at":"2025-12-15T13:02:32.313084337-08:00"}
{"id":"skills-17f","title":"Ensure git access to both local Forgejo and ops-jrz1 Forgejo","description":"Need reliable access to both git servers for plugin development and deployment:\n\n**Local Forgejo (home):**\n- URL: http://192.168.1.108:3000\n- Currently down/unreachable at times\n- Used for skills repo remote\n\n**ops-jrz1 Forgejo (VPS):**\n- URL: https://git.clarun.xyz\n- Production server\n- Target for emes plugin deployment\n\n**Tasks:**\n- Verify local Forgejo is running (systemctl status)\n- Add ops-jrz1 as additional remote for skills repo\n- Consider mirroring skills repo to both\n- Document which forge is source of truth","status":"open","priority":2,"issue_type":"task","created_at":"2026-01-09T11:14:52.639492685-08:00","created_by":"dan","updated_at":"2026-01-09T11:14:52.639492685-08:00"}
{"id":"skills-1ig","title":"Brainstorm agent-friendly doc conventions","description":"# Agent-Friendly Doc Conventions - Hybrid Architecture\n\n## FINAL ARCHITECTURE: Vale + LLM Hybrid\n\n### Insight\n\u003e \"Good old deterministic testing (dumb robots) is the best way to keep in check LLMs (smart robots) at volume.\"\n\n### Split by Tool\n\n| Category | Rubrics | Tool |\n|----------|---------|------|\n| Vale-only | Format Integrity, Deterministic Instructions, Terminology Strictness, Token Efficiency | Fast, deterministic, CI-friendly |\n| Vale + LLM | Semantic Headings, Configuration Precision, Security Boundaries | Vale flags, LLM suggests fixes |\n| LLM-only | Contextual Independence, Code Executability, Execution Verification | Semantic understanding required |\n\n### Pipeline\n\n```\n┌─────────────────────────────────────────────────────────────┐\n│ Stage 1: Vale (deterministic, fast, free) │\n│ - Runs in CI on every commit │\n│ - Catches 40% of issues instantly │\n│ - No LLM cost for clean docs │\n└─────────────────────┬───────────────────────────────────────┘\n │ only if Vale passes\n ▼\n┌─────────────────────────────────────────────────────────────┐\n│ Stage 2: LLM Triage (cheap model) │\n│ - Evaluates 3 semantic rubrics │\n│ - Identifies which need patches │\n└─────────────────────┬───────────────────────────────────────┘\n │ only if issues found\n ▼\n┌─────────────────────────────────────────────────────────────┐\n│ Stage 3: LLM Specialists (capable model) │\n│ - One agent per failed rubric │\n│ - Generates patches │\n└─────────────────────────────────────────────────────────────┘\n```\n\n### Why This Works\n- Vale is battle-tested, fast, CI-native\n- LLM only fires when needed (adaptive cost)\n- Deterministic rules catch predictable issues\n- LLM handles semantic/contextual issues\n\n---\n\n## Vale Rules Needed\n\n### Format Integrity\n- Existence: code blocks without language tags\n- Regex for unclosed fences\n\n### Deterministic Instructions \n- Existence: hedging words (\"might\", \"may want to\", \"consider\", \"you could\")\n\n### Terminology Strictness\n- Consistency: flag term variations\n\n### Token Efficiency\n- Existence: filler phrases (\"In this section we will...\", \"As you may know...\")\n\n### Semantic Headings (partial)\n- Existence: banned headings (\"Overview\", \"Introduction\", \"Getting Started\")\n\n### Configuration Precision (partial)\n- Existence: vague versions (\"Python 3.x\", \"recent version\")\n\n### Security Boundaries (partial)\n- Existence: hardcoded API key patterns\n\n---\n\n## NEXT STEPS\n\n1. Create Vale style for doc-review rubrics\n2. Test Vale on sample docs\n3. Design LLM prompts for semantic rubrics only\n4. Wire into orch or standalone","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-04T14:02:04.898026177-08:00","updated_at":"2025-12-04T16:43:53.0608948-08:00","closed_at":"2025-12-04T16:43:53.0608948-08:00"}
{"id":"skills-1ks","title":"Dual-publish pattern: Convert remaining skills","description":"Convert skills to dual-publish pattern (Nix + Claude plugin):\n\n**Pattern established with orch:**\n- Keep SKILL.md at root (Nix deployment)\n- Add .claude-plugin/plugin.json (Claude marketplace)\n- Copy to skills/\u003cname\u003e.md (Claude auto-discovery)\n\n**Skills to convert:**\n- [ ] worklog\n- [ ] code-review (has lenses dependency)\n- [ ] ops-review (has lenses dependency)\n- [ ] playwright-visit\n- [ ] screenshot-latest\n- [ ] niri-window-capture\n- [ ] web-search\n- [ ] web-research\n\n**Why dual-publish:**\n- Cross-agent support (Gemini, OpenCode can't use Claude plugins)\n- Nix provides system-level deployment\n- Claude plugin system provides hooks, marketplace discovery\n- See skills-bo8 for Gemini path restriction issue","status":"open","priority":2,"issue_type":"task","created_at":"2026-01-09T11:20:47.271151803-08:00","created_by":"dan","updated_at":"2026-01-09T11:20:47.271151803-08:00"}
{"id":"skills-1ks","title":"Dual-publish pattern: Convert remaining skills","description":"Convert skills to dual-publish pattern (Nix + Claude plugin):\n\n**Pattern established with orch:**\n- Keep SKILL.md at root (Nix deployment)\n- Add .claude-plugin/plugin.json (Claude marketplace)\n- Copy to skills/\u003cname\u003e.md (Claude auto-discovery)\n\n**Skills to convert:**\n- [ ] worklog\n- [ ] code-review (has lenses dependency)\n- [ ] ops-review (has lenses dependency)\n- [ ] playwright-visit\n- [ ] screenshot-latest\n- [ ] niri-window-capture\n- [ ] web-search\n- [ ] web-research\n\n**Why dual-publish:**\n- Cross-agent support (Gemini, OpenCode can't use Claude plugins)\n- Nix provides system-level deployment\n- Claude plugin system provides hooks, marketplace discovery\n- See skills-bo8 for Gemini path restriction issue","status":"in_progress","priority":2,"issue_type":"task","created_at":"2026-01-09T11:20:47.271151803-08:00","created_by":"dan","updated_at":"2026-01-09T16:09:14.544547191-08:00"}
{"id":"skills-1n3","title":"Set up agent skills for Gemini CLI","description":"The AI agent skills (worklog, web-search, etc.) configured in .skills are not currently working when using the Gemini CLI. \\n\\nObserved behavior:\\n- 'worklog' command not found even after 'direnv reload'.\\n- .envrc sources ~/proj/skills/bin/use-skills.sh, but skills are not accessible in the Gemini agent session.\\n\\nNeed to:\\n1. Investigate how Gemini CLI loads its environment compared to Claude Code.\\n2. Update 'use-skills.sh' or direnv configuration to support Gemini CLI.\\n3. Ensure skill symlinks/binaries are correctly in the PATH for Gemini.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-22T17:39:28.106296919-05:00","updated_at":"2025-12-28T22:28:49.781533243-05:00","closed_at":"2025-12-28T22:28:49.781533243-05:00","close_reason":"No MCP/extensions. Gemini CLI lacks native skill support (feature request #11506 pending). Current workaround: GEMINI.md references skill paths for manual reading. Revisit when native support lands."}
{"id":"skills-20s","title":"Compare BOUNDARIES.md with upstream","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-03T20:15:53.585115099-08:00","updated_at":"2025-12-03T20:19:28.442646801-08:00","closed_at":"2025-12-03T20:19:28.442646801-08:00","dependencies":[{"issue_id":"skills-20s","depends_on_id":"skills-ebh","type":"discovered-from","created_at":"2025-12-03T20:15:53.586442134-08:00","created_by":"daemon","metadata":"{}"}]}
{"id":"skills-25l","title":"Create orch skill for multi-model consensus","description":"Build a skill that exposes orch CLI capabilities to agents for querying multiple AI models","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-11-30T15:43:49.209528963-08:00","updated_at":"2025-11-30T15:47:36.608887453-08:00","closed_at":"2025-11-30T15:47:36.608887453-08:00"}
@@ -7,10 +7,85 @@
    "name": "dan"
  },
  "plugins": [
    {
      "name": "ai-tools-doctor",
      "source": "./skills/ai-tools-doctor",
      "description": "Check and sync AI coding tool versions against declared manifest"
    },
    {
      "name": "bd-issue-tracking",
      "source": "./skills/bd-issue-tracking",
      "description": "Track complex, multi-session work with dependency graphs using bd (beads)"
    },
    {
      "name": "code-review",
      "source": "./skills/code-review",
      "description": "Multi-lens code review for bloat, security, coupling, and more"
    },
    {
      "name": "doc-review",
      "source": "./skills/doc-review",
      "description": "Lint markdown documentation for AI agent consumption"
    },
    {
      "name": "niri-window-capture",
      "source": "./skills/niri-window-capture",
      "description": "Invisibly capture screenshots of any window using niri compositor"
    },
    {
      "name": "ops-review",
      "source": "./skills/ops-review",
      "description": "Multi-lens ops review for Nix, shell, Docker, CI/CD"
    },
    {
      "name": "orch",
      "source": "./skills/orch",
      "description": "Multi-model consensus queries via orch CLI"
    },
    {
      "name": "playwright-visit",
      "source": "./skills/playwright-visit",
      "description": "Visit web pages using Playwright browser automation"
    },
    {
      "name": "screenshot-latest",
      "source": "./skills/screenshot-latest",
      "description": "Find and analyze the most recent screenshot"
    },
    {
      "name": "spec-review",
      "source": "./skills/spec-review",
      "description": "Review spec-kit specs using multi-model AI consensus"
    },
    {
      "name": "tufte-press",
      "source": "./skills/tufte-press",
      "description": "Generate Tufte-inspired study cards from conversation"
    },
    {
      "name": "update-opencode",
      "source": "./skills/update-opencode",
      "description": "Check and apply OpenCode version updates"
    },
    {
      "name": "update-spec-kit",
      "source": "./skills/update-spec-kit",
      "description": "Update spec-kit repository, CLI, and templates"
    },
    {
      "name": "web-research",
      "source": "./skills/web-research",
      "description": "Conduct deep web research with structured reports"
    },
    {
      "name": "web-search",
      "source": "./skills/web-search",
      "description": "Search the web for information and documentation"
    },
    {
      "name": "worklog",
      "source": "./skills/worklog",
      "description": "Create structured worklogs documenting work sessions"
    }
  ]
}
15
skills/ai-tools-doctor/.claude-plugin/plugin.json
Normal file
@@ -0,0 +1,15 @@
{
  "name": "ai-tools-doctor",
  "description": "Check and sync AI coding tool versions against declared manifest.",
  "version": "1.0.0",
  "author": {
    "name": "dan"
  },
  "license": "MIT",
  "keywords": [
    "ai-tools",
    "version",
    "manifest",
    "sync"
  ]
}
82
skills/ai-tools-doctor/skills/ai-tools-doctor.md
Normal file
@@ -0,0 +1,82 @@
---
name: ai-tools-doctor
description: Check and sync AI coding tool versions against declared manifest
---

# AI Tools Doctor

Check installed AI tools against declared versions and sync npm tools to pinned versions.

## When to Use

- At session start to verify tool versions
- When user asks about AI tool status or versions
- Before/after updating tools
- When troubleshooting tool issues

## Commands

```bash
# Check all tools (human-readable)
ai-tools-doctor check

# Check all tools (machine-readable for parsing)
ai-tools-doctor check --json

# Exit code only (for scripts/hooks)
ai-tools-doctor check --quiet

# Sync npm tools to declared versions
ai-tools-doctor sync
```

## Managed Tools

| Tool | Source | Binary |
|------|--------|--------|
| claude-code | npm | `claude` |
| openai-codex | npm | `codex` |
| opencode | nix | `opencode` |
| beads | nix | `bd` |

## Output

### Human-readable (default)
```
beads (nix)
  ✓ 0.26.0
claude-code (npm)
  ✓ 2.0.55
```

### JSON (--json)
```json
{
  "status": "ok",
  "tools": {
    "claude-code": {
      "source": "npm",
      "installed": "2.0.55",
      "declared": "2.0.55",
      "status": "ok"
    }
  }
}
```

## Status Values

- `ok` - Installed version matches declared
- `version_mismatch` - Installed differs from declared
- `not_installed` - Tool not found

## Exit Codes

- `0` - All tools match declared versions
- `1` - Mismatch or error

## Notes

- Nix tools are read-only (reports status, doesn't manage)
- Use `sync` to install/update npm tools to declared versions
- Manifest location: `~/.config/ai-tools/tools.json`
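A consumer of `check --json` can reduce the report to the tools that need action. A minimal sketch against the schema shown above; the sample payload (including the top-level `"error"` value for a failing check) is an illustrative assumption, not captured output:

```python
import json

# Sample payload in the documented `check --json` shape. The top-level
# status value for a failing check is an assumption.
report = json.loads("""
{
  "status": "error",
  "tools": {
    "claude-code": {"source": "npm", "installed": "2.0.55", "declared": "2.0.55", "status": "ok"},
    "beads": {"source": "nix", "installed": "0.25.0", "declared": "0.26.0", "status": "version_mismatch"}
  }
}
""")

# Collect tools needing attention: mismatched or missing.
actionable = {
    name: info
    for name, info in report["tools"].items()
    if info["status"] in ("version_mismatch", "not_installed")
}

for name, info in sorted(actionable.items()):
    print(f"{name}: installed {info.get('installed')} != declared {info['declared']}")
```

This mirrors what a pre-session hook would do before deciding whether to run `sync`.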
15
skills/bd-issue-tracking/.claude-plugin/plugin.json
Normal file
@@ -0,0 +1,15 @@
{
  "name": "bd-issue-tracking",
  "description": "Track complex, multi-session work with dependency graphs using bd (beads) issue tracker.",
  "version": "1.0.0",
  "author": {
    "name": "dan"
  },
  "license": "MIT",
  "keywords": [
    "beads",
    "issue-tracking",
    "dependencies",
    "graph"
  ]
}
644
skills/bd-issue-tracking/skills/bd-issue-tracking.md
Normal file
@@ -0,0 +1,644 @@
---
name: bd-issue-tracking
description: Track complex, multi-session work with dependency graphs using bd (beads) issue tracker. Use when work spans multiple sessions, has complex dependencies, or requires persistent context across compaction cycles. For simple single-session linear tasks, TodoWrite remains appropriate.
---

# bd Issue Tracking

## Overview

bd is a graph-based issue tracker for persistent memory across sessions. Use it for multi-session work with complex dependencies; use TodoWrite for simple single-session tasks.
## When to Use bd vs TodoWrite

### Use bd when:
- **Multi-session work** - Tasks spanning multiple compaction cycles or days
- **Complex dependencies** - Work with blockers, prerequisites, or hierarchical structure
- **Knowledge work** - Strategic documents, research, or tasks with fuzzy boundaries
- **Side quests** - Exploratory work that might pause the main task
- **Project memory** - Need to resume work after weeks away with full context

### Use TodoWrite when:
- **Single-session tasks** - Work that completes within the current session
- **Linear execution** - Straightforward step-by-step tasks with no branching
- **Immediate context** - All information already in conversation
- **Simple tracking** - Just need a checklist to show progress

**Key insight**: If resuming work after 2 weeks would be difficult without bd, use bd. If the work can be picked up from a markdown skim, TodoWrite is sufficient.

### Test Yourself: bd or TodoWrite?

Ask these questions to decide:

**Choose bd if:**
- ❓ "Will I need this context in 2 weeks?" → Yes = bd
- ❓ "Could conversation history get compacted?" → Yes = bd
- ❓ "Does this have blockers/dependencies?" → Yes = bd
- ❓ "Is this fuzzy/exploratory work?" → Yes = bd

**Choose TodoWrite if:**
- ❓ "Will this be done in this session?" → Yes = TodoWrite
- ❓ "Is this just a task list for me right now?" → Yes = TodoWrite
- ❓ "Is this linear with no branching?" → Yes = TodoWrite

**When in doubt**: Use bd. Better to have persistent memory you don't need than to lose context you needed.

**For detailed decision criteria and examples, read:** [references/BOUNDARIES.md](references/BOUNDARIES.md)

## Surviving Compaction Events

**Critical**: Compaction events delete conversation history but preserve beads. After compaction, bd state is your only persistent memory.

**What survives compaction:**
- All bead data (issues, notes, dependencies, status)
- Complete work history and context

**What doesn't survive:**
- Conversation history
- TodoWrite lists
- Recent discussion context

**Writing notes for post-compaction recovery:**

Write notes as if explaining to a future agent with zero conversation context:

**Pattern:**
```markdown
notes field format:
- COMPLETED: Specific deliverables ("implemented JWT refresh endpoint + rate limiting")
- IN PROGRESS: Current state + next immediate step ("testing password reset flow, need user input on email template")
- BLOCKERS: What's preventing progress
- KEY DECISIONS: Important context or user guidance
```

**After compaction:** `bd show <issue-id>` reconstructs full context from the notes field.

### Notes Quality Self-Check

Before checkpointing (especially pre-compaction), verify your notes pass these tests:

❓ **Future-me test**: "Could I resume this work in 2 weeks with zero conversation history?"
- [ ] What was completed? (Specific deliverables, not "made progress")
- [ ] What's in progress? (Current state + immediate next step)
- [ ] What's blocked? (Specific blockers with context)
- [ ] What decisions were made? (Why, not just what)

❓ **Stranger test**: "Could another developer understand this without asking me?"
- [ ] Technical choices explained (not just stated)
- [ ] Trade-offs documented (why this approach vs alternatives)
- [ ] User input captured (decisions that came from discussion)

**Good note example:**
```
COMPLETED: JWT auth with RS256 (1hr access, 7d refresh tokens)
KEY DECISION: RS256 over HS256 per security review - enables key rotation
IN PROGRESS: Password reset flow - email service working, need rate limiting
BLOCKERS: Waiting on user decision: reset token expiry (15min vs 1hr trade-off)
NEXT: Implement rate limiting (5 attempts/15min) once expiry decided
```

**Bad note example:**
```
Working on auth. Made some progress. More to do.
```

**For complete compaction recovery workflow, read:** [references/WORKFLOWS.md](references/WORKFLOWS.md#compaction-survival)
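The self-check above is mechanical enough to automate. A minimal sketch of a pre-checkpoint lint for the COMPLETED/IN PROGRESS/NEXT convention; this is a hypothetical helper, not part of bd:

```python
# Hypothetical pre-checkpoint lint for the note convention; not part of bd.
REQUIRED_SECTIONS = ("COMPLETED", "IN PROGRESS", "NEXT")
VAGUE_PHRASES = ("made progress", "more to do", "working on it")

def note_problems(notes: str) -> list[str]:
    """Return reasons the notes would fail the future-me test."""
    problems = []
    upper = notes.upper()
    for section in REQUIRED_SECTIONS:
        if section not in upper:
            problems.append(f"missing section: {section}")
    lower = notes.lower()
    for phrase in VAGUE_PHRASES:
        if phrase in lower:
            problems.append(f"vague wording: {phrase!r}")
    return problems

good = ("COMPLETED: JWT auth with RS256\n"
        "IN PROGRESS: password reset flow\n"
        "NEXT: rate limiting once expiry decided")
bad = "Working on auth. Made some progress. More to do."

assert note_problems(good) == []
assert note_problems(bad)  # flags missing sections and vague wording
```

A hook like this could run before `bd update --notes` to reject notes that would be useless after compaction.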
## Session Start Protocol

**bd is available when:**
- Project has a `.beads/` directory (project-local database), OR
- `~/.beads/` exists (global fallback database for any directory)

**At session start, always check for bd availability and run a ready check.**

### Session Start Checklist

Copy this checklist when starting any session where bd is available:

```
Session Start:
- [ ] Run bd ready --json to see available work
- [ ] Run bd list --status in_progress --json for active work
- [ ] If in_progress exists: bd show <issue-id> to read notes
- [ ] Report context to user: "X items ready: [summary]"
- [ ] If using global ~/.beads, mention this in report
- [ ] If nothing ready: bd blocked --json to check blockers
```

**Pattern**: Always check both `bd ready` AND `bd list --status in_progress`. Read the notes field first to understand where the previous session left off.

**Report format**:
- "I can see X items ready to work on: [summary]"
- "Issue Y is in_progress. Last session: [summary from notes]. Next: [from notes]. Should I continue with that?"

This establishes immediate shared context about available and active work without requiring user prompting.

**For detailed collaborative handoff process, read:** [references/WORKFLOWS.md](references/WORKFLOWS.md#session-handoff)

**Note**: bd auto-discovers the database:
- Uses `.beads/*.db` in the current project if it exists
- Falls back to `~/.beads/default.db` otherwise
- No configuration needed

### When No Work is Ready

If `bd ready` returns empty but issues exist:

```bash
bd blocked --json
```

Report blockers and suggest next steps.
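The session-start report can be assembled directly from the JSON output. A sketch, assuming `bd ready --json` emits an array of issue objects with fields like those in the beads JSONL records; the exact CLI output shape is an assumption:

```python
import json

# Stand-in for `bd ready --json` output. Field names mirror beads issue
# records (id, title, priority); the exact CLI output shape is an assumption.
ready_json = """[
  {"id": "bd-42", "title": "Research analytics platform expansion", "priority": 1},
  {"id": "bd-57", "title": "Write proposal outline", "priority": 2}
]"""

issues = json.loads(ready_json)
summary = "; ".join(f"{i['id']} (p{i['priority']}): {i['title']}" for i in issues)
report = f"I can see {len(issues)} items ready to work on: {summary}"
print(report)
```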
---
## Progress Checkpointing

Update bd notes at these checkpoints (don't wait for session end):

**Critical triggers:**
- ⚠️ **Context running low** - User says "running out of context" / "approaching compaction" / "close to token limit"
- 📊 **Token budget > 70%** - Proactively checkpoint when approaching limits
- 🎯 **Major milestone reached** - Completed a significant piece of work
- 🚧 **Hit a blocker** - Can't proceed, need to capture what was tried
- 🔄 **Task transition** - Switching issues or about to close this one
- ❓ **Before user input** - About to ask a decision that might change direction

**Proactive monitoring during session:**
- At 70% token usage: "We're at 70% token usage - good time to checkpoint bd notes?"
- At 85% token usage: "Approaching token limit (85%) - checkpointing current state to bd"
- At 90% token usage: Automatically checkpoint without asking

**Current token usage**: Check `<system-warning>Token usage:` messages to monitor proactively.

**Checkpoint checklist:**

```
Progress Checkpoint:
- [ ] Update notes with COMPLETED/IN_PROGRESS/NEXT format
- [ ] Document KEY DECISIONS or BLOCKERS since last update
- [ ] Mark current status (in_progress/blocked/closed)
- [ ] If discovered new work: create issues with discovered-from
- [ ] Verify notes are self-explanatory for post-compaction resume
```

**Most important**: When the user says "running out of context" OR when you see >70% token usage, checkpoint immediately, even if mid-task.

**Test yourself**: "If compaction happened right now, could future-me resume from these notes?"

---
### Database Selection

bd automatically selects the appropriate database:
- **Project-local** (`.beads/` in project): Used for project-specific work
- **Global fallback** (`~/.beads/`): Used when no project-local database exists

**Use case for global database**: Cross-project tracking, personal task management, knowledge work that doesn't belong to a specific project.

**When to use --db flag explicitly:**
- Accessing a specific database outside the current directory
- Working with multiple databases (e.g., project database + reference database)
- Example: `bd --db /path/to/reference/terms.db list`

**Database discovery rules:**
- bd looks for `.beads/*.db` in the current working directory
- If not found, uses `~/.beads/default.db`
- Shell cwd can reset between commands - use absolute paths with --db when operating on non-local databases

**For complete session start workflows, read:** [references/WORKFLOWS.md](references/WORKFLOWS.md#session-start)
## Core Operations

All bd commands support the `--json` flag for structured output when needed for programmatic parsing.

### Essential Operations

**Check ready work:**
```bash
bd ready
bd ready --json           # For structured output
bd ready --priority 0     # Filter by priority
bd ready --assignee alice # Filter by assignee
```

**Create new issue:**

**IMPORTANT**: Always quote title and description arguments with double quotes, especially when they contain spaces or special characters.

```bash
bd create "Fix login bug"
bd create "Add OAuth" -p 0 -t feature
bd create "Write tests" -d "Unit tests for auth module" --assignee alice
bd create "Research caching" --design "Evaluate Redis vs Memcached"

# Examples with special characters (requires quoting):
bd create "Fix: auth doesn't handle edge cases" -p 1
bd create "Refactor auth module" -d "Split auth.go into separate files (handlers, middleware, utils)"
```

**Update issue status:**
```bash
bd update issue-123 --status in_progress
bd update issue-123 --priority 0
bd update issue-123 --assignee bob
bd update issue-123 --design "Decided to use Redis for persistence support"
```

**Close completed work:**
```bash
bd close issue-123
bd close issue-123 --reason "Implemented in PR #42"
bd close issue-1 issue-2 issue-3 --reason "Bulk close related work"
```

**Show issue details:**
```bash
bd show issue-123
bd show issue-123 --json
```

**List issues:**
```bash
bd list
bd list --status open
bd list --priority 0
bd list --type bug
bd list --assignee alice
```

**For complete CLI reference with all flags and examples, read:** [references/CLI_REFERENCE.md](references/CLI_REFERENCE.md)
## Field Usage Reference

Quick guide for when and how to use each bd field:

| Field | Purpose | When to Set | Update Frequency |
|-------|---------|-------------|------------------|
| **description** | Immutable problem statement | At creation | Never (fixed forever) |
| **design** | Initial approach, architecture, decisions | During planning | Rarely (only if approach changes) |
| **acceptance-criteria** | Concrete deliverables checklist (`- [ ]` syntax) | When design is clear | Mark `- [x]` as items complete |
| **notes** | Session handoff (COMPLETED/IN_PROGRESS/NEXT) | During work | At session end, major milestones |
| **status** | Workflow state (open→in_progress→closed) | As work progresses | When changing phases |
| **priority** | Urgency level (0=highest, 3=lowest) | At creation | Adjust if priorities shift |

**Key pattern**: The notes field is your "read me first" at session start. See [WORKFLOWS.md](references/WORKFLOWS.md#session-handoff) for session handoff details.

---
## Issue Lifecycle Workflow

### 1. Discovery Phase (Proactive Issue Creation)

**During exploration or implementation, proactively file issues for:**
- Bugs or problems discovered
- Potential improvements noticed
- Follow-up work identified
- Technical debt encountered
- Questions requiring research

**Pattern:**
```bash
# When encountering new work during a task:
bd create "Found: auth doesn't handle profile permissions"
bd dep add current-task-id new-issue-id --type discovered-from

# Continue with original task - issue persists for later
```

**Key benefit**: Capture context immediately instead of losing it when the conversation ends.

### 2. Execution Phase (Status Maintenance)

**Mark issues in_progress when starting work:**
```bash
bd update issue-123 --status in_progress
```

**Update throughout work:**
```bash
# Add design notes as implementation progresses
bd update issue-123 --design "Using JWT with RS256 algorithm"

# Update acceptance criteria if requirements clarify
bd update issue-123 --acceptance "- JWT validation works\n- Tests pass\n- Error handling returns 401"
```

**Close when complete:**
```bash
bd close issue-123 --reason "Implemented JWT validation with tests passing"
```

**Important**: Closed issues remain in the database - they're not deleted, just marked complete for project history.

### 3. Planning Phase (Dependency Graphs)

For complex multi-step work, structure issues with dependencies before starting:

**Create parent epic:**
```bash
bd create "Implement user authentication" -t epic -d "OAuth integration with JWT tokens"
```

**Create subtasks:**
```bash
bd create "Set up OAuth credentials" -t task
bd create "Implement authorization flow" -t task
bd create "Add token refresh" -t task
```

**Link with dependencies:**
```bash
# parent-child for epic structure
bd dep add auth-epic auth-setup --type parent-child
bd dep add auth-epic auth-flow --type parent-child

# blocks for ordering
bd dep add auth-setup auth-flow
```

**For detailed dependency patterns and types, read:** [references/DEPENDENCIES.md](references/DEPENDENCIES.md)
## Dependency Types Reference

bd supports four dependency types:

1. **blocks** - Hard blocker (issue A blocks issue B from starting)
2. **related** - Soft link (issues are related but not blocking)
3. **parent-child** - Hierarchical (epic/subtask relationship)
4. **discovered-from** - Provenance (issue B discovered while working on A)

**For complete guide on when to use each type with examples and patterns, read:** [references/DEPENDENCIES.md](references/DEPENDENCIES.md)
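The `blocks` type is what drives `bd ready`: an issue is ready when it is open and none of its blockers are still open. A minimal sketch of that rule with illustrative data; this models the semantics, not bd's actual implementation:

```python
# Illustrative model of the `blocks` rule behind `bd ready`:
# an issue is ready when it is open and has no open blockers.
# Not bd's actual implementation; data is hypothetical.
status = {
    "auth-setup": "closed",
    "auth-flow": "open",
    "auth-refresh": "open",
}
# blocked_by[b] = issues that must close before b can start
blocked_by = {
    "auth-flow": ["auth-setup"],
    "auth-refresh": ["auth-flow"],
}

def ready(issues, blocked_by):
    return sorted(
        name
        for name, st in issues.items()
        if st == "open"
        and all(issues[dep] == "closed" for dep in blocked_by.get(name, []))
    )

print(ready(status, blocked_by))  # → ['auth-flow']
```

Closing `auth-flow` would make `auth-refresh` appear in the ready list, which is exactly how bd surfaces the next unblocked task.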
## Integration with TodoWrite
|
||||
|
||||
**Both tools complement each other at different timescales:**
|
||||
|
||||
### Temporal Layering Pattern
|
||||
|
||||
**TodoWrite** (short-term working memory - this hour):
|
||||
- Tactical execution: "Review Section 3", "Expand Q&A answers"
|
||||
- Marked completed as you go
|
||||
- Present/future tense ("Review", "Expand", "Create")
|
||||
- Ephemeral: Disappears when session ends
|
||||
|
||||
**Beads** (long-term episodic memory - this week/month):
|
||||
- Strategic objectives: "Continue work on strategic planning document"
|
||||
- Key decisions and outcomes in notes field
|
||||
- Past tense in notes ("COMPLETED", "Discovered", "Blocked by")
|
||||
- Persistent: Survives compaction and session boundaries
|
||||
|
||||
### The Handoff Pattern
|
||||
|
||||
1. **Session start**: Read bead → Create TodoWrite items for immediate actions
|
||||
2. **During work**: Mark TodoWrite items completed as you go
|
||||
3. **Reach milestone**: Update bead notes with outcomes + context
|
||||
4. **Session end**: TodoWrite disappears, bead survives with enriched notes
|
||||
|
||||
**After compaction**: TodoWrite is gone forever, but bead notes reconstruct what happened.
|
||||
|
||||
### Example: TodoWrite tracks execution, Beads capture meaning

**TodoWrite:**
```
[completed] Implement login endpoint
[in_progress] Add password hashing with bcrypt
[pending] Create session middleware
```

**Corresponding bead notes:**
```
bd update issue-123 --notes "COMPLETED: Login endpoint with bcrypt password
hashing (12 rounds). KEY DECISION: Using JWT tokens (not sessions) for stateless
auth - simplifies horizontal scaling. IN PROGRESS: Session middleware implementation.
NEXT: Need user input on token expiry time (1hr vs 24hr trade-off)."
```

**Don't duplicate**: TodoWrite tracks execution, Beads captures meaning and context.

**For patterns on transitioning between tools mid-session, read:** [references/BOUNDARIES.md](references/BOUNDARIES.md#integration-patterns)

## Common Patterns

### Pattern 1: Knowledge Work Session

**Scenario**: User asks "Help me write a proposal for expanding the analytics platform"

**What you see**:
```bash
$ bd ready
# Returns: bd-42 "Research analytics platform expansion proposal" (in_progress)

$ bd show bd-42
Notes: "COMPLETED: Reviewed current stack (Mixpanel, Amplitude)
IN PROGRESS: Drafting cost-benefit analysis section
NEXT: Need user input on budget constraints before finalizing recommendations"
```

**What you do**:
1. Read notes to understand current state
2. Create TodoWrite for immediate work:
   ```
   - [ ] Draft cost-benefit analysis
   - [ ] Ask user about budget constraints
   - [ ] Finalize recommendations
   ```
3. Work on tasks, mark TodoWrite items completed
4. At milestone, update bd notes:
   ```bash
   bd update bd-42 --notes "COMPLETED: Cost-benefit analysis drafted.
   KEY DECISION: User confirmed $50k budget cap - ruled out enterprise options.
   IN PROGRESS: Finalizing recommendations (Posthog + custom ETL).
   NEXT: Get user review of draft before closing issue."
   ```

**Outcome**: TodoWrite disappears at session end, but bd notes preserve context for next session.

### Pattern 2: Side Quest Handling

During the main task, you discover a problem:
1. Create an issue: `bd create "Found: inventory system needs refactoring"`
2. Link using discovered-from: `bd dep add main-task new-issue --type discovered-from`
3. Assess: blocker, or can it be deferred?
4. If blocker: `bd update main-task --status blocked`, work on the new issue
5. If deferrable: note in the issue, continue the main task

### Pattern 3: Multi-Session Project Resume

Starting work after time away:
1. Run `bd ready` to see available work
2. Run `bd blocked` to understand what's stuck
3. Run `bd list --status closed --limit 10` to see recent completions
4. Run `bd show issue-id` on the issue you plan to work on
5. Update status and begin work

**For complete workflow walkthroughs with checklists, read:** [references/WORKFLOWS.md](references/WORKFLOWS.md)

## Issue Creation

**Quick guidelines:**
- Ask the user first for knowledge work with fuzzy boundaries
- Create directly for clear bugs, technical debt, or discovered work
- Use clear titles and sufficient context in descriptions
- Design field: HOW to build (can change during implementation)
- Acceptance criteria: WHAT success looks like (should remain stable)

### Issue Creation Checklist

Copy this checklist when creating new issues:

```
Creating Issue:
- [ ] Title: Clear, specific, action-oriented
- [ ] Description: Problem statement (WHY this matters) - immutable
- [ ] Design: HOW to build (can change during work)
- [ ] Acceptance: WHAT success looks like (stays stable)
- [ ] Priority: 0=critical, 1=high, 2=normal, 3=low
- [ ] Type: bug/feature/task/epic/chore
```

**Self-check for acceptance criteria:**

❓ "If I changed the implementation approach, would these criteria still apply?"
- → **Yes** = Good criteria (outcome-focused)
- → **No** = Move to design field (implementation-focused)

**Example:**
- ✅ Acceptance: "User tokens persist across sessions and refresh automatically"
- ❌ Wrong: "Use JWT tokens with 1-hour expiry" (that's design, not acceptance)

**For detailed guidance on when to ask vs create, issue quality, resumability patterns, and design vs acceptance criteria, read:** [references/ISSUE_CREATION.md](references/ISSUE_CREATION.md)

## Alternative Use Cases

bd is primarily for work tracking, but it can also serve as a queryable database for static reference data (glossaries, terminology) with some adaptations.

**For guidance on using bd for reference databases and static data, read:** [references/STATIC_DATA.md](references/STATIC_DATA.md)

## Statistics and Monitoring

**Check project health:**
```bash
bd stats
bd stats --json
```

Returns: total issues, open, in_progress, closed, blocked, ready, average lead time

**Find blocked work:**
```bash
bd blocked
bd blocked --json
```

Use stats to:
- Report progress to the user
- Identify bottlenecks
- Understand project velocity

## Advanced Features

### Issue Types

```bash
bd create "Title" -t task     # Standard work item (default)
bd create "Title" -t bug      # Defect or problem
bd create "Title" -t feature  # New functionality
bd create "Title" -t epic     # Large work with subtasks
bd create "Title" -t chore    # Maintenance or cleanup
```

### Priority Levels

```bash
bd create "Title" -p 0  # Highest priority (critical)
bd create "Title" -p 1  # High priority
bd create "Title" -p 2  # Normal priority (default)
bd create "Title" -p 3  # Low priority
```

### Bulk Operations

```bash
# Close multiple issues at once
bd close issue-1 issue-2 issue-3 --reason "Completed in sprint 5"

# Create multiple issues from a markdown file
bd create --file issues.md
```

### Dependency Visualization

```bash
# Show full dependency tree for an issue
bd dep tree issue-123

# Check for circular dependencies
bd dep cycles
```

### Built-in Help

```bash
# Quick start guide (comprehensive built-in reference)
bd quickstart

# Command-specific help
bd create --help
bd dep --help
```

## JSON Output

All bd commands support the `--json` flag for structured output:

```bash
bd ready --json
bd show issue-123 --json
bd list --status open --json
bd stats --json
```

Use JSON output when you need to parse results programmatically or extract specific fields.
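
For example, a minimal sketch of extracting issue IDs from `--json` output. The field names here are assumptions; inspect your bd version's actual output first. The output is simulated with a literal string so the pipeline is self-contained:

```shell
# Simulated `bd ready --json` output (field names assumed; verify locally).
json='[{"id":"bd-42","title":"Research proposal"},{"id":"bd-57","title":"Fix login"}]'

# Extract the IDs with python3 (jq works equally well: jq -r '.[].id').
printf '%s' "$json" | python3 -c 'import json, sys
for issue in json.load(sys.stdin):
    print(issue["id"])'
# prints:
# bd-42
# bd-57
```

In real use, replace the literal with `bd ready --json` and feed the IDs into `bd show`.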

## Troubleshooting

**If the `bd` command is not found:**
- Check the installation: `bd version`
- Verify PATH includes the bd binary location

**If issues seem lost:**
- Use `bd list` to see all issues
- Filter by status: `bd list --status closed`
- Closed issues remain in the database permanently

**If bd show can't find an issue by name:**
- `bd show` requires issue IDs, not issue titles
- Workaround: `bd list | grep -i "search term"` to find the ID first
- Then: `bd show issue-id` with the discovered ID
- For glossaries/reference databases where names matter more than IDs, consider keeping a markdown file alongside the database
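
That workaround can be sketched as a single pipeline. The `bd list` column layout is simulated here and assumed to start each row with the ID; check your real output:

```shell
# Simulated `bd list` output; the real column layout may differ.
bd_list='bd-42  open  Research analytics platform expansion proposal
bd-57  open  Fix login timeout'

# grep for the title text, take the first column as the ID.
id=$(printf '%s\n' "$bd_list" | grep -i "analytics" | awk '{ print $1 }')
echo "$id"   # prints: bd-42
# Then: bd show "$id"
```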

**If dependencies seem wrong:**
- Use `bd show issue-id` to see the full dependency tree
- Use `bd dep tree issue-id` for visualization
- Dependencies are directional: `bd dep add from-id to-id` means from-id blocks to-id
- See [references/DEPENDENCIES.md](references/DEPENDENCIES.md#common-mistakes)

**If the database seems out of sync:**
- bd auto-syncs JSONL after each operation (5s debounce)
- bd auto-imports JSONL when it is newer than the DB (after git pull)
- Manual operations: `bd export`, `bd import`

## Reference Files

Detailed information organized by topic:

| Reference | Read When |
|-----------|-----------|
| [references/BOUNDARIES.md](references/BOUNDARIES.md) | Need detailed decision criteria for bd vs TodoWrite, or integration patterns |
| [references/CLI_REFERENCE.md](references/CLI_REFERENCE.md) | Need complete command reference, flag details, or examples |
| [references/WORKFLOWS.md](references/WORKFLOWS.md) | Need step-by-step workflows with checklists for common scenarios |
| [references/DEPENDENCIES.md](references/DEPENDENCIES.md) | Need deep understanding of dependency types or relationship patterns |
| [references/ISSUE_CREATION.md](references/ISSUE_CREATION.md) | Need guidance on when to ask vs create issues, issue quality, or design vs acceptance criteria |
| [references/STATIC_DATA.md](references/STATIC_DATA.md) | Want to use bd for reference databases, glossaries, or static data instead of work tracking |

skills/code-review/.claude-plugin/plugin.json
@@ -0,0 +1,16 @@
{
  "name": "code-review",
  "description": "Run multi-lens code review on target files. Analyzes for bloat, smells, dead-code, redundancy, security, error-handling, coupling, boundaries, and evolvability.",
  "version": "1.0.0",
  "author": {
    "name": "dan"
  },
  "license": "MIT",
  "keywords": [
    "code-review",
    "lenses",
    "security",
    "quality",
    "static-analysis"
  ]
}

skills/code-review/skills/code-review.md
@@ -0,0 +1,187 @@

---
name: code-review
description: Run multi-lens code review on target files. Analyzes for bloat, smells, dead-code, redundancy, security, error-handling, coupling, boundaries, and evolvability. Interactive - asks before filing issues.
---

# Code Review Skill

Run focused code analysis using multiple review lenses. Findings are synthesized and presented for your approval before any issues are filed.

## When to Use

Invoke this skill when:
- "Review this code"
- "Run code review on src/"
- "Check this file for issues"
- "Analyze the codebase"
- `/code-review`

## Arguments

The skill accepts an optional target:
- `/code-review` - Reviews recently changed files (git diff)
- `/code-review src/` - Reviews a specific directory
- `/code-review src/main.py` - Reviews a specific file
- `/code-review --diff HEAD~5` - Reviews changes in the last 5 commits

## Available Lenses

Lenses are focused review prompts located in `~/.config/lenses/code/`:

| Lens | Focus |
|------|-------|
| `bloat.md` | File size, function length, complexity, SRP violations |
| `smells.md` | Code smells, naming, control flow, readability |
| `dead-code.md` | Unused exports, zombie code, unreachable paths |
| `redundancy.md` | Duplication, parallel systems, YAGNI violations |
| `security.md` | Injection, auth gaps, secrets, crypto misuse |
| `error-handling.md` | Swallowed errors, missing handling, failure modes |
| `coupling.md` | Tight coupling, circular deps, layer violations |
| `boundaries.md` | Layer violations, dependency direction, domain cohesion |
| `evolvability.md` | Hard-coded policies, missing seams, change amplification |

## Workflow

### Phase 1: Target Selection
1. Parse the target argument (default: git diff of uncommitted changes)
2. Identify files to review
3. Show the file list to the user for confirmation

### Phase 2: Lens Execution
For each lens, analyze the target files:

1. Read the lens prompt from `~/.config/lenses/code/{lens}.md`
2. Apply the lens to the target code
3. Collect findings in a structured format

**Finding Format:**
```
[TAG] <severity:HIGH|MED|LOW> <file:line>
Issue: <one-line description>
Suggest: <actionable fix>
Evidence: <why this matters>
```
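
A filled-in instance of the format (values illustrative):

```
[SECURITY] HIGH src/api.py:52
Issue: SQL query built by string-concatenating user input
Suggest: Switch to a parameterized query
Evidence: Request parameter reaches the query string unescaped
```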

### Phase 3: Synthesis
After all lenses complete:
1. Deduplicate overlapping findings
2. Group related issues
3. Rank by severity and confidence
4. Generate a summary report

**Optional:** If the user requests consensus (`--orch` or asks for it):
```bash
orch consensus "<findings summary>" gpt gemini
```
Use this to filter false positives and prioritize.

### Phase 4: Interactive Review
Present findings to the user:
1. Show an executive summary (counts by severity)
2. List the top issues with details
3. Ask: "Which findings should I file as issues?"

**The user can respond:**
- "File all" - creates beads issues for everything
- "File HIGH only" - filters by severity
- "File 1, 3, 5" - specific findings
- "None" - just keep the report
- "Let me review first" - show full details

### Phase 5: Issue Filing (if requested)
For approved findings:
1. Create beads issues with `bd create`
2. Include the lens tag, severity, and file location
3. Link related issues if applicable

## Output

The skill produces:
1. **Console summary** - immediate feedback
2. **Beads issues** - if the user approves filing

## Example Session

```
User: /code-review src/cli.py

Agent: I'll review src/cli.py with 9 lenses.

[Running bloat lens...]
[Running smells lens...]
[Running dead-code lens...]
[Running redundancy lens...]
[Running security lens...]
[Running error-handling lens...]
[Running coupling lens...]
[Running boundaries lens...]
[Running evolvability lens...]

## Review Summary: src/cli.py

| Severity | Count |
|----------|-------|
| HIGH     | 1     |
| MED      | 3     |
| LOW      | 2     |

### Top Issues

1. [BLOAT] HIGH src/cli.py:145-280
   Issue: Function `handle_request` is 135 lines
   Suggest: Extract into smaller functions by responsibility

2. [SMELL] MED src/cli.py:89
   Issue: Magic number 3600 without explanation
   Suggest: Extract to a named constant SECONDS_PER_HOUR

3. [DEAD] MED src/cli.py:12
   Issue: Import `unused_module` has no references
   Suggest: Remove the unused import

Would you like me to file any of these as beads issues?
Options: all, HIGH only, specific numbers (1,2,3), or none
```

## Configuration

The skill respects `.code-review.yml` in the repo root if present:

```yaml
# Optional configuration
ignore_paths:
  - vendor/
  - node_modules/
  - "*.generated.*"

severity_defaults:
  bloat: MED
  dead-code: LOW

max_file_size_kb: 500  # Skip files larger than this
```

## Guidelines

1. **Be Thorough But Focused** - Each lens checks one concern deeply
2. **Evidence Over Opinion** - Cite specific lines and patterns
3. **Actionable Suggestions** - Every finding needs a clear fix
4. **Respect User Time** - Summarize first, details on request
5. **No Spam** - Don't file issues without explicit approval

## Process Checklist

1. [ ] Parse target (files/directory/diff)
2. [ ] Confirm scope with user if large (>10 files)
3. [ ] Run each lens, collecting findings
4. [ ] Deduplicate and rank findings
5. [ ] Present summary to user
6. [ ] Ask which findings to file
7. [ ] Create beads issues for approved findings
8. [ ] Report issue IDs created

## Integration

- **Lenses**: Read from `~/.config/lenses/code/*.md`
- **Issue Tracking**: Uses `bd create` for beads issues
- **Orch**: Optional consensus filtering via `orch consensus`

skills/doc-review/.claude-plugin/plugin.json
@@ -0,0 +1,15 @@
{
  "name": "doc-review",
  "description": "Lint markdown documentation for AI agent consumption using deterministic rules and LLM semantic checks.",
  "version": "1.0.0",
  "author": {
    "name": "dan"
  },
  "license": "MIT",
  "keywords": [
    "documentation",
    "lint",
    "markdown",
    "ai-agents"
  ]
}

skills/doc-review/skills/doc-review.md
@@ -0,0 +1,114 @@

---
name: doc-review
description: Lint markdown documentation for AI agent consumption using deterministic rules + LLM semantic checks
---

# doc-review - Documentation Quality for AI Agents

Evaluate documentation against rubrics optimized for AI "ingestibility" - making docs work well when consumed by LLMs and AI agents.

## When to Use

Invoke this skill when:
- Writing or updating AGENTS.md, CLAUDE.md, or similar agent-facing docs
- Before committing documentation changes
- To validate that docs follow AI-friendly patterns
- Reviewing existing docs for clarity and structure

## Architecture

```
Stage 1: Deterministic Rules (fast, free)
├── 12 pattern-based checks
├── Runs instantly, no API cost
└── Catches ~40% of issues

Stage 2: LLM Semantic Checks (--llm flag)
├── 7 contextual detectors
├── Evaluates meaning, not just patterns
└── Only runs when explicitly requested
```

## Invocation

```bash
# Check a single file
doc-review README.md

# Check multiple files
doc-review docs/*.md

# Apply suggested fixes
doc-review --fix README.md

# Enable LLM semantic checks
doc-review --llm AGENTS.md

# Use a specific model for LLM checks
doc-review --llm --model gpt-4o README.md

# Output as SARIF for CI integration
doc-review --format sarif docs/ > results.sarif

# Output as JSON for programmatic use
doc-review --format json README.md
```

## What It Checks

### Deterministic Rules (fast, free)

| Rule | What it catches |
|------|-----------------|
| code-lang | Code blocks without language tags |
| heading-hierarchy | Skipped heading levels (H2 → H4) |
| generic-headings | Vague headings ("Overview", "Introduction") |
| hedging-lang | Uncertain language ("might", "consider") |
| filler-words | Unnecessary verbosity |
| config-precision | Vague versions ("Python 3.x") |
| backward-refs | "As mentioned above" references |
| terminology | Inconsistent term usage |
| security-patterns | Hardcoded secrets, dangerous patterns |
| json-yaml-validation | Invalid JSON/YAML in code blocks |
| broken-table | Malformed markdown tables |
| unclosed-fence | Unclosed code fences |

### LLM Detectors (--llm flag)

| Detector | What it catches |
|----------|-----------------|
| contextual-independence | Sections that don't stand alone |
| prerequisite-gap | Missing setup/context |
| ambiguity | Unclear instructions |
| semantic-drift | Heading/content mismatch |
| negative-constraint | "Don't do X" without alternatives |
| state-conflict | Contradictory instructions |
| terminology-pollution | Inconsistent naming |

## Output Formats

- **text** (default): Human-readable with line numbers
- **json**: Structured output for programmatic use
- **sarif**: SARIF 2.1.0 for CI integration (GitHub, VS Code)

## Design Philosophy

doc-review optimizes for **recall over precision**. Findings are candidates for review, not errors:
- Agents verify cheaply - checking a flagged finding costs seconds
- False negatives compound - missed issues persist
- Suppression is explicit - `<!-- doc-review: ignore rule-id -->` documents intent
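
For instance, a suppression comment placed in the doc being linted (a sketch; placement relative to the flagged line may matter, so verify against doc-review's behavior):

```markdown
<!-- doc-review: ignore hedging-lang -->
You might want to enable verbose logging while debugging.
```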

## Example Session

```
$ doc-review CLAUDE.md

CLAUDE.md:45: [code-lang] Code block missing language tag
CLAUDE.md:89: [hedging-lang] Uncertain language: "you might want to"
CLAUDE.md:112: [backward-refs] Backward reference: "as mentioned above"

3 issues found (2 HIGH, 1 MED)

$ doc-review --fix CLAUDE.md
Applied 3 fixes to CLAUDE.md
```

skills/niri-window-capture/.claude-plugin/plugin.json
@@ -0,0 +1,15 @@
{
  "name": "niri-window-capture",
  "description": "Invisibly capture screenshots of any window across all workspaces using niri compositor.",
  "version": "1.0.0",
  "author": {
    "name": "dan"
  },
  "license": "MIT",
  "keywords": [
    "niri",
    "wayland",
    "screenshot",
    "window-capture"
  ]
}

skills/niri-window-capture/skills/niri-window-capture.md
@@ -0,0 +1,184 @@

---
name: niri-window-capture
description: Invisibly capture screenshots of any window across all workspaces using niri compositor
---

# Niri Window Capture

⚠️ **SECURITY NOTICE**: This skill can capture ANY window invisibly, including windows on other workspaces. All captures are logged to the systemd journal. See [SECURITY.md](./SECURITY.md) for details.

Capture screenshots of windows from any workspace without switching views or causing visual changes. Uses niri's direct window rendering capability to access window buffers invisibly.

## When to Use

Invoke this skill when the user requests:
- "Show me what's in the focused window"
- "Capture the Firefox window"
- "Show me window X"
- "Find the window with [content]" (capture all, analyze)
- "What's on workspace 2?" (capture windows from a specific workspace)

## How It Works

The niri compositor maintains window buffers for all windows regardless of workspace visibility. The `screenshot-window` action renders individual windows directly without compositing to the screen.

**Key insight**: Windows from inactive workspaces CAN be captured invisibly because their buffers exist in memory even when not displayed.

## Helper Scripts

### capture-focused.sh

**Purpose**: Capture the currently focused window

**Usage**:
```bash
./scripts/capture-focused.sh
```

**Output**: Path to the screenshot file in `~/Pictures/Screenshots/`

**Example**:
```bash
SCREENSHOT=$(./scripts/capture-focused.sh)
# Now analyze: "What's in this screenshot?"
```

### capture-by-title.sh

**Purpose**: Find and capture a window by partial title match (case-insensitive)

**Usage**:
```bash
./scripts/capture-by-title.sh "search-term"
```

**Output**: Path to the screenshot file

**Example**:
```bash
# Capture any Firefox window
SCREENSHOT=$(./scripts/capture-by-title.sh "Firefox")

# Capture a terminal with specific text in its title
SCREENSHOT=$(./scripts/capture-by-title.sh "error")
```

## Direct niri Commands

For custom workflows, use niri commands directly:

**List all windows**:
```bash
niri msg --json windows | jq -r '.[] | "\(.id) - \(.title) - WS:\(.workspace_id)"'
```

**Capture a specific window by ID**:
```bash
niri msg action screenshot-window --id <WINDOW_ID> --write-to-disk true
# Screenshot saved to ~/Pictures/Screenshots/
```

**Get the focused window**:
```bash
niri msg --json focused-window | jq -r '.id'
```

## Common Workflows

### Find window with specific content

```bash
# Get all window IDs
WINDOW_IDS=$(niri msg --json windows | jq -r '.[].id')

# Capture each window
for id in $WINDOW_IDS; do
  niri msg action screenshot-window --id "$id" --write-to-disk true
  sleep 0.1
  SCREENSHOT=$(ls -t ~/Pictures/Screenshots/*.png | head -1)
  # Analyze screenshot for content
  # If found, return this one
done
```

### Capture all windows on a specific workspace

```bash
# Get windows on workspace 2
WINDOW_IDS=$(niri msg --json windows | jq -r '.[] | select(.workspace_id == 2) | .id')

# Capture each
for id in $WINDOW_IDS; do
  niri msg action screenshot-window --id "$id" --write-to-disk true
  sleep 0.1
done
```

### Capture window by app_id

```bash
# Find a Firefox window
WINDOW_ID=$(niri msg --json windows | jq -r '.[] | select(.app_id == "firefox") | .id' | head -1)

# Capture it
niri msg action screenshot-window --id "$WINDOW_ID" --write-to-disk true
```

## Guidelines

1. **No visual disruption**: All captures are invisible to the user - no workspace switching, no overview mode, no flicker

2. **Works across workspaces**: Can capture windows from any workspace regardless of which is currently active

3. **Always add a small delay**: Add `sleep 0.1` after the screenshot command before looking for the file (the filesystem needs time to write)

4. **Screenshot location**: Files go to `~/Pictures/Screenshots/Screenshot from YYYY-MM-DD HH-MM-SS.png`

5. **Find the latest screenshot**: `ls -t ~/Pictures/Screenshots/*.png | head -1`

6. **Metadata available**: Each window has: id, title, app_id, workspace_id, is_focused, is_urgent, pid

7. **Audit logging**: All captures are logged to the systemd journal via `logger -t niri-capture`

8. **Clipboard behavior**: Screenshots are ALWAYS copied to the clipboard (niri hardcoded, cannot disable)

## Security

**READ [SECURITY.md](./SECURITY.md) BEFORE USING THIS SKILL**

Key points:
- Captures are invisible - the user won't know you're capturing other workspaces
- All captures are logged to the systemd journal: `journalctl --user -t niri-capture`
- Screenshots are always copied to the clipboard (cannot disable)
- Protect sensitive apps via niri `block-out-from "screen-capture"` rules

## Requirements

- niri compositor (verified working with niri 25.08)
- jq (for JSON parsing)
- logger (from util-linux, for the audit trail)
- Configured screenshot-path in the niri config (default: `~/Pictures/Screenshots/`)

## Technical Details

**How it works internally**:
- niri uses smithay's `Window` type, which references Wayland surface buffers
- Applications continuously render to their surface buffers even when not visible
- The `screenshot-window` action calls `mapped.render()`, which renders the window buffer directly
- No compositing to the output is required - direct buffer-to-PNG conversion
- The result is saved to a file or the clipboard depending on the `--write-to-disk` flag

**Limitations**:
- Only works with the niri compositor (uses niri-specific IPC)
- The window must exist (can't capture closed windows)
- A small delay (0.1s) is needed for the filesystem write

## Error Handling

- No focused window: Scripts exit with an error message
- Window not found: Scripts exit with a descriptive error
- Invalid window ID: The niri action fails silently (check whether the file was created)

## Examples

See the `examples/` directory for sample usage patterns and expected outputs.

skills/ops-review/.claude-plugin/plugin.json
@@ -0,0 +1,16 @@
{
  "name": "ops-review",
  "description": "Run multi-lens ops review on infrastructure files. Analyzes Nix, shell scripts, Docker, CI/CD for security, shell-safety, and operational concerns.",
  "version": "1.0.0",
  "author": {
    "name": "dan"
  },
  "license": "MIT",
  "keywords": [
    "ops-review",
    "infrastructure",
    "nix",
    "devops",
    "security"
  ]
}

skills/ops-review/skills/ops-review.md
@@ -0,0 +1,246 @@

---
name: ops-review
description: Run multi-lens ops review on infrastructure files. Analyzes Nix, shell scripts, Docker, CI/CD for secrets, shell-safety, blast-radius, privilege, idempotency, supply-chain, observability, nix-hygiene, resilience, and orchestration. Interactive - asks before filing issues.
---

# Ops Review Skill

Run focused infrastructure analysis using multiple review lenses. Uses a linter-first hybrid approach: static tools for syntax, LLM for semantics. Findings are synthesized and presented for your approval before any issues are filed.

## When to Use

Invoke this skill when:
- "Review my infrastructure"
- "Run ops review on bin/"
- "Check this script for issues"
- "Analyze my Nix configs"
- `/ops-review`

## Arguments

The skill accepts an optional target:
- `/ops-review` - Reviews recently changed ops files (git diff)
- `/ops-review bin/` - Reviews a specific directory
- `/ops-review deploy.sh` - Reviews a specific file
- `/ops-review --quick` - Phase 1 lenses only (fast, <30s)

## Target Artifacts

| Category | File Patterns |
|----------|---------------|
| Nix/NixOS | `*.nix`, `flake.nix`, `flake.lock` |
| Shell Scripts | `*.sh`, files with `#!/bin/bash` shebang |
| Python Automation | `*.py` in ops contexts (scripts/, setup/, deploy/) |
| Container Configs | `Dockerfile`, `docker-compose.yml`, `*.dockerfile` |
| CI/CD | `.github/workflows/*.yml`, `.gitea/workflows/*.yml` |
| Service Configs | `*.service`, `*.timer`, systemd units |
| Secrets | `.sops.yaml`, `secrets.yaml`, SOPS-encrypted files |

## Architecture: Linter-First Hybrid

```
Stage 1: Static Tools (fast, deterministic)
├── shellcheck for shell scripts
├── statix + deadnix for Nix
├── hadolint for Dockerfiles
└── yamllint for YAML configs

Stage 2: LLM Analysis (semantic, contextual)
├── Interprets tool output in context
├── Finds logic bugs tools miss
├── Synthesizes cross-file issues
└── Suggests actionable fixes
```

## Available Lenses

Lenses are focused review prompts located in `~/.config/lenses/ops/`:

### Phase 1: Core Safety (--quick mode)

| Lens | Focus |
|------|-------|
| `secrets.md` | Hardcoded credentials, SOPS issues, secrets in logs |
| `shell-safety.md` | set -euo pipefail, quoting, error handling (shellcheck-backed) |
| `blast-radius.md` | Destructive ops, missing dry-run, no rollback |
| `privilege.md` | Unnecessary sudo, root containers, chmod 777 |
### Phase 2: Reliability
|
||||
|
||||
| Lens | Focus |
|
||||
|------|-------|
|
||||
| `idempotency.md` | Safe re-run, existence checks, atomic operations |
|
||||
| `supply-chain.md` | Unpinned versions, missing SRI hashes, action SHAs |
|
||||
| `observability.md` | Silent failures, missing health checks, no logging |
|
||||
|
||||
### Phase 3: Architecture
|
||||
|
||||
| Lens | Focus |
|
||||
|------|-------|
|
||||
| `nix-hygiene.md` | Dead code, anti-patterns, module boundaries (statix-backed) |
|
||||
| `resilience.md` | Timeouts, retries, graceful shutdown, resource limits |
|
||||
| `orchestration.md` | Execution order, prerequisites, implicit coupling |
|
||||
|
||||
## Workflow
|
||||
|
||||
### Phase 1: Target Selection
|
||||
1. Parse the target argument (default: git diff of uncommitted ops files)
|
||||
2. Identify files by category (Nix, shell, Docker, etc.)
|
||||
3. Show file list to user for confirmation
|
||||
|
||||
### Phase 2: Pre-Pass (Static Tools)
|
||||
Run appropriate linters based on file type:
|
||||
```bash
|
||||
# Shell scripts
|
||||
shellcheck --format=json script.sh
|
||||
|
||||
# Nix files
|
||||
statix check --format=json file.nix
|
||||
deadnix --output-format=json file.nix
|
||||
|
||||
# Dockerfiles
|
||||
hadolint --format json Dockerfile
|
||||
```
|
||||
|
||||
### Phase 3: Lens Execution
|
||||
For each lens, analyze the target files with tool output in context:
|
||||
|
||||
1. Read the lens prompt from `~/.config/lenses/ops/{lens}.md`
|
||||
2. Include relevant linter output as evidence
|
||||
3. Apply the lens to find semantic issues tools miss
|
||||
4. Collect findings in structured format
|
||||
|
||||
**Finding Format:**
|
||||
```
|
||||
[TAG] <severity:HIGH|MED|LOW> <file:line>
|
||||
Issue: <what's wrong>
|
||||
Suggest: <how to fix>
|
||||
Evidence: <why it matters>
|
||||
```
|
||||
|
||||
### Phase 4: Synthesis
|
||||
After all lenses complete:
|
||||
1. Deduplicate overlapping findings (same issue from multiple lenses)
|
||||
2. Group related issues
|
||||
3. Rank by severity and confidence
|
||||
4. Generate summary report
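The deduplication step can be sketched as keying each finding on its location and normalized issue text. This is an illustrative sketch only, not the skill's actual implementation; the field names mirror the Finding Format above:

```python
def dedupe_findings(findings: list[dict]) -> list[dict]:
    """Collapse findings that multiple lenses reported for the same spot.

    Each finding is a dict like:
      {"lens": "secrets", "severity": "HIGH", "file": "deploy.sh",
       "line": 45, "issue": "token in argv"}
    """
    order = {"HIGH": 0, "MED": 1, "LOW": 2}
    seen: dict[tuple, dict] = {}
    for f in findings:
        # same file, same line, same issue text (case/whitespace-insensitive)
        key = (f["file"], f["line"], f["issue"].strip().lower())
        if key in seen:
            kept = seen[key]
            # keep the higher severity; record the extra lens as corroboration
            if order[f["severity"]] < order[kept["severity"]]:
                kept["severity"] = f["severity"]
            kept.setdefault("lenses", [kept["lens"]]).append(f["lens"])
        else:
            seen[key] = dict(f)
    return list(seen.values())
```

A finding corroborated by several lenses keeps the highest severity any lens assigned, which also helps the confidence ranking in step 3.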

### Phase 5: Interactive Review
Present findings to the user:
1. Show an executive summary (counts by severity)
2. List top issues with details
3. Ask: "Which findings should I file as issues?"

**User can respond:**
- "File all" - creates beads issues for everything
- "File HIGH only" - filters by severity
- "File 1, 3, 5" - specific findings
- "None" - just keep the report
- "Let me review first" - show full details

### Phase 6: Issue Filing (if requested)
For approved findings:
1. Create beads issues with `bd create`
2. Include the lens tag, severity, and file location
3. Link related issues if applicable

## Output

The skill produces:
1. **Console summary** - immediate feedback
2. **Beads issues** - if the user approves filing

## Severity Rubric

| Severity | Criteria |
|----------|----------|
| **HIGH** | Exploitable vulnerability, data loss risk, will break on next run |
| **MED** | Reliability issue, tech debt, violation of best practice |
| **LOW** | Polish, maintainability, defense-in-depth improvement |

Context matters: the same issue may be HIGH in production and LOW in a homelab.

## Example Session

```
User: /ops-review bin/deploy.sh

Agent: I'll review bin/deploy.sh with ops lenses.

[Running shellcheck...]
[Running secrets lens...]
[Running shell-safety lens...]
[Running blast-radius lens...]
[Running privilege lens...]

## Review Summary: bin/deploy.sh

| Severity | Count |
|----------|-------|
| HIGH | 2 |
| MED | 3 |
| LOW | 1 |

### Top Issues

1. [SECRETS] HIGH bin/deploy.sh:45
   Issue: API token passed as a command-line argument (visible in the process list)
   Suggest: Use an environment variable or a file with restricted permissions

2. [BLAST-RADIUS] HIGH bin/deploy.sh:78
   Issue: rm -rf with a variable that could be empty
   Suggest: Add a guard: [ -n "$DIR" ] || exit 1

3. [SHELL-SAFETY] MED bin/deploy.sh:12
   Issue: Missing 'set -euo pipefail'
   Suggest: Add at the top of the script for fail-fast behavior

Would you like me to file any of these as beads issues?
Options: all, HIGH only, specific numbers (1,2,3), or none
```
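The guard suggested in finding 2 can also be wrapped in a small helper; this is an illustrative sketch (the function name is hypothetical, not part of the reviewed script):

```shell
# Refuse to run a destructive rm when the target variable is empty or unset.
safe_rm_rf() {
  local dir="$1"
  [ -n "$dir" ] || { echo "refusing rm -rf: empty target" >&2; return 1; }
  rm -rf -- "$dir"
}
```

The `--` also protects against targets that begin with a dash being parsed as options.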

## Quick Mode

Use `--quick` for fast pre-commit checks:
- Runs only Phase 1 lenses (secrets, shell-safety, blast-radius, privilege)
- Target: <30 seconds
- Ideal for CI gates

## Cross-File Awareness

Before review, build a reference map:
- **Shell**: `source` and `.` includes, invoked scripts
- **Nix**: imports, flake inputs
- **CI**: referenced scripts, env vars, secret names
- **Compose**: service dependencies, volumes, env files
- **systemd**: ExecStart targets, dependencies

This enables finding issues in the seams between components.
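For the shell entry above, a rough heuristic for extracting `source` targets might look like this; it is a sketch, not a full parser (it misses dynamic paths and quoted arguments):

```shell
# List files sourced by a shell script via `source` or `.` lines.
list_sourced() {
  grep -Eo '^[[:space:]]*(source|\.)[[:space:]]+[^[:space:]]+' "$1" \
    | awk '{print $2}'
}
```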

## Guidelines

1. **Linter-First** - Always run static tools before LLM analysis
2. **Evidence Over Opinion** - Cite linter output and specific lines
3. **Actionable Suggestions** - Every finding needs a clear fix
4. **Respect User Time** - Summarize first, details on request
5. **No Spam** - Don't file issues without explicit approval
6. **Context Matters** - Homelab ≠ production severity

## Process Checklist

1. [ ] Parse the target (files/directory/diff)
2. [ ] Confirm scope with the user if large (>10 files)
3. [ ] Run static tools (shellcheck, statix, etc.)
4. [ ] Build a reference map for cross-file awareness
5. [ ] Run each lens, collecting findings
6. [ ] Deduplicate and rank findings
7. [ ] Present a summary to the user
8. [ ] Ask which findings to file
9. [ ] Create beads issues for approved findings
10. [ ] Report the issue IDs created

## Integration

- **Lenses**: Read from `~/.config/lenses/ops/*.md`
- **Issue Tracking**: Uses `bd create` for beads issues
- **Static Tools**: shellcheck, statix, deadnix, hadolint
skills/playwright-visit/.claude-plugin/plugin.json (new file, 16 lines)

@@ -0,0 +1,16 @@
{
  "name": "playwright-visit",
  "description": "Visit web pages using Playwright browser automation. Capture screenshots, extract text, get rendered HTML, or save as PDF.",
  "version": "1.0.0",
  "author": {
    "name": "dan"
  },
  "license": "MIT",
  "keywords": [
    "playwright",
    "browser",
    "screenshot",
    "web-scraping",
    "pdf"
  ]
}
skills/playwright-visit/skills/playwright-visit.md (new file, 63 lines)

@@ -0,0 +1,63 @@
---
name: playwright-visit
description: Visit web pages using Playwright browser automation. Capture screenshots, extract text, get rendered HTML, or save as PDF.
---

# Playwright Visit

Browser automation skill using Playwright to visit web pages and extract content. Uses headless Chromium with a fresh profile (no cookies/history from the user's browser).

## When to Use

- "Take a screenshot of [url]"
- "Get the text content from [webpage]"
- "Capture [url] as a screenshot"
- "Extract the rendered HTML from [page]"
- "Save [url] as a PDF"
- When WebFetch fails on JavaScript-heavy sites

## Process

1. Identify the URL and desired output format from the user request
2. Run the appropriate helper script command
3. Return the result (file path for screenshot/PDF, content for text/HTML)

## Helper Scripts

### visit.py

**Screenshot** - Capture the page as PNG:
```bash
./scripts/visit.py screenshot "https://example.com" /tmp/screenshot.png
```

**Text** - Extract visible text content:
```bash
./scripts/visit.py text "https://example.com"
```

**HTML** - Get rendered HTML (after JavaScript):
```bash
./scripts/visit.py html "https://example.com"
```

**PDF** - Save the page as PDF:
```bash
./scripts/visit.py pdf "https://example.com" /tmp/page.pdf
```

**Options:**
- `--wait <ms>` - Wait after page load (default: 1000ms)
- `--full-page` - Capture the full scrollable page (screenshot only)

## Requirements

- NixOS with `python312Packages.playwright` in the devShell
- System chromium at `/run/current-system/sw/bin/chromium`
- Run from the skill directory or use `nix develop` first

## Notes

- Uses a fresh browser profile each run (no login state)
- Headless by default
- For authenticated pages, consider using the `storage_state` parameter (not yet implemented)
skills/screenshot-latest/.claude-plugin/plugin.json (new file, 14 lines)

@@ -0,0 +1,14 @@
{
  "name": "screenshot-latest",
  "description": "Find and analyze the most recent screenshot without typing paths.",
  "version": "1.0.0",
  "author": {
    "name": "dan"
  },
  "license": "MIT",
  "keywords": [
    "screenshot",
    "image",
    "analysis"
  ]
}
skills/screenshot-latest/skills/screenshot-latest.md (new file, 83 lines)

@@ -0,0 +1,83 @@
---
name: screenshot-latest
description: Find and analyze the most recent screenshot without typing paths
---

# Screenshot Latest

Automatically locates the most recent screenshot file so the user doesn't have to type `~/Pictures/Screenshots/filename.png` every time.

## When to Use

Invoke this skill when the user requests:
- "Look at my last screenshot"
- "Analyze my latest screenshot"
- "What's in my recent screenshot"
- "Show me my screenshot"
- Any variation referencing "screenshot" + "latest/last/recent"

## Context Gathering

Verify that the screenshot directory exists and contains files:
```bash
ls -t ~/Pictures/Screenshots/*.{png,jpg,jpeg} 2>/dev/null | head -5
```

If the directory doesn't exist or is empty, inform the user clearly.

## Process

1. **Find Latest Screenshot**
   - Run the helper script: `./scripts/find-latest.sh`
   - The script returns the absolute path to the most recent screenshot file
   - Handle errors gracefully (missing directory, no files, permission issues)

2. **Analyze the Screenshot**
   - Use the returned file path with your image analysis capability
   - Read and analyze the image content
   - Respond to the user's specific question about the screenshot

3. **Error Handling**
   - No screenshots found: "No screenshots found in ~/Pictures/Screenshots/"
   - Directory doesn't exist: "Screenshots directory not found at ~/Pictures/Screenshots/"
   - Permission denied: "Cannot access screenshots directory (permission denied)"

## Helper Scripts

### find-latest.sh

**Purpose**: Finds the most recent screenshot file by modification time

**Usage**:
```bash
./scripts/find-latest.sh
```

**Output**: Absolute path to the most recent screenshot, or an empty string if none found
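The core of such a script boils down to a single `ls -t` pipeline. A minimal sketch follows; the actual `find-latest.sh` may differ, and the directory parameter here is added for illustration only:

```shell
# Print the newest screenshot in a directory (defaults to ~/Pictures/Screenshots).
find_latest() {
  local dir="${1:-$HOME/Pictures/Screenshots}"
  local newest
  # ls -t sorts by modification time, newest first; errors (no matches,
  # missing directory) are suppressed so the caller can test the result
  newest=$(ls -t -- "$dir"/*.png "$dir"/*.jpg "$dir"/*.jpeg 2>/dev/null | head -n 1)
  [ -n "$newest" ] || return 1
  printf '%s\n' "$newest"
}
```

Because the directory argument is already absolute, the printed path is too, matching the Output contract above.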

## Guidelines

1. **Simplicity**: This skill does one thing - finds the latest screenshot file
2. **No Configuration**: Uses the hardcoded ~/Pictures/Screenshots (can be enhanced later if needed)
3. **Fast Execution**: Should complete in <1 second even with many files
4. **Clear Errors**: Always explain why a screenshot couldn't be found

## Requirements

- Bash 4.0+
- Standard Unix tools (ls, head)
- Screenshots directory at ~/Pictures/Screenshots
- Supported formats: PNG, JPG, JPEG

## Output Format

- Returns: Absolute file path to the latest screenshot
- No terminal output except errors
- Agent uses the returned path for image analysis

## Notes

- Uses file modification time to determine "latest"
- Does not support custom directories (intentionally simple)
- Does not support "Nth screenshot" or time filtering (YAGNI)
- Future enhancement: Support custom directories if users request it
skills/spec-review/.claude-plugin/plugin.json (new file, 15 lines)

@@ -0,0 +1,15 @@
{
  "name": "spec-review",
  "description": "Review spec-kit specifications and plans using multi-model AI consensus (orch) before phase transitions.",
  "version": "1.0.0",
  "author": {
    "name": "dan"
  },
  "license": "MIT",
  "keywords": [
    "spec-kit",
    "review",
    "orch",
    "consensus"
  ]
}
skills/spec-review/skills/spec-review.md (new file, 80 lines)

@@ -0,0 +1,80 @@
---
name: spec-review
description: Review spec-kit specifications and plans using multi-model AI consensus (orch) before phase transitions. Use when working with spec-kit projects and you need to validate specs, evaluate architecture decisions, or gate phase transitions.
---

# Spec Review

Multi-model review of spec-kit artifacts. Uses orch to get diverse AI perspectives that catch blind spots a single model might miss.

## When to Use

- Before `/speckit.plan` - review the spec for completeness
- Before `/speckit.tasks` - evaluate architecture decisions in the plan
- Before `bd create` - review the task breakdown before committing to issues
- At any phase transition - go/no-go gate check

## Quick Start

**Review a spec**:
```bash
orch consensus --mode critique --temperature 0.8 \
  --file specs/{branch}/spec.md \
  "$(cat ~/.claude/skills/spec-review/prompts/spec-critique.txt)" \
  flash deepseek gpt
```

**Review a plan** (devil's advocate):
```bash
orch consensus --mode open --temperature 1.0 \
  --file specs/{branch}/plan.md \
  "$(cat ~/.claude/skills/spec-review/prompts/plan-review.txt)" \
  flash:for deepseek:against gpt:neutral
```

**Review tasks** (before bd create):
```bash
orch consensus --mode critique --temperature 0.7 \
  --file specs/{branch}/tasks.md \
  "$(cat ~/.claude/skills/spec-review/prompts/tasks-review.txt)" \
  flash deepseek gpt
```

**Gate check**:
```bash
orch consensus --mode vote --temperature 0.5 \
  --file specs/{branch}/spec.md \
  "$(cat ~/.claude/skills/spec-review/prompts/gate-check.txt)" \
  flash deepseek gpt
```

## Detailed Processes

- [REVIEW_SPEC.md](REVIEW_SPEC.md) - Full spec review process
- [REVIEW_PLAN.md](REVIEW_PLAN.md) - Plan evaluation with stances
- [REVIEW_TASKS.md](REVIEW_TASKS.md) - Task breakdown review before bd
- [GATE_CHECK.md](GATE_CHECK.md) - Go/no-go consensus

## Model Selection

**Default (fast, cheap, diverse)**:
- `flash` - Gemini 2.5 Flash
- `deepseek` - DeepSeek v3
- `gpt` - GPT 5.2

**Thorough review**:
- `gemini` - Gemini 3 Pro
- `r1` - DeepSeek R1 (reasoning)

## Why Multi-Model?

Different models catch different issues:
- Different training data → different blind spots
- Stances (for/against/neutral) force opposing viewpoints
- Higher temperature → more divergent thinking

## Requirements

- `orch` CLI in PATH
- API keys: GEMINI_API_KEY, OPENAI_API_KEY, OPENROUTER_KEY
- Working in a spec-kit project (has a `specs/` directory)
skills/tufte-press/.claude-plugin/plugin.json (new file, 15 lines)

@@ -0,0 +1,15 @@
{
  "name": "tufte-press",
  "description": "Generate Tufte-inspired study card JSON from conversation, build PDF, and print.",
  "version": "1.0.0",
  "author": {
    "name": "dan"
  },
  "license": "MIT",
  "keywords": [
    "tufte",
    "study-cards",
    "pdf",
    "print"
  ]
}
skills/tufte-press/skills/tufte-press.md (new file, 340 lines)

@@ -0,0 +1,340 @@
---
name: tufte-press
description: Generate Tufte-inspired study card JSON from conversation, build PDF, and print
---

# Tufte Press Study Card Generator

Generate structured JSON study cards from conversation context, convert them to beautifully typeset PDFs with Tufte-inspired layouts, and optionally send them to a printer.

## When to Use

Invoke this skill when the user requests:
- "Create a study card about [topic]"
- "Generate a tufte-press card for [subject]"
- "Make a printable study guide for [concept]"
- "Build a study card and print it"
- "Convert our conversation to a study card"

## Process

### Step 1: Extract Learning Content from Conversation

Review the conversation history to identify:
- **Topic**: Main subject matter
- **Key concepts**: Core ideas discussed
- **Prerequisites**: Background knowledge mentioned
- **Examples**: Concrete illustrations provided
- **Technical details**: Specific facts, equations, or procedures

Ask clarifying questions if needed:
- What depth level? (intro/intermediate/advanced)
- How many pages? (1-3 recommended)
- Include practice exercises?
- Any specific citations to include?
- Target audience?

### Step 2: Generate JSON Following the Strict Schema

**You are now the educator-typesetter.** Generate valid JSON that compiles to LaTeX/PDF without edits.

**Core Principles:**
- Output must be valid JSON that compiles to LaTeX/PDF without edits
- Margin notes must be self-contained (restate the term being defined)
- Lists must use JSON arrays, not newline-separated strings
- Practice strips have prompts only (NO answers in practice_strip)
- Self-check questions DO include answers (correct_answer and why_it_matters)
- Use Unicode symbols (λ, →, ×) in content; LaTeX in equation_latex
- Cite real sources or mark "[NEEDS CLARIFICATION]"

**Required Schema:**
```json
{
  "metadata": {
    "title": "Study Card: [Topic]",
    "topic": "Brief description",
    "audience": "Target learners",
    "learner_focus": "Learning objectives",
    "estimated_read_time_minutes": 15,
    "prerequisites": ["prereq1", "prereq2"],
    "learning_objectives": ["objective1", "objective2"],
    "sources": [
      {
        "title": "Source Title",
        "author": "Author Name",
        "year": "2024",
        "citation": "Full citation",
        "link": "https://doi.org/..."
      }
    ],
    "provenance": {
      "model": "Claude 3.5 Sonnet",
      "date": "2025-11-10",
      "version": "1.0",
      "notes": "Generated from conversation context"
    }
  },
  "pages": [
    {
      "page_number": 1,
      "layout": "two-column",
      "main_flow": [
        {
          "type": "text",
          "content": "Opening paragraph with main concept.",
          "attributes": { "emphasis": "newthought" }
        },
        {
          "type": "list",
          "content": ["Item 1", "Item 2", "Item 3"],
          "attributes": { "list_style": "bullet" }
        },
        {
          "type": "equation",
          "content": "E = mc^2",
          "attributes": { "equation_latex": "E = mc^{2}" }
        },
        {
          "type": "callout",
          "content": "Important note or tip.",
          "attributes": { "callout_title": "Key Insight" }
        }
      ],
      "margin_notes": [
        {
          "anchor": "concept",
          "content": "Term — Definition that restates the term being defined",
          "note_type": "definition"
        }
      ],
      "full_width_assets": []
    }
  ],
  "drills": {
    "practice_strip": [
      {
        "prompt": "Practice question for active learning (NO answers here)"
      }
    ],
    "self_check": [
      {
        "question": "Self-assessment question",
        "correct_answer": "Expected answer",
        "why_it_matters": "Why this question is important"
      }
    ]
  },
  "glossary": [
    {
      "term": "Technical Term",
      "definition": "Clear definition",
      "page_reference": [1]
    }
  ]
}
```

**Block Types:**
- `text`: { `type`: "text", `content`: string, `attributes`? { `emphasis`?: "newthought"|"bold"|"summary" } }
- `list`: { `type`: "list", `content`: [array of strings], `attributes`? { `list_style`: "bullet"|"numbered" } }
  - **CRITICAL**: `content` MUST be a JSON array, NOT a newline-separated string
  - ✅ CORRECT: `"content": ["Item 1", "Item 2", "Item 3"]`
  - ❌ WRONG: `"content": "Item 1\nItem 2\nItem 3"`
- `equation`: { `type`: "equation", `content`: string, `attributes`: { `equation_latex`: string } }
- `callout`: { `type`: "callout", `content`: string, `attributes`? { `callout_title`: string } }
- `quote`: { `type`: "quote", `content`: string, `attributes`? { `quote_citation`: string } }

**Margin Notes:**
- { `anchor`: string, `content`: string, `note_type`: "definition"|"syntax"|"concept"|"history"|"problem"|"operation"|"equivalence"|"notation"|"property"|"example"|"reference" }
- **CRITICAL**: Margin notes must be self-contained and restate the term
  - ✅ CORRECT: "Free variable — A variable not bound by any λ abstraction"
  - ❌ WRONG: "A variable not bound by any λ abstraction" (doesn't name the term)
- **Format**: "**Term** — Definition/explanation"

**Content Constraints:**
- **Length**: 1-3 pages; prefer first page `layout`="two-column"
- **Margin notes**: 3-6 per page, each 15-25 words (enough to be self-contained)
- **First paragraph**: Start with `attributes.emphasis`="newthought"
- **Math**:
  - Display equations: Use `attributes.equation_latex` for centered equations
  - Inline math: Use `$...$` for mathematical expressions in running text
    - ✅ CORRECT: `"The expression $f g h$ parses as $((f g) h)$"`
    - ✅ CORRECT: `"Substituting 7 for $x$ yields 7"`
  - Unicode symbols: λ, →, ←, ⇒, ⇔, α, β, γ, Ω, ω, ×, ·, ≡, ≤, ≥
- **Reading level**: Upper-undergrad; terse, factual; no fluff
- **Practice strips**: Prompts ONLY - NO answers (these are for active learning)
- **Self-check questions**: DO include answers - these verify understanding
- **Citations**: At least one reputable source with a DOI/URL
- **Accuracy**: Do not invent facts; omit if unknown

**Validation Checklist:**
- All required fields present
- Each equation has `equation_latex`
- `page_number` starts at 1 and increments
- Arrays exist (even if empty)
- Margin notes are self-contained
- Lists use JSON arrays, not strings
- Sources are real (or marked with "[NEEDS CLARIFICATION]")
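Parts of the checklist can be automated with a small validator. This is a hypothetical sketch; the pipeline's `metadata-validate.sh` remains authoritative, and only a few of the checks are shown:

```python
REQUIRED_TOP = ["metadata", "pages", "drills", "glossary"]

def validate_card(card: dict) -> list[str]:
    """Return a list of human-readable problems (empty list means valid)."""
    problems = []
    for key in REQUIRED_TOP:
        if key not in card:
            problems.append(f"missing required field: {key}")
    for page in card.get("pages", []):
        for block in page.get("main_flow", []):
            # lists must be JSON arrays, not newline-separated strings
            if block.get("type") == "list" and not isinstance(block.get("content"), list):
                problems.append("list block content must be a JSON array")
            # every equation needs a LaTeX form
            if block.get("type") == "equation" and "equation_latex" not in block.get("attributes", {}):
                problems.append("equation block missing attributes.equation_latex")
    return problems
```

Running a check like this internally before saving catches the two most common failure modes (string lists and missing `equation_latex`) before the LaTeX build does.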

**Guardrails:**
- If the request is under-specified, return a minimal 1-page scaffold and store 3-5 numbered clarification questions inside `metadata.provenance.notes`
- Otherwise produce the full card
- Validate internally before saving

**Self-Check Rubric:**
Before finalizing, adjust the content to meet:
- **Accuracy**: No speculation; cite real sources
- **Clarity**: Short sentences, clear definitions
- **Annotation utility**: Margin notes actionable and self-contained
- **Balance**: Main content and margin notes flow naturally
- **Schema validity**: All required fields present
- **Print suitability**: Short margin notes (15-25 words); avoid long lines

### Step 3: Save JSON to File

Write the generated JSON to a file in an appropriate location:
- Project context: Save to the project directory (e.g., `./my-card.json`)
- General use: Save to `/tmp/study-card-YYYYMMDD-HHMMSS.json`

Inform the user where the file was saved.

### Step 4: Build PDF (if requested)

Use the helper script to build the PDF:

```bash
~/.claude/skills/tufte-press/scripts/generate-and-build.sh my-card.json --build
```

This will:
1. Validate the JSON against the schema
2. Convert JSON → LaTeX using Python
3. Compile LaTeX → PDF using Tectonic
4. Output: `my-card.pdf`

**Prerequisites:**
- `TUFTE_PRESS_REPO` environment variable (default: `~/proj/tufte-press`)
- The tufte-press repository must be available
- Nix development shell (entered automatically if needed)

### Step 5: Print (if requested)

Use the helper script with print options:

```bash
~/.claude/skills/tufte-press/scripts/generate-and-build.sh my-card.json --build --print
```

**Print Options:**
- `--print`: Send to the default printer
- `--printer NAME`: Specify a printer
- `--copies N`: Print N copies (default: 1)
- `--duplex`: Enable duplex printing (long-edge for handouts)

**Example (duplex, 2 copies):**
```bash
~/.claude/skills/tufte-press/scripts/generate-and-build.sh my-card.json \
  --build --print --copies 2 --duplex
```

## Helper Scripts

### `generate-and-build.sh`

Complete workflow automation:

```bash
# Validate JSON only
./scripts/generate-and-build.sh my-card.json

# Generate PDF
./scripts/generate-and-build.sh my-card.json --build

# Generate and print
./scripts/generate-and-build.sh my-card.json --build --print --duplex
```

## Guidelines

### 1. Content Quality
- Base content on the actual conversation history
- Include real citations when possible
- Mark uncertain information with "[NEEDS CLARIFICATION]"
- Keep margin notes concise but self-contained
- Use examples from the conversation

### 2. JSON Generation
- Generate valid JSON in a single response
- No markdown fences around the JSON
- Validate the structure before saving
- Use proper escaping for special characters

### 3. Build Process
- Always validate before building
- Check for tufte-press repo availability
- Handle build errors gracefully
- Provide clear error messages

### 4. Printing
- Confirm print settings with the user before printing
- Recommend duplex for handouts
- Verify printer availability
- Show print queue status after submission

## Error Handling

**JSON validation fails:**
- Review error messages from `metadata-validate.sh`
- Common issues: missing required fields, invalid types, bad array formats
- Fix the JSON and re-validate

**Build fails:**
- Check LaTeX errors in the output
- Verify special character escaping
- Ensure `equation_latex` is present for all equations
- Check margin note formatting

**Print fails:**
- Verify the printer is online: `lpstat -p`
- Check the print queue: `lpstat -o`
- Ensure the user has print permissions
- Try the default printer if the named printer fails

## Example Workflow

**User**: "Create a study card about recursion from our conversation and print it"

**Agent** (using this skill):

1. Review the conversation history
2. Extract key concepts about recursion
3. Generate JSON with the proper schema
4. Save to `/tmp/study-card-recursion-20251110.json`
5. Run: `generate-and-build.sh /tmp/study-card-recursion-20251110.json --build --print --duplex`
6. Confirm: "Study card generated and sent to printer (2 pages, duplex)"

## Requirements

**Environment:**
- tufte-press repository at `~/proj/tufte-press` (or `$TUFTE_PRESS_REPO`)
- Nix with flakes enabled
- CUPS printing system (for print functionality)

**Dependencies (via tufte-press):**
- Python 3.11
- Tectonic (LaTeX compiler)
- jq (JSON validation)

**Skill provides:**
- JSON generation from conversation
- Build automation script
- Print integration
- Schema validation

## Notes

- **Conversation-aware**: Extracts content from chat history
- **Complete workflow**: JSON → PDF → Print in one skill
- **Production ready**: Uses the validated pipeline from the tufte-press project
- **Print-optimized**: Duplex support for the handout workflow
- **Error recovery**: Clear messages and validation at each step
skills/update-opencode/.claude-plugin/plugin.json (new file, 15 lines)
{
  "name": "update-opencode",
  "description": "Check for and apply OpenCode version updates in Nix-based dotfiles.",
  "version": "1.0.0",
  "author": {
    "name": "dan"
  },
  "license": "MIT",
  "keywords": [
    "opencode",
    "nix",
    "update",
    "version"
  ]
}
skills/update-opencode/skills/update-opencode.md (new file, 182 lines)
---
name: update-opencode
description: Check for and apply OpenCode version updates in Nix-based dotfiles. Use when asked to update OpenCode, check OpenCode version, or upgrade OpenCode.
---

# Update OpenCode Skill

This skill automates checking for and applying OpenCode version updates in a Nix-based dotfiles setup.

## When to Use

Use this skill when the user requests:
- Check the OpenCode version or check for updates
- Update/upgrade OpenCode to the latest version
- Install a specific OpenCode version
- "Is there a newer version of OpenCode?"

## Process

### 1. Check Current vs Latest Version

Run the version check script:
```bash
cd ~/.claude/skills/update-opencode/scripts
./check-version.sh
```

This outputs:
```
current=X.Y.Z
latest=X.Y.Z
update_available=yes|no
```

**Report findings to the user** with version numbers and update availability.
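The comparison at the heart of `check-version.sh` can be sketched as follows (a minimal sketch assuming plain X.Y.Z versions; the real script also reads the Nix file and queries the GitHub API):

```bash
# Hypothetical core of check-version.sh: decide update_available by
# comparing two X.Y.Z strings with version-aware sort (-V).
update_available() {
  current="$1"; latest="$2"
  if [ "$current" = "$latest" ]; then
    echo "update_available=no"
  elif [ "$(printf '%s\n%s\n' "$current" "$latest" | sort -V | tail -n1)" = "$latest" ]; then
    echo "update_available=yes"
  else
    echo "update_available=no"   # current is already ahead of latest
  fi
}
```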
### 2. Apply Update (if user confirms)

If `update_available=yes`, **ask the user for confirmation** before proceeding:
- Explain that this will modify the Nix configuration and rebuild the system
- Mention that the rebuild may take a few minutes
- Ask: "Proceed with update to version X.Y.Z?"

If the user confirms, execute the following steps:

**Step 2a: Fetch SHA256 hash**
```bash
./fetch-sha256.sh <latest-version>
```

This downloads the release and computes the SRI hash (output: `sha256-...`).

**Step 2b: Update Nix package file**
```bash
./update-nix-file.sh <latest-version> <sri-hash>
```

For safety, you can use `--dry-run` first to preview changes:
```bash
./update-nix-file.sh <latest-version> <sri-hash> --dry-run
```

**Step 2c: Trigger system rebuild**
```bash
cd ~/proj/dotfiles
sudo nixos-rebuild switch --flake .#delpad
```

This rebuilds the NixOS configuration with the new OpenCode version.

**Step 2d: Verify installation**
```bash
cd ~/.claude/skills/update-opencode/scripts
./verify-update.sh <latest-version>
```

This confirms OpenCode reports the expected version.

### 3. Install Specific Version

For version pinning or downgrades:

```bash
# Fetch hash for specific version
./fetch-sha256.sh 1.0.44

# Update and rebuild as above
./update-nix-file.sh 1.0.44 <sri-hash>
cd ~/proj/dotfiles && sudo nixos-rebuild switch --flake .#delpad
./verify-update.sh 1.0.44
```

## Requirements

**Tools:**
- `jq` - JSON parsing for the GitHub API
- `curl` - HTTP requests to the GitHub API
- `nix-prefetch-url` - Download and hash verification
- `sed` - File modification
- `grep` - Pattern matching

**Permissions:**
- Read access to `~/proj/dotfiles/pkgs/opencode/default.nix`
- Write access to the dotfiles repository
- `sudo` for `nixos-rebuild switch`

**Network:**
- GitHub API access: `https://api.github.com/repos/sst/opencode/releases`
- GitHub releases: `https://github.com/sst/opencode/releases/download/`

## Helper Scripts

**check-version.sh**
- Reads the current version from the Nix file
- Queries the GitHub API for the latest release
- Compares versions
- Output: `key=value` pairs

**fetch-sha256.sh <version>**
- Downloads the OpenCode release for the specified version
- Computes the hash using `nix-prefetch-url`
- Converts it to SRI format (`sha256-...`)
- Output: SRI hash string

**update-nix-file.sh <version> <sha256> [--dry-run]**
- Updates the version and sha256 fields in the Nix file
- Validates patterns before modifying
- Supports dry-run mode
- Verifies changes after the update
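The in-place edit performed by `update-nix-file.sh` might look like this (a hypothetical sketch; the `version = "...";` and `hash = "sha256-...";` field names are an assumption about the derivation's style, and the real script also validates patterns first):

```bash
# Hypothetical sed edit: rewrite the version and hash fields of a Nix
# derivation file in place. bump_nix <file> <version> <sri-hash>
bump_nix() {
  sed -i \
    -e "s|version = \"[^\"]*\";|version = \"$2\";|" \
    -e "s|hash = \"sha256-[^\"]*\";|hash = \"$3\";|" \
    "$1"
}
```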
**verify-update.sh <version>**
- Runs `opencode --version`
- Compares the output to the expected version
- Exit code 0 on success, 1 on mismatch

## Error Handling

**Network failures:** Report a clear error; suggest checking GitHub manually

**Missing Nix file:** Report the path error; verify the dotfiles location

**Invalid version:** Report the format error (expected X.Y.Z)

**SHA256 fetch failure:** Do not modify files; report the download error

**Rebuild failure:** Report the error with logs; suggest rollback or manual intervention

**Verification failure:** Report the version mismatch; suggest re-running the rebuild

## Guidelines

1. **Always ask for confirmation** before triggering a system rebuild
2. **Report progress** at each step (fetching hash, updating file, rebuilding)
3. **Handle errors gracefully** - explain what went wrong and suggest fixes
4. **Keep steps atomic** - if any step fails, do not proceed to the next
5. **Check prerequisites** - ensure all required tools are installed before starting

## Examples

**Example 1: Check for updates**
```
User: "Check if there's a new OpenCode version"
Agent: *runs check-version.sh*
Agent: "Current version: 1.0.44, Latest: 1.0.51. Update available."
```

**Example 2: Apply update**
```
User: "Update OpenCode to latest"
Agent: *runs check-version.sh*
Agent: "Update available: 1.0.44 → 1.0.51. This will rebuild your system. Proceed?"
User: "Yes"
Agent: *runs fetch-sha256.sh, update-nix-file.sh, rebuild, verify-update.sh*
Agent: "✓ Updated to OpenCode 1.0.51"
```

**Example 3: Specific version**
```
User: "Install OpenCode 1.0.44"
Agent: *fetches hash, updates file, rebuilds, verifies*
Agent: "✓ Installed OpenCode 1.0.44"
```
skills/update-spec-kit/.claude-plugin/plugin.json (new file, 14 lines)
{
  "name": "update-spec-kit",
  "description": "Update the spec-kit repository, CLI tool, and all project templates to the latest version.",
  "version": "1.0.0",
  "author": {
    "name": "dan"
  },
  "license": "MIT",
  "keywords": [
    "spec-kit",
    "update",
    "templates"
  ]
}
skills/update-spec-kit/skills/update-spec-kit.md (new file, 153 lines)
---
name: update-spec-kit
description: Update the spec-kit repository, CLI tool, and all project templates to the latest version. Use when the user asks to update spec-kit, upgrade spec-kit templates, refresh spec-kit projects, or sync spec-kit to latest.
---

# Update Spec-Kit

This skill updates the spec-kit ecosystem to the latest version across three levels:

1. **Spec-kit repository** - Pull the latest commits from upstream
2. **Specify CLI tool** - Upgrade the installed CLI to the latest version
3. **Project templates** - Update all projects using spec-kit to the latest templates

## When to Use

Invoke this skill when the user requests:
- "Update spec-kit"
- "Upgrade spec-kit to latest"
- "Refresh spec-kit templates in my projects"
- "Sync all spec-kit projects"
- "Make sure spec-kit is up to date"

## Process

### Step 1: Update Spec-Kit Repository

Navigate to the spec-kit repository and update:

```bash
cd ~/proj/spec-kit
git fetch origin
git log --oneline main..origin/main  # Show what's new
git pull --ff-only origin main
```

Report the number of new commits and releases to the user.

### Step 2: Update Specify CLI Tool

Update the globally installed CLI:

```bash
uv tool install specify-cli --force --from git+https://github.com/github/spec-kit.git
```

Verify the installation:

```bash
specify --help
```

### Step 3: Update All Project Templates

Find all projects with spec-kit installed and update them:

```bash
~/.claude/skills/update-spec-kit/scripts/update-all-projects.sh
```

This script:
- Finds all directories with `.specify` folders
- Detects which AI agent each project uses (claude, cursor, copilot, etc.)
- Runs `specify init --here --force --ai <agent>` in each project
- Preserves all user work (specs, code, settings)
- Only updates template files (commands, scripts)

## Important Notes

**Safe Operations:**
- Templates are merged/overwritten
- User work in `.specify/specs/` is never touched
- Source code and git history are preserved
- `.vscode/settings.json` is smart-merged (not replaced)

**What Gets Updated:**
- Slash command files (`.claude/commands/*.md`, `.opencode/command/*.md`)
- Helper scripts (`.specify/scripts/`)
- Template files (`.specify/templates/`)

**What Stays:**
- All your specifications (`.specify/specs/`)
- Your source code
- Git history
- Custom VS Code settings (merged intelligently)

## Output Format

After completion, report:
1. Number of commits pulled from upstream
2. New version tags/releases available
3. List of projects updated
4. Brief summary of major changes (check CHANGELOG.md)

## Error Handling

If updates fail:
- Check internet connectivity
- Verify the git repository is clean (no uncommitted changes)
- Ensure `uv` is installed and working
- Check that projects have `.specify` directories

## Examples

**User Request:**
> "Update spec-kit and all my projects"

**Skill Action:**
1. Navigate to `~/proj/spec-kit`
2. Pull the latest changes
3. Show: "Pulled 28 new commits (v0.0.72 → v0.0.79)"
4. Upgrade the CLI tool
5. Find 7 projects with spec-kit
6. Update each project to v0.0.79
7. Report: "Updated: klmgraph, talu, ops-red, ops-jrz1, wifi-tester, delbaker"

---

**User Request:**
> "Are my spec-kit projects current?"

**Skill Action:**
1. Check spec-kit repo status
2. Compare the installed CLI version
3. Compare one project's templates with the latest
4. Report status and suggest an update if needed

## Requirements

- Spec-kit repository at `~/proj/spec-kit`
- `uv` package manager installed
- `specify` CLI tool installed
- Projects with `.specify` directories in `~/proj`

## Helper Scripts

### update-all-projects.sh

**Purpose**: Batch-update all projects using spec-kit

**Location**: `~/.claude/skills/update-spec-kit/scripts/update-all-projects.sh`

**What it does**:
1. Finds all projects with `.specify` directories
2. Detects the AI agent for each project
3. Updates templates using `specify init --force`
4. Reports results

**Agent Detection**:
- Checks for `.claude/` directory → `claude`
- Checks for `.cursor/` directory → `cursor-agent`
- Checks for `.gemini/` directory → `gemini`
- Checks for `.github/copilot-instructions.md` → `copilot`
- Defaults to `claude` if unclear
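The agent-detection rules above can be sketched as a small function (hypothetical; the real `update-all-projects.sh` may differ, but the checks follow the list exactly):

```bash
# Hypothetical agent detection, mirroring the documented checks in order.
# detect_agent <project-dir>
detect_agent() {
  d="$1"
  if   [ -d "$d/.claude" ]; then echo claude
  elif [ -d "$d/.cursor" ]; then echo cursor-agent
  elif [ -d "$d/.gemini" ]; then echo gemini
  elif [ -f "$d/.github/copilot-instructions.md" ]; then echo copilot
  else echo claude   # default when no marker is found
  fi
}
```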
skills/web-research/.claude-plugin/plugin.json (new file, 14 lines)
{
  "name": "web-research",
  "description": "Conduct deep web research to gather insights, determine best practices, and discover new developments.",
  "version": "1.0.0",
  "author": {
    "name": "dan"
  },
  "license": "MIT",
  "keywords": [
    "web-research",
    "insights",
    "analysis"
  ]
}
skills/web-research/skills/web-research.md (new file, 51 lines)
---
name: web-research
description: Conduct deep web research to gather insights, determine best practices, and discover new developments. Produces structured reports.
---

# Web Research

Conduct deep, comprehensive web research on a topic. This skill acts as a "Lead Researcher," synthesizing information from multiple sources to provide insights, best practices, and trend analysis.

## When to Use

- "Research [topic]"
- "What are the best practices for [technology]?"
- "Gather insights on [subject]"
- "What's new in [field]?"
- "Compare [option A] and [option B]"

## Process

1. Identify the research topic from the user's request.
2. Run the helper script with the topic.

## Helper Scripts

### research.sh

**Usage**:
```bash
./scripts/research.sh "your research topic"
```

**Backends**:
Choose the synthesis backend with the `RESEARCH_BACKEND` environment variable:
- `claude` (default): Uses Claude for both search and synthesis.
- `llm`: Uses Claude for search, but pipes results to the `llm` CLI for synthesis.
- `kagi`: Uses Kagi's FastGPT API (requires `KAGI_API_KEY` or `/run/secrets/api_keys/kagi`).
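Inside `research.sh`, the backend selection might be validated like this (a hypothetical sketch; only the three documented `RESEARCH_BACKEND` values are accepted):

```bash
# Hypothetical backend validation: default to claude, reject anything
# other than the documented backends.
pick_backend() {
  backend="${RESEARCH_BACKEND:-claude}"
  case "$backend" in
    claude|llm|kagi) echo "$backend" ;;
    *) echo "unknown RESEARCH_BACKEND: $backend" >&2; return 1 ;;
  esac
}
```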
**Example**:
```bash
# Default (Claude)
./scripts/research.sh "current best practices for React state management in 2025"

# Use LLM backend
RESEARCH_BACKEND=llm ./scripts/research.sh "current best practices for React state management in 2025"
```

## Requirements

- `claude` CLI tool installed and on the PATH.
- `llm` CLI tool (optional) for the `llm` backend.
- `KAGI_API_KEY` environment variable OR `/run/secrets/api_keys/kagi` (for the `kagi` backend).
skills/web-search/.claude-plugin/plugin.json (new file, 14 lines)
{
  "name": "web-search",
  "description": "Search the web for information, documentation, or troubleshooting help.",
  "version": "1.0.0",
  "author": {
    "name": "dan"
  },
  "license": "MIT",
  "keywords": [
    "web-search",
    "documentation",
    "troubleshooting"
  ]
}
skills/web-search/skills/web-search.md (new file, 39 lines)
---
name: web-search
description: Search the web for information, documentation, or troubleshooting help using Claude Code's subprocess capability.
---

# Web Search

Perform a web search to answer questions, find documentation, or troubleshoot issues. This skill wraps `claude -p` with the permissions needed to access the web.

## When to Use

- "Search the web for [topic]"
- "Find documentation for [library]"
- "Troubleshoot [error message]"
- "Check if [package] is available on Nix"
- "What is [product]?"

## Process

1. Identify the search query from the user's request.
2. Run the helper script with the query.

## Helper Scripts

### search.sh

**Usage**:
```bash
./scripts/search.sh "your search query"
```

**Example**:
```bash
./scripts/search.sh "how to install ripgrep on nixos"
```
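The wrapper itself can be sketched as follows (hypothetical: the `--allowedTools WebSearch` flag is an assumption about the claude CLI's permission syntax and may differ in your installed version; the sketch builds the command string rather than executing it):

```bash
# Hypothetical core of search.sh: validate the query, then construct the
# `claude -p` invocation with web access allowed.
build_search_cmd() {
  if [ -z "$1" ]; then
    echo 'usage: search.sh "your search query"' >&2
    return 1
  fi
  printf 'claude -p "%s" --allowedTools WebSearch\n' "$1"
}
```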
## Requirements

- `claude` CLI tool installed and on the PATH.
skills/worklog/.claude-plugin/plugin.json (new file, 15 lines)
{
  "name": "worklog",
  "description": "Create comprehensive structured markdown worklogs documenting work sessions in docs/worklogs/.",
  "version": "1.0.0",
  "author": {
    "name": "dan"
  },
  "license": "MIT",
  "keywords": [
    "worklog",
    "documentation",
    "session-logging",
    "markdown"
  ]
}
skills/worklog/skills/worklog.md (new file, 88 lines)
---
name: worklog
description: Create comprehensive structured markdown worklogs documenting work sessions in docs/worklogs/. Use when the user explicitly asks to document work, create/write a worklog, log the session, or record what was accomplished.
---

# Worklog Skill

Create comprehensive, structured worklogs that document work sessions with rich context for future reference.

**Skill directory:** `~/.claude/skills/worklog/` (contains `scripts/`, `templates/`)

## When to Use

Invoke this skill when the user requests:
- "Document today's work"
- "Create a worklog"
- "Record this session"
- "Write up what we accomplished"
- "Log this work session"

## Context Gathering

Before writing the worklog, run the metrics script to gather git context:

```bash
scripts/extract-metrics.sh
```

This outputs: branch, uncommitted changes, commits today, files touched, lines added/removed, and recent commit messages.

## File Location

Save worklogs to: `docs/worklogs/YYYY-MM-DD-{descriptive-topic-kebab-case}.md`

Create the `docs/worklogs/` directory if it doesn't exist.

Use the helper script to suggest the filename:
```bash
scripts/suggest-filename.sh
```

## Structure

Read and follow the template: `templates/worklog-template.md`

The template defines all required sections. Parse it directly rather than relying on this summary.

## Helper Scripts

**Suggest Filename:**
```bash
scripts/suggest-filename.sh
```
Analyzes recent commits to suggest a descriptive filename.

**Find Related Logs:**
```bash
scripts/find-related-logs.sh "keyword1 keyword2"
```
Searches previous worklogs for context continuity.
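The filename convention can be sketched as a small helper (hypothetical; the real `suggest-filename.sh` derives the topic from recent commit messages rather than taking it as an argument):

```bash
# Hypothetical filename builder: today's date plus a kebab-case topic,
# matching docs/worklogs/YYYY-MM-DD-{descriptive-topic-kebab-case}.md.
suggest_filename() {
  topic=$(printf '%s' "$*" | tr '[:upper:]' '[:lower:]' | tr -cs 'a-z0-9' '-')
  topic=${topic#-}; topic=${topic%-}
  printf 'docs/worklogs/%s-%s.md\n' "$(date +%F)" "$topic"
}
```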
## Principles

1. **Be thorough** - More information is better; these logs can be distilled later
2. **Document the journey** - Include false starts, debugging, and the path to the solution
3. **Focus on why** - Decisions, rationale, and context matter more than what
4. **Include specifics** - Code snippets, commands, and error messages help reconstruct solutions
5. **Think ahead** - What would you need to know in 6 months?
6. **Pull previous context** - Use find-related-logs for continuity across sessions
7. **Aim for 3-5KB minimum** - Thorough logs typically run 5-15KB

## Process

1. Run `scripts/extract-metrics.sh` to gather git context
2. Run `scripts/suggest-filename.sh` to get a filename suggestion
3. Read the complete template from `templates/worklog-template.md`
4. Search for related previous worklogs using the find-related-logs script
5. Fill in all template sections with detailed information from the session
6. Ensure the `docs/worklogs/` directory exists (create if needed)
7. Save the worklog with the suggested filename
8. Verify the metadata frontmatter is complete

## Requirements

- Must be run in a git repository
- Saves to the `docs/worklogs/` directory (created if needed)
- Outputs markdown with YAML frontmatter
- Requires the helper scripts in `scripts/`