Compare commits

...

10 commits

Author SHA1 Message Date
dan fe51d14889 bd sync: 2026-01-06 16:29:22 2026-01-06 16:29:22 -08:00
dan 853bf347e4 test: add deploy-skill.sh config injection tests (21 tests)
Covers:
- Basic injection (config inserted before closing brace)
- Idempotency (no duplicates on re-run)
- Multiple injections (different configs can coexist)
- File not found handling (graceful skip)
- Brace structure preservation (nested braces work)
- inject_home_file wrapper (builds correct Nix block)
- Already present detection
- Edge cases (empty props, minimal file)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-06 16:29:12 -08:00
dan 112d43a4c3 refactor(deploy): move functions to top of file (code-review) 2026-01-03 12:16:01 -08:00
dan f00c0e7e20 bd sync: 2026-01-03 12:13:49 2026-01-03 12:13:49 -08:00
dan e605f26cb1 refactor(specify): simplify branch generation and add main (skills-lzk) 2026-01-03 12:13:21 -08:00
dan 6200abc32f feat(scripts): add atomic file operations and safe temp files (skills-7bu) 2026-01-03 12:08:51 -08:00
dan e164285c6c feat(nix): consolidate skill list into skills.nix (skills-8v0) 2026-01-03 12:06:18 -08:00
dan c186c77fd2 refactor(deploy): dedupe injection calls in deploy-skill.sh (skills-dnm) 2026-01-03 12:02:43 -08:00
dan 1108dda5ef chore: bd doctor --fix and sync 2026-01-03 11:58:37 -08:00
dan ff1d294d59 docs: worklog for agent update tests and bug fix 2026-01-02 02:30:03 -08:00
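The config-injection behavior that commit 853bf347e4's 21 tests cover (insertion before the closing brace, idempotency on re-run, graceful skip when the file is missing) can be sketched roughly like this. The function body below is a guess at the shape of `inject_nix_config`, not the repo's actual implementation; it assumes the Nix file's closing brace sits alone at column 0.

```bash
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical, simplified sketch of the injection logic under test.
# Assumes the attrset's closing brace is the last line starting with '}'.
inject_nix_config() {
  local file="$1" marker="$2" block="$3"
  [ -f "$file" ] || { echo "skip: $file not found" >&2; return 0; }  # graceful skip
  if grep -qF "$marker" "$file"; then return 0; fi                   # idempotent: no duplicates
  local tmp last
  tmp=$(mktemp)
  last=$(grep -n '^}' "$file" | tail -n 1 | cut -d: -f1)  # line of last closing brace
  head -n "$((last - 1))" "$file" > "$tmp"                # everything before the brace
  printf '%s\n' "$block" >> "$tmp"                        # inject the config block
  tail -n "+$last" "$file" >> "$tmp"                      # closing brace onward
  mv "$tmp" "$file"                                       # replace in one rename
}
```

Running it twice with the same marker leaves exactly one copy of the block, which is the idempotency property the test suite exercises.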
13 changed files with 814 additions and 302 deletions

.beads/.gitignore vendored
View file

@@ -10,11 +10,20 @@ daemon.lock
daemon.log
daemon.pid
bd.sock
sync-state.json
last-touched
# Local version tracking (prevents upgrade notification spam after git ops)
.local_version
# Legacy database files
db.sqlite
bd.db
# Worktree redirect file (contains relative path to main repo's .beads/)
# Must not be committed as paths would be wrong in other clones
redirect
# Merge artifacts (temporary files from 3-way merge)
beads.base.jsonl
beads.base.meta.json
@@ -23,7 +32,8 @@ beads.left.meta.json
beads.right.jsonl
beads.right.meta.json
# Keep JSONL exports and config (source of truth for git)
!issues.jsonl
!metadata.json
!config.json
# NOTE: Do NOT add negation patterns (e.g., !issues.jsonl) here.
# They would override fork protection in .git/info/exclude, allowing
# contributors to accidentally commit upstream issue databases.
# The JSONL files (issues.jsonl, interactions.jsonl) and config files
# are tracked by git by default since no pattern above ignores them.

View file

@@ -1 +1 @@
0.42.0
0.44.0

View file

@@ -23,14 +23,14 @@
{"id":"skills-6e3","title":"Searchable Claude Code conversation history","description":"## Context\nClaude Code persists full conversations in `~/.claude/projects/\u003cproject\u003e/\u003cuuid\u003e.jsonl`. This is complete but not searchable - can't easily find \"that session where we solved X\".\n\n## Goal\nMake conversation history searchable without requiring manual worklogs.\n\n## Approach\n\n### Index structure\n```\n~/.claude/projects/\u003cproject\u003e/\n \u003cuuid\u003e.jsonl # raw conversation (existing)\n index.jsonl # session metadata + summaries (new)\n```\n\n### Index entry format\n```json\n{\n \"uuid\": \"f9a4c161-...\",\n \"date\": \"2025-12-17\",\n \"project\": \"/home/dan/proj/skills\",\n \"summary\": \"Explored Wayland desktop automation, AT-SPI investigation, vision model benchmark\",\n \"keywords\": [\"wayland\", \"niri\", \"at-spi\", \"automation\", \"seeing-problem\"],\n \"commits\": [\"906f2bc\", \"0b97155\"],\n \"duration_minutes\": 90,\n \"message_count\": 409\n}\n```\n\n### Features needed\n1. **Index builder** - Parse JSONL, extract/generate summary + keywords\n2. **Search CLI** - `claude-search \"AT-SPI wayland\"` → matching sessions\n3. **Auto-index hook** - Update index on session end or compaction\n\n## Questions\n- Generate summaries via AI or extract heuristically?\n- Index per-project or global?\n- How to handle very long sessions (multiple topics)?\n\n## Value\n- Find past solutions without remembering dates\n- Model reflection: include relevant past sessions in context\n- Replace manual worklogs with auto-generated metadata","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-12-17T15:56:50.913766392-08:00","updated_at":"2025-12-29T18:35:56.530154004-05:00","closed_at":"2025-12-29T18:35:56.530154004-05:00","close_reason":"Prototype complete: bin/claude-search indexes 122 sessions, searches by keyword. Future: auto-index hook, full-text search, keyword extraction."}
{"id":"skills-6gw","title":"Add artifact provenance to traces","description":"Current: files_created lists paths only.\nProblem: Can't detect regressions or validate outputs.\n\nAdd:\n- Content hash (sha256)\n- File size\n- For modifications: git_diff_summary (files changed, line counts)\n\nExample:\n outputs:\n artifacts:\n - path: docs/worklogs/...\n sha256: abc123...\n size: 1234\n action: created|modified\n\nEnables: diff traces, regression testing, validation.","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-23T19:49:48.654952533-05:00","updated_at":"2025-12-29T13:55:35.827778174-05:00","closed_at":"2025-12-29T13:55:35.827778174-05:00","close_reason":"Parked with ADR-001: skills-molecules integration deferred. Current simpler approach (skills as standalone) works well. Revisit when complex orchestration needed."}
{"id":"skills-6jw","title":"spec-review: Add severity labeling to prompts and reviews","description":"Reviews produce flat lists mixing blockers with minor nits. Hard to make decisions.\n\nAdd to prompts:\n- Require severity labels: Blocker / High / Medium / Low\n- Sort output by severity\n- Include impact and likelihood for each issue","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-15T00:23:23.334156366-08:00","updated_at":"2025-12-15T13:00:32.678573181-08:00","closed_at":"2025-12-15T13:00:32.678573181-08:00"}
{"id":"skills-7bu","title":"Add atomic file operations to update scripts","description":"Files affected:\n- skills/update-opencode/scripts/update-nix-file.sh\n- .specify/scripts/bash/update-agent-context.sh\n\nIssues:\n- Uses sed -i which can corrupt on error\n- No rollback mechanism despite creating backups\n- Unsafe regex patterns with complex escaping\n\nFix:\n- Write to temp file, then atomic mv\n- Validate output before replacing original\n- Add rollback on failure\n\nSeverity: MEDIUM","status":"open","priority":3,"issue_type":"task","created_at":"2025-12-24T02:51:02.334416215-05:00","updated_at":"2025-12-24T02:51:02.334416215-05:00"}
{"id":"skills-7bu","title":"Add atomic file operations to update scripts","description":"Files affected:\n- skills/update-opencode/scripts/update-nix-file.sh\n- .specify/scripts/bash/update-agent-context.sh\n\nIssues:\n- Uses sed -i which can corrupt on error\n- No rollback mechanism despite creating backups\n- Unsafe regex patterns with complex escaping\n\nFix:\n- Write to temp file, then atomic mv\n- Validate output before replacing original\n- Add rollback on failure\n\nSeverity: MEDIUM","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-24T02:51:02.334416215-05:00","updated_at":"2026-01-03T12:08:56.822659199-08:00","closed_at":"2026-01-03T12:08:56.822659199-08:00","close_reason":"Implemented atomic updates using temp files and traps in update-nix-file.sh, update-agent-context.sh, and deploy-skill.sh. Added validation before replacing original files."}
{"id":"skills-7s0","title":"Compare STATIC_DATA.md with upstream","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-03T20:15:55.193704589-08:00","updated_at":"2025-12-03T20:19:29.659256809-08:00","closed_at":"2025-12-03T20:19:29.659256809-08:00","dependencies":[{"issue_id":"skills-7s0","depends_on_id":"skills-ebh","type":"discovered-from","created_at":"2025-12-03T20:15:55.195160705-08:00","created_by":"daemon","metadata":"{}"}]}
{"id":"skills-7sh","title":"Set up bd-issue-tracking Claude Code skill from beads repo","description":"Install the beads Claude Code skill from https://github.com/steveyegge/beads/tree/main/examples/claude-code-skill\n\nThis skill teaches Claude how to effectively use beads for issue tracking across multi-session coding workflows. It provides strategic guidance on when/how to use beads, not just command syntax.\n\nFiles to install to ~/.claude/skills/bd-issue-tracking/:\n- SKILL.md - Core workflow patterns and decision criteria\n- BOUNDARIES.md - When to use beads vs markdown alternatives\n- CLI_REFERENCE.md - Complete command documentation\n- DEPENDENCIES.md - Relationship types and patterns\n- WORKFLOWS.md - Step-by-step procedures\n- ISSUE_CREATION.md - Quality guidelines\n- RESUMABILITY.md - Making work resumable across sessions\n- STATIC_DATA.md - Using beads as reference databases\n\nCan symlink or copy the files. Restart Claude Code after install.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-03T17:53:43.254007992-08:00","updated_at":"2025-12-03T20:04:53.416579381-08:00","closed_at":"2025-12-03T20:04:53.416579381-08:00"}
{"id":"skills-8cc","title":"Remove dead code: unused ARGS variable","description":"File: .specify/scripts/bash/create-new-feature.sh\n\nLine 8: ARGS=() declared but never used\nLine 251: export SPECIFY_FEATURE - unclear if used downstream\n\nFix:\n- Remove unused ARGS declaration\n- Verify SPECIFY_FEATURE is used or remove\n\nSeverity: LOW","status":"closed","priority":4,"issue_type":"task","created_at":"2025-12-24T02:50:59.332192076-05:00","updated_at":"2025-12-29T18:38:03.48883384-05:00","closed_at":"2025-12-29T18:38:03.48883384-05:00","close_reason":"Invalid: ARGS is used (line 58, 64). SPECIFY_FEATURE is used by common.sh for feature detection. No dead code."}
{"id":"skills-8d4","title":"Compare CLI_REFERENCE.md with upstream","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-03T20:15:53.268324087-08:00","updated_at":"2025-12-03T20:17:26.552616779-08:00","closed_at":"2025-12-03T20:17:26.552616779-08:00","dependencies":[{"issue_id":"skills-8d4","depends_on_id":"skills-ebh","type":"discovered-from","created_at":"2025-12-03T20:15:53.27265681-08:00","created_by":"daemon","metadata":"{}"}]}
{"id":"skills-8d9","title":"Add conversational patterns to orch skill","description":"## Context\nThe orch skill currently documents consensus and single-shot chat, but doesn't\nteach agents how to use orch for multi-turn conversations with external AIs.\n\n## Goal\nAdd documentation and patterns for agent-driven conversations where the calling\nagent (Claude Code) orchestrates multi-turn dialogues using orch primitives.\n\n## Patterns to document\n\n### Session-based multi-turn\n```bash\n# Initial query\nRESPONSE=$(orch chat \"Analyze this\" --model claude --format json)\nSESSION=$(echo \"$RESPONSE\" | jq -r .session_id)\n\n# Continue conversation\norch chat \"Elaborate on X\" --model claude --session $SESSION\n\n# Inspect state\norch sessions info $SESSION\norch sessions show $SESSION --last 2 --format text\n```\n\n### Cross-model dialogue\n```bash\n# Get one model's take\nCLAUDE=$(orch chat \"Review this\" --model claude --format json)\nCLAUDE_SAYS=$(echo \"$CLAUDE\" | jq -r '.responses[0].content')\n\n# Ask another model to respond\norch chat \"Claude said: $CLAUDE_SAYS\n\nWhat's your perspective?\" --model gemini\n```\n\n### When to use conversations vs consensus\n- Consensus: quick parallel opinions on a decision\n- Conversation: deeper exploration, follow-up questions, iterative refinement\n\n## Files\n- skills/orch/SKILL.md\n\n## Related\n- orch-c3r: Design: Session introspection for agent-driven conversations (in orch repo)","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-12-18T19:57:28.201494288-08:00","updated_at":"2025-12-29T15:34:16.254181578-05:00","closed_at":"2025-12-29T15:34:16.254181578-05:00","close_reason":"Added conversational patterns section to orch SKILL.md: sessions, cross-model dialogue, iterative refinement, consensus vs chat guidance."}
{"id":"skills-8ma","title":"worklog skill: remove org-mode references, use markdown instead","description":"The worklog skill currently references org-mode format (.org files) in the template and instructions. Update to use markdown (.md) instead:\n\n1. Update ~/.claude/skills/worklog/templates/worklog-template.org → worklog-template.md\n2. Convert org-mode syntax to markdown (#+TITLE → # Title, * → ##, etc.)\n3. Update skill instructions to reference .md files\n4. Update suggest-filename.sh to output .md extension\n\nContext: org-mode is less widely supported than markdown in tooling and editors.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-31T08:43:55.761429693-05:00","created_by":"dan","updated_at":"2026-01-02T00:13:05.338810905-05:00","closed_at":"2026-01-02T00:13:05.338810905-05:00","close_reason":"Migrated worklog skill from org-mode to markdown. Template, scripts, and SKILL.md updated. Backward compatible with existing .org files."}
{"id":"skills-8v0","title":"Consolidate skill list definitions (flake.nix + ai-skills.nix)","description":"Skill list duplicated in:\n- flake.nix (lines 15-27)\n- modules/ai-skills.nix (lines 8-18)\n\nIssues:\n- Manual sync required when adding skills\n- No validation that referenced skills exist\n\nFix:\n- Single source of truth for skill list\n- Consider generating one from the other\n\nSeverity: MEDIUM","status":"open","priority":3,"issue_type":"task","created_at":"2025-12-24T02:51:14.432158871-05:00","updated_at":"2025-12-24T02:51:14.432158871-05:00"}
{"id":"skills-8v0","title":"Consolidate skill list definitions (flake.nix + ai-skills.nix)","description":"Skill list duplicated in:\n- flake.nix (lines 15-27)\n- modules/ai-skills.nix (lines 8-18)\n\nIssues:\n- Manual sync required when adding skills\n- No validation that referenced skills exist\n\nFix:\n- Single source of truth for skill list\n- Consider generating one from the other\n\nSeverity: MEDIUM","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-24T02:51:14.432158871-05:00","updated_at":"2026-01-03T12:06:23.731969973-08:00","closed_at":"2026-01-03T12:06:23.731969973-08:00","close_reason":"Created skills.nix as single source of truth for skill names and descriptions. Updated flake.nix and Home Manager module to use it."}
{"id":"skills-8y6","title":"Define skill versioning strategy","description":"Git SHA alone is insufficient. Need tuple approach:\n\n- skill_source_rev: git SHA (if available)\n- skill_content_hash: hash of SKILL.md + scripts\n- runtime_ref: flake.lock hash or Nix store path\n\nQuestions to resolve:\n- Do Protos pin to versions (stable but maintenance) or float on latest (risky)?\n- How to handle breaking changes in skills?\n- Record in wisp trace vs proto definition?\n\nFrom consensus: both models flagged versioning instability as high severity.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-23T19:49:30.839064445-05:00","updated_at":"2025-12-23T20:55:04.439779336-05:00","closed_at":"2025-12-23T20:55:04.439779336-05:00","close_reason":"ADRs revised with orch consensus feedback"}
{"id":"skills-9af","title":"spec-review: Add spike/research task handling","description":"Tasks like 'Investigate X' can linger without clear outcomes.\n\nAdd to REVIEW_TASKS:\n- Flag research/spike tasks\n- Require timebox and concrete outputs (decision record, prototype, risks)\n- Pattern for handling unknowns","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-15T00:23:26.887719136-08:00","updated_at":"2025-12-15T14:08:13.441095034-08:00","closed_at":"2025-12-15T14:08:13.441095034-08:00"}
{"id":"skills-9bc","title":"Investigate pre-compression hook for worklogs","description":"## Revised Understanding\n\nClaude Code already persists full conversation history in `~/.claude/projects/\u003cproject\u003e/\u003csession-id\u003e.jsonl`. Pre-compact hooks aren't needed for data capture.\n\n## Question\nWhat's the ideal workflow for generating worklogs from session data?\n\n## Options\n\n### 1. Post-session script\n- Run after exiting Claude Code\n- Reads most recent session JSONL\n- Generates worklog from conversation content\n- Pro: Async, doesn't interrupt flow\n- Con: May forget to run it\n\n### 2. On-demand slash command\n- `/worklog-from-session` or similar\n- Reads current session's JSONL file\n- Generates worklog with full context\n- Pro: Explicit control\n- Con: Still need to remember\n\n### 3. Pre-compact reminder\n- Hook prints reminder: \"Consider running /worklog\"\n- Doesn't automate, just nudges\n- Pro: Simple, non-intrusive\n- Con: Easy to dismiss\n\n### 4. Async batch processing\n- Process old sessions whenever\n- All data persists in JSONL files\n- Pro: No urgency, can do later\n- Con: Context may be stale\n\n## Data Format\nSession files contain:\n- User messages with timestamp\n- Assistant responses with model info\n- Tool calls and results\n- Git branch, cwd, version info\n\n## Next Steps\n- Decide preferred workflow\n- Build script to parse session JSONL → worklog format","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-17T14:32:32.568430817-08:00","updated_at":"2025-12-17T15:56:38.864916015-08:00","closed_at":"2025-12-17T15:56:38.864916015-08:00","close_reason":"Pivoted: worklogs may be redundant given full conversation persistence. New approach: make conversations searchable directly."}
@@ -62,7 +62,7 @@
{"id":"skills-czz","title":"Research OpenCode agents for skill integration","description":"DEPLOYMENT.md:218 has TODO to research OpenCode agents. Need to understand how Build/Plan/custom agents work and whether skills need agent-specific handling.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-11-30T11:58:24.855701141-08:00","updated_at":"2025-12-28T20:48:58.373191479-05:00","closed_at":"2025-12-28T20:48:58.373191479-05:00","close_reason":"Researched OpenCode agents - documented in DEPLOYMENT.md. Skills deploy globally, permissions control per-agent access."}
{"id":"skills-d6r","title":"Design: orch as local agent framework","description":"# Orch Evolution: From Consensus Tool to Agent Framework\n\n## Current State\n- `orch consensus` - multi-model queries\n- `orch chat` - single model queries\n- No state, no pipelines, no retries\n\n## Proposed Extensions\n\n### Pipeline Mode\n```bash\norch pipeline config.yaml\n```\nWhere config.yaml defines:\n- Stages (triage → specialists → verify)\n- Routing logic (if triage finds X, run specialist Y)\n- Retry policy\n\n### Evaluate Mode (doc-review specific)\n```bash\norch evaluate doc.md --rubrics=1,4,7 --output=patches/\n```\n- Applies specific rubrics to document\n- Outputs JSON or patches\n\n### Parallel Mode\n```bash\norch parallel --fan-out=5 --template=\"evaluate {rubric}\" rubrics.txt\n```\n- Fan-out to multiple parallel calls\n- Aggregate results\n\n## Open Questions\n1. Does this belong in orch or a separate tool?\n2. Should orch pipelines be YAML-defined or code-defined?\n3. How does this relate to Claude Code Task subagents?\n4. What's the minimal viable extension?\n\n## Context\nEmerged from doc-review skill design - need multi-pass evaluation but don't want to adopt heavy framework (LangGraph, etc.)","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-04T16:06:56.681282678-08:00","updated_at":"2025-12-04T16:44:08.652185174-08:00","closed_at":"2025-12-04T16:44:08.652185174-08:00"}
{"id":"skills-d87","title":"orch skill is documentation-only, needs working invocation mechanism","description":"The orch skill provides SKILL.md documentation but no working invocation mechanism.\n\n**Resolution**: Install orch globally via home-manager (dotfiles-3to). The skill documents a system tool, doesn't need to bundle it.\n\n**Blocked by**: dotfiles-3to (Add orch CLI to home-manager packages)","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-14T11:54:03.157039164-08:00","updated_at":"2025-12-16T18:45:24.39235833-08:00","closed_at":"2025-12-16T18:45:24.39235833-08:00","close_reason":"Updated docs to use globally installed orch CLI"}
{"id":"skills-dnm","title":"Refactor deploy-skill.sh: dedupe injection calls","description":"File: bin/deploy-skill.sh (lines 112-189)\n\nIssues:\n- Three nearly-identical inject_nix_config() calls\n- Only difference is config block content and target file\n- Repeated pattern bloats file\n\nFix:\n- Parameterize inject_nix_config() better\n- Or create config-specific injection functions\n- Reduce duplication\n\nSeverity: MEDIUM","status":"open","priority":3,"issue_type":"task","created_at":"2025-12-24T02:51:01.855452762-05:00","updated_at":"2025-12-24T02:51:01.855452762-05:00"}
{"id":"skills-dnm","title":"Refactor deploy-skill.sh: dedupe injection calls","description":"File: bin/deploy-skill.sh (lines 112-189)\n\nIssues:\n- Three nearly-identical inject_nix_config() calls\n- Only difference is config block content and target file\n- Repeated pattern bloats file\n\nFix:\n- Parameterize inject_nix_config() better\n- Or create config-specific injection functions\n- Reduce duplication\n\nSeverity: MEDIUM","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-24T02:51:01.855452762-05:00","updated_at":"2026-01-03T12:02:48.140656044-08:00","closed_at":"2026-01-03T12:02:48.140656044-08:00","close_reason":"Refactored injection logic using inject_home_file helper, deduping Claude, OpenCode and Antigravity blocks."}
{"id":"skills-dpw","title":"orch: add command to show available/configured models","description":"## Problem\n\nWhen trying to use orch, you have to trial-and-error through models to find which ones have API keys configured. Each failure looks like:\n\n```\nError: GEMINI_API_KEY not set. Required for Google Gemini models.\n```\n\nNo way to know upfront which models are usable.\n\n## Proposed Solution\n\nAdd `orch models` or `orch status` command:\n\n```bash\n$ orch models\nAvailable models:\n ✓ flash (GEMINI_API_KEY set)\n ✓ gemini (GEMINI_API_KEY set)\n ✗ deepseek (OPENROUTER_KEY not set)\n ✗ qwen (OPENROUTER_KEY not set)\n ✓ gpt (OPENAI_API_KEY set)\n```\n\nOr at minimum, on failure suggest alternatives:\n```\nError: GEMINI_API_KEY not set. Try --model gpt or --model deepseek instead.\n```\n\n## Context\n\nHit this while trying to brainstorm with high-temp gemini - had to try 4 models before realizing none were configured in this environment.","status":"closed","priority":3,"issue_type":"feature","created_at":"2025-12-04T14:10:07.069103175-08:00","updated_at":"2025-12-04T14:11:05.49122538-08:00","closed_at":"2025-12-04T14:11:05.49122538-08:00"}
{"id":"skills-e8h","title":"Investigate waybar + niri integration improvements","description":"Look into waybar configuration and niri compositor integration.\n\nPotential areas:\n- Waybar modules for niri workspaces\n- Status indicators\n- Integration with existing niri-window-capture skill\n- Custom scripts in pkgs/waybar-scripts\n\nRelated: dotfiles has home/waybar.nix (196 lines) and pkgs/waybar-scripts/","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-28T20:11:23.115445797-05:00","created_by":"dan","updated_at":"2025-12-28T20:37:16.465731945-05:00","closed_at":"2025-12-28T20:37:16.465731945-05:00","close_reason":"Moved to dotfiles repo - waybar config lives there"}
{"id":"skills-e96","title":"skill: semantic-grep using LSP","description":"Use workspace/symbol, documentSymbol, and references instead of ripgrep.\n\nExample: 'Find all places where we handle User objects but only where we modify the email field directly'\n- LSP references finds all User usages\n- Filter by AST analysis for .email assignments\n- Return hit list for bead or further processing\n\nBetter than regex for Go interfaces, Rust traits, TS types.","status":"open","priority":3,"issue_type":"task","created_at":"2025-12-24T02:29:57.119983837-05:00","updated_at":"2025-12-24T02:29:57.119983837-05:00","dependencies":[{"issue_id":"skills-e96","depends_on_id":"skills-gga","type":"blocks","created_at":"2025-12-24T02:30:06.632906383-05:00","created_by":"daemon"}]}
@@ -97,7 +97,7 @@
{"id":"skills-le9","title":"beads new --from-cursor: capture symbol context","description":"When creating a bead, auto-capture LSP context:\n- Current symbol FQN (fully qualified name)\n- Definition snippet\n- Top 10 references/callers\n- Current diagnostics for the symbol\n\nMakes beads self-contained without copy/paste archaeology. Symbol URI allows jumping back to exact location even if file moved.","status":"open","priority":3,"issue_type":"feature","created_at":"2025-12-24T02:29:55.989876856-05:00","updated_at":"2025-12-24T02:29:55.989876856-05:00","dependencies":[{"issue_id":"skills-le9","depends_on_id":"skills-gga","type":"blocks","created_at":"2025-12-24T02:30:06.416484732-05:00","created_by":"daemon"}]}
{"id":"skills-lie","title":"Compare DEPENDENCIES.md with upstream","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-03T20:15:53.925914243-08:00","updated_at":"2025-12-03T20:19:28.665641809-08:00","closed_at":"2025-12-03T20:19:28.665641809-08:00","dependencies":[{"issue_id":"skills-lie","depends_on_id":"skills-ebh","type":"discovered-from","created_at":"2025-12-03T20:15:53.9275694-08:00","created_by":"daemon","metadata":"{}"}]}
{"id":"skills-lvg","title":"Compare ISSUE_CREATION.md with upstream","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-03T20:15:54.609282051-08:00","updated_at":"2025-12-03T20:19:29.134966356-08:00","closed_at":"2025-12-03T20:19:29.134966356-08:00","dependencies":[{"issue_id":"skills-lvg","depends_on_id":"skills-ebh","type":"discovered-from","created_at":"2025-12-03T20:15:54.610717055-08:00","created_by":"daemon","metadata":"{}"}]}
{"id":"skills-lzk","title":"Simplify branch name generation in create-new-feature.sh","description":"File: .specify/scripts/bash/create-new-feature.sh (lines 137-181)\n\nIssues:\n- 3 nested loops/conditionals\n- Complex string transformations with multiple sed operations\n- Stop-words list and filtering logic hard to maintain\n\nFix:\n- Extract to separate function\n- Simplify word filtering logic\n- Add input validation\n\nSeverity: MEDIUM","status":"open","priority":3,"issue_type":"task","created_at":"2025-12-24T02:51:14.286951249-05:00","updated_at":"2025-12-24T02:51:14.286951249-05:00"}
{"id":"skills-lzk","title":"Simplify branch name generation in create-new-feature.sh","description":"File: .specify/scripts/bash/create-new-feature.sh (lines 137-181)\n\nIssues:\n- 3 nested loops/conditionals\n- Complex string transformations with multiple sed operations\n- Stop-words list and filtering logic hard to maintain\n\nFix:\n- Extract to separate function\n- Simplify word filtering logic\n- Add input validation\n\nSeverity: MEDIUM","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-24T02:51:14.286951249-05:00","updated_at":"2026-01-03T12:13:27.083639201-08:00","closed_at":"2026-01-03T12:13:27.083639201-08:00","close_reason":"Simplifed generate_branch_name logic, added main() function, and BASH_SOURCE guard for testability."}
{"id":"skills-m21","title":"Apply niri-window-capture code review recommendations","description":"CODE-REVIEW-niri-window-capture.md identifies action items: add dependency checks to scripts, improve error handling for niri failures, add screenshot directory validation, implement rate limiting. See High/Medium priority sections.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-11-30T11:58:24.648846875-08:00","updated_at":"2025-12-28T20:16:53.914141949-05:00","closed_at":"2025-12-28T20:16:53.914141949-05:00","close_reason":"Implemented all 4 high-priority recommendations from code review: dependency checks, directory validation, error handling, audit logging"}
{"id":"skills-mx3","title":"spec-review: Define consensus thresholds and decision rules","description":"'Use judgment' for mixed results leads to inconsistent decisions.\n\nDefine:\n- What constitutes consensus (2/3? unanimous?)\n- How to handle NEUTRAL votes\n- Tie-break rules\n- When human override is acceptable and how to document it","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-15T00:23:24.121175736-08:00","updated_at":"2025-12-15T13:58:04.339283238-08:00","closed_at":"2025-12-15T13:58:04.339283238-08:00"}
{"id":"skills-njb","title":"worklog: clarify or remove semantic compression references","description":"SKILL.md references 'semantic compression is a planned workflow' multiple times but it's not implemented. Speculative generality - adds cognitive load for non-existent feature. Either implement or move to design notes. Found by smells lens review.","status":"closed","priority":4,"issue_type":"task","created_at":"2025-12-25T02:03:25.387405002-05:00","updated_at":"2025-12-27T10:11:48.169923742-05:00","closed_at":"2025-12-27T10:11:48.169923742-05:00","close_reason":"Closed"}
@@ -111,7 +111,7 @@
{"id":"skills-r5c","title":"Extract shared logging library from scripts","description":"Duplicated logging/color functions across multiple scripts:\n- bin/deploy-skill.sh\n- skills/tufte-press/scripts/generate-and-build.sh\n- Other .specify scripts\n\nPattern repeated:\n- info(), warn(), error() functions\n- Color definitions (RED, GREEN, etc.)\n- Same 15-20 lines in each file\n\nFix:\n- Create scripts/common-logging.sh\n- Source from all scripts that need it\n- Estimated reduction: 30+ lines of duplication\n\nSeverity: MEDIUM","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-24T02:50:58.324852578-05:00","updated_at":"2025-12-29T18:48:20.448077879-05:00","closed_at":"2025-12-29T18:48:20.448077879-05:00","close_reason":"Minimal duplication: only 2 files with different logging styles. Shared library overhead not justified."}
{"id":"skills-rex","title":"Test integration on worklog skill","description":"Use worklog skill as first real test case:\n- Create wisp for worklog execution\n- Capture execution trace\n- Test squash → digest\n- Validate trace format captures enough info for replay\n\nMigrated from dotfiles-drs.","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-23T19:21:18.75525644-05:00","updated_at":"2025-12-29T13:55:35.814174815-05:00","closed_at":"2025-12-29T13:55:35.814174815-05:00","close_reason":"Parked with ADR-001: skills-molecules integration deferred. Current simpler approach (skills as standalone) works well. Revisit when complex orchestration needed.","dependencies":[{"issue_id":"skills-rex","depends_on_id":"skills-3em","type":"blocks","created_at":"2025-12-23T19:22:00.34922734-05:00","created_by":"dan"}]}
{"id":"skills-rpf","title":"Implement playwright-visit skill for browser automation","description":"## Overview\nBrowser automation skill using Playwright to visit web pages, take screenshots, and extract content.\n\n## Key Findings (from dotfiles investigation)\n\n### Working Setup\n- Use `python312Packages.playwright` from nixpkgs (handles Node driver binary patching for NixOS)\n- Use `executable_path='/run/current-system/sw/bin/chromium'` to use system chromium\n- No `playwright install` needed - no browser binary downloads\n\n### Profile Behavior\n- Fresh/blank profile every launch by default\n- No cookies, history, or logins from user's browser\n- Can persist state with `storage_state` parameter if needed\n\n### Example Code\n```python\nfrom playwright.sync_api import sync_playwright\n\nwith sync_playwright() as p:\n browser = p.chromium.launch(\n executable_path='/run/current-system/sw/bin/chromium',\n headless=True\n )\n page = browser.new_page()\n page.goto('https://example.com')\n print(page.title())\n browser.close()\n```\n\n### Why Not uv/pip?\n- Playwright pip package bundles a Node.js driver binary\n- NixOS can't run dynamically linked executables without patching\n- nixpkgs playwright handles this properly\n\n## Implementation Plan\n1. Create `skills/playwright-visit/` directory\n2. Add flake.nix with devShell providing playwright\n3. Create CLI script with subcommands:\n - `screenshot \u003curl\u003e \u003coutput.png\u003e` - capture page\n - `text \u003curl\u003e` - extract text content \n - `html \u003curl\u003e` - get rendered HTML\n - `pdf \u003curl\u003e \u003coutput.pdf\u003e` - save as PDF\n4. Create skill definition for Claude Code integration\n5. Document usage in skill README\n\n## Dependencies\n- nixpkgs python312Packages.playwright\n- System chromium (already in dotfiles)\n\n## Related\n- dotfiles issue dotfiles-m09 (playwright skill request)","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-12-16T16:02:28.577381007-08:00","updated_at":"2025-12-29T00:09:50.681141882-05:00","closed_at":"2025-12-29T00:09:50.681141882-05:00","close_reason":"Implemented: SKILL.md, visit.py CLI (screenshot/text/html/pdf), flake.nix devShell, README. Network down so couldn't test devShell build, but code complete."}
{"id":"skills-s92","title":"Add tests for config injection (deploy-skill.sh)","description":"File: bin/deploy-skill.sh (lines 112-137)\n\nCritical logic with NO test coverage:\n- Idempotency (running twice should be safe)\n- Correct brace matching in Nix\n- Syntax validity of injected config\n- Rollback on failure\n\nRisk: MEDIUM-HIGH - can break dotfiles Nix config\n\nFix:\n- Test idempotent injection\n- Validate Nix syntax after injection\n- Test with malformed input\n\nSeverity: MEDIUM","status":"open","priority":3,"issue_type":"task","created_at":"2025-12-24T02:51:01.314513824-05:00","updated_at":"2025-12-24T02:51:01.314513824-05:00"}
{"id":"skills-s92","title":"Add tests for config injection (deploy-skill.sh)","description":"File: bin/deploy-skill.sh (lines 112-137)\n\nCritical logic with NO test coverage:\n- Idempotency (running twice should be safe)\n- Correct brace matching in Nix\n- Syntax validity of injected config\n- Rollback on failure\n\nRisk: MEDIUM-HIGH - can break dotfiles Nix config\n\nFix:\n- Test idempotent injection\n- Validate Nix syntax after injection\n- Test with malformed input\n\nSeverity: MEDIUM","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-24T02:51:01.314513824-05:00","updated_at":"2026-01-06T16:29:18.728097676-08:00","closed_at":"2026-01-06T16:29:18.728097676-08:00","close_reason":"21 tests added covering idempotency, brace preservation, inject_home_file wrapper, edge cases"}
{"id":"skills-ty7","title":"Define trace levels (audit vs debug)","description":"Two trace levels to manage noise vs utility:\n\n1. Audit trace (minimal, safe, always on):\n - skill id/ref, start/end\n - high-level checkpoints\n - artifact hashes/paths\n - exit status\n\n2. Debug trace (opt-in, verbose):\n - tool calls with args\n - stdout/stderr snippets\n - expanded inputs\n - timing details\n\nConsider OpenTelemetry span model as reference.\nGPT proposed this; Gemini focused on rotation/caps instead.","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-23T19:49:48.514684945-05:00","updated_at":"2025-12-29T13:55:35.838961236-05:00","closed_at":"2025-12-29T13:55:35.838961236-05:00","close_reason":"Parked with ADR-001: skills-molecules integration deferred. Current simpler approach (skills as standalone) works well. Revisit when complex orchestration needed."}
{"id":"skills-u3d","title":"Define skill trigger conditions","description":"How does an agent know WHEN to apply a skill/checklist?\n\nOptions:\n- frontmatter triggers: field with patterns\n- File-based detection\n- Agent judgment from description\n- Beads hooks on state transitions\n- LLM-based pattern detection","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-23T17:59:09.69468767-05:00","updated_at":"2025-12-28T22:25:38.579989006-05:00","closed_at":"2025-12-28T22:25:38.579989006-05:00","close_reason":"Resolved: agent judgment from description is the standard. Good descriptions + 'When to Use' sections are sufficient. No new trigger mechanism needed - would add complexity without clear benefit."}
{"id":"skills-uan","title":"worklog: merge Guidelines and Remember sections","description":"Guidelines (8 points) and Remember (6 points) sections overlap significantly - both emphasize comprehensiveness, future context, semantic compression. Consolidate into single principles list. Found by bloat lens review.","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-25T02:03:16.148596791-05:00","updated_at":"2025-12-27T10:05:51.527595332-05:00","closed_at":"2025-12-27T10:05:51.527595332-05:00","close_reason":"Closed"}


@@ -1 +1 @@
skills-hgm
skills-lzk


@@ -2,71 +2,6 @@
set -e
JSON_MODE=false
SHORT_NAME=""
BRANCH_NUMBER=""
ARGS=()
i=1
while [ $i -le $# ]; do
arg="${!i}"
case "$arg" in
--json)
JSON_MODE=true
;;
--short-name)
if [ $((i + 1)) -gt $# ]; then
echo 'Error: --short-name requires a value' >&2
exit 1
fi
i=$((i + 1))
next_arg="${!i}"
# Check if the next argument is another option (starts with --)
if [[ "$next_arg" == --* ]]; then
echo 'Error: --short-name requires a value' >&2
exit 1
fi
SHORT_NAME="$next_arg"
;;
--number)
if [ $((i + 1)) -gt $# ]; then
echo 'Error: --number requires a value' >&2
exit 1
fi
i=$((i + 1))
next_arg="${!i}"
if [[ "$next_arg" == --* ]]; then
echo 'Error: --number requires a value' >&2
exit 1
fi
BRANCH_NUMBER="$next_arg"
;;
--help|-h)
echo "Usage: $0 [--json] [--short-name <name>] [--number N] <feature_description>"
echo ""
echo "Options:"
echo " --json Output in JSON format"
echo " --short-name <name> Provide a custom short name (2-4 words) for the branch"
echo " --number N Specify branch number manually (overrides auto-detection)"
echo " --help, -h Show this help message"
echo ""
echo "Examples:"
echo " $0 'Add user authentication system' --short-name 'user-auth'"
echo " $0 'Implement OAuth2 integration for API' --number 5"
exit 0
;;
*)
ARGS+=("$arg")
;;
esac
i=$((i + 1))
done
FEATURE_DESCRIPTION="${ARGS[*]}"
if [ -z "$FEATURE_DESCRIPTION" ]; then
echo "Usage: $0 [--json] [--short-name <name>] [--number N] <feature_description>" >&2
exit 1
fi
# Function to find the repository root by searching for existing project markers
find_repo_root() {
local dir="$1"
@@ -83,20 +18,21 @@ find_repo_root() {
# Function to check existing branches (local and remote) and return next available number
check_existing_branches() {
local short_name="$1"
local specs_dir="$2"
# Fetch all remotes to get latest branch info (suppress errors if no remotes)
git fetch --all --prune 2>/dev/null || true
# Find all branches matching the pattern using git ls-remote (more reliable)
local remote_branches=$(git ls-remote --heads origin 2>/dev/null | grep -E "refs/heads/[0-9]+-${short_name}$" | sed 's/.*\/\([0-9]*\)-.*/\1/' | sort -n)
local remote_branches=$(git ls-remote --heads origin 2>/dev/null | grep -E "refs/heads/[0-9]+-${short_name}$" | sed 's/.*\/\([0-9]*\)-.* /\1/' | sort -n)
# Also check local branches
local local_branches=$(git branch 2>/dev/null | grep -E "^[* ]*[0-9]+-${short_name}$" | sed 's/^[* ]*//' | sed 's/-.*//' | sort -n)
# Check specs directory as well
local spec_dirs=""
if [ -d "$SPECS_DIR" ]; then
spec_dirs=$(find "$SPECS_DIR" -maxdepth 1 -type d -name "[0-9]*-${short_name}" 2>/dev/null | xargs -n1 basename 2>/dev/null | sed 's/-.*//' | sort -n)
if [ -d "$specs_dir" ]; then
spec_dirs=$(find "$specs_dir" -maxdepth 1 -type d -name "[0-9]*-${short_name}" 2>/dev/null | xargs -n1 basename 2>/dev/null | sed 's/-.*//' | sort -n)
fi
# Combine all sources and get the highest number
@@ -111,28 +47,6 @@ check_existing_branches() {
echo $((max_num + 1))
}
# Resolve repository root. Prefer git information when available, but fall back
# to searching for repository markers so the workflow still functions in repositories that
# were initialised with --no-git.
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
if git rev-parse --show-toplevel >/dev/null 2>&1; then
REPO_ROOT=$(git rev-parse --show-toplevel)
HAS_GIT=true
else
REPO_ROOT="$(find_repo_root "$SCRIPT_DIR")"
if [ -z "$REPO_ROOT" ]; then
echo "Error: Could not determine repository root. Please run this script from within the repository." >&2
exit 1
fi
HAS_GIT=false
fi
cd "$REPO_ROOT"
SPECS_DIR="$REPO_ROOT/specs"
mkdir -p "$SPECS_DIR"
# Function to generate branch name with stop word filtering and length filtering
generate_branch_name() {
local description="$1"
@@ -140,121 +54,175 @@ generate_branch_name() {
# Common stop words to filter out
local stop_words="^(i|a|an|the|to|for|of|in|on|at|by|with|from|is|are|was|were|be|been|being|have|has|had|do|does|did|will|would|should|could|can|may|might|must|shall|this|that|these|those|my|your|our|their|want|need|add|get|set)$"
# Convert to lowercase and split into words
local clean_name=$(echo "$description" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/ /g')
# Split into words and lowercase
local words=$(echo "$description" | tr '[:upper:]' '[:lower:]' | tr -s '[:punct:][:space:]' '\n')
# Filter words: remove stop words and words shorter than 3 chars (unless they're uppercase acronyms in original)
local meaningful_words=()
for word in $clean_name; do
# Skip empty words
[ -z "$word" ] && continue
for word in $words; do
[[ -z "$word" ]] && continue
# Keep words that are NOT stop words AND (length >= 3 OR are potential acronyms)
# Keep if not a stop word AND (length >= 3 OR uppercase acronym in original)
if ! echo "$word" | grep -qiE "$stop_words"; then
if [ ${#word} -ge 3 ]; then
meaningful_words+=("$word")
elif echo "$description" | grep -q "\b${word^^}\b"; then
# Keep short words if they appear as uppercase in original (likely acronyms)
if [[ ${#word} -ge 3 ]] || echo "$description" | grep -q "\b${word^^}\b"; then
meaningful_words+=("$word")
fi
fi
done
# If we have meaningful words, use first 3-4 of them
if [ ${#meaningful_words[@]} -gt 0 ]; then
local max_words=3
if [ ${#meaningful_words[@]} -eq 4 ]; then max_words=4; fi
local result=""
local count=0
for word in "${meaningful_words[@]}"; do
if [ $count -ge $max_words ]; then break; fi
if [ -n "$result" ]; then result="$result-"; fi
result="$result$word"
count=$((count + 1))
done
echo "$result"
if [[ ${#meaningful_words[@]} -gt 0 ]]; then
# Use first 4 meaningful words joined by hyphens
printf "%s\n" "${meaningful_words[@]}" | head -n 4 | tr '\n' '-' | sed 's/-$//'
else
# Fallback to original logic if no meaningful words found
echo "$description" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g' | sed 's/-\+/-/g' | sed 's/^-//' | sed 's/-$//' | tr '-' '\n' | grep -v '^$' | head -3 | tr '\n' '-' | sed 's/-$//'
# Fallback: just take first 3 non-empty words
echo "$description" | tr '[:upper:]' '[:lower:]' | tr -s '[:punct:][:space:]' '\n' | grep -v '^$' | head -n 3 | tr '\n' '-' | sed 's/-$//'
fi
}
# Generate branch name
if [ -n "$SHORT_NAME" ]; then
# Use provided short name, just clean it up
BRANCH_SUFFIX=$(echo "$SHORT_NAME" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g' | sed 's/-\+/-/g' | sed 's/^-//' | sed 's/-$//')
else
# Generate from description with smart filtering
BRANCH_SUFFIX=$(generate_branch_name "$FEATURE_DESCRIPTION")
fi
main() {
local json_mode=false
local short_name=""
local branch_number=""
local args=()
local i=1
while [ $i -le $# ]; do
local arg="${!i}"
case "$arg" in
--json)
json_mode=true
;;
--short-name)
if [ $((i + 1)) -gt $# ]; then
echo 'Error: --short-name requires a value' >&2
exit 1
fi
i=$((i + 1))
local next_arg="${!i}"
if [[ "$next_arg" == --* ]]; then
echo 'Error: --short-name requires a value' >&2
exit 1
fi
short_name="$next_arg"
;;
--number)
if [ $((i + 1)) -gt $# ]; then
echo 'Error: --number requires a value' >&2
exit 1
fi
i=$((i + 1))
local next_arg="${!i}"
if [[ "$next_arg" == --* ]]; then
echo 'Error: --number requires a value' >&2
exit 1
fi
branch_number="$next_arg"
;;
--help|-h)
echo "Usage: $0 [--json] [--short-name <name>] [--number N] <feature_description>"
echo ""
echo "Options:"
echo " --json Output in JSON format"
echo " --short-name <name> Provide a custom short name (2-4 words) for the branch"
echo " --number N Specify branch number manually (overrides auto-detection)"
echo " --help, -h Show this help message"
echo ""
echo "Examples:"
echo " $0 'Add user authentication system' --short-name 'user-auth'"
echo " $0 'Implement OAuth2 integration for API' --number 5"
return 0
;;
*)
args+=("$arg")
;;
esac
i=$((i + 1))
done
# Determine branch number
if [ -z "$BRANCH_NUMBER" ]; then
if [ "$HAS_GIT" = true ]; then
# Check existing branches on remotes
BRANCH_NUMBER=$(check_existing_branches "$BRANCH_SUFFIX")
else
# Fall back to local directory check
HIGHEST=0
if [ -d "$SPECS_DIR" ]; then
for dir in "$SPECS_DIR"/*; do
[ -d "$dir" ] || continue
dirname=$(basename "$dir")
number=$(echo "$dirname" | grep -o '^[0-9]\+' || echo "0")
number=$((10#$number))
if [ "$number" -gt "$HIGHEST" ]; then HIGHEST=$number; fi
done
fi
BRANCH_NUMBER=$((HIGHEST + 1))
local feature_description="${args[*]}"
if [ -z "$feature_description" ]; then
echo "Usage: $0 [--json] [--short-name <name>] [--number N] <feature_description>" >&2
exit 1
fi
fi
FEATURE_NUM=$(printf "%03d" "$BRANCH_NUMBER")
BRANCH_NAME="${FEATURE_NUM}-${BRANCH_SUFFIX}"
local script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
local repo_root
local has_git
# GitHub enforces a 244-byte limit on branch names
# Validate and truncate if necessary
MAX_BRANCH_LENGTH=244
if [ ${#BRANCH_NAME} -gt $MAX_BRANCH_LENGTH ]; then
# Calculate how much we need to trim from suffix
# Account for: feature number (3) + hyphen (1) = 4 chars
MAX_SUFFIX_LENGTH=$((MAX_BRANCH_LENGTH - 4))
# Truncate suffix at word boundary if possible
TRUNCATED_SUFFIX=$(echo "$BRANCH_SUFFIX" | cut -c1-$MAX_SUFFIX_LENGTH)
# Remove trailing hyphen if truncation created one
TRUNCATED_SUFFIX=$(echo "$TRUNCATED_SUFFIX" | sed 's/-$//')
ORIGINAL_BRANCH_NAME="$BRANCH_NAME"
BRANCH_NAME="${FEATURE_NUM}-${TRUNCATED_SUFFIX}"
>&2 echo "[specify] Warning: Branch name exceeded GitHub's 244-byte limit"
>&2 echo "[specify] Original: $ORIGINAL_BRANCH_NAME (${#ORIGINAL_BRANCH_NAME} bytes)"
>&2 echo "[specify] Truncated to: $BRANCH_NAME (${#BRANCH_NAME} bytes)"
fi
if git rev-parse --show-toplevel >/dev/null 2>&1; then
repo_root=$(git rev-parse --show-toplevel)
has_git=true
else
repo_root="$(find_repo_root "$script_dir")"
if [ -z "$repo_root" ]; then
echo "Error: Could not determine repository root." >&2
exit 1
fi
has_git=false
fi
if [ "$HAS_GIT" = true ]; then
git checkout -b "$BRANCH_NAME"
else
>&2 echo "[specify] Warning: Git repository not detected; skipped branch creation for $BRANCH_NAME"
fi
cd "$repo_root"
local specs_dir="$repo_root/specs"
mkdir -p "$specs_dir"
FEATURE_DIR="$SPECS_DIR/$BRANCH_NAME"
mkdir -p "$FEATURE_DIR"
# Generate branch name suffix
local branch_suffix
if [ -n "$short_name" ]; then
branch_suffix=$(echo "$short_name" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g' | sed 's/-\+/-/g' | sed 's/^-//' | sed 's/-$//')
else
branch_suffix=$(generate_branch_name "$feature_description")
fi
TEMPLATE="$REPO_ROOT/.specify/templates/spec-template.md"
SPEC_FILE="$FEATURE_DIR/spec.md"
if [ -f "$TEMPLATE" ]; then cp "$TEMPLATE" "$SPEC_FILE"; else touch "$SPEC_FILE"; fi
# Determine feature number
local feature_num
if [ -n "$branch_number" ]; then
feature_num="$branch_number"
else
feature_num=$(check_existing_branches "$branch_suffix" "$specs_dir")
fi
# Set the SPECIFY_FEATURE environment variable for the current session
export SPECIFY_FEATURE="$BRANCH_NAME"
local branch_name="${feature_num}-${branch_suffix}"
if $JSON_MODE; then
printf '{"BRANCH_NAME":"%s","SPEC_FILE":"%s","FEATURE_NUM":"%s"}\n' "$BRANCH_NAME" "$SPEC_FILE" "$FEATURE_NUM"
else
echo "BRANCH_NAME: $BRANCH_NAME"
echo "SPEC_FILE: $SPEC_FILE"
echo "FEATURE_NUM: $FEATURE_NUM"
echo "SPECIFY_FEATURE environment variable set to: $BRANCH_NAME"
fi
# Validate and truncate if necessary
local max_branch_length=244
if [ ${#branch_name} -gt $max_branch_length ]; then
local max_suffix_length=$((max_branch_length - 4))
local truncated_suffix=$(echo "$branch_suffix" | cut -c1-$max_suffix_length | sed 's/-$//')
local original_branch_name="$branch_name"
branch_name="${feature_num}-${truncated_suffix}"
>&2 echo "[specify] Warning: Branch name exceeded GitHub's 244-byte limit"
>&2 echo "[specify] Original: $original_branch_name (${#original_branch_name} bytes)"
>&2 echo "[specify] Truncated to: $branch_name (${#branch_name} bytes)"
fi
if [ "$has_git" = true ]; then
git checkout -b "$branch_name"
else
>&2 echo "[specify] Warning: Git repository not detected; skipped branch creation for $branch_name"
fi
local feature_dir="$specs_dir/$branch_name"
mkdir -p "$feature_dir"
local template="$repo_root/.specify/templates/spec-template.md"
local spec_file="$feature_dir/spec.md"
if [ -f "$template" ]; then cp "$template" "$spec_file"; else touch "$spec_file"; fi
# Set the SPECIFY_FEATURE environment variable for the current session
export SPECIFY_FEATURE="$branch_name"
if $json_mode; then
printf '{"BRANCH_NAME":"%s","SPEC_FILE":"%s","FEATURE_NUM":"%s"}\n' "$branch_name" "$spec_file" "$feature_num"
else
echo "BRANCH_NAME: $branch_name"
echo "SPEC_FILE: $spec_file"
echo "FEATURE_NUM: $feature_num"
echo "SPECIFY_FEATURE environment variable set to: $branch_name"
fi
}
# Execute main function if script is run directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
main "$@"
fi


@@ -327,29 +327,37 @@ create_new_agent_file() {
fi
local substitutions=(
"s|\[PROJECT NAME\]|$project_name|"
"s|\[DATE\]|$current_date|"
"s|\[EXTRACTED FROM ALL PLAN.MD FILES\]|$tech_stack|"
"s|\[ACTUAL STRUCTURE FROM PLANS\]|$project_structure|g"
"s|\[ONLY COMMANDS FOR ACTIVE TECHNOLOGIES\]|$commands|"
"s|\[LANGUAGE-SPECIFIC, ONLY FOR LANGUAGES IN USE\]|$language_conventions|"
"s|\[LAST 3 FEATURES AND WHAT THEY ADDED\]|$recent_change|"
"-e" "s|\[PROJECT NAME\]|$project_name|"
"-e" "s|\[DATE\]|$current_date|"
"-e" "s|\[EXTRACTED FROM ALL PLAN.MD FILES\]|$tech_stack|"
"-e" "s|\[ACTUAL STRUCTURE FROM PLANS\]|$project_structure|g"
"-e" "s|\[ONLY COMMANDS FOR ACTIVE TECHNOLOGIES\]|$commands|"
"-e" "s|\[LANGUAGE-SPECIFIC, ONLY FOR LANGUAGES IN USE\]|$language_conventions|"
"-e" "s|\[LAST 3 FEATURES AND WHAT THEY ADDED\]|$recent_change|"
)
for substitution in "${substitutions[@]}"; do
if ! sed -i.bak -e "$substitution" "$temp_file"; then
log_error "Failed to perform substitution: $substitution"
rm -f "$temp_file" "$temp_file.bak"
return 1
fi
done
# Perform all substitutions in one pass to a second temp file
local temp_file_final
temp_file_final="${temp_file}.final"
if ! sed "${substitutions[@]}" "$temp_file" > "$temp_file_final"; then
log_error "Failed to perform substitutions"
rm -f "$temp_file_final"
return 1
fi
# Replace the working temp file
mv "$temp_file_final" "$temp_file"
# Convert \n sequences to actual newlines
newline=$(printf '\n')
sed -i.bak2 "s/\\\\n/${newline}/g" "$temp_file"
# Clean up backup files
rm -f "$temp_file.bak" "$temp_file.bak2"
temp_file_nl="${temp_file}.nl"
if ! sed "s/\\\\n/${newline}/g" "$temp_file" > "$temp_file_nl"; then
log_error "Failed to convert newlines"
rm -f "$temp_file_nl"
return 1
fi
mv "$temp_file_nl" "$temp_file"
return 0
}


@@ -37,6 +37,82 @@ EOF
exit 1
}
# Function to inject config into Nix file
inject_nix_config() {
local target_file="$1"
local config_block="$2"
local marker="$3" # Unique string to check if already deployed
if [[ ! -f "$target_file" ]]; then
echo "⚠️ File not found: $target_file (skipping)"
return
fi
if grep -q "$marker" "$target_file"; then
echo " Config already present in $(basename "$target_file")"
else
echo "Injecting config into $(basename "$target_file")..."
# Create a secure temporary file
local temp_file
temp_file=$(mktemp "${target_file}.XXXXXX")
# Ensure cleanup on exit or error
trap 'rm -f "$temp_file"' EXIT
# Insert before the last line (assuming it is '}')
if ! head -n -1 "$target_file" > "$temp_file"; then
echo "Error: failed to read $target_file" >&2
return 1
fi
echo "$config_block" >> "$temp_file"
if ! tail -n 1 "$target_file" >> "$temp_file"; then
echo "Error: failed to append to $temp_file" >&2
return 1
fi
# Validate: temp file should be larger than original (since we're adding)
local orig_size
orig_size=$(stat -c%s "$target_file")
local new_size
new_size=$(stat -c%s "$temp_file")
if [[ $new_size -le $orig_size ]]; then
echo "Error: Validation failed, new file is not larger than original" >&2
return 1
fi
# Atomic move
if ! mv "$temp_file" "$target_file"; then
echo "Error: Failed to replace $target_file" >&2
return 1
fi
# Clear trap after successful move
trap - EXIT
echo "✓ Updated $(basename "$target_file")"
fi
}
# Helper to inject a home.file entry into a Nix config
# Usage: inject_home_file <target_nix_file> <dest_path_in_home> <source_relative_to_config> <extra_props> <comment>
inject_home_file() {
local target_file="$1"
local home_path="$2"
local source_path="$3"
local extra_props="$4"
local comment="$5"
local config_block="
# Skill: $comment
home.file.\"$home_path\" = {
source = $source_path;
$extra_props
};"
inject_nix_config "$target_file" "$config_block" "$home_path"
}
if [[ -z "$SKILL_NAME" ]]; then
usage
fi
@@ -108,80 +184,42 @@ if [[ -n "$SECURITY_WARNING" ]]; then
echo "$SECURITY_WARNING"
fi
# Function to inject config into Nix file
inject_nix_config() {
local target_file="$1"
local config_block="$2"
local marker="$3" # Unique string to check if already deployed
if [[ ! -f "$target_file" ]]; then
echo "⚠️ File not found: $target_file (skipping)"
return
fi
if grep -q "$marker" "$target_file"; then
echo " Config already present in $(basename "$target_file")"
else
echo "Injecting config into $(basename "$target_file")..."
# Create backup
cp "$target_file" "${target_file}.bak"
# Insert before the last line (assuming it is '}')
# We use a temporary file to construct the new content
head -n -1 "$target_file" > "${target_file}.tmp"
echo "$config_block" >> "${target_file}.tmp"
tail -n 1 "$target_file" >> "${target_file}.tmp"
mv "${target_file}.tmp" "$target_file"
echo "✓ Updated $(basename "$target_file")"
fi
}
echo "Configuring system..."
echo ""
# 1. Claude Code Config
CLAUDE_CONFIG="
# Skill: $SKILL_NAME
home.file.\".claude/skills/$SKILL_NAME\" = {
source = ../claude/skills/$SKILL_NAME;
recursive = true;
};"
inject_nix_config "$DOTFILES_REPO/home/claude.nix" "$CLAUDE_CONFIG" ".claude/skills/$SKILL_NAME"
inject_home_file "$DOTFILES_REPO/home/claude.nix" \
".claude/skills/$SKILL_NAME" \
"../claude/skills/$SKILL_NAME" \
"recursive = true;" \
"$SKILL_NAME"
# 2. OpenCode Config
OPENCODE_CONFIG="
# Skill: $SKILL_NAME
home.file.\".config/opencode/skills/$SKILL_NAME\" = {
source = ../claude/skills/$SKILL_NAME;
recursive = true;
};"
inject_nix_config "$DOTFILES_REPO/home/opencode.nix" "$OPENCODE_CONFIG" ".config/opencode/skills/$SKILL_NAME"
inject_home_file "$DOTFILES_REPO/home/opencode.nix" \
".config/opencode/skills/$SKILL_NAME" \
"../claude/skills/$SKILL_NAME" \
"recursive = true;" \
"$SKILL_NAME"
# 3. Antigravity / Global Config
# Check if antigravity.nix exists, otherwise warn
ANTIGRAVITY_NIX="$DOTFILES_REPO/home/antigravity.nix"
if [[ -f "$ANTIGRAVITY_NIX" ]]; then
# For global scripts, we need to find executable scripts in the skill
SCRIPTS=$(find "$SKILL_SOURCE/scripts" -name "*.sh" -type f)
if [[ -n "$SCRIPTS" ]]; then
GLOBAL_CONFIG=""
if [[ -d "$SKILL_SOURCE/scripts" ]]; then
SCRIPTS=$(find "$SKILL_SOURCE/scripts" -name "*.sh" -type f)
for script in $SCRIPTS; do
SCRIPT_NAME=$(basename "$script")
SCRIPT_NO_EXT="${SCRIPT_NAME%.*}"
# If skill has only one script and it matches skill name or is 'search', use skill name
# Otherwise use script name
LINK_NAME="$SCRIPT_NO_EXT"
GLOBAL_CONFIG="$GLOBAL_CONFIG
# Skill: $SKILL_NAME ($SCRIPT_NAME)
home.file.\".local/bin/$LINK_NAME\" = {
source = ../claude/skills/$SKILL_NAME/scripts/$SCRIPT_NAME;
executable = true;
};"
inject_home_file "$ANTIGRAVITY_NIX" \
".local/bin/$LINK_NAME" \
"../claude/skills/$SKILL_NAME/scripts/$SCRIPT_NAME" \
"executable = true;" \
"$SKILL_NAME ($SCRIPT_NAME)"
done
inject_nix_config "$ANTIGRAVITY_NIX" "$GLOBAL_CONFIG" ".local/bin/$LINK_NAME"
fi
else
echo "⚠️ $ANTIGRAVITY_NIX not found. Skipping global binary configuration."
@@ -195,4 +233,4 @@ echo ""
echo " cd $DOTFILES_REPO"
echo " sudo nixos-rebuild switch --flake .#delpad"
echo ""
echo "Then restart your agents."
echo "Then restart your agents."

bin/tests/test-deploy-skill.sh Executable file

@@ -0,0 +1,299 @@
#!/usr/bin/env bash
# Tests for deploy-skill.sh config injection functions
# Run: bash bin/tests/test-deploy-skill.sh
set -uo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PASSED=0
FAILED=0
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
NC='\033[0m' # No Color
# Extract inject_nix_config function
eval "$(sed -n '/^inject_nix_config()/,/^}/p' "$SCRIPT_DIR/../deploy-skill.sh")"
# Extract inject_home_file function
eval "$(sed -n '/^inject_home_file()/,/^}/p' "$SCRIPT_DIR/../deploy-skill.sh")"
# Test helpers
assert_contains() {
local description="$1"
local pattern="$2"
local file="$3"
if grep -q "$pattern" "$file" 2>/dev/null; then
echo -e "${GREEN}PASS${NC}: $description"
((PASSED++))
else
echo -e "${RED}FAIL${NC}: $description"
echo " Pattern not found: '$pattern'"
echo " File contents:"
cat "$file" | sed 's/^/ /'
((FAILED++))
fi
}
assert_not_contains() {
local description="$1"
local pattern="$2"
local file="$3"
if ! grep -q "$pattern" "$file" 2>/dev/null; then
echo -e "${GREEN}PASS${NC}: $description"
((PASSED++))
else
echo -e "${RED}FAIL${NC}: $description"
echo " Pattern found but should not be: '$pattern'"
((FAILED++))
fi
}
assert_count() {
local description="$1"
local expected="$2"
local pattern="$3"
local file="$4"
local actual
actual=$(grep -c "$pattern" "$file" 2>/dev/null || echo "0")
if [[ "$actual" == "$expected" ]]; then
echo -e "${GREEN}PASS${NC}: $description"
((PASSED++))
else
echo -e "${RED}FAIL${NC}: $description"
echo " Expected count: $expected, Actual: $actual"
echo " Pattern: '$pattern'"
((FAILED++))
fi
}
assert_last_line() {
local description="$1"
local expected="$2"
local file="$3"
local actual
actual=$(tail -n 1 "$file")
if [[ "$actual" == "$expected" ]]; then
echo -e "${GREEN}PASS${NC}: $description"
((PASSED++))
else
echo -e "${RED}FAIL${NC}: $description"
echo " Expected last line: '$expected'"
echo " Actual last line: '$actual'"
((FAILED++))
fi
}
assert_output_contains() {
local description="$1"
local pattern="$2"
local output="$3"
if echo "$output" | grep -q "$pattern"; then
echo -e "${GREEN}PASS${NC}: $description"
((PASSED++))
else
echo -e "${RED}FAIL${NC}: $description"
echo " Pattern not found: '$pattern'"
echo " Output was: $output"
((FAILED++))
fi
}
# Setup: create a minimal Nix file
setup_nix_file() {
local content="$1"
local test_file
test_file=$(mktemp)
echo -e "$content" > "$test_file"
echo "$test_file"
}
cleanup() {
rm -f "$1"
}
echo "=== Deploy Skill Config Injection Tests ==="
echo ""
# --- Basic Injection ---
echo "## Basic Injection"
# Test: Config injected before closing brace
test_file=$(setup_nix_file "{ config, pkgs, ... }:
{
home.packages = [ pkgs.git ];
}")
inject_nix_config "$test_file" " # Added config
home.file.\"test\" = { source = ./test; };" "home.file.\"test\"" > /dev/null
assert_contains "Config block injected" "home.file.\"test\"" "$test_file"
assert_last_line "Closing brace preserved at end" "}" "$test_file"
assert_contains "Original content preserved" "home.packages" "$test_file"
cleanup "$test_file"
# --- Idempotency ---
echo ""
echo "## Idempotency"
# Test: Running twice doesn't duplicate
test_file=$(setup_nix_file "{ config, pkgs, ... }:
{
home.packages = [ ];
}")
inject_nix_config "$test_file" " home.file.\"skill-a\" = { source = ./a; };" "skill-a" > /dev/null
inject_nix_config "$test_file" " home.file.\"skill-a\" = { source = ./a; };" "skill-a" > /dev/null
assert_count "Config not duplicated" "1" "skill-a" "$test_file"
cleanup "$test_file"
# Test: Different configs can be added
test_file=$(setup_nix_file "{ config, pkgs, ... }:
{
}")
inject_nix_config "$test_file" " home.file.\"skill-b\" = {};" "skill-b" > /dev/null
inject_nix_config "$test_file" " home.file.\"skill-c\" = {};" "skill-c" > /dev/null
assert_contains "First config present" "skill-b" "$test_file"
assert_contains "Second config present" "skill-c" "$test_file"
assert_count "Each config appears once" "1" "skill-b" "$test_file"
assert_count "Each config appears once (c)" "1" "skill-c" "$test_file"
cleanup "$test_file"
# --- File Not Found ---
echo ""
echo "## File Not Found Handling"
output=$(inject_nix_config "/nonexistent/path/file.nix" "config" "marker" 2>&1)
assert_output_contains "Skips missing file gracefully" "skipping" "$output"
# --- Brace Structure ---
echo ""
echo "## Brace Structure Preservation"
# Test: Complex nested structure
test_file=$(setup_nix_file "{ config, pkgs, ... }:
{
programs.git = {
enable = true;
userName = \"test\";
};
home.packages = with pkgs; [
ripgrep
fd
];
}")
inject_nix_config "$test_file" "
# New skill
home.file.\".skill\" = {
source = ./skill;
recursive = true;
};" ".skill" > /dev/null
assert_last_line "Closing brace still at end after complex inject" "}" "$test_file"
assert_contains "Nested braces preserved" "programs.git" "$test_file"
assert_contains "New config added" ".skill" "$test_file"
cleanup "$test_file"
# --- inject_home_file Wrapper ---
echo ""
echo "## inject_home_file Wrapper"
test_file=$(setup_nix_file "{ config, pkgs, ... }:
{
}")
inject_home_file "$test_file" \
".claude/skills/my-skill" \
"../claude/skills/my-skill" \
"recursive = true;" \
"my-skill" > /dev/null
assert_contains "Home path in config" ".claude/skills/my-skill" "$test_file"
assert_contains "Source path in config" "../claude/skills/my-skill" "$test_file"
assert_contains "Extra props included" "recursive = true" "$test_file"
assert_contains "Comment included" "# Skill: my-skill" "$test_file"
cleanup "$test_file"
# Test: inject_home_file idempotency
test_file=$(setup_nix_file "{ config, pkgs, ... }:
{
}")
inject_home_file "$test_file" ".test/path" "./source" "" "test" > /dev/null
inject_home_file "$test_file" ".test/path" "./source" "" "test" > /dev/null
assert_count "inject_home_file idempotent" "1" ".test/path" "$test_file"
cleanup "$test_file"
# --- Already Present Detection ---
echo ""
echo "## Already Present Detection"
test_file=$(setup_nix_file "{ config, pkgs, ... }:
{
# Existing skill
home.file.\".claude/skills/existing\" = {
source = ./existing;
};
}")
output=$(inject_nix_config "$test_file" "new config" ".claude/skills/existing" 2>&1)
assert_output_contains "Detects existing config" "already present" "$output"
cleanup "$test_file"
# --- Empty Extra Props ---
echo ""
echo "## Edge Cases"
# Test: Empty extra props
test_file=$(setup_nix_file "{ config, pkgs, ... }:
{
}")
inject_home_file "$test_file" ".simple/path" "./src" "" "simple" > /dev/null
assert_contains "Works with empty extra props" ".simple/path" "$test_file"
cleanup "$test_file"
# Test: Single line file (edge case - script assumes multi-line)
test_file=$(setup_nix_file "{}")
inject_nix_config "$test_file" " config = true;" "config" > /dev/null
assert_contains "Works with minimal file" "config = true" "$test_file"
# Note: Single-line "{}" becomes last line since head -n -1 returns empty
# This is acceptable - real Nix files are always multi-line
assert_last_line "Original content preserved as last line" "{}" "$test_file"
cleanup "$test_file"
# --- Summary ---
echo ""
echo "=== Summary ==="
echo -e "Passed: ${GREEN}$PASSED${NC}"
echo -e "Failed: ${RED}$FAILED${NC}"
if [[ $FAILED -gt 0 ]]; then
exit 1
fi


@@ -0,0 +1,172 @@
---
title: "Agent File Update Tests and Section Ordering Bug Fix"
date: 2026-01-02
keywords: [testing, bash, agent-context, bug-fix, state-machine, specify]
commits: 1
compression_status: uncompressed
---
# Session Summary
**Date:** 2026-01-02
**Focus Area:** Add comprehensive tests for agent file update logic and fix discovered bug
# Accomplishments
- [x] Created test-agent-update.sh with 33 test cases
- [x] Discovered and fixed bug in update-agent-context.sh section ordering
- [x] Tests cover: basic functionality, missing sections, timestamps, idempotency, change limits, DB entries, placement, edge cases
- [x] Closed skills-hgm (Add tests for agent file update logic)
- [x] Previous session: Completed skills-x33 (branch name tests - 27 tests)
# Key Decisions
## Decision 1: Test extraction approach using awk instead of sed
- **Context:** Need to extract functions from update-agent-context.sh for isolated testing
- **Options considered:**
1. `sed -n '/^function()/,/^}$/p'` - Original approach
2. `awk '/^function\(\)/,/^}$/ {print; if (/^}$/) exit}'` - More precise
3. Source entire script with mocked dependencies
- **Rationale:** awk with explicit exit on closing brace is more reliable for multi-function scripts
- **Impact:** Clean function extraction without pulling in adjacent functions
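A minimal sketch of this extraction pattern, using hypothetical `greet`/`farewell` functions rather than the real script:

```bash
# Two functions in one file; we want to extract only the first.
cat > /tmp/funcs.sh <<'EOF'
greet() {
  echo "hello"
}
farewell() {
  echo "bye"
}
EOF

# The explicit exit stops awk at greet's closing brace,
# so farewell() is never printed.
awk '/^greet\(\)/,/^}$/ {print; if (/^}$/) exit}' /tmp/funcs.sh
```

Without the `if (/^}$/) exit` action, the range pattern would re-open at the next function whose closing brace also matches `^}$`, which is why the plain `sed -n` range was unreliable here.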
## Decision 2: Fix the bug rather than document as expected behavior
- **Context:** Found that Recent Changes section was silently skipped when preceded by Active Technologies
- **Rationale:** This was clearly unintended behavior - the code was supposed to update both sections
- **Impact:** Agent context files now correctly receive change entries regardless of section ordering
# Problems & Solutions
| Problem | Solution | Learning |
|---------|----------|----------|
| eval/sed extraction didn't work correctly for update_existing_agent_file | Used awk with explicit exit on `^}$` pattern | awk is more precise for function boundary detection |
| Tests failing: change entries not added to existing Recent Changes | Traced through code, found `## Recent Changes` caught by generic "close tech section on any ##" logic | State machine section handlers must check specific headers before generic patterns |
| Idempotency test counting wrong pattern | Pattern "Python + Django" appeared in both tech and change sections | Use anchored regex `^- pattern$` for specific line matching |
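The anchored-count fix from the last row can be illustrated with a hypothetical two-line fixture:

```bash
# "Python + Django" appears on two lines, but only one line is
# exactly the change entry we want to count.
printf -- '- Python + Django (src/, tests/)\n- Python + Django\n' > /tmp/entries.txt

grep -c 'Python + Django' /tmp/entries.txt      # substring match over-counts (2)
grep -c '^- Python + Django$' /tmp/entries.txt  # anchored match is exact (1)
```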
# Technical Details
## Code Changes
- Total files modified: 2
- Key files changed:
- `.specify/scripts/bash/update-agent-context.sh` - Fixed section ordering in while loop
- `.specify/scripts/bash/tests/test-agent-update.sh` - New test file (33 tests)
## The Bug
The `update_existing_agent_file` function processes files line-by-line with a state machine. When in the Active Technologies section (`in_tech_section=true`), encountering any `##` header would:
1. Add pending tech entries
2. Echo the header line
3. Set `in_tech_section=false`
4. **Continue to next iteration** ← Bug here!
This meant `## Recent Changes` was echoed but the change entry addition code (which checks `if [[ "$line" == "## Recent Changes" ]]`) was never reached.
**Fix:** Check for `## Recent Changes` BEFORE the generic "any ## closes tech section" logic:
```bash
# Handle Recent Changes section FIRST (before generic ## handling)
if [[ "$line" == "## Recent Changes" ]]; then
# Close tech section if we were in it
if [[ $in_tech_section == true ]]; then
if [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
tech_entries_added=true
fi
in_tech_section=false
fi
echo "$line" >> "$temp_file"
# Add new change entry right after the heading
if [[ -n "$new_change_entry" ]]; then
echo "$new_change_entry" >> "$temp_file"
fi
...
fi
```
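The underlying pitfall can be reduced to a minimal sketch (hypothetical handlers, not the real script's logic):

```bash
# The generic header handler's `continue` fires first, so the
# specific "## Recent Changes" handler below is unreachable.
while IFS= read -r line; do
  if [[ "$line" == "## "* ]]; then
    echo "generic handler: $line"
    continue   # short-circuits every later check in the loop body
  fi
  if [[ "$line" == "## Recent Changes" ]]; then
    echo "specific handler: never runs"
  fi
done <<'EOF'
## Recent Changes
EOF
```

Only the generic handler's line is printed, which is exactly why the fix moves the specific check ahead of the generic one.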
## Test Categories
1. **Basic Functionality** (4 tests) - Tech/change entries added, existing content preserved
2. **Missing Sections** (6 tests) - Creates sections when absent
3. **Timestamp Updates** (2 tests) - Updates `**Last updated**:` dates
4. **Idempotency** (1 test) - No duplicates when re-running
5. **Change Entry Limits** (5 tests) - Max 2 existing changes kept
6. **Database Entries** (3 tests) - DB-only additions work
7. **Tech Entry Placement** (3 tests) - Entries at correct position
8. **Edge Cases** (5 tests) - Empty, NEEDS CLARIFICATION, EOF handling
9. **format_technology_stack** (4 tests) - Helper function unit tests
## Commands Used
```bash
# Run the tests
bash .specify/scripts/bash/tests/test-agent-update.sh
# Debug function extraction
eval "$(awk '/^update_existing_agent_file\(\)/,/^}$/ {print; if (/^}$/) exit}' script.sh)"
# Trace execution
bash -x -c '...'
```
# Process and Workflow
## What Worked Well
- Test-first debugging: Writing tests exposed the bug immediately
- Incremental tracing: Adding debug output to narrow down the issue
- Hypothesis testing: Created minimal test cases to verify the bug cause
## What Was Challenging
- Function extraction: eval/sed patterns are finicky with nested braces
- Understanding state machine flow: Had to trace through manually to find where `continue` was short-circuiting
# Learning and Insights
## Technical Insights
- Bash state machines need careful ordering: specific cases before generic patterns
- `continue` in while loops can silently skip subsequent handlers
- Test fixtures with `setup_test_file()` using heredocs are clean for bash testing
- grep patterns for line counting need anchoring to avoid false positives
## Process Insights
- When tests fail unexpectedly, the code might be buggy (not just the tests)
- Adding debug output (`echo "DEBUG: ..." >&2`) is a fast way to trace bash execution
# Context for Future Work
## Open Questions
- Should we add more tests for create_new_agent_file function?
- Should we run shellcheck/shfmt on the test files?
## Next Steps
- skills-s92: Add tests for deploy-skill.sh (continues testing theme)
- skills-7bu: Add atomic file operations (related to file update code)
- Rebuild home-manager to deploy updated script
## Related Work
- [2026-01-01-ops-review-phase-3-and-worklog-migration.md](2026-01-01-ops-review-phase-3-and-worklog-migration.md) - Previous session
- skills-x33: Branch name tests (27 tests, completed in prior session)
# Raw Notes
- The deployed worklog skill still uses .org format; repo has .md
- Network issue prevented bd sync (192.168.1.108:3000 unreachable)
- Total test count across both test files: 60 tests (27 branch + 33 agent)
# Session Metrics
- Commits made: 1
- Files touched: 2
- Lines added/removed: +536/-13
- Tests added: 33
- Tests passing: 33/33


@ -12,22 +12,7 @@
skillsModule = import ./modules/ai-skills.nix;
# List of available skills
availableSkills = [
"bd-issue-tracking"
"code-review"
"doc-review"
"niri-window-capture"
"ops-review"
"orch"
"screenshot-latest"
"spec-review"
"tufte-press"
"worklog"
"update-spec-kit"
"update-opencode"
"web-search"
"web-research"
];
availableSkills = builtins.attrNames (import ./skills.nix);
in
flake-utils.lib.eachDefaultSystem
(system:


@ -8,18 +8,11 @@ let
# Derive repo root from skillsPath (skills/ is a subdirectory)
repoRoot = dirOf cfg.skillsPath;
skillsData = import ../skills.nix;
skillsList = ''
Available skills:
- code-review: Multi-lens code review with issue filing
- niri-window-capture: Invisibly capture window screenshots
- ops-review: Multi-lens ops/infrastructure review
- screenshot-latest: Find latest screenshots
- tufte-press: Generate study card JSON
- worklog: Create org-mode worklogs
- update-spec-kit: Update spec-kit ecosystem
- update-opencode: Update OpenCode via Nix
- web-search: Search the web via Claude
- web-research: Deep web research with multiple backends
${concatStringsSep "\n" (map (name: " - ${name}: ${skillsData.${name}}") (attrNames skillsData))}
'';
in {
options.services.ai-skills = {

18
skills.nix Normal file

@ -0,0 +1,18 @@
{
ai-tools-doctor = "Check and sync AI tool versions";
bd-issue-tracking = "BD issue tracking skill";
code-review = "Multi-lens code review with issue filing";
doc-review = "AI-assisted documentation review";
niri-window-capture = "Invisibly capture window screenshots";
ops-review = "Multi-lens ops/infrastructure review";
orch = "Orchestration and consensus skill";
playwright-visit = "Browser automation and content extraction";
screenshot-latest = "Find latest screenshots";
spec-review = "Technical specification review";
tufte-press = "Generate study card JSON";
worklog = "Create structured worklogs";
update-spec-kit = "Update spec-kit ecosystem";
update-opencode = "Update OpenCode via Nix";
web-search = "Search the web via Claude";
web-research = "Deep web research with multiple backends";
}


@ -113,13 +113,34 @@ if [[ "$DRY_RUN" == true ]]; then
exit 0
fi
# Perform atomic update using sed
sed -i \
# Perform atomic update
TEMP_FILE=$(mktemp "${NIX_FILE}.XXXXXX")
trap 'rm -f "$TEMP_FILE"' EXIT
if ! sed \
-e "s/version = \"[^\"]*\"/version = \"$VERSION\"/" \
-e "s/sha256 = \"[^\"]*\"/sha256 = \"$SHA256\"/" \
"$NIX_FILE"
"$NIX_FILE" > "$TEMP_FILE"; then
echo "Error: failed to generate updated Nix file" >&2
exit 1
fi
# Verify update succeeded
# Basic validation: check if it's not empty and contains expected values
if [[ ! -s "$TEMP_FILE" ]]; then
echo "Error: Generated file is empty" >&2
exit 1
fi
if ! grep -q "version = \"$VERSION\"" "$TEMP_FILE" || ! grep -q "sha256 = \"$SHA256\"" "$TEMP_FILE"; then
echo "Error: Generated file validation failed (expected patterns not found)" >&2
exit 1
fi
# Atomic move
mv "$TEMP_FILE" "$NIX_FILE"
trap - EXIT
# Verify update succeeded (extra safety check)
UPDATED_VERSION=$(grep -oP 'version\s*=\s*"\K[^"]+' "$NIX_FILE")
UPDATED_SHA256=$(grep -oP 'sha256\s*=\s*"\K[^"]+' "$NIX_FILE")