EA System Development

Building Hal's personal executive assistant as a complete system - from voice capture on the phone through to an intelligent idea corpus managed by Claude Code. The core system is live and the voice capture pipeline shipped 15 Feb 2026.

Current State

The EA is operational with all core pieces in place. Hal captures ideas by voice (iPhone Action Button) or text (Claude Code), the EA enriches and organises them, and the corpus is growing. The system is in daily use and calibration.

What exists:

  • GitHub repo with domain structure, manifest, and idea files
  • CLAUDE.md defining EA personality, categorisation rules, enrichment behaviour
  • Slash commands (/inbox, /review, /idea, /fix, /reorg)
  • Voice capture pipeline: iPhone Action Button → Cloudflare Worker → inbox/
  • Bootstrap mode active (reasoning blocks on every idea file)
  • learnings.md for operational memory across sessions
  • ~10 in-flight ideas across 4 domains

What's next:

  • Harden the voice pipeline with an offline fallback (see Phase 3 notes)
  • Continue Phase 2 calibration
  • Phase 4: bootstrap exit and polish

Phases

Phase 1: Repo structure and EA brain - DONE

Set up the repo skeleton, CLAUDE.md, slash commands, and all the scaffolding the EA needs to operate.

  • [x] Create private GitHub repo (hal-ea)
  • [x] Create full folder structure (domains/, .claude/commands/, etc.)
  • [x] Write CLAUDE.md (personality, categorisation rules, enrichment rules, file formats)
  • [x] Write manifest.md
  • [x] Write slash commands (/inbox, /review, /idea, /fix, /reorg)
  • [x] Include bootstrap mode section in CLAUDE.md
  • [x] Test enrichment with real ideas

Phase 2: Daily use and calibration - IN PROGRESS

Use the system daily. Calibrate the EA's behaviour by iterating on CLAUDE.md based on what it gets right and wrong.

  • [x] Process real ideas through the system
  • [x] Run /review and /idea sessions
  • [x] Add learnings.md for operational memory
  • [ ] Continue building corpus (target: 15-30 in-flight ideas)
  • [ ] Correct categorisation and enrichment mistakes as they arise
  • [ ] Update "How Hal Thinks" in CLAUDE.md based on observed patterns
  • [ ] Run /reorg once the corpus is large enough to need it
  • [ ] Decide when bootstrap mode has served its purpose

Phase 3: Voice capture pipeline - DONE

Cloudflare Worker deployed at https://hal-ea-worker.halsarj.workers.dev. Same proven pattern as Goldie Food Discovery worker - receives audio POST, transcribes via Whisper Large V3 Turbo, commits markdown to inbox/ via GitHub Contents API. Zero cost (free tier).

Architecture:

iPhone (record + upload) → Cloudflare Worker (transcribe + commit) → GitHub inbox/

Simplified from the original spec - direct POST, no offline queue. Matches the Goldie pattern that's been reliable in production.
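The shape of that Worker, sketched from the description above. The binding and secret names (AI, SHARED_SECRET, GITHUB_*) and the model id come from this page; the function and helper names are illustrative, not the deployed code at hal-ea-worker/:

```javascript
// Sketch of the Worker flow: authenticate, transcribe, commit to inbox/.
// Not the deployed source - a minimal reconstruction from the notes above.

// Build the inbox path for a capture, e.g. inbox/2026-02-15-093000.md
function inboxPath(date) {
  const p = (n) => String(n).padStart(2, "0");
  return (
    `inbox/${date.getUTCFullYear()}-${p(date.getUTCMonth() + 1)}-` +
    `${p(date.getUTCDate())}-${p(date.getUTCHours())}${p(date.getUTCMinutes())}` +
    `${p(date.getUTCSeconds())}.md`
  );
}

async function handle(request, env) {
  // Shared secret between Shortcut and Worker - reject anything else.
  if (request.headers.get("Authorization") !== `Bearer ${env.SHARED_SECRET}`) {
    return new Response("unauthorized", { status: 401 });
  }

  // Workers AI Whisper Large V3 Turbo takes base64-encoded audio.
  // (fromCharCode spread is fine for short clips; chunk for long recordings.)
  const bytes = new Uint8Array(await request.arrayBuffer());
  const audio = btoa(String.fromCharCode(...bytes));
  const { text } = await env.AI.run("@cf/openai/whisper-large-v3-turbo", { audio });

  // Commit the transcript via the GitHub Contents API. The PAT lives only
  // in Worker secrets, never on the phone.
  const path = inboxPath(new Date());
  await fetch(
    `https://api.github.com/repos/${env.GITHUB_OWNER}/${env.GITHUB_REPO}/contents/${path}`,
    {
      method: "PUT",
      headers: {
        Authorization: `Bearer ${env.GITHUB_PAT}`,
        "User-Agent": "hal-ea-worker",
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ message: `voice capture: ${path}`, content: btoa(text) }),
    }
  );
  return Response.json({ ok: true, path });
}

// Wired up in the Worker as: export default { fetch: handle }
```

The timestamped filename doubles as a cheap dedup key: two captures in the same second would collide, but in practice each Action Button press lands in its own second.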

Key design decisions:

  • GitHub PAT stored on Cloudflare, never on the phone
  • Shared secret for authentication between Shortcut and Worker
  • Worker code at /Users/halsarjant/hal-ea-worker/

Shipped tasks:

  • [x] Deploy Worker with nodejs_compat flag and [ai] binding
  • [x] Set Worker secrets (SHARED_SECRET, GITHUB_PAT, GITHUB_OWNER, GITHUB_REPO)
  • [x] Generate classic GitHub PAT (no expiry, Contents read/write)
  • [x] Build iPhone Shortcut (3 actions: Record Audio, POST to worker, Show Notification)
  • [x] Configure Action Button to run Shortcut
  • [x] Test end-to-end: Action Button, speak, stop, file appears in inbox/

Outstanding: offline fallback. The current Shortcut fires and forgets. If Cloudflare is down, the phone has no signal, or the Worker errors out, the recording is silently lost. For a 30-second throwaway thought that's fine. For a 5-minute stream-of-consciousness capture, losing it silently would be genuinely painful - Hal wouldn't find out until he runs /inbox hours later and there's nothing there.

The fix is to add a save-to-iCloud step in the Shortcut before the POST, and only delete the local file on a successful response. This was in the original spec as a queue-flush pattern but was dropped for simplicity in v1. Now that the happy path is proven, this is the clear next hardening step.

Step-by-step Shortcut changes:

Open "EA Voice Note" in the Shortcuts app. The current flow is: Record Audio → Get Contents of URL → Show Notification. Modify it to:

  1. Record Audio (keep as-is)
  2. Add action: Save File. Tap "Add Action", search "Save File". Set destination to: iCloud Drive > Shortcuts > ea-voice-queue. You'll need to create the ea-voice-queue folder first (open Files app, navigate to iCloud Drive > Shortcuts, create new folder "ea-voice-queue"). Set the filename to the "Current Date" magic variable formatted as yyyy-MM-dd-HHmmss with .m4a appended. Make sure "Save File" receives the Recorded Audio variable from step 1.
  3. Get Contents of URL (keep as-is - POST to worker with auth header, body is Recorded Audio)
  4. Add action: If. Tap "Add Action", search "If". Set it to check the output of "Get Contents of URL". Condition: "has any value" (a successful response returns JSON).
  5. Inside the If block - add action: Delete Files. Search "Delete Files". Set it to delete the Saved File variable from step 2. Turn off "Confirm Before Deleting".
  6. Inside the If block - add action: Show Notification. Title: "Logged!"
  7. In the Otherwise block - add action: Show Notification. Title: "Saved locally - will retry". Body: "Voice note saved to ea-voice-queue. Will upload next time."
  8. Delete the old "Show Notification" action at the end (it's now inside the If/Otherwise).

The final flow should read:

Record Audio
Save File (to ea-voice-queue/)
Get Contents of URL (POST to worker)
If [Contents of URL] has any value
  Delete Files (the saved file)
  Show Notification: "Logged!"
Otherwise
  Show Notification: "Saved locally - will retry"
End If

Testing:

  1. Happy path: Run the Shortcut normally. Speak a few words, stop. Should get "Logged!" notification and the file should appear in inbox/ (run /inbox to confirm). Check Files app - ea-voice-queue folder should be empty (file was deleted after success).
  2. Failure path: Turn on Airplane Mode, run the Shortcut. Speak, stop. Should get "Saved locally" notification. Check Files app - ea-voice-queue should contain the .m4a file. Turn off Airplane Mode. The file sits there safely until you deal with it.
  3. Manual recovery: If files accumulate in ea-voice-queue, you can POST them manually from a Mac:
     curl -X POST https://hal-ea-worker.halsarj.workers.dev -H "Authorization: Bearer <secret>" -H "Content-Type: audio/m4a" --data-binary @the-file.m4a

A "Flush EA Queue" shortcut that auto-retries queued files on each new recording would be the next refinement, but manual recovery is fine to start.
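If manual curl gets tedious before the Shortcut version exists, the flush logic is small enough to script. A hypothetical Node helper, assuming the queue folder is synced locally via iCloud Drive (the directory path, script name, and injectable `post` function are illustrative; delete-on-success mirrors the Shortcut design):

```javascript
// flush-ea-queue.mjs (hypothetical): upload each queued .m4a recording,
// deleting the local copy only after the worker accepts it - the same
// save-first / delete-on-success pattern as the Shortcut.
import { readdir, readFile, unlink } from "node:fs/promises";
import { join } from "node:path";

// `post` is injectable (name, bytes) => Promise<boolean> so the upload
// step can be swapped out or stubbed; returns the names flushed.
async function flushQueue(dir, post) {
  const names = (await readdir(dir)).filter((n) => n.endsWith(".m4a"));
  const flushed = [];
  for (const name of names) {
    const body = await readFile(join(dir, name));
    if (await post(name, body)) {
      await unlink(join(dir, name)); // safe: worker confirmed receipt
      flushed.push(name);
    }
  }
  return flushed; // anything not flushed stays queued for next run
}

// Real usage would POST to the worker, mirroring the curl command above:
// const ok = (await fetch("https://hal-ea-worker.halsarj.workers.dev", {
//   method: "POST",
//   headers: { Authorization: `Bearer ${process.env.EA_SECRET}`,
//              "Content-Type": "audio/m4a" },
//   body,
// })).ok;
```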

Phase 4: Bootstrap exit and polish - NOT STARTED

Wrap up the calibration period and harden the system for long-term use.

  • [ ] Remove bootstrap mode from CLAUDE.md
  • [ ] Strip all reasoning blocks from existing idea files
  • [ ] Consolidate learnings.md - promote stable patterns into CLAUDE.md rules
  • [ ] Review and refine categorisation heuristics based on accumulated corrections
  • [ ] Consider typed quick-capture path (text notes from phone without voice)
  • [ ] Consider automation if manual /inbox feels tedious (Mac cron or scheduled job)
  • [ ] Document anything non-obvious for future reference

Done when: the system runs smoothly day-to-day with no friction. Nothing breaks silently. Bootstrap mode is gone. CLAUDE.md reflects what the EA has actually learned.

Behavioural Decisions

Key design decisions made during the spec phase that shape how the EA behaves. Useful reference if revisiting any of these:

  • Original capture: respected, not sacred. EA produces a cohesive document. Cleans up transcription noise. Not verbatim.
  • Multi-idea voice notes: always split. Distinct ideas always become separate files.
  • Enrichment model: one cohesive document. No separate "Original" + "EA Notes". One document with enrichment woven in.
  • Enrichment purpose: connect, challenge, develop. Pattern-spotter, smart colleague, thought partner. Use judgement per idea.
  • Document format: wiki page. YAML frontmatter for metadata, then free-form content.
  • Domains: EA manages freely. Create, merge, rename, retire. Rationale in commit messages.
  • Tags: removed. EA finds connections by reading content. Corpus is small enough that tags add overhead.
  • Statuses: seed/developing/actionable + terminal. Promote freely, never archive without asking.
  • Connections: bidirectional. Always update both files. No separate graph.
  • Review style: surprise me. EA surfaces whatever is most interesting or urgent. No rigid template.
  • Processing trigger: manual /inbox. No automation. Run when ready to engage.
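To make the "wiki page" decision concrete, a hypothetical idea file under this scheme might look like the following. The exact frontmatter fields are whatever CLAUDE.md defines; the field names, domain, and connection slug here are illustrative:

```markdown
---
title: Example idea
domain: ai-tooling
status: developing
connections:
  - ea-system-development
---

Free-form content, with the EA's enrichment woven into one cohesive
document rather than appended as a separate "EA Notes" section.
```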

What Would Make This Actionable

Phases 1 and 3 are done; Phase 2 is in daily use. The system works end-to-end. Two things remain before this idea reaches terminal status:

  1. Offline fallback for voice capture - the one known fragility. Build the save-locally-first pattern into the Shortcut so nothing is ever silently lost. This is a 15-minute Shortcut edit, not a code change.

  2. Phase 4: bootstrap exit - strip reasoning blocks, consolidate learnings into CLAUDE.md, harden for long-term use. This is a judgement call about when the EA has been calibrated enough.

Connections

  • Angle-First Generation for Vibe Design - the EA system is the consumer of whatever capture pipeline gets built. The vibe design skill is a potential enrichment pattern - could the EA use angle-first divergence when exploring an idea in /idea sessions?
  • Goldie's Food Discovery - SHIPPED. Validated the EA capture pipeline pattern in a different domain. Voice → Worker → GitHub → Claude Code → HTML site, all working in production.
  • Read-Only Hosted View of EA Repo - a read layer on top of this system, making the corpus browsable without GitHub or Claude

EA Reasoning (bootstrap mode)

  • Placed in ai-tooling rather than creating a standalone "meta" domain. The EA system is an AI-assisted workflow - it fits naturally alongside the vibe design idea. Both are about how Hal uses AI tools.
  • Status: developing. The system is live and being used daily, with active calibration. It's well past seed (there's a working system, not just an idea) but not yet actionable (offline fallback not yet built, bootstrap mode still active).
  • Phases structured to reflect reality: Phase 1 is done, Phase 2 is in progress, Phase 3 has shipped. This gives Hal a checklist he can track progress against.
  • Preserved the key technical details from the spec (Worker code pattern, Shortcut actions, dedup design) so Hal doesn't need to reference the Desktop files. But kept it concise - the full spec lives on the Desktop if needed.
  • Connected to vibe design - both are about how AI tools work. The connection is genuine but light. Didn't force connections to other ideas.