Angle-First Generation for Vibe Design

An improvement to the vibe design workflow that front-loads creative divergence before any design work begins. Instead of jumping straight into generating full variants (which tends to cluster around obvious interpretations of a brief), the tool starts by producing a high volume of single-line angle sentences - 50 or more - that each describe the idea from a genuinely different perspective. A verification agent then gates quality before anything gets built out.

The Workflow

  1. Angle generation: Claude produces ~50 single-line sentences, each describing the design idea from a distinct angle. The volume is the point - the first 10 will be predictable, but sentences 30-50 are where the unexpected interpretations live.

  2. Verification agent: A separate agent reviews the full set against three criteria:

  • Diversity: Are the angles genuinely different from each other, not surface-level rephrases?
  • Creativity: Do they push beyond the obvious? Are there angles that surprise?
  • Brief compliance: Do they all still serve the original brief, or have some drifted into irrelevance?

The verifier can reject, flag duplicates, or request regeneration of weak angles. This is the quality gate that makes the rest of the workflow trustworthy.

  3. Variant creation: Pick 3-5 of the strongest angles and flesh each into a full design variant. Each variant inherits its angle's perspective, so they're structurally different from each other rather than being slight variations on the same theme.

  4. Iteration (two modes):

  • Remix: Go deeper on a variant with directed feedback. "I like this direction but make it more minimal" or "push the typography harder." The variant evolves.
  • Shuffle: Bin a variant entirely - it's not working. Replace it with a fresh variant generated from one of the unused angles in the pool. You're not starting from scratch; you're spending from a pre-verified creative budget.
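The shuffle mechanic is easiest to see as a pool with an active set and a bench. A minimal Python sketch of that data flow - all names here are hypothetical, and the placeholder strings stand in for angles a model would actually generate:

```python
import random

class AnglePool:
    """Holds verified angles; tracks which are active as variants and
    which remain on the bench for shuffling. Illustrative sketch only."""

    def __init__(self, verified_angles):
        self.bench = list(verified_angles)  # verified but unexplored angles
        self.active = []                    # angles currently built into variants

    def promote(self, n=3):
        """Pick the first n angles to flesh out as variants (step 3)."""
        chosen, self.bench = self.bench[:n], self.bench[n:]
        self.active.extend(chosen)
        return chosen

    def shuffle(self, dead_angle):
        """Bin a failed variant's angle and draw a fresh one from the bench,
        so the replacement comes from the pre-verified pool, not from scratch."""
        self.active.remove(dead_angle)
        fresh = self.bench.pop(random.randrange(len(self.bench)))
        self.active.append(fresh)
        return fresh

pool = AnglePool([f"angle {i}" for i in range(50)])
first = pool.promote(3)               # → ['angle 0', 'angle 1', 'angle 2']
replacement = pool.shuffle(first[0])  # bins one, draws from the 47 on the bench
```

The point of the structure: shuffle never calls back to generation, it only spends from inventory, which is what makes it cheap.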

Why This Is Interesting

The core insight is that the quality of generative output is downstream of the quality of the creative brief, and most briefs are too narrow. By forcing 50 angles before any design work, you're essentially expanding the brief into a wide possibility space, then verifying that space is genuinely wide before narrowing back down.

This is divergent-convergent thinking, but the AI-native version. Traditionally this happens in a brainstorm, on a whiteboard, across a team over hours. Here it's compressed into the first step of an automated pipeline, with a verification layer that a human brainstorm doesn't have.

The unused angles are valuable too. When you shuffle a variant, you're drawing from a curated pool of verified-good angles that just haven't been explored yet. That's a much better starting point than "generate something different" with no direction.

What Needs Sharpening

What the output types are. This is a general-purpose creative workflow - it could apply to websites, UI, copy, architecture, anything where you want divergent options. The implementation as a Claude Code skill means the "variant" format adapts to whatever project context the skill is invoked in. The question is whether the skill needs mode-specific prompting (e.g. "you're generating website angles" vs "you're generating UI layout angles") or whether a good brief is sufficient.

The verification agent is the hardest piece. "Different" is measurable (semantic similarity, topic clustering). "Creative" is subjective and culture-dependent. Risk: the verifier becomes a conformity filter that kills the most interesting angles because they're too far from the centre. It might need a mode that explicitly protects outlier angles.
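The measurable half can be sketched concretely: pairwise similarity flags near-duplicates, while angles far from everything else are marked as protected outliers rather than rejected. This toy version uses word-overlap (Jaccard) similarity as a stand-in for real semantic embeddings; the thresholds and function names are assumptions:

```python
def word_set(angle: str) -> set:
    return set(angle.lower().split())

def jaccard(a: str, b: str) -> float:
    """Word-overlap similarity: a crude proxy for semantic similarity."""
    wa, wb = word_set(a), word_set(b)
    return len(wa & wb) / len(wa | wb)

def diversity_report(angles, dup_threshold=0.6, outlier_threshold=0.1):
    """Flag near-duplicate pairs; protect angles dissimilar to everything."""
    duplicates = []
    max_sim = {a: 0.0 for a in angles}
    for i, a in enumerate(angles):
        for b in angles[i + 1:]:
            s = jaccard(a, b)
            max_sim[a] = max(max_sim[a], s)
            max_sim[b] = max(max_sim[b], s)
            if s >= dup_threshold:
                duplicates.append((a, b, s))
    # Outliers are candidates for protection, not rejection.
    outliers = [a for a, s in max_sim.items() if s < outlier_threshold]
    return duplicates, outliers

angles = [
    "a landing page as a love letter to coffee",
    "a landing page as a love letter to espresso",
    "the site is a menu and nothing else",
]
dups, outliers = diversity_report(angles)
```

Here the first two angles would be flagged as a near-duplicate pair, and the third surfaces as an outlier to protect - exactly the inversion needed to stop the verifier acting as a conformity filter.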

How the verification agent communicates. Does it return a pass/fail on the full set? A scored ranking? Specific feedback on which angles are too similar? The UX of the verification step matters - if it's a black box that just says "regenerate 12 of these," Hal won't trust it. If it shows its reasoning (these 4 angles are essentially the same idea, these 3 don't address the brief), it becomes a collaborator rather than a gate.
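One hypothetical shape for that feedback: a per-angle verdict with reasoning attached, so the summary groups angles by why they were flagged rather than returning a bare count. All class and field names below are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class AngleVerdict:
    angle: str
    verdict: str            # e.g. "pass", "duplicate", "off-brief"
    reasoning: str          # human-readable: why, not just what
    near_duplicates: list = field(default_factory=list)

@dataclass
class VerificationReport:
    verdicts: list

    def summary(self) -> dict:
        """Group angles by verdict, keeping the reasoning attached so the
        report reads as a collaborator's notes rather than a black box."""
        groups: dict = {}
        for v in self.verdicts:
            groups.setdefault(v.verdict, []).append((v.angle, v.reasoning))
        return groups

report = VerificationReport([
    AngleVerdict("coffee as morning ritual", "pass", "distinct framing"),
    AngleVerdict("coffee as daily habit", "duplicate",
                 "same idea as 'coffee as morning ritual'",
                 near_duplicates=["coffee as morning ritual"]),
    AngleVerdict("a site about tea ceremonies", "off-brief",
                 "drifts away from the coffee shop brief"),
])
```

The design choice worth noting: the reasoning field is per-angle, which is what enables "these 4 angles are essentially the same idea" instead of "regenerate 12 of these."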

The "50" number. Is this fixed or adaptive? A simple brief ("design a landing page for a coffee shop") might genuinely have 50 angles. A highly constrained brief ("redesign this specific form to reduce drop-off") might struggle past 20 meaningful ones. The number might need to flex, or the verification agent might need to flag when the brief is too narrow for the volume requested.

Angle vs. variant ratio. 50 angles down to 3-5 variants means ~90% of angles are held in reserve. That's a lot of creative inventory sitting unused. Is there a way to surface the "bench" - show Hal the full pool so he can manually pull interesting angles into the active set, not just rely on shuffle to surface them randomly?

Connections

  • Speculative Web Builds for SMBs - angle-first generation could be the engine that produces website variants for SMB cold outreach. Instead of one speculative build per business, generate 50 angles on what the site could be, verify diversity, flesh out the best 3, and send the strongest.
  • EA System Development - could the EA use angle-first divergence when exploring ideas in /idea sessions? Generate multiple angles on an idea before converging on development direction.
EA Reasoning (bootstrap mode)

  • Initially categorised as ventures, recategorised to ai-tooling after Hal's correction. This is a workflow improvement for Claude Code (likely a skill), not a venture. It applies across many/most projects rather than being a standalone product. Created the ai-tooling domain for ideas about improving AI-assisted workflows and Claude Code capabilities.
  • Connected to speculative web builds because that's a concrete use case where this workflow pattern would apply directly. The vibe design tool generates variants; speculative web builds need variants of websites. Natural fit.
  • Considered connecting to automated report pipelines (both use multi-agent patterns with quality gates), but the connection is superficial - the underlying problems are different enough that linking them would be forcing it.
  • Status: seed. Single capture, no validation yet. The verification agent concept is the novel piece that needs the most thought. The diverge-then-converge pattern is well-established; the question is whether AI verification of "creativity" actually works or just produces mediocrity.