It’s all about patterns.


Edit: after 103 features, here are the stats 🤯.

“Vibe coding” (prompting an AI and hoping for the best) is fun for prototypes. But it doesn’t scale. The moment your project grows beyond a handful of files, you need something more structured. Not more manual work, but better patterns.

I built a complete support platform in two days: 66 features, from scratch to pilot-ready. Not by chatting with an AI and copy-pasting snippets, but by defining a repeatable system of patterns that lets the AI work autonomously while I stay in control of what gets built.

Here’s the workflow.

Three layers, one pattern

The system has three layers, each feeding into the next:

  1. SYSTEM.md – a single file where I describe features as user stories, grouped by phase
  2. Feature files – structured YAML specs that Claude Code generates from those stories
  3. An autonomous loop – a shell script that feeds features to Claude Code one at a time until the backlog is done

The pattern is always the same: describe intent clearly, decompose into structured specs, execute autonomously with guardrails. Every feature, every phase, every project follows this same shape.
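To make the three layers concrete, here is one plausible repository layout. The article only names SYSTEM.md, loop.sh, and RALPH.md, so the remaining paths are illustrative:

```
project/
├── SYSTEM.md        # layer 1: user stories, grouped by phase
├── features/        # layer 2: generated YAML specs (e.g. FT-065.yaml)
├── RALPH.md         # the prompt the loop feeds to Claude Code
└── loop.sh          # layer 3: drives claude -p until the backlog is done
```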

Custom skills: teaching the AI your standards

Claude Code supports skills: reusable instruction sets that get injected into the AI’s context when invoked. Some are community-provided, but I wrote the two skills central to this workflow myself:

  • feature-breakdown – Takes a user story from SYSTEM.md and decomposes it into phased, dependency-aware YAML feature files. It analyzes the codebase to determine which files are relevant, writes acceptance criteria in Given/When/Then format, and assigns the right skills to each ticket. This is the bridge between human intent and machine-executable specs.
  • fastify-postgres – Encodes the backend conventions for the project: Fastify with Drizzle ORM, TypeScript, feature-based folder structure, authentication patterns, validation with Zod, error handling, and testing strategy. When Claude Code implements a backend feature, this skill ensures it follows the same architecture every time: consistent route structure, proper schema migrations, and matching test patterns.

These skills are what turn generic AI output into code that fits your project. Without them, every feature would require explaining the same conventions over and over. With them, Claude Code already knows how your backend is structured and how your features should be decomposed β€” before it writes a single line.
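For readers unfamiliar with the format: a Claude Code skill is a directory containing a SKILL.md file with frontmatter and instructions. The article doesn’t show the author’s actual skill files, so the sketch below is a hedged illustration of what a slice of fastify-postgres could look like:

```markdown
---
name: fastify-postgres
description: Backend conventions – Fastify, Drizzle ORM, Zod validation
---

When implementing a backend feature:

- Use the feature-based folder structure (routes, schema, service per feature)
- Validate every request body with Zod
- Express schema changes as Drizzle migrations
- Mirror the existing test patterns for each new route
```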

The pattern in action

Let me walk through a real example, adding a feedback feature, to show how these layers connect.

Write the user story

Everything starts with a user story in SYSTEM.md. Here’s a real example from phase 6:

## Phase 6 - Feedback component + Mail Templates

- Feedback
    - As a user, I want to have a possibility to send feedback to the
      platform administrators. I want to have a FAB icon button that   
      opens a popup with textarea. This is fire and forget (from the  
      user perspective). It can be used as many times as the user 
      wants. Send an 'admin' notification when the user submits this.
    - As administrator, I want a list of this feedback, accessible    
      from the dashboard. List the user (link to it), timestamp    
      received and the message. Admins can also delete the message.
- Mail Templates
    - TBD

That’s it. Two paragraphs of plain language describing what I want.

Break it down into feature tickets

I use a custom Claude Code skill called feature-breakdown that analyzes the user story and decomposes it into ordered, dependency-aware tickets. One command:

“Do a feature breakdown for the feedback component from phase 6.”

Claude produces a breakdown table:

| FT-ID | Name | Skills | Depends on |
| --- | --- | --- | --- |
| FT-063 | Feedback schema and service | fastify-postgres | – |
| FT-064 | Feedback API routes | fastify-postgres | FT-063 |
| FT-065 | Feedback FAB and popup UI | react-best-practices, frontend-design | FT-064 |
| FT-066 | Admin feedback list page | react-best-practices, frontend-design | FT-064 |

Each ticket maps to a YAML file with everything Claude Code needs to implement it autonomously: context from the codebase, relevant files, acceptance criteria written as Given/When/Then, constraints, and which skills to apply.

Here’s what FT-065 (the feedback button UI) looks like:

feature:
  id: "FT-065"
  name: "Feedback FAB and popup UI"
  description: "A floating action button (FAB) visible on all 
    authenticated pages that opens a modal with a textarea for 
    submitting feedback. Fire-and-forget from the user's perspective."
  skills:
    - "react-best-practices"
    - "frontend-design"
  relevant_files:
    - "frontend/src/components/FeedbackFab.tsx"
    - "frontend/src/components/ui/Modal.tsx"
    - "frontend/src/App.tsx"
    - "frontend/src/lib/api.ts"
  dependencies: ["FT-064"]
  acceptance_criteria:
    - id: "ac-065a"
      given: "An authenticated user on any page"
      when: "They see the screen"
      then: "A FAB icon button is visible in a fixed position (bottom-
        right)"
    - id: "ac-065b"
      given: "The FAB is visible"
      when: "The user clicks it"
      then: "A modal opens with a textarea and a submit button"
    - id: "ac-065c"
      given: "The feedback modal is open"
      when: "The user types a message and clicks submit"
      then: "The feedback is sent via POST /api/feedback, the modal 
      closes, and a success toast is shown"
    # ... more criteria
  constraints:
    - "Use the existing Modal and Textarea UI components"
    - "FAB should be positioned fixed bottom-right with appropriate 
        z-index"
    - "Keep the component self-contained in FeedbackFab.tsx"

The acceptance criteria become test cases. The constraints keep the implementation consistent with existing patterns. The skills tell Claude Code which coding guidelines to apply. After I confirm the breakdown, the YAML files are generated and committed.
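To show what “acceptance criteria become test cases” can mean mechanically, here is a hedged sketch (my illustration, not the author’s actual tooling) of turning a Given/When/Then entry into a test title:

```typescript
// A criterion as it appears in the feature YAML.
type Criterion = { id: string; given: string; when: string; then: string };

// Compose a readable test title from the three clauses.
function toTestTitle(c: Criterion): string {
  return `${c.id}: given ${c.given}, when ${c.when}, then ${c.then}`;
}

// ac-065b, copied from the feature file above.
const ac065b: Criterion = {
  id: "ac-065b",
  given: "The FAB is visible",
  when: "The user clicks it",
  then: "A modal opens with a textarea and a submit button",
};

console.log(toTestTitle(ac065b));
```

Because the criteria are structured data rather than prose, the generated tests stay traceable back to the spec by id.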

Let the loop run

This is where it gets interesting. I have a shell script (loop.sh) that drives Claude Code in headless mode. It pipes a prompt file into claude -p, which runs Claude Code non-interactively with full tool access.

The prompt file (RALPH.md) contains a simple algorithm:

  1. Read the current phase from the backlog
  2. Find the highest-priority pending feature whose dependencies are all done
  3. Implement it, generate tests from the acceptance criteria
  4. Run the full test suite, linter, and Docker build; all tests must pass, not just the new ones
  5. Mark the feature as done, commit, and move to the next one
  6. When no pending features remain, signal completion and stop

The loop script invokes this prompt repeatedly. Each iteration picks up the next ready feature. If a feature depends on another that isn’t done yet, it’s skipped. When the backlog is empty, the loop exits cleanly.
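The scheduling rule at the heart of this (step 2 of the algorithm) is simple enough to sketch. This is an illustration of the selection logic only, not the author’s loop.sh, using the feedback tickets from the breakdown table:

```typescript
// A backlog entry: id, status, and the ids it depends on.
type Feature = { id: string; status: "pending" | "done"; dependencies: string[] };

// Pick the first pending feature whose dependencies are all done.
// The backlog is assumed to already be ordered by priority.
function nextReadyFeature(backlog: Feature[]): Feature | undefined {
  const done = new Set(
    backlog.filter((f) => f.status === "done").map((f) => f.id)
  );
  return backlog.find(
    (f) => f.status === "pending" && f.dependencies.every((d) => done.has(d))
  );
}

const backlog: Feature[] = [
  { id: "FT-063", status: "done", dependencies: [] },
  { id: "FT-064", status: "pending", dependencies: ["FT-063"] },
  { id: "FT-065", status: "pending", dependencies: ["FT-064"] },
  { id: "FT-066", status: "pending", dependencies: ["FT-064"] },
];

console.log(nextReadyFeature(backlog)?.id); // FT-064; FT-065/066 wait on it
```

Each iteration of the loop re-evaluates this rule, so FT-065 and FT-066 become eligible as soon as FT-064 is marked done.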

28 minutes later

I committed the user story, kicked off the feature breakdown, and started the loop. Here’s the git log:

Git log showing 6 commits over 28 minutes, from “Added feedback story” to “Add admin feedback list page with pagination and delete (FT-066)”

And here’s what was built, a feedback button available on every page:

Feedback modal with textarea, shown over the application interface

And the admin page to manage incoming feedback:

Admin feedback list showing user, message, timestamp, and delete action

Four features, each with full test coverage (the project currently has 957 backend tests and 265 frontend tests), implemented in about 28 minutes. That’s roughly 7 minutes per feature for these straightforward ones. More complex features average around 10 minutes.

Why patterns beat prompts

Vibe coding is ad hoc. You describe something, the AI generates something, you tweak it, repeat. It works until it doesn’t. The moment you need consistency across dozens of features, you need structure.

Patterns give you control. I write the requirements. I review the breakdown before it’s committed. I can inspect every feature file, adjust acceptance criteria, or add constraints. The AI never decides what to build, only how to build what I specified.

Patterns give you quality. Every feature goes through the same pipeline: implement, test, lint, build. If any check fails, the AI fixes it before committing. No shortcuts. The pattern enforces discipline that ad hoc prompting can’t.

Patterns compound. The YAML format, the skills, and the loop script are reusable across projects. Once the workflow is set up, each new project benefits from the same infrastructure. The investment pays for itself many times over.

In two days, I went from an empty repository to a complete support platform: 66 features, ready to pilot. Not by writing code faster or by vibing with an AI, but by defining clear patterns and letting the machine follow them.

Could your project be next? If you have challenges like these to solve, contact me.

Hi, I’m Martijn