What Is Spec-Driven Development? (And Why It Fixes Vibe Coding)

Key Takeaways

  • Spec-driven development (SDD) makes a machine-readable specification the primary artifact; code, tests, and docs are derived from it
  • GitHub released Spec Kit in September 2025; by April 2026 it had over 90,000 stars and supported 20+ coding agents
  • 66% of developers say their top AI frustration is code that’s “almost right, but not quite” — the failure mode specs are designed to catch
  • Birgitta Boeckeler identifies three SDD maturity levels: spec-first, spec-anchored, and spec-as-source
  • Specs have failure modes too: Thoughtworks Radar rated SDD “Assess, not Adopt” in November 2025, and Marmelab documented a 1,300-line spec for a single date-display feature

45% of AI-generated code samples introduced OWASP Top 10 vulnerabilities across 100+ tested models (Cloud Security Alliance, April 2026). 66% of developers say their top AI frustration is output that’s “almost right, but not quite” (Stack Overflow 2025 Developer Survey). The models keep improving. The failure mode hasn’t changed.

The gap is the spec. Without a contract describing what you want, an AI fills the void with plausible-looking code that drifts from intent — three rewrites later you’re still not shipping. Spec-driven development closes that gap by making the specification, not the prompt and not the code, the source of truth your tools and agents build from.

If you’ve been vibe coding and watching your AI rewrite the same dashboard six times in a row, the fix isn’t a better prompt. It’s a spec.

What Is Spec-Driven Development?

Wikipedia’s definition is the cleanest: “Spec-driven development is a software engineering methodology where a formal, machine-readable specification serves as the primary artifact from which implementation, testing, and documentation are derived” (Wikipedia, 2026).

The practitioner framing from GitHub’s Den Delimarsky is more operational: “Instead of coding first and writing docs later, in spec-driven development, you start with a spec. This is a contract for how your code should behave and becomes the source of truth your tools and AI agents use to generate, test, and validate code” (GitHub Blog, September 2, 2025).

Both definitions share one idea: the spec is upstream of everything. Code is a compilation target. Tests are a consistency check. Documentation is a projection. The spec is what you author, review, and version.

The Term Is Older Than It Looks

Spec-driven development didn’t arrive with AI. Wikipedia traces it to 1960s NASA workflows and a formal academic treatment by Ostroff, Makalsky, and Paige at the XP 2004 conference. Formal methods, contract programming, and model-driven engineering all sit in the same lineage. What changed in 2025 is that large language models made the cost of “write the spec first” collapse: the spec itself can be drafted, refined, and turned into code by the same agent, as long as the spec is the artifact everyone argues about.

The Problem Vibe Coding Created

Vibe coding made it possible to describe a feature in plain English and get working code back in seconds. That’s the upside. The downside shows up at scale, and the data from the last twelve months is unambiguous.

A Veracode study cited in the Cloud Security Alliance’s April 4, 2026 research note found 45% of AI-generated code introduced OWASP Top 10 vulnerabilities across 100+ tested LLMs; Java samples failed 72% of the time, and 88% were vulnerable to log injection (CSA Research Note). Apiiro’s enterprise telemetry in the same note showed AI-assisted developers produced commits at 3–4x the rate of peers, while security findings rose roughly tenfold and privilege-escalation paths climbed 322% over six months.

Productivity data is just as stark. A July 2025 METR randomized controlled trial found experienced open-source developers were 19% slower when using AI coding tools, despite predicting a 24% speedup (METR RCT, July 2025). The Stack Overflow 2025 Developer Survey (n = 48,945) found 84% of developers use or plan to use AI, but only 33% trust AI accuracy while 46% actively distrust it.

The “almost right” tax

66% of developers cite “AI solutions that are almost right, but not quite” as their top AI frustration (Stack Overflow 2025). Debugging plausible-looking wrong code is often slower than writing it yourself. Specs exist to prevent “almost right” from ever leaving the planning phase.

The pattern is consistent: AI writes fast, generates superficially plausible code, and leaves you to clean up architectural drift and security gaps. The Stack Overflow team connected the dots explicitly in their 2025 write-up, calling out “spec-driven development” by name as the structural response. We covered the full scaling picture in Vibe Coding Has a Scaling Problem.

How Spec-Driven Development Works

GitHub’s Spec Kit is the clearest reference implementation. It formalizes a four-phase workflow every spec-driven project moves through, and the phases work whether you’re using Claude Code, Cursor, Copilot, Gemini CLI, or any of the 20+ other agents Spec Kit targets.

The Four Phases

  1. Constitution. Project-wide invariants. Your stack, your conventions, the things every feature inherits. This is the document every downstream spec references.
  2. Specify. A feature-level spec: goals, non-goals, constraints, acceptance criteria. This is what the agent reads before it starts planning.
  3. Plan. The agent decomposes the spec into architectural decisions and task breakdowns, then hands the plan back for human review.
  4. Tasks / Implement. Only now does code get written. Each task traces back to an acceptance criterion in the spec, which means divergence is visible rather than silent.

An optional Clarify phase sits between Specify and Plan; the agent asks the questions a human reviewer would ask before committing to an approach. The Spec Kit repo is open source, MIT-licensed, and sat at roughly 90,000 stars with active v0.7.x releases as of April 2026 (github.com/github/spec-kit).
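Concretely, the Specify phase produces a document shaped roughly like this. This is an illustrative sketch, not Spec Kit’s exact template; the feature, values, and criteria are placeholders:

```markdown
# Feature Spec: Password Reset (illustrative sketch)

## Goals
- Users can request a reset link by email and set a new password.

## Non-Goals
- SMS or authenticator-app recovery.

## Constraints
- Reset tokens are single-use and expire after 30 minutes.
- Reuses the existing transactional email service; no new dependencies.

## Acceptance Criteria
- [ ] Requesting a reset for an unknown email returns the same response
      as a known one (no account enumeration).
- [ ] A used or expired token shows an error and offers to resend.
```

Every task the agent produces in the Plan and Tasks phases should trace back to one of those checkboxes; a task that doesn’t is the drift you’re trying to catch.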

The Three Maturity Levels

Birgitta Boeckeler’s October 2025 article on martinfowler.com breaks spec-driven development into three ascending levels of commitment (Boeckeler, October 2025):

  • Spec-first. You write a spec before prompting. The spec informs the AI but isn’t regenerated as code changes. Simplest, lightest, most teams start here.
  • Spec-anchored. Spec and code stay in sync. When code drifts, the spec is updated; when the spec changes, code is regenerated. This is where Spec Kit and Amazon Kiro live.
  • Spec-as-source. The spec is the only thing humans author; code is fully derived output, much as Terraform derives infrastructure from HCL declarations. Tessl Framework is the most public example.

Most teams don’t need level three. Moving from unstructured prompting to spec-first captures most of the reliability gain.

Spec-Driven Development vs. Vibe Coding

Spec-driven development doesn’t replace vibe coding; it constrains it. The two answer different questions at different points in the workflow.

|                  | Vibe Coding                          | Spec-Driven Development                 |
| ---------------- | ------------------------------------ | --------------------------------------- |
| Primary artifact | The prompt                           | The specification                       |
| Source of truth  | Generated code                       | The spec                                |
| Best for         | Exploration, prototypes, UI tweaks   | Anything touching auth, payments, data  |
| Failure mode     | Pattern drift, “almost right” output | Over-specification, review overload     |
| Iteration loop   | Re-prompt until code works           | Revise spec, regenerate code            |
| Review target    | Generated code diff                  | Spec diff first, then code diff         |

The healthy version of the two is layered: vibe-code inside a well-written spec. The spec bounds what the AI is allowed to do; the prompt fills in the how. When the output drifts, you fix the spec, not the prompt.

Context Engineering — The Layer Below Specs

A spec tells the AI what to build. Context engineering tells it what it already knows. The term was coined in parallel by Shopify CEO Tobi Lütke and Andrej Karpathy in late June 2025, within two days of each other.

Context engineering is the delicate art and science of filling the context window with just the right information for the next step. — Andrej Karpathy, June 25, 2025

Lütke’s framing, two days earlier, was more practical: “the art of providing all the context for the task to be plausibly solvable by the LLM” (@tobi on X, June 23, 2025). Simon Willison collected both quotes and argued the term better reflects what production LLM work actually looks like (Willison, June 27, 2025).

The relationship to specs is directional: context engineering feeds the spec, and the spec feeds the task. A spec with no context produces code that’s technically correct but violates every convention in your repo. A context without a spec produces code that fits the repo but does the wrong thing. You need both.

VibeReady’s structured vibe coding framework treats them as two of three layers — context engineering, AI coding guardrails, and spec-driven workflows — that together form a complete harness. Specs without context, or context without enforcement, fail in predictable ways.

The Tools Shipping Spec-Driven Workflows

Three tools define the current state of spec-driven development. Each takes a different position on the Boeckeler maturity ladder.

  • GitHub Spec Kit. Open source, MIT-licensed, roughly 90,000 stars as of April 2026. Supports Claude Code, Copilot, Cursor CLI, Gemini CLI, Codex CLI, Qwen, opencode, and more. Lives at the spec-anchored level: specs and code evolve together through the Constitution/Specify/Plan/Tasks flow.
  • Amazon Kiro. Commercial AWS offering, same spec-anchored tier. Kiro emphasizes tight AWS integration and specification reuse across services.
  • Tessl Framework. Commercial, the most aggressive of the three. Pushes toward spec-as-source: humans author specs, everything else is generated. Thoughtworks’ Technology Radar flagged all three by name when it placed spec-driven development in its “Assess” ring in November 2025 (Thoughtworks Radar Vol. 33).

The tools handle generation. They don’t handle enforcement. That’s where harness engineering picks up — the tests, type checks, and quality gates that verify the generated code actually matches the spec. Specs and harnesses are complements: the spec is what you wanted, the harness proves you got it.
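To make the complement concrete, here’s a minimal sketch of one acceptance criterion turned into an executable harness check. The 30-minute session timeout, the function, and the test are hypothetical, not from any real spec or library:

```python
from datetime import datetime, timedelta

# Hypothetical criterion from the spec:
# "Session tokens expire after 30 minutes of inactivity."
SESSION_TIMEOUT = timedelta(minutes=30)

def is_session_expired(last_activity: datetime, now: datetime) -> bool:
    """True when the gap since last activity exceeds the spec's timeout."""
    return now - last_activity > SESSION_TIMEOUT

def test_session_expires_after_30_minutes():
    now = datetime(2026, 4, 1, 12, 0, 0)
    # 31 minutes of inactivity: the spec says this must be expired.
    assert is_session_expired(now - timedelta(minutes=31), now)
    # 29 minutes: still live.
    assert not is_session_expired(now - timedelta(minutes=29), now)
```

The point is the traceability: when an agent regenerates the session code, this check fails loudly if the regenerated version quietly changed the timeout the spec promised.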

When Spec-Driven Development Backfires

Spec-driven development has a credible set of critics, and ignoring them is how you end up with the exact overhead they warn about.

François Zaninotto at Marmelab documented the most concrete example in November 2025: a single feature to display the current date required 8 files and roughly 1,300 lines of specification using Spec Kit (Marmelab, November 12, 2025). His argument is that SDD is a rebranded waterfall optimized for removing developers from the loop.

SDD is a step in the wrong direction. It tries to solve a faulty challenge: “How do we remove developers from software development?” — François Zaninotto, Marmelab

Thoughtworks’ Technology Radar was more measured but still cautious, placing SDD in “Assess” rather than “Trial” or “Adopt” and warning the workflows are “elaborate and opinionated” and may represent “a bitter lesson — that handcrafting detailed rules for AI ultimately doesn’t scale.” Boeckeler, a qualified supporter, has flagged the same failure modes: review overload for small features and non-deterministic LLM output undermining the promised control.

The practical heuristic: spec-driven development is overhead for anything too small to justify a feature spec. Use it where the cost of architectural drift is high (auth, billing, multi-tenant data, API contracts) and skip it where the cost of being wrong is a page refresh.

How to Start Without Rewriting Everything

You don’t need Spec Kit, a Constitution document, or a four-phase workflow to practice spec-driven development. You need a one-page spec and the discipline to hand it to the AI before you prompt.

  1. Write a one-page PRD before prompting. Goals, non-goals, constraints, acceptance criteria. Fifteen minutes. This single step is the biggest reliability gain most teams will see, and it costs nothing.
  2. Use AGENTS.md as your Constitution. Stack choices, conventions, architectural rules, forbidden patterns. Next.js 16.2 now ships AGENTS.md in create-next-app by default; we walk through a full AGENTS.md-first workflow in our step-by-step tutorial.
  3. Treat the spec as the diff target. When the AI produces something wrong, revise the spec first, then regenerate the code. Don’t re-prompt your way around a spec gap — that’s the vibe-coding failure mode.
  4. Pair the spec with a harness. Specs without automated tests and type checks drift silently. The spec says what you want; the harness proves the code matches. See harness engineering for the enforcement layer.
  5. Graduate to Spec Kit when the overhead earns itself. Once you have a handful of features that share a Constitution, formalizing with Spec Kit or Kiro starts paying back. Before that, a directory of markdown specs works fine.

Specs work when the context and the guardrails are already in place. VibeReady wires up all three layers — context engineering, AI coding guardrails, and spec-driven workflows — so your first prompt runs inside a structured harness instead of a vibe. See editions from $149 →

The point of spec-driven development isn’t specs. It’s getting AI to build the thing you actually wanted, the first time, at the architectural level your future self will have to maintain. A one-page PRD beats a four-hour debugging session. Every time.

Frequently Asked Questions

Is spec-driven development the same as TDD or BDD?

No. Test-driven and behavior-driven development start with executable tests. Spec-driven development starts with a machine-readable specification that generates code, tests, and documentation together. The spec is the source of truth; tests are one of its outputs. Wikipedia traces SDD back to a 2004 XP conference paper, predating most agentic coding tools.

Do I need GitHub Spec Kit to practice spec-driven development?

No. Spec Kit formalizes a four-phase workflow (Constitution, Specify, Plan, Tasks) and ships templates for 20+ coding agents, but any structured PRD works. An AGENTS.md file, a one-page product spec, or a CLAUDE.md with acceptance criteria all qualify. Spec Kit is the reference implementation, not the methodology itself.

Won't writing specs slow down vibe coding?

For a landing page or a date component, yes. Marmelab documented a case where Spec Kit generated 1,300 lines of spec for a single feature. For anything that touches auth, payments, or data, the tradeoff flips: specs catch architectural drift before the AI writes 4,000 lines in the wrong direction. Use specs where the cost of being wrong is high.

How is this different from structured vibe coding?

Spec-driven development is the upstream methodology — define the contract before prompting. Structured vibe coding is the full implementation: context engineering, AI coding guardrails, and spec-driven workflows together. VibeReady ships structured vibe coding out of the box. Learn more at https://vibeready.sh/structured-vibe-coding

What's the minimum viable spec for an AI coding task?

One page. Goals (what the feature does), non-goals (what it explicitly doesn't do), constraints (stack, conventions, API contracts), and acceptance criteria (how you'll know it's done). Hand that to Claude Code or Cursor before you ask for a single line of code. It's the highest-leverage prompt you can write.

Have more questions? See our full FAQ →