There are many different approaches to software development. In the earliest days, there was the “Test Last” or “Code First” approach. A business analyst would work with the client to develop requirements. Once all the requirements were gathered, they would be passed to the developers, who would write the application from beginning to end. The “finished” product would be handed back to the client, or maybe to a QA team for testing (if there was a big enough budget). That testing was usually manual and slow, and it depended on whoever was selected for testing understanding the product and what it was supposed to do, which they often really didn’t. Sometimes, the process worked. Often, it didn’t. Features and edge cases would be missed. Critical business logic would be incorrect. The end product was often nothing like the client envisioned, or wanted. Sometimes, if there was a big enough budget available, there would be iterations to fix some of the more egregious issues. But often the client would just have to make do with the result.
Eventually more automated approaches to testing emerged as developers began to incorporate various types of testing into their development process. These tests helped developers ensure that their code was more accurate and worked as expected. But they also had two significant shortcomings. First, they were highly technical in nature. Only the developers could understand what the tests were accomplishing. And second, they were still part of the “test last” philosophy and were often afterthoughts. As such, the tests were often written not to ensure the code met the business requirements, but rather that the code did what the code had already been written to do, whether that logic was really right or not.
Test Driven Development (TDD)
In the 1990s, that philosophy started to change with the introduction of Test Driven Development (TDD). The idea behind TDD was that you wrote the technical tests first. The tests would be written to cover the business requirements that had been provided (probably…maybe?). “If my input is X, the output is Y”. The tests would fail because no functional code had yet been written. Once the tests were in place, then the developer would write the functional code, and keep altering and improving it until all the tests passed.
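The red-green cycle is easy to see in miniature. Here is an illustrative sketch in Python with pytest-style test functions (the discount requirement is invented purely for demonstration): the tests are written first, fail until the functional code exists, and pass once it does.

```python
# Step 1 (red): write the tests first, straight from the requirement.
# The requirement here is invented for illustration: orders over 100
# receive a 10% discount.
def test_discount_applied_over_threshold():
    assert apply_discount(150.0) == 135.0

def test_no_discount_at_or_below_threshold():
    assert apply_discount(100.0) == 100.0

# Step 2: run the tests and watch them fail (apply_discount doesn't exist yet).
# Step 3 (green): write just enough functional code to make them pass,
# then keep refactoring while the tests stay green.
def apply_discount(total: float) -> float:
    """Apply a 10% discount to orders over 100."""
    return round(total * 0.9, 2) if total > 100.0 else total
```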
This approach improved the quality of the final product, but it still had a big flaw: it remained highly technical, something only the developers really understood. The client had to depend on the developer having created the tests correctly and having covered all of the test cases. For the non-developer, it was still difficult, if not impossible, to understand the tests, what they were doing, and whether or not they covered the requirements.
Behavior Driven Development (BDD)
In answer to the problem of TDD being highly technical in nature, along came another development philosophy: Behavior Driven Development (BDD). In BDD, the features to be developed would be laid out in a human-readable format that could also be implemented as code tests which, if not entirely readable for the non-developer, were a substantial step forward. Each feature requirement was written in the following format, a syntax called “Gherkin”:
Given X
When Y
Then Z
For example, if you had a requirement that when an existing user logs in, they are directed to the user dashboard, you would write the feature thus:
Given there is an existing user
When the user successfully logs in with their username and password on the login page
Then the user is redirected to the base user dashboard page
You could also stack multiple requirement pieces into a single statement:
Given there is an existing user
Given the user is of type paid user
When the user successfully logs in with their username and password on the login page
Then the user is redirected to the base user dashboard page
Then the system updates the user’s last successful login datetime
Then the system displays the user’s remaining number of credits
And so on. There is a popular library called Cucumber, with a .NET version called Reqnroll (formerly SpecFlow), that lets you write testing files using the Gherkin syntax that can then be run in an automated manner, the same as unit tests.
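Under the hood, BDD runners bind each Gherkin phrase to a step function. The sketch below is not Cucumber's or Reqnroll's actual API, just a toy Python illustration of that core idea: register a function per phrase, then execute the matching function for each line of the scenario.

```python
# Toy sketch of how a BDD runner binds Gherkin lines to code.
# NOT a real Cucumber/Reqnroll API -- just the core idea: each step
# phrase is registered against a function, and the runner executes
# the matching function for every line of the scenario.
steps = {}

def step(phrase):
    def register(fn):
        steps[phrase] = fn
        return fn
    return register

@step("there is an existing user")
def given_existing_user(ctx):
    ctx["user"] = {"name": "sample", "logged_in": False}

@step("the user successfully logs in with their username and password on the login page")
def when_user_logs_in(ctx):
    ctx["user"]["logged_in"] = True

@step("the user is redirected to the base user dashboard page")
def then_redirected(ctx):
    assert ctx["user"]["logged_in"], "user must be logged in to reach the dashboard"
    ctx["page"] = "dashboard"

def run_scenario(lines):
    """Strip the Gherkin keyword (Given/When/Then) from each line and
    run the step function registered for the remaining phrase."""
    ctx = {}
    for line in lines:
        _, _, phrase = line.partition(" ")
        steps[phrase](ctx)
    return ctx

scenario = [
    "Given there is an existing user",
    "When the user successfully logs in with their username and password on the login page",
    "Then the user is redirected to the base user dashboard page",
]
```

Running `run_scenario(scenario)` executes each bound step in order and ends with the context recording that the user landed on the dashboard.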
It was a significant shift from TDD in that it allowed the client to define the terms of what a successful application would be in a way that the developers could fairly easily implement. The client would write the requirements in the Gherkin syntax, and the developer could then easily turn that into test code that would run automatically alongside the unit tests they were writing. It was an effective compromise between the technical and non-technical sides of the project. But the downside was that it required the client to write the requirements in that very specific syntax. While it was more “natural” than unit tests, it was still somewhat technical. It resulted in a lot of pushback from the non-technical side and has never been widely adopted.
Specification Driven Development (SDD)
In the last few years we have seen a return to the original design philosophy of “I’m just gonna list out all the requirements in plain written text and you go write your software from that.” These days, that idea has been formalized as Specification Driven Development (SDD). Whatever the terminology, it’s really just a return to the way software was designed from the beginning. In a lot of ways, the typical modern SDD approach has evolved from BDD ideas. For instance, you will often see the reuse of the “Given => When => Then” terminology from Gherkin as part of the feature specifications.
As formalized under the modern SDD approach, the idea is that a set of detailed specifications are written and fleshed out before a single line of code is written. The planning and design phases are worked through before the implementation phase begins. But that doesn’t mean that all the requirements have to be completed before any code is written. SDD is fully intended to be implemented in an iterative approach. You start by laying out the minimum viable product (MVP), and the requirements for that. You iterate through the concepts and planning for that, then implement. You keep cycling through each set of steps as you continue to enhance and expand the product. SDD documents are meant to be living, breathing, evolving documentation.
These documents are typically written in Markdown format, and there’s a reason for that. SDD these days relies heavily on the emergence of something else that is key to making the whole process far more successful than it used to be: AI. By integrating AI into the workflow for creating specifications, the documents are far more detailed and clear, and created far more quickly, than requirements documents have traditionally been. And we use Markdown because it is the document format that most AI agents work with best, and it’s easy for anyone to understand and create.
There are various toolkits out there to assist with SDD, but they all rely on using agentic AI tools to help the client lay out and write the design specifications. We’ll circle back to the tools in a little bit. But first, let’s look at the typical SDD design cycle and the document artifacts that are involved. Before we get into the specific documents, it’s important to note that the documents should be kept in the same place as the code of the application itself. The standard these days for most of us is storing code in a git repository. We include the SDD documents in that same repo instead of somewhere else like Jira or DevOps or SharePoint. Everything related to the project is kept together. That also has the advantage of giving everyone easy access to the historical versions of everything, a natural part of the git repo structure.
Agent
You will often want to start with an agents.md document. This document is used by pretty much every AI agent tool to guide what it is allowed to do and how it should carry out certain tasks when doing its job. These are rules specific to the AI that aren’t relevant to any human contributors. Putting the instructions the agent needs in a separate document keeps them from cluttering up the documents the rest of us need to read or work with. This document can be as simple or as complex as you need it to be to effectively guide what you want your AI agent to do as part of its work process.
Example (from the agents.md GitHub repo):
# Sample AGENTS.md file
## Dev environment tips
- Use `pnpm dlx turbo run where <project_name>` to jump to a package instead of scanning with `ls`.
- Run `pnpm install --filter <project_name>` to add the package to your workspace so Vite, ESLint, and TypeScript can see it.
- Use `pnpm create vite@latest <project_name> -- --template react-ts` to spin up a new React + Vite package with TypeScript checks ready.
- Check the name field inside each package's package.json to confirm the right name—skip the top-level one.
## Testing instructions
- Find the CI plan in the .github/workflows folder.
- Run `pnpm turbo run test --filter <project_name>` to run every check defined for that package.
- From the package root you can just call `pnpm test`. The commit should pass all tests before you merge.
- To focus on one step, add the Vitest pattern: `pnpm vitest run -t "<test name>"`.
- Fix any test or type errors until the whole suite is green.
- After moving files or changing imports, run `pnpm lint --filter <project_name>` to be sure ESLint and TypeScript rules still pass.
- Add or update tests for the code you change, even if nobody asked.
## PR instructions
- Title format: [<project_name>] <Title>
- Always run `pnpm lint` and `pnpm test` before committing.
Constitution
After setting the bounds for the AI, a common next step is to write a constitution. These are the rules by which the whole process of creating everything else will be guided. A typical constitution details things like standards for style and formatting, the testing approach that will be taken, standards for security, the architectural pattern that will be followed, the typical workflow, the governance process for changes, and so forth. It may or may not also include details about the tech stack, frameworks to be used, and other technical specifics, or those might be reserved for the technical plan(s). There will be only one constitution document for the project. It is a guideline not just for the user, but also for the AI tools assisting with the SDD process.
Example:
# SpecKit Constitution
## Core Principles
### I. Specification-Driven Development
Every feature MUST begin with a written specification before any implementation work starts. Specifications MUST include:
- Clear user scenarios with prioritized user stories (P1, P2, P3...)
- Functional requirements that are independently testable
- Success criteria that are measurable and technology-agnostic
- Edge cases and error scenarios
**Rationale**: Specifications prevent scope creep, ensure shared understanding between stakeholders, enable better planning, and create verifiable acceptance criteria before code is written. Prioritized user stories allow incremental delivery of value.
### II. Independent User Stories
Each user story MUST be independently testable and deliverable. User stories MUST be prioritized (P1, P2, P3...) where P1 represents the minimum viable product (MVP). Each story MUST:
- Be implementable without dependencies on other stories
- Deliver standalone value that can be demonstrated to users
- Have clear acceptance criteria using Given-When-Then format
- Include reasoning for its priority level
**Rationale**: Independent stories enable parallel development, incremental delivery, and allow teams to pivot based on feedback without throwing away work. Prioritization ensures the most valuable features are delivered first.
### III. Test-First Development (NON-NEGOTIABLE)
Tests MUST be written before implementation. The strict workflow is:
1. Write tests based on specification acceptance criteria
2. Verify tests are approved and match requirements
3. Confirm tests fail (red)
4. Implement feature (green)
5. Refactor while maintaining green tests
Contract tests and integration tests are required for:
- New library or service interfaces
- Changes to existing contracts or APIs
- Inter-service communication points
- Shared data schemas
**Rationale**: Test-first development ensures requirements are understood before coding, creates a safety net for refactoring, documents expected behavior, and prevents regression bugs. The red-green-refactor cycle is fundamental to maintaining code quality.
### IV. Constitution Compliance
All implementation plans MUST pass Constitution Check gates before moving to implementation. Constitution violations MUST be:
- Explicitly identified during the planning phase
- Justified with clear business or technical rationale
- Documented in the Complexity Tracking section of the plan
- Approved by stakeholders before proceeding
**Rationale**: The constitution represents hard-won lessons and non-negotiable standards. Requiring justification for violations ensures exceptions are conscious decisions, not accidents, and prevents erosion of standards over time.
### V. Simplicity & Clarity
Favor simple, clear solutions over clever or complex ones. Code and documentation MUST be:
- Written for humans first, computers second
- Self-documenting with meaningful names and structure
- Free from premature optimization or unnecessary abstraction
- Accompanied by clear rationale when complexity is unavoidable
Apply YAGNI (You Aren't Gonna Need It): Only build what is specified. Additional features require new specifications.
**Rationale**: Simple code is easier to understand, maintain, test, and debug. Complexity is expensive and should only be introduced when justified by real requirements, not anticipated future needs.
## Development Standards
### Documentation Requirements
Every feature MUST maintain documentation in the `/specs/[###-feature-name]/` directory:
- `spec.md` - Feature specification with user stories and requirements
- `plan.md` - Technical implementation plan with architecture decisions
- `tasks.md` - Detailed task breakdown organized by user story
- `research.md` - Technology research and decision rationale (if applicable)
- `data-model.md` - Entity definitions and relationships (if applicable)
- `contracts/` - API contracts using OpenAPI or GraphQL schema (if applicable)
### Naming Conventions
- Feature branches: `[###-short-name]` where ### is sequential number
- Specs directories: `/specs/[###-short-name]/`
- Task IDs: Sequential (T001, T002, ...) with [US#] tag indicating user story
- Priority markers: P1 (MVP), P2 (High), P3 (Medium), P4 (Low)
### Clarification Protocol
When requirements are unclear during specification, use `[NEEDS CLARIFICATION: specific question]` markers. Maximum 3 clarifications per specification. Prioritize by impact: scope > security/privacy > user experience > technical details. Make informed decisions for everything else, documenting assumptions.
## Workflow Requirements
### Feature Development Workflow
1. **Specify**: Create feature specification using `/speckit.specify` command
2. **Plan**: Generate technical implementation plan using `/speckit.plan` command
3. **Constitution Check**: Verify compliance with all principles (automated in plan phase)
4. **Tasks**: Break plan into actionable tasks using `/speckit.tasks` command
5. **Implement**: Execute tasks following test-first approach
6. **Review**: Verify all acceptance criteria met and constitution compliance maintained
### Phase Gates
- **Specification → Planning**: Specification MUST have clear user scenarios and requirements
- **Planning → Tasks**: Plan MUST pass Constitution Check or have justified violations
- **Tasks → Implementation**: All foundational/blocking tasks MUST be identified
- **Implementation → Review**: All tests MUST pass and acceptance criteria MUST be met
### Branch and Version Management
- Create feature branches from main using calculated sequential numbers
- Check remote branches, local branches, and specs directories for existing numbers
- Use next available number for new features with same short-name
- Feature branch merges require passing tests and constitution compliance verification
## Governance
### Constitutional Authority
This constitution supersedes all other development practices and style guides. When conflicts arise between this constitution and other documentation, the constitution takes precedence.
### Amendment Process
Constitutional amendments require:
1. Documented proposal with rationale and impact analysis
2. Review of all templates and dependent artifacts for consistency
3. Version increment following semantic versioning rules
4. Sync Impact Report documenting all changes and affected files
5. Stakeholder approval before ratification
### Versioning Policy
Constitution versions follow semantic versioning (MAJOR.MINOR.PATCH):
- **MAJOR**: Backward incompatible changes (principle removals, redefinitions)
- **MINOR**: New principles added or material expansions to existing principles
- **PATCH**: Clarifications, wording improvements, typo fixes, non-semantic refinements
### Compliance Verification
All pull requests and code reviews MUST verify:
- Feature has complete specification before implementation
- Tests written before code (red-green-refactor evidence)
- Constitution Check passed or violations justified
- User stories are independently testable
- Documentation is complete and current
Complexity that violates constitution principles MUST be explicitly justified in the plan's Complexity Tracking section.
---
**Version**: 1.0.0 | **Ratified**: 2025-12-16 | **Last Amended**: 2025-12-16
For all other documents, there will be one or more of each for each block of work.
Specifications
For each block of work, the core document will be the specification. These are the user requirements for a particular feature. The document will consist of an overview of the feature, then one or more user stories. Each user story will have a number, a description, a priority, an explanation of why the priority is set as it is, and then details for acceptance criteria.
Example:
# Feature Specification: MVP
**Feature Branch**: '001-MVP'
**Created**: January 31, 2026
**Status**: Draft
**Input**: Description: We are creating a web application that will be used by staff to request personal days. It will keep track of PTO available, requests made, hours used, and management approval status
## User Scenarios
### User Story 1 - User Login (Priority: P1)
As a user, I need to be able to log in to the application using my network credentials so that I can enter my PTO requests and see the status.
**Why this priority**: User tracking is core to all the other functionality of the project.
**Independent Test**: Can be fully tested by logging in using a sample user's Entra ID credentials.
**Acceptance Scenarios**:
1. **Given** I am a current staff member with assigned network credentials, **When** I log in to the application using my Entra ID credentials, **Then** the application recognizes my user account and successfully retrieves all related PTO data for my user id
### User Story 2 ...
Checklists
As the requirements of the specification are worked out, there may be value in creating one or more checklist documents. Two of the more common checklists are a plan checklist and a requirements checklist. The plan checklist is a list of technology and coding specific details that define the tech stack, the language requirements, libraries to be used, coding standards to be enforced, and so forth.
Example:
# Implementation Plan Quality Checklist
**Purpose**: Validate implementation plan completeness before proceeding to task breakdown
**Created**: January 16, 2026
**Feature**: [plan.md](plan.md)
## Constitution Compliance
### I. .NET Standards (NON-NEGOTIABLE)
- [x] Targeting .NET 10.0 (`net10.0`)
- [x] Using Aspire 13 for orchestration
- [x] Latest C# features enabled
- [x] Nullable reference types enabled
- [x] Implicit usings enabled
- [x] Warnings treated as errors
### II. Central Package Management (NON-NEGOTIABLE)
- [x] All packages managed via `Directory.Packages.props`
- [x] No version numbers in individual `.csproj` files
- [x] New packages documented: FastEndpoints, FluentValidation, Stripe.net, SendGrid
### III. Code Style & Formatting
- [x] File-scoped namespaces required
- [x] Private fields use `_camelCase`
- [x] Async methods suffixed with `Async`
- [x] Allman brace style (opening braces on new line)
- [x] Tabs for indentation
- [x] Primary constructors with readonly field assignment
...
A requirements checklist is typically a list of details that ensure the specification document is fully completed and worked through.
Example:
# Specification Quality Checklist: 001-MVP
**Purpose**: Validate specification completeness and quality before proceeding to planning
**Created**: January 16, 2026
**Feature**: [spec.md](spec.md)
## Content Quality
- [x] No implementation details (languages, frameworks, APIs)
- [x] Focused on user value and business needs
- [x] Written for non-technical stakeholders
- [x] All mandatory sections completed
## Requirement Completeness
- [x] No [NEEDS CLARIFICATION] markers remain
- [x] Requirements are testable and unambiguous
- [x] Success criteria are measurable
- [x] Success criteria are technology-agnostic (no implementation details)
- [x] All acceptance scenarios are defined
- [x] Edge cases are identified
- [x] Scope is clearly bounded
- [x] Dependencies and assumptions identified
## Feature Readiness
- [x] All functional requirements have clear acceptance criteria
- [x] User scenarios cover primary flows
- [x] Feature meets measurable outcomes defined in Success Criteria
- [x] No implementation details leak into specification
## Notes
- Specification is complete and ready for planning phase
- Made reasonable assumptions documented in Assumptions section:
Other checklists may or may not be created depending on needs. As you can see, each item in a checklist has a check box [ ] at the start of the line that either the user or the AI can mark off as each item is resolved. All items in all of the checklists should be completed before the final task document is created for that feature.
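Because checklist items use the standard Markdown task-box syntax, completion can also be verified mechanically. Here is a small hypothetical Python helper (not part of any SDD toolkit) that lists the items still unchecked:

```python
import re

def unchecked_items(markdown: str) -> list[str]:
    """Return the text of every unchecked '- [ ]' item in a Markdown checklist."""
    return re.findall(r"^\s*-\s\[\s\]\s+(.*)$", markdown, flags=re.MULTILINE)

# A fragment of a requirements checklist with one item still open.
checklist = """\
## Requirement Completeness
- [x] Requirements are testable and unambiguous
- [ ] Edge cases are identified
- [x] Scope is clearly bounded
"""
```

Calling `unchecked_items(checklist)` returns `["Edge cases are identified"]`, flagging that this specification isn't ready for the task breakdown yet.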
Research
A research document is essentially a decision record that details why certain choices were made. These may be technical choices, like why SQL Server with Entity Framework was chosen over PostgreSQL with Dapper. Or they may be vendor choices, like why SendGrid was chosen over Mailgun for bulk email. Basically, any major decision regarding the architecture and design of the application for the feature should be documented in the research document so that a record is available and easily findable.
Example:
# Research
**Feature Branch**: 001-MVP
**Created**: February 2, 2026
## Technical Decisions
**Decision**: Use Azure SQL Server with EntityFramework Core for data access and storage
**Rationale**:
- Already using Azure SQL Server with other projects, so knowledge base is in place
- Built-in change tracking for data entities
- Excellent performance
**Alternatives Considered**:
- PostgreSQL: Rejected for lack of experience with current development team
- Dapper: Rejected - No performance boost, no built in change tracking, requires more development
- MongoDB: Rejected - Poor security, less performant
**Implementation Notes**:
- Work with the DevOps team to set up new Azure SQL instances based on expected MVP data sizes and throughput
Data Model
Any expected data structure needed for the feature is detailed in this document. Generally it should be kept language/system agnostic, specifying lists of tables, fields, field types, sizes, and how they relate to one another, as well as indexes, views, and any needed procedures and functions, but not implementation details such as C# classes for the DTOs. Keeping it generic means it can be applied to any database and programming language, should those architecture decisions either not be decided yet or change later. If it’s your style, however, to provide specific code examples here, you can do so. It’s up to you.
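If you do choose to include code, a row-by-row entity table translates naturally into a typed structure. A hypothetical Python sketch (the field names and constraints mirror the kind of Product entity shown in the example below; nothing here is prescribed by SDD):

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class Product:
    """Illustrative DTO for a Product entity; the checks in
    __post_init__ come straight from the data-model table."""
    name: str          # Required, max 256 characters
    description: str   # Required, no maximum length
    id: str = field(default_factory=lambda: str(uuid.uuid4()))  # PK, GUID

    def __post_init__(self) -> None:
        if not self.name or len(self.name) > 256:
            raise ValueError("Name is required and limited to 256 characters")
        if not self.description:
            raise ValueError("Description is required")
```

The point is that the agnostic table remains the source of truth; the typed structure is just one possible projection of it.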
Example:
# Data Model: MVP
**Feature Branch**: 001-MVP
**Created**: January 27, 2026
## Database Architecture
This solution uses two databases, one for user information and identity, and the other for all other application data. This decouples application data from identity concerns.
## Application Database
### Entities
#### Product
This table contains a list of products in inventory.
| Field | Type | Constraints | Description |
|-------|------|-------------|-------------|
| Id | string (GUID) | PK | Unique identifier of the product |
| Name | string | Required, Max 256 | A title for the product |
| Description | string | Required, No Max | Contains the full product description text |
...
**Business Rules**:
- Each product must have a name and description. This information should be approved by marketing before the product is visible to shoppers
Contracts
If you’re doing API development, you will often create contract documents that define the various endpoints, their locations, names, types, and the data being passed back and forth. The specifics of the document will vary depending on what type of API service you’re building, but the most common standard these days is to use openapi.yaml documents. These documents consist of several blocks of information that lay out the various details of the server and each endpoint.
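One practical payoff of a contract document is that validation rules can be derived from it. Below is a minimal hand-rolled Python sketch (a real project would generate this from the YAML with an OpenAPI validator library; the schema constants here simply mirror a hypothetical `PostRequest` schema by hand):

```python
# Minimal hand-rolled validation against a PostRequest-style contract.
# The dict below mirrors the schema's `required` list and platform
# `enum` by hand -- a real project would load these from openapi.yaml.
POST_REQUEST = {
    "required": ["text", "platforms"],
    "platform_enum": {"linkedin", "mastodon", "bluesky"},
}

def validate_post_request(payload: dict) -> list[str]:
    """Return a list of validation errors (an empty list means a valid request)."""
    errors = [f"missing required field: {f}"
              for f in POST_REQUEST["required"] if f not in payload]
    for platform in payload.get("platforms", []):
        if platform not in POST_REQUEST["platform_enum"]:
            errors.append(f"unknown platform: {platform}")
    return errors
```

A valid payload returns an empty error list; a payload missing `text` or naming an unsupported platform returns errors, corresponding to the API's 400 response.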
Example:
openapi: 3.0.3
info:
  title: Social Post API
  version: 0.1.0
  description: API for generating and publishing social media posts to LinkedIn, Mastodon, and BlueSky.
servers:
  - url: http://localhost:5000
paths:
  /posts:
    post:
      summary: Create and publish a post
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/PostRequest'
      responses:
        '202':
          description: Accepted for publishing
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/PostAccepted'
        '400':
          description: Validation error
      ...
components:
  schemas:
    PostRequest:
      type: object
      required: [text, platforms]
      properties:
        text: { type: string }
        platforms:
          type: array
          items:
            type: string
            enum: [linkedin, mastodon, bluesky]
        mediaIds:
          type: array
          items: { type: string }
        prompt: { type: string }
        tone: { type: string, enum: [Professional, Casual, Humorous] }
    PostAccepted:
      type: object
      properties:
        id: { type: string }
        status: { type: string, enum: [Queued, Publishing] }
    ...
    Status:
      type: object
      properties:
        linkedin: { type: string }
        mastodon: { type: string }
        bluesky: { type: string }
Tasks
Once all of the various requirements and specifications are worked out and finalized, it’s time to break them down into a list of developer tasks. This document is the point where the developer starts slinging code. This document typically consists of a numbered task list broken up by phases and checkpoints. Each phase should be a related set of tasks. Each checkpoint should be a logical stopping point where all code tasks up to that point should be completed before moving on to the next block of tasks.
There may also be testing steps indicated that will detail how work up to that point should be verified as completed successfully. For example, after the development of the user login/logout functionality, a testing step might be: “Register as a new user, login as that user and verify that you can successfully access the application. Log out and verify that you can no longer access the application”.
Example:
# Tasks: Product Sales Platform
**Input**: Design documents from `/specs/001-MVP/`
**Prerequisites**: plan.md, spec.md, research.md, data-model.md
## Format: `[ID] [P?] [Story?] Description`
- **[P]**: Can run in parallel (different files, no dependencies)
- **[Story]**: Which user story this task belongs to (US1, US2, etc.)
- Include exact file paths in descriptions
## User Stories Summary
| Priority | Story | Title | Core Value |
|----------|-------|-------|------------|
| P1 | US1 | Sales platform | Core platform functionality |
| P1 | US2 | User login | Role-based access control |
| P2 | US3 | Add products to sales platform | Ability to add sales items |
| P2 | US4 | Customer registration | Adding customers to platform |
| P2 | US5 | Customer shopping cart | Ability to make purchases |
---
## Phase 1: Setup (Shared Infrastructure) [ ] Not yet begun
**Purpose**: Create new projects and configure dependencies
- [ ] T001 Create Directory.Build.props at src/ToySales/Directory.Build.props with shared properties (net10.0, nullable, implicit usings)
- [ ] T002 Create Directory.Packages.props at src/ToySales/Directory.Packages.props with centralized package versions
- [ ] T003 [P] Create ToySales.Domain class library project at src/ToySales/ToySales.Domain/ToySales.Domain.csproj
- [ ] T004 [P] Create ToySales.Infrastructure class library project at src/ToySales/ToySales.Infrastructure/ToySales.Infrastructure.csproj
- [ ] T005 [P] Create ToySales.Generators class library project at src/ToySales/ToySales.Generators/ToySales.Generators.csproj
- [ ] T006 [P] Create ToySales.Contracts class library project at src/ToySales/ToySales.Contracts/ToySales.Contracts.csproj
...
## Phase 2: Foundational
**Goal**: Core infrastructure must be in place to allow creation of users and add products and cart
**Independent Test**: Register as a new user, verify app is accessible
...
**Checkpoint**: Users must be able to register, log in, and log out.
Other Documents
There’s no limit on the other documents you can create, depending on your needs. It’s not uncommon to also create documents around setting up test data, developer environments, how to run the application, reference documents for internal libraries, and so forth. It’s generally preferable that these all be in Markdown format as well, but most AI agents can also read PDFs and many other common document formats such as Office.
SDD Tools
There are a number of tools out there focused on SDD, such as Tessl, Amazon Kiro, OpenSpec, and GitHub Spec Kit. They mostly work the same way: they provide a set of commands or scripts that you can use from a command line, in the tool they offer, or in an IDE of your choice. There is some variation from tool to tool. For instance, Kiro has its own IDE and its own AI for helping to guide the creation of your documents and writing your code. Spec Kit, on the other hand, is mostly designed as scripted commands you install into VS Code, and it lets you use any of the more common AI agents, such as OpenAI’s ChatGPT or Anthropic’s Claude, instead of providing its own AI or IDE.
But whatever toolset you use, the process is the same. You are working in partnership with the AI to lay out, in detail, your specifications and documents for each feature set before you write even a single line of code. The selling point is this: These tools make it exceptionally easy for a non-technical person to lay out in detail, at a very rapid pace, all of the requirements for each feature. The better the specifications, the better the code is going to be. The better the code is, the more likely the end product will meet the requirements that the client has.
It’s also been my experience that these tools, with their use of AI agents, can oftentimes think of related, and semi-related, things that those laying out specifications and features might not have thought about. One example from my personal experience: I was experimenting with Spec Kit using Claude Opus 4.5 as its AI agent, and was using it to design a particular SaaS product. I had specified only that the product “would use a payment service” for subscriptions and that my API would be C# based. In the research document it created, it provided information about four different payment service platforms that offered C# integration libraries, along with some of the advantages and disadvantages of each, all without any additional prompting.
In a typical non-technical/technical partnership using these tools, the client and/or business analyst could go through and detail out each feature and its spec document and the non-technical research and checklist documents, then turn it over to the technical team to create the plan, the technical research and checklists, and the tasks documents. The process is iterative, going back and forth. As I mentioned, SDD and the documents it creates are intended to be living, breathing, changing entities.
Conclusion
Far too often in development, documents are created, approved, stored in SharePoint or Jira, and then forgotten about and never looked at again. Spec Driven Development takes the opposite approach. By making the document process interactive, iterative, and, most importantly, integrated with and a core part of the code repository itself, it keeps those documents an active and ongoing part of the development process. Whether the code that gets written is crafted by a developer, an AI coding agent, or a combination of the two, having detailed specifications will vastly improve the quality of that code. And that can only be a good thing for everyone involved.


