ADR Template: How AI Generates Architecture Decision Records Your Future Self Will Thank You For
What is an Architecture Decision Record (ADR)?
An Architecture Decision Record (ADR) is a document that captures a single architectural decision: the context that forced it, what was decided, the consequences, and the alternatives that were rejected. Introduced by Michael Nygard in 2011 and adopted by companies like Spotify and GitHub, ADRs live alongside the code in a repository and provide the institutional memory that prevents teams from re-analyzing past decisions. AI can generate a first-draft ADR in 3-5 minutes from a natural language description of the decision.
TL;DR
- ADRs reduce onboarding time and prevent repeated analysis of past decisions — teams at Spotify and GitHub attribute significant DX gains to systematic ADR adoption
- The Alternatives Considered section is the most frequently skipped and the highest-value: it records why rejected options were ruled out, not just what was chosen
- AI generates a complete ADR draft in 3-5 minutes vs 30-40 minutes manually; the key is providing specific metrics rather than vague context
- Retrospective ADR generation from PR diffs is practical: a dedicated prompt extracts architectural decisions from existing Git history with 80%+ accuracy
- ADR coverage below 50% of architectural PRs signals process failure; Time-to-ADR above one week means context is already lost
Teams make dozens of architectural decisions every month and document almost none of them. The undocumented decisions dissolve into Slack threads, hallway conversations, and the minds of people who will leave the company within a year.
Six months later, a new developer stares at the code and asks: “Why Redis here instead of PostgreSQL for queues?” Nobody remembers. An archaeological dig through Git history, Slack, and Notion begins. Two hours spent investigating a decision that originally took 15 minutes.
Architecture Decision Records (ADRs) solve this problem. But they don’t get written. The reason is simple: drafting an ADR takes 30-40 minutes, and the developer has already moved on to the next task. AI compresses that to 3-5 minutes. This article covers ADR structure, prompts for LLM-based generation, real-world examples, and CI pipeline automation.
What ADRs are and why capturing architectural decisions matters
An ADR (Architecture Decision Record) is a document that captures one specific architectural decision. Not a spec, not an RFC, not a design document. One decision, one file.
Michael Nygard introduced the concept in 2011. The format took hold at large companies (Spotify, Thoughtworks, GitHub) but remains rare in smaller teams. The main reason: the writing overhead feels higher than the value it delivers.
Three situations where the absence of ADRs hurts the most:
Onboarding. A new developer reads the code and encounters an unconventional decision. Without an ADR, they either spend hours investigating, or treat it as a mistake and “fix” it. Both paths are expensive for the team.
Revisiting decisions. Context changes: load increases, new requirements emerge, a dependency goes stale. Without a record of why the current solution was chosen and which alternatives were rejected, the team re-runs the entire analysis from scratch.
Audits and compliance. In regulated industries (fintech, healthtech), architectural decisions require documented justification. ADRs close that gap automatically.
ADR template structure: 7 required sections
A minimum viable ADR contains seven sections. Each answers a specific question.
# ADR-{number}: {Decision title}
## Status
Proposed | Accepted | Deprecated | Superseded by ADR-{number}
## Date
2026-03-26
## Context
What problem or situation forces this decision?
Technical constraints, business requirements, current system state.
## Decision
Exactly what was decided. A concrete statement without vague language.
## Consequences
Positive and negative effects of the decision.
What becomes easier, what becomes harder.
## Alternatives Considered
Which options were evaluated and why they were rejected.
Comparison criteria, trade-offs.
## References
Links to issues, PRs, discussions, documentation, benchmarks.
Status has four values. Proposed means the decision is under discussion. Accepted means it’s adopted and in use. Deprecated means it’s outdated, but no replacement has been chosen yet. Superseded by ADR-{N} means it was replaced by a newer decision with a direct link.
Context is the most important section. Without context, a decision loses meaning. “We chose Redis for caching” tells you nothing. “We chose Redis for caching because PostgreSQL LISTEN/NOTIFY couldn’t deliver sub-millisecond latency for autocomplete at 10K RPS” tells you everything.
Alternatives Considered is the section most often skipped and the one that provides the most value. When the question “why not Kafka?” comes up a year later, the answer is already recorded.
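Because the Status field has exactly four allowed values, it is easy to check mechanically. A minimal sketch, assuming the `docs/adr/` layout described later in this article and the template convention that the status value sits on the line after the `## Status` heading:

```shell
#!/bin/sh
# Flag ADRs whose Status line is not one of the four recognized values.
# Assumes files named NNN-slug.md with the value right after "## Status".
lint_adr_status() {
  dir="${1:-docs/adr}"
  valid='^(Proposed|Accepted|Deprecated|Superseded by ADR-[0-9]+)$'
  for f in "$dir"/[0-9]*.md; do
    [ -e "$f" ] || continue
    status=$(grep -A1 '^## Status' "$f" | tail -n 1)
    echo "$status" | grep -Eq "$valid" \
      || echo "INVALID status in $f: '$status'"
  done
}

lint_adr_status docs/adr
```

Running this in CI keeps statuses honest without relying on anyone remembering the four values.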
Prompt for generating ADRs with AI
A baseline prompt that works with Claude, GPT-4, and Gemini:
You are a senior software architect. Generate an ADR (Architecture Decision Record)
using the following template.
DECISION: {description of the decision made}
PROJECT CONTEXT:
- Stack: {languages, frameworks, infrastructure}
- Scale: {load, team size, product stage}
- Constraints: {budget, deadlines, compliance, legacy}
OUTPUT FORMAT:
# ADR-{number}: {Title}
## Status
Accepted
## Date
{current date}
## Context
Describe the problem that led to this decision. Include technical and business factors.
Specific metrics where applicable. 3-5 sentences.
## Decision
State the decision in one paragraph. No words like "we decided" or "we will" - facts only.
Specify the scope: what is included, what is not.
## Consequences
### Positive
- List 3-5 concrete improvements
### Negative
- List 2-3 trade-offs or risks
## Alternatives Considered
For each alternative:
### {Alternative name}
- Description in 1-2 sentences
- Reason for rejection (specific, not "didn't fit")
## References
- Links to relevant resources
RULES:
- Active voice, no filler
- Specific metrics instead of "faster/better/simpler"
- If metrics are unavailable, use qualitative criteria
- Every Consequences item must be verifiable
This prompt covers 80% of cases. The remaining 20% require specialized variants.
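One way to wire the base prompt into a CLI workflow is to keep it in a file and substitute the placeholders before sending it to a model. A sketch, where the `prompts/adr-base.txt` path and the exact placeholder strings are illustrative assumptions:

```shell
#!/bin/sh
# Fill placeholders in a prompt file and print the result; pipe the
# output to whichever model CLI you use.
fill_adr_prompt() {
  prompt_file="$1" decision="$2" stack="$3"
  sed -e "s|{description of the decision made}|$decision|" \
      -e "s|{languages, frameworks, infrastructure}|$stack|" \
      "$prompt_file"
}

# Illustrative invocation, guarded so it is a no-op without the file
if [ -f prompts/adr-base.txt ]; then
  fill_adr_prompt prompts/adr-base.txt \
    "Use Redis for external API response caching" \
    "TypeScript, Deno edge functions, PostgreSQL"
fi
```

Keeping the prompt under version control next to `template.md` means prompt improvements get reviewed like any other change.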
Advanced prompts: migrations, technology choices, deprecations
Prompt for a migration ADR
Migrations carry the highest risk and the longest consequence horizon.
Generate an ADR for a migration.
FROM: {current solution, version, how long in production}
TO: {target solution}
REASON FOR MIGRATION: {what broke or stopped working}
In addition to the standard template, add these sections:
## Migration Strategy
- Approach: Big Bang | Strangler Fig | Parallel Run
- Justification for the approach
- Migration phases with estimated timelines
## Rollback Plan
- Rollback triggers
- Rollback procedure
- Acceptable rollback window
## Success Criteria
- Metrics that define migration as complete
- Stabilization period after migration
Prompt for a technology selection ADR
Generate an ADR for a technology selection.
TASK: {what problem the chosen technology solves}
CANDIDATES: {list of evaluated options}
CHOICE: {winner}
Additionally:
## Evaluation Criteria
Comparison table of candidates by criteria:
| Criterion | Weight | {Candidate 1} | {Candidate 2} | {Candidate 3} |
Criteria: performance, DX, community/support, cost, learning curve, integration
## Proof of Concept
- What was tested in the PoC
- Results with numbers
Prompt for a deprecation ADR (Deprecated/Superseded)
Generate an ADR recording the rejection of a previously accepted decision.
ORIGINAL ADR: {number and title}
REASON FOR REJECTION: {what changed since the decision was made}
REPLACEMENT: {new decision, if any}
Additionally:
## What Changed
- Specific context changes that made the decision obsolete
- Date or event that made it apparent
## Lessons Learned
- What the team learned from using the previous solution
- How this influences similar future decisions
Example: ADR for a real project
A team is choosing a caching strategy for a travel app API. Here is what AI generates using the prompt above:
# ADR-012: Redis for External API Response Caching
## Status
Accepted
## Date
2026-03-26
## Context
The travel app API calls external services (Foursquare, weather, flights)
for every user request. Average response time: 800ms on a cold call. 40%
of requests are repeat calls to the same data within a 15-minute window
(user switches between tabs, refreshes the page). External API rate limits:
500 req/min (Foursquare), 1000 req/min (weather). At 200 DAU, current load
is at 60% of the limit.
## Decision
Use Redis (Upstash Serverless) as a cache layer between edge functions
and external APIs. TTL: 15 minutes for geo data, 60 minutes for weather,
5 minutes for flight prices. Cache key: `{api}:{endpoint}:{normalized_params_hash}`.
Invalidation strategy: TTL-based, no manual invalidation in the first phase.
## Consequences
### Positive
- Response time for cached requests: 800ms to 15-25ms (Upstash REST API)
- Rate limit consumption drops by 40% with current usage patterns
- Edge functions free up faster, reducing compute consumption
### Negative
- Additional dependency: Upstash (managed, but still a point of failure)
- Stale data within the TTL window: users may see outdated ticket prices
- Cost: ~$5/mo at current load, scales with growth
## Alternatives Considered
### In-memory cache (Map in Deno isolate)
Zero latency, but state is lost on cold start. At the current cold start
frequency (every 3-5 minutes), hit rate would be below 20%. Doesn't
justify the implementation complexity.
### Cloudflare KV
Eventual consistency with up to 60-second delay. Acceptable for flight
price caching, but creates UX issues for geo data (user is moving).
Cost is comparable to Redis.
### PostgreSQL materialized views
Requires reworking the data layer. Not suitable for edge functions due
to connection latency (50-100ms vs 5-15ms for Redis REST API).
## References
- Upstash Redis REST API: https://docs.upstash.com/redis
- Foursquare rate limits: https://docs.foursquare.com/reference/rate-limits
Note the specificity. No phrases like “improves performance” or “reduces load.” Numbers instead: 800ms to 15-25ms, 40% reduction in rate limit consumption, $5/mo. Every item can be verified six months from now.
Generating ADRs from Git history and PR descriptions
AI can extract architectural decisions from existing artifacts. This closes the “we made a decision three months ago and forgot to document it” problem.
Prompt for retrospective generation:
Analyze the following PR (diff + description + comments) and determine
whether it contains an architectural decision. If so, generate an ADR.
PR TITLE: {title}
PR DESCRIPTION: {description}
PR DIFF (key files):
{diff}
PR COMMENTS:
{review comments}
Criteria for an architectural decision:
- Adding a new dependency
- Changing data structure or DB schema
- New pattern (caching, queues, retry strategy)
- Changing an API contract
- Choosing between two or more approaches (recorded in the discussion)
If the PR contains no architectural decision, respond "No ADR needed" with
a brief explanation.
For Claude Code, this process can be automated with:
# Fetch the latest PR diff and generate an ADR
gh pr view --json title,body,comments,files | \
claude -p "Analyze this PR and generate an ADR if you find an architectural decision."
Automation: ADRs as part of the CI/CD pipeline
Manual ADR generation works but requires discipline. CI automation removes the human factor.
GitHub Action: checking for an ADR
# .github/workflows/adr-check.yml
name: ADR Check

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  check-adr:
    runs-on: ubuntu-latest
    env:
      GH_TOKEN: ${{ github.token }}  # both steps call gh, so scope the token to the job
    steps:
      - uses: actions/checkout@v4
      - name: Detect architectural changes
        id: detect
        run: |
          CHANGED_FILES=$(gh pr diff ${{ github.event.pull_request.number }} --name-only)
          ADR_NEEDED=false
          # New dependencies
          if echo "$CHANGED_FILES" | grep -q "package.json\|Cargo.toml\|go.mod\|requirements.txt"; then
            ADR_NEEDED=true
          fi
          # DB migrations
          if echo "$CHANGED_FILES" | grep -q "migration\|schema"; then
            ADR_NEEDED=true
          fi
          # Infrastructure config
          if echo "$CHANGED_FILES" | grep -q "docker\|terraform\|cloudflare\|nginx"; then
            ADR_NEEDED=true
          fi
          echo "adr_needed=$ADR_NEEDED" >> $GITHUB_OUTPUT
      - name: Check for ADR file
        if: steps.detect.outputs.adr_needed == 'true'
        run: |
          CHANGED_FILES=$(gh pr diff ${{ github.event.pull_request.number }} --name-only)
          if ! echo "$CHANGED_FILES" | grep -q "^docs/adr/"; then
            echo "::warning::This PR contains architectural changes but no ADR. Consider adding one to docs/adr/"
          fi
This workflow warns rather than blocks. Blocking PRs through ADR requirements creates friction that kills adoption.
ADR file structure in the repository
docs/
└── adr/
    ├── README.md            # Index of all ADRs
    ├── template.md          # Template
    ├── 001-use-astro.md
    ├── 002-redis-caching.md
    └── 003-event-driven.md
Three-digit numbering with leading zeros. Files in chronological order. One file, one decision.
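The README.md index can be regenerated from the filenames rather than maintained by hand. A sketch, assuming each file's first line follows the `# ADR-NNN: Title` convention from the template:

```shell
#!/bin/sh
# Rebuild the ADR index from file names and first-line titles.
build_adr_index() {
  dir="${1:-docs/adr}"
  echo "# ADR Index"
  for f in "$dir"/[0-9]*.md; do
    [ -e "$f" ] || continue
    title=$(head -n 1 "$f" | sed 's/^# *//')
    echo "- [$title]($(basename "$f"))"
  done
}

build_adr_index docs/adr   # in a real repo: > docs/adr/README.md
```

Wiring this into a pre-commit hook or CI step keeps the index from drifting out of date.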
Script for creating a new ADR
#!/bin/bash
# scripts/new-adr.sh

if [ -z "$1" ]; then
  echo "Usage: $0 \"Decision title\"" >&2
  exit 1
fi

ADR_DIR="docs/adr"
# Highest existing number; defaults to 0 when no numbered ADRs exist yet
LAST_NUM=$(ls "$ADR_DIR" | grep -oP '^\d+' | sort -n | tail -1)
NEXT_NUM=$(printf "%03d" $((10#${LAST_NUM:-0} + 1)))
SLUG=$(echo "$1" | tr '[:upper:]' '[:lower:]' | tr ' ' '-' | tr -cd '[:alnum:]-')
FILENAME="${ADR_DIR}/${NEXT_NUM}-${SLUG}.md"
cat > "$FILENAME" << EOF
# ADR-${NEXT_NUM}: $1
## Status
Proposed
## Date
$(date +%Y-%m-%d)
## Context
<!-- What problem forces this decision? -->
## Decision
<!-- What was decided? -->
## Consequences
### Positive
-
### Negative
-
## Alternatives Considered
###
-
## References
-
EOF
echo "Created: $FILENAME"
Usage: ./scripts/new-adr.sh "Switch from REST to GraphQL".
Integrating ADRs with context engineering for AI agents
ADRs gain additional value as context for AI coding. When an AI agent (Claude Code, Cursor, Copilot) works with a codebase, ADRs provide architectural context that does not exist in the code itself.
Adding ADRs to the project’s CLAUDE.md:
## Architecture Decisions
Key ADRs to follow when making changes:
- ADR-005: Event-driven architecture for notifications (docs/adr/005-event-driven.md)
- ADR-008: PostgreSQL RLS for multi-tenancy (docs/adr/008-rls-multitenancy.md)
- ADR-012: Redis caching strategy (docs/adr/012-redis-caching.md)
The AI agent respects these decisions when generating code. Instead of suggesting a REST call for notifications, it uses the event bus because ADR-005 recorded that decision. More on structuring context for AI: Context Engineering Guide.
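That CLAUDE.md list can also be generated rather than curated. A sketch that emits the section from only the Accepted ADRs, under the same layout and template assumptions as above:

```shell
#!/bin/sh
# Emit a CLAUDE.md section listing every ADR currently in Accepted status.
claude_md_section() {
  dir="${1:-docs/adr}"
  echo "## Architecture Decisions"
  echo "Key ADRs to follow when making changes:"
  for f in "$dir"/[0-9]*.md; do
    [ -e "$f" ] || continue
    status=$(grep -A1 '^## Status' "$f" | tail -n 1)
    [ "$status" = "Accepted" ] || continue
    title=$(head -n 1 "$f" | sed 's/^# *//')
    echo "- $title ($f)"
  done
}

claude_md_section docs/adr
```

Filtering by status matters here: feeding a Superseded decision to an AI agent as active guidance produces exactly the wrong code.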
Common mistakes when writing ADRs
Context that is too abstract. “We needed to improve performance” is useless. “API response time grew to 2.3s at p95; SLA requires < 500ms” is useful.
Missing Alternatives. Without an alternatives section, the ADR looks like post-hoc justification rather than a deliberate choice. Even if there was only one alternative (doing nothing), that is worth recording.
Scope that is too broad. An ADR captures one decision. “Moving to microservices” is not one decision. It is ten. Each service, each contract, each communication mechanism deserves its own ADR.
Stale Status. An ADR with Accepted status that was replaced long ago by another decision misleads readers. Updating the status to Superseded by ADR-{N} takes seconds and saves others hours.
Confusing ADRs with documentation. An ADR does not describe how the system works. It describes why the system works the way it does. How is the job of SOPs and operational documentation.
Metrics: measuring ADR practice effectiveness
Four metrics that show whether ADRs are working for the team:
| Metric | How to measure | Target |
|---|---|---|
| ADR coverage | Number of ADRs / number of architectural PRs per month | > 70% |
| Time-to-ADR | Time from decision to recorded ADR | < 24 hours |
| Reference rate | How often ADRs are cited in PRs and discussions | > 2 times/month per ADR |
| Onboarding feedback | New joiners rate ADR usefulness (1-5) | > 4.0 |
ADR coverage below 50% means the process has not taken hold. Time-to-ADR above one week means context is being lost and the record becomes a fictional reconstruction.
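Time-to-ADR can be approximated from the repository itself: the template's Date field records when the decision was made, and the commit that first added the file records when it was written down. A sketch under the same layout assumptions (requires git ≥ 2.21 for the `%as` format):

```shell
#!/bin/sh
# Compare each ADR's Date field (decision date) with the author date of
# the commit that first added the file (recording date).
time_to_adr() {
  dir="${1:-docs/adr}"
  for f in "$dir"/[0-9]*.md; do
    [ -e "$f" ] || continue
    decided=$(grep -A1 '^## Date' "$f" | tail -n 1)
    recorded=$(git log --diff-filter=A --format=%as -1 -- "$f" 2>/dev/null)
    printf '%s: decided %s, recorded %s\n' "$f" "$decided" "${recorded:-uncommitted}"
  done
}

time_to_adr docs/adr
```

A consistently wide gap between the two dates is the "fictional reconstruction" signal made visible.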
Checklist: introducing ADRs to a team
- Create the docs/adr/ directory and the template
- Write 3-5 ADRs for already-made decisions (retrospectively, with AI help)
- Add the new-adr.sh script for quick creation
- Set up the GitHub Action with a soft warning (not a block)
- Include key ADRs in CLAUDE.md / .cursorrules for AI agents
- Review ADRs like code: through PRs
- Walk through all ADRs quarterly and update statuses
The first three steps take 30 minutes with AI. The rest is a habit that forms over 2-3 sprints.
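The quarterly walkthrough in the last checklist item can start from a generated shortlist instead of a full read-through. A sketch that lists Accepted ADRs dated before a cutoff, under the same layout and Date-field assumptions as above (`date -d` is GNU coreutils):

```shell
#!/bin/sh
# List Accepted ADRs whose Date field is older than a cutoff -- candidates
# for the quarterly status review. ISO dates compare numerically once the
# dashes are stripped (2026-03-26 -> 20260326).
review_candidates() {
  dir="${1:-docs/adr}" cutoff="$2"
  c=$(echo "$cutoff" | tr -d '-')
  for f in "$dir"/[0-9]*.md; do
    [ -e "$f" ] || continue
    status=$(grep -A1 '^## Status' "$f" | tail -n 1)
    [ "$status" = "Accepted" ] || continue
    dated=$(grep -A1 '^## Date' "$f" | tail -n 1)
    d=$(echo "$dated" | tr -d '-')
    if [ "$d" -lt "$c" ] 2>/dev/null; then
      echo "Review: $f (dated $dated)"
    fi
  done
}

review_candidates docs/adr "$(date -d '90 days ago' +%Y-%m-%d)"
```

Only Accepted entries are listed because Deprecated and Superseded ADRs have, by definition, already been reviewed.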
FAQ
Should ADRs be reviewed like code — through pull requests?
Yes, this is the most effective approach. Reviewing ADRs via PRs keeps the architectural record auditable, allows team members to challenge or refine the decision before it is locked in, and creates a natural link between the code change and the rationale. The review process also surfaces disagreements early: an ADR under debate in a PR is better than a contested decision discovered six months into implementation.
What is the right granularity for an ADR — one per service or one per significant choice?
One ADR per atomic decision, not per service or per project. Choosing Redis for caching and choosing Upstash specifically as the managed provider are two separate ADRs. “Moving to microservices” is not one ADR — it is at minimum one per service boundary, one for the communication protocol, and one for the deployment strategy. Overly broad ADRs lose precision; overly narrow ones become noise.
How should ADRs be handled when a team is acquired or the codebase is inherited?
Treat inherited ADRs as unverified hypotheses. Read them for context, but audit each against the current state of the system. Some decisions will be stale (the library is deprecated, the load assumption changed), some will still be valid. The fastest path is a one-day “ADR audit sprint”: read every ADR, update Status fields, and add a brief note to any where the context has shifted. This investment pays back in the first month of development.