SOP Generator: How AI Creates Documentation From Chaos

What is an SOP (Standard Operating Procedure)?

A Standard Operating Procedure (SOP) is a documented, step-by-step instruction set for a repeatable process, designed so anyone on a team can execute it consistently without relying on tribal knowledge. SOPs reduce bus factor, onboarding time, and errors in critical operations like deployments, migrations, and incident response.

TL;DR

  • Traditional SOP writing takes 5–8x longer than executing the process — LLMs cut that cost by an order of magnitude
  • Don't create new documentation artifacts — reuse what already exists: recordings, chat logs, config files
  • Loom video → Whisper transcription → LLM prompt = clean SOP in 3–5 minutes instead of 40
  • Prioritize processes by two axes: criticality (what breaks if it fails) × frequency (how often it runs)
  • CI/CD configs and Dockerfiles are already formalized processes — LLMs can generate human-readable SOPs from them

SOPs (Standard Operating Procedures) are documented step-by-step instructions for repeatable processes. In large companies, dedicated teams write them. In small teams of 1–5 people, nobody does: the deploy process lives in one person’s head, migration steps are buried in a three-month-old chat thread, staging setup instructions exist as a Loom video nobody can find when they need it.

This isn’t a discipline problem. Traditional process documentation takes disproportionate effort: writing up a 5-minute deploy takes 40 minutes, and half the steps go stale within a month. The economics work against documentation — until LLMs cut the cost of creating it by an order of magnitude.

This article explains how to use LLMs to generate SOPs from artifacts you already have: screen recordings, chat logs, config files. No dedicated “documentation sessions.” No corporate overhead. The same approach to reusing existing data described in context engineering — applied to process documentation.

Why SOPs Don’t Get Written in Small Teams

“Faster to just do it than to describe it.” Describing a process takes 5–8x longer than executing it. This math holds until someone else needs to run it. A new hire, a freelancer, or yourself six months from now.

“The process keeps changing.” In active development, everything evolves. A hand-written SOP goes stale in a month. Maintaining docs becomes its own task, and there’s no bandwidth for it.

“Everyone just knows.” Until the first vacation, sick leave, or resignation. Bus factor = 1. Standard situation for small teams.

“It’s bureaucracy.” People associate SOPs with ISO certification and 500-page binders. But a startup SOP is a 2-page markdown file, not a regulation.

LLMs change the economics: generating a structured instruction from raw data takes minutes, not hours.

Data Sources: What Your Team Already Has

The key principle: don’t create new artifacts — reuse what already exists. Processes are already explained to colleagues in chats, demonstrated on calls, formalized in configs.

Screen Recordings and Calls

Record the process via Loom or Google Meet, then transcribe it (Whisper, built-in transcription). Feed the transcript to an LLM with a prompt to convert it into step-by-step instructions. Output: a clean SOP in 3–5 minutes instead of 40 minutes of manual writing.
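The transcription step can be scripted so it runs the same way every time. A minimal sketch, assuming the open-source `whisper` CLI (from the openai-whisper package) is installed and on PATH; the file names are hypothetical:

```python
import subprocess
from pathlib import Path

def whisper_cmd(recording: Path, out_dir: Path) -> list[str]:
    """Build the openai-whisper CLI invocation for a recorded walkthrough."""
    return [
        "whisper", str(recording),
        "--model", "small",         # "small" is usually enough for clear speech
        "--output_format", "txt",   # plain-text transcript to paste into the LLM prompt
        "--output_dir", str(out_dir),
    ]

def transcribe(recording: Path, out_dir: Path) -> str:
    """Run Whisper and return the transcript text."""
    subprocess.run(whisper_cmd(recording, out_dir), check=True)
    return (out_dir / recording.with_suffix(".txt").name).read_text()
```

Drop the returned transcript into the generation prompt from Step 3 below, or pipe it straight into your LLM tool of choice.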

Chat Logs

A ten-message Slack thread answering “how do I set up staging?” is already an SOP draft. Copy the conversation into an LLM and you get a structured document with no extra effort.

Code and Configs

CI/CD pipelines, Dockerfiles, Makefiles: these are formalized processes already written in a language LLMs understand well. The model reads .github/workflows/deploy.yml and generates a human-readable instruction: what each step does, which environment variables are needed, what to check after execution.
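You can pre-digest a workflow file before pasting it into the prompt, so the model sees the steps and secrets explicitly. A rough sketch using regex rather than a YAML parser — it assumes simple single-line `run:` entries, and the sample workflow below is hypothetical:

```python
import re

def summarize_workflow(yaml_text: str) -> str:
    """Pull step names, shell commands, and referenced secrets out of a
    simple GitHub Actions workflow, for pasting into an SOP prompt."""
    steps = re.findall(r"-\s*name:\s*(.+)", yaml_text)
    cmds = re.findall(r"run:\s*(.+)", yaml_text)
    secrets = re.findall(r"\$\{\{\s*secrets\.(\w+)\s*\}\}", yaml_text)
    lines = ["Steps: " + "; ".join(s.strip() for s in steps),
             "Commands: " + "; ".join(c.strip() for c in cmds)]
    if secrets:
        lines.append("Secrets referenced: " + ", ".join(sorted(set(secrets))))
    return "\n".join(lines)
```

For anything beyond trivial workflows, a real YAML parser is the safer choice; the point is only that the config already contains the process.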

| Process Type | Capture Method | Rationale |
| --- | --- | --- |
| Deploy | Terminal recording (asciinema) | Exact commands, reproducibility |
| Onboarding | Loom video + transcript | Heavy visual context |
| Bug handling | Slack thread copy | Artifact already exists |
| Dependency updates | Screenshots + notes | Mixed process (GUI + CLI) |

Step-by-Step: Generating SOPs With LLMs

Step 1. Process Inventory and Prioritization

Before documenting, you need prioritization. Two parameters decide what to document first: criticality (what happens if the process breaks) and frequency (how often it runs).

Prompt for prioritization:

Here are tasks performed regularly in our project:
- Production deploy
- Setting up a new developer environment
- Handling user bug reports
- Monthly dependency updates
- Database backup
- Mobile app release
- Onboarding a new team member

Sort by two axes: criticality (what happens if the process breaks)
and execution frequency. Which should be documented first?

Start where “critical” and “frequent” overlap. Usually that’s deploy and incident response.
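If you prefer to do the triage without an LLM, the two-axis scoring reduces to a few lines. A sketch with made-up 1–5 ratings for the task list above:

```python
def prioritize(processes: dict[str, tuple[int, int]]) -> list[str]:
    """Sort processes by criticality x frequency (each rated 1-5), highest first."""
    return sorted(processes, key=lambda p: processes[p][0] * processes[p][1], reverse=True)

# Hypothetical (criticality, frequency) ratings
ratings = {
    "Production deploy":   (5, 5),
    "Incident response":   (5, 3),
    "Database backup":     (4, 4),
    "Dependency updates":  (2, 3),
    "Onboarding":          (2, 2),
}
```

Multiplying the axes means a process must score on both to reach the top, which matches the "critical AND frequent" rule.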

Step 2. Capture Raw Material

For each process in your top 5, pick a capture method from the table above. The principle: don’t set aside dedicated time for documentation. Record during actual execution. Next deploy: with asciinema running. Next onboarding: with Loom recording.

Step 3. Generate the SOP via LLM

A base prompt that produces consistent results:

You are a technical writer. Convert the raw material below
into a step-by-step instruction (SOP).

Rules:
- Each step starts with a verb (Open, Run, Verify, Confirm)
- Terminal commands go in code blocks
- Expected result after each step
- "What to do if something goes wrong" section at the end
- Time estimate for each step
- Prerequisites: what must be set up in advance

Format: Markdown.
Reader level: mid-level developer familiar with Linux and Git,
but unfamiliar with this specific project.

Raw material:
[transcript / chat log / notes]

Key elements of the prompt: explicit step format (verb first), requiring expected results after each step (without this the instruction becomes unverifiable), and specifying the reader level (determines what to explain vs. what to assume).
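If you generate SOPs regularly, it is worth keeping the base prompt as a template rather than retyping it. A minimal helper (the default reader level is the one from the prompt above; the function name is arbitrary):

```python
SOP_PROMPT = """You are a technical writer. Convert the raw material below
into a step-by-step instruction (SOP).

Rules:
- Each step starts with a verb (Open, Run, Verify, Confirm)
- Terminal commands go in code blocks
- Expected result after each step
- "What to do if something goes wrong" section at the end
- Time estimate for each step
- Prerequisites: what must be set up in advance

Format: Markdown.
Reader level: {reader_level}.

Raw material:
{raw_material}
"""

def build_sop_prompt(
    raw_material: str,
    reader_level: str = "mid-level developer familiar with Linux and Git, "
                        "but unfamiliar with this specific project",
) -> str:
    """Wrap a transcript, chat log, or notes in the base SOP-generation prompt."""
    return SOP_PROMPT.format(raw_material=raw_material, reader_level=reader_level)
```

A fixed template also makes output quality comparable across processes: when an SOP comes out bad, you know it was the raw material, not the prompt.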

Step 4. Verify the Generated SOP

The generated document needs checking. Two approaches:

Self-check. Walk through the steps literally. Not “yeah, that makes sense” — actually execute each command. Missing steps or imprecise wording surfaces on the first run.

Peer check. Hand the document to a colleague who doesn’t know the process. If they complete it without questions, the document works. Every question is a signal to add detail.

Step 5. Storage and Discoverability

An SOP is useless if you can’t find it when you need it. Main options:

  • Knowledge bases (Notion, Outline). Search, tagging, API for automation. Each SOP is a separate document in an “Operations” collection with tags: deploy, onboarding, incident-response, maintenance.
  • Repository. A docs/sop/ folder in Git. Upside: version control, code review on changes, proximity to code. Downside: non-technical team members won’t dig through Git.
  • CLAUDE.md / README. For processes specific to an AI agent or the project itself. Deploy checklists, commit conventions, project structure — essentially SOPs embedded in the working context.

Ready-Made Prompt Templates by SOP Type

Deploy and DevOps Processes

Based on this CI/CD config, generate an SOP for manual deploy
(in case CI/CD is broken):

[pipeline YAML]

Include:
- Pre-deploy checklist
- Rollback commands
- Escalation contacts/links

Incident Response

Generate an SOP for responding to a production incident.
Project context: [stack, hosting, monitoring].

Structure:
1. Severity classification (P1/P2/P3)
2. Immediate actions for each level
3. Communication (who to notify, message template)
4. Diagnostics (where to check logs, metrics)
5. Post-mortem template

If your project already has observability set up, you can enrich the prompt with specific tools — as described in the article on observability with Langfuse.

Developer Onboarding

Here are the project's README.md and CLAUDE.md:
[both files]

Generate an SOP for onboarding a new developer:
- Environment setup (step by step, all dependencies)
- Access provisioning (list of services + who grants access)
- First task (what to pick up, how to verify, where to deploy)
- Key contacts and communication channels

Periodic Maintenance

Here are notes from the last dependency update:
[notes]

Convert into a monthly checklist. Include:
- What to check before updating
- Update order (what goes first, what follows)
- How to test after updating
- Common problems and their fixes

How to Keep SOPs Up to Date

The main problem with SOPs: they go stale. A few approaches to slow this down.

Revision date in the header. Format: Last verified: 2026-03-15. If the date is older than 3 months, the document gets flagged for review. Automate this with a cron script that scans documents.

Update during execution. Every time you follow an SOP is a chance to update it. Notice step 4 has changed? Fix it now while the context is fresh. 30 seconds of updating now beats a full rewrite in six months.

LLM for incremental updates. Feed the old SOP plus a diff of changes to the model:

Here is the current deploy SOP (written 3 months ago):
[old document]

Here is what changed since then:
- Migrated from Vercel to Cloudflare Pages
- Added an IndexNow ping step
- Removed the manual cache purge step (automated it)

Update the SOP, preserving format and style.

Git versioning. If the SOP lives in a repository, every change is visible in commit history. You can trace process evolution and roll back if needed.

What to Document and What to Skip: Three Criteria

A common mistake: trying to document everything. Three filters:

  1. Repeatability. The process runs at least once a month.
  2. Transferability. The process can and should be executed by more than one person.
  3. Criticality. A process failure causes real damage.

If a process doesn’t pass at least two of the three, documenting it isn’t worth the effort.
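The two-of-three rule is trivially mechanical, which makes it easy to apply consistently during a process inventory. As a one-liner:

```python
def worth_documenting(repeatable: bool, transferable: bool, critical: bool) -> bool:
    """Document a process only if it passes at least two of the three filters."""
    return sum([repeatable, transferable, critical]) >= 2
```

For example, a critical process run by one person once a year fails the test, while a weekly task that anyone might pick up passes even if nothing breaks when it slips.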

Minimum SOP Set for Any Project

Five documents covering the core risks:

  1. Production deploy — including rollback procedure
  2. Incident response — from severity classification to post-mortem
  3. Secrets and access management — where things are stored, how to get access, how to rotate
  4. Onboarding — from zero to first commit
  5. Periodic maintenance — backups, updates, monitoring

For edge functions and serverless, where the deploy process is unique, consider adding a dedicated SOP that accounts for patterns like circuit breaker.

Tools for Creating and Storing SOPs

| Tool | Role in the Process |
| --- | --- |
| Claude / ChatGPT | SOP generation from raw data |
| Loom | Screen recording of processes |
| Whisper | Audio/video transcription |
| Outline / Notion | Storage and search |
| Git | SOP version control alongside code |
| asciinema | Terminal session recording |

Common Mistakes in AI-Generated SOPs

Hallucinated steps. An LLM may insert a step that doesn’t exist in the source material, or reference a non-existent command flag. Verification is mandatory: every generated SOP gets walked through manually at least once.

Over-detailing. The model may break a trivial action into five sub-steps. Calibrate the detail level to the target reader: a mid-level developer doesn’t need instructions on how to open a terminal.

Context-dependent processes. Some processes depend on system state, and a linear instruction doesn’t work. In these cases, a decision tree format helps: “If X, do step A; if Y, do step B.”
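A branching SOP can be modeled as a tiny tree and rendered from data, so the document and the logic stay in sync. A sketch with a hypothetical incident-triage example:

```python
# Each node is either a question with yes/no branches, or a terminal step.
TREE = {
    "question": "Is the service responding to health checks?",
    "yes": {"step": "Check error rates in the dashboard, escalate if elevated"},
    "no": {
        "question": "Did the last deploy finish within the past hour?",
        "yes": {"step": "Roll back to the previous release"},
        "no": {"step": "Restart the service and watch the logs"},
    },
}

def walk(node: dict, answers: list[str]) -> str:
    """Follow a sequence of yes/no answers down the tree to a terminal step."""
    for answer in answers:
        node = node[answer]
    return node["step"]
```

Even if you never execute it as code, writing the branches out this way exposes dead ends and ambiguous conditions before the SOP ships.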

False sense of security. Having an SOP doesn’t guarantee anyone will read it. The document must be accessible at the moment it’s needed: in the project README, in a pinned channel message, in CLAUDE.md.

Where to Start Today

  1. Pick one critical process. Usually it’s production deploy.
  2. Next time you run it, start recording (asciinema for terminal, Loom for GUI).
  3. Feed the recording/transcript to an LLM with the prompt from the “Generate the SOP” section.
  4. Walk through the generated document, executing each step literally.
  5. Fix inaccuracies, save it somewhere accessible.
  6. Repeat for the next process on the priority list.

15–20 minutes per process. Five processes, an hour and a half. After that, critical knowledge is no longer locked inside one person’s head.

FAQ

How do you handle processes that differ significantly between team members — like deploy steps that vary by environment or OS?

Document the most common path first, then add a “Variations” section at the end with environment-specific branches. For processes with many branches, a decision tree format works better than a linear list: “If deploying to staging, go to Step A. If deploying to production, go to Step B.” Feed the LLM a note about the variation matrix upfront so it structures the document accordingly rather than picking one path arbitrarily.

Does an LLM-generated SOP expose sensitive information if the raw material contains credentials or internal URLs?

Yes, if you paste raw chat logs or terminal recordings that include secrets. Scrub credentials before sending any material to an LLM — replace real tokens with placeholders like $DB_PASSWORD. The generated SOP should reference environment variables or a secrets manager, not actual values. This applies even to internal LLM deployments: the principle is that SOPs are shared documents and should never contain live credentials.

After how many process changes does it make more sense to regenerate the SOP from scratch rather than update it incrementally?

A practical threshold is three or more structural changes in one cycle — for example, a new tool replaces an old one, a step is reordered, and a verification check is added. Incremental updates work well for isolated additions, but when the logical flow changes in multiple places, the LLM’s incremental prompt tends to produce a patchwork document. At that point, running the full generation prompt against fresh notes takes less time and produces cleaner output than correcting a document with conflicting instructions.