AI Prompts for Writing Effective Software Engineering Documentation
Engineering documentation is universally acknowledged as important and just as universally written poorly under deadline pressure. AI hasn't solved the problem of teams not writing documentation, but it has dramatically lowered the time cost of writing documentation that's actually useful. The difference is in how you prompt: ask for documentation that teaches rather than documentation that merely describes, written for a specific audience with specific goals.
Architecture Decision Record Prompts for Capturing Engineering Decisions
ADRs (Architecture Decision Records) are the most valuable documentation a team can maintain — they answer 'why did we build it this way?' months or years later. Writing them during the decision process is hard because the decision is fresh and feels obvious. Writing them afterward is harder because you've forgotten the alternatives you rejected. AI makes ADR writing fast enough to happen in real time. Prompt: 'Write an ADR for this engineering decision: [describe the decision made]. Include: (1) title in the format ADR-[number]: [decision], (2) status: [Accepted/Superseded/Deprecated], (3) context: what problem were we facing and what constraints existed, (4) decision: what we decided and why in 2-3 sentences, (5) considered alternatives: list the 2-3 alternatives we considered, with one paragraph on each covering why we considered it and why we didn't choose it, (6) consequences: positive consequences, negative consequences, and risks accepted, (7) date and decision makers.' The 'considered alternatives' section is the most important for the future reader. The decision itself is visible in the code. Why you didn't choose the alternatives is what gets lost.
ADRs that say 'we considered X but chose Y because Y is better' are useless. ADRs that say 'we considered X but chose Y because X would have required us to [specific cost or constraint], whereas Y fits our current [scale/team/budget/timeline] even though X would have been better if [specific future condition]' — those keep paying dividends.
Most valuable ADR section: considered alternatives, with specific rejection reasons
Consequences: separate positive, negative, and risks accepted — don't merge them
Include constraints that existed at the time — they explain decisions that look strange later
Date and decision makers: this matters when the original context has changed
Store ADRs in the repo: docs/decisions/ADR-001.md format, versioned with the code
When to write: any decision that would take >1 hour to explain to a new team member
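The template above can be scaffolded in one command. A minimal shell sketch — the docs/decisions/ path follows the convention above, but the numbering logic and placeholder headings are illustrative assumptions:

```shell
#!/bin/sh
# Sketch: scaffold the next ADR under docs/decisions/ with the sections from
# the prompt above. The numbering scheme (count existing files + 1) is a
# simplifying assumption; a real script should parse existing ADR numbers.
mkdir -p docs/decisions
next=$(printf 'ADR-%03d' $(( $(ls docs/decisions | wc -l) + 1 )))
cat > "docs/decisions/${next}.md" <<EOF
# ${next}: <decision title>

Status: Accepted
Date: $(date +%Y-%m-%d)
Decision makers: <names>

## Context
## Decision
## Considered alternatives
## Consequences
EOF
echo "created docs/decisions/${next}.md"
```

Scaffolding removes the blank-page cost; the AI prompt then fills in the sections while the decision is still fresh.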
Runbook and Incident Response Prompts for On-Call Documentation
Runbooks are the documentation most teams skip and most regret skipping at 3am during an incident. The AI prompt that produces useful runbooks: 'Write a runbook for handling [type of incident or operational procedure] for our [service/system]. Structure: (1) overview — what this runbook covers and when to use it, (2) initial assessment — the first 5 things to check in the first 5 minutes, in priority order, (3) diagnostic steps — step-by-step commands and checks for the most common causes, with expected output vs problem output shown for each check, (4) resolution procedures — specific actions for each root cause identified in diagnostics, (5) verification steps — how to confirm the issue is resolved and the system is healthy, (6) escalation criteria — what conditions mean this needs to go to a more senior engineer or vendor support, (7) post-incident actions — what to do after the immediate issue is resolved. Use this command syntax: [shell/bash/kubectl/aws-cli] — write actual executable commands, not descriptions of commands.' The 'actual executable commands' requirement is what makes runbooks useful during incidents. Runbooks that say 'check the database connection' are useless at 3am. Runbooks that show `psql "$DATABASE_URL" -c "SELECT COUNT(*) FROM pg_stat_activity WHERE state = 'active';"` are what you need when you're half-awake.
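The 'expected output vs problem output' pattern looks like this in practice. A sketch of an initial-assessment section using portable stand-ins (disk, load, process checks) for the psql/kubectl commands a real runbook would use — the thresholds and the service name are illustrative assumptions:

```shell
#!/bin/sh
# Initial-assessment sketch for a hypothetical service. Each check pairs an
# executable command with what healthy vs unhealthy output looks like.

# Check 1: disk usage on the root filesystem.
# Expected: Use% well under 80. Problem: 90%+ usually means logs or tmp
# files are filling the disk.
df -h /

# Check 2: system load (Linux). The first three numbers are the 1/5/15-minute
# load averages.
# Expected: load near the CPU count. Problem: load several times the CPU count.
cat /proc/loadavg

# Check 3: the service process is running ("myservice" is a placeholder —
# substitute your process name).
# Expected: at least one matching line. Problem: the fallback message prints.
ps aux | grep "[m]yservice" || echo "process not found"
```

The comments carry the judgment calls (what counts as a problem), so a half-awake engineer only has to paste commands and compare output.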
Test your AI-generated runbooks by running through them on a development environment before an incident. Commands that look right but have wrong flags or missing environment variables create confusion at the worst time. Every runbook should be validated by going through it cold at least once.
Write actual executable commands, not descriptions of commands
Initial assessment: 5 things to check in 5 minutes, in priority order
Expected output vs problem output for each diagnostic check
Escalation criteria: specific conditions, not 'when things get bad escalate'
Test the runbook cold on a dev environment before it's needed in production
Runbook locations: PagerDuty alert links, Confluence, Notion — accessible without code repo access
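Part of that cold validation can itself be scripted. A sketch, assuming the (hypothetical) convention that executable lines in a runbook start with '$ ': extract those lines and parse them with sh -n, which catches syntax errors without executing anything:

```shell
#!/bin/sh
# Sketch: syntax-check the commands in a runbook before an incident.
# Builds a tiny sample runbook inline; the file paths and the "$ " command
# convention are assumptions, not a standard.
cat > /tmp/sample-runbook.txt <<'EOF'
Initial assessment:
$ df -h /
$ cat /proc/loadavg
EOF
# Extract lines starting with "$ ", then parse-only check them with sh -n.
sed -n 's/^\$ //p' /tmp/sample-runbook.txt > /tmp/runbook-cmds.sh
sh -n /tmp/runbook-cmds.sh && echo "runbook commands: syntax OK"
```

A parse-only check catches typos and broken quoting, but not wrong flags or missing environment variables — the full cold walkthrough on a dev environment is still required.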