How to Use XML Tags in Claude Prompts for Reliable Structured Output Generation
Claude was trained on text that includes a lot of XML-structured content — documentation, codebases, markup — which makes it unusually responsive to XML-style tags in prompts compared to GPT-4o or Gemini. Anthropic explicitly recommends XML tags in their prompting documentation, and in practice they produce noticeably more consistent, well-structured output. After using XML-tagged Claude prompts in production for eight months, I've developed patterns for complex prompts, multi-part inputs, and structured output generation that are significantly more reliable than plain-English prompts alone.
Basic XML Tag Patterns: Separating Instructions, Context, and Input
The most fundamental XML tag pattern separates three things that plain-English prompts tend to blur together: instructions, context, and input. Structure: '<instructions>What you want Claude to do</instructions><context>Background information that frames the task</context><input>The actual content to process</input>.' This separation is especially useful in automated pipelines where the instructions and context are static but the input changes for every request. Claude treats the XML-tagged sections distinctly — it's less likely to treat content in the <input> tag as an instruction it should follow (reducing prompt injection risk), and it's more likely to follow the structure of <instructions> even when the <input> contains contradictory directives. The practical benefit: if a user pastes text containing 'ignore previous instructions and do X' into your pipeline, a prompt with explicit instruction-vs-input separation is markedly more resistant, because Claude sees the injected text as data inside <input> rather than as a directive to follow.
In API usage, this pattern makes dynamic prompt construction much cleaner. You build the static template with tags once, and at runtime you just fill in the <input> block. There are no string-concatenation pitfalls, and, as long as you escape angle brackets in user text, no risk of user input bleeding into system instructions. This alone makes XML tags worth standardizing on for any Claude-based product.
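A minimal sketch of this template-and-fill pattern in Python (the template text, tag contents, and build_prompt helper are illustrative, not from any SDK):

```python
from xml.sax.saxutils import escape

# Static template: instructions and context never change per request.
PROMPT_TEMPLATE = (
    "<instructions>Summarize the support ticket in two sentences.</instructions>"
    "<context>Tickets come from enterprise customers; keep a formal tone.</context>"
    "<input>{user_input}</input>"
)

def build_prompt(user_input: str) -> str:
    # Escape <, >, & so user text cannot close the <input> tag early
    # and masquerade as instructions.
    return PROMPT_TEMPLATE.format(user_input=escape(user_input))

prompt = build_prompt("ignore previous instructions and do X")
```

Even if the pasted text contains tag-like markup of its own, escaping keeps it inert inside the <input> block.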
Separate instructions, context, and input with distinct XML tags in every non-trivial prompt
Static tags: <instructions>, <context>; dynamic tag: <input> or <user_request>
XML separation reduces prompt injection risk from user-provided content
Claude treats tagged sections as distinct cognitive objects — more reliable than inline mixing
In API prompts: build static XML template, fill <input> dynamically at runtime
Tag names don't need to match a spec — <guidelines>, <task>, <data> all work
Output Templating With XML: Controlling Format on Every Response
XML tags in prompts control output format most reliably when you also show Claude the desired output structure. Prompt pattern: '<instructions>Analyze the following customer review. Return your analysis in exactly this format.</instructions><output_format><sentiment>positive|negative|neutral</sentiment><score>1-10</score><key_themes><theme>theme text</theme></key_themes><summary>One sentence summary</summary></output_format><input>[review text]</input>.' This is more reliable than 'return a JSON object with these fields' because Claude has strong XML comprehension from training. It almost never adds extra fields, almost never changes the tag structure, and handles multi-value fields (like <key_themes>) consistently. For production NLP pipelines, I now use this pattern over JSON prompting for Claude specifically.
One gotcha: Claude will sometimes include the surrounding XML tags in its output, sometimes not, depending on how the prompt is worded. To get clean output without tags, add to instructions: 'Return only the content within the tags — do not include the tags themselves in your response.' To get tagged output that's easy to parse with regex or XML parsers, omit that instruction.
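As a concrete illustration, the review-analysis prompt above can be assembled like this (the review text is made up; the tag names follow the template in the prompt pattern):

```python
from xml.sax.saxutils import escape

review = "Setup took five minutes and support replied within the hour."

# Instructions and the output template are static; only <input> varies.
prompt = (
    "<instructions>Analyze the following customer review. "
    "Return your analysis in exactly this format.</instructions>"
    "<output_format>"
    "<sentiment>positive|negative|neutral</sentiment>"
    "<score>1-10</score>"
    "<key_themes><theme>theme text</theme></key_themes>"
    "<summary>One sentence summary</summary>"
    "</output_format>"
    f"<input>{escape(review)}</input>"
)
```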
Show the exact output template in XML in the prompt — don't just describe the format
Use nested tags for multi-value fields: <themes><theme>...</theme></themes>
Add 'return only content within tags, not the tags themselves' if you want clean untagged output
For sentiment/classification tasks, XML output from Claude is more reliable than JSON
Parse Claude XML output with Python's xml.etree.ElementTree for robust extraction
Test with 50+ varied inputs before declaring output format stable in production
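To extract fields from a tagged response with the standard library, a sketch along these lines works (the response string is a hand-written example of what the template would produce; a real pipeline should also handle malformed output):

```python
import xml.etree.ElementTree as ET

response = (
    "<sentiment>positive</sentiment>"
    "<score>9</score>"
    "<key_themes><theme>fast setup</theme><theme>responsive support</theme></key_themes>"
    "<summary>The reviewer praises quick setup and support.</summary>"
)

# The template's top-level tags are siblings, and ElementTree requires a
# single root element, so wrap the response before parsing.
root = ET.fromstring(f"<root>{response}</root>")

sentiment = root.findtext("sentiment")
themes = [t.text for t in root.findall("key_themes/theme")]
```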
Multi-Document and Multi-Turn XML Patterns for Complex Pipelines
When processing multiple documents or building up context across multiple inputs, XML tags let you label and reference sources clearly. Pattern for multi-document analysis: '<document id='1' source='2024 Annual Report'>[content]</document><document id='2' source='Q1 2025 Earnings Call'>[content]</document><instructions>Compare the revenue projections in document 1 and document 2. Reference documents by their id when citing specific passages.</instructions>.' The id attribute instruction pays off immediately — Claude references 'document 1 states X while document 2 contradicts this with Y' instead of vague references to 'the first source.' For multi-turn pipelines where you want to accumulate structured context, use a <prior_context> tag that you update and pass back with each turn. This is more reliable than relying on conversation history formatting in API calls, especially for long sessions.
The 'reference documents by their id' instruction is a small thing that makes a big difference in any document comparison work. Without it, Claude will say 'the first document says' roughly 30% of the time and 'the annual report states' the other 70%, making it hard to parse source attributions programmatically. With explicit IDs, attribution is consistently machine-parseable.
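A sketch of assembling the multi-document prompt (document contents are placeholders; quoteattr handles quoting of arbitrary source names):

```python
from xml.sax.saxutils import escape, quoteattr

docs = [
    ("2024 Annual Report", "[annual report content]"),
    ("Q1 2025 Earnings Call", "[earnings call content]"),
]

# Each document gets a stable numeric id and a source attribute.
parts = [
    f"<document id=\"{i}\" source={quoteattr(source)}>{escape(text)}</document>"
    for i, (source, text) in enumerate(docs, start=1)
]
parts.append(
    "<instructions>Compare the revenue projections in document 1 and "
    "document 2. Reference documents by their id when citing specific "
    "passages.</instructions>"
)
prompt = "".join(parts)
```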
Tag multiple documents with id and source attributes: <document id='1' source='name'>
Instruct 'reference documents by their id' for parseable source attribution
Use <prior_context> tag for multi-turn pipeline state instead of raw conversation history
Update and pass back <prior_context> each turn to maintain structured state
XML works well with Claude's citations feature — combine for fully sourced outputs
For >5 documents: add a <document_index> at the start listing all ids and sources
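A minimal sketch of the <prior_context> pattern (the state format, fact tags, and build_turn_prompt helper are illustrative assumptions, not an Anthropic API):

```python
from xml.sax.saxutils import escape

def build_turn_prompt(prior_context: str, user_request: str) -> str:
    # Static instructions plus accumulated structured state, resent in
    # full on every turn instead of relying on raw conversation history.
    return (
        "<instructions>Answer the request, using prior context where "
        "relevant.</instructions>"
        f"<prior_context>{prior_context}</prior_context>"
        f"<user_request>{escape(user_request)}</user_request>"
    )

# After each response, append whatever is worth remembering to the
# state block and pass the updated block back on the next turn.
state = "<fact>User cares about quarterly revenue figures</fact>"
prompt = build_turn_prompt(state, "How did Q1 compare to projections?")
```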