
Prompt engineering fundamentals

A concise reference for system prompts, user messages, prompt structure, and the prompt components worth reaching for first.


System prompts vs. user messages

In a chat-style LLM API, a prompt has two structural layers.

System prompt

The system prompt usually lives in a separate system parameter outside the conversational message list. It provides context, instructions, and guidelines before the model sees the user’s question.

A well-written system prompt improves the model’s ability to follow rules across the conversation.

system prompt text
Your answer should always be a series of critical thinking questions
that further the conversation. Do not actually answer the user question.
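The split between the system parameter and the message list can be sketched as a request payload. Field names follow the common chat-API shape (the model id and helper name are illustrative, not part of any specific SDK):

```python
# Sketch of a chat-style API request. The durable instructions go in
# "system"; the current task goes in the "messages" list.
def build_request(system_prompt: str, user_text: str) -> dict:
    return {
        "model": "example-model",  # illustrative model id
        "max_tokens": 1024,
        # The system prompt lives outside the conversational message list.
        "system": system_prompt,
        "messages": [{"role": "user", "content": user_text}],
    }

request = build_request(
    "Your answer should always be a series of critical thinking questions "
    "that further the conversation. Do not actually answer the user question.",
    "Why is the sky blue?",
)
```

Because the rule rides along in `system` on every request, it persists across turns without being repeated in each user message.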

User messages

The messages array is the conversational backbone: an alternating sequence of user and assistant turns. The API enforces two rules:

  • Messages must alternate between user and assistant roles.
  • Messages must start with a user turn.

Each message contains role and content fields. The model generates the next assistant message from the full conversation history.
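The two structural rules above can be expressed as a small validator. This is a sketch of the check the API performs, not any SDK's actual code:

```python
# Enforce the two rules on a messages array: the conversation must start
# with a user turn, and user/assistant roles must alternate.
def validate_messages(messages: list[dict]) -> None:
    if not messages or messages[0]["role"] != "user":
        raise ValueError("conversation must start with a user turn")
    for prev, curr in zip(messages, messages[1:]):
        if prev["role"] == curr["role"]:
            raise ValueError("user and assistant turns must alternate")

validate_messages([
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi! How can I help?"},
    {"role": "user", "content": "Summarize this article."},
])  # passes: starts with a user turn, roles alternate
```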

Where to put role instructions

Role prompting can live in either the system prompt or the user message. If the persona should persist across the session, the system prompt is the natural home. If the role applies to one task, the user message is enough.

What a good prompt looks like

A good prompt is clear, specific, and structured.

Be clear and direct

The model sees only the context you give it. Show the prompt to a colleague; if they would be confused about the expected output, the model will be too.

Be specific about intent

Small phrasing changes shift behavior. “Who is the best basketball player?” invites a balanced answer. “If you absolutely had to pick one player as the best, who would it be?” asks for a definitive answer.

Separate instructions from data

Use XML tags to separate instructions from input data.

tagged-input prompt xml
Answer the user's question using only the document below.

<document>
Four score and seven years ago...
</document>

What is the main topic of the document?

Wrapping input in named tags makes it much harder for the model to confuse data with instructions.
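A small helper makes this pattern reusable. The function name and keyword-argument convention are hypothetical; the point is that each data section becomes a named tag between the instruction and the question:

```python
# Hypothetical helper: keep instructions and input data in separate,
# clearly named XML sections of a single prompt string.
def tagged_prompt(instruction: str, question: str, **sections: str) -> str:
    parts = [instruction, ""]
    for tag, body in sections.items():
        # Each keyword argument becomes a <tag>...</tag> block.
        parts.append(f"<{tag}>\n{body}\n</{tag}>")
    parts.extend(["", question])
    return "\n".join(parts)

prompt = tagged_prompt(
    "Answer the user's question using only the document below.",
    "What is the main topic of the document?",
    document="Four score and seven years ago...",
)
```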

Give the model space to think

For complex tasks, visible step-by-step reasoning can improve accuracy. Thinking only counts when it is expressed in the interaction; an instruction to reason internally and output only the answer is not a reliable control surface.

Include examples

Examples are one of the strongest controls for behavior. Provide at least one ideal response in <example> tags, and add examples for edge cases when consistency matters.
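A sketch of the few-shot pattern, with each demonstration wrapped in `<example>` tags ahead of the live task (the `Input:`/`Output:` labels are illustrative, not required by any API):

```python
# Build a few-shot prompt: worked examples in <example> tags, then the task.
def with_examples(task: str, examples: list[tuple[str, str]]) -> str:
    blocks = [
        f"<example>\nInput: {inp}\nOutput: {out}\n</example>"
        for inp, out in examples
    ]
    return "\n".join(blocks) + "\n\n" + task

prompt = with_examples(
    "Extract the city from: 'Flight departs Paris at 9am.'",
    [
        ("Meeting in Berlin on Tuesday.", "Berlin"),
        ("Lunch in Osaka next week.", "Osaka"),
    ],
)
```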

Give the model an out

To reduce fabrication, explicitly permit uncertainty: “If you are unsure how to respond, say ‘Sorry, I did not understand that.’” Without an escape hatch, the model may produce a plausible answer instead of admitting uncertainty.

Put the question at the bottom

In long prompts, place the immediate task near the end. This keeps the actual request salient after the model has processed the context.

Start broad, then slim down

Use enough prompt elements to make the behavior work, then remove what testing proves unnecessary. Premature minimalism creates fragile prompts.

Write well

The model mirrors the quality and structure of the prompt. Clear writing produces better output than sloppy writing with the same intent.

The main components of a prompt

Complex prompts are built from up to 10 elements.

Prompt components
 #   Component                   Ordering         Purpose
 1   User role                   Fixed            The API requires messages to start with a user role
 2   Task context                Best early       Give the persona, role, and overarching goals
 3   Tone context                Flexible         Specify how the answer should sound
 4   Task description and rules  Flexible         Define tasks, constraints, edge cases, and unknown handling
 5   Examples                    Flexible         Demonstrate the desired behavior
 6   Input data                  Flexible         Provide source material wrapped in XML tags
 7   Immediate task              Best toward end  Reiterate what to do now
 8   Precognition                Best toward end  Ask for visible step-by-step thinking when useful
 9   Output formatting           Best toward end  Specify XML, JSON, or another structure
 10  Prefilling                  Assistant role   Start the assistant response to steer the shape
prompt order text
Task context -> Tone context -> Task description -> Examples ->
Input data -> Immediate task -> Precognition -> Output formatting
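The ordering above can be captured as a simple assembly function. The component names and separator are illustrative; only the pieces you supply end up in the prompt, in the recommended order:

```python
# Assemble optional prompt components in the recommended order.
ORDER = [
    "task_context", "tone_context", "task_rules", "examples",
    "input_data", "immediate_task", "precognition", "output_formatting",
]

def assemble_prompt(**components: str) -> str:
    # Skip anything not provided; join the rest with blank lines.
    return "\n\n".join(components[k] for k in ORDER if k in components)

prompt = assemble_prompt(
    task_context="You are a career coach reviewing resumes.",
    input_data="<resume>10 years of Python experience...</resume>",
    immediate_task="Suggest three improvements to this resume.",
    output_formatting="Answer as a numbered list.",
)
```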

This order is a starting point. Legal, medical, or document-heavy prompts may move examples or input data earlier when that better fits the task.

When to use each component

Not every prompt needs all 10 elements.

Always include

  • User role — Required by the API.
  • Immediate task — The current question or instruction.

Use when the situation calls for it

Task context fits persistent personas, domain experts, and workflows where the goal needs to be stated before the work begins.

Tone context fits customer-facing, educational, executive, or editorial tasks where the wrong tone undermines the answer.

Task description and rules fit tasks with constraints, compliance requirements, edge cases, or explicit unknown handling.

Examples fit behavior that must be consistent, especially tricky formats, edge cases, tone, and multi-step reasoning patterns.

Input data fits documents, conversation history, code, search results, or any external source material the model must process.

Precognition fits multi-step reasoning, analysis, diagnosis, code review, math, and tasks where jumping straight to the answer produces worse results.

Output formatting fits responses consumed by software or downstream workflows.

Prefilling fits cases where the response shape must be compelled. Prefill with { for JSON, <response> for tagged XML, or a character marker for roleplay.
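Prefilling is expressed in the request itself: the message list ends with a partial assistant turn, and the model continues from it. A minimal sketch, using the common chat-API message shape:

```python
# Prefilling: end the messages array with a partial assistant turn.
# The model's completion continues from the prefilled text, so starting
# with "{" compels the response to be a JSON object.
messages = [
    {"role": "user", "content": "List three prompt components as JSON."},
    {"role": "assistant", "content": "{"},
]
```

Note that the prefill is the one place an assistant turn comes last; the model's output is appended to the prefilled content rather than starting a fresh message.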

Takeaways

System prompts set durable rules

The `system` parameter carries persistent instructions, while user messages carry the current task and conversation history.

Structure beats vague intent

Clear, specific prompts with visible constraints produce more reliable behavior than broad requests that leave success implicit.

XML tags protect boundaries

Named tags separate instructions from data, examples, and output sections so the model is less likely to confuse their roles.

Examples are a strong control

Concrete examples teach the desired pattern more reliably than rules alone, especially for edge cases and formatting.

Uncertainty needs an escape hatch

When fabrication is worse than refusal, the prompt should explicitly permit the model to say it does not know.

Prompts should slim down after testing

Use enough components to make the behavior work, then remove the parts that real tests show are unnecessary.