Prompt Engineering
Core Concepts

The practice of crafting inputs to AI models to reliably get better, more accurate, or more specific outputs.
Full Explanation
Prompt engineering includes techniques like chain-of-thought (asking the model to reason step by step), few-shot prompting (giving examples), role assignment, and structured output formatting. A well-engineered prompt can dramatically improve output quality without changing the underlying model.
Example

Adding 'Let's think step by step' to a math problem prompt can significantly improve an LLM's accuracy on multi-step reasoning.
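A chain-of-thought cue like this can be appended mechanically to any question. The sketch below is a minimal illustration; the function name and exact wording are illustrative choices, not a standard API:

```python
def chain_of_thought_prompt(question: str) -> str:
    """Wrap a question with a step-by-step reasoning cue (illustrative)."""
    return f"{question}\n\nLet's think step by step."

prompt = chain_of_thought_prompt(
    "A train travels 60 miles in 1.5 hours. What is its average speed?"
)
print(prompt)
```

The resulting string is sent to the model as-is; the cue at the end nudges the model to produce intermediate reasoning before its final answer.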
Related Terms
Few-Shot Prompting: Providing a small number of input-output examples in the prompt to guide the AI's response format and style.

Chain-of-Thought Prompting: A prompting technique where the AI is guided to reason step-by-step before producing a final answer, significantly improving accuracy on complex tasks.

System Prompt: Instructions given to an AI model before the user conversation begins, shaping its behavior, persona, and constraints.
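Few-shot prompting, described above, amounts to formatting labeled examples ahead of the new query so the model imitates their pattern. This is a minimal sketch; the `Input:`/`Output:` labels and function name are illustrative conventions, not a required format:

```python
def few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Prepend labeled input/output examples before the new query."""
    blocks = [f"Input: {inp}\nOutput: {out}" for inp, out in examples]
    # End with an unfinished "Output:" so the model completes it.
    blocks.append(f"Input: {query}\nOutput:")
    return "\n\n".join(blocks)

examples = [
    ("The movie was wonderful.", "positive"),
    ("I wasted two hours.", "negative"),
]
print(few_shot_prompt(examples, "The plot dragged but the acting was great."))
```

Leaving the final `Output:` blank is the key design choice: the model's natural continuation of the pattern becomes the answer, in the same format as the examples.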