AI Glossary
41 AI terms explained in plain English — from LLMs and tokens to RAG, fine-tuning, and agentic AI.
Attention: The core innovation in Transformer models that allows AI to weigh the importance of different parts of input text when generating each output token.
AI Agent: An AI system that can autonomously take actions, use tools, and complete multi-step tasks without human intervention at each step.
AI Alignment: The challenge of ensuring AI systems behave in accordance with human values, intentions, and societal well-being.
Agentic AI: AI systems that operate autonomously over extended periods, taking sequences of actions to complete complex goals.
API (Application Programming Interface): A software interface that allows developers to access AI model capabilities programmatically, and the foundation of most AI-powered products.
AI Safety: The field of research and practice focused on building AI systems that are safe, reliable, controllable, and aligned with human values.
Context Window: The maximum amount of text (measured in tokens) that an AI model can 'see' and process in a single interaction.
Chain-of-Thought: A prompting technique where the AI is guided to reason step by step before producing a final answer, significantly improving accuracy on complex tasks.
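In practice, chain-of-thought often amounts to adding a reasoning instruction to the prompt. A minimal sketch (the `build_cot_prompt` helper and its wording are illustrative, not any provider's API):

```python
def build_cot_prompt(question: str) -> str:
    """Wrap a question with a chain-of-thought instruction (illustrative only)."""
    return (
        f"{question}\n\n"
        "Think through this step by step, showing your reasoning, "
        "then state the final answer on its own line prefixed with 'Answer:'."
    )

prompt = build_cot_prompt("A train travels 120 km in 1.5 hours. What is its average speed?")
print(prompt)
```

The wrapped prompt nudges the model to produce intermediate reasoning before the final answer, which is where the accuracy gains come from.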
Constitutional AI: Anthropic's technique for training Claude to be helpful, harmless, and honest by having AI models critique and revise their own outputs based on a set of principles.
Fine-Tuning: Further training a pre-trained AI model on a smaller, task-specific dataset to specialize its behavior.
Few-Shot Prompting: Providing a small number of input-output examples in the prompt to guide the AI's response format and style.
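A few-shot prompt is just the examples laid out before the real query. A minimal sketch, with a hypothetical `build_few_shot_prompt` helper and a made-up sentiment task:

```python
def build_few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Format input/output example pairs ahead of the real query, so the
    model can infer the expected format and style (illustrative only)."""
    lines = [f"Input: {inp}\nOutput: {out}" for inp, out in examples]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

examples = [
    ("The movie was fantastic!", "positive"),
    ("Terrible service, never again.", "negative"),
]
print(build_few_shot_prompt(examples, "The food was okay, I guess."))
```

The prompt ends at "Output:", leaving the model to complete the pattern the examples established.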
Foundation Model: A large AI model trained on broad data at scale that can be adapted for many different downstream tasks.
RLHF (Reinforcement Learning from Human Feedback): A training technique used to align AI models with human preferences and values.
RAG (Retrieval-Augmented Generation): A technique that enhances LLM responses by retrieving relevant documents from an external knowledge base before generating an answer.
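The retrieve-then-generate flow can be sketched in a few lines. This toy version scores documents by word overlap, a stand-in for the embedding-based similarity search real RAG systems use; all names here are illustrative:

```python
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query (a crude stand-in
    for embedding similarity) and return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_rag_prompt(query: str, documents: list[str]) -> str:
    """Prepend the retrieved documents as context for the model."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, documents))
    return f"Use only the context below to answer.\n\nContext:\n{context}\n\nQuestion: {query}"

docs = [
    "The context window is the maximum text a model can process at once.",
    "Temperature controls randomness in model outputs.",
    "Tokens are the basic units of text a model processes.",
]
print(build_rag_prompt("What does the context window limit?", docs))
```

Because the answer is grounded in retrieved text rather than the model's memory, RAG helps with up-to-date or private information the model was never trained on.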
Reasoning Model: AI models specifically optimized for multi-step logical reasoning, math, and complex problem-solving, typically by using chain-of-thought at inference time.
Rate Limits: Constraints imposed by AI API providers on how many requests or tokens a user can process per minute, hour, or day.
System Prompt: Instructions given to an AI model before the user conversation begins, shaping its behavior, persona, and constraints.
Structured Output: Constraining an AI model to generate responses in a specific format (JSON, XML, etc.) for reliable programmatic processing.
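On the consuming side, structured output means the response can be parsed and validated like any other data. A minimal sketch, assuming a hypothetical model response and helper (real systems often pair this with provider-side JSON modes or schema enforcement):

```python
import json

def parse_model_json(raw: str, required_keys: set[str]) -> dict:
    """Parse a model response expected to be JSON and verify it contains
    the keys the application needs (illustrative helper)."""
    data = json.loads(raw)
    missing = required_keys - data.keys()
    if missing:
        raise ValueError(f"response missing keys: {missing}")
    return data

# A made-up model response for illustration.
raw = '{"sentiment": "positive", "confidence": 0.92}'
result = parse_model_json(raw, {"sentiment", "confidence"})
print(result["sentiment"])
```

Validating up front turns malformed model output into an explicit error instead of a silent downstream bug.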
Semantic Search: Search that finds results based on meaning and intent, not just keyword matching.
Token: The basic unit of text that AI language models process, roughly equivalent to 3/4 of a word in English.
Transformer: The neural network architecture that underpins virtually all modern large language models, introduced by Google researchers in 2017.
Temperature: A parameter (typically 0 to 2) that controls how random or deterministic an AI model's outputs are.
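Under the hood, temperature divides the model's raw scores (logits) before they are turned into probabilities. A minimal sketch of that scaling (the logit values are made up for illustration):

```python
import math

def softmax_with_temperature(logits: list[float], temperature: float) -> list[float]:
    """Scale logits by 1/temperature, then softmax. Low temperature sharpens
    the distribution (more deterministic); high temperature flattens it
    (more random). As temperature approaches 0, sampling approaches greedy
    picking of the top token."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
print(softmax_with_temperature(logits, 0.5))  # sharper: mass concentrates on the top token
print(softmax_with_temperature(logits, 1.5))  # flatter: probabilities move closer together
```

The same logits yield very different distributions, which is why low temperatures suit factual tasks and high temperatures suit creative ones.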
Top-P (Nucleus Sampling): A sampling parameter that limits token selection to the smallest set of tokens whose cumulative probability exceeds P.
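The nucleus-filtering step itself is straightforward to sketch. This illustrative version takes a token-to-probability mapping (the example probabilities are made up) and keeps only the nucleus:

```python
def top_p_filter(probs: dict[str, float], p: float) -> dict[str, float]:
    """Keep the smallest set of highest-probability tokens whose cumulative
    probability reaches p, then renormalize so the kept set sums to 1."""
    kept: dict[str, float] = {}
    cumulative = 0.0
    for token, prob in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[token] = prob
        cumulative += prob
        if cumulative >= p:
            break
    total = sum(kept.values())
    return {t: pr / total for t, pr in kept.items()}

probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "xylophone": 0.05}
print(top_p_filter(probs, 0.9))  # the unlikely tail ("xylophone") is dropped
```

Unlike a fixed top-k cutoff, the nucleus grows or shrinks with the model's confidence: a peaked distribution keeps few tokens, a flat one keeps many.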
Tool Use: The ability for AI models to call external functions, APIs, or tools to retrieve information or take actions beyond their training data.
Tokenization: The process of splitting text into tokens (the basic units an LLM processes) before feeding it to a model.
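A toy tokenizer makes the idea concrete. This naive version splits on words and punctuation; real LLM tokenizers (e.g. byte-pair encoding) instead learn subword units, so a rare word may become several tokens:

```python
import re

def naive_tokenize(text: str) -> list[str]:
    """Split text into word and punctuation tokens. Purely illustrative:
    production tokenizers operate on learned subword vocabularies."""
    return re.findall(r"\w+|[^\w\s]", text)

tokens = naive_tokenize("Tokenization isn't magic.")
print(tokens)
print(len(tokens), "tokens")
```

Even this crude splitter shows why token counts differ from word counts, which matters for context windows and API pricing.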
Ready to compare AI tools?
Now that you know the terminology, see how the top AI tools actually stack up.