We Compare AI

Foundation Model

Core Concepts
Simple Definition

A large AI model trained on broad data at scale that can be adapted for many different downstream tasks.

Full Explanation

The term was coined by Stanford's Center for Research on Foundation Models in 2021. Foundation models (GPT-4, Claude, Gemini, LLaMA) are trained once at enormous scale, then fine-tuned or prompted for specific applications. The "foundation" metaphor captures the idea that these models serve as the base layer for thousands of AI applications built on top of them.
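The train-once, adapt-many-times pattern can be illustrated with a toy Python sketch. Here `foundation_model` is a stand-in stub, not a real model API; in practice it would be a call to an actual pretrained model, and adaptation could be fine-tuning rather than prompting:

```python
# Conceptual sketch only: one pretrained base model, many downstream apps.
# foundation_model is a hypothetical stand-in for a large pretrained model.

def foundation_model(prompt: str) -> str:
    """Stub representing a single large model trained once at scale."""
    return f"[model output for: {prompt}]"

def make_task_app(instruction: str):
    """Adapt the same base model to a downstream task via prompting."""
    def app(user_input: str) -> str:
        # Prepend a task-specific instruction; the base model is unchanged.
        return foundation_model(f"{instruction}\n\n{user_input}")
    return app

# Many applications, one foundation:
summarizer = make_task_app("Summarize the following text:")
translator = make_task_app("Translate the following text to French:")

print(summarizer("Foundation models are trained once at enormous scale."))
```

The key point of the sketch is that `summarizer` and `translator` share the same underlying model; only the thin adaptation layer differs.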

Last verified: 2026-03-30