Written by Beginners in AI
Last updated: March 2026

This glossary explains 50 common AI terms in plain English — no technical background needed. Every definition is written so you can understand it on the first read and explain it to a friend over coffee. Terms are listed alphabetically for easy reference, and each entry includes why the concept actually matters in everyday life.

Whether you just heard a term on the news, saw it in a meeting, or stumbled across it on social media, this is the page to bookmark.

A

1. AGI (Artificial General Intelligence)

A hypothetical type of AI that could understand, learn, and perform any intellectual task a human can do — not just one narrow thing. Today's AI tools are "narrow AI," meaning they are very good at specific tasks (like writing or image recognition) but cannot genuinely reason across all domains the way a person can. AGI matters because it is the long-term goal many AI labs are working toward, and debates about its timeline shape AI policy and safety decisions worldwide. Most experts surveyed by AI Impacts in 2025 estimate AGI is still decades away.

2. Algorithm

A set of step-by-step instructions that tells a computer how to solve a problem or complete a task. Think of it like a recipe: the ingredients are data, and the algorithm is the set of directions that turns those ingredients into a result. Algorithms power everything from your Netflix recommendations to spam filters in your email.
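The recipe analogy can be made concrete. Here is a tiny Python sketch of an algorithm for finding the largest number in a list; the function name and the numbers are our own, purely for illustration:

```python
# A tiny algorithm: step-by-step instructions for finding the
# largest number in a list, written out like a recipe.
def find_largest(numbers):
    largest = numbers[0]      # step 1: start with the first number
    for n in numbers[1:]:     # step 2: look at each remaining number
        if n > largest:       # step 3: if it beats the current best,
            largest = n       #         remember it instead
    return largest            # step 4: report the answer

print(find_largest([3, 41, 7, 12]))  # prints 41
```

Every algorithm, from spam filters to Netflix recommendations, is built from steps like these, just vastly more of them.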

3. API (Application Programming Interface)

A set of rules that lets two pieces of software talk to each other. When an app on your phone checks the weather, it uses an API to ask a weather service for the latest data. In AI, APIs let developers plug AI capabilities (like text generation from Claude or image creation from DALL-E) into their own apps without building the AI from scratch.
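To make the idea concrete, here is a sketch of how an app might form an API request to a weather service. The web address and parameter names below are invented for illustration; every real weather API defines its own:

```python
from urllib.parse import urlencode

# A made-up weather API address, for illustration only.
BASE_URL = "https://api.example-weather.com/v1/current"

def build_request(city, units="celsius"):
    # The API's "rules" are simply an agreed-on URL format and
    # set of parameters that both sides understand.
    return BASE_URL + "?" + urlencode({"city": city, "units": units})

print(build_request("Paris"))
# https://api.example-weather.com/v1/current?city=Paris&units=celsius
```

The app sends that request, and the weather service replies with data in an agreed-on format. That shared agreement is the API.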

4. Artificial Intelligence (AI)

The broad field of computer science focused on building machines that can perform tasks that normally require human intelligence — things like understanding language, recognizing images, making decisions, and solving problems. AI is an umbrella term. Everything else in this glossary falls under it. As of 2026, AI is used in healthcare, finance, education, marketing, transportation, and virtually every other industry.

5. Attention Mechanism

The part of a modern AI model that lets it focus on the most relevant pieces of input when generating a response. When you ask an AI a question about a long document, the attention mechanism helps it zero in on the sentences that actually matter. It is the core innovation behind transformer models (see: Transformer) and was introduced in Google's landmark 2017 paper "Attention Is All You Need."
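The core arithmetic behind attention is simpler than it sounds. In this toy sketch (the scores are invented), each sentence of a document gets a relevance score, and a standard formula called softmax converts those scores into attention weights that add up to 1:

```python
import math

# Toy attention: turn relevance scores into weights that sum to 1.
# The model then "pays attention" to each part in proportion.
def attention_weights(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Pretend sentence 3 is far more relevant to the question.
weights = attention_weights([0.1, 0.2, 2.5])
print([round(w, 2) for w in weights])  # [0.08, 0.08, 0.84]
```

The third sentence gets about 84% of the attention, so it dominates the model's answer. Real models compute scores like these between every pair of words, in many layers at once.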

B

6. Bias (in AI)

When an AI system produces results that are systematically unfair or skewed, usually because the data it was trained on reflected existing human biases. For example, if a hiring AI was trained mostly on resumes from men, it might unfairly rank women's resumes lower. Bias matters because AI decisions increasingly affect real lives — from loan approvals to medical diagnoses — and unchecked bias can scale discrimination faster than any human could.

C

7. Chatbot

A software application that simulates conversation with humans, usually through text. Modern AI chatbots like Claude, ChatGPT, and Grok are powered by large language models and can hold nuanced, multi-turn conversations. Older chatbots followed rigid scripts; today's versions can understand context, answer follow-up questions, and even adjust their tone. If you're wondering which chatbot to start with, check out our guide to the best free AI tools.

8. ChatGPT

An AI chatbot created by OpenAI, launched in November 2022. ChatGPT popularized conversational AI for the general public and reached 100 million users within two months of launch — the fastest-growing consumer application in history at that time. It is powered by OpenAI's GPT series of large language models.

9. Claude

An AI assistant built by Anthropic, designed with a focus on being helpful, harmless, and honest. Claude is known for thoughtful, detailed responses and strong performance on complex reasoning tasks. Claude Code is Anthropic's coding-focused tool that can write, debug, and explain code. As of early 2026, Claude is widely used for writing, research, analysis, and programming tasks. See our ChatGPT vs Claude comparison to learn how they differ.

10. Context Window

The amount of text (measured in tokens) that an AI model can "see" and work with at one time. Think of it like the AI's short-term memory. A larger context window means the AI can read longer documents, remember more of your conversation, or process bigger tasks in a single pass. As of March 2026, leading models offer context windows ranging from 128,000 to over 1 million tokens.

D

11. Data

The raw information — text, images, numbers, audio, video — that AI systems learn from and work with. Data is the fuel for every AI model. The quality and diversity of training data directly affect how well an AI performs. The phrase "garbage in, garbage out" applies perfectly: if you train an AI on bad data, you get bad results.

12. Deep Learning

A specialized type of machine learning that uses neural networks (see: Neural Network) with many layers to learn complex patterns from large amounts of data. Deep learning is what powers image recognition, voice assistants, language translation, and most modern AI breakthroughs. It is called "deep" because the neural networks have many layers stacked on top of each other, not because it requires deep thinking on your part.

13. Diffusion Model

A type of AI model that generates images (or other media) by starting with random noise and gradually refining it into a clear picture, step by step. This is how tools like Midjourney, DALL-E, and Stable Diffusion create images from text descriptions. The name comes from the physics concept of diffusion — the process works in reverse, turning chaos into order.

E

14. Embedding

A way of representing words, sentences, or other data as lists of numbers (called vectors) so that an AI can understand relationships between them. Words with similar meanings end up with similar number patterns. Embeddings are how AI tools "understand" that "dog" and "puppy" are related, even though the words look completely different. They power search engines, recommendation systems, and AI-assisted writing tools.
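Here is a miniature version of the idea. Real embeddings have hundreds or thousands of numbers; these three-number vectors are invented for illustration. "Cosine similarity," a standard formula, measures how closely two vectors point in the same direction:

```python
import math

# Made-up toy embeddings: similar meanings get similar numbers.
embeddings = {
    "dog":    [0.90, 0.80, 0.10],
    "puppy":  [0.85, 0.75, 0.20],
    "banana": [0.10, 0.20, 0.95],
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# "dog" and "puppy" score much higher than "dog" and "banana".
print(round(cosine_similarity(embeddings["dog"], embeddings["puppy"]), 2))
print(round(cosine_similarity(embeddings["dog"], embeddings["banana"]), 2))
```

That numeric closeness is what lets a search engine return "puppy" results for a "dog" query even when the words never match.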

15. Ethical AI

The practice of developing and deploying AI systems in ways that are fair, transparent, accountable, and respectful of human rights. This includes addressing bias, ensuring privacy, being transparent about how AI makes decisions, and considering the societal impact of AI tools. As AI becomes more powerful, ethical AI practices are increasingly backed by regulation — the EU AI Act, which took effect in 2025, is the most comprehensive example. Learn more in our guide on whether AI is safe.

F

16. Fine-Tuning

The process of taking a pre-trained AI model and training it further on a specific, smaller dataset to make it better at a particular task. It is like hiring a general contractor and then giving them specialized training for your exact project. Fine-tuning is how companies customize general-purpose AI models for their specific needs — for example, training a language model on medical literature so it can assist doctors more effectively.

17. Foundation Model

A large AI model trained on broad data that can be adapted for many different tasks. Models like GPT-4, Claude, and Gemini are foundation models — they were not built for one single purpose but can be fine-tuned or prompted to do writing, coding, analysis, translation, and more. The term was coined by Stanford's Center for Research on Foundation Models in 2021.

G

18. Gemini

Google's family of AI models, and also the name of Google's AI chatbot, which was previously called Bard. Gemini is designed to be multimodal (see: Multimodal) from the ground up, meaning it can work with text, images, audio, and video. It is integrated into Google products like Search, Workspace, and Android.

19. Generative AI

AI that creates new content — text, images, music, video, code — rather than just analyzing or categorizing existing content. When ChatGPT writes an email for you, when Midjourney creates an image from your description, or when Claude helps you draft a business plan, that is generative AI at work.

20. GPT (Generative Pre-trained Transformer)

A family of large language models created by OpenAI. "Generative" means it creates content, "Pre-trained" means it learned from massive datasets before being fine-tuned, and "Transformer" refers to the underlying architecture. GPT-4 and its successors power ChatGPT and are among the most widely used AI models in the world.

21. Grok

An AI assistant built by xAI, Elon Musk's AI company. Grok is known for its real-time access to information from the X platform (formerly Twitter), its willingness to tackle edgy or unconventional questions, and its conversational, sometimes witty personality. As of 2026, Grok has become a popular alternative to ChatGPT and Claude, particularly for users who want AI with up-to-the-minute information and a less filtered communication style.

22. Guardrails

Safety measures built into AI systems to prevent them from producing harmful, biased, or inappropriate outputs. Guardrails can include content filters, output restrictions, behavioral guidelines, and monitoring systems. Every major AI company implements guardrails, though they differ in how strict or permissive they are. Guardrails are why AI chatbots will typically decline to help with dangerous requests.

H

23. Hallucination

When an AI confidently generates information that sounds plausible but is actually wrong or made up. For example, an AI might invent a fake research study, cite a book that does not exist, or provide incorrect statistics with total confidence. Hallucination is one of the biggest challenges in AI today. It is why you should always fact-check important AI-generated content — AI does not "know" things the way humans do; it predicts what text should come next based on patterns.

I

24. Inference

The process of using a trained AI model to make predictions or generate outputs. Training is when the AI learns; inference is when it uses what it learned. Every time you send a message to ChatGPT, Claude, or Grok and get a response, that is inference happening. Inference requires significant computing power, which is why running AI at scale is expensive.

Enjoying this glossary? Subscribe to Beginners in AI for daily explanations of AI news, tools, and concepts — all in plain English. Free forever.

L

25. Large Language Model (LLM)

A type of AI model trained on massive amounts of text data that can understand and generate human language. LLMs are what power tools like ChatGPT, Claude, and Grok. "Large" refers to both the enormous training datasets (often trillions of words from books, websites, and other text) and the billions of parameters (see: Parameters) in the model. LLMs are the foundation of the current AI revolution. For a beginner-friendly introduction, see our AI for Beginners guide.

M

26. Machine Learning (ML)

A branch of AI where computers learn to perform tasks by finding patterns in data, rather than being explicitly programmed with rules. Instead of writing code that says "if the email contains these words, it is spam," you feed the system thousands of examples of spam and legitimate emails and let it figure out the pattern itself. Machine learning powers everything from fraud detection at your bank to song recommendations on Spotify.

27. Model

In AI, a model is the trained system that takes input and produces output. Think of it as the "brain" that has learned from data. When people say "GPT-4 is a model" or "Claude is a model," they mean it is a specific trained AI system with particular capabilities. Different models have different strengths, weaknesses, and specialties.

28. Multimodal

An AI system that can understand and work with multiple types of input — text, images, audio, and video — rather than just one type. A multimodal AI can look at a photo and describe what is in it, listen to audio and transcribe it, or analyze a chart and explain the trends. Most leading AI models in 2026, including Claude, GPT-4, Gemini, and Grok, are multimodal.

N

29. Natural Language Processing (NLP)

The branch of AI focused on helping computers understand, interpret, and generate human language. NLP is what lets AI tools read your emails, translate languages, summarize articles, and hold conversations. Before modern NLP, interacting with computers required precise, structured commands. NLP lets you talk to computers the way you talk to people.

30. Neural Network

A computing system inspired by the human brain, made up of layers of interconnected nodes (like simplified neurons) that process information. Data goes in one side, passes through the layers where patterns are detected, and results come out the other side. Neural networks are the building blocks of deep learning and power most modern AI. They are not actual neurons — the name is a metaphor for how the system's architecture loosely mimics biological brain structure.
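A single node, often called a "neuron," is surprisingly simple. In this sketch (all numbers invented), it weighs its inputs, adds them up, and squashes the result into a value between 0 and 1:

```python
import math

# One "neuron": weigh the inputs, add them up, squash the result.
# Stacking layers of thousands of these gives a neural network.
def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))   # squash into the 0-1 range

# Made-up numbers; training is what tunes the weights and bias.
output = neuron([0.5, 0.8], weights=[1.2, -0.7], bias=0.1)
print(round(output, 2))  # 0.53
```

Training is the process of nudging all those weights and biases until the network's outputs become useful.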

O

31. Open Source (in AI)

AI models or tools whose code is publicly available for anyone to view, use, modify, and distribute. Meta's Llama models are a prominent example (strictly speaking they are "open-weight": the trained model is free to download, though the training data and some usage terms are not fully open). Open-source AI democratizes access to the technology, allowing researchers, startups, and hobbyists to build on top of existing work without starting from scratch. The tradeoff is that open-source models can be harder to use and may lack the safety guardrails of commercial products.

32. Overfitting

When an AI model learns its training data too well — including the noise and irrelevant details — and performs poorly on new, unseen data. It is like a student who memorizes the exact answers to practice questions but cannot solve new problems. Overfitting is a common challenge in machine learning that engineers work to prevent through techniques like regularization and validation testing.

P

33. Parameters

The internal settings (numerical values) that an AI model adjusts during training to improve its performance. Think of parameters as the millions or billions of tiny dials the model tunes to get better at its task. GPT-4 is estimated to have over 1 trillion parameters. More parameters generally means the model can learn more complex patterns, but also requires more computing power to train and run.

34. Perplexity

Perplexity is an AI-powered search engine that answers questions by searching the internet and citing its sources directly. Unlike traditional search engines that give you a list of links, Perplexity reads the sources and synthesizes an answer for you, showing exactly where each piece of information came from. (In AI research, "perplexity" is also a technical measure of how well a language model predicts text, but in everyday conversation the name almost always refers to the search product.) Read our Perplexity vs Google comparison to see how it stacks up.

35. Prompt

The text instruction or question you give to an AI system. When you type "Write me a thank-you email" into Claude or ChatGPT, that is your prompt. The quality of your prompt directly affects the quality of the AI's response. A vague prompt gets a vague answer; a specific, well-structured prompt gets a much more useful result.

36. Prompt Engineering

The skill of writing effective prompts to get the best possible results from AI tools. It involves techniques like being specific about what you want, providing context, giving examples, specifying the format of the output, and assigning the AI a role. Good prompt engineering can dramatically improve AI outputs. For practical tips, see our guide on how to write AI prompts.

R

37. RAG (Retrieval-Augmented Generation)

A technique where an AI model retrieves relevant information from external sources (like databases or documents) before generating its response, rather than relying solely on what it learned during training. RAG helps reduce hallucinations (see: Hallucination) by grounding the AI's answers in actual, up-to-date data. It is widely used in enterprise AI applications where accuracy is critical, like customer support systems and internal knowledge bases.
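Here is the retrieve-then-generate flow in miniature. The "retrieval" below is naive word matching on two invented company notes; real systems use embeddings and a vector database (see: Vector Database), but the two-step shape is the same:

```python
# Minimal RAG sketch: (1) retrieve the most relevant note,
# (2) hand it to the model inside the prompt.
notes = [
    "Our refund window is 30 days from purchase.",
    "Support hours are 9am to 5pm on weekdays.",
]

def retrieve(question):
    # Naive matching: pick the note sharing the most words.
    q_words = set(question.lower().split())
    return max(notes, key=lambda n: len(q_words & set(n.lower().split())))

def build_prompt(question):
    context = retrieve(question)   # step 1: retrieve
    return f"Using this context: '{context}', answer: {question}"

print(build_prompt("How many days is the refund window"))
```

Because the answer is grounded in the retrieved note rather than the model's memory, the AI can quote the real 30-day policy instead of guessing one.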

38. Reinforcement Learning

A type of machine learning where an AI learns by trial and error, receiving rewards for good actions and penalties for bad ones. It is similar to how you might train a dog: the AI tries something, gets feedback on whether it was good or bad, and adjusts its behavior. Reinforcement learning from human feedback (RLHF) is the technique used to make chatbots like ChatGPT and Claude more helpful and less harmful.

S

39. Sentiment Analysis

The use of AI to determine the emotional tone of text — whether it is positive, negative, or neutral. Businesses use sentiment analysis to monitor customer reviews, social media mentions, and support tickets at scale. For example, a company might use sentiment analysis to automatically flag angry customer emails for priority response.
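A deliberately crude version makes the input/output shape clear: text goes in, a tone label comes out. Real sentiment models learn far subtler signals than the invented word lists below, but the job is the same:

```python
# Toy sentiment check: count positive vs negative words.
POSITIVE = {"love", "great", "excellent", "happy"}
NEGATIVE = {"hate", "terrible", "awful", "angry"}

def sentiment(text):
    words = text.lower().split()
    score = (sum(w in POSITIVE for w in words)
             - sum(w in NEGATIVE for w in words))
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this great product"))            # positive
print(sentiment("terrible service and awful support"))   # negative
```

A real model would also catch sarcasm, negation ("not great"), and context, which is exactly why machine learning replaced word lists for this task.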

40. Supervised Learning

A type of machine learning where the AI is trained using labeled data — examples that include both the input and the correct answer. It is like studying with a textbook that has an answer key. The AI learns to match inputs to correct outputs by seeing thousands or millions of labeled examples. Email spam filters are a classic example: the AI is shown thousands of emails labeled "spam" or "not spam" and learns to classify new emails.

T

41. Temperature

A setting that controls how creative or random an AI's responses are. Low temperature (closer to 0) makes the AI more predictable and focused, producing the most likely response. High temperature (closer to 1 or 2) makes the AI more creative and varied, but also more unpredictable. Writing a legal document? Use low temperature. Brainstorming creative ideas? Use high temperature. Many AI tools let you adjust this setting.
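You can see the effect with a little arithmetic. In this sketch, three candidate next words have invented scores; dividing the scores by the temperature before converting them to probabilities is how the setting actually works inside the model:

```python
import math

# How temperature reshapes the choice between candidate next words.
def probabilities(scores, temperature):
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [round(e / total, 2) for e in exps]

scores = [2.0, 1.0, 0.5]           # toy scores for three words
print(probabilities(scores, 0.2))  # low temp: top word dominates
print(probabilities(scores, 2.0))  # high temp: choices spread out
```

At low temperature the top-scoring word is picked almost every time; at high temperature the alternatives get a real chance, which is where the variety (and unpredictability) comes from.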

42. Token

The basic unit of text that AI models process. A token is roughly 3/4 of a word in English — so the word "understanding" might be split into "under" and "standing" as two tokens. AI models read, process, and generate text in tokens, not words. Token limits determine how much text you can include in a prompt and how long the AI's response can be.
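The rough 3/4-of-a-word figure gives you a handy back-of-the-envelope estimate. This sketch is only a ballpark; real tokenizers split text using learned rules, not simple word counts:

```python
# Rule-of-thumb token estimate: roughly 4 tokens for every 3 words.
def estimate_tokens(text):
    words = len(text.split())
    return round(words / 0.75)

print(estimate_tokens("The quick brown fox jumps over the lazy dog"))  # 12
```

So a 1,500-word article is roughly 2,000 tokens, a useful conversion when you're checking whether a document fits in a model's context window.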

43. Training Data

The dataset used to teach an AI model. For large language models, training data typically includes billions of pages of text from books, websites, academic papers, code repositories, and other sources. The content, quality, and diversity of training data fundamentally shape what the AI knows and how it behaves. Controversies around training data — including copyright concerns and data privacy — remain active legal and ethical debates as of 2026.

44. Transformer

The neural network architecture behind virtually all modern large language models. Introduced in a 2017 Google research paper titled "Attention Is All You Need," transformers process all parts of an input simultaneously (rather than one word at a time, like older models), making them much faster and better at understanding context. The "T" in GPT stands for Transformer. This single innovation is arguably the most important technical breakthrough behind the current AI boom.

45. Turing Test

A test proposed by mathematician Alan Turing in 1950 to determine whether a machine can exhibit intelligent behavior indistinguishable from a human. In the classic version, a human judge has text conversations with both a human and a machine. If the judge cannot reliably tell which is which, the machine "passes" the test. While modern AI chatbots can often fool people in short conversations, most researchers consider the Turing Test an incomplete measure of true intelligence.

U

46. Unsupervised Learning

A type of machine learning where the AI finds patterns in data without being given labeled examples or correct answers. Instead of being told "this is a cat, this is a dog," the AI looks at thousands of images and discovers on its own that some images cluster together. Unsupervised learning is useful for discovering hidden patterns, grouping similar customers, and detecting anomalies like fraud.

V

47. Vector Database

A specialized database designed to store and search embeddings (see: Embedding) — the numerical representations of text, images, or other data. Vector databases make it possible to find similar items quickly, which is essential for AI-powered search, recommendation systems, and RAG (see: RAG). Popular vector databases include Pinecone, Weaviate, and Chroma.
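Here is the core job of a vector database in miniature: store vectors, return the closest match to a query vector. The two-number vectors and labels below are invented; real systems index millions of high-dimensional vectors and use clever shortcuts so they never have to scan every entry the way this loop does:

```python
import math

# A "vector database" of two documents, each stored as a toy vector.
store = {
    "article about dogs":    [0.9, 0.1],
    "article about finance": [0.1, 0.9],
}

def nearest(query):
    # Linear scan: return the stored item closest to the query vector.
    return min(store, key=lambda name: math.dist(store[name], query))

print(nearest([0.8, 0.2]))  # article about dogs
```

A query vector near the "dogs" direction retrieves the dogs article, which is exactly the lookup step that powers AI search and RAG pipelines.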

48. Vision Model

An AI model specifically designed to understand and analyze images or video. Vision models can identify objects, read text in photos, describe scenes, detect faces, and more. Modern multimodal AI tools like Claude, GPT-4, and Gemini include vision capabilities, meaning you can upload a photo and ask questions about it.

W

49. Weights

The numerical values inside a neural network that determine how strongly different connections influence the output. During training, the AI adjusts its weights to get better at its task — similar to how a musician adjusts their technique through practice. When people talk about "model weights," they are referring to these billions of internal values that encode everything the model has learned. Weights are what get saved when a model is finished training.

Z

50. Zero-Shot Learning

An AI's ability to perform a task it was never specifically trained to do, based on its general knowledge. For example, if you ask Claude to translate a language it was not explicitly trained to translate, and it does a reasonable job by applying its general understanding of language patterns, that is zero-shot learning. This capability is one of the most remarkable properties of large language models — they can generalize to new tasks without additional training.

Want to stay current as AI evolves? These 50 terms are just the starting point. Subscribe to Beginners in AI for daily explanations of new AI developments, tools, and concepts — all written in the same plain-English style as this glossary. Free forever.

How to Use This Glossary

This glossary is designed to be a living reference. Bookmark this page and come back whenever you encounter an unfamiliar AI term. Here are a few tips:

  • Use Ctrl+F (or Cmd+F on Mac) to quickly search for any term on this page

  • Follow the cross-references — terms marked with "(see: Term Name)" link to related concepts in this glossary

  • Start with the terms you hear most often — you do not need to memorize all 50 at once

  • Share it with colleagues who are also learning about AI — a shared vocabulary makes team conversations about AI much more productive

The AI field moves fast, and new terms emerge regularly. This glossary is updated monthly to reflect the latest terminology. If there is a term you think should be included, reach out at beginnersinai.com.

This glossary is part of the Beginners in AI educational library. For daily AI news and tutorials in plain English, subscribe free at beginnersinai.com.
