AI Coding Glossary

The AI Coding Glossary is a comprehensive collection of common AI coding concepts and terms. It’s a quick reference for both beginners and experienced developers looking for definitions and refreshers related to AI coding.

It covers the fundamental concepts, terminology, and patterns that are essential for understanding AI-assisted programming. From core machine-learning concepts like transformers and tokenization to practical coding patterns like prompt engineering and chain of thought (CoT), this glossary helps you navigate the vocabulary of AI programming.

Whether you’re working with large language models (LLMs), implementing RAG systems, or optimizing prompts for better code generation, these terms form the foundation of modern AI-enhanced development practices.

  • agent A system that perceives, decides, and acts toward goals, often looping over steps with tools, memory, and feedback.
  • agentic coding An approach to software development in which AI agents plan, write, run, and iteratively improve code.
  • artificial intelligence (AI) The field of building machines and software that perform tasks requiring human-like intelligence.
  • context engineering The systematic design and optimization of the information given to a model at inference time so it can answer effectively.
  • context window The maximum span of tokens that a language model can consider at once.
  • generative model A model that learns a data distribution so it can generate new samples or assign probabilities to observations.
  • generative pre-trained transformer (GPT) Autoregressive language models that use the transformer architecture and are pre-trained on large text corpora.
  • hallucination When a generative model produces confident but false or unverifiable content and presents it as fact.
  • large language model (LLM) A neural network trained to predict the next token, enabling it to perform general-purpose language tasks.
  • machine learning A subfield of AI that builds models that improve their performance on a task by learning patterns from data.
  • model context protocol (MCP) An open, client-server communication standard that lets AI applications connect to external tools and data sources.
  • neural network A computational model composed of layered, interconnected units that learn input-to-output mappings.
  • prompt The input text or a structured message that tells a generative model what to do.
  • prompt engineering The practice of designing and refining prompts for generative models.
  • system prompt A message that establishes a model’s role, goals, constraints, and style before user inputs.
  • temperature A decoding parameter that rescales model logits before sampling; lower values make outputs more deterministic, higher values make them more varied.
  • token A minimal unit of text used by NLP systems and language models.
  • training The process of fitting a model’s parameters to data by minimizing a loss function.
  • transformer A neural network model that uses self-attention to handle sequences without recurrence or convolutions.
  • vibe coding An AI-assisted programming style where a developer describes goals in natural language and accepts model-generated code with minimal manual editing.
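To make the temperature definition above concrete, here is a minimal sketch of temperature-scaled sampling over a model's output logits. It is illustrative only: real language models produce logits over tens of thousands of tokens, and the function name here is a hypothetical helper, not a library API.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    # Rescale logits by temperature: values below 1 sharpen the
    # distribution (more deterministic), values above 1 flatten it.
    scaled = [l / temperature for l in logits]
    # Softmax, subtracting the max for numerical stability.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one token index from the resulting distribution.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# At a very low temperature, the highest-logit token dominates;
# at a high temperature, sampling approaches uniform.
low_t = [sample_with_temperature([2.0, 1.0, 0.1], temperature=0.05) for _ in range(100)]
high_t = [sample_with_temperature([2.0, 1.0, 0.1], temperature=100.0) for _ in range(500)]
```

In this sketch `low_t` is effectively all index 0 (greedy behavior), while `high_t` spreads across all three indices, which is why raising temperature is commonly described as increasing randomness.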