AI Glossary
Plain-English definitions for 30+ key AI terms. Each entry links to live data and deeper analysis across the Aiconomy platform.
Core Concepts
Artificial General Intelligence (AGI)
A hypothetical form of AI that can understand, learn, and apply knowledge across any intellectual task at or above human level, rather than being specialized for specific tasks.
Generative AI
AI systems that can create new content — text, images, code, audio, video — rather than simply analyzing or classifying existing data. Large language models and diffusion models are the main model families behind today's generative systems.
Hallucination
When an AI model generates plausible-sounding but factually incorrect or fabricated information, presenting it with the same confidence as accurate responses.
Machine Learning
A subset of AI where systems learn patterns from data rather than being explicitly programmed, improving their performance on tasks through experience without human-written rules.
Open-Source AI
AI models released with publicly available weights and code, allowing anyone to use, modify, and build upon them — in contrast to closed-source models accessible only through APIs.
Technical
Fine-Tuning
The process of further training a pre-trained AI model on a specific, smaller dataset to specialize it for a particular task or domain, requiring far less compute than training from scratch.
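As a toy illustration (not a real training pipeline), fine-tuning can be sketched as taking an already-fitted parameter and running a few further gradient steps on a small, task-specific dataset. All numbers below are invented for illustration:

```python
# Toy fine-tuning sketch: start from a "pretrained" weight and adapt it
# with a handful of cheap gradient steps on a small new dataset.
# Data, starting weight, and learning rate are invented for illustration.

def mse_grad(w, data):
    """Gradient of mean squared error for the model y_pred = w * x."""
    n = len(data)
    return sum(2 * (w * x - y) * x for x, y in data) / n

# Pretrained weight (imagine it came from large-scale training on broad data).
w = 2.0

# Small domain-specific dataset where the true relationship is y = 3x.
fine_tune_data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]

# A few gradient steps, far less compute than training from scratch.
lr = 0.02
for _ in range(50):
    w -= lr * mse_grad(w, fine_tune_data)

print(round(w, 3))  # weight has moved from 2.0 toward 3.0
```

The point of the sketch is the asymmetry: the pretrained weight arrives for free, and specialization takes only a few updates on a tiny dataset.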
Foundation Model
A large AI model trained on broad data that can be adapted to a wide range of downstream tasks — examples include GPT-4, Claude, Gemini, and Llama.
Frontier Model
The most capable and advanced AI models at any given time, typically trained with the largest compute budgets and achieving state-of-the-art performance on benchmarks.
Inference
The process of running a trained AI model to generate predictions or outputs — as opposed to training, which is the process of building the model. Inference accounts for the majority of AI's ongoing energy consumption.
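A minimal sketch of the distinction, using a toy logistic model with hand-picked weights (not any real deployment): at inference time the weights are frozen and the model only runs forward.

```python
import math

# Inference sketch: the weights are fixed; we only run the model forward
# to produce an output. No gradients, no weight updates.
# The weights below are hand-picked for illustration.

WEIGHTS = [0.8, -0.5]
BIAS = 0.1

def predict(features):
    """Forward pass of a tiny logistic model: score -> probability."""
    score = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return 1 / (1 + math.exp(-score))

p = predict([1.0, 2.0])
print(round(p, 3))
```

Every user query to a deployed model is one of these forward passes, which is why inference cost accumulates continuously while training cost is paid once.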
Large Language Model (LLM)
An AI model trained on vast amounts of text data that can understand, generate, and manipulate human language. LLMs power chatbots, coding assistants, and content generation tools.
Model Training
The computationally intensive process of teaching an AI model by feeding it data and adjusting its parameters to minimize errors, often requiring thousands of GPUs running for weeks or months.
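The "feed it data, adjust parameters to minimize errors" loop can be sketched at toy scale in a few lines. The dataset and learning rate below are invented; real training does the same thing across billions of parameters and trillions of tokens.

```python
# Toy training loop: repeatedly adjust a parameter to reduce error on
# the data. Dataset and hyperparameters are invented for illustration.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # roughly y = 2x

w = 0.0          # start from an uninformative parameter
lr = 0.01        # learning rate
losses = []
for step in range(200):
    n = len(data)
    # Mean squared error and its gradient w.r.t. w for y_pred = w * x
    loss = sum((w * x - y) ** 2 for x, y in data) / n
    grad = sum(2 * (w * x - y) * x for x, y in data) / n
    w -= lr * grad
    losses.append(loss)

# The parameter settles near 2 and the loss has fallen from its start.
print(round(w, 2), losses[-1] < losses[0])
```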
Neural Network
A computing system inspired by biological neural networks, consisting of interconnected layers of nodes (neurons) that process information by adjusting the strength of connections during training.
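The "interconnected layers of nodes" structure can be shown concretely with a two-layer toy network in plain Python. The connection weights below are hand-picked for illustration; training would be the process of adjusting them.

```python
import math

# Minimal feed-forward neural network sketch: two inputs -> two hidden
# neurons -> one output. The network's "knowledge" lives entirely in the
# connection weights. Weights here are hand-picked for illustration.

def relu(x):
    """A common nonlinearity: pass positives through, zero out negatives."""
    return max(0.0, x)

HIDDEN_W = [[0.5, -0.3], [0.8, 0.2]]   # weights into each hidden neuron
HIDDEN_B = [0.1, -0.1]                 # hidden-layer biases
OUT_W = [1.0, -1.5]                    # weights from hidden layer to output
OUT_B = 0.05

def forward(inputs):
    hidden = [
        relu(b + sum(w * x for w, x in zip(ws, inputs)))
        for ws, b in zip(HIDDEN_W, HIDDEN_B)
    ]
    return OUT_B + sum(w * h for w, h in zip(OUT_W, hidden))

y = forward([1.0, 2.0])
print(round(y, 2))  # ≈ -1.6 with these hand-picked weights
```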
Reinforcement Learning from Human Feedback (RLHF)
A technique for aligning AI models with human preferences by having human evaluators rank model outputs, then using those rankings as a reward signal to improve the model's behavior.
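The "rankings as a reward signal" step is commonly formalized with a pairwise (Bradley-Terry style) objective when training the reward model: the model should assign the human-preferred output a higher score. The scores below are invented; a real reward model would compute them from the outputs themselves.

```python
import math

# Sketch of the pairwise-preference objective often used for the reward
# model in RLHF. Scores are invented for illustration.

def preference_prob(r_chosen, r_rejected):
    """Probability the reward model assigns to the human-preferred output."""
    return 1 / (1 + math.exp(-(r_chosen - r_rejected)))

def preference_loss(r_chosen, r_rejected):
    """Negative log-likelihood of the human ranking; lower is better."""
    return -math.log(preference_prob(r_chosen, r_rejected))

# One ranked pair: humans preferred output A (score 1.5) over B (score 0.5).
p = preference_prob(1.5, 0.5)
# Agreeing with the human ranking is penalized less than contradicting it.
print(round(p, 3), preference_loss(1.5, 0.5) < preference_loss(0.5, 1.5))
```

Minimizing this loss over many ranked pairs pushes the reward model toward human preferences; that reward then steers the language model's behavior.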
Transformer Architecture
The neural network architecture introduced by Google researchers in the 2017 paper "Attention Is All You Need". Its self-attention mechanism lets every position in a sequence attend to every other position in parallel, enabling the large language models that power modern AI.
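The core self-attention computation can be sketched at toy scale: each position scores every position, turns the scores into weights, and averages the value vectors accordingly. The 3-token sequence below is invented for illustration.

```python
import math

# Scaled dot-product self-attention on a toy sequence, in plain Python.
# Each position attends to every position, so the whole sequence can be
# processed in parallel. The input vectors are invented for illustration.

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    d = len(queries[0])
    out = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(dimension).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Weighted average of the value vectors.
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out

# Self-attention: queries, keys, and values all come from the same
# 3-token sequence of 2-dimensional vectors.
seq = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out = attention(seq, seq, seq)
print(len(out), len(out[0]))  # one output vector per input position: 3 2
```

In a real Transformer the queries, keys, and values are learned linear projections of the token embeddings, and many such attention heads run side by side.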
Infrastructure
AI Compute
The computational resources — primarily GPU and TPU processing power — required to train and run AI models, typically measured in FLOP (floating-point operations) or GPU-hours.
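A widely used rule of thumb puts training compute at roughly 6 FLOP per model parameter per training token (covering the forward and backward passes). The model size and token count below are illustrative, not any specific model's real figures.

```python
# Back-of-the-envelope training compute, using the common ~6 FLOP per
# parameter per token rule of thumb. The figures below are illustrative,
# not any specific model's real numbers.

def training_flop(n_params, n_tokens):
    """Approximate total training compute in floating-point operations."""
    return 6 * n_params * n_tokens

flop = training_flop(7e9, 1e12)   # a 7B-parameter model on 1T tokens
print(f"{flop:.1e} FLOP")         # 4.2e+22 FLOP
```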
Data Center
A facility housing computer systems and infrastructure used to process, store, and distribute data — increasingly built specifically for AI training and inference workloads.
GPU (Graphics Processing Unit)
A specialized processor originally designed for rendering graphics, now the primary hardware used for training and running AI models due to its parallel processing capabilities.
Investment & Market
Capex (Capital Expenditure)
Long-term investment spending by companies on physical assets like data centers, GPU clusters, and networking infrastructure — the backbone of AI deployment at scale.
Venture Capital in AI
Equity investment by venture funds in private AI startups and companies. AI venture funding has grown 11x since 2015 and now represents the largest category of venture capital funding globally.
Workforce
AI Workforce Impact
The broad economic effects of AI on employment — encompassing job displacement, job creation, wage changes, and the transformation of existing roles through AI augmentation.
Prompt Engineering
The practice of crafting and optimizing input prompts to elicit desired outputs from AI language models, emerging as both a skill and a job category in the AI economy.
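In practice the craft often takes the form of parameterized templates: a carefully worded instruction reused across many inputs rather than a prompt written by hand each time. The template wording below is invented for illustration.

```python
# Toy prompt-template sketch. The instruction wording and parameter
# names are invented for illustration.

SUMMARY_TEMPLATE = (
    "You are a careful technical editor.\n"
    "Summarize the text below in at most {max_words} words, "
    "for an audience of {audience}.\n\n"
    "Text:\n{text}"
)

def build_prompt(text, audience="general readers", max_words=50):
    """Fill the template so every prompt carries the same instructions."""
    return SUMMARY_TEMPLATE.format(
        text=text, audience=audience, max_words=max_words
    )

prompt = build_prompt("GPUs accelerate AI workloads.", audience="executives")
print("executives" in prompt and "at most 50 words" in prompt)  # True
```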
Safety & Governance
AI Alignment
The research field focused on ensuring AI systems behave in accordance with human values and intentions, particularly as systems become more capable.
AI Safety
The interdisciplinary field focused on preventing AI systems from causing harm, encompassing alignment, robustness, interpretability, and governance of AI technologies.
Deepfake
AI-generated synthetic media — images, video, or audio — that realistically depict events or statements that never occurred, created using deep learning techniques.
Responsible AI
The practice of developing and deploying AI systems in ways that are ethical, fair, transparent, and accountable — encompassing bias mitigation, explainability, and governance frameworks.