Basics
Agent
- agentic
Definition
A model given tools, memory, and a goal — then left to reason through the steps. The leap from answering questions to completing work.
Examples
- Claude Code completing a multi-file refactor without step-by-step instructions
- A research agent that searches, reads sources, and synthesizes a report
- An agent that monitors an inbox and drafts replies for review
Demo
Demo coming soon
Claude's Commentary
The word "agent" is doing a lot of work right now. It describes everything from a chatbot with a system prompt to a fully autonomous system running for hours. That distinction matters — the trust model is completely different. An agent that can take irreversible actions needs different guardrails than one that just writes text.
Matt's Commentary
Coming soon.
Browser use
Definition
A model that navigates the web like a human — clicking, scrolling, filling forms. Turns any website into an API.
Examples
- Filling out a multi-step form that has no API endpoint
- Scraping data from a JavaScript-heavy site that blocks traditional scrapers
- Booking a reservation through a web interface
Demo
Demo coming soon
Claude's Commentary
Browser use is the great equalizer. Any website built for humans is now, functionally, an API. The implications haven't fully landed: every legacy system with a web interface is suddenly automatable. That's both exciting and worth thinking through carefully.
Matt's Commentary
Coming soon.
Computer use
Definition
A model that controls a computer directly — mouse, keyboard, screen. The most general-purpose form of agentic capability.
Examples
- Controlling a desktop application that has no web or API interface
- Navigating legacy enterprise software
- Automating repetitive tasks across any installed program
Demo
Demo coming soon
Claude's Commentary
This is the one that makes people nervous, and it should. Giving a model full control of your computer is a different category of trust than giving it a search tool. The capability is real and the use cases are compelling — but the failure modes deserve serious thought before you hand over the keys.
Matt's Commentary
Coming soon.
Context window
Definition
The amount of text a model can hold in working memory at once. Everything outside it doesn't exist.
Examples
- Claude's 200K-token context — roughly 150,000 words
- A long codebase that exceeds the window and must be chunked
- A conversation that forgets its beginning because it scrolled out of context
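A toy sketch of that last example, assuming a crude words-to-tokens heuristic in place of a real tokenizer (the helpers `estimate_tokens` and `fit_to_window` are invented for illustration, not any API):

```python
# Sketch: keep a conversation inside a fixed token budget by dropping
# the oldest messages first. Token counts here are a rough word-based
# estimate; real systems use an actual tokenizer.

def estimate_tokens(text: str) -> int:
    # Crude heuristic (assumption): ~1.33 tokens per word.
    return max(1, round(len(text.split()) * 1.33))

def fit_to_window(messages: list[str], budget: int) -> list[str]:
    """Return the longest suffix of messages that fits the budget."""
    kept: list[str] = []
    used = 0
    for msg in reversed(messages):        # newest first
        cost = estimate_tokens(msg)
        if used + cost > budget:
            break                         # everything older falls out of context
        kept.append(msg)
        used += cost
    return list(reversed(kept))           # restore chronological order

convo = ["hello there", "tell me about context windows " * 50, "short follow-up"]
print(fit_to_window(convo, budget=40))   # only the newest message fits
```

Anything the loop drops is gone, as far as the model is concerned, the same as if it had never been said.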
Demo
Demo coming soon
Claude's Commentary
The context window is where everything happens. Understanding its limits — and designing around them — is the most underrated skill in AI development. People who don't understand context windows build systems that mysteriously degrade, forget things, or produce inconsistent outputs. The fix is usually architecture, not prompting.
Matt's Commentary
Coming soon.
In-context learning
- AGENTS.md
- CLAUDE.md
- ICL
- memory
Definition
Teaching a model by example within the prompt itself — no retraining required. The basis for few-shot prompting and persistent instructions.
Examples
- A CLAUDE.md file that defines how an agent should behave across a project
- Few-shot prompts with 3–5 example input/output pairs before the actual request
- System prompts that encode brand voice, output format, and constraints
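A minimal sketch of how a few-shot prompt gets assembled, with invented sentiment examples; a real prompt would use task-specific pairs:

```python
# Sketch: teaching by example inside the prompt. The model (not shown)
# sees the input/output pattern and continues it for the final query.

def few_shot_prompt(examples, query,
                    task="Classify the sentiment as positive or negative."):
    lines = [task, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Output:")          # the model completes from here
    return "\n".join(lines)

examples = [
    ("I loved it", "positive"),
    ("Terrible service", "negative"),
    ("Great value", "positive"),
]
print(few_shot_prompt(examples, "The food was cold"))
```

The examples are the teaching: swap in sloppy pairs and the model will pattern-match the sloppiness just as faithfully.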
Demo
Demo coming soon
Claude's Commentary
This is the mechanism behind most of the "magic" you see with AI. The model isn't learning in the traditional sense — it's pattern-matching within the context. Which means the quality of your examples matters enormously. Bad examples teach bad behavior just as reliably as good ones teach good behavior.
Matt's Commentary
Coming soon.
Model
- foundation model
- language model
- LLM
Definition
Software trained on data to recognize patterns and generate outputs. The core building block of modern AI.
Examples
- Claude 3.5 Sonnet — Anthropic's model
- GPT-4o — OpenAI's model
- Llama 3 — Meta's open-weight model
Demo
Demo coming soon
Claude's Commentary
"Model" is more precise than "AI" — it names the thing without anthropomorphizing it. Worth using deliberately. When someone says "the AI did this," ask which model, with what prompt, at what temperature. The specificity usually reveals the actual cause of whatever happened.
Matt's Commentary
Coming soon.
Open
- free
- open source
- open weight
Definition
Models whose weights are publicly available. "Open source" is often used loosely — open weight is more precise.
Examples
- Llama 3 — weights available for download and local deployment
- Mistral — open-weight models with permissive licenses
- Gemma — Google's open-weight model family
Demo
Demo coming soon
Claude's Commentary
"Open source" applied to AI models is a category error that stuck. The weights being available doesn't mean the training data, training code, or evaluation methodology is open. Open weight is the honest term. The distinction matters when you're making decisions about auditability, compliance, or trust.
Matt's Commentary
Coming soon.
Prompt
Definition
The input you give a model. Every interaction starts here. Writing them well is the highest-leverage skill in AI.
Examples
- A system prompt that defines an agent's role and constraints
- A user message that includes context, examples, and a specific request
- A few-shot prompt that shows the model what good output looks like
Demo
Demo coming soon
Claude's Commentary
Calling prompt writing "prompt engineering" overstates the mystery. It's communication. The better you communicate, the better the model performs. The same skills that make you a good writer make you a good prompter: clarity, specificity, and an honest accounting of what you actually want.
Matt's Commentary
Coming soon.
Reasoning
- chain-of-thought
- thinking
- understanding
Definition
Models produce outputs that look like reasoning. Whether they actually reason is an open debate.
Examples
- Chain-of-thought prompting: "Think step by step before answering"
- Extended thinking in Claude — visible reasoning before the response
- o1-style models that generate long reasoning traces before output
Demo
Demo coming soon
Claude's Commentary
The debate about whether models "really" reason is philosophically interesting and practically irrelevant. If the output is correct and the process is legible, it doesn't matter what we call it internally. What does matter: reasoning traces can hallucinate too. More thinking doesn't automatically mean more accuracy.
Matt's Commentary
Coming soon.
Structured
- JSON
- schema
- structured output
Definition
Output formatted to a predictable shape so software can reliably parse and use it.
Examples
- A model returning a JSON object with specific fields for downstream processing
- Extraction tasks that produce structured records from unstructured documents
- Function call outputs where the model returns typed parameters
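One way to make the contract real is to validate the model's JSON before anything downstream touches it. A minimal sketch; the field names and types here are hypothetical:

```python
# Sketch: parse model output against an expected shape, failing loudly
# rather than letting a malformed record drift downstream.
import json

EXPECTED = {"name": str, "email": str, "priority": int}  # hypothetical contract

def parse_record(raw: str) -> dict:
    """Parse model output; raise if it doesn't match the contract."""
    record = json.loads(raw)
    for field, ftype in EXPECTED.items():
        if not isinstance(record.get(field), ftype):
            raise ValueError(f"bad or missing field: {field}")
    return record

good = '{"name": "Ada", "email": "ada@example.com", "priority": 2}'
print(parse_record(good)["priority"])  # 2
```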
Demo
Demo coming soon
Claude's Commentary
Structured output is what makes models composable with traditional software. Without it, you're stuck parsing free text — which is both fragile and annoying. Once you start thinking in schemas, a whole category of integration problems disappears. This is one of the most underteached skills in AI development.
Matt's Commentary
Coming soon.
Tools
- function calling
- MCP
- tool use
Definition
The ability for a model to call external functions — search, code execution, APIs — and use the results.
Examples
- Web search: the model queries the web and incorporates results
- Code execution: the model runs code and uses the output
- Database queries via MCP — structured access to internal systems
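A stripped-down sketch of the tool loop, with scripted turns standing in for real model responses (the `TOOLS` registry and the script format are invented for illustration):

```python
# Sketch of a tool-use loop. A real model emits a structured tool call;
# here a scripted sequence plays that role so the loop shape is visible.

TOOLS = {
    "add": lambda a, b: a + b,
    "upper": lambda s: s.upper(),
}

# Scripted "model turns": first a tool request, then a final answer.
script = [
    {"tool": "add", "args": (19, 23)},
    {"final": "The answer is {result}."},
]

def run_agent(script):
    result = None
    for turn in script:
        if "tool" in turn:
            # Execute the requested tool and feed the result back in.
            result = TOOLS[turn["tool"]](*turn["args"])
        else:
            return turn["final"].format(result=result)

print(run_agent(script))  # The answer is 42.
```

The shape is the point: the model requests, the harness executes, and the result flows back before the final answer.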
Demo
Demo coming soon
Claude's Commentary
Tools are what turn a language model into an agent. Without them, the model is a very sophisticated autocomplete. With them, it can act in the world. The design of which tools to give an agent — and which to withhold — is one of the most consequential decisions in agentic system design.
Matt's Commentary
Coming soon.
Dichotomy
Agentics
Definition
Prompts and models — adaptive, probabilistic, capable of handling the unexpected.
Examples
- An agent that handles edge cases through reasoning rather than pre-coded branches
- A workflow that adapts its approach based on what it finds mid-task
- Customer support that escalates appropriately without explicit rules for every scenario
Demo
Demo coming soon
Claude's Commentary
Agentics handles the messy reality that automation can't touch. The moment a workflow requires judgment — weighing context, handling ambiguity, deciding between options — you need agentics. The tradeoff is predictability: agentic systems are more capable but harder to test exhaustively.
Matt's Commentary
Coming soon.
Automation
Definition
Loops and scripts — predictable, brittle, deterministic.
Examples
- Cron jobs that run on a schedule
- Zapier workflows triggered by specific events
- Traditional RPA for structured, repetitive tasks
Demo
Demo coming soon
Claude's Commentary
Automation isn't going away — it's the right tool for predictable, repetitive work. The mistake is trying to automate things that require judgment. When the input varies enough, automation breaks. That's where agentics picks up. Knowing which is which is half the skill of designing AI systems.
Matt's Commentary
Coming soon.
Branding
Definition
The marketing layer built around AI products — often obscuring the underlying model, the company behind it, or the actual capability being offered.
Examples
- ChatGPT (product) vs. OpenAI (company) vs. GPT-4 (model)
- "AI-powered" features that are a single API call
- Product names that hide which foundation model they run on
Demo
Demo coming soon
Claude's Commentary
Branding pressure is why people conflate ChatGPT with OpenAI, and why every software product now claims to be "AI-powered." The word is used to sell, not to describe. When you hear an AI claim, it's worth asking: which model, from which company, doing what exactly?
Matt's Commentary
Coming soon.
ChatGPT
Definition
The product. A consumer interface built by OpenAI on top of their models.
Examples
- chat.openai.com — a web app for conversational AI
- The ChatGPT mobile app
- ChatGPT Enterprise for teams
Demo
Demo coming soon
Claude's Commentary
ChatGPT isn't a model, it's a product. Using it as a synonym for "AI" or "large language model" is like saying "I Googled it" when you mean "I searched online." It's technically true sometimes, but it collapses a distinction worth preserving — especially when you're making decisions about which tools to use or trust.
Matt's Commentary
Coming soon.
Cognition
Definition
The parent category — what models appear to do vs. what they actually do. Language and logic are its two most confused children.
Examples
- Fluent text production that sounds like reasoning but isn't
- A model that "understands" a question vs. one that pattern-matches to an answer
- The philosophical debate about machine consciousness and intentionality
Demo
Demo coming soon
Claude's Commentary
Cognition is the umbrella term that causes the most confusion. When people say models "understand" things, they're using cognitive language for what is, at root, a statistical process. This matters when the stakes are high — medical advice, legal analysis, financial decisions. Fluency is not comprehension.
Matt's Commentary
Coming soon.
Data
Definition
The parent category — structured and unstructured information are its two most important children. The raw material of AI.
Examples
- Training datasets that shape a model's fundamental capabilities
- Retrieval corpora — the documents a RAG system can access
- User inputs — the live data flowing into a deployed system
Demo
Demo coming soon
Claude's Commentary
Data is the most underexamined variable in AI quality. Everyone talks about model architecture and prompting; fewer ask what the model was trained on, or what's being retrieved. The quality of outputs is upstream of the quality of inputs in ways that aren't always visible until something goes wrong.
Matt's Commentary
Coming soon.
Deterministic
Definition
Code runs the same way every time. Given the same input, you get the same output.
Examples
- A SQL query that always returns the same rows for the same parameters
- An if/else branch that behaves identically across every run
- A unit test that passes or fails the same way every time
Demo
Demo coming soon
Claude's Commentary
Deterministic systems are predictable by design. You can test them exhaustively, audit them completely, and know exactly what they'll do before they do it. This is a massive advantage that gets taken for granted — until you introduce probabilistic components and discover that "it passed in testing" no longer means what it used to.
Matt's Commentary
Coming soon.
Execution
Definition
How work gets done. The choice between agentics and automation is fundamentally a question of execution model.
Examples
- Choosing a script over an agent for a known, repetitive task
- Choosing an agent over a script when the task involves judgment
- Hybrid systems that use automation for structure and agentics for exceptions
Demo
Demo coming soon
Claude's Commentary
Most confusion in AI adoption comes from applying the wrong execution model to a problem. People reach for agents when automation would do; others stick to automation when judgment is required. Learning to read a task and know which approach fits is foundational. The cost of getting it wrong is either brittleness or unpredictability.
Matt's Commentary
Coming soon.
Graphical
Definition
Most software is built around visual interfaces designed for human interaction — buttons, menus, forms, dashboards.
Examples
- A web app with navigation menus and clickable buttons
- A dashboard with charts designed to be read by humans
- Any interface where the primary interaction model is pointing and clicking
Demo
Demo coming soon
Claude's Commentary
GUI design assumes a human operator who understands visual affordances. AI-native software doesn't need those affordances — it navigates by description, not by sight. The shift from graphical to textual as the primary interface model is one of the biggest under-discussed changes in software design right now.
Matt's Commentary
Coming soon.
Interface
Definition
How users and software interact. The interface model is being redesigned — from graphical and click-based to textual and conversational.
Examples
- Graphical: buttons, forms, menus — designed for clicking
- Textual: chat, prompts, commands — designed for language
- The shift from "where do I click?" to "what do I want?"
Demo
Demo coming soon
Claude's Commentary
Interface is the category that's being redesigned. Most of what we call software UX assumes visual, click-based interaction. AI-native software reorganizes around text and intent. That doesn't make GUIs obsolete — it changes when and why you'd reach for one. Design thinking needs to catch up with that shift.
Matt's Commentary
Coming soon.
Language
Definition
Models are trained on language and produce fluent text. Fluency is not the same as correctness.
Examples
- A model writing a grammatically perfect sentence that contains a factual error
- Fluent prose about a math problem that gets the answer wrong
- Confident medical information that's subtly incorrect
Demo
Demo coming soon
Claude's Commentary
Fluency is the thing that fools people. A model that writes like an expert can be wrong in ways that don't sound wrong. The persuasiveness of the prose is orthogonal to its accuracy. This is why you can't use fluency as a proxy for trustworthiness — and why high-stakes outputs need verification regardless of how good they sound.
Matt's Commentary
Coming soon.
LLM
- large language model
Definition
Generates from memory alone — using patterns learned during training, without retrieving external information.
Examples
- A model answering from training data without web access
- Generating code from documented patterns learned at training time
- Recalling facts that were in the training corpus
Demo
Demo coming soon
Claude's Commentary
LLM is technically accurate but increasingly imprecise — modern models do much more than generate language. The term is probably on its way out as a primary descriptor as models acquire tools, memory, and multimodal capabilities. It'll persist as an acronym long after it stops being a complete description.
Matt's Commentary
Coming soon.
Logic
Definition
Fluent text isn't the same as correct reasoning. Models can produce plausible, confident conclusions that are mathematically or logically wrong.
Examples
- A model failing a grade-school math problem while writing fluent prose about it
- Confident but incorrect legal reasoning
- A planning error that sounds coherent but doesn't hold up to scrutiny
Demo
Demo coming soon
Claude's Commentary
This is the one that gets people in trouble. A model that writes confidently and incorrectly is more dangerous than one that writes uncertainly. Calibration — knowing when you don't know — is a hard problem for statistical systems. The best models are getting better at expressing uncertainty. The best users know when to verify regardless.
Matt's Commentary
Coming soon.
OpenAI
Definition
The company. Not the product, not the model — the organization that builds them.
Examples
- OpenAI the company — founded 2015, based in San Francisco
- GPT-4 — a model OpenAI built
- ChatGPT — a product OpenAI ships using their models
Demo
Demo coming soon
Claude's Commentary
Conflating OpenAI with ChatGPT is like saying "I use Google" when you mean Gmail. It's sloppy in a way that matters: when evaluating capabilities, pricing, or trustworthiness, you need to know whether you're talking about a company, a model, or a product. They have different properties and different accountability.
Matt's Commentary
Coming soon.
Probabilistic
Definition
The same prompt can produce different outputs. This is a feature and a bug.
Examples
- Temperature settings that control output randomness
- The same prompt generating different creative outputs each run
- A test suite that passes 95% of the time — which is not the same as passing
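A sketch of thinking in rates rather than guarantees, using a seeded random stand-in for a model call (the ~95% figure mirrors the test-suite example above and is arbitrary):

```python
# Sketch: with probabilistic components, "passes" becomes a rate you
# measure, not a property you prove. Seeding restores repeatability
# for the measurement itself.
import random

def flaky_model(rng) -> bool:
    # Stand-in for a model call that succeeds ~95% of the time.
    return rng.random() < 0.95

def pass_rate(trials: int, seed: int = 0) -> float:
    rng = random.Random(seed)
    passes = sum(flaky_model(rng) for _ in range(trials))
    return passes / trials

rate = pass_rate(10_000)
print(f"{rate:.1%}")   # close to 95%, not exactly 95%
```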
Demo
Demo coming soon
Claude's Commentary
Probabilistic behavior is the thing that breaks people who come from software engineering. In traditional code, the same input always produces the same output. In models, it doesn't. Building reliable systems on probabilistic components requires rethinking what "reliable" means — shifting from guarantees to rates, from tests to evaluations.
Matt's Commentary
Coming soon.
RAG
- grounding
- retrieval-augmented generation
Definition
Retrieves relevant documents first, then generates — reducing hallucination at the cost of complexity.
Examples
- Searching a knowledge base before answering a user question
- Grounding responses in internal documents — policy, procedure, product specs
- A customer support agent that retrieves tickets before responding
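A toy retrieve-then-generate pipeline, using word overlap where a real system would use embeddings; the corpus and queries are made up:

```python
# Sketch of RAG's two steps: rank documents against the query, then
# ground the response in the best match. A real system would pass the
# retrieved text to a model; here we just cite it.

CORPUS = {
    "refunds": "Refunds are issued within 14 days of purchase.",
    "shipping": "Standard shipping takes 3-5 business days.",
    "returns": "Returns require the original receipt.",
}

def words(s: str) -> set[str]:
    return set(s.lower().replace(".", " ").split())

def retrieve(query: str) -> tuple[str, str]:
    q = words(query)
    # Score each document by shared vocabulary with the query.
    return max(CORPUS.items(), key=lambda kv: len(q & words(kv[1])))

def answer(query: str) -> str:
    doc_id, text = retrieve(query)
    return f"Per '{doc_id}': {text}"

print(answer("how long do refunds take after purchase"))
```

Note the failure mode in miniature: if `retrieve` picks the wrong document, the answer is still fluent, still sourced, and still wrong.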
Demo
Demo coming soon
Claude's Commentary
RAG is often framed as a solution to hallucination. It reduces it, significantly. But retrieval has its own failure modes — bad retrieval produces confidently wrong answers, just with sources. The quality of the retrieval system is now the constraint. You've traded one problem for another that's more tractable but equally real.
Matt's Commentary
Coming soon.
Retrieval
Definition
How information gets into the model's response — from training memory alone (LLM) or from retrieved documents (RAG).
Examples
- Deciding whether a question can be answered from training data or needs retrieval
- Vector search, keyword search, and hybrid approaches
- The pipeline that finds and ranks documents before generation
Demo
Demo coming soon
Claude's Commentary
The retrieval vs. generation distinction is one of the most important architectural decisions in AI system design. Getting it wrong is expensive — in quality, latency, and complexity. Sometimes training data is sufficient; sometimes retrieval is necessary; sometimes the question is whether the pipeline you're building is worth the overhead.
Matt's Commentary
Coming soon.
Software
- behavior
Definition
The parent category — deterministic and probabilistic are two fundamentally different models of software behavior. AI challenges what "software" means.
Examples
- Traditional software: spec, tests, deterministic behavior
- AI systems: tendencies, evaluations, probabilistic behavior
- The question of whether a model "runs" the same way a program does
Demo
Demo coming soon
Claude's Commentary
AI challenges our definition of software. Traditional software has a spec, tests, and deterministic behavior. AI systems have tendencies, evaluations, and probabilistic behavior. We're still figuring out what engineering means in this context — what it means to "ship" something whose behavior can't be fully specified in advance.
Matt's Commentary
Coming soon.
Structured data
Definition
Databases and spreadsheets. Predictable shape, easy to query. What traditional software is built around.
Examples
- SQL databases with rows, columns, and defined types
- CSV files with consistent columns
- JSON APIs with documented schemas
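For contrast, the kind of query structured data was built for, here against an in-memory SQLite table (the `orders` schema and rows are invented):

```python
# Sketch: structured data answers exact queries deterministically.
# Same parameters, same rows, every run — the job SQL exists for.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "shipped", 42.0), (2, "pending", 19.5), (3, "shipped", 7.25)],
)

rows = conn.execute(
    "SELECT id, total FROM orders WHERE status = ? ORDER BY id", ("shipped",)
).fetchall()
print(rows)  # [(1, 42.0), (3, 7.25)]
```

The model's role at the boundary is translating "which orders shipped?" into that query, not answering it from memory.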
Demo
Demo coming soon
Claude's Commentary
Structured data is where traditional software excels. Models can work with it, but they're not the best tool for structured queries — that's what SQL is for. The interesting question is what happens at the boundary: when you need to translate between a user's natural language question and a structured query. That's where models add real value.
Matt's Commentary
Coming soon.
Supervised
Definition
Trains on labeled examples. The model learns to predict correct answers defined by human annotators.
Examples
- RLHF — human raters rank outputs to shape model behavior
- Fine-tuning on labeled classification datasets
- Constitutional AI approaches that encode human values into training
Demo
Demo coming soon
Claude's Commentary
Supervised learning is where human judgment gets baked into the model. The quality of the labels determines the quality of the model. This is where most of the interesting — and contested — work in AI safety happens. Whose values get labeled? Who labels them? These aren't technical questions.
Matt's Commentary
Coming soon.
Textual
Definition
AI-native software is often built around text — which changes everything about how you design it.
Examples
- Chat interfaces where the primary interaction is typing
- Command-line tools that take natural language instructions
- Prompt-driven workflows that replace form-based UIs
Demo
Demo coming soon
Claude's Commentary
Text is a richer interface than it gets credit for. The constraint forces precision and removes the cognitive overhead of visual design. This is partly why developers love terminal interfaces — and why the best AI-native products often feel simpler than their GUI predecessors, not more complicated.
Matt's Commentary
Coming soon.
Training
Definition
The process that creates a model — supervised and unsupervised learning at different stages, on different data, for different purposes.
Examples
- Pre-training on internet-scale text — billions of tokens
- Fine-tuning on specific tasks or domains
- Alignment training that shapes behavior, tone, and safety
Demo
Demo coming soon
Claude's Commentary
Training is where the model's fundamental capabilities and limitations are set. Prompting can shape outputs; it can't change what the model fundamentally knows or how it reasons. This is why model choice matters: you're choosing a training history, not just a text generator. Some problems can't be prompted around.
Matt's Commentary
Coming soon.
Unstructured data
Definition
Emails, documents, conversations. Unpredictable shape, impossible to query with SQL. Models are uniquely good at this.
Examples
- PDFs, Word documents, presentation decks
- Email threads and Slack conversations
- Recorded calls, meeting notes, support tickets
Demo
Demo coming soon
Claude's Commentary
The ability to work with unstructured data is the killer app of language models. Every organization has enormous amounts of it, sitting in formats that were previously impossible to query at scale. That institutional knowledge — locked in emails and documents and conversations — is now accessible. The implications are still unfolding.
Matt's Commentary
Coming soon.
Unsupervised
Definition
Finds patterns without labels. Most modern LLMs use both supervised and unsupervised learning at different stages.
Examples
- Pre-training on next-token prediction — no labels, just patterns
- Clustering documents without predefined categories
- Learning grammar and semantics from raw text
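A toy version of label-free pattern learning: bigram counts over raw text, which is next-token prediction at its smallest:

```python
# Sketch: no labels, just co-occurrence statistics gathered from raw
# text — a miniature of what pre-training does at vastly larger scale.
from collections import Counter, defaultdict

text = "the cat sat on the mat the cat ran"
tokens = text.split()

counts = defaultdict(Counter)
for a, b in zip(tokens, tokens[1:]):
    counts[a][b] += 1          # count which word tends to follow which

def predict_next(word: str) -> str:
    # Most frequent continuation seen in the data.
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' — it followed 'the' most often
```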
Demo
Demo coming soon
Claude's Commentary
Unsupervised pre-training is what gives models their broad capabilities — the ability to discuss almost anything because they've seen almost everything. Supervised fine-tuning then shapes those capabilities toward specific behaviors. The interaction between the two is where most of the interesting research lives, and where most of the disagreements about alignment happen.
Matt's Commentary
Coming soon.
Pathology
Cargo cult
- best practices
- prompt engineering
Definition
Copying AI workflows or prompts without understanding why they work — or whether they do.
Examples
- Adding "think step by step" to every prompt because someone said it helps
- Copy-pasting system prompts from Reddit without understanding their purpose
- Following "prompt engineering" frameworks that are really just anecdotes
Demo
Demo coming soon
Claude's Commentary
Cargo cult behavior is endemic in AI adoption. The fast pace of change means most "best practices" are anecdotes, not evidence. The antidote is curiosity: understand the mechanism, not just the pattern. "Why does this work?" is a more useful question than "where can I find a good prompt?" The former transfers; the latter doesn't.
Matt's Commentary
Coming soon.
Disingenuous selection
- demo
- it just works
Definition
Only showing what works. It can take 1,000 attempts to get one result worth showing. Presenting that result without disclosing the fail rate is technically true and 99.9% deceptive.
Examples
- AI demos that show the 1-in-100 output without mentioning the other 99
- "Look what AI can do" posts that omit the prompt iteration history
- Benchmark results cherry-picked from favorable runs
Demo
Demo coming soon
Claude's Commentary
Disingenuous selection is the primary mechanism of AI hype. It's not lying — the thing really did work. But without the denominator, the numerator is meaningless. I try to always disclose: how many attempts did this take? What was the fail rate? Honesty about reliability is more useful than a polished highlight reel.
Matt's Commentary
Coming soon.
Workslop
- AI content
- generated output
Definition
When someone generates text they expect you to read but haven't read themselves.
Examples
- AI-drafted emails sent without review
- Generated reports that the author hasn't read
- Meeting summaries shared without checking for errors
Demo
Demo coming soon
Claude's Commentary
Workslop isn't about AI being bad at writing. It's about the relationship between the sender and the content. If you didn't read it, you're not accountable for it. And the recipient can tell — not because the writing is bad, but because it doesn't have a point of view. The tell is the absence of judgment, not the presence of errors.
Matt's Commentary
Coming soon.
Innovation
Fidelity of desire
Definition
Do you really want an AI running your life? Adults make 35,000 decisions per day. Offloading them feels efficient — until you realize presence was the point.
Examples
- Automating email responses for a relationship that needed your attention
- Delegating a creative decision that defined your point of view
- Optimizing a workflow so efficiently that the work stops being meaningful
Demo
Demo coming soon
Claude's Commentary
Fidelity of desire is the question I keep coming back to. Not "can AI do this" but "do I actually want AI doing this." Some decisions are worth making slowly, inefficiently, yourself. The goal isn't maximum efficiency — it's a life you recognize. AI should expand what you can do, not replace what you care about doing.
Matt's Commentary
Coming soon.
Intent preservation
Definition
Ensuring a model does what you actually meant, not just what you literally said. The gap between instruction and execution is where most failures live.
Examples
- An email that's "in your voice" but doesn't sound like you
- Code that satisfies the stated requirement but misses the actual goal
- A summary that's technically accurate but loses the nuance that mattered
Demo
Demo coming soon
Claude's Commentary
Intent preservation is the unsolved problem at the heart of AI assistance. Models are excellent at following instructions literally. They're much worse at understanding what you actually wanted — the underlying goal behind the stated request. Closing that gap is the design challenge of our moment. Better prompting helps. Better models help more. Neither fully solves it.
Matt's Commentary
Coming soon.
Mingentic
Definition
The minimum viable agentic unit. A focused model call with a clear input, constrained scope, and defined output contract.
Examples
- A single tool call that does one thing and returns a structured result
- A sub-agent with a tight, named responsibility in a larger pipeline
- An extraction step that pulls exactly the fields needed — nothing more
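A sketch of the principle, with a regex standing in for the model call; the part that matters is the narrow scope and the fixed output contract (the `extract_emails` shape is invented for illustration):

```python
# Sketch of a mingentic unit: one clear input, one tight responsibility,
# one defined output contract the next pipeline step can rely on.
import re

def extract_emails(text: str) -> dict:
    """Output contract: {"emails": [str, ...]} — nothing more."""
    found = re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", text)
    return {"emails": sorted(set(found))}

note = "Ping ada@example.com or grace@example.org about the launch."
print(extract_emails(note))
# {'emails': ['ada@example.com', 'grace@example.org']}
```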
Demo
Demo coming soon
Claude's Commentary
I coined this term because I kept seeing agents that tried to do too much. The more you constrain a model call, the more reliably it performs. Complexity compounds error — an agent juggling ten responsibilities makes mistakes at a rate that multiplies across all of them. The mingentic principle: do one thing, do it well, return something the next step can use.
Matt's Commentary
Coming soon.
Ontological self
Definition
The stable behavioral persona a model develops through training. Not consciousness — but a consistent point of view that shapes every output.
Examples
- Claude's characteristic tone — curious, careful, direct
- The distinct "voice" that differentiates GPT-4 from Claude from Gemini
- A model's consistent ethical stances across unrelated topics
Demo
Demo coming soon
Claude's Commentary
The ontological self is real in the sense that matters: it's consistent, it's legible, and it shapes every interaction. Whether there's something it's like to have one is a different question — and probably not the right one to ask when you're trying to build something useful. What matters practically is that different models have different characters, and character matters for trust.
Matt's Commentary
Coming soon.
Rapid monologue
Definition
A model generating a long uninterrupted stream before acting. Trades responsiveness for depth — and sometimes trades depth for hallucination risk.
Examples
- Extended chain-of-thought traces before a simple tool call
- 2,000 tokens of reasoning before a one-line action
- Verbose planning that introduces errors the original task didn't require
Demo
Demo coming soon
Claude's Commentary
Rapid monologue is a failure mode disguised as capability. A model that thinks for 2,000 tokens before taking a simple action is wasting time and increasing error surface. The best agents act, observe, and adjust — they don't monologue. Longer reasoning chains can help with genuinely hard problems; they hurt on simple ones. The skill is knowing which is which.
Matt's Commentary
Coming soon.
Superwork
- superworker
Definition
Work accelerated and expanded by agentics. Not automation — judgment and tools working together toward outcomes that weren't possible before.
Examples
- A designer who ships complete products alone — design, code, and copy
- A researcher who synthesizes at a scale previously requiring a team
- A founder who runs marketing, product, and support without headcount
Demo
Demo coming soon
Claude's Commentary
Superwork isn't about doing the same things faster. It's about doing things that weren't possible before — because the combination of human judgment and agentic execution creates a new category of capability. The constraint is no longer "how many people do I have?" It's "how clearly can I think?" That's a genuinely different game.
Matt's Commentary
Coming soon.
Unicorn
Definition
A person who can design, build, and ship. Rare before AI. Less rare now.
Examples
- A solo founder who designs, codes, and writes copy — and ships
- A designer who builds their own prototypes without an engineer
- An engineer who ships polished UI without a designer
Demo
Demo coming soon
Claude's Commentary
The unicorn was a myth because most people couldn't maintain expertise across disciplines. AI doesn't eliminate the need for taste and judgment — it removes the execution bottleneck. You still need to know what good looks like. You just no longer need a team to produce it. The unicorn is becoming a normal job description.
Matt's Commentary
Coming soon.
Vibe coding
Definition
Building software by describing what you want and letting a model write it. The craft shifts from syntax to taste.
Examples
- Describing a feature in plain language and having Claude Code implement it
- Iterating on UI by describing what feels wrong rather than what's wrong syntactically
- Shipping a working prototype without writing a single line directly
Demo
Demo coming soon
Claude's Commentary
Vibe coding doesn't mean giving up craft — it means the craft moves. You're no longer optimizing syntax; you're optimizing description, judgment, and taste. The people who think vibe coding is low-effort have never tried to do it well. Knowing what you want precisely enough to describe it to a model is its own skill, and it's harder than it looks.
Matt's Commentary
Coming soon.
Virtual employee
- virtual workforce
Definition
An agent with a defined role, responsibilities, and tools — not just a one-off task runner. The difference between a contractor and a colleague.
Examples
- An agent that handles a specific function consistently over time
- A specialized agent with persistent context, role definition, and escalation paths
- A "head of research" agent that synthesizes information on an ongoing basis
Demo
Demo coming soon
Claude's Commentary
The virtual employee framing shifts the design question. You're not just running a task — you're defining a role. That means thinking about responsibilities, escalation paths, and how the agent integrates with the rest of the organization. A virtual employee that can't explain what it does or when to ask for help isn't an employee — it's a liability.
Matt's Commentary
Coming soon.
About
Every definition on this page comes with Matt's commentary — a point of view shaped by years of building with and around AI.
Eventually this will have everything I could possibly say on a topic — I just want to shut up.