Glossary
Three sections: the THINK Methodology vocabulary used inside every deployment, plain-language definitions for AI terms people search every day, and common AI questions answered simply.
Section 01
THINK Methodology Terms
The vocabulary used inside every THINK deployment. These definitions are practitioner-grade, not marketing copy.
THINK Methodology
The five-part framework (Task, Hypothesis, Invest, Network, Knowledge) for embedding strategic thinking into AI systems.
THINK Synthesis
The process of turning intelligence and organizational knowledge into an executable Playbook.
THINK Strategist
A person trained to orchestrate Digital Employees and deploy the THINK Methodology inside organizations.
THINK Diagnostic
A structured engagement that maps an organization's highest-leverage AI opportunity and sequences a 90-day action plan.
THINK School
The 7-level certification path that trains practitioners to build and deploy Digital Employees: from first deployment (Foundations, Builder I-III) through full organizational orchestration (Strategist I-III). Hosted on Skool. No engineering background required.
Digital Employee
An AI system built around an organization's strategic thinking, deployed to execute specific workflows autonomously.
Intelligence Playbook
A live, deployed intelligence product built for the decision-maker who has to act on it, not a PDF or slide deck.
Intelligence Brief
A research-backed capital event analysis by geography and sector that maps 12-18 month windows most organizations miss.
Capital Event
A large-scale federal funding or policy initiative (CHIPS, IRA, BIL, etc.) that creates deployment opportunities for organizations positioned to capture them.
AI Capital Stack
The layered system of capital events, intelligence, playbooks, and Digital Employees that creates compounding organizational advantage.
Blindspot Scanner
The diagnostic process that detects where capital events create deployment gaps the organization cannot see from inside.
Accelerator Program
An institutional deployment program that trains THINK Strategist cohorts and deploys Digital Employees across portfolio businesses.
Tool User
A person who delegates their thinking to AI tools and receives generic output, as opposed to a THINK Strategist.
Section 02
AI Terms Worth Knowing
Terms circulating in AI communities, Reddit threads, and tool comparisons. Defined clearly so you can evaluate what actually matters.
What is an AI Agent?
An AI system that can take actions autonomously, not just answer questions. Agents observe inputs, decide what to do, and execute steps without a human approving each one. In the THINK Methodology, a Digital Employee is a purpose-built AI Agent encoded with the organization's own strategic thinking.
What is Agentic AI?
AI operating in agent mode: running multi-step tasks, calling tools, and making decisions without waiting for a human prompt at each step. Most Reddit discussions about "AI doing real work" are describing agentic behavior. The risk is agentic AI acting on generic logic instead of your logic.
What is an LLM?
Large Language Model. The underlying AI model (Claude, GPT-4, Gemini, etc.) that generates text responses. An LLM alone is not a workflow, a system, or a strategy. It is the engine. What matters is what you build on top of it.
What is Prompt Engineering?
The practice of writing and refining inputs to get better AI outputs. Valuable at the individual level but not scalable. A well-engineered prompt still lives in your head. A THINK Methodology system encodes that prompt logic into a deployed Digital Employee that runs without you.
What is a System Prompt?
The hidden instruction set given to an AI model before a conversation begins. System prompts define how the AI behaves, what it knows, and how it responds. A Digital Employee is built around a structured system prompt that encodes your framework, standards, and decision logic.
What is a Context Window?
The maximum amount of text an AI model can read and respond to in a single session. Larger context windows let AI hold more information at once. In Digital Employee deployments, context management determines how much organizational knowledge the DE can actively use per task.
What is an MCP Server?
Model Context Protocol server. A standard introduced by Anthropic that lets AI models connect to external tools and data sources in a structured way. MCP servers are what power CoWork Plugins, allowing Claude-based Digital Employees to integrate with GitHub, Slack, databases, and other live systems.
What is RAG in AI?
Retrieval-Augmented Generation. A technique where an AI pulls in relevant documents or data at query time, rather than relying solely on what it was trained on. RAG is how Digital Employees can answer questions using your organization's internal knowledge, not just general internet information.
What is an AI Workflow?
A sequence of AI-powered steps designed to complete a repeatable task. An AI workflow is the backbone of a Digital Employee: inputs arrive, logic runs, outputs are produced. The difference between a useful workflow and a generic one is whose thinking shaped the logic.
What is AI Automation?
Using AI to handle tasks that previously required human time. Most AI automation discussions on Reddit focus on tools. The THINK Methodology distinction: automation built on your logic compounds in value over time. Automation built on someone else's template is rented leverage.
What is Claude?
Anthropic's AI model family, used as the foundation for Digital Employees built with the THINK Methodology. Claude is designed with a strong emphasis on safety, nuanced instruction-following, and long-context reasoning, making it well-suited for organizational deployment.
What is an AI Skill?
A reusable, encoded capability installed into a Digital Employee. The equivalent of a trained behavior the DE can execute on demand. Where most people search for "AI plugins," THINK Methodology practitioners build owned Skills that reflect their own judgment and standards, not generic defaults.
What is an AI Plugin?
The term popularized by ChatGPT for add-on AI capabilities. Unlike a plugin installed from a marketplace, a THINK Skill is purpose-built to encode your specific thinking, making the distinction between borrowed intelligence and owned intelligence.
What are CoWork Plugins?
Anthropic's native plugin ecosystem for Claude Code: reusable extensions that add skills, agents, hooks, and MCP server connections to a Claude-powered system. CoWork Plugins allow Digital Employees to integrate with external tools and services without custom engineering, connecting AI workflows to platforms like GitHub, Slack, Notion, and more.
What is AI Fine-Tuning?
Training an AI model on custom data so it performs better on specific tasks. Fine-tuning changes the model itself. The THINK Methodology takes a different path: rather than retraining the model, it encodes organizational thinking into the system prompt and workflow architecture, which is faster to deploy and easier to update.
What is AI Adoption?
The organizational process of integrating AI tools into existing workflows. Most AI adoption stalls at the tool-user stage: people use AI for individual tasks but never build systems. The THINK Methodology is a structured adoption path that moves organizations from tool use to compounding deployment.
Section 03
Common AI Questions
Terms that trend in r/LocalLLaMA, r/ChatGPT, r/artificial, and startup communities. The colloquial language of AI discussions, defined without hype.
What is vibe coding?
A term popularized in early 2025 for the practice of describing what you want to an AI and letting it write the code, without understanding the underlying logic. Andrej Karpathy coined it. The debate on Reddit splits into two camps: productivity unlock vs. technical debt factory. The THINK Methodology parallel: vibe coding is to software what tool use is to AI strategy: fast output, no compound value.
What is AI slop?
Reddit shorthand for low-quality AI-generated content: generic blog posts, filler answers, images with wrong hands. Slop is the output of AI used without judgment. The concern is not that AI writes poorly. It is that organizations deploy AI that sounds authoritative but encodes no actual thinking. A Digital Employee built on your logic is the opposite of slop.
What is a GPT wrapper?
A dismissive Reddit term for apps that do nothing but call the OpenAI API with a thin interface on top. "Just a GPT wrapper" implies no real value-add. The critique is legitimate: most AI products are commodity interfaces on commodity models. The distinction worth making is between a wrapper (different UI, same output) and a system (different logic, encoded thinking, compounding results).
What is AI hallucination?
When an AI model generates confident-sounding false information. The term comes from the model pattern-matching on plausible outputs rather than retrieving verified facts. Hallucination risk is why RAG (Retrieval-Augmented Generation) and grounded system prompts matter: a well-architected Digital Employee constrains the model to your data, not its imagination.
What is a local LLM?
A large language model run on your own hardware instead of a cloud API. Local LLMs (Ollama, LM Studio, etc.) are a major topic on r/LocalLLaMA for privacy, cost, and control reasons. For most organizational deployments, cloud APIs like Claude provide more capability per dollar. Local deployment becomes relevant when data sovereignty is a hard requirement.
What is prompt injection?
An attack where malicious text hidden in content the AI reads causes it to override its instructions. A classic Reddit security thread topic. In agentic AI systems (where the model reads external data and takes real actions), prompt injection is a genuine deployment risk. Defensive system prompt architecture is part of responsible Digital Employee design.
What does "the model is just predicting the next token" mean?
A reductive but technically true description of how LLMs work: they output the statistically likely next word given prior context. Reddit skeptics use this to dismiss AI capabilities. The more useful frame: the outputs of next-token prediction, at sufficient scale and with sufficient context, are capable of reasoning, coding, analysis, and strategic synthesis. The mechanism does not limit the application.
What is a token in AI?
The unit of text an LLM processes, roughly a word or word-fragment. Tokens matter for cost (APIs charge per token), context limits (models have token caps per session), and speed. Most users never need to think about tokens. Architects building Digital Employees do: token efficiency affects both economics and what the system can hold in working memory.
What is RLHF?
Reinforcement Learning from Human Feedback. The training technique used to align AI models with human preferences after pretraining. RLHF is why Claude and ChatGPT follow instructions instead of just predicting raw text. Understanding RLHF matters for knowing what shapes a model's defaults, and why system prompts can redirect behavior away from those defaults.
What is an AI agent loop?
The repeating cycle an AI agent runs: observe inputs, decide on an action, execute the action, observe the result, repeat. Agent loops are what make agentic AI capable of multi-step tasks, and what make poorly-designed agents capable of compounding errors. A well-structured Digital Employee constrains the agent loop with checkpoints, guardrails, and defined exit conditions.
What does "open weights" mean in AI?
An AI model whose parameters (weights) are publicly released, allowing anyone to run, fine-tune, or modify it. Open-weights models (Llama, Mistral, etc.) are the foundation of local LLM use and are heavily discussed on Reddit. The distinction from "open source" is important: open weights means you have the model, not necessarily the training code or data.
What is context stuffing?
The practice of loading as much information as possible into an AI's context window and hoping it uses it well. A Reddit-era shortcut for trying to make AI smarter without building actual architecture. Context stuffing works poorly at scale: more information does not mean better retrieval. RAG and structured system prompts are the architectural alternative.
What is an AI moat?
A durable competitive advantage created through AI deployment. The phrase comes from r/entrepreneur and startup communities debating whether AI businesses can have defensible positions. The THINK Methodology answer: the moat is not the model (anyone can access Claude or GPT-4); it is the encoded organizational thinking, the built workflows, and the compound data advantage that accrues over time.
How do I know which AI model to select?
Three questions narrow it down. First: Is this execution? Use a small model, the fast low-cost tier each major provider offers (e.g. Haiku, Flash, mini). Second: Does this need judgment? Use a mid-tier model with thinking enabled, the standard workhorse tier from Anthropic, OpenAI, or Google. Third: Is this complex? Use a top-tier model with thinking enabled, the flagship model from whichever provider you use. Think of it like hiring: a junior follows checklists, a mid-level interprets guidelines, a senior frames the problem. The practical rule: always enable thinking mode for mid and top tiers. You are paying for deliberation, not just generation. One more lever: if you have a Skill, Project, or domain-specific instructions loaded in, drop one tier. Context collapses complexity. A well-briefed mid-tier model outperforms an unbriefed top-tier model on most organizational tasks.
AI Model Selection Cheatsheet
The full decision framework mapped across Claude, OpenAI, and Google — with practical examples.