Every AI tool — ChatGPT, Claude, Gemini, Copilot — is built from the same components. The interfaces look different, but underneath they work the same way. Learn these components once and you can pick up any AI tool and know exactly what it can do.
Most people only use one: the prompt. There’s a lot more underneath.
The 5 Components of Every AI Tool
1. User Prompt
What you type. The question, the instruction, the request. This is the one everyone knows.
2. System Prompt
Instructions the AI reads before it reads yours. They shape its tone, its focus, its boundaries. If you’ve set up ChatGPT’s custom instructions or a Claude project prompt — you’ve already written one.
3. LLM (The Model)
The engine that generates the response. GPT-4o, Claude Opus, Gemini Pro — different models, different strengths. Some are faster. Some are smarter. Some are cheaper. The model you pick changes what the AI can do.
4. Memory
How much the AI retains, and for how long. Two kinds:
- Session memory (context window) — how much the AI holds during one conversation. Most models: 128k–200k tokens (400–600 pages). Claude Opus: up to 1 million tokens (3,000 pages). Go past the limit and the AI starts forgetting earlier parts.
- Persistent memory — what carries over between sessions. Files, instructions, and project context the AI loads at startup so it doesn’t start from zero.
5. Tools
What the AI can connect to beyond the chat. Search the web. Pull from Google Analytics. Send a Slack message. Without tools, the AI only knows what you paste in. With tools, it works from your actual data.
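Under the hood, these five components usually travel together in a single API request. Here is a minimal sketch in Python; the field names and model ID are illustrative, not any specific vendor's API:

```python
# Illustrative only: maps the five components onto a generic
# chat-completion request shape (not a real vendor API).

def build_request(system_prompt, user_prompt, history, tools):
    return {
        "model": "some-model-id",                          # 3. LLM: which engine replies
        "messages": [
            {"role": "system", "content": system_prompt},  # 2. System prompt
            *history,                                      # 4. Session memory
            {"role": "user", "content": user_prompt},      # 1. User prompt
        ],
        "tools": tools,                                    # 5. Tools the model may call
    }

request = build_request(
    system_prompt="You are a concise marketing assistant.",
    user_prompt="Summarise this week's traffic.",
    history=[
        {"role": "user", "content": "Hi"},
        {"role": "assistant", "content": "Hello!"},
    ],
    tools=[{"name": "get_analytics", "description": "Fetch GA4 metrics"}],
)

# The conversation arrives in order: system, then history, then your prompt.
assert [m["role"] for m in request["messages"]] == ["system", "user", "assistant", "user"]
```

Every tool in this article is some arrangement of exactly this structure; the differences are in which fields you're allowed to change.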
Most people stop at #1. Components #4 and #5, memory and tools, are where few have gone yet, and they're the two that change everything.
Going Deeper: Claude Code
Claude Code is Anthropic’s AI assistant. It runs in your terminal, on your desktop, in the browser, or inside your code editor. It exposes all five components above — and adds more on top.
- Model: LLM — Opus, Sonnet, or Haiku. Switch mid-session.
- Memory: CLAUDE.md + Context Window — 1M tokens of session memory, plus persistent project context across sessions.
- Tools: MCP Servers — connects Claude to HubSpot, GA4, Gmail, Notion, and more.
- Workflows: Skills — repeatable processes Claude follows when triggered.
- Rules: Hooks — scripts that run automatically to enforce standards.
- Workers: Subagents — isolated Claude instances for parallel deep tasks.
- Ecosystem: Plugins & Marketplaces — everything above, bundled and shareable.
Model: LLM
Three models. You can switch between them mid-session.
- Opus — most capable. Complex reasoning, long documents, nuanced tasks. 1M token context window (3,000 pages in one session).
- Sonnet — faster, cheaper. 200k context window. Good for straightforward work.
- Haiku — lightest. Quick lookups, simple formatting, fast answers.
I keep Opus for strategy and drafting. Subagents run on Sonnet or Haiku for simpler tasks. Best model where it matters, cheaper model everywhere else.
In most AI tools, you pick one model and that’s it. In Claude Code, the model is a variable you tune per task.
Memory: CLAUDE.md + Context Window
Session memory — Opus holds up to 1M tokens in one conversation. That’s a full brand strategy, a week of analytics data, and several long threads — at the same time. When the context fills up, Claude Code automatically compresses earlier parts to make room. Long sessions don’t hit a wall.
Persistent memory — a file called CLAUDE.md. You write it once. Claude reads it at the start of every session.
Mine includes my brand voice rules, project folder structure, tool configurations, and details like “em dashes in blog posts, plain hyphens in WhatsApp messages.” Every session starts with Claude already knowing all of this.
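A CLAUDE.md is just plain Markdown. A simplified, made-up example to show the shape; the headings and details below are illustrative, not a required schema:

```markdown
# CLAUDE.md

## Business
Solo marketing consultancy. Clients are small e-commerce brands.

## Brand voice
- Short sentences. No jargon.
- Em dashes in blog posts, plain hyphens in WhatsApp messages.

## Project structure
- /clients/<name>/ holds one folder per client
- /content/drafts/ holds blog drafts before review

## Tools
- GA4 and Search Console are connected via MCP. Prefer live data over estimates.
```

If a preference is written here, you never have to repeat it in a prompt.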
Most AI tools give you one type of memory or the other. Claude Code gives you both.
Tools: MCP Servers
MCP (Model Context Protocol) is an open standard that connects Claude to external platforms. Once configured, Claude reads and writes to your actual, live data.
What I have connected right now:
- Google Analytics — traffic data, reports, real-time metrics
- Google Ads — campaign performance, keywords
- Meta Ads — campaign insights, audiences
- HubSpot — contacts, CRM records
- Gmail — threads, drafts, labels
- Notion — pages, databases, content
- WhatsApp — chats, messages
- Google Search Console — search performance, URL inspection
When I ask “what’s this week’s traffic?”, Claude pulls the numbers from GA4 directly. No tab-switching. No screenshots. No pasting.
If you only set up one thing from this list, start here.
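In Claude Code, MCP servers are registered in a JSON config (a project-level `.mcp.json`, or via the `claude mcp add` command). A sketch of the shape, with placeholder server names and URLs — the actual packages depend on which integrations you install, so check each server's own docs:

```json
{
  "mcpServers": {
    "google-analytics": {
      "command": "npx",
      "args": ["-y", "example-ga4-mcp-server"]
    },
    "notion": {
      "type": "http",
      "url": "https://example.com/notion-mcp"
    }
  }
}
```

Local servers run as a command on your machine; remote ones are reached over HTTP. Either way, once registered they show up as tools Claude can call.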
Workflows: Skills
A skill is a text file (SKILL.md) with step-by-step instructions. Claude follows that process every time you trigger it.
I have a skill called /blog-post. When I type that, Claude runs my SEO research steps, follows my outline structure, writes in my brand voice (from CLAUDE.md), and adds meta descriptions in my format. I wrote it once. Claude runs it the same way every time.
Skills can also trigger automatically when Claude recognises your request matches one. As of early 2026, there are over 4,200 community skills you can install with one command.
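A SKILL.md is YAML frontmatter plus a checklist. A simplified sketch of what a /blog-post skill could look like — the steps here are illustrative, not my actual file:

```markdown
---
name: blog-post
description: Research, outline, and draft an SEO blog post in the house voice.
---

# Blog post workflow

1. Run keyword research on the topic and pick a primary keyword.
2. Draft an outline: H1, 4–6 H2 sections, FAQ block.
3. Write the draft in the brand voice defined in CLAUDE.md.
4. Add a meta title (max 60 chars) and meta description (max 155 chars).
```

The `description` field is what lets Claude match your request to the skill automatically; the body is the checklist it follows.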
Rules: Hooks
A hook is a script that runs automatically at a specific point in Claude’s workflow. You don’t invoke it. It just runs.
The difference: you ask for a skill. A hook runs whether you ask or not.
I have one that checks every client-facing message against my communication rules before I see the draft. Another runs a brand voice check on every blog edit. I never have to remember. They just happen.
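Hooks are declared in Claude Code's settings file, keyed to workflow events. A sketch of the pattern (the script path is hypothetical, and the exact event names and schema may differ by version — check the hooks reference):

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          {
            "type": "command",
            "command": "./scripts/check-brand-voice.sh"
          }
        ]
      }
    ]
  }
}
```

Here, every time Claude writes or edits a file, the script runs — no prompting required. If the script fails, Claude sees the error and can correct course.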
Workers: Subagents
When Claude gets a complex task, it can spin up separate instances of itself — each in its own isolated space. One researches. One writes. One reviews. They don’t clutter your main conversation.
I have subagents for account strategy, content creation, SEO analysis, and paid media auditing. Each has its own instructions and tools. When I say “research this and write a blog post,” one handles research, another handles writing. Results come back as clean summaries.
The benefit: parallel work instead of sequential, and each worker stays focused.
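Subagents are defined as Markdown files with frontmatter (in `.claude/agents/`). A sketch of one — the name, tool list, and instructions are illustrative, and field support may vary by version:

```markdown
---
name: seo-analyst
description: Analyses search performance and suggests on-page fixes.
tools: Read, Grep, WebSearch
model: sonnet
---

You are an SEO analyst. Pull search data, identify declining queries,
and report findings as a short prioritised list. Do not edit files.
```

Note the `model: sonnet` line — this is how you run a worker on a cheaper model while the main session stays on Opus, and the restricted `tools` list is what keeps it focused.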
Ecosystem: Plugins & Marketplaces
A plugin bundles skills, hooks, and MCP servers into one installable package. A marketplace is where you find them.
Claude Code ships with the official Anthropic marketplace built in — type /plugin, Discover tab, and you’ll see integrations for GitHub, Slack, Figma, Notion, Sentry, and more. Beyond that, dozens of community marketplaces with thousands of plugins. One command to add, one command to install.
If you build workflows that work well, you can package them as plugins and share through a private marketplace. Your whole team gets the same setup without building it individually.
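Inside a Claude Code session, that flow is a pair of slash commands. The repo and plugin names below are placeholders:

```
/plugin marketplace add your-org/claude-plugins
/plugin install blog-toolkit@your-org
```

The first registers a marketplace (typically a GitHub repo); the second installs a plugin from it, which brings its skills, hooks, and MCP servers along in one step.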
What This Looks Like in Practice
Morning. I ask Claude to pull this week’s GA4 data and summarise traffic. MCP connects to GA4. A skill formats the summary. Output goes to Notion — also via MCP.
Midday. I type /blog-post "AI tools for small business". The skill runs keyword research, drafts in my voice (from CLAUDE.md), adds SEO metadata. A hook checks brand guidelines before I see it.
Afternoon. Claude drafts a WhatsApp message about a client’s campaign results. It pulls data from Google Ads via MCP, writes in first person (per my CLAUDE.md), and a hook validates tone before it sends.
No pasting context. No re-explaining processes. No copying between tools.
Getting Started
Step 1: Write a CLAUDE.md. Describe your business, brand voice, and preferences. Claude reads it every session.
Step 2: Browse the marketplace. Type /plugin → Discover tab. Install what’s useful.
Step 3: Try a skill. Install a community skill or write your own. If you can write a checklist, you can write a skill.
Step 4: Connect your tools. Add MCP servers for the platforms you use.
Most of what you need already exists in the ecosystem. The work is choosing what matters for your workflow and plugging it in.
Claude Code is available as a CLI, desktop app, web app, and IDE extension. The plugin system and marketplace are included.