DATE Dec 27 2025 | READ 10 MIN

Context is King: The New Way to Use AI


Are you still using AI like a smarter Google? Asking questions and hoping for useful answers?

I was too. And I was missing the point.

The Problem: AI Without Context

Let me be honest: ChatGPT, Claude, and other LLMs are incredible. They can write code, explain complex topics, and help you think through problems. But there’s a fundamental limitation that most people don’t think about.

They know nothing about you.

They don’t know about your work. They don’t know what meetings you have scheduled. They don’t know what tasks are piling up on your to-do list. They don’t know the codebase you’re working on or the documents your team shares. They don’t know about you.

Every company today has its own “internal ChatGPT”: a wrapper with the company logo, integrated into Slack, answering questions about documentation…

But it only answers. It doesn’t do anything.

The Shift: Context is King

Here’s what I’ve learned: the new way to use AI isn’t about crafting better prompts or learning magic techniques. It’s about giving AI access to your world.

When AI has context about your environment — your emails, your calendar, your documents, your tools — something magical happens. It stops just answering and starts acting.

Instead of asking “How should I organize my emails?” you can say “Analyze my emails from today, summarize the important ones, and create tasks for anything that requires action.”

And it just… does it.

BEFORE

AI Without Context

Generic answers that apply to anyone

  • You copy-paste information manually
  • You adapt generic answers to your situation
  • You do the actual work yourself
  • AI as “fancy autocomplete”

AFTER

AI With Context

Personalized actions based on your world

  • AI understands your environment
  • You receive specific, actionable responses
  • AI automates real tasks end-to-end
  • AI as a true assistant

MCP: The Protocol Making This Possible

If you haven’t heard about MCP (Model Context Protocol), it’s worth understanding. In simple terms, MCP is a standard way to connect AI models to external tools and data sources.

Think about it: before MCP, every integration was custom. Want Claude to access your database? Write a plugin. Want it to send emails? Another plugin. Each one with its own API, authentication, data format.

MCP standardizes all of this.
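Concretely, “standardized” means every server speaks the same JSON-RPC shapes. Here is a sketch of the exchange, with shapes following the MCP specification; the `send_email` tool is a hypothetical example, not a real server's API:

```typescript
// JSON-RPC request a client sends to any MCP server to discover its tools.
const listToolsRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/list",
  params: {},
};

// A response shaped like the MCP spec; "send_email" is a made-up tool.
// Every tool carries a name, a description, and a JSON Schema for its input.
const listToolsResponse = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    tools: [
      {
        name: "send_email",
        description: "Send an email from the user's account",
        inputSchema: {
          type: "object",
          properties: {
            to: { type: "string" },
            subject: { type: "string" },
            body: { type: "string" },
          },
          required: ["to", "subject", "body"],
        },
      },
    ],
  },
};
```

Because every server answers in this same shape, a single client implementation works against Gmail, Notion, GitHub, or anything else.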

How MCP Works

🔌 1. Connection

The client connects to the MCP Server via SSE (Server-Sent Events) or stdio.

🔍 2. Discovery

The client lists all available tools (tools/list).

💬 3. Prompt

Tools are sent to the LLM along with the user's message.

🛠️ 4. Tool Use

The LLM decides which tool to use and returns a tool_use with arguments.

5. Execution

The client executes the tool on the MCP Server and returns the result to the LLM.

The key point here is: the LLM decides which tool to use. You just pass the list of available tools and it automatically chooses based on the conversation context.
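Passing the tool list is just a mapping step. A sketch of how discovered MCP tools become the `tools` parameter of the Anthropic Messages API (note the rename from MCP's `inputSchema` to Anthropic's `input_schema`; the `create_task` tool is a hypothetical example):

```typescript
// An MCP tool definition as returned by tools/list.
interface McpTool {
  name: string;
  description?: string;
  inputSchema: Record<string, unknown>;
}

// Map MCP tool definitions to the shape the Anthropic Messages API
// expects in its `tools` parameter (camelCase inputSchema -> input_schema).
function toAnthropicTools(tools: McpTool[]) {
  return tools.map((t) => ({
    name: t.name,
    description: t.description ?? "",
    input_schema: t.inputSchema,
  }));
}

// Hypothetical discovered tool, for illustration only.
const discovered: McpTool[] = [
  {
    name: "create_task",
    description: "Create a task in the user's task manager",
    inputSchema: { type: "object", properties: { title: { type: "string" } } },
  },
];

const anthropicTools = toAnthropicTools(discovered);
```

The LLM then sees every tool's name, description, and input schema, and picks one on its own based on the conversation.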

In practice:

  • Connect Gmail → AI can read, search, summarize, and send your emails
  • Connect Calendar → AI can check availability, schedule meetings, and block time
  • Connect Notion → AI can create pages, update databases, and organize your notes
  • Connect GitHub → AI can review PRs, create issues, and check build status

Think of it as a universal adapter. Instead of building custom integrations for every AI tool, MCP provides a common language. Any MCP-compatible client (like Cursor, Claude Desktop, or custom agents) can connect to any MCP server and use its capabilities.

If you want to dive deeper into MCP, check out the official documentation and the Anthropic announcement.

TrampoAI: From Theory to Practice

To understand how this works in practice, I built TrampoAI — a complete MCP client with a chat interface.

Architecture

┌─────────────┐     ┌─────────────┐     ┌─────────────────┐
│   React     │────▶│   Express   │────▶│   MCP Servers   │
│   (Vite)    │◀────│   Backend   │◀────│   (via SSE)     │
└─────────────┘     └──────┬──────┘     └─────────────────┘
                           │
                           ▼
                    ┌─────────────┐
                    │  Anthropic  │
                    │    API      │
                    └─────────────┘

The flow is simple:

  1. Frontend sends a message to the backend
  2. Backend collects tools from all connected MCP Servers
  3. Backend sends message + tools to Anthropic API
  4. Claude responds with tool_use if it needs to execute something
  5. Backend executes the tool on the corresponding MCP Server
  6. Backend sends the tool_result back to Claude
  7. Claude generates the final response
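The steps above form a loop the backend runs until the model stops asking for tools. A minimal sketch: the LLM and the tool executor here are stand-ins (a real backend would call the Anthropic API and an MCP client), but the content-block shapes mirror Anthropic's tool_use / tool_result blocks:

```typescript
type ContentBlock =
  | { type: "text"; text: string }
  | { type: "tool_use"; id: string; name: string; input: Record<string, unknown> }
  | { type: "tool_result"; tool_use_id: string; content: string };

type Message = { role: "user" | "assistant"; content: ContentBlock[] };

// Stand-in for the Anthropic API: first turn requests a tool, then answers.
function fakeLLM(messages: Message[]): ContentBlock[] {
  const hasToolResult = messages.some((m) =>
    m.content.some((b) => b.type === "tool_result")
  );
  if (!hasToolResult) {
    return [{ type: "tool_use", id: "tu_1", name: "list_emails", input: { limit: 10 } }];
  }
  return [{ type: "text", text: "Here is your summary." }];
}

// Stand-in for executing a tool on the corresponding MCP server.
function executeTool(name: string, input: Record<string, unknown>): string {
  return `executed ${name} with ${JSON.stringify(input)}`;
}

// The loop: keep calling the LLM until it stops emitting tool_use blocks.
function runAgent(userText: string): string {
  const messages: Message[] = [
    { role: "user", content: [{ type: "text", text: userText }] },
  ];
  for (;;) {
    const reply = fakeLLM(messages);
    messages.push({ role: "assistant", content: reply });
    const toolUses = reply.filter(
      (b): b is Extract<ContentBlock, { type: "tool_use" }> => b.type === "tool_use"
    );
    if (toolUses.length === 0) {
      const text = reply.find((b) => b.type === "text");
      return text && text.type === "text" ? text.text : "";
    }
    // Execute each requested tool and feed the results back as a user turn.
    const results: ContentBlock[] = toolUses.map((t) => ({
      type: "tool_result",
      tool_use_id: t.id,
      content: executeTool(t.name, t.input),
    }));
    messages.push({ role: "user", content: results });
  }
}

const answer = runAgent("Summarize my last 10 emails");
```

Swap the two stand-ins for real API calls and this is, structurally, the whole backend.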

Stack

  • Frontend: React 18 + Vite + TypeScript
  • Backend: Express + TypeScript
  • MCP SDK: @modelcontextprotocol/sdk
  • LLM: Claude Sonnet 4
  • Styling: Tailwind CSS
  • Icons: Lucide Icons

Demo: One Prompt, Four Tools

Here’s TrampoAI in action. I connected three services:

My Connected Services

👤 You → 🤖 AI Agent → 🔗 MCP Gateway → 📧 Gmail · 📝 Notion · 📅 Calendar

Then I asked something like this:

“Read my last 10 emails, summarize them, find the ones I need to take action on, create tasks in Notion, and block 1 hour at 3pm today on my calendar to work on them.”

Watch what happens:

With one prompt, Claude automatically:

  1. Called the Gmail MCP to read my emails
  2. Summarized and filtered which ones needed action
  3. Called the Notion MCP to create tasks
  4. Called the Google Calendar MCP to block time

No manual orchestration. No hardcoded workflows. The LLM figured out which tools to use and in what order.

This is the difference between AI that informs and AI that acts. I didn’t ask for information. I asked for action. And because the AI had context about my email, my calendar, and my task management system, it could actually deliver.

The Scaling Problem: MCP Mess

OK, it works. But what happens when you have 10 MCP Servers? 50? 100?

Naive MCP

Each client connects to each MCP Server

  • N clients × M servers = N×M connections
  • Each client manages its own credentials
  • No centralized access control
  • No unified observability

The Problem

This doesn't scale

  • Connection explosion
  • Scattered credentials
  • Who accessed what?
  • How much are we spending?

Building TrampoAI, I learned firsthand that managing multiple MCP Servers is a nightmare.

Each server has its own connection, its own tools, its own credentials. There's no way to know who executed what, how much it cost, or whether someone is abusing the system.

The Solution: MCP Mesh

This is where Deco’s MCP Mesh comes in.

The idea is simple: instead of each client connecting directly to each MCP Server, you have a control plane in the middle.

MCP Mesh Architecture

💻 Clients → 🔀 MCP Mesh → ⚙️ MCP Servers

MCP Mesh provides:

1. One Endpoint for Everything

No matter how many MCP Servers you have, your clients connect to a single endpoint. The Mesh routes to the right tools.
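The core routing idea can be sketched in a few lines. This is an illustration of mesh-style routing, not Deco's actual implementation: tools from every upstream server are exposed under one endpoint, namespaced by server name, and calls are routed back by prefix.

```typescript
// A minimal upstream interface: what the mesh needs from each MCP server.
interface UpstreamServer {
  listTools(): string[];
  callTool(name: string, input: unknown): string;
}

class MeshRouter {
  constructor(private servers: Map<string, UpstreamServer>) {}

  // Aggregate every upstream tool under a "server.tool" name.
  listTools(): string[] {
    const all: string[] = [];
    for (const [serverName, server] of this.servers) {
      for (const tool of server.listTools()) {
        all.push(`${serverName}.${tool}`);
      }
    }
    return all;
  }

  // Route a namespaced call back to the right upstream server.
  callTool(namespaced: string, input: unknown): string {
    const dot = namespaced.indexOf(".");
    const serverName = namespaced.slice(0, dot);
    const tool = namespaced.slice(dot + 1);
    const server = this.servers.get(serverName);
    if (!server) throw new Error(`Unknown server: ${serverName}`);
    return server.callTool(tool, input);
  }
}

// Hypothetical upstreams, for illustration only.
const mesh = new MeshRouter(
  new Map<string, UpstreamServer>([
    ["gmail", { listTools: () => ["send_email"], callTool: (n) => `gmail ran ${n}` }],
    ["notion", { listTools: () => ["create_page"], callTool: (n) => `notion ran ${n}` }],
  ])
);
```

Clients only ever see the mesh; adding a tenth or hundredth server changes nothing on their side.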

2. RBAC and Policies

Access control at the control plane level. Who can use which tool? Does it need approval to execute? All configurable.
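A toy version of the kind of policy check a control plane can enforce before any tool executes. The roles, tool names, and policy shape here are invented for illustration; Deco's actual policy model may differ:

```typescript
type Decision = "allow" | "deny" | "needs_approval";

interface Policy {
  allowedTools: Set<string>;
  approvalRequired: Set<string>;
}

// Hypothetical per-role policies, configured centrally at the mesh.
const policies: Record<string, Policy> = {
  engineer: {
    allowedTools: new Set(["github.create_issue", "gmail.send_email"]),
    approvalRequired: new Set(["gmail.send_email"]),
  },
  intern: {
    allowedTools: new Set(["github.create_issue"]),
    approvalRequired: new Set(),
  },
};

// Deny unknown roles and unlisted tools; flag tools that need human sign-off.
function authorize(role: string, tool: string): Decision {
  const policy = policies[role];
  if (!policy || !policy.allowedTools.has(tool)) return "deny";
  return policy.approvalRequired.has(tool) ? "needs_approval" : "allow";
}
```

Because every call passes through one place, the check happens once, centrally, instead of being reimplemented in every client.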

3. Observability

Unified logs for all executions. Latency, errors, costs — all in one place.

4. Smart Routing

When you have many tools, sending all of them to the LLM gets expensive (tokens) and slow. The Mesh has intelligent selection strategies.
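One simple selection strategy, as an illustration of the idea rather than the Mesh's actual algorithm: score each tool by keyword overlap with the user's message and send only the top-k to the LLM, so token cost stays flat as the catalog grows.

```typescript
interface ToolInfo {
  name: string;
  description: string;
}

// Score tools by how many words they share with the message; keep the top k.
function selectTools(message: string, tools: ToolInfo[], k: number): ToolInfo[] {
  const words = new Set(message.toLowerCase().split(/\W+/).filter(Boolean));
  const scored = tools.map((tool) => {
    const toolWords = `${tool.name} ${tool.description}`.toLowerCase().split(/\W+/);
    const score = toolWords.filter((w) => words.has(w)).length;
    return { tool, score };
  });
  scored.sort((a, b) => b.score - a.score);
  return scored.slice(0, k).map((s) => s.tool);
}

// Hypothetical tool catalog, for illustration only.
const catalog: ToolInfo[] = [
  { name: "send_email", description: "send an email message" },
  { name: "create_event", description: "create a calendar event" },
  { name: "create_page", description: "create a notion page" },
];

const picked = selectTools("block time on my calendar", catalog, 1);
```

Real systems typically use embeddings rather than keyword overlap, but the shape is the same: filter before you prompt.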

5. No Vendor Lock-in

Open-source, self-hosted. Runs on your Kubernetes, on your infrastructure. You’re in control.

# Run locally in 1 minute
npx @decocms/mesh

I work at Deco, so I’m obviously biased. But I genuinely believe this is the direction things are going. As MCP adoption grows, the need for proper infrastructure to manage it becomes critical — just like how APIs needed API gateways, MCP needs its own management layer.

A Glimpse of the Future

Let me share something I’ve been thinking about a lot.

I imagine a future where everyone has their own personal MCP mesh — a web of connections to all your contexts, your information, your tools, your life. And alongside it, your own AI agent. Not a product you use. A companion that knows you.

Think about it: you’re walking around, talking to your AI about an idea you just had. You tell it to save a note. You ask about that message your friend sent yesterday. You wonder out loud what your week looks like, and it just tells you — because it knows. It has access to everything you’ve given it permission to see.

Your emails, your calendar, your notes, your messages, your documents, your photos, your music, your finances — all connected, all contextual, all yours.

This isn’t science fiction. The pieces are already here. MCP is the protocol. The mesh is the infrastructure. The agents are getting smarter every day. What’s missing is just time — and adoption.

I truly believe we’re heading toward a world where your AI isn’t just a tool you open when you need something. It’s always there, always aware of your context, always ready to help. Like having a brilliant friend who never forgets anything and is genuinely trying to make your life easier.

That’s the future I’m excited about. And honestly? We’re closer than most people think.

What I Learned

Building TrampoAI gave me a firsthand look at how things work under the hood.

Key Takeaways

1. MCP is simple

The protocol itself is elegant: SSE + JSON-RPC. Easy to implement.

🎯 2. Context changes everything

A smaller model with your context beats a bigger model without it.

⚠️ 3. Managing it is complex

Multiple servers, credentials, connections... it quickly becomes chaos.

🔀 4. You need a Mesh

For production, you need a control plane. Simple as that.

Conclusion

Here’s my take: the next leap in AI usefulness won’t come from bigger models or better training. It will come from better context.

A model with perfect knowledge and zero context about your life is less useful than a smaller model that knows exactly what you’re working on, what you need to do, and what tools you have available.

We’re moving from “AI as oracle” to “AI as assistant.” An oracle answers questions. An assistant knows your world and acts within it.

Your AI is only as useful as the context you give it.


If you want to understand how this works in practice, check out TrampoAI. It’s open-source and you can run it locally in 5 minutes.

And if you want to explore contextual AI yourself, start small. Pick one tool you use daily — maybe your calendar or your notes app — and find an MCP server for it. Connect it to your AI client and see what becomes possible.

The shift from generic to contextual AI isn’t coming. It’s already here.

Let me know what you think on my socials. I’d love to hear how you’re using contextual AI in your workflows!

END OF DOCUMENT