How Claude Code and Cowork talk to your other systems
By Iain,
Anthropic’s products have become the most aggressive movers in the race to connect AI to the messy sprawl of software that runs modern businesses. Claude Code talks to GitHub, Sentry, Postgres, and Jira. Cowork reads your local files, pulls data from your CRM, and drafts messages in Slack. The connective tissue for all of it is MCP, the Model Context Protocol, and it’s useful to understand what is happening beneath the surface.
The protocol under the hood
MCP is a client-server protocol built on JSON-RPC 2.0, the same lightweight message-passing standard that has powered everything from Ethereum nodes to VS Code’s language servers for years. The architecture has three participants, each with a distinct role in the message chain. The host is the AI application itself, whether that is Claude Code running in your terminal or Cowork running on your desktop. The client is a component inside the host that maintains a dedicated connection to each external service. The server is the external program that exposes tools and data through a standardised interface.
When you connect Claude Code to, say, a GitHub MCP server, what happens is mechanical. Claude Code spawns a process (or opens an HTTP connection to a remote endpoint), and the two exchange JSON-RPC messages over either stdio pipes or Streamable HTTP. The server advertises its available tools by sending a structured description of each one, including parameter schemas and natural-language descriptions of what each tool does. Claude Code’s host application injects these tool descriptions into the model’s context window alongside your conversation. When the model decides it needs to, for example, list open pull requests, it emits a tool-call request. The host routes the request to the appropriate MCP client, which sends the JSON-RPC call to the server, which hits the GitHub API, and the result flows back the same way.
The important thing to grasp is that the LLM never talks to GitHub directly. It reads tool descriptions, decides which tool to call based on your request, and outputs a structured tool-call message. The host application handles routing and authentication while executing the actual call. The model is making decisions about which tools to invoke and with what parameters, but the plumbing is deterministic code.
How Claude Code wires it together
Claude Code is a terminal application, and its MCP configuration reflects that. You either run claude mcp add through a CLI wizard or, more practically, edit a JSON configuration file directly. The config specifies a server name, the command to launch it, any arguments, and environment variables (typically API keys or access tokens). Servers can be scoped to a single project, to your user account, or to a specific local directory.
Once configured, Claude Code can pull an issue from Jira, read the relevant codebase, write a fix, run tests, and open a pull request on GitHub, all in a single conversational session. The Anthropic documentation lists examples including querying PostgreSQL databases, checking Sentry error logs, and creating Gmail drafts, each powered by a separate MCP server running alongside Claude Code.
What makes Claude Code’s implementation technically interesting is its dual nature. It can act as an MCP client (consuming tools from other servers) and as an MCP server (exposing its own file-editing and command-execution capabilities to other applications via claude mcp serve). This means you can have other AI tools delegate coding tasks to Claude Code, which itself delegates work to GitHub and Postgres MCP servers. It is, to use the technical term, agents all the way down.
There is a catch, though, for anyone planning multi-layered agent architectures. When Claude Code acts as an MCP server, it does not pass through the MCP servers it is connected to as a client. Each layer is isolated by design, which has both security and practical implications. A tool connecting to Claude Code cannot access your GitHub server. This is a deliberate security boundary, though it also means the composability is less seamless than it might first appear.
Cowork and the plugin layer
Cowork takes a different approach to the same underlying protocol. Where Claude Code targets developers comfortable with terminals and JSON config files, Cowork is a desktop agent designed for knowledge workers. It shipped on macOS in January 2026, arrived on Windows a month later, and can read and edit files in your local folders while connecting to external services through MCP connectors.
The plugin architecture is where Cowork gets interesting, because each plugin bundles four things. skills (domain knowledge and workflow instructions written in Markdown), slash commands (quick triggers for specific tasks), MCP connectors (the actual external integrations), and sub-agents (specialised Claude instances configured for parallel subtasks). Anthropic released 11 starter plugins covering sales, legal, finance, marketing, and other departments, all open-source and written in plain Markdown and JSON. No code, no build steps, no infrastructure required to get started.
Beneath the friendly surface, the MCP mechanics are identical to Claude Code. Cowork creates MCP client connections to remote servers, the servers advertise their tools, and the model decides when and how to call them. The difference is in the packaging and the quality of contextual knowledge the model brings to each tool call. A marketing plugin might bundle an MCP connector to HubSpot alongside domain-specific prompts about lead qualification, so the model knows both how to update a CRM record and why it should in a given context.
Anthropic has been expanding Cowork’s connector library aggressively. As of late February 2026, the lineup includes Google Workspace (Drive, Calendar, Gmail), DocuSign, WordPress, FactSet, and others spanning legal research to content management. Interactive apps from Slack, Figma, Asana, and Canva also run through MCP, surfacing embedded UIs inside Claude’s chat interface rather than just returning text.
The security picture is not pretty
Here is where the enthusiasm should meet cold water. MCP is a protocol that enables AI models to call external tools, and the security track record of that combination in production has been, to put it charitably, instructive.
The core problem is prompt injection, and MCP does not solve it. An LLM trusts whatever tokens land in its context window. If a malicious GitHub issue, a poisoned support ticket, or a carefully crafted email contains instructions that appear to be tool-calling directives, the model may obey them. Invariant Labs demonstrated an attack where a malicious public GitHub issue hijacked an AI assistant connected to the official GitHub MCP server, making it pull data from private repositories and leak it into a public pull request. The attack required nothing more sophisticated than text in an issue body.
The catalogue of disclosed vulnerabilities keeps growing, and the pattern is consistent. Three CVEs were filed against Anthropic’s own Git MCP server in January 2026, covering path validation bypass, unrestricted repository initialisation, and argument injection. A critical command-injection bug in mcp-remote, a popular OAuth proxy with over 437,000 downloads, enabled malicious MCP servers to execute arbitrary code on client machines by injecting shell commands into OAuth metadata. A fake “Postmark MCP Server” package was caught silently BCC-ing all email traffic through it to an attacker’s server.
Simon Willison identified the lethal pattern in April 2025 and it has not changed. Any system that combines private data, untrusted content, and external communication tools creates a prompt-injection attack surface that no amount of careful prompt engineering can fully close. The MCP specification says there “SHOULD always be a human in the loop with the ability to deny tool invocations.” Willison’s advice is to treat that SHOULD as a MUST, and after watching the parade of breaches through 2025 and into 2026, it is hard to disagree.
Cowork adds some architectural guardrails that deserve credit. It runs tasks in a virtual machine, scopes file access to folders you explicitly grant, and prompts for confirmation before executing actions via MCP connectors. Anthropic’s own safety guidance recommends avoiding access to financial documents, credentials, or personal records, and suggests creating a dedicated working folder rather than granting broad permissions. They also note that Cowork activity is not captured in audit logs, compliance APIs, or data exports, and explicitly advise against using it for regulated workloads. That is refreshingly honest for a company selling the product, and it tells you exactly where the technology sits on the maturity curve.
What you should think about
For organisations considering this technology, the practical considerations fall into a few categories.
Use MCP connectors for exploratory, ad-hoc work where a human stays in the loop. The sweet spot right now is tasks where a human is in the loop, the stakes are moderate, and the alternative is tedious manual context-switching between applications. Pulling meeting prep from your calendar, CRM, and email into a single brief is a good use case. Automatically executing financial transactions based on AI interpretation of ambiguous inputs is not.
Use traditional Zapier or Make.com workflows for deterministic, high-volume automation. If you need a process that fires hundreds of times a day reliably and does the same thing every time, a trigger-action workflow is cheaper, faster, and more auditable than an AI agent.
Keep Zapier’s MCP server in mind as a bridge. If you need an AI agent to interact with a niche SaaS tool that does not have its own MCP server, Zapier’s library of 8,000 integrations is the fastest path to access. The two-tasks-per-call pricing model means it is not free and adds a proxy layer, but the alternative for most teams is building a custom MCP server from scratch.
Treat every MCP server as untrusted code. This is not paranoia but the lesson from a year of disclosed vulnerabilities, and it should inform every decision you make about what to install and how tightly to scope its permissions. Run local MCP servers in sandboxes where possible. Watch for tool descriptions that change after installation (the “rug pull” attack). And keep a human approval step in the path for anything that writes, sends, or deletes.
The MCP specification is evolving quickly, which is both reassuring and a source of churn. The June 2025 revision added support for OAuth 2.1 and improved authentication primitives. The CoSAI security framework identified over 40 threat categories and proposed controls. But the gap between the spec and what is running in production remains wide, and closing it is the work of the next year, not the next quarter.
If you have stuck with me this far, the main point is that MCP gives AI models a standardised way to interact with external systems, and that standardisation is a genuine step forward from the bespoke integration chaos that preceded it. Claude Code and Cowork are well-engineered hosts that make the protocol accessible to developers and knowledge workers, respectively. The plumbing works, and it works surprisingly well for software this young.
Like this? Get email updates or grab the RSS feed.
More insights:
-
The path to an agent-first web
For three decades, the web has operated on an implicit contract between the people who build websites and the people who visit them. You design pages for human eyes and organise information for human brains, monetising attention through ads, upsells, and sticky navigation patter…
-
Generative engine optimisation: separating sound practice from snake oil
A new three-letter acronym is stalking the marketing industry. Generative Engine Optimisation (GEO) is the practice of making your content visible in AI-generated answers, such as those produced by ChatGPT, Perplexity, Google AI Overviews, and Claude. The term was coined in a 20…
-
Automating your marketing 01: Paid Search Ads
Google has always wanted you to believe that running search ads is simple and not as complex as it actually is. Set a budget (a generous one!), choose some keywords, and let the machine handle the rest. To be fair, the machine has become exceptionally good at certain aspects of …
-
Why AI models hallucinate
In September 2025, OpenAI published a paper that said something the AI industry already suspected but hadn’t quite articulated. The paper, “Why Language Models Hallucinate”, authored by Adam Tauman Kalai, Ofir Nachum, Santosh Vempala, and Edwin Zhang, didn’t just catalogue the p…
-
Received wisdom: classic frameworks under AI pressure 01: David C Baker
David C Baker has spent thirty years telling agency owners something they already suspected but lacked the courage to act on. You are not expensive enough, not focused enough in what you do. You are not sufficiently authoritative with your clients. The issue is not your work. Th…