The TypeScript MCP SDK gets 19.4 million npm downloads per week. That is more than Express. More than Next.js. Developers are building MCP servers, and most of them are choosing TypeScript to do it.
We covered building an MCP server in Python already. TypeScript takes a different approach. Instead of decorated functions, you get Zod schemas for type-safe tool inputs, two API levels depending on how much control you want, and a transport layer that works on Cloudflare Workers without changes.
This tutorial builds a complete MCP server from scratch. By the end, you will have a working server with typed tools, tested with MCP Inspector, connected to OpenClaw, and ready to deploy to Cloudflare Workers or Docker. Every code example runs as-is.
Project setup and first server
You need Node.js 18 or higher. The SDK also runs on Bun and Deno, but Node has the widest MCP tooling support.
Create a new project:
mkdir my-mcp-server && cd my-mcp-server
npm init -y
npm install @modelcontextprotocol/sdk zod
npm install -D typescript @types/node
npx tsc --init
Two dependencies: the official MCP SDK (v1.27.1 as of February 2026, 11,900+ GitHub stars) and Zod for schema validation. TypeScript handles the rest.
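For reference, a tsconfig.json along these lines works for this tutorial (a sketch; adjust strictness and paths to taste):

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "Node16",
    "moduleResolution": "Node16",
    "outDir": "./build",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true
  },
  "include": ["src/**/*"]
}
```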
Update your tsconfig.json to target ES2022 with Node module resolution and set outDir to "./build". Also add "type": "module" to your package.json so the compiled output runs as ES modules; the examples below use top-level await, which is not available in CommonJS. Then create src/index.ts:
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
const server = new McpServer({
name: "my-first-server",
version: "1.0.0"
});
server.tool(
"add",
{ a: z.number().describe("First number"), b: z.number().describe("Second number") },
async ({ a, b }) => ({
content: [{ type: "text", text: String(a + b) }]
})
);
const transport = new StdioServerTransport();
await server.connect(transport);
Build and run:
npx tsc
node build/index.js
That is a working MCP server. The McpServer class is the high-level API. You call .tool() with a name, a Zod schema object for the parameters, and an async handler. The SDK converts Zod schemas into JSON Schema for the MCP protocol, validates incoming requests against them, and gives you typed parameters in the handler.
The StdioServerTransport communicates over stdin/stdout. Every MCP client that supports local servers (Claude Desktop, OpenClaw, Cursor) speaks this transport. One important rule: all logging must go to stderr, not stdout. Stdout is reserved for MCP protocol messages. A stray console.log will corrupt the message stream.
// Wrong - breaks the protocol
console.log("debug info");
// Right - goes to stderr
console.error("debug info");
Typed tools with Zod: where TypeScript pays off
The reason to pick TypeScript over Python for MCP servers is Zod. Every tool input gets validated at runtime and at compile time. If you define a tool that expects a number and someone sends a string, the SDK rejects it before your code runs.
A more realistic tool that searches a product catalog:
server.tool(
"search_products",
{
query: z.string().min(2).max(100).describe("Search query"),
category: z.enum(["electronics", "clothing", "books", "all"])
.default("all")
.describe("Product category filter"),
maxResults: z.number().int().positive().max(50).default(10)
.describe("Maximum results to return"),
inStock: z.boolean().default(true)
.describe("Only show in-stock items")
},
async ({ query, category, maxResults, inStock }) => {
// TypeScript knows: query is string, category is one of the enum values,
// maxResults is number, inStock is boolean
const results = await searchCatalog(query, category, maxResults, inStock);
return {
content: [{ type: "text", text: JSON.stringify(results, null, 2) }]
};
}
);
The .describe() calls on each field are not just documentation. MCP clients read them to decide when and how to use each parameter. OpenClaw uses these descriptions to figure out what to pass. Write them like you are explaining the parameter to someone who has never seen your code.
For error handling, return isError: true in the response instead of throwing:
server.tool(
"fetch_url",
{ url: z.string().url().describe("URL to fetch") },
async ({ url }) => {
try {
const response = await fetch(url);
if (!response.ok) {
return {
content: [{ type: "text", text: `HTTP ${response.status}: ${response.statusText}` }],
isError: true
};
}
const text = await response.text();
return { content: [{ type: "text", text: text.slice(0, 5000) }] };
} catch (error) {
return {
content: [{ type: "text", text: `Failed to fetch: ${error}` }],
isError: true
};
}
}
);
The agent reads the error message and can retry with different parameters or try an alternative approach. An unhandled exception just kills the tool call with no context.
Tip: The SDK also has a low-level Server class where you register raw request handlers with setRequestHandler(ListToolsRequestSchema, ...). Use it when you need full control over capability negotiation or custom protocol extensions. For most servers, McpServer is enough.
Adding resources and prompts
Tools handle actions. Resources expose data. Prompts provide reusable templates. Most MCP servers only use tools, but the other two primitives are useful in specific cases.
A resource lets the MCP client pull data into context without calling a function:
import { ResourceTemplate } from "@modelcontextprotocol/sdk/server/mcp.js";
server.resource(
"app-settings",
"config://app-settings",
{ description: "Current application configuration" },
async (uri) => ({
contents: [{
uri: uri.href,
text: JSON.stringify(getAppConfig(), null, 2),
mimeType: "application/json"
}]
})
);
// Dynamic resource with URI template
server.resource(
"table-schema",
new ResourceTemplate("db://schema/{tableName}", { list: undefined }),
{ description: "Database table schema" },
async (uri, { tableName }) => ({
contents: [{
uri: uri.href,
text: await getTableSchema(String(tableName)),
mimeType: "text/plain"
}]
})
);
The first argument is a registration name, the second is the URI (or a ResourceTemplate for parameterized URIs), and the read callback receives the resolved URI plus any template variables.
Prompts work like saved templates that the client can invoke with parameters:
server.prompt(
"code_review",
{ language: z.string(), code: z.string() },
({ language, code }) => ({
messages: [{
role: "user",
content: {
type: "text",
text: `Review this ${language} code for bugs, security issues, and style:\n\n\`\`\`${language}\n${code}\n\`\`\``
}
}]
})
);
When to use which: if the AI agent should do something (call an API, query a database, create a file), make it a tool. If the agent needs to read something (config files, schemas, documentation), make it a resource. If the user wants a standardized interaction pattern (review templates, analysis frameworks), make it a prompt.
Testing with MCP Inspector
The MCP Inspector (v0.21.1) is the official testing tool. It gives you a web UI where you can discover tools, fill in parameters, and call them interactively.
npx @modelcontextprotocol/inspector node build/index.js
This starts the Inspector UI on port 6274 and a proxy on port 6277. Open http://localhost:6274 in your browser.
Three things to know:
- Do not start your server first. The Inspector spawns it as a child process. If you start the server manually and then run the Inspector, you will get connection errors because the Inspector cannot find the stdin/stdout pipes.
- Use full paths. npx @modelcontextprotocol/inspector node ./build/index.js works. Relative paths without ./ sometimes don't.
- Check the log panel. Every JSON-RPC message between the Inspector and your server is logged. When a tool call fails, the raw request and response are there.
The Inspector also supports CLI mode for non-interactive testing: npx @modelcontextprotocol/inspector --cli node build/index.js --method tools/list prints the tool list as JSON and exits. Useful for CI pipelines where you want to verify your server starts correctly and exposes the expected tools.
Connecting to Claude Desktop and OpenClaw
Once the Inspector shows your tools working, connect to a real client.
Claude Desktop
Edit your Claude Desktop config:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
{
"mcpServers": {
"my-ts-server": {
"command": "node",
"args": ["/absolute/path/to/build/index.js"]
}
}
}
Restart Claude Desktop fully (quit the app, not just close the window). Your tools show up in the MCP tools panel.
OpenClaw
Add the server to ~/.openclaw/openclaw.json:
{
"mcpServers": {
"my-ts-server": {
"command": "node",
"args": ["/absolute/path/to/build/index.js"],
"transport": "stdio"
}
}
}
Or use the CLI:
openclaw mcp add --transport stdio my-ts-server node /absolute/path/to/build/index.js
OpenClaw discovers your tools from their Zod schemas. The tool name, parameter descriptions, and types all come from what you defined in the .tool() call. Clear descriptions lead to better tool selection by the agent.
For servers that need API keys, pass environment variables in the config:
{
"mcpServers": {
"my-ts-server": {
"command": "node",
"args": ["/absolute/path/to/build/index.js"],
"env": {
"API_KEY": "${MY_API_KEY}",
"DATABASE_URL": "${DATABASE_URL}"
}
}
}
}
The ${VAR} syntax reads from your shell environment. Never hardcode secrets in the config file. For more on securing your MCP server connections, see our OpenClaw MCP security guide.
Going remote: Streamable HTTP transport
Stdio works for local servers. For remote deployments, the MCP spec (version 2025-03-26) defines Streamable HTTP as the standard transport. It replaced the older SSE transport, which had problems with resumable connections and required long-lived server sessions.
Streamable HTTP uses a single HTTP endpoint. Clients send JSON-RPC messages via POST. The server responds with either a direct JSON response or upgrades to SSE for streaming. GET requests with SSE handle server-initiated notifications.
Add Streamable HTTP alongside stdio in the same server:
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";
import { createServer } from "http";
const server = new McpServer({ name: "my-server", version: "1.0.0" });
// ... register tools, resources, prompts ...
const args = process.argv.slice(2);
if (args.includes("--http")) {
const transport = new StreamableHTTPServerTransport({ sessionIdGenerator: undefined });
const httpServer = createServer(async (req, res) => {
await transport.handleRequest(req, res);
});
httpServer.listen(3000, () => console.error("MCP server on http://localhost:3000"));
await server.connect(transport);
} else {
const transport = new StdioServerTransport();
await server.connect(transport);
}
Run locally with node build/index.js (stdio) or node build/index.js --http (HTTP on port 3000). Setting sessionIdGenerator: undefined puts the transport in stateless mode, which works for most use cases and is required for serverless platforms.
Warning: Remote MCP servers need authentication. The MCP spec requires OAuth 2.1 with PKCE for all remote transports. Without it, anyone who finds your endpoint can call your tools. Cloudflare Workers have built-in OAuth support via workers-oauth-provider. For self-hosted servers, you will need to add middleware.
Deploying to Cloudflare Workers and Docker
Cloudflare Workers
Cloudflare has native MCP support through their Agents SDK. The newer approach uses the McpAgent class, which runs as a Durable Object with built-in state management. Companies like Atlassian, PayPal, Sentry, and Webflow have deployed MCP servers this way.
A minimal Worker-based MCP server:
import { McpAgent } from "agents/mcp";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";
export class MyMCP extends McpAgent {
server = new McpServer({ name: "worker-mcp", version: "1.0.0" });
async init() {
this.server.tool(
"hello",
{ name: z.string() },
async ({ name }) => ({
content: [{ type: "text", text: `Hello from the edge, ${name}!` }]
})
);
}
}
Deploy with wrangler deploy. The Agents SDK handles Streamable HTTP transport and SSE automatically. Pair it with workers-oauth-provider for authentication.
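The Durable Object behind McpAgent also needs a binding in your Wrangler config. A sketch (the binding name MCP_OBJECT and the migration details are assumptions; match them to Cloudflare's current MCP server template):

```toml
name = "worker-mcp"
main = "src/index.ts"
compatibility_date = "2025-03-01"

[[durable_objects.bindings]]
name = "MCP_OBJECT"
class_name = "MyMCP"

[[migrations]]
tag = "v1"
new_sqlite_classes = ["MyMCP"]
```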
There is also the older workers-mcp package that translates Worker methods into MCP tools via a build step and uses a local proxy for stdio. It works but the McpAgent approach is what Cloudflare recommends now.
Docker
For self-hosted deployments, a multi-stage Dockerfile keeps the image small:
FROM node:20-slim AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY tsconfig.json ./
COPY src/ ./src/
RUN npx tsc
FROM node:20-slim
WORKDIR /app
COPY --from=builder /app/build ./build
COPY --from=builder /app/package*.json ./
RUN npm ci --omit=dev
USER node
ENV NODE_ENV=production
EXPOSE 3000
CMD ["node", "build/index.js", "--http"]
Build and run:
docker build -t my-mcp-server .
docker run -p 3000:3000 my-mcp-server
The builder stage compiles TypeScript; the production stage copies only the compiled JavaScript and production dependencies. Running as the node user (non-root) is a small thing that matters. Docker's MCP Toolkit provides additional tooling for discovering and distributing MCP servers as containers.
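A .dockerignore keeps node_modules and previous build output out of the Docker build context, which speeds up builds and avoids copying stale artifacts (a minimal sketch):

```
node_modules
build
.git
*.log
```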
For OpenClaw Docker sandboxing details, see our Docker sandboxing guide.
The complete server
Everything from this tutorial in a single file. It has typed tools with Zod validation, a resource, error handling, and dual transport support.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";
import { createServer } from "http";
import { z } from "zod";
const server = new McpServer({ name: "my-server", version: "1.0.0" });
// --- Tools ---
server.tool(
"search_products",
{
query: z.string().min(2).max(100).describe("Search terms"),
category: z.enum(["electronics", "clothing", "books", "all"]).default("all"),
maxResults: z.number().int().positive().max(50).default(10)
},
async ({ query, category, maxResults }) => {
// Replace with your actual search logic
return {
content: [{ type: "text", text: `Found ${maxResults} results for "${query}" in ${category}` }]
};
}
);
server.tool(
"fetch_url",
{ url: z.string().url().describe("URL to fetch") },
async ({ url }) => {
try {
const response = await fetch(url);
if (!response.ok) {
return {
content: [{ type: "text", text: `HTTP ${response.status}` }],
isError: true
};
}
return { content: [{ type: "text", text: (await response.text()).slice(0, 5000) }] };
} catch (error) {
return { content: [{ type: "text", text: `Failed: ${error}` }], isError: true };
}
}
);
// --- Resources ---
server.resource(
"settings",
"config://settings",
{ description: "Server configuration" },
async (uri) => ({
contents: [{
uri: uri.href,
text: JSON.stringify({ version: "1.0.0", environment: process.env.NODE_ENV }),
mimeType: "application/json"
}]
})
);
// --- Transport ---
const args = process.argv.slice(2);
if (args.includes("--http")) {
const transport = new StreamableHTTPServerTransport({ sessionIdGenerator: undefined });
const httpServer = createServer(async (req, res) => {
await transport.handleRequest(req, res);
});
httpServer.listen(3000, () => console.error("Listening on http://localhost:3000"));
await server.connect(transport);
} else {
await server.connect(new StdioServerTransport());
}
About 60 lines of code. The Zod schemas give you runtime validation and TypeScript autocompletion. The dual transport means the same code works locally with Claude Desktop and OpenClaw (stdio) and remotely via HTTP.
If you already have our Python MCP server tutorial open in another tab, you will notice the structure is similar. The difference is in the details: Zod vs type hints, explicit transport switching vs FastMCP defaults, and the Cloudflare Workers deployment path that TypeScript opens up.
For picking which MCP servers to run alongside yours, check our best MCP servers in 2026 list. For connecting everything to OpenClaw, our MCP connection guide covers the details.
What tool are you going to build first?