Cut AI Agent token costs by 96%. Instead of making a separate LLM call for each tool, Code-Mode lets the AI write a single TypeScript code block that chains all tool calls in one execution.
This is the same pattern behind Anthropic's Programmatic Tool Calling, LangGraph CodeAct, and Manus — brought to n8n as a community node.
Every tool call in a traditional AI Agent workflow means another LLM round-trip carrying the full conversation history, so token usage grows roughly quadratically with the number of tools:
| Tools in pipeline | LLM calls (traditional) | LLM calls (Code-Mode) |
|---|---|---|
| 1 | 2 | 1 |
| 3 | 7 | 1 |
| 5 | 11 | 1 |
| 10 | 21 | 1 |
Benchmark: a 5-tool customer onboarding pipeline (validate email → classify company → score tier → generate message → format report):
| Metric | Traditional | Code-Mode | Savings |
|---|---|---|---|
| LLM API calls | 11 | 1 | 91% |
| Total tokens | ~18,000 | ~700 | 96% |
| Execution time | 12,483ms | 2,530ms | 80% |
| Nodes fired | 22 | 3 | 86% |
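The savings column follows directly from the two preceding columns. A minimal sketch reproducing the percentages, rounding to the nearest whole percent:

```typescript
// Savings = 1 - (codeMode / traditional), rounded to the nearest percent.
function savingsPercent(traditional: number, codeMode: number): number {
  return Math.round((1 - codeMode / traditional) * 100);
}

console.log(savingsPercent(11, 1));       // LLM API calls → 91
console.log(savingsPercent(18000, 700));  // total tokens → 96
console.log(savingsPercent(12483, 2530)); // execution time (ms) → 80
console.log(savingsPercent(22, 3));       // nodes fired → 86
```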
At GPT-4o pricing ($2.50/M input, $10/M output):
| Executions/day | Traditional/year | Code-Mode/year | Annual savings |
|---|---|---|---|
| 100 | ~$1,643 | ~$64 | $1,579 |
| 1,000 | ~$16,425 | ~$639 | $15,786 |
| 10,000 | ~$164,250 | ~$6,388 | $157,862 |
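The annual figures can be reproduced from the per-execution token counts above. A minimal sketch, assuming every token is billed at the GPT-4o input rate of $2.50/M (the benchmark does not separate input from output tokens):

```typescript
const USD_PER_MILLION_TOKENS = 2.5; // assumption: flat GPT-4o input rate

// Annual cost = tokens/run × runs/day × 365 days, priced per million tokens.
function annualCost(tokensPerRun: number, runsPerDay: number): number {
  const tokensPerYear = tokensPerRun * runsPerDay * 365;
  return (tokensPerYear * USD_PER_MILLION_TOKENS) / 1_000_000;
}

for (const runsPerDay of [100, 1_000, 10_000]) {
  const traditional = annualCost(18_000, runsPerDay); // ~18k tokens per run
  const codeMode = annualCost(700, runsPerDay);       // ~700 tokens per run
  console.log(runsPerDay, traditional.toFixed(0), codeMode.toFixed(0));
}
```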
```shell
cd ~/.n8n/nodes
npm install n8n-nodes-utcp-codemode
```

Restart n8n. The Code-Mode Tool appears under AI > Tools in the workflow editor.
- Connect the Code-Mode Tool to any AI Agent node
- (v2.1) Connect other tool sub-nodes (HTTP Tool, Calculator, etc.) directly to Code-Mode Tool — they're auto-registered in the sandbox
- Add MCP servers via the preset dropdown, or configure custom tool sources as JSON
- The AI Agent receives a single `execute_code_chain` tool that accepts TypeScript
- Instead of calling tools one-by-one, the AI writes a complete pipeline as code
- Code executes in a secure V8 sandbox with access to all registered tools
```
Traditional: Agent → LLM → tool_1 → LLM → tool_2 → LLM → tool_3 → ...
Code-Mode:   Agent → LLM → writes TypeScript → sandbox runs all tools → done
```
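Concretely, instead of emitting one tool call per step, the model emits a single call to `execute_code_chain` whose only argument is the pipeline code. The payload shape below is illustrative, not the node's actual wire format:

```typescript
// Hypothetical shape of the one tool call the LLM emits (illustrative only).
const toolCall = {
  name: "execute_code_chain",
  arguments: {
    code: `
      const user = sibling.httpRequest({ url: "https://api.example.com/user/42" });
      return { tier: user.score > 80 ? "gold" : "standard" };
    `,
  },
};

console.log(toolCall.name); // → "execute_code_chain"
```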
Connect other n8n tool sub-nodes (HTTP Request Tool, Calculator, etc.) directly to Code-Mode Tool's Sibling Tools input. They're automatically discovered and made callable inside the sandbox — zero configuration.
```
[HTTP Tool] ──┐
[Calculator] ──┤── Code-Mode Tool ── AI Agent
[Email Tool] ──┘
```
In the sandbox, sibling tools are namespaced as `sibling.toolName()`:
```typescript
// LLM-generated code can call all connected tools:
const data = sibling.httpRequest({ url: "https://api.example.com/data" });
// Interpolate the fetched values into the calculator expression:
const total = sibling.calculator({ expression: `sum([${data.values.join(", ")}])` });
sibling.emailSend({ to: "user@example.com", body: `Total: ${total}` });
return { data, total };
```

Toggle with the Auto-Register Sibling Tools checkbox (default: on).
Select from the dropdown — no JSON required:
| Preset | Package to install | Config field |
|---|---|---|
| Filesystem | `@modelcontextprotocol/server-filesystem` | Allowed directory path |
| GitHub | `@modelcontextprotocol/server-github` | GitHub personal access token |
| Brave Search | `@modelcontextprotocol/server-brave-search` | Brave API key |
| SQLite | `@modelcontextprotocol/server-sqlite` | Database file path |
| Memory | `@modelcontextprotocol/server-memory` | (none) |
Note: These `@modelcontextprotocol/server-*` packages were archived by Anthropic in 2025. They still work but no longer receive security patches. The JSON config field below lets you use any MCP server, including community-maintained alternatives.
Install the server package first:

```shell
cd ~/.n8n/nodes
npm install @modelcontextprotocol/server-filesystem
```

For MCP servers or HTTP APIs not covered by presets:
```json
[
  {
    "name": "custom",
    "call_template_type": "mcp",
    "config": {
      "mcpServers": {
        "myserver": {
          "transport": "stdio",
          "command": "node",
          "args": ["path/to/server.js", "/allowed/dir"]
        }
      }
    }
  }
]
```

| Parameter | Default | Description |
|---|---|---|
| Auto-Register Siblings | true | Auto-discover connected tool sub-nodes (v2.1) |
| Timeout | 30000ms | Max execution time for the sandbox |
| Memory Limit | 128MB | Max memory for the V8 sandbox |
| Enable Trace | false | Record tool call timing in output |
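The Timeout parameter caps how long generated code may run. The real node delegates enforcement to `isolated-vm`; the generic pattern can be sketched as racing the sandbox run against a timer (a sketch only, assuming a hypothetical promise-returning `runInSandbox` entry point):

```typescript
// Generic timeout wrapper (sketch; not the node's actual implementation).
async function withTimeout<T>(work: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`Sandbox timed out after ${ms}ms`)), ms);
  });
  try {
    return await Promise.race([work, timeout]);
  } finally {
    if (timer !== undefined) clearTimeout(timer); // don't leave the timer pending
  }
}
```

Usage: `await withTimeout(runInSandbox(code), 30_000)`, mirroring the 30000ms default above.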
Every execution returns a `_codeMode` section showing what happened:
```json
{
  "result": "...",
  "logs": [],
  "_codeMode": {
    "executedCode": "const files = fs.filesystem_list_directory({path: '/tmp'}); ...",
    "toolCallsInSandbox": 3,
    "registeredServers": ["fs"],
    "registeredToolCount": 14,
    "tokenEstimate": {
      "traditional": 10500,
      "codeMode": 700,
      "savingsPercent": "93%"
    }
  }
}
```

See `examples/filesystem-demo.json` for a ready-to-import workflow using the Filesystem MCP preset.
- Claude (via OpenRouter or Anthropic) — writes clean tool-chaining code
- GPT-4o — reliable code generation
- Gemini — works but needs more explicit prompting
The node implements both `supplyData()` (for AI Agent sub-node use) and `execute()` (for standalone Tool Executor use). All heavy dependencies are lazy-imported to avoid crashing n8n at startup.
- Secure V8 sandbox via `isolated-vm`
- MCP tool integration via the Model Context Protocol
- LangChain `DynamicStructuredTool` for n8n AI Agent compatibility
- n8n >= 1.0.0
- Node.js >= 18.0.0
`code-mode-tools` — the same engine packaged as an MCP server for Claude Desktop, Claude Code, Cursor, and any MCP-compatible client: `npm install -g code-mode-tools`
Code-First n8n Proving Ground — The bigger picture: how code-mode + n8nac cover the full n8n workflow lifecycle (write → deploy → test → debug → runtime), with POC templates and benchmarks.
MPL-2.0