mj-deving/n8n-nodes-utcp-codemode

# n8n-nodes-utcp-codemode


**Cut AI Agent token costs by 96%.** Instead of making a separate LLM call for each tool, Code-Mode lets the AI write a single TypeScript code block that chains all tool calls in one execution.

This is the same pattern behind Anthropic's Programmatic Tool Calling, LangGraph CodeAct, and Manus — brought to n8n as a community node.

## The Problem

Every tool call in an AI Agent workflow means another LLM round-trip that re-sends the full conversation history, so token usage grows quadratically with the number of tools:

| Tools in pipeline | LLM calls (traditional) | LLM calls (Code-Mode) |
| --- | --- | --- |
| 1 | 2 | 1 |
| 3 | 7 | 1 |
| 5 | 11 | 1 |
| 10 | 21 | 1 |
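
The growth can be sketched with a toy model (illustrative numbers, not the benchmark's): each round-trip re-sends a history that the previous tool result has made longer.

```typescript
// Toy model of token growth. `base` and `delta` are made-up illustrative
// values, not measurements from this node.
function traditionalTokens(tools: number, base = 500, delta = 300): number {
  let history = base; // conversation history so far
  let total = 0;
  for (let i = 0; i < tools; i++) {
    total += history;   // this round-trip re-sends the whole history
    history += delta;   // the tool result is appended to the history
  }
  total += history;     // final answer round-trip
  return total;
}

function codeModeTokens(base = 500, codeOverhead = 200): number {
  return base + codeOverhead; // one call: prompt plus the generated code block
}
```

The loop's cost is a sum over a growing history, so it scales quadratically with the number of tools, while `codeModeTokens` stays constant no matter how many tools the generated script calls.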

## Benchmark Results

5-tool customer onboarding pipeline (validate email → classify company → score tier → generate message → format report):

| Metric | Traditional | Code-Mode | Savings |
| --- | --- | --- | --- |
| LLM API calls | 11 | 1 | 91% |
| Total tokens | ~18,000 | ~700 | 96% |
| Execution time | 12,483 ms | 2,530 ms | 80% |
| Nodes fired | 22 | 3 | 86% |

At GPT-4o pricing ($2.50/M input, $10/M output):

| Executions/day | Traditional/year | Code-Mode/year | Annual savings |
| --- | --- | --- | --- |
| 100 | ~$1,643 | ~$64 | $1,579 |
| 1,000 | ~$16,425 | ~$639 | $15,786 |
| 10,000 | ~$164,250 | ~$6,388 | $157,862 |
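
The table's figures can be reproduced from the benchmark token counts, assuming every token is billed at the $2.50/M input rate (the assumption under which the numbers above work out):

```typescript
// Annual cost = executions/day × 365 × tokens/execution × dollars per million tokens.
// Assumes all tokens are billed at the GPT-4o input rate of $2.50/M.
function annualCost(execsPerDay: number, tokensPerExec: number, dollarsPerMillion = 2.5): number {
  return (execsPerDay * 365 * tokensPerExec * dollarsPerMillion) / 1_000_000;
}

const traditional = annualCost(100, 18_000); // 1642.5  → the "~$1,643" row
const codeMode = annualCost(100, 700);       // 63.875  → "~$64"
const savings = traditional - codeMode;      // 1578.625 → "$1,579"
```
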

## Installation

```bash
cd ~/.n8n/nodes
npm install n8n-nodes-utcp-codemode
```

Restart n8n. The Code-Mode Tool appears under AI > Tools in the workflow editor.

## How It Works

1. Connect the Code-Mode Tool to any AI Agent node
2. (v2.1) Connect other tool sub-nodes (HTTP Tool, Calculator, etc.) directly to the Code-Mode Tool — they're auto-registered in the sandbox
3. Add MCP servers via the preset dropdown, or configure custom tool sources as JSON
4. The AI Agent receives a single `execute_code_chain` tool that accepts TypeScript
5. Instead of calling tools one-by-one, the AI writes a complete pipeline as code
6. Code executes in a secure V8 sandbox with access to all registered tools

```
Traditional:  Agent → LLM → tool_1 → LLM → tool_2 → LLM → tool_3 → ...
Code-Mode:    Agent → LLM → writes TypeScript → sandbox runs all tools → done
```
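
Concretely, the single code block the agent writes reads like an ordinary script. Here is a runnable sketch with stubbed tools — the `crm.*` names are invented for illustration; real names come from your registered tool sources:

```typescript
// Stub tools standing in for registered MCP/UTCP tools (names are hypothetical).
const crm = {
  validateEmail: ({ email }: { email: string }) => ({ ok: email.includes("@") }),
  classifyCompany: ({ domain }: { domain: string }) => ({ industry: "software", size: 120 }),
  scoreTier: ({ size }: { size: number }) => (size > 100 ? "enterprise" : "smb"),
  generateMessage: ({ name, tier }: { name: string; tier: string }) =>
    `Welcome ${name}! You're on our ${tier} plan.`,
  formatReport: (r: { tier: string; message: string }) => r,
};

// The shape of the code an agent would emit for the 5-tool onboarding
// benchmark: five tool calls, one execution, no LLM round-trips in between.
function pipeline(email: string, name: string) {
  if (!crm.validateEmail({ email }).ok) return { tier: "none", message: "invalid email" };
  const company = crm.classifyCompany({ domain: email.split("@")[1] });
  const tier = crm.scoreTier({ size: company.size });
  const message = crm.generateMessage({ name, tier });
  return crm.formatReport({ tier, message });
}
```
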

## Auto-Register Sibling Tools (v2.1)

Connect other n8n tool sub-nodes (HTTP Request Tool, Calculator, etc.) directly to the Code-Mode Tool's **Sibling Tools** input. They're automatically discovered and made callable inside the sandbox — zero configuration.

```
[HTTP Tool]  ──┐
[Calculator] ──┼── Code-Mode Tool ── AI Agent
[Email Tool] ──┘
```

In the sandbox, sibling tools are namespaced as `sibling.toolName()`:

```typescript
// LLM-generated code can call all connected tools:
const data = sibling.httpRequest({ url: "https://api.example.com/data" });
const total = sibling.calculator({ expression: "sum(data.values)" });
sibling.emailSend({ to: "user@example.com", body: total });
return { data, total };
```

Toggle with the **Auto-Register Sibling Tools** checkbox (default: on).

## Configuration

### MCP Server Presets

Select from the dropdown — no JSON required:

| Preset | Package to install | Config field |
| --- | --- | --- |
| Filesystem | `@modelcontextprotocol/server-filesystem` | Allowed directory path |
| GitHub | `@modelcontextprotocol/server-github` | GitHub personal access token |
| Brave Search | `@modelcontextprotocol/server-brave-search` | Brave API key |
| SQLite | `@modelcontextprotocol/server-sqlite` | Database file path |
| Memory | `@modelcontextprotocol/server-memory` | (none) |

> **Note:** These `@modelcontextprotocol/server-*` packages were archived by Anthropic in 2025. They still work but no longer receive security patches. The JSON config field below lets you use any MCP server, including community-maintained alternatives.

Install the server package first:

```bash
cd ~/.n8n/nodes
npm install @modelcontextprotocol/server-filesystem
```

### Custom Tool Sources (JSON)

For MCP servers or HTTP APIs not covered by presets:

```json
[
  {
    "name": "custom",
    "call_template_type": "mcp",
    "config": {
      "mcpServers": {
        "myserver": {
          "transport": "stdio",
          "command": "node",
          "args": ["path/to/server.js", "/allowed/dir"]
        }
      }
    }
  }
]
```

### Other Parameters

| Parameter | Default | Description |
| --- | --- | --- |
| Auto-Register Siblings | `true` | Auto-discover connected tool sub-nodes (v2.1) |
| Timeout | 30000 ms | Max execution time for the sandbox |
| Memory Limit | 128 MB | Max memory for the V8 sandbox |
| Enable Trace | `false` | Record tool call timing in output |

## Output Transparency (v1.1+)

Every execution returns a `_codeMode` section showing what happened:

```json
{
  "result": "...",
  "logs": [],
  "_codeMode": {
    "executedCode": "const files = fs.filesystem_list_directory({path: '/tmp'}); ...",
    "toolCallsInSandbox": 3,
    "registeredServers": ["fs"],
    "registeredToolCount": 14,
    "tokenEstimate": {
      "traditional": 10500,
      "codeMode": 700,
      "savingsPercent": "93%"
    }
  }
}
```
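
The `tokenEstimate` arithmetic is easy to check. A sketch of how `savingsPercent` is presumably derived (an assumption; the node's actual formula may differ):

```typescript
// Presumed formula behind tokenEstimate.savingsPercent (not taken from the
// node's source code): percentage reduction, rounded to a whole number.
function savingsPercent(traditional: number, codeMode: number): string {
  return `${Math.round(((traditional - codeMode) / traditional) * 100)}%`;
}
```

With the figures above, `savingsPercent(10500, 700)` gives `"93%"`, and the headline benchmark's ~18,000 vs ~700 gives `"96%"`.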

## Example Workflow

See `examples/filesystem-demo.json` for a ready-to-import workflow using the Filesystem MCP preset.

## Best With

- **Claude** (via OpenRouter or Anthropic) — writes clean tool-chaining code
- **GPT-4o** — reliable code generation
- **Gemini** — works but needs more explicit prompting

## How It's Built

The node implements both `supplyData()` (for AI Agent sub-node use) and `execute()` (for standalone Tool Executor use). All heavy dependencies are lazy-imported to avoid crashing n8n at startup.
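
The lazy-import idea can be sketched like this; the imported module here is a stand-in, since the node's real heavy dependencies are whatever SDKs it wraps:

```typescript
// Memoized dynamic import: the heavy module loads on first use, not when
// n8n scans and registers the node at startup.
let clientPromise: Promise<typeof import("node:crypto")> | undefined;

function getClient() {
  clientPromise ??= import("node:crypto"); // stand-in for a heavy SDK
  return clientPromise;
}

async function execute(): Promise<number> {
  const mod = await getClient(); // first call pays the load cost; later calls reuse it
  return mod.randomUUID().length;
}
```

If the import fails, only the execution that needed it errors — the node itself still loads in the editor.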

## Requirements

- n8n >= 1.0.0
- Node.js >= 18.0.0

## Not using n8n?

**code-mode-tools** — the same engine packaged as an MCP server for Claude Desktop, Claude Code, Cursor, and any MCP-compatible client: `npm install -g code-mode-tools`

## Related

**Code-First n8n Proving Ground** — the bigger picture: how code-mode + n8nac cover the full n8n workflow lifecycle (write → deploy → test → debug → runtime), with POC templates and benchmarks.

## License

MPL-2.0
