This document provides a comprehensive analysis of n8n's internal architecture, focusing on node loading mechanisms, memory management, and multi-instance behavior. Based on code analysis of the n8n codebase, this explains the differences between community nodes and custom nodes, and why custom nodes work better in multi-server setups.
Disclaimer: This file was generated by artificial intelligence after analyzing the source code and processing numerous questions I had about the n8n project. It may not be 100% accurate.
n8n uses a sophisticated three-tier memory structure for managing nodes:
// In LoadNodesAndCredentials class
loaded: { nodes: {}, credentials: {} } // Immediately loaded instances
known: { nodes: {}, credentials: {} } // Metadata (className, sourcePath)
types: { nodes: [], credentials: [] } // Type descriptions for frontend
1. Lazy Loading System
- n8n first loads lightweight JSON metadata files
- Actual node classes are loaded on-demand when first accessed
- Uses `LazyPackageDirectoryLoader` for efficient memory usage (see the sketch below)
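To make the lazy-loading idea concrete, here is a minimal, illustrative sketch (not the actual `LazyPackageDirectoryLoader` code) of deferring the `require()` of a node class until the first time its type is requested; `getNodeClass` and the record shapes are assumptions for illustration:

```typescript
// Illustrative sketch only - not n8n's actual loader implementation.
interface KnownNode {
  className: string;  // exported class name from the node file
  sourcePath: string; // absolute path to the compiled .js file
}

const known: Record<string, KnownNode> = {}; // lightweight metadata, loaded eagerly
const loaded: Record<string, unknown> = {};  // node instances, created on demand

function getNodeClass(nodeType: string): unknown {
  if (loaded[nodeType]) return loaded[nodeType]; // already instantiated
  const meta = known[nodeType];
  if (!meta) throw new Error(`Unknown node type: ${nodeType}`);
  const mod = require(meta.sourcePath);          // deferred until first access
  loaded[nodeType] = new mod[meta.className]();
  return loaded[nodeType];
}
```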
2. Node Discovery Process
// Base paths scanned during initialization
const basePathsToScan = [
path.join(CLI_DIR, '..'), // n8n-nodes-base
path.join(CLI_DIR, 'node_modules'), // n8n-nodes-base
path.join(nodesDownloadDir, 'node_modules'), // Community nodes
customExtensionDir // Custom nodes
];
3. Hot Reload Mechanism
- Uses the `chokidar` file watcher
- Monitors changes to `.js` and `.json` files
- Debounced by 100ms to prevent excessive reloads
- Broadcasts `nodeDescriptionUpdated` events to the frontend
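As a rough illustration of the watch-and-debounce pattern described above (the 100ms window and the `.js`/`.json` filter come from the description; everything else is a placeholder, not n8n's code):

```typescript
import { watch } from 'chokidar';
import * as path from 'path';

// Watch a node directory and reload after file events settle for 100ms.
function watchNodeDirectory(dir: string, reload: () => Promise<void>) {
  let timer: NodeJS.Timeout | undefined;
  watch(dir, { ignoreInitial: true }).on('all', (_event, file) => {
    if (!['.js', '.json'].includes(path.extname(file))) return; // only compiled nodes and metadata
    if (timer) clearTimeout(timer);
    timer = setTimeout(() => void reload(), 100); // debounce bursts of file events
  });
}
```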
| Trigger | Scope | Method |
|---|---|---|
| File changes | Specific package | Hot reload via chokidar |
| Package install/update | Community packages | postProcessLoaders() |
| n8n restart | All nodes | Full initialization |
| Manual reload | All nodes | API endpoint trigger |
Storage Location: ~/.n8n/nodes/node_modules/[package-name]/
Installation Process:
- `npm pack` downloads the tarball from the registry
- Extract to the managed node_modules directory
- Strip development dependencies from package.json
- Execute `npm install` in the package directory
- Register the package in the `installed_packages` database table
- Load via `PackageDirectoryLoader`
Database Tracking:
-- Community nodes require database entries
installed_packages: packageName, installedVersion, authorName, authorEmail
installed_nodes: name, type, latestVersion, package
Loading Class: `LazyPackageDirectoryLoader` → `PackageDirectoryLoader`
Storage Location: ~/.n8n/custom/
Installation Process:
- Direct file placement (manual copy)
- No package management required
- No dependency resolution
- No database tracking
File Pattern:
~/.n8n/custom/
├── my-node.node.js
├── my-credential.credentials.js
└── subdirectory/
└── another-node.node.js
Loading Class: CustomDirectoryLoader
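For contrast with the package loaders, direct file loading can be approximated by a recursive scan for `*.node.js` files. This is only a simplified sketch of the behavior described above, not the real `CustomDirectoryLoader`:

```typescript
import { readdirSync } from 'fs';
import * as path from 'path';

// Recursively require every *.node.js file under ~/.n8n/custom (simplified sketch).
function loadCustomNodes(dir: string): Record<string, unknown> {
  const nodes: Record<string, unknown> = {};
  for (const entry of readdirSync(dir, { withFileTypes: true })) {
    const fullPath = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      Object.assign(nodes, loadCustomNodes(fullPath)); // descend into subdirectories
    } else if (entry.name.endsWith('.node.js')) {
      const exported = require(fullPath);              // e.g. { YourCustomNode }
      for (const [name, NodeClass] of Object.entries(exported)) {
        nodes[name] = new (NodeClass as new () => unknown)();
      }
    }
  }
  return nodes;
}
```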
| Aspect | Community Nodes | Custom Nodes |
|---|---|---|
| Storage | ~/.n8n/nodes/node_modules/ | ~/.n8n/custom/ |
| Package Management | Full npm ecosystem | Manual file management |
| Database Tracking | Required | None |
| Dependencies | Managed via npm | Self-contained |
| Loading | Lazy + Package loader | Direct file loading |
| Multi-instance | Complex synchronization | Simple file sharing |
| Updates | Via npm/UI | Manual replacement |
- Custom nodes: Simple `.js` files work well with shared filesystems (EFS)
- Community nodes: Complex `node_modules` structures can have issues with:
  - Symlink resolution across instances
  - File permissions in shared storage
  - Package lock conflicts
  - Binary dependency architecture mismatches
- Custom nodes: No database dependencies
- Community nodes: Require `installed_packages` table synchronization
  - Risk of race conditions during package installation
  - Database state mismatch between instances
- Custom nodes: Simpler cache management
- Community nodes: Complex `require.cache` cleanup needed
// Community packages need cache cleanup on reload
unloadAll() {
  const filesToDelete = Object.keys(require.cache)
    .filter(filePath => isContainedWithin(this.directory, filePath));
  filesToDelete.forEach(filePath => delete require.cache[filePath]);
}
- Custom nodes: Direct file scanning and loading
- Community nodes: Multi-step process with potential failure points:
- Package integrity verification
- npm dependency resolution
- Database transaction management
- Cross-instance state synchronization
1. Database Update
await WorkflowRepository.update(workflowId, { active: true });
2. ActiveWorkflowManager Registration
// Three types of activatable workflows
if (shouldAddWebhooks) {
added.webhooks = await this.addWebhooks(workflow, additionalData, 'trigger', activationMode);
}
if (shouldAddTriggersAndPollers) {
added.triggersAndPollers = await this.addTriggersAndPollers(dbWorkflow, workflow, options);
}
3. Validation Checks
- Verify workflow has trigger/webhook/poller nodes
- Check credential permissions
- Validate node configurations
- Ensure no critical errors exist
4. Memory Storage
// Conceptually: keep the trigger handles so they can be closed on deactivation
this.activeWorkflows[workflowId] = {
  triggerResponses, // ITriggerResponse[]
};
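In simplified terms, activation boils down to starting each trigger and keeping its response handle so it can be shut down later. The following is a conceptual sketch (names other than `ITriggerResponse.closeFunction` are illustrative, not the real manager code):

```typescript
// Conceptual sketch of storing and closing trigger handles (not the real manager code).
interface ITriggerResponse {
  closeFunction?: () => Promise<void>; // tears down timers, sockets, subscriptions
}

const activeWorkflows: Record<string, { triggerResponses: ITriggerResponse[] }> = {};

async function addTriggers(workflowId: string, startFns: Array<() => Promise<ITriggerResponse>>) {
  const triggerResponses: ITriggerResponse[] = [];
  for (const start of startFns) {
    triggerResponses.push(await start()); // each trigger starts listening and returns a handle
  }
  activeWorkflows[workflowId] = { triggerResponses };
}

async function removeTriggers(workflowId: string) {
  for (const response of activeWorkflows[workflowId]?.triggerResponses ?? []) {
    await response.closeFunction?.(); // stop pollers/listeners on deactivation
  }
  delete activeWorkflows[workflowId];
}
```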
1. Remove from Active Memory
await this.removeWorkflowTriggersAndPollers(workflowId);
2. Clean Up Resources
- Remove HTTP webhook endpoints
- Stop polling timers
- Cancel pending operations
- Clear execution queues
3. Database Update
await WorkflowRepository.update(workflowId, { active: false });
Pub/Sub Architecture:
// Commands published to other instances
void this.publisher.publishCommand({
command: 'add-webhooks-triggers-and-pollers',
payload: { workflowId }
});
void this.publisher.publishCommand({
command: 'remove-triggers-and-pollers',
payload: { workflowId }
});
Instance-Specific Behavior:
- Each instance manages its own webhook registrations
- Database serves as source of truth for workflow state
- Webhook endpoints are instance-specific
- Triggers and pollers distributed across instances
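The subscriber side is not shown above; conceptually, every instance listens on a shared command channel and reacts according to its role. A rough sketch with ioredis follows (channel name, payload shape, and handler functions are assumptions, not n8n's actual implementation):

```typescript
import Redis from 'ioredis';

// Hypothetical subscriber sketch - channel name and handlers are not n8n's actual ones.
const subscriber = new Redis();

async function activateLocally(workflowId: string) { /* register what this instance's role allows */ }
async function deactivateLocally(workflowId: string) { /* tear down local registrations */ }

void subscriber.subscribe('n8n.commands');
subscriber.on('message', (_channel, message) => {
  const { command, payload } = JSON.parse(message) as { command: string; payload: { workflowId: string } };
  if (command === 'add-webhooks-triggers-and-pollers') void activateLocally(payload.workflowId);
  if (command === 'remove-triggers-and-pollers') void deactivateLocally(payload.workflowId);
});
```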
// Environment variables controlling community packages
N8N_COMMUNITY_PACKAGES_ENABLED=true
N8N_COMMUNITY_PACKAGES_REGISTRY=https://registry.npmjs.org
N8N_REINSTALL_MISSING_PACKAGES=false
N8N_UNVERIFIED_PACKAGES_ENABLED=true
N8N_VERIFIED_PACKAGES_ENABLED=true
N8N_COMMUNITY_PACKAGES_PREVENT_LOADING=false
// Custom extension directory (default: ~/.n8n/custom)
CUSTOM_EXTENSION_ENV=~/.n8n/custom
n8n uses a Redis-based leadership election system to prevent duplicate executions in multi-server setups:
Leadership Mechanism:
// packages/cli/src/scaling/multi-main-setup.ee.ts
private async tryBecomeLeader() {
const { hostId } = this.instanceSettings;
// Redis SET NX operation - only succeeds if key doesn't exist
const keySetSuccessfully = await this.publisher.setIfNotExists(
this.leaderKey,
hostId,
this.leaderKeyTtl
);
if (keySetSuccessfully) {
this.instanceSettings.markAsLeader();
this.emit('leader-takeover');
} else {
this.instanceSettings.markAsFollower();
}
}
Key Configuration:
# Multi-main setup environment variables
N8N_MULTI_MAIN_SETUP_ENABLED=true
N8N_MULTI_MAIN_SETUP_KEY_TTL=10 # seconds
N8N_MULTI_MAIN_SETUP_CHECK_INTERVAL=3 # seconds
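Putting the TTL and check interval together: the current leader keeps renewing its key, while followers retry the atomic claim on every check. A simplified ioredis sketch (the key name and renewal details are illustrative, not the exact n8n logic):

```typescript
import Redis from 'ioredis';

const redis = new Redis();
const leaderKey = 'n8n:leader';  // illustrative key name
const ttlSeconds = 10;           // mirrors N8N_MULTI_MAIN_SETUP_KEY_TTL
const checkIntervalMs = 3_000;   // mirrors N8N_MULTI_MAIN_SETUP_CHECK_INTERVAL
const hostId = `instance-${process.pid}`;

let isLeader = false;

setInterval(async () => {
  if (isLeader) {
    await redis.expire(leaderKey, ttlSeconds); // leader renews before the key expires
  } else {
    // Atomic claim: only one instance's SET NX can succeed once the key is gone.
    const claimed = await redis.set(leaderKey, hostId, 'EX', ttlSeconds, 'NX');
    isLeader = claimed === 'OK';
  }
}, checkIntervalMs);
```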
The ScheduledTaskManager ensures only the leader instance executes scheduled triggers:
// packages/core/src/execution-engine/scheduled-task-manager.ts
registerCron(workflow: Workflow, cronExpression: CronExpression, onTick: () => void) {
const cronJob = new CronJob(
cronExpression,
() => {
// KEY MECHANISM: Only execute on leader
if (this.instanceSettings.isLeader) onTick();
},
undefined,
true,
workflow.timezone,
);
// Register the cron job
this.cronJobs.set(workflow.id, [cronJob]);
}
This is why your scheduler triggers never duplicated - only the leader instance executes them!
n8n has a built-in ConcurrencyControlService that manages execution limits:
// packages/cli/src/concurrency/concurrency-control.service.ts
async throttle({ mode, executionId }: { mode: ExecutionMode; executionId: string }) {
if (!this.isEnabled || this.isUnlimited(mode)) return;
await this.getQueue(mode)?.enqueue(executionId);
}
Configuration:
# Concurrency limits
N8N_CONCURRENCY_PRODUCTION_LIMIT=10 # Max concurrent executions
N8N_CONCURRENCY_EVALUATION_LIMIT=5 # Max concurrent evaluations
EXECUTIONS_MODE=queue # Enable queue mode
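Functionally, the production limit behaves like a counting semaphore per execution mode. This minimal sketch illustrates the effect of `N8N_CONCURRENCY_PRODUCTION_LIMIT`; it is not the actual queue implementation used by n8n:

```typescript
// Counting-semaphore sketch of what a concurrency limit means (illustrative only).
class SimpleConcurrencyQueue {
  private running = 0;
  private readonly waiting: Array<() => void> = [];

  constructor(private readonly capacity: number) {}

  async enqueue(): Promise<void> {
    if (this.running < this.capacity) {
      this.running++;
      return; // a slot is free, start immediately
    }
    await new Promise<void>((resolve) => this.waiting.push(resolve)); // wait for a slot
    this.running++;
  }

  dequeue(): void {
    this.running--;
    this.waiting.shift()?.(); // wake the next waiting execution, if any
  }
}

// With N8N_CONCURRENCY_PRODUCTION_LIMIT=10, at most 10 executions run concurrently.
const productionQueue = new SimpleConcurrencyQueue(10);
```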
The ActiveWorkflowManager ensures triggers and pollers are only registered on the leader:
// packages/cli/src/active-workflow-manager.ts
shouldAddTriggersAndPollers() {
return this.instanceSettings.isLeader;
}
shouldAddWebhooks(activationMode: WorkflowActivateMode) {
  // Webhook registration depends on the activation mode rather than leadership alone
  if (activationMode === 'init') return true;
  if (activationMode === 'leadershipChange') return false;
  return this.instanceSettings.isLeader; // 'update' or 'activate'
}
In scaling mode, n8n uses Redis queues for coordination:
// packages/cli/src/scaling/scaling.service.ts
setupWorker(concurrency: number) {
// Workers process jobs from shared Redis queue
void this.queue.process(JOB_TYPE_NAME, concurrency, async (job: Job) => {
await this.jobProcessor.processJob(job);
});
}
┌─────────────────────────────────────────────────────────────────────┐
│ Multi-Instance Setup │
├─────────────────────────────────────────────────────────────────────┤
│ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │
│ │ Instance A │ │ Instance B │ │ Instance C │ │
│ │ (LEADER) │ │ (FOLLOWER) │ │ (FOLLOWER) │ │
│ │ │ │ │ │ │ │
│ │ ✓ Triggers │ │ ✗ Triggers │ │ ✗ Triggers │ │
│ │ ✓ Pollers │ │ ✗ Pollers │ │ ✗ Pollers │ │
│ │ ✓ Schedulers │ │ ✗ Schedulers │ │ ✗ Schedulers │ │
│ │ ✓ Webhooks │ │ ✓ Webhooks │ │ ✓ Webhooks │ │
│ │ ✓ Executions │ │ ✓ Executions │ │ ✓ Executions │ │
│ └─────────────────┘ └─────────────────┘ └─────────────────┘ │
│ │ │ │ │
│ └──────────────────────┼──────────────────────┘ │
│ │ │
│ ┌─────────────────────────────────────┐ │
│ │ Redis Cluster │ │
│ │ │ │
│ │ Leadership Key: "instance-123" │ │
│ │ Execution Queue: Bull Queue │ │
│ │ Pub/Sub: Event Coordination │ │
│ └─────────────────────────────────────┘ │
└──────────────────────────────────────────────────────────────────────┘
n8n has several built-in "mutex" mechanisms:
- Redis Leadership Election:
  - Uses Redis `SET NX` (set if not exists) for atomic leader election
  - TTL-based leader key renewal
  - Automatic failover when the leader becomes unavailable
- Execution Queues:
  - Bull queues with Redis backend
  - Atomic job processing
  - Prevents duplicate execution processing
- Database Constraints:
  - Unique execution IDs
  - Atomic status updates
  - Prevents race conditions
Your scheduler triggers worked correctly because:
- Leadership Election: Only one instance becomes leader
- ScheduledTaskManager: Only leader executes scheduled tasks
- Atomic Operations: Redis operations are atomic
- Proper Failover: If leader fails, another instance takes over
Duplication issues typically happen when:
- Improper Setup: Missing Redis coordination
- Split-Brain: Network partitions causing multiple leaders
- Race Conditions: During leadership transitions
- Manual Triggers: These bypass the leadership system
- Webhook Nodes: These run on all instances (by design)
# Essential settings for multi-server setup
EXECUTIONS_MODE=queue
QUEUE_BULL_REDIS_HOST=your-redis-host
QUEUE_BULL_REDIS_PORT=6379
N8N_MULTI_MAIN_SETUP_ENABLED=true
N8N_CONCURRENCY_PRODUCTION_LIMIT=10
# Optional but recommended
N8N_MULTI_MAIN_SETUP_KEY_TTL=10
N8N_MULTI_MAIN_SETUP_CHECK_INTERVAL=3
n8n services can respond to leadership changes using decorators:
@OnLeaderTakeover()
startBackgroundProcess() {
// Called when instance becomes leader
}
@OnLeaderStepdown()
stopBackgroundProcess() {
// Called when instance loses leadership
}
This ensures services start/stop appropriately based on leadership status.
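As a hypothetical usage example (the service is invented, and the two decorators are declared as stand-ins here so the snippet compiles on its own; in n8n they come from the cli package's internal decorators):

```typescript
// Stand-in decorator factories so this sketch is self-contained; n8n provides the real ones.
function OnLeaderTakeover(): MethodDecorator { return () => {}; }
function OnLeaderStepdown(): MethodDecorator { return () => {}; }

class ReportSyncService {
  private timer?: NodeJS.Timeout;

  @OnLeaderTakeover()
  startBackgroundProcess() {
    // Start the periodic job only once this instance becomes leader.
    this.timer = setInterval(() => void this.syncReports(), 60_000);
  }

  @OnLeaderStepdown()
  stopBackgroundProcess() {
    // Stop the job when leadership is lost so only one instance ever runs it.
    if (this.timer) clearInterval(this.timer);
  }

  private async syncReports() {
    // hypothetical work
  }
}
```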
The Execute Workflow node (now called "Execute Sub-workflow") creates new executions that are treated as manual executions, not scheduled triggers:
// packages/core/src/execution-engine/node-execution-context/base-execute-context.ts
async executeWorkflow(
workflowInfo: IExecuteWorkflowInfo,
inputData?: INodeExecutionData[],
options?: { doNotWaitToFinish?: boolean; parentExecution?: RelatedExecution }
): Promise<ExecuteWorkflowData> {
// Creates a new execution via WorkflowRunner.run()
const result = await this.additionalData.executeWorkflow(workflowInfo, this.additionalData, {
...options,
parentWorkflowId: this.workflow.id,
inputData,
parentWorkflowSettings: this.workflow.settings,
node: this.node,
});
return result;
}
| Execution Type | Leadership Protection | Duplication Risk | Reason |
|---|---|---|---|
| Scheduled Triggers | ✅ Yes | ❌ No | Only leader executes |
| Execute Workflow | ❌ No | ✅ Yes | Treated as manual execution |
| Manual Executions | ❌ No | ✅ Yes | No leadership election |
| Webhook Executions | ❌ No | ✅ Yes | Run on all webhook instances |
1. No Leadership Election
// Execute Workflow bypasses the leadership system
const executionId = await this.additionalData.executeWorkflow(workflowInfo, additionalData, {
// No leadership check here - runs on any instance
});
2. Treated as Manual Execution
// packages/cli/src/workflow-runner.ts
async run(data: IWorkflowExecutionDataProcess) {
// Execute Workflow creates manual-type executions
const shouldEnqueue = this.executionsMode === 'queue' && data.executionMode !== 'manual';
// Manual executions (including Execute Workflow) may not be queued
}
3. Multi-Instance Behavior
- If the same parent workflow runs on multiple instances
- Each instance can invoke the sub-workflow independently
- No coordination mechanism prevents this
Instance A: Workflow X → Execute Workflow Y
Instance B: Workflow X → Execute Workflow Y
Instance C: Workflow X → Execute Workflow Y
Result: Workflow Y executes 3 times (once per instance)
1. Use Execution Mode Configuration
# Force all executions (including manual) to workers
OFFLOAD_MANUAL_EXECUTIONS_TO_WORKERS=true
EXECUTIONS_MODE=queue
2. Implement Application-Level Locking
// In your workflow, add a uniqueness check
const executionKey = `workflow-${workflowId}-${uniqueIdentifier}`;
const lockAcquired = await redis.set(executionKey, 'locked', 'NX', 'EX', 300);
if (!lockAcquired) {
throw new Error('Workflow already executing');
}
This is the most effective approach for preventing Execute Workflow duplication:
// At the beginning of your called workflow (sub-workflow)
const lockKey = `workflow-lock:${workflowId}:${uniqueIdentifier}`;
const lockTTL = 300; // 5 minutes safety timeout
// Atomic operation - only one instance can acquire the lock
const lockAcquired = await redis.set(lockKey, instanceId, 'NX', 'EX', lockTTL);
if (!lockAcquired) {
console.log('Workflow already running, exiting...');
return; // Exit early - no duplication
}
try {
// Your actual workflow logic here
await performWorkflowTasks();
} finally {
// Always clean up the lock
await redis.del(lockKey);
}
1. Lock Structure
// Use a meaningful lock key structure
const lockKey = `workflow-lock:${workflowId}:${inputHash}`;
// Examples:
// "workflow-lock:user-onboarding:user-123"
// "workflow-lock:data-processing:batch-456"
// "workflow-lock:notification-sender:event-789"
2. Complete Workflow Pattern
┌─────────────────────────────────────────────────────────────────┐
│ Sub-workflow with Redis Lock │
├─────────────────────────────────────────────────────────────────┤
│ 1. Redis SET NX Check │
│ ├─ Success: Continue with workflow │
│ └─ Failure: Exit early (already running) │
│ │
│ 2. Execute Workflow Logic │
│ ├─ Process data │
│ ├─ Call APIs │
│ └─ Save results │
│ │
│ 3. Cleanup (in finally block) │
│ └─ Redis DEL to release lock │
└─────────────────────────────────────────────────────────────────┘
1. Input-Based Locking
// Lock based on input data to allow parallel processing of different data
const inputHash = crypto.createHash('md5').update(JSON.stringify(inputData)).digest('hex');
const lockKey = `workflow-lock:${workflowId}:${inputHash}`;
2. Time-Based Locking
// For workflows that should run only once per time period
const timePeriod = new Date().toISOString().substr(0, 13); // Hour-based
const lockKey = `workflow-lock:${workflowId}:${timePeriod}`;
3. Resource-Based Locking
// For workflows that process specific resources
const resourceId = inputData.userId || inputData.orderId || inputData.accountId;
const lockKey = `workflow-lock:${workflowId}:resource-${resourceId}`;
1. Atomic Operation
- Redis SET NX is atomic across all instances
- No race conditions possible
- Either succeeds or fails cleanly
2. Distributed by Nature
- Works across multiple n8n instances
- Shared Redis acts as coordination point
- No instance-specific configuration needed
3. Safe with TTL
- Auto-cleanup if workflow crashes
- Prevents deadlocks
- Configurable timeout based on workflow duration
4. Flexible Implementation
- Can be adapted to any workflow pattern
- Supports different locking strategies
- Easy to implement with custom Redis node
// Custom Redis Node for Workflow Lock (sketch - assumes `redis` is a connected client created elsewhere in the node)
const operation = this.getNodeParameter('operation', 0);
const lockKey = this.getNodeParameter('lockKey', 0);
const ttl = this.getNodeParameter('ttl', 0, 300);
if (operation === 'acquire') {
const result = await redis.set(lockKey, this.getInstanceId(), 'NX', 'EX', ttl);
return [{ json: { lockAcquired: result === 'OK', lockKey } }];
}
if (operation === 'release') {
await redis.del(lockKey);
return [{ json: { lockReleased: true, lockKey } }];
}
Instance A: Parent → Execute Sub-workflow → Redis SET NX ✅ (succeeds)
Instance B: Parent → Execute Sub-workflow → Redis SET NX ❌ (fails, exits)
Instance C: Parent → Execute Sub-workflow → Redis SET NX ❌ (fails, exits)
Result: Sub-workflow runs exactly once (Instance A only)
3. Use Webhook-Based Coordination
// Instead of Execute Workflow, use HTTP Request to a dedicated endpoint
// that implements its own locking mechanism
1. Minimize Execute Workflow Usage
- Use sparingly in multi-instance environments
- Consider alternatives like shared queues or webhooks
2. Implement Idempotency
- Design workflows to handle duplicate executions gracefully
- Use unique identifiers and database constraints
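One concrete idempotency pattern is to record a processed marker atomically and skip the work when the marker already exists. The sketch below uses Redis, with arbitrary key names and TTL (adapt to your own data):

```typescript
import Redis from 'ioredis';

const redis = new Redis();

// Run `work` at most once per eventId; duplicate executions become harmless no-ops.
async function processOnce(eventId: string, work: () => Promise<void>): Promise<boolean> {
  const marker = await redis.set(`processed:${eventId}`, '1', 'EX', 86_400, 'NX');
  if (marker !== 'OK') return false; // another execution already handled this event
  try {
    await work();
  } catch (error) {
    await redis.del(`processed:${eventId}`); // release the marker so the event can be retried
    throw error;
  }
  return true;
}
```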
3. Monitor Execution Patterns
- Track execution IDs and frequencies
- Set up alerts for unexpected duplication
4. Proper Configuration
# Essential for coordination
EXECUTIONS_MODE=queue
QUEUE_BULL_REDIS_HOST=your-redis-host
N8N_MULTI_MAIN_SETUP_ENABLED=true
OFFLOAD_MANUAL_EXECUTIONS_TO_WORKERS=true
- `packages/cli/src/load-nodes-and-credentials.ts` - Main node loading orchestrator
- `packages/core/src/nodes-loader/directory-loader.ts` - Base loader class
- `packages/core/src/nodes-loader/custom-directory-loader.ts` - Custom nodes loader
- `packages/core/src/nodes-loader/package-directory-loader.ts` - Community nodes loader
- `packages/cli/src/services/community-packages.service.ts` - Community package management
- `packages/cli/src/active-workflow-manager.ts` - Workflow activation/deactivation
- `packages/core/src/instance-settings/instance-settings.ts` - Path configurations
graph TD
A[n8n Startup] --> B["LoadNodesAndCredentials.init()"]
B --> C[Load n8n-nodes-base]
C --> D[Load Community Packages]
D --> E[Load Custom Nodes]
E --> F["postProcessLoaders()"]
F --> G[Setup Hot Reload]
G --> H[Ready to Execute]
I[File Change] --> J[Hot Reload Trigger]
J --> K[Reload Affected Package]
K --> L[Broadcast Update]
L --> M[Frontend Refresh]
# Place your nodes in the custom directory
~/.n8n/custom/
├── your-custom-node.node.js
├── your-custom-credential.credentials.js
└── utils/
└── shared-functions.js
// your-custom-node.node.js
class YourCustomNode {
description = {
displayName: 'Your Custom Node',
name: 'yourCustomNode',
group: ['transform'],
version: 1,
// ... rest of configuration
};
async execute() {
// Your node logic
}
}
module.exports = { YourCustomNode };
- Version Control: Keep custom nodes in separate repository
- Deployment: Use automated deployment to sync files to EFS
- Restart Coordination: Restart all n8n instances after updates
- Testing: Test in single-instance environment first
- Monitor file permissions on shared storage
- Log node loading errors across all instances
- Implement health checks for custom node availability
- Keep backups of working custom node versions
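A health check for custom node availability can be as simple as verifying that every `*.node.js` file on the shared volume still loads. The script below is a hypothetical example, not part of n8n:

```typescript
import { readdirSync } from 'fs';
import * as os from 'os';
import * as path from 'path';

// Attempt to require every *.node.js file under ~/.n8n/custom and report failures.
function checkDir(dir: string): string[] {
  const failures: string[] = [];
  for (const entry of readdirSync(dir, { withFileTypes: true })) {
    const fullPath = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      failures.push(...checkDir(fullPath));
    } else if (entry.name.endsWith('.node.js')) {
      try {
        require(fullPath); // throws on syntax errors or missing dependencies
      } catch (error) {
        failures.push(`${fullPath}: ${(error as Error).message}`);
      }
    }
  }
  return failures;
}

const failures = checkDir(path.join(os.homedir(), '.n8n', 'custom'));
if (failures.length > 0) {
  console.error(`Custom node health check failed:\n${failures.join('\n')}`);
  process.exit(1);
}
console.log('All custom nodes loaded successfully');
```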
Symptoms:
- "Node type not found" errors
- Inconsistent node availability across instances
- Package installation failures
Solutions:
- Check database synchronization
- Verify EFS mount permissions
- Ensure consistent npm cache across instances
- Consider migration to custom nodes
Symptoms:
- Nodes not loading after file changes
- Syntax errors in custom nodes
- Missing dependencies
Solutions:
- Restart all n8n instances
- Check file syntax and exports
- Verify file permissions on EFS
- Review n8n logs for loading errors
┌─────────────────────────────────────────────────────────────────┐
│ LoadNodesAndCredentials │
├─────────────────────────────────────────────────────────────────┤
│ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │
│ │ n8n-nodes-base │ │ Community Nodes │ │ Custom Nodes │ │
│ │ │ │ │ │ │ │
│ │ LazyPackage │ │ LazyPackage │ │ CustomDirectory │ │
│ │ DirectoryLoader │ │ DirectoryLoader │ │ Loader │ │
│ └─────────────────┘ └─────────────────┘ └─────────────────┘ │
├─────────────────────────────────────────────────────────────────┤
│ Memory Structure │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ loaded │ │ known │ │ types │ │
│ │ {nodes: {}} │ │ {nodes: {}} │ │ {nodes: []} │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Instance A │ │ Instance B │ │ Instance C │
│ │ │ │ │ │
│ ┌─────────────┐ │ │ ┌─────────────┐ │ │ ┌─────────────┐ │
│ │ Webhooks │ │ │ │ Triggers │ │ │ │ Pollers │ │
│ └─────────────┘ │ │ └─────────────┘ │ │ └─────────────┘ │
│ │ │ │ │ │ │ │ │
└────────┼────────┘ └────────┼────────┘ └────────┼────────┘
│ │ │
└──────────────────────┼──────────────────────┘
│
┌─────────────────┐
│ Database │
│ │
│ workflow_entity │
│ active: true │
└─────────────────┘
- Lazy Loading: Reduces initial memory footprint
- On-demand Loading: Loads nodes only when needed
- Hot Reload: Minimizes memory churn during development
- Community Nodes: Slower due to npm package scanning
- Custom Nodes: Faster due to simple file scanning
- Lazy Loading: Significantly improves startup time
- Custom Nodes: Linear scaling with no coordination overhead
- Community Nodes: Potential bottlenecks due to database synchronization
- Workflow Activation: Distributed across instances efficiently
- Package integrity verification via checksums
- Unverified packages can be disabled
- npm registry security scanning
- No built-in security scanning
- Direct file system access
- Manual code review required
- Hot Reload for Community Nodes: Extend hot reload to community packages
- Better Multi-Instance Sync: Improved coordination mechanisms
- Custom Node Packaging: Optional package management for custom nodes
- Security Scanning: Automated security analysis for custom nodes
- Community to Custom: Scripts to convert community nodes to custom format
- Custom to Community: Guidelines for publishing custom nodes
- Hybrid Approach: Support for both approaches in production
The analysis reveals that custom nodes provide a more robust solution for multi-server deployments due to their simplicity and independence from complex package management systems. While community nodes offer richer features and ecosystem integration, they introduce complexity that can cause issues in distributed environments.
For production multi-server setups with shared storage, custom nodes in `~/.n8n/custom/` provide the most reliable and maintainable approach.
This analysis was conducted on the n8n codebase and reflects the architecture as of the time of analysis. The n8n project is actively developed, so some details may evolve over time.