@vicenterusso
Created July 12, 2025 03:24
n8n Inner Workings: Node Loading and Multi-Instance Behavior

Overview

This document provides a comprehensive analysis of n8n's internal architecture, focusing on node loading mechanisms, memory management, and multi-instance behavior. Based on code analysis of the n8n codebase, this explains the differences between community nodes and custom nodes, and why custom nodes work better in multi-server setups.

Disclaimer: This file was generated by artificial intelligence after analyzing the source code and processing numerous questions I had about the n8n project. It may not be 100% accurate.

Node Loading and Memory Management

Memory Architecture

n8n uses a sophisticated three-tier memory structure for managing nodes:

// In LoadNodesAndCredentials class
loaded: { nodes: {}, credentials: {} }     // Immediately loaded instances
known: { nodes: {}, credentials: {} }      // Metadata (className, sourcePath)
types: { nodes: [], credentials: [] }      // Type descriptions for frontend

Loading Strategy

1. Lazy Loading System

  • n8n first loads lightweight JSON metadata files
  • Actual node classes are loaded on-demand when first accessed
  • Uses LazyPackageDirectoryLoader for efficient memory usage
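The effect of this strategy can be sketched in a few lines. This is an illustration, not n8n's actual loader; the `known`/`loaded` maps mirror the memory structure described above, and `LazyNodeRegistry` is a hypothetical name:

```javascript
// Simplified sketch of lazy node loading: metadata is registered up front,
// but the node class is only require()'d on first access.
class LazyNodeRegistry {
  constructor() {
    this.known = {};  // nodeName -> { className, sourcePath } (lightweight metadata)
    this.loaded = {}; // nodeName -> instantiated node class (populated on demand)
  }

  // Called at startup after scanning the lightweight JSON metadata files
  register(nodeName, className, sourcePath) {
    this.known[nodeName] = { className, sourcePath };
  }

  // Called when a workflow actually needs the node
  getByName(nodeName) {
    if (this.loaded[nodeName]) return this.loaded[nodeName];
    const meta = this.known[nodeName];
    if (!meta) throw new Error(`Unknown node type: ${nodeName}`);
    const module = require(meta.sourcePath); // the heavy work is deferred to here
    this.loaded[nodeName] = new module[meta.className]();
    return this.loaded[nodeName];
  }
}
```

Registering a node is cheap; the `require()` cost is only paid the first time a workflow actually uses the node type.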

2. Node Discovery Process

// Base paths scanned during initialization
const basePathsToScan = [
  path.join(CLI_DIR, '..'),                    // n8n-nodes-base
  path.join(CLI_DIR, 'node_modules'),          // n8n-nodes-base
  path.join(nodesDownloadDir, 'node_modules'), // Community nodes
  customExtensionDir                           // Custom nodes
];

3. Hot Reload Mechanism

  • Uses chokidar file watcher
  • Monitors changes to .js and .json files
  • Debounced by 100ms to prevent excessive reloads
  • Broadcasts nodeDescriptionUpdated events to frontend
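The 100ms debounce can be illustrated without chokidar itself. A hot-reload sketch under the assumption that chokidar's `change` events feed a handler like this one (`makeDebouncedReloader` is a hypothetical name):

```javascript
// Sketch of the debounce used by hot reload: many rapid file-change events
// collapse into a single reload, fired once the changes go quiet for `delayMs`.
function makeDebouncedReloader(reloadFn, delayMs = 100) {
  let timer = null;
  return function onFileChange(filePath) {
    if (timer) clearTimeout(timer); // a newer change resets the window
    timer = setTimeout(() => {
      timer = null;
      reloadFn(filePath); // fires once after changes settle
    }, delayMs);
  };
}
```

In n8n, the reload callback then broadcasts the `nodeDescriptionUpdated` event so the frontend refreshes its node list.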

When Nodes Are Reloaded

| Trigger | Scope | Method |
| --- | --- | --- |
| File changes | Specific package | Hot reload via chokidar |
| Package install/update | Community packages | postProcessLoaders() |
| n8n restart | All nodes | Full initialization |
| Manual reload | All nodes | API endpoint trigger |

Community Nodes vs Custom Nodes

Community Nodes Architecture

Storage Location: ~/.n8n/nodes/node_modules/[package-name]/

Installation Process:

  1. npm pack downloads tarball from registry
  2. Extract to managed node_modules directory
  3. Strip development dependencies from package.json
  4. Execute npm install in package directory
  5. Register package in installed_packages database table
  6. Load via PackageDirectoryLoader

Database Tracking:

-- Community nodes require database entries
installed_packages: packageName, installedVersion, authorName, authorEmail
installed_nodes: name, type, latestVersion, package

Loading Class: LazyPackageDirectoryLoader (falls back to PackageDirectoryLoader behavior when no lazy-loading metadata is available)

Custom Nodes Architecture

Storage Location: ~/.n8n/custom/

Installation Process:

  1. Direct file placement (manual copy)
  2. No package management required
  3. No dependency resolution
  4. No database tracking

File Pattern:

~/.n8n/custom/
├── my-node.node.js
├── my-credential.credentials.js
└── subdirectory/
    └── another-node.node.js

Loading Class: CustomDirectoryLoader

Key Differences

| Aspect | Community Nodes | Custom Nodes |
| --- | --- | --- |
| Storage | ~/.n8n/nodes/node_modules/ | ~/.n8n/custom/ |
| Package Management | Full npm ecosystem | Manual file management |
| Database Tracking | Required | None |
| Dependencies | Managed via npm | Self-contained |
| Loading | Lazy + Package loader | Direct file loading |
| Multi-instance | Complex synchronization | Simple file sharing |
| Updates | Via npm/UI | Manual replacement |

Why Custom Nodes Work Better in Multi-Server Setups

1. Filesystem Compatibility

  • Custom nodes: Simple .js files work perfectly with shared filesystems (EFS)
  • Community nodes: Complex node_modules structures can have issues with:
    • Symlink resolution across instances
    • File permissions in shared storage
    • Package lock conflicts
    • Binary dependency architecture mismatches

2. Database Synchronization

  • Custom nodes: No database dependencies
  • Community nodes: Require installed_packages table synchronization
    • Risk of race conditions during package installation
    • Database state mismatch between instances

3. Node.js Module Cache Issues

  • Custom nodes: Simpler cache management
  • Community nodes: Complex require.cache cleanup needed
    // Community packages need cache cleanup on reload
    unloadAll() {
      const filesToDelete = Object.keys(require.cache)
        .filter(filePath => isContainedWithin(this.directory, filePath));
      filesToDelete.forEach(filePath => delete require.cache[filePath]);
    }

4. Loading Complexity

  • Custom nodes: Direct file scanning and loading
  • Community nodes: Multi-step process with potential failure points:
    1. Package integrity verification
    2. npm dependency resolution
    3. Database transaction management
    4. Cross-instance state synchronization

Workflow Enable/Disable Mechanics

Workflow Activation Process

1. Database Update

await WorkflowRepository.update(workflowId, { active: true });

2. ActiveWorkflowManager Registration

// Three types of activatable workflows
if (shouldAddWebhooks) {
  added.webhooks = await this.addWebhooks(workflow, additionalData, 'trigger', activationMode);
}
if (shouldAddTriggersAndPollers) {
  added.triggersAndPollers = await this.addTriggersAndPollers(dbWorkflow, workflow, options);
}

3. Validation Checks

  • Verify workflow has trigger/webhook/poller nodes
  • Check credential permissions
  • Validate node configurations
  • Ensure no critical errors exist

4. Memory Storage

// Simplified: active workflows are kept in memory along with their trigger responses
this.activeWorkflows[workflowId] = {
  triggerResponses: [], // ITriggerResponse[]
};

Workflow Deactivation Process

1. Remove from Active Memory

await this.removeWorkflowTriggersAndPollers(workflowId);

2. Clean Up Resources

  • Remove HTTP webhook endpoints
  • Stop polling timers
  • Cancel pending operations
  • Clear execution queues

3. Database Update

await WorkflowRepository.update(workflowId, { active: false });

Multi-Instance Coordination

Pub/Sub Architecture:

// Commands published to other instances
void this.publisher.publishCommand({
  command: 'add-webhooks-triggers-and-pollers',
  payload: { workflowId }
});

void this.publisher.publishCommand({
  command: 'remove-triggers-and-pollers', 
  payload: { workflowId }
});

Instance-Specific Behavior:

  • Each instance manages its own webhook registrations
  • Database serves as source of truth for workflow state
  • Webhook endpoints are instance-specific
  • Triggers and pollers distributed across instances

Configuration Options

Community Package Settings

# Environment variables controlling community packages
N8N_COMMUNITY_PACKAGES_ENABLED=true
N8N_COMMUNITY_PACKAGES_REGISTRY=https://registry.npmjs.org
N8N_REINSTALL_MISSING_PACKAGES=false
N8N_UNVERIFIED_PACKAGES_ENABLED=true
N8N_VERIFIED_PACKAGES_ENABLED=true
N8N_COMMUNITY_PACKAGES_PREVENT_LOADING=false

Custom Node Settings

# Custom extension directory (default: ~/.n8n/custom);
# read internally via the CUSTOM_EXTENSION_ENV constant
N8N_CUSTOM_EXTENSIONS=~/.n8n/custom

Concurrency Control and Multi-Instance Coordination

Leadership Election System

n8n uses a Redis-based leadership election system to prevent duplicate executions in multi-server setups:

Leadership Mechanism:

// packages/cli/src/scaling/multi-main-setup.ee.ts
private async tryBecomeLeader() {
  const { hostId } = this.instanceSettings;
  
  // Redis SET NX operation - only succeeds if key doesn't exist
  const keySetSuccessfully = await this.publisher.setIfNotExists(
    this.leaderKey,
    hostId,
    this.leaderKeyTtl
  );
  
  if (keySetSuccessfully) {
    this.instanceSettings.markAsLeader();
    this.emit('leader-takeover');
  } else {
    this.instanceSettings.markAsFollower();
  }
}

Key Configuration:

# Multi-main setup environment variables
N8N_MULTI_MAIN_SETUP_ENABLED=true
N8N_MULTI_MAIN_SETUP_KEY_TTL=10      # seconds
N8N_MULTI_MAIN_SETUP_CHECK_INTERVAL=3 # seconds
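The election reduces to an atomic set-if-absent with a TTL. A toy in-memory stand-in for Redis (the `FakeRedis` class is hypothetical, for illustration) shows why only one instance can win:

```javascript
// In-memory stand-in for Redis SET NX EX: the first caller to set the key wins.
class FakeRedis {
  constructor() { this.store = new Map(); }
  setIfNotExists(key, value /*, ttlSeconds */) {
    if (this.store.has(key)) return false; // someone already holds the key
    this.store.set(key, value);
    return true; // in real Redis this check-and-set is atomic
  }
}

// Mirrors tryBecomeLeader() above: the winner marks itself leader,
// everyone else becomes a follower.
function tryBecomeLeader(redis, hostId, leaderKey = 'n8n:leader') {
  return redis.setIfNotExists(leaderKey, hostId) ? 'leader' : 'follower';
}
```

The TTL (omitted here) is what makes failover work: if the leader stops renewing the key, it expires and the next `tryBecomeLeader` call from any follower succeeds.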

Scheduled Task Manager

The ScheduledTaskManager ensures only the leader instance executes scheduled triggers:

// packages/core/src/execution-engine/scheduled-task-manager.ts
registerCron(workflow: Workflow, cronExpression: CronExpression, onTick: () => void) {
  const cronJob = new CronJob(
    cronExpression,
    () => {
      // KEY MECHANISM: Only execute on leader
      if (this.instanceSettings.isLeader) onTick();
    },
    undefined,
    true,
    workflow.timezone,
  );
  // Register the cron job
  this.cronJobs.set(workflow.id, [cronJob]);
}

This is why your scheduler triggers never duplicated - only the leader instance executes them!

Concurrency Control Service

n8n has a built-in ConcurrencyControlService that manages execution limits:

// packages/cli/src/concurrency/concurrency-control.service.ts
async throttle({ mode, executionId }: { mode: ExecutionMode; executionId: string }) {
  if (!this.isEnabled || this.isUnlimited(mode)) return;
  
  await this.getQueue(mode)?.enqueue(executionId);
}

Configuration:

# Concurrency limits
N8N_CONCURRENCY_PRODUCTION_LIMIT=10  # Max concurrent executions
N8N_CONCURRENCY_EVALUATION_LIMIT=5   # Max concurrent evaluations
EXECUTIONS_MODE=queue                 # Enable queue mode
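The throttle behaves like a counting semaphore per execution mode. A compact sketch of that behavior (simplified; not the actual service, and `ConcurrencyQueue` is a hypothetical name):

```javascript
// Simplified concurrency gate: at most `limit` executions run at once,
// the rest wait in FIFO order until a slot frees up.
class ConcurrencyQueue {
  constructor(limit) {
    this.limit = limit;
    this.active = 0;
    this.waiting = []; // resolvers for queued executions
  }

  async enqueue() {
    if (this.active < this.limit) { this.active++; return; }
    await new Promise((resolve) => this.waiting.push(resolve)); // wait for a slot
    this.active++;
  }

  dequeue() {
    this.active--;
    const next = this.waiting.shift();
    if (next) next(); // hand the freed slot to the oldest waiter
  }
}
```

With `N8N_CONCURRENCY_PRODUCTION_LIMIT=10`, the eleventh production execution simply waits in the queue instead of starting.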

Active Workflow Management

The ActiveWorkflowManager ensures triggers and pollers are only registered on the leader:

// packages/cli/src/active-workflow-manager.ts (simplified)
shouldAddTriggersAndPollers() {
  return this.instanceSettings.isLeader;
}

shouldAddWebhooks(activationMode: WorkflowActivateMode) {
  if (activationMode === 'init') return true;             // re-register on startup
  if (activationMode === 'leadershipChange') return false; // webhooks are unaffected
  return this.instanceSettings.isLeader;                   // 'activate' / 'update'
}

Queue-Based Architecture

In scaling mode, n8n uses Redis queues for coordination:

// packages/cli/src/scaling/scaling.service.ts
setupWorker(concurrency: number) {
  // Workers process jobs from shared Redis queue
  void this.queue.process(JOB_TYPE_NAME, concurrency, async (job: Job) => {
    await this.jobProcessor.processJob(job);
  });
}

Multi-Instance Architecture

┌─────────────────────────────────────────────────────────────────────┐
│                    Multi-Instance Setup                             │
├─────────────────────────────────────────────────────────────────────┤
│  ┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐  │
│  │   Instance A    │    │   Instance B    │    │   Instance C    │  │
│  │   (LEADER)      │    │   (FOLLOWER)    │    │   (FOLLOWER)    │  │
│  │                 │    │                 │    │                 │  │
│  │ ✓ Triggers      │    │ ✗ Triggers      │    │ ✗ Triggers     │  │
│  │ ✓ Pollers       │    │ ✗ Pollers       │    │ ✗ Pollers      │  │
│  │ ✓ Schedulers    │    │ ✗ Schedulers    │    │ ✗ Schedulers   │  │
│  │ ✓ Webhooks      │    │ ✓ Webhooks      │    │ ✓ Webhooks     │  │
│  │ ✓ Executions    │    │ ✓ Executions    │    │ ✓ Executions   │  │
│  └─────────────────┘    └─────────────────┘    └─────────────────┘   │
│            │                      │                      │           │
│            └──────────────────────┼──────────────────────┘           │
│                                   │                                  │
│                    ┌─────────────────────────────────────┐           │
│                    │           Redis Cluster             │           │
│                    │                                     │           │
│                    │  Leadership Key: "instance-123"     │           │
│                    │  Execution Queue: Bull Queue        │           │
│                    │  Pub/Sub: Event Coordination        │           │
│                    └─────────────────────────────────────┘           │
└──────────────────────────────────────────────────────────────────────┘

Built-in Mutex Mechanisms

n8n has several built-in "mutex" mechanisms:

  1. Redis Leadership Election:

    • Uses Redis SET NX (set if not exists) for atomic leader election
    • TTL-based leader key renewal
    • Automatic failover when leader becomes unavailable
  2. Execution Queues:

    • Bull queues with Redis backend
    • Atomic job processing
    • Prevents duplicate execution processing
  3. Database Constraints:

    • Unique execution IDs
    • Atomic status updates
    • Prevents race conditions

Why Your Scheduler Never Duplicated

Your scheduler triggers worked correctly because:

  1. Leadership Election: Only one instance becomes leader
  2. ScheduledTaskManager: Only leader executes scheduled tasks
  3. Atomic Operations: Redis operations are atomic
  4. Proper Failover: If leader fails, another instance takes over

When Duplication Can Occur

Duplication issues typically happen when:

  1. Improper Setup: Missing Redis coordination
  2. Split-Brain: Network partitions causing multiple leaders
  3. Race Conditions: During leadership transitions
  4. Manual Triggers: These bypass the leadership system
  5. Webhook Nodes: These run on all instances (by design)

Configuration for Multi-Server Setup

# Essential settings for multi-server setup
EXECUTIONS_MODE=queue
QUEUE_BULL_REDIS_HOST=your-redis-host
QUEUE_BULL_REDIS_PORT=6379
N8N_MULTI_MAIN_SETUP_ENABLED=true
N8N_CONCURRENCY_PRODUCTION_LIMIT=10

# Optional but recommended
N8N_MULTI_MAIN_SETUP_KEY_TTL=10
N8N_MULTI_MAIN_SETUP_CHECK_INTERVAL=3

Leadership Events

n8n services can respond to leadership changes using decorators:

@OnLeaderTakeover()
startBackgroundProcess() {
  // Called when instance becomes leader
}

@OnLeaderStepdown()
stopBackgroundProcess() {
  // Called when instance loses leadership
}

This ensures services start/stop appropriately based on leadership status.

Execute Workflow Node and Duplication Risks

How Execute Workflow Node Works

The Execute Workflow node (now called "Execute Sub-workflow") creates new executions that are treated as manual executions, not scheduled triggers:

// packages/core/src/execution-engine/node-execution-context/base-execute-context.ts
async executeWorkflow(
  workflowInfo: IExecuteWorkflowInfo,
  inputData?: INodeExecutionData[],
  options?: { doNotWaitToFinish?: boolean; parentExecution?: RelatedExecution }
): Promise<ExecuteWorkflowData> {
  // Creates a new execution via WorkflowRunner.run()
  const result = await this.additionalData.executeWorkflow(workflowInfo, this.additionalData, {
    ...options,
    parentWorkflowId: this.workflow.id,
    inputData,
    parentWorkflowSettings: this.workflow.settings,
    node: this.node,
  });
  return result;
}

Duplication Risk Analysis

| Execution Type | Leadership Protection | Duplication Risk | Reason |
| --- | --- | --- | --- |
| Scheduled Triggers | ✅ Yes | ❌ No | Only leader executes |
| Execute Workflow | ❌ No | ⚠️ Yes | Treated as manual execution |
| Manual Executions | ❌ No | ⚠️ Yes | No leadership election |
| Webhook Executions | ❌ No | ⚠️ Yes | Run on all webhook instances |

Why Execute Workflow Can Duplicate

1. No Leadership Election

// Execute Workflow bypasses the leadership system
const executionId = await this.additionalData.executeWorkflow(workflowInfo, additionalData, {
  // No leadership check here - runs on any instance
});

2. Treated as Manual Execution

// packages/cli/src/workflow-runner.ts
async run(data: IWorkflowExecutionDataProcess) {
  // Execute Workflow creates manual-type executions
  const shouldEnqueue = this.executionsMode === 'queue' && data.executionMode !== 'manual';
  // Manual executions (including Execute Workflow) may not be queued
}

3. Multi-Instance Behavior

  • If the same parent workflow runs on multiple instances
  • Each instance can invoke the sub-workflow independently
  • No coordination mechanism prevents this

Example Duplication Scenario

Instance A: Workflow X → Execute Workflow Y
Instance B: Workflow X → Execute Workflow Y
Instance C: Workflow X → Execute Workflow Y

Result: Workflow Y executes 3 times (once per instance)

Prevention Strategies

1. Use Execution Mode Configuration

# Force all executions (including manual) to workers
OFFLOAD_MANUAL_EXECUTIONS_TO_WORKERS=true
EXECUTIONS_MODE=queue

2. Implement Application-Level Locking

// In your workflow, add a uniqueness check
const executionKey = `workflow-${workflowId}-${uniqueIdentifier}`;
const lockAcquired = await redis.set(executionKey, 'locked', 'NX', 'EX', 300);
if (!lockAcquired) {
  throw new Error('Workflow already executing');
}

Redis SET NX Pattern - The Recommended Solution

This is the most effective approach for preventing Execute Workflow duplication:

// At the beginning of your called workflow (sub-workflow)
const lockKey = `workflow-lock:${workflowId}:${uniqueIdentifier}`;
const lockTTL = 300; // 5 minutes safety timeout

// Atomic operation - only one instance can acquire the lock
const lockAcquired = await redis.set(lockKey, instanceId, 'NX', 'EX', lockTTL);

if (!lockAcquired) {
  console.log('Workflow already running, exiting...');
  return; // Exit early - no duplication
}

try {
  // Your actual workflow logic here
  await performWorkflowTasks();
} finally {
  // Always clean up the lock
  await redis.del(lockKey);
}

Implementation Pattern

1. Lock Structure

// Use a meaningful lock key structure
const lockKey = `workflow-lock:${workflowId}:${inputHash}`;
// Examples:
// "workflow-lock:user-onboarding:user-123"
// "workflow-lock:data-processing:batch-456"
// "workflow-lock:notification-sender:event-789"

2. Complete Workflow Pattern

┌─────────────────────────────────────────────────────────────────┐
│                    Sub-workflow with Redis Lock                 │
├─────────────────────────────────────────────────────────────────┤
│  1. Redis SET NX Check                                          │
│     ├─ Success: Continue with workflow                          │
│     └─ Failure: Exit early (already running)                   │
│                                                                 │
│  2. Execute Workflow Logic                                      │
│     ├─ Process data                                             │
│     ├─ Call APIs                                                │
│     └─ Save results                                             │
│                                                                 │
│  3. Cleanup (in finally block)                                 │
│     └─ Redis DEL to release lock                               │
└─────────────────────────────────────────────────────────────────┘

Advanced Lock Patterns

1. Input-Based Locking

// Lock based on input data to allow parallel processing of different data
const crypto = require('crypto');
const inputHash = crypto.createHash('md5').update(JSON.stringify(inputData)).digest('hex');
const lockKey = `workflow-lock:${workflowId}:${inputHash}`;

2. Time-Based Locking

// For workflows that should run only once per time period
const timePeriod = new Date().toISOString().slice(0, 13); // hour granularity, e.g. "2025-07-12T03"
const lockKey = `workflow-lock:${workflowId}:${timePeriod}`;

3. Resource-Based Locking

// For workflows that process specific resources
const resourceId = inputData.userId || inputData.orderId || inputData.accountId;
const lockKey = `workflow-lock:${workflowId}:resource-${resourceId}`;

Why This Pattern Works

1. Atomic Operation

  • Redis SET NX is atomic across all instances
  • No race conditions possible
  • Either succeeds or fails cleanly

2. Distributed by Nature

  • Works across multiple n8n instances
  • Shared Redis acts as coordination point
  • No instance-specific configuration needed

3. Safe with TTL

  • Auto-cleanup if workflow crashes
  • Prevents deadlocks
  • Configurable timeout based on workflow duration
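One caveat worth noting: the plain `redis.del(lockKey)` in the pattern above can release a lock that another instance has since acquired (if the TTL expired mid-run). The standard remedy is a compare-and-delete that only removes the key when it still holds our instance ID; in real Redis this must run atomically, typically as a small Lua script via EVAL. A sketch of the logic, using a plain Map as an in-memory stand-in so it stays self-contained:

```javascript
// Compare-and-delete: only release the lock if we still own it.
// In real Redis the release must be atomic, e.g. as a Lua script:
//   if redis.call('GET', KEYS[1]) == ARGV[1]
//   then return redis.call('DEL', KEYS[1]) else return 0 end
const store = new Map(); // stand-in for Redis (illustration only)

function acquire(lockKey, instanceId) {
  if (store.has(lockKey)) return false; // SET NX fails: already locked
  store.set(lockKey, instanceId);
  return true;
}

function safeRelease(lockKey, instanceId) {
  if (store.get(lockKey) !== instanceId) return false; // not ours anymore
  store.delete(lockKey);
  return true;
}
```

Storing the instance ID as the lock value (as the earlier snippet already does) is what makes this ownership check possible.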

4. Flexible Implementation

  • Can be adapted to any workflow pattern
  • Supports different locking strategies
  • Easy to implement with custom Redis node

Example Redis Node Implementation

// Custom Redis Node for Workflow Lock
const operation = this.getNodeParameter('operation', 0);
const lockKey = this.getNodeParameter('lockKey', 0);
const ttl = this.getNodeParameter('ttl', 0, 300);

if (operation === 'acquire') {
  const result = await redis.set(lockKey, this.getInstanceId(), 'NX', 'EX', ttl);
  return [{ json: { lockAcquired: result === 'OK', lockKey } }];
}

if (operation === 'release') {
  await redis.del(lockKey);
  return [{ json: { lockReleased: true, lockKey } }];
}

Multi-Instance Test Scenario

Instance A: Parent → Execute Sub-workflow → Redis SET NX ✅ (succeeds)
Instance B: Parent → Execute Sub-workflow → Redis SET NX ❌ (fails, exits)
Instance C: Parent → Execute Sub-workflow → Redis SET NX ❌ (fails, exits)

Result: Sub-workflow runs exactly once (Instance A only)

3. Use Webhook-Based Coordination

// Instead of Execute Workflow, use HTTP Request to a dedicated endpoint
// that implements its own locking mechanism

Best Practices for Multi-Instance Setups

1. Minimize Execute Workflow Usage

  • Use sparingly in multi-instance environments
  • Consider alternatives like shared queues or webhooks

2. Implement Idempotency

  • Design workflows to handle duplicate executions gracefully
  • Use unique identifiers and database constraints
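Idempotency can be as simple as a processed-keys check before any side effect runs. In this sketch a Set stands in for what would be a database table with a UNIQUE index in production (`handleExecution` is a hypothetical name):

```javascript
// Idempotent handler: duplicate executions with the same key become no-ops.
const processedKeys = new Set(); // in production: a DB table with a UNIQUE constraint

function handleExecution(idempotencyKey, sideEffect) {
  if (processedKeys.has(idempotencyKey)) {
    return { skipped: true }; // duplicate: do nothing, report success
  }
  processedKeys.add(idempotencyKey);
  sideEffect(); // e.g. send a notification, write an order
  return { skipped: false };
}
```

With this in place, duplicated Execute Workflow or webhook executions still cause only one real side effect per key.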

3. Monitor Execution Patterns

  • Track execution IDs and frequencies
  • Set up alerts for unexpected duplication

4. Proper Configuration

# Essential for coordination
EXECUTIONS_MODE=queue
QUEUE_BULL_REDIS_HOST=your-redis-host
N8N_MULTI_MAIN_SETUP_ENABLED=true
OFFLOAD_MANUAL_EXECUTIONS_TO_WORKERS=true

Code References

Key Files Analyzed

  • packages/cli/src/load-nodes-and-credentials.ts - Main node loading orchestrator
  • packages/core/src/nodes-loader/directory-loader.ts - Base loader class
  • packages/core/src/nodes-loader/custom-directory-loader.ts - Custom nodes loader
  • packages/core/src/nodes-loader/package-directory-loader.ts - Community nodes loader
  • packages/cli/src/services/community-packages.service.ts - Community package management
  • packages/cli/src/active-workflow-manager.ts - Workflow activation/deactivation
  • packages/core/src/instance-settings/instance-settings.ts - Path configurations

Loading Process Flow

graph TD
    A[n8n Startup] --> B[LoadNodesAndCredentials.init()]
    B --> C[Load n8n-nodes-base]
    C --> D[Load Community Packages]
    D --> E[Load Custom Nodes]
    E --> F[postProcessLoaders()]
    F --> G[Setup Hot Reload]
    G --> H[Ready to Execute]
    
    I[File Change] --> J[Hot Reload Trigger]
    J --> K[Reload Affected Package]
    K --> L[Broadcast Update]
    L --> M[Frontend Refresh]

Recommendations for Multi-Server EFS Setup

1. Use Custom Nodes Directory

# Place your nodes in the custom directory
~/.n8n/custom/
├── your-custom-node.node.js
├── your-custom-credential.credentials.js
└── utils/
    └── shared-functions.js

2. File Structure Best Practices

// your-custom-node.node.js
class YourCustomNode {
  description = {
    displayName: 'Your Custom Node',
    name: 'yourCustomNode',
    group: ['transform'],
    version: 1,
    // ... rest of configuration
  };
  
  async execute() {
    // Your node logic
  }
}

module.exports = { YourCustomNode };

3. Deployment Strategy

  1. Version Control: Keep custom nodes in separate repository
  2. Deployment: Use automated deployment to sync files to EFS
  3. Restart Coordination: Restart all n8n instances after updates
  4. Testing: Test in single-instance environment first

4. Monitoring and Maintenance

  • Monitor file permissions on shared storage
  • Log node loading errors across all instances
  • Implement health checks for custom node availability
  • Keep backups of working custom node versions

Troubleshooting Common Issues

Community Nodes in Multi-Instance Setup

Symptoms:

  • "Node type not found" errors
  • Inconsistent node availability across instances
  • Package installation failures

Solutions:

  1. Check database synchronization
  2. Verify EFS mount permissions
  3. Ensure consistent npm cache across instances
  4. Consider migration to custom nodes

Custom Nodes Issues

Symptoms:

  • Nodes not loading after file changes
  • Syntax errors in custom nodes
  • Missing dependencies

Solutions:

  1. Restart all n8n instances
  2. Check file syntax and exports
  3. Verify file permissions on EFS
  4. Review n8n logs for loading errors

Architecture Diagrams

Node Loading Architecture

┌─────────────────────────────────────────────────────────────────┐
│                    LoadNodesAndCredentials                      │
├─────────────────────────────────────────────────────────────────┤
│  ┌─────────────────┐  ┌─────────────────┐  ┌─────────────────┐  │
│  │  n8n-nodes-base │  │ Community Nodes │  │  Custom Nodes   │  │
│  │                 │  │                 │  │                 │  │
│  │ LazyPackage     │  │ LazyPackage     │  │ CustomDirectory │  │
│  │ DirectoryLoader │  │ DirectoryLoader │  │ Loader          │  │
│  └─────────────────┘  └─────────────────┘  └─────────────────┘  │
├─────────────────────────────────────────────────────────────────┤
│                          Memory Structure                       │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐             │
│  │   loaded    │  │    known    │  │    types    │             │
│  │ {nodes: {}} │  │ {nodes: {}} │  │ {nodes: []} │             │
│  └─────────────┘  └─────────────┘  └─────────────┘             │
└─────────────────────────────────────────────────────────────────┘

Multi-Instance Workflow Activation

┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   Instance A    │    │   Instance B    │    │   Instance C    │
│                 │    │                 │    │                 │
│ ┌─────────────┐ │    │ ┌─────────────┐ │    │ ┌─────────────┐ │
│ │   Webhooks  │ │    │ │   Triggers  │ │    │ │   Pollers   │ │
│ └─────────────┘ │    │ └─────────────┘ │    │ └─────────────┘ │
│        │        │    │        │        │    │        │        │
└────────┼────────┘    └────────┼────────┘    └────────┼────────┘
         │                      │                      │
         └──────────────────────┼──────────────────────┘
                                │
                    ┌─────────────────┐
                    │    Database     │
                    │                 │
                    │ workflow_entity │
                    │  active: true   │
                    └─────────────────┘

Performance Considerations

Memory Usage

  • Lazy Loading: Reduces initial memory footprint
  • On-demand Loading: Loads nodes only when needed
  • Hot Reload: Minimizes memory churn during development

Startup Time

  • Community Nodes: Slower due to npm package scanning
  • Custom Nodes: Faster due to simple file scanning
  • Lazy Loading: Significantly improves startup time

Multi-Instance Scaling

  • Custom Nodes: Linear scaling with no coordination overhead
  • Community Nodes: Potential bottlenecks due to database synchronization
  • Workflow Activation: Distributed across instances efficiently

Security Considerations

Community Nodes

  • Package integrity verification via checksums
  • Unverified packages can be disabled
  • npm registry security scanning

Custom Nodes

  • No built-in security scanning
  • Direct file system access
  • Manual code review required

Future Considerations

Potential Improvements

  1. Hot Reload for Community Nodes: Extend hot reload to community packages
  2. Better Multi-Instance Sync: Improved coordination mechanisms
  3. Custom Node Packaging: Optional package management for custom nodes
  4. Security Scanning: Automated security analysis for custom nodes

Migration Strategies

  • Community to Custom: Scripts to convert community nodes to custom format
  • Custom to Community: Guidelines for publishing custom nodes
  • Hybrid Approach: Support for both approaches in production

Conclusion

The analysis reveals that custom nodes provide a more robust solution for multi-server deployments due to their simplicity and independence from complex package management systems. While community nodes offer richer features and ecosystem integration, they introduce complexity that can cause issues in distributed environments.

For production multi-server setups with shared storage, custom nodes in ~/.n8n/custom/ provide the most reliable and maintainable approach.


This analysis was conducted on the n8n codebase and reflects the architecture as of the time of analysis. The n8n project is actively developed, so some details may evolve over time.
