Summary of Laravel Worldwide Meetup - Queues, Jobs, and Workers: Building Resilient Background Processing (https://www.youtube.com/watch?v=EBsvfjNUUj8)

Summary: Building Resilient Background Processing in Laravel

This talk by Haris Weftopoulos provides a deep dive into Laravel's queue system, moving from fundamental concepts to advanced production-ready strategies. The key takeaway is that queues are essential not just for performance, but for building reliable and scalable applications.


1. Core Concepts & Why You Should Use Queues

  • The "Why": Queues decouple time-consuming tasks (like sending emails, processing images, calling third-party APIs) from the user's web request. This results in:

    • Performance: Instant-feeling responses for the user.
    • Reliability: Built-in retry mechanisms handle temporary failures (e.g., network issues).
    • Scalability: You can scale your web servers and your queue workers independently.
    • Fault Tolerance: A failed background job will not crash your entire web application.
  • How it Works (FIFO-ish): Laravel's queue processes jobs in the order they are received (First-In, First-Out). However, unlike a strict FIFO pipeline, a failed job does not block the entire queue: the worker retries it or marks it as failed, then immediately moves on to the next job.

  • Workers are Long-Lived Processes: This is a critical concept. Unlike a web request, which boots the framework, handles the request, and dies, a queue worker boots the framework once and stays alive, processing job after job with the application already in memory. This is highly efficient, but it has important implications.

    • Warning: Because workers are long-lived, they hold your application code in memory. You MUST restart your workers after every deployment to make them pick up the new code. Forgetting this is a common source of confusing bugs.
    • Warning: Long-lived processes can accumulate memory over time. It's a best practice to configure them to restart periodically.
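
To make the decoupling concrete, here is a minimal sketch of handing work to the queue instead of doing it inline (SendWelcomeEmail is a hypothetical job class, not one from the talk):

```php
<?php

use App\Jobs\SendWelcomeEmail;

// In a controller: instead of sending the email inline (slow, blocks
// the HTTP response), push a job onto the queue and return immediately.
// A long-lived worker picks the job up moments later.
SendWelcomeEmail::dispatch($user->id);

// Optionally delay processing, e.g. to let a transaction settle first:
SendWelcomeEmail::dispatch($user->id)->delay(now()->addSeconds(30));
```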

2. Job Configuration & Best Practices

These are features you can configure directly on your Job class to make it more robust; a combined sketch follows the list below.

  • Retries & Backoff:

    • public $tries = 5;: The number of times a job can be attempted before being marked as permanently failed.
    • public $backoff = [10, 30, 60];: An array defining the delay (in seconds) between retries. This is better than immediate retries as it gives services time to recover.
    • public function retryUntil(): A method that returns a DateTime instance, specifying a time limit after which the job should no longer be retried.
  • Handling Permanent Failures:

    • public function failed(Throwable $exception): This method is called when a job has exhausted all its retries and is marked as failed. This is the perfect place to send a notification to your team (e.g., via Slack or email) or clean up any resources.
  • Ensuring Uniqueness (Preventing Duplicates):

    • Implement the ShouldBeUnique interface on your job.
    • public function uniqueId(): Define a unique string for this job instance (e.g., "refund-order-{$this->order->id}"). Laravel will not queue a new job if another job with the same unique ID is already pending.
    • public $uniqueFor = 3600;: The maximum number of seconds the uniqueness lock is held. The lock is released when the job finishes processing (or fails all of its retries); this timeout is a safety net in case it never does.
  • Job Middleware:

    • Just like route middleware, you can wrap jobs in middleware to add cross-cutting concerns.
    • Define a middleware() method on your job.
    • Use Case: Excellent for rate-limiting jobs that interact with external APIs to avoid hitting API limits. The talk showed a powerful example of using a Redis-based rate limiter that would release the job back to the queue with a delay if the rate limit was exceeded.
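
Putting these options together, a minimal sketch of a fully configured job class (RefundOrder, RefundService, and the 'refunds' rate limiter are illustrative assumptions, not code from the talk):

```php
<?php

namespace App\Jobs;

use App\Models\Order;
use DateTime;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldBeUnique;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\Middleware\RateLimitedWithRedis;
use Illuminate\Queue\SerializesModels;
use Throwable;

class RefundOrder implements ShouldQueue, ShouldBeUnique
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    // Attempt the job up to 5 times before marking it as failed.
    public $tries = 5;

    // Wait 10s, then 30s, then 60s between successive retries.
    public $backoff = [10, 30, 60];

    // Hold the uniqueness lock for at most an hour.
    public $uniqueFor = 3600;

    public function __construct(public Order $order) {}

    // Deadline-based retry limit; takes precedence over $tries.
    public function retryUntil(): DateTime
    {
        return now()->addMinutes(30);
    }

    // Only one pending refund job per order at a time.
    public function uniqueId(): string
    {
        return "refund-order-{$this->order->id}";
    }

    // Redis-based rate limiting; assumes a 'refunds' limiter was
    // registered via RateLimiter::for() in a service provider.
    public function middleware(): array
    {
        return [new RateLimitedWithRedis('refunds')];
    }

    public function handle(): void
    {
        // Keep the job thin; business logic lives in a service class.
        app(\App\Services\RefundService::class)->refund($this->order);
    }

    // Runs once all retries are exhausted: notify the team, clean up.
    public function failed(Throwable $exception): void
    {
        // e.g. report($exception) or a Slack notification.
    }
}
```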

3. Orchestrating Complex Workflows

For workflows involving multiple jobs, Laravel provides powerful tools; a sketch of both patterns follows the list below.

  • Bus Chaining (Bus::chain([...])):

    • Use Case: For running a sequence of jobs where each job depends on the previous one completing successfully.
    • Behavior: Jobs run sequentially. If any job in the chain fails, the entire chain stops, and subsequent jobs are not dispatched.
    • Example: 1. Download Video -> 2. Transcode Video -> 3. Upload Video to S3.
  • Bus Batching (Bus::batch([...])):

    • Use Case: For processing a large number of independent jobs in parallel.
    • Behavior: All jobs are dispatched to the queue at once, and workers can pick them up and process them concurrently. You can also monitor the batch's progress, handle failures, and run code when the batch completes (via then, catch, and finally callbacks).
    • Example: Generating 1,000 invoices or resizing 500 uploaded images.
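
A minimal sketch of both patterns (the job classes and the $video/$orders variables are hypothetical; batched jobs must also use the Illuminate\Bus\Batchable trait, and the job batches table must be migrated):

```php
<?php

use App\Jobs\DownloadVideo;
use App\Jobs\GenerateInvoice;
use App\Jobs\TranscodeVideo;
use App\Jobs\UploadVideoToS3;
use Illuminate\Bus\Batch;
use Illuminate\Support\Facades\Bus;
use Throwable;

// Chain: strictly sequential; a failure stops everything downstream.
Bus::chain([
    new DownloadVideo($video),
    new TranscodeVideo($video),
    new UploadVideoToS3($video),
])->dispatch();

// Batch: independent jobs processed in parallel, with lifecycle hooks.
Bus::batch(
    $orders->map(fn ($order) => new GenerateInvoice($order))->all()
)->then(function (Batch $batch) {
    // Every job completed successfully.
})->catch(function (Batch $batch, Throwable $e) {
    // The first failing job was detected.
})->finally(function (Batch $batch) {
    // The batch has finished executing, success or not.
})->dispatch();
```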

4. Monitoring with Laravel Horizon (Production Essential)

  • What it is: A beautiful dashboard and code-driven configuration system for Laravel's Redis queues.

    • Note: Horizon requires you to use Redis as your queue driver.
  • Why use it: It provides essential visibility into your queue system in production.

    • Real-time stats: Job throughput, wait times, etc.
    • Easy failed job management (view stack traces, retry jobs).
    • Monitor specific jobs using tags.
    • Code-based configuration for your workers (config/horizon.php).
  • Key Horizon Configuration (config/horizon.php):

    • balance => 'auto': Allows Horizon to intelligently auto-scale the number of worker processes per queue based on workload.
    • maxTime & maxJobs: Configure workers to automatically restart after running for a certain amount of time or processing a certain number of jobs. This is the recommended way to handle memory leaks.
    • nice: Set the CPU priority for worker processes.
  • Pro Tip for Metrics: To see the historical graphs on Horizon's "Metrics" page, you must schedule the php artisan horizon:snapshot command to run periodically (e.g., every five minutes) in your app/Console/Kernel.php (see the sketch below).
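
A sketch of the relevant pieces (the supervisor name and the numbers are illustrative, not values from the talk):

```php
<?php

// config/horizon.php (excerpt)
'environments' => [
    'production' => [
        'supervisor-1' => [
            'connection'   => 'redis',
            'queue'        => ['default'],
            'balance'      => 'auto', // auto-scale processes per queue
            'minProcesses' => 1,
            'maxProcesses' => 10,
            'maxTime'      => 3600,   // recycle each worker after an hour...
            'maxJobs'      => 1000,   // ...or after 1,000 processed jobs
            'nice'         => 10,     // lower CPU priority for workers
        ],
    ],
],

// app/Console/Kernel.php, inside schedule(Schedule $schedule):
$schedule->command('horizon:snapshot')->everyFiveMinutes();
```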


5. Dos and Don'ts & Common Pitfalls

  • DO: Make your jobs idempotent. This means a job can be run multiple times without causing incorrect results or side effects (e.g., before processing a payment, check whether it has already been processed). This is crucial because jobs can be retried; a sketch follows this list.

  • DO: Keep jobs simple and focused on a single responsibility. Move complex business logic into dedicated service classes and call them from the job.

  • DO: Start with the database queue driver. It's simple to set up and often powerful enough. Only move to Redis when you observe performance bottlenecks.

  • DON'T: Try to access session data or the authenticated user (Auth::user()) directly inside a job's handle method. The job runs in a separate process and has no web request context.

  • DO: Pass the necessary data (like the user ID) into the job's constructor when you dispatch it. MyJob::dispatch($user->id);

  • DON'T: Dispatch thousands of jobs inside a foreach loop. This can overwhelm your queue connection.

  • DO: Use Bus::batch() for large bulk operations.

  • DON'T: Forget to restart your workers after deployment! Use php artisan queue:restart or php artisan horizon:terminate in your deployment script.
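
As a concrete sketch of the idempotency and ID-passing advice above (ProcessPayment, the processed_at column, and PaymentService are hypothetical):

```php
<?php

namespace App\Jobs;

use App\Models\Payment;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;

class ProcessPayment implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable;

    // Accept a plain ID: the worker has no session or Auth context.
    public function __construct(public int $paymentId) {}

    public function handle(): void
    {
        $payment = Payment::findOrFail($this->paymentId);

        // Idempotency guard: a retried job must never charge twice.
        if ($payment->processed_at !== null) {
            return;
        }

        // Keep the job thin; the business logic lives in a service.
        app(\App\Services\PaymentService::class)->charge($payment);

        $payment->update(['processed_at' => now()]);
    }
}
```

Dispatched from a controller, where the request context still exists: ProcessPayment::dispatch($payment->id);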
