Start the worker:
celery -A tasks worker --loglevel=info -c 2 --pidfile=celery.pid
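This assumes a tasks.py module in the current directory. A minimal sketch, assuming a local Redis broker and a hypothetical slow_task that sleeps long enough to be interrupted:

# tasks.py -- minimal sketch; the Redis URL and the slow_task name are assumptions
import time

from celery import Celery

app = Celery('tasks', broker='redis://localhost:6379/0')

@app.task
def slow_task(n):
    # Sleep long enough that the worker can be killed while the task is running.
    print('starting task %s' % n)
    time.sleep(30)
    print('finished task %s' % n)
    return n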
In another terminal, send 6 tasks:
python script.py
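script.py just queues the six tasks; a sketch, assuming the hypothetical slow_task from the tasks.py sketch above:

# script.py -- minimal sketch
from tasks import slow_task

# Queue six tasks; with -c 2 only the first two start immediately.
for n in range(1, 7):
    slow_task.delay(n)
    print('sent task %s' % n)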
You should see tasks 1 and 2 start. Before they complete, kill the worker gracefully:
# Send graceful shutdown
kill -TERM `cat celery.pid`
# Send a second TERM to complete the shutdown
kill -TERM `cat celery.pid`
Or forcefully:
ps auxww | grep celery | grep -v grep | awk '{print $2}' | xargs kill -SIGKILL
Now restart the worker:
celery -A tasks worker --loglevel=info -c 2 --pidfile=celery.pid
With the RabbitMQ backend, the worker will start up and begin processing tasks 1 and 2.
With the current Redis backend, the worker will begin processing tasks 3 and 4, then 5 and 6,
and finally 1 and 2. Running restore_all_unacknowledged_messages on startup is what lets
tasks 1 and 2 be picked up as soon as 5 and 6 finish; otherwise they would not run until
after the default visibility timeout (1 hour).
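restore_all_unacknowledged_messages is not part of Celery itself; a hedged sketch of such a helper, built on kombu's Redis transport (which parks unacked messages in a Redis hash and exposes QoS.restore_visible() to push them back onto the queue), might look like this:

# Sketch of the restore helper; the exact implementation here is an assumption.
from tasks import app

def restore_all_unacknowledged_messages():
    # A visibility timeout of 0 makes every unacked message eligible for
    # restoration right now instead of after the default one-hour timeout.
    conn = app.connection(transport_options={'visibility_timeout': 0})
    qos = conn.channel().qos
    qos.restore_visible()
    print('unacknowledged messages restored')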
With the updated Redis backend, the worker will begin processing tasks 1 and 2 as desired.
This still requires restore_all_unacknowledged_messages so that the messages are restored
immediately rather than after the visibility timeout. That is probably not a good idea if
multiple workers consume from the same queue and are not always restarted together, since a
restarting worker would re-queue messages that another worker is still processing.
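For a single-worker setup, one way to run the restore automatically on startup is Celery's worker_ready signal; a sketch, assuming the restore_all_unacknowledged_messages helper above and placed in tasks.py so the worker picks it up:

from celery.signals import worker_ready

@worker_ready.connect
def restore_on_start(sender=None, **kwargs):
    # Re-queue anything a previous worker left unacknowledged as soon as
    # this worker is ready to consume.
    restore_all_unacknowledged_messages()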
What's newredis? Can't see any info about that anywhere.