
@Alex3917
Created September 1, 2020 01:47
How to configure Celery using Elastic Beanstalk with Amazon Linux 2
# In .ebextensions/01_celery.config
files:
  "/etc/systemd/system/celery.service":
    mode: "000644"
    owner: celery
    group: celery
    content: |
      [Unit]
      Description=Celery Service
      After=network.target

      [Service]
      # I saw some other tutorials suggesting using Type=simple, but that didn't work for me. Type=forking works
      # as long as you're using an instance with at least 2.0 Gigs of RAM, but on a t2.micro instance it was running out
      # of memory and crashing.
      Type=forking
      Restart=on-failure
      RestartSec=10
      User=celery
      Group=celery
      # You can have multiple EnvironmentFile= variables declared if you have files with variables.
      # The celery docs on daemonizing celery with systemd put their environment variables in a file called
      # /etc/conf.d/celery, but I'm choosing to instead set the celery variables as environment variables so that
      # celery can also access the necessary variables for interacting with Django.
      EnvironmentFile=/opt/elasticbeanstalk/deployment/env
      WorkingDirectory=/var/app/current
      ExecStart=/bin/sh -c '${CELERY_BIN} multi start worker \
          -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
          --logfile=${CELERYD_LOG_FILE} --loglevel=INFO --time-limit=300 --concurrency=2'
      ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait worker \
          --pidfile=${CELERYD_PID_FILE}'
      ExecReload=/bin/sh -c '${CELERY_BIN} multi restart worker \
          -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
          --logfile=${CELERYD_LOG_FILE} --loglevel=INFO --time-limit=300 --concurrency=2'

      [Install]
      WantedBy=multi-user.target

  "/etc/tmpfiles.d/celery.conf":
    mode: "000755"
    owner: celery
    group: celery
    content: |
      d /var/run/celery 0755 celery celery -
      d /var/log/celery 0755 celery celery -

container_commands:
  01_create_celery_log_file_directories:
    command: mkdir -p /var/log/celery /var/run/celery
  02_give_celery_user_ownership_of_directories:
    command: chown -R celery:celery /var/log/celery /var/run/celery
  03_change_mode_of_celery_directories:
    command: chmod -R 755 /var/log/celery /var/run/celery
  04_reload_settings:
    command: systemctl daemon-reload
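
The unit file above pulls CELERY_BIN, CELERY_APP, CELERYD_PID_FILE and CELERYD_LOG_FILE out of /opt/elasticbeanstalk/deployment/env, so those need to exist as Elastic Beanstalk environment properties. A rough sketch of one way to define them (the file name and all values are placeholders to adapt to your own app and venv path):

    # e.g. in .ebextensions/02_celery_env.config (hypothetical file name)
    option_settings:
      aws:elasticbeanstalk:application:environment:
        CELERY_BIN: "/var/app/venv/staging-LQM1lest/bin/celery"
        CELERY_APP: "mysite"
        CELERYD_PID_FILE: "/var/run/celery/%n.pid"
        CELERYD_LOG_FILE: "/var/log/celery/%n%I.log"

You can just as well set these through the EB console as environment properties; the unit file only cares that they end up in /opt/elasticbeanstalk/deployment/env.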
# In .platform/hooks/postdeploy/01_start_celery.sh
#!/bin/bash
(cd /var/app/current; systemctl stop celery)
(cd /var/app/current; systemctl start celery)
(cd /var/app/current; systemctl enable celery.service)
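
One extra step that's easy to miss: Elastic Beanstalk only runs platform hooks that are executable, so the script needs the execute bit set before deploying, e.g.:

    chmod +x .platform/hooks/postdeploy/01_start_celery.sh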
@edchelstephens

I was going by these issues:

celery/celery#6304 celery/celery#6285

To see if Celery is running, first you need to activate the virtualenv on your instance after ssh'ing in:

cd /var/app/current && . /var/app/venv/staging-LQM1lest/bin/activate

And then you can just do: celery inspect active

If you put your log files in the same place as the script, you can check all the log files in that folder by doing: tail /var/log/celery/*.log

edit: I think you also need to export the environment variable DJANGO_SETTINGS_MODULE to get this to work.
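
For example (the settings module name is a placeholder for your own project):

    export DJANGO_SETTINGS_MODULE=mysite.settings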

Hi @Alex3917, I followed the instructions and manually exported the Django settings module to the environment when I SSH in.

But when I run: celery inspect active I get the following error:

Traceback (most recent call last):
  File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
    yield
  File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/kombu/connection.py", line 433, in _ensure_connection
    return retry_over_time(
  File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
    return fun(*args, **kwargs)
  File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/kombu/connection.py", line 877, in _connection_factory
    self._connection = self._establish_connection()
  File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/kombu/connection.py", line 812, in _establish_connection
    conn = self.transport.establish_connection()
  File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
    conn.connect()
  File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/amqp/connection.py", line 323, in connect
    self.transport.connect()
  File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/amqp/transport.py", line 129, in connect
    self._connect(self.host, self.port, self.connect_timeout)
  File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/amqp/transport.py", line 184, in _connect
    self.sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/var/app/venv/staging-LQM1lest/bin/celery", line 8, in <module>
    sys.exit(main())
  File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/celery/__main__.py", line 15, in main
    sys.exit(_main())
  File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/celery/bin/celery.py", line 217, in main
    return celery(auto_envvar_prefix="CELERY")
  File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/click/core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/click/decorators.py", line 26, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/celery/bin/base.py", line 134, in caller
    return f(ctx, *args, **kwargs)
  File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/celery/bin/control.py", line 136, in inspect
    replies = inspect._request(action,
  File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/celery/app/control.py", line 106, in _request
    return self._prepare(self.app.control.broadcast(
  File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/celery/app/control.py", line 741, in broadcast
    return self.mailbox(conn)._broadcast(
  File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/kombu/pidbox.py", line 328, in _broadcast
    chan = channel or self.connection.default_channel
  File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/kombu/connection.py", line 895, in default_channel
    self._ensure_connection(**conn_opts)
  File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/kombu/connection.py", line 433, in _ensure_connection
    return retry_over_time(
  File "/usr/lib64/python3.8/contextlib.py", line 131, in __exit__
    self.gen.throw(type, value, traceback)
  File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
    raise ConnectionError(str(exc)) from exc
kombu.exceptions.OperationalError: [Errno 111] Connection refused

Do you know how I can fix this?
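
For what it's worth, the pyamqp frames in that traceback show the CLI trying Celery's default AMQP broker on localhost and being refused, which usually means the real broker URL isn't visible in the SSH shell. Assuming the broker URL is exposed as an environment property (the variable name below is an assumption), a quick check from the instance is:

    # assumption: the broker URL lives in an env var such as CELERY_BROKER_URL
    sudo grep -i broker /opt/elasticbeanstalk/deployment/env
    echo "$CELERY_BROKER_URL"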

@edchelstephens

Many thanks! Was really useful! Some updates from that point:

  • It's very important that the *.sh scripts used in hooks have LF line endings instead of CRLF. Run (after replacing <file_name.sh> with your filename):
sed -i 's/\r$//' <file_name.sh>
  • Another important thing is to make sure git uses LF. On Windows, core.autocrlf = true means LF gets converted to CRLF, so set it to false by running:
git config --global core.autocrlf false
  • For the ExecStart command, based on the latest Celery docs, I highly recommend using dynamic options: multi start worker -> multi start ${CELERYD_NODES}. Define CELERYD_NODES (e.g. worker1 worker2 ...) in the EB config attributes; it makes the workers easier to control (see the sketch after this list).

  • For celery beat I am not quite sure how to get it to work - I already used:

  • Versions that I am using:
    Django 3.2
    Celery 5.0.5
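
A sketch of what that ExecStart line could look like with CELERYD_NODES (following the pattern in the Celery daemonization docs; the variables come from the EB environment, and the node names are just examples):

    # EB environment property (example value): CELERYD_NODES="worker1 worker2"
    ExecStart=/bin/sh -c '${CELERY_BIN} multi start ${CELERYD_NODES} \
        -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
        --logfile=${CELERYD_LOG_FILE} --loglevel=INFO --time-limit=300 --concurrency=2'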

Hi @sorin-sabo, were you able to get this working? Can you share your configuration?

@ToddRKingston

@edchelstephens No I have not gotten beat to start on deployment. I can start it manually after SSH into the instance.

@Alex3917 sorry to bother you on this, but I tried to add a file to start beat after starting celery - and after restarting the instance I'm somehow getting the same error as typefox09 'celery is not a valid user name'. Should I add this to my 01_celery.config file?

commands:
  00_add_user_celery:
    test: test ! "id -u celery 2> /dev/null"
    command: useradd -d /opt/python/celery -g celery -u 1502 celery
    ignoreErrors: false

Is this a permissions issue somehow? Any help you can provide would be very much appreciated.

@ToddRKingston

ToddRKingston commented Aug 12, 2022

@Alex3917 I've added a 01_python.config file like you mentioned in your first response to appli-intramuros, however should the path change from /opt/python/celery to something else in AL2?

I'm getting 'Created group celery successfully', but then a 'Command 00_add_user_celery (useradd -d /opt/python/celery -g celery -u 1501 celery) failed' error.

When I SSH in and try to run the command 'useradd -d /opt/python/celery -g celery -u 1501 celery'
I get '-bash: /usr/sbin/useradd: Permission denied'

Thanks in advance.
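
On the manual test: useradd has to run as root, so from an SSH session as ec2-user it needs sudo, e.g.:

    sudo /usr/sbin/useradd -d /opt/python/celery -g celery -u 1501 celery

That explains the 'Permission denied' when running it by hand, though not necessarily the failure during deployment, since commands in .ebextensions already run as root.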

@ToddRKingston

ToddRKingston commented Aug 20, 2022

@edchelstephens I was able to get beat to start on deployment. I added another entry to .ebextensions/01_celery.config, similar to "/etc/systemd/system/celery.service": but called "/etc/systemd/system/celerybeat.service" instead. This is the entry (placed underneath the celery.service file):

 "/etc/systemd/system/celerybeat.service":
        mode: "000644"
        owner: celery
        group: celery
        content: |
            [Unit]
            Description=Celery Beat Service
            After=network.target

            [Service]
            Type=forking
            Restart=on-failure
            RestartSec=10
            User=celery
            Group=celery
            EnvironmentFile=/opt/elasticbeanstalk/deployment/env
            WorkingDirectory=/var/app/current
            ExecStart=/bin/sh -c '/var/app/venv/staging-LQM1lest/bin/celery beat -A video.celery  \
            --pidfile=/tmp/celerybeat.pid \
            --logfile=/var/log/celery/celerybeat.log \
            --loglevel=INFO -s /tmp/celerybeat-schedule'

            [Install]
            WantedBy=multi-user.target

After that, I added:

(cd /var/app/current; systemctl stop celerybeat)
(cd /var/app/current; systemctl start celerybeat)
(cd /var/app/current; systemctl enable celerybeat.service)

to the 01_start_celery.sh file, which already has the same lines for celery. Hope this helps.
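
A quick way to confirm both units actually came up after a deploy (log path as in the unit file above):

    systemctl status celery celerybeat
    tail /var/log/celery/celerybeat.log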

@MahirMahbub

MahirMahbub commented Oct 25, 2022

Take a look at this gist. I have successfully deployed a Django REST app with Celery and AWS SQS on Elastic Beanstalk with Amazon Linux 2:

https://gist.github.com/MahirMahbub/f9c226dbc0a01da22c8c539cf9c9bcc9
