I have verified that the issue exists against the master branch of Celery, and I have included the output of celery -A proj report in the issue:
software -> celery:4.1.0 (latentcall) kombu:4.1.0 py:3.5.2
billiard:3.5.0.3 redis:2.10.6
platform -> system:Linux arch:64bit, ELF imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:redis results:disabled
CACHES: {
'default': { 'BACKEND': 'django_redis.cache.RedisCache',
'LOCATION': 'redis://localhost:6379/1',
'TIMEOUT': 3600}}
CELERY_TASK_COMPRESSION: 'gzip'
CELERY_TASK_IGNORE_RESULT: True
CELERY_ACCEPT_CONTENT: ['pickle', 'json', 'msgpack', 'yaml']
CELERY_BROKER_URL: 'redis://localhost:6379/0'
DEBUG: False
INSTALLED_APPS:
('django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'django.contrib.messages',
'django.contrib.admin',
'django.contrib.sitemaps',
'django.contrib.staticfiles',
'django.contrib.humanize',
'django.contrib.redirects',
'django.contrib.gis',
'django_extensions',
)
Steps to reproduce (on Ubuntu 16.04):
sudo service <your-celery-service-name> start
sudo service <your-celery-service-name> reload
Expected behavior: Celery should gracefully reload.
Actual behavior: Celery does not gracefully reload. It shuts down, but never restarts.
After issuing the reload request (and subsequent failure to start), the output of sudo journalctl -xe
doesn't seem to show anything nefarious:
Nov 15 04:33:04 ip-172-31-44-219 sudo[12972]: ubuntu : TTY=pts/0 ; PWD=/home/ubuntu ; USER=root ; COMMAND=/usr/sbin/service myproj-celery reload
Nov 15 04:33:04 ip-172-31-44-219 sudo[12972]: pam_unix(sudo:session): session opened for user root by ubuntu(uid=0)
Nov 15 04:33:04 ip-172-31-44-219 systemd[1]: Reloading myproj celery worker.
-- Subject: Unit myproj-celery.service has begun reloading its configuration
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit myproj-celery.service has begun reloading its configuration
Nov 15 04:33:07 ip-172-31-44-219 sh[12979]: celery multi v4.1.0 (latentcall)
Nov 15 04:33:07 ip-172-31-44-219 sh[12979]: > Stopping nodes...
Nov 15 04:33:07 ip-172-31-44-219 sh[12979]: > worker1@ip-172-31-44-219: TERM -> 12962
Nov 15 04:33:07 ip-172-31-44-219 sh[12979]: > Waiting for 1 node -> 12962.....
Nov 15 04:33:07 ip-172-31-44-219 sh[12979]: > worker1@ip-172-31-44-219: OK
Nov 15 04:33:07 ip-172-31-44-219 sh[12979]: > Restarting node worker1@ip-172-31-44-219: OK
Nov 15 04:33:07 ip-172-31-44-219 sh[12979]: > Waiting for 1 node -> None...
Nov 15 04:33:07 ip-172-31-44-219 sh[12992]: celery multi v4.1.0 (latentcall)
Nov 15 04:33:07 ip-172-31-44-219 sh[12992]: > worker1@ip-172-31-44-219: DOWN
Nov 15 04:33:07 ip-172-31-44-219 systemd[1]: Reloaded myproj celery worker.
-- Subject: Unit myproj-celery.service has finished reloading its configuration
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit myproj-celery.service has finished reloading its configuration
--
-- The result is done.
Nov 15 04:33:07 ip-172-31-44-219 sudo[12972]: pam_unix(sudo:session): session closed for user root
Perhaps the problem is that CELERYD_PID_FILE="/var/run/celery/%N.pid" is unknown to systemd? Meaning, we cannot tell systemd (via PIDFile=) what the actual location of the PID file is, because it is generated by celery itself, no?
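For context, that variable comes from Celery's generic init-script configuration, typically something like the sketch below (node name and paths are assumptions; %N is expanded per node by celery multi, not by systemd):
# /etc/default/celeryd -- generic init-script settings (sketch, values assumed)
CELERYD_NODES="worker1"
CELERYD_PID_FILE="/var/run/celery/%N.pid"   # expanded per node, e.g. /var/run/celery/worker1.pid
CELERYD_LOG_FILE="/var/log/celery/%N.log"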
Could you post your systemd unit file and your celery configuration?
You can tell systemd where to store the PID file, but you have to escape the percent sign in the unit file:
celery multi restart w1 -A proj --pidfile=/tmp/celery_%%n.pid
Notice the double percent: systemd consumes one %, so celery multi receives a literal %n and expands it to the node name.
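A minimal sketch of what that looks like in the unit file, assuming a single node named worker1 driven by celery multi (the unit name matches the journal output above, but the paths, user, and working directory are assumptions, not the reporter's actual setup):
# /etc/systemd/system/myproj-celery.service (sketch)
[Unit]
Description=myproj celery worker
After=network.target

[Service]
Type=forking
User=celery
WorkingDirectory=/path/to/proj
# %% keeps systemd from expanding its own specifier, so celery multi receives a literal %n
ExecStart=/usr/local/bin/celery multi start worker1 -A proj --pidfile=/tmp/celery_%%n.pid --logfile=/var/log/celery/%%n.log
ExecReload=/usr/local/bin/celery multi restart worker1 -A proj --pidfile=/tmp/celery_%%n.pid --logfile=/var/log/celery/%%n.log
ExecStop=/usr/local/bin/celery multi stopwait worker1 --pidfile=/tmp/celery_%%n.pid

[Install]
WantedBy=multi-user.target
With the escaping in place, the start, reload, and stop commands all resolve to the same pid file, so celery multi restart can find the node it is supposed to restart.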
We had the same problem. I resolved it by editing the systemd config to add:
Restart=always
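That directive goes in the [Service] section of the unit file (unit name assumed as above):
# /etc/systemd/system/myproj-celery.service
[Service]
Restart=always
RestartSec=10   # optional: wait before restarting
With Restart=always, systemd restarts the service whenever the worker process exits, which works around the worker staying down after a failed reload.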