Issue getting django celery worker started on elastic-beanstalk

Posted: 2020-01-14 20:50:03

Question:

I'm trying to get my celery worker running on Elastic Beanstalk. It works fine locally, but when I deploy to EB I get the error "Activity execution failed, because: /usr/bin/env: bash: No such file or directory". I'm fairly new to both celery and EB, so I haven't been able to find a solution yet.

I'm on a Windows machine, and I've seen other people hit this on Windows and fix it by converting their "celery_configuration.txt" file to Unix EOL. I'm using a celery-worker.sh instead, and I converted it to Unix EOL as well, but it still doesn't work.
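For reference, a couple of common ways to do that conversion from a shell (assuming dos2unix or GNU sed is available on your machine; the path is the script from this project):

dos2unix .ebextensions/celery-worker.sh
# or, without dos2unix, strip the carriage returns with GNU sed:
sed -i 's/\r$//' .ebextensions/celery-worker.sh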

.ebextensions/celery.config

packages:
  yum:
    libcurl-devel: []

container_commands:
    01_mkdir_for_log_and_pid:
        command: "mkdir -p /var/log/celery/ /var/run/celery/"
    02_celery_configure:
        command: "cp .ebextensions/celery-worker.sh /opt/elasticbeanstalk/hooks/appdeploy/post/ && chmod 744 /opt/elasticbeanstalk/hooks/appdeploy/post/celery-worker.sh"
        cwd: "/opt/python/ondeck/app"
    03_celery_run:
        command: "/opt/elasticbeanstalk/hooks/appdeploy/post/celery-worker.sh"

.ebextensions/celery-worker.sh

#!/usr/bin/env bash

# Get django environment variables, flattened into the comma-separated
# KEY=value list that supervisord's environment= setting expects
celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g'`
celeryenv=${celeryenv%?}  # strip the trailing comma left by tr

# Create celery configuration script
celeryconf="[program:celeryd-worker]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/celery worker -A backend -P solo --loglevel=INFO -n worker.%%h

directory=/opt/python/current/app/enq_web
user=nobody
numprocs=1
stdout_logfile=/var/log/celery/worker.log
stderr_logfile=/var/log/celery/worker.log
autostart=true
autorestart=true
startsecs=10

; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600

; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true

; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998

environment=$celeryenv
"

# Create the celery supervisord conf script
echo "$celeryconf" | tee /opt/python/etc/celery.conf

# Add configuration script to supervisord conf (if not there already)
if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
  then
  echo "[include]" | tee -a /opt/python/etc/supervisord.conf
  echo "files: celery.conf" | tee -a /opt/python/etc/supervisord.conf
fi

# Reread the supervisord config
/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf reread

# Update supervisord in cache without restarting all services
/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf update

# Start/Restart celeryd through supervisord
/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd-worker
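
Once the deploy finishes, a quick way to check whether supervisord actually started the worker (program name and log path as defined in the script above) is:

/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf status celeryd-worker
tail -n 50 /var/log/celery/worker.log   # worker output, per the stdout/stderr settings above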

I'm sure I'm missing something simple, but I've spent hours looking everywhere for a solution and at this point I'm out of ideas.


Answer 1:

The issue is with the shebang line (#!/usr/bin/env bash) in the celery-worker.sh file; try /bin/bash or /bin/sh instead. Where bash or sh lives depends heavily on the AMI your Beanstalk environment is using.
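If you want to verify this on the instance before changing the shebang, a quick check over eb ssh (standard commands, nothing project-specific) might look like the following; note that "/usr/bin/env: bash: No such file or directory" is also the classic symptom of a script with Windows line endings, because env is really being asked to find "bash\r":

which bash                          # usually /bin/bash
ls -l /usr/bin/env /bin/sh          # confirm env and sh actually exist
head -1 celery-worker.sh | od -c    # a trailing \r means the file still has Windows EOLs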

Comments:

It turned out it was indeed the Unix EOL conversion: when I added the new file, git converted it back to Windows EOL, and that was what caused the problem.
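For anyone hitting the same thing: one way to keep git on Windows from converting these files back to CRLF is to pin them to LF in .gitattributes (a minimal sketch; adjust the patterns to your repo layout):

# .gitattributes
# keep deployment scripts LF-only so a Windows checkout can't break the shebang
*.sh              text eol=lf
.ebextensions/*   text eol=lf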
