How to deploy Django Channels 2.x on AWS Elastic Beanstalk?

Posted: 2019-07-19 08:07:34

Problem description:

This tutorial covers deployment for Channels 1.x; however, it does not work for Channels 2.x. The part that fails is the daemonization script, shown below:

files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_daemon.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
  #!/usr/bin/env bash

  # Get django environment variables
  djangoenv=`cat /opt/python/current/env 
  | tr '\n' ',' | sed 's/%/%%/g' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' 
  | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g'`
  djangoenv=${djangoenv%?}  # strip the trailing comma

  # Create daemon configuration script
  daemonconf="[program:daphne]
  ; Set full path to channels program if using virtualenv
  command=/opt/python/run/venv/bin/daphne -b 0.0.0.0 -p 5000 <your_project>.asgi:channel_layer
  directory=/opt/python/current/app
  user=ec2-user
  numprocs=1
  stdout_logfile=/var/log/stdout_daphne.log
  stderr_logfile=/var/log/stderr_daphne.log
  autostart=true
  autorestart=true
  startsecs=10

  ; Need to wait for currently executing tasks to finish at shutdown.
  ; Increase this if you have very long running tasks.
  stopwaitsecs = 600

  ; When resorting to send SIGKILL to the program to terminate it
  ; send SIGKILL to its whole process group instead,
  ; taking care of its children as well.
  killasgroup=true

  ; if rabbitmq is supervised, set its priority higher
  ; so it starts first
  priority=998

  environment=$djangoenv

  [program:worker]
  ; Set full path to program if using virtualenv
  command=/opt/python/run/venv/bin/python manage.py runworker
  directory=/opt/python/current/app
  user=ec2-user
  numprocs=1
  stdout_logfile=/var/log/stdout_worker.log
  stderr_logfile=/var/log/stderr_worker.log
  autostart=true
  autorestart=true
  startsecs=10

  ; Need to wait for currently executing tasks to finish at shutdown.
  ; Increase this if you have very long running tasks.
  stopwaitsecs = 600

  ; When resorting to send SIGKILL to the program to terminate it
  ; send SIGKILL to its whole process group instead,
  ; taking care of its children as well.
  killasgroup=true

  ; if rabbitmq is supervised, set its priority higher
  ; so it starts first
  priority=998

  environment=$djangoenv"

  # Create the supervisord conf script
  echo "$daemonconf" | sudo tee /opt/python/etc/daemon.conf

  # Add configuration script to supervisord conf (if not there already)
  if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
      then
      echo "[include]" | sudo tee -a /opt/python/etc/supervisord.conf
      echo "files: daemon.conf" | sudo tee -a /opt/python/etc/supervisord.conf
  fi

  # Reread the supervisord config
  sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf reread

  # Update supervisord in cache without restarting all services
  sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf update 

  # Start/Restart processes through supervisord
  sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf restart daphne
  sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf restart worker

After deployment, the AWS logs contain two errors: daphne: No such process and worker: No such process.

How should this script be changed so that it also works with Channels 2.x?

Thanks
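For reference, one likely culprit is the Daphne command line itself: in Channels 2.x the entry point changed from a channel layer to the ASGI application object defined in the project's asgi.py. A sketch of the adjusted supervisor program, assuming the same paths as in the script above (replace <your_project> with the actual project name):

```ini
[program:daphne]
; Channels 2.x: Daphne serves the ASGI application object,
; not a channel layer as in 1.x
command=/opt/python/run/venv/bin/daphne -b 0.0.0.0 -p 5000 <your_project>.asgi:application
directory=/opt/python/current/app
```

Note also that in Channels 2.x, `runworker` only processes explicitly named background channels (`python manage.py runworker <channel_name>`); if the app has no background consumers, the `[program:worker]` section can be dropped entirely.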

Comments:

Channels 2 does not need daphne

Answer 1:

I ran into the same error. In my case, the supervisor process that runs these additional scripts never picked up the Daphne process because of this line:

if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf

This checks whether `[include]` already exists in the supervisord.conf file, and only adds the daemon process when no `[include]` is present.
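The `-F`/`-x` flags make this an exact, whole-line match, so any pre-existing `[include]` header satisfies the check and the daemon.conf registration is skipped. A quick illustration against a throwaway file:

```shell
# Demonstrate how `grep -Fxq "[include]"` behaves:
# -F fixed string, -x whole line must match, -q quiet (exit status only).
conf=$(mktemp)
printf '[include]\nfiles: celery.conf\n' > "$conf"

if grep -Fxq "[include]" "$conf"; then
    echo "header found - daemon.conf would NOT be added"
fi

rm -f "$conf"
```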

In my case, I had

[include]
celery.conf

in the supervisord file, which was preventing this Daphne script from adding daemon.conf.

There are a few things you can do:

    If you have another script that creates a .conf file with the same include logic, merge them into one

    Rewrite the include logic to check specifically for daemon.conf

    SSH into your EC2 instance and add daemon.conf to supervisord.conf manually
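The second option can be sketched as a small helper, written here as a hypothetical `register_daemon_conf` function (the name is mine, not from the question): it checks for daemon.conf itself, and extends the existing `files:` line under `[include]` instead of appending a second `[include]` header. The sketch assumes GNU sed and drops the original script's `sudo`, since the EB post-deploy hook already runs as root:

```shell
# Register daemon.conf with supervisord, tolerating a pre-existing
# [include] section (e.g. one added earlier for celery.conf).
register_daemon_conf() {
    conf="$1"
    if grep -Fq "daemon.conf" "$conf"; then
        return 0  # already registered, nothing to do
    fi
    if grep -Fxq "[include]" "$conf"; then
        # [include] exists: append daemon.conf to its files: line
        # rather than writing a duplicate [include] header.
        sed -i '/^files/s/$/ daemon.conf/' "$conf"
    else
        printf '[include]\nfiles: daemon.conf\n' >> "$conf"
    fi
}

# In the EB hook you would call it on the real config, e.g.:
# register_daemon_conf /opt/python/etc/supervisord.conf
```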

Comments:
