Batch Deployment and Environment Monitoring with Fabric

Posted by 扎心了老铁



This article describes how to use Fabric to deploy a release to a batch of machines.

For small applications this saves you from building a dedicated deployment platform, or from writing clumsy scripts around Linux expect.

Prerequisites:

1. The machine running the Fabric script can reach TCP port 22 on every target machine.

2. You can log in over SSH, i.e. you have a valid account and password.
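
To verify both prerequisites before writing any real tasks, a one-task fabfile like the sketch below can be used (the host, user and password here are placeholders, not values from this article's environment):

# -*- coding:utf-8 -*-
# Minimal connectivity check (sketch; host/user/password are placeholders)
from fabric.api import env, run, task

env.user = 'your_user'
env.hosts = ['10.0.0.1']
env.password = 'your_password'

@task
def ping():
    # Succeeds only if port 22 is reachable and the credentials are valid
    run('hostname')

Run it with `fab ping`; if Fabric prints the remote hostname for every host, both prerequisites are satisfied.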

 

1. Batch deployment

Code first, then the details. The script is as follows:

# -*- coding:utf-8 -*-
from fabric.colors import *
from fabric.api import *
from contextlib import contextmanager as _contextmanager

# Built-in env settings that Fabric picks up automatically
env.user = 'data_monitor'
env.hosts = ['10.93.21.21', '10.93.18.34', '10.93.18.35']
env.password = '[email protected]'
# Custom env attributes used by the tasks below
env.activate = 'source /home/data_monitor/.bash_profile'
env.directory = '/home/data_monitor/dmonitor/dmonitor'


@_contextmanager
def virtualenv():
    with cd(env.directory):
        with prefix(env.activate):
            yield


@task
def update():
    with virtualenv():
        run("git pull origin master")


@task
def start():
    with virtualenv():
        # Wrap each command in $( nohup ... & ) so the remote shell detaches it and
        # Fabric does not hang waiting on the background process; the trailing
        # "sleep 1" gives each process a moment to start before the SSH channel closes.
        run("$(nohup gunicorn --worker-class=gevent dmonitor.wsgi:application -b 0.0.0.0:8009 -w 4 &> /dev/null &) && sleep 1", warn_only=True)
        run("$(nohup python manage.py celery worker -Q high -c 30 &> /dev/null &) && sleep 1", warn_only=True)
        run("$(nohup python manage.py celery worker -Q mid -c 30 &> /dev/null &) && sleep 1", warn_only=True)
        run("$(nohup python manage.py celery worker -Q low -c 30 &> /dev/null &) && sleep 1", warn_only=True)


@task
def stop():
    with virtualenv():
        # Kill the gunicorn master/workers and all celery workers, if any are running
        run("ps -ef | grep gunicorn | grep -v grep | awk '{print $2}' | xargs kill -9", warn_only=True)
        run("ps -ef | grep celery | grep worker | grep -v grep | awk '{print $2}' | xargs kill -9", warn_only=True)


@task
def deploy():
    update()
    stop()
    start()
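
Save the script above as fabfile.py and the tasks can be driven from the command line. The host list comes from env.hosts, so no extra arguments are needed; the command names simply match the @task functions:

# List the available tasks
fab -l

# Full release on every host in env.hosts: git pull, kill old processes, start new ones
fab deploy

# Or run a single step, optionally restricting it to one host
fab update
fab -H 10.93.21.21 stop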

 

2. Environment monitoring

Production environments are usually not monitored with Fabric, of course, but development and test environments are mostly virtual machines that nobody else watches for you.

So writing a small monitoring program of your own to keep an eye on disk, CPU and memory, or on a few processes (redis/mysql...), is quite useful.

The code first.

This file contains the various monitoring tasks:

import logging

from fabric.api import *
from fabric.context_managers import *
from fabric.colors import red, yellow, green
from common.redis import Redis
from common.config import redis as redis_config

logger = logging.getLogger(__name__)
redis = Redis(redis_config.get('ip'), redis_config.get('port'))


# hard_disk_monitor, item_name=hard_disk
@task
def hard_disk_monitor(item_group, item_name, threshold):
    with settings(hide('warnings', 'running', 'stdout', 'stderr'), parallel=True, warn_only=True):
        host = run('hostname -i')
        hard_disk = run("df -hl | grep /dev/vda3 | awk -F ' ' '{print $5}'")
        print green(host + ':' + hard_disk)
        # Alert when the used percentage of /dev/vda3 exceeds the threshold
        if int(hard_disk.strip('%')) > int(threshold):
            redis("lpush %s %s" % (':'.join(['machine', item_group, item_name]), host))


# memory_monitor, item_name=memory
@task
def memory_monitor(item_group, item_name, threshold):
    with settings(hide('warnings', 'running', 'stdout', 'stderr'), parallel=True, warn_only=True):
        host = run('hostname -i')
        memory = run("cat /proc/meminfo | grep MemFree | awk -F ' ' '{print $2}'")
        print yellow(host + ':' + memory)
        # Alert when free memory (in kB) drops below the threshold
        if int(memory.strip()) < int(threshold):
            redis("lpush %s %s" % (':'.join(['machine', item_group, item_name]), host))


# base_services_monitor, item_name != hard_disk and item_name != memory
@task
def base_services_monitor(item_group, item_name, threshold):
    with settings(hide('warnings', 'running', 'stdout', 'stderr'), parallel=True, warn_only=True):
        host = run('hostname -i')
        count = run("ps -ef | grep %s | grep -v grep | wc -l" % item_name)
        print red(host + ':' + count)
        # Alert when the process count differs from the expected value
        if int(count.strip()) != int(threshold):
            redis("hset %s %s %s" % (':'.join(['machine', item_group, item_name]), host, count))
            redis('incr %s' % ':'.join(['machine', item_group, item_name, host]))
            redis('expire %s 1800' % ':'.join(['machine', item_group, item_name, host]))


# restart_services_monitor, item_name = tomcat-7.0.57-mis or item_name = tomcat-httpapi
@task
def restart_services_monitor(item_start):
    with settings(hide('warnings', 'running', 'stdout', 'stderr'), parallel=True, warn_only=True):
        host = run('hostname -i')
        # item_start is the full start command of the service to restart
        run(item_start)
        print green(host + ':' + item_start)
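
Assuming the file above is saved as monitors.py, a single check can also be fired ad hoc from the command line; in Fabric 1.x positional task arguments are passed after a colon. The hosts, group name and thresholds below are only examples:

# Disk check on two hosts: alert if /dev/vda3 usage is above 80%
fab -f monitors.py -H 10.93.21.21,10.93.18.34 hard_disk_monitor:base,hard_disk,80

# Process check: alert if the number of redis-server processes is not exactly 1
fab -f monitors.py -H 10.93.21.21 base_services_monitor:base,redis-server,1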

This file executes the tasks:

# -*- coding:utf-8 -*-
import json

from fabric.api import *
from fabric.context_managers import *

import monitors

# Excerpt from a scheduler method: item_group, item_name, item_threshold,
# item_param and self come from the surrounding class (not shown here).
execute(monitors.hard_disk_monitor, item_group, item_name, item_threshold,
        hosts=json.loads(item_param.get('item_hosts')))
# Read back the alert list that the task pushed into redis
hosts = self.redis('lrange %s 0 -1' % ':'.join(['machine', item_group, item_name]))
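
Since the snippet above is only an excerpt, here is a rough sketch of what a standalone runner could look like under those assumptions; the host list, item names and thresholds are hypothetical, not the original configuration:

# -*- coding:utf-8 -*-
# Standalone runner sketch: periodically executes the monitor tasks against a
# fixed host list. All hosts, item names and thresholds here are illustrative.
import time

from fabric.api import execute

import monitors

HOSTS = ['10.93.21.21', '10.93.18.34', '10.93.18.35']


def run_checks():
    # Disk usage: alert when /dev/vda3 is more than 80% full
    execute(monitors.hard_disk_monitor, 'base', 'hard_disk', 80, hosts=HOSTS)
    # Free memory: alert when MemFree drops below ~1 GB (value is in kB)
    execute(monitors.memory_monitor, 'base', 'memory', 1024 * 1024, hosts=HOSTS)
    # Process count: alert when the number of redis-server processes != 1
    execute(monitors.base_services_monitor, 'base', 'redis-server', 1, hosts=HOSTS)


if __name__ == '__main__':
    while True:
        run_checks()
        time.sleep(300)  # repeat the whole round every 5 minutes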

 
