Hortonworks (HDP): disabling unneeded components (services)

Posted by ljbguanli


The components (services) that Hortonworks (HDP) starts at boot are configured in a makefile (.mf), located at:

/usr/lib/hue/tools/start_scripts/start_deps.mf

This is the only file we need to modify, so before touching it, it is best to make a backup first:

cp /usr/lib/hue/tools/start_scripts/start_deps.mf /usr/lib/hue/tools/start_scripts/start_deps.mf.bak
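As a quick sanity check, the backup can be verified byte-for-byte with cmp. The sketch below uses a throwaway temp file for illustration; on a real sandbox, substitute the start_deps.mf path above for `$src`:

```shell
# Illustration only: $src stands in for
# /usr/lib/hue/tools/start_scripts/start_deps.mf on a real sandbox.
src=$(mktemp)
printf 'Startup: HDFS YARN\n' > "$src"
cp "$src" "$src.bak"                               # make the backup
cmp -s "$src" "$src.bak" && echo "backup verified"
```

`cmp -s` exits 0 only when the two files are identical, so the message prints only if the copy succeeded in full.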

To stop a component (service) from starting at boot, we only need to find the line that configures the Startup target. It usually looks like this:

Startup: HDFS YARN Zookeeper Hive_Metastore WebHCat Oozie Falcon Knox

Then delete the components (services) we do not need. For example, if we only need Hive, we can comment out the existing line, copy it, and remove Zookeeper, Oozie, Falcon, and Knox:

#Startup: HDFS YARN Zookeeper Hive_Metastore WebHCat Oozie Falcon Knox
Startup: HDFS YARN Hive_Metastore WebHCat
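The comment-and-replace edit can also be scripted with GNU sed. The sketch below works on a throwaway sample file (an assumption for illustration; on a real sandbox, point it at start_deps.mf instead):

```shell
mf=$(mktemp)
printf 'Startup: HDFS YARN Zookeeper Hive_Metastore WebHCat Oozie Falcon Knox\n' > "$mf"
sed -i 's/^Startup:/#&/' "$mf"    # comment out the existing Startup line
printf 'Startup: HDFS YARN Hive_Metastore WebHCat\n' >> "$mf"    # append the trimmed copy
cat "$mf"
```

The `&` in the replacement re-inserts the matched text, so the original line survives as a comment and can be restored later.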

Additionally, if Ambari is enabled, ganglia and nagios will be started along with it, because Ambari depends on both. If your machine has limited resources and you want to manage the cluster with Ambari but turn monitoring off, you can also remove ganglia and nagios from boot startup. The approach is similar to the one above: find

Ambari: ambari_server ambari_agent ganglia nagios

comment out the existing line, copy it, and remove ganglia and nagios:

#Ambari: ambari_server ambari_agent ganglia nagios
Ambari: ambari_server ambari_agent

Then, also find:

ambari_server: ganglia nagios
    $(call colorized,\
        Ambari server, \
        ambari-server start,\
        sleep 5, )

Comment out the first line, copy it, and remove ganglia and nagios:

#ambari_server: ganglia nagios
ambari_server:
    $(call colorized,\
        Ambari server, \
        ambari-server start,\
        sleep 5, )
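The same dependency trimming can be scripted as a single GNU sed expression that strips the trailing "ganglia nagios" pair from every line it ends. A sketch on a sample file (this variant edits in place rather than keeping a commented copy of the original line):

```shell
mf=$(mktemp)
printf 'Ambari: ambari_server ambari_agent ganglia nagios\nambari_server: ganglia nagios\n' > "$mf"
sed -i 's/ ganglia nagios$//' "$mf"    # drop the monitoring pair from dependency lists
cat "$mf"
```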

Below is a modified start_deps.mf that starts only Hive, for reference:


LOG=/var/log/startup_script.log

NO_COLOR=\x1b[0m
OK_COLOR=\x1b[32;01m
ERROR_COLOR=\x1b[31;01m
WARN_COLOR=\x1b[33;01m

OK_STRING=[$(OK_COLOR)  OK  $(NO_COLOR)]
ERROR_STRING=[$(ERROR_COLOR)ERRORS$(NO_COLOR)]
WARN_STRING=[$(WARN_COLOR)WARNINGS$(NO_COLOR)]

ECHO=echo -e
ECHO_ERR=printf 'Starting%-50s$(ERROR_STRING)\n' "$1"
ECHO_WARN=printf 'Starting%-50s$(WARN_STRING)\n' "$1"
ECHO_OK=printf 'Starting%-50s$(OK_STRING)\n' "$1"
CAT=cat

define colorized
@$2 1>$(LOG) 2> "temp $1.log" || touch "temp $1.errors";
@$3;
@if test -e "temp $1.errors"; then ($(ECHO_ERR) | tee -a $(LOG)) && ($(CAT) "temp $1.log" $4 | tee -a $(LOG)); elif test -s "temp $1.log"; then ($(ECHO_WARN) && $(CAT) "temp $1.log") | tee -a $(LOG); else $(ECHO_OK) | tee -a $(LOG); fi;
@$(RM) -f "temp $1.errors" "temp $1.log";
endef

all: Startup Ambari Others

###Startup: HDFS YARN Zookeeper Hive_Metastore WebHCat Oozie Falcon Knox
Startup: HDFS YARN Hive_Metastore WebHCat
    @echo "`date`:\tStartup" >> $(LOG)
###Ambari: ambari_server ambari_agent ganglia nagios
Ambari: ambari_server ambari_agent
Others: HBase Storm

HDFS: namenode secondary_namenode datanode nfsportpap hdfsnfs
YARN: resourcemanager yarnhistoryserver mapredhistoryserver nodemanagers
HBase: hbase_master hbase_regionservers hbase_stargate hbase_thrift
Zookeeper: zookeeper
Hive_Metastore: mysql hive hive2
Storm: nimbus supervisor stormui stormdrpc stormlogview stormrest
Falcon: falcon
Knox: knox-gateway knox-ldap
WebHCat: webhcat
Oozie: oozie

postgresql:
    $(call colorized,        PostgreSQL,         @/etc/init.d/postgresql start,        sleep 10,)

# ==== HDFS ====

namenode: postgresql
    $(call colorized,        name node,         su -l hdfs -c "export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode",        sleep 5,        /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log)

datanode: postgresql
    $(call colorized,        data node,         su -l hdfs -c  "export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode",        sleep 5,        /var/log/hadoop/hdfs/hadoop-hdfs-datanode-*.log)
    @su - hdfs -c "hadoop dfsadmin -safemode leave"


secondary_namenode: postgresql
    $(call colorized,        secondary name node,         su -l hdfs -c  "export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start secondarynamenode",        sleep 5,        /var/log/hadoop/hdfs/hadoop-hdfs-secondarynamenode-*.log)

nfsportpap: namenode datanode
    $(call colorized,        NFS portmap,         export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start portmap,        sleep 5,        /var/log/hadoop/root/hadoop-root-portmap-sandbox.hortonworks.com.log)

hdfsnfs: namenode datanode
    $(call colorized,        Hdfs nfs,         export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start nfs3,        sleep 5,        /var/log/hadoop/root/hadoop-root-nfs3-sandbox.hortonworks.com.log)


# ==== YARN ====
resourcemanager: postgresql HDFS
    $(call colorized,        resource manager,         su - yarn -c 'export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop-yarn/sbin/yarn-daemon.sh --config /etc/hadoop/conf start resourcemanager',        sleep 25)


yarnhistoryserver: postgresql HDFS
    $(call colorized,        yarn history server,         su - yarn -c 'export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop-yarn/sbin/yarn-daemon.sh --config /etc/hadoop/conf start historyserver',        sleep 5)

mapredhistoryserver: postgresql HDFS
    $(call colorized,        mapred history server,         su - mapred -c 'export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop-mapreduce/sbin/mr-jobhistory-daemon.sh --config /etc/hadoop/conf start historyserver',        sleep 5)


nodemanagers: postgresql HDFS
    $(call colorized,        node manager,         su - yarn -c 'export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop-yarn/sbin/yarn-daemon.sh --config /etc/hadoop/conf start nodemanager',        sleep 5)


# ==== HBase ====

hbase_master: postgresql zookeeper
    $(call colorized,        hbase master,         su - hbase -c "/usr/lib/hbase/bin/hbase-daemon.sh --config /etc/hbase/conf start master",        sleep 25,        /var/log/hbase/hbase-hbase-master-*.log)

hbase_stargate: postgresql hbase_master
    $(call colorized,        hbase stargate,         su -l hbase -c "/usr/lib/hbase/bin/hbase-daemon.sh start rest -p 60080",        true,        /var/log/hbase/hbase-hbase-rest-*.log)

hbase_thrift: postgresql hbase_master
    $(call colorized,        hbase thrift,         su -l hbase -c "/usr/lib/hbase/bin/hbase-daemon.sh start thrift",        true,        /var/log/hbase/hbase-hbase-rest-*.log)

hbase_regionservers: postgresql hbase_master
    $(call colorized,        hbase region server,         su -l hbase -c "/usr/lib/hbase/bin/hbase-daemon.sh --config /etc/hbase/conf start regionserver",        sleep 5,        /var/log/hbase/hbase-hbase-regionserver-*.log)

# ==== Hive ====

mysql:
    $(call colorized,        mysql,         /etc/init.d/mysqld start,        true)

hive: HDFS postgresql mysql
    $(call colorized,        hive server,         su - hive -c 'env HADOOP_HOME=/usr JAVA_HOME=/usr/jdk64/jdk1.7.0_45 /tmp/start_metastore_script /var/log/hive/hive.out /var/log/hive/hive.log /var/run/hive/hive.pid /etc/hive/conf.server', true,        /var/log/hive/hive.log)

hive2: HDFS hive
    $(call colorized,        Hiveserver2,         su - hive -c 'env JAVA_HOME=/usr/jdk64/jdk1.7.0_45 /tmp/start_hiveserver2_script /var/log/hive/hive-server2.out /var/log/hive/hive-server2.log /var/run/hive/hive-server.pid /etc/hive/conf.server',true,        /var/log/hive/hive-server2.log)

# ==== Storm ====

nimbus: Zookeeper YARN
    $(call colorized,        Storm nimbus,                 su - storm -c '/usr/bin/storm nimbus > /var/log/storm/nimbus.log &'; sleep 10; su - storm -c 'pgrep -f "^java.+backtype.storm.daemon.nimbus$$" && pgrep -f "^java.+backtype.storm.daemon.nimbus$$" > /var/run/storm/nimbus.pid',true)

supervisor: Zookeeper stormui YARN
    $(call colorized,        Storm supervisor,         su - storm -c '/usr/bin/storm supervisor > /var/log/storm/supervisor.log &'; sleep 10; su - storm -c 'pgrep -f "^java.+backtype.storm.daemon.supervisor$$" && pgrep -f "^java.+backtype.storm.daemon.supervisor$$" > /var/run/storm/supervisor.pid',true)

stormui: Zookeeper nimbus stormlogview YARN
    $(call colorized,        Storm ui,         su - storm -c '/usr/bin/storm ui > /var/log/storm/ui.log &'; sleep 10; su - storm -c 'pgrep -f "^java.+backtype.storm.ui.core$$" && pgrep -f "^java.+backtype.storm.ui.core$$" > /var/run/storm/ui.pid',true)

stormdrpc: Zookeeper nimbus YARN
    $(call colorized,        Storm DRPC,         su - storm -c '/usr/bin/storm drpc > /var/log/storm/drpc.log &'; sleep 10; su - storm -c 'pgrep -f "^java.+backtype.storm.daemon.drpc$$" && pgrep -f "^java.+backtype.storm.daemon.drpc$$" > /var/run/storm/drpc.pid',true)

stormlogview: Zookeeper stormdrpc YARN
    $(call colorized,        Storm Logview,                 su - storm -c '/usr/bin/storm logviewer > /var/log/storm/logviewer.log &'; sleep 10; su - storm -c 'pgrep -f "^java.+backtype.storm.daemon.logviewer$$" && pgrep -f "^java.+backtype.storm.daemon.logviewer$$" > /var/run/storm/logviewer.pid',true)

stormrest: supervisor YARN
    $(call colorized,               Storm Rest server,

# ==== Single services ====

zookeeper: namenode
    $(call colorized,        zookeeper nodes,         su - zookeeper -c "source /etc/zookeeper/conf/zookeeper-env.sh ; env ZOOCFGDIR=/etc/zookeeper/conf ZOOCFG=zoo.cfg /usr/lib/zookeeper/bin/zkServer.sh start; sleep 10",        true)



webhcat: hive HDFS
    $(call colorized,        webhcat server,         su -l hcat -c "env HADOOP_HOME=/usr /usr/lib/hive-hcatalog/sbin/webhcat_server.sh start", true        , /var/log/webhcat/webhcat.log)


oozie: namenode
    $(call colorized,        Oozie,         su - oozie -c "cd /var/log/oozie; /usr/lib/oozie/bin/oozie-start.sh", true        ,        /var/log/oozie/oozie.log)


ganglia:
    $(call colorized,        Ganglia,         /etc/init.d/gmetad stop &>/dev/null; /etc/init.d/gmond stop &>/dev/null; /etc/init.d/hdp-gmetad start && /etc/init.d/hdp-gmond start,        true)

nagios:
    $(call colorized,        Nagios,         /etc/init.d/nagios start,        sleep 5,)

falcon: HDFS Oozie
    $(call colorized,        Falcon,         su - falcon -c 'env JAVA_HOME=/usr/jdk64/jdk1.7.0_45 FALCON_LOG_DIR=/var/log/falcon FALCON_PID_DIR=/var/run/falcon FALCON_DATA_DIR=/hadoop/falcon/activemq /usr/lib/falcon/bin/falcon-start -port 15000',        sleep 5,)


knox-ldap:
    $(call colorized,        Knox ldap,         su - knox -c "/usr/lib/knox/bin/ldap.sh start",        sleep 2,)


knox-gateway: HDFS WebHCat Oozie knox-ldap
    $(call colorized,        Knox gateway,         su - knox -c "/usr/lib/knox/bin/gateway.sh start",        sleep 2,)


# ==== Ambari ====

###ambari_server: ganglia nagios
ambari_server:
    $(call colorized,        Ambari server,         ambari-server start,        sleep 5, )


ambari_agent: ambari_server
    $(call colorized,        Ambari agent,         ambari-agent start,true)
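After editing, a make dry run is a cheap way to catch syntax mistakes before rebooting: the -n flag prints the recipes without starting any service. The sketch below exercises a minimal stand-in makefile (a hypothetical two-target example, not the real file; note also that real make recipes must be indented with a tab character):

```shell
mf=$(mktemp)
# Stand-in makefile; on a real sandbox run instead:
#   make -n -f /usr/lib/hue/tools/start_scripts/start_deps.mf all
printf 'all: Startup\nStartup:\n\t@echo "would start services"\n' > "$mf"
make -n -f "$mf" all    # dry run: prints each recipe line, executes nothing
```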

