Upgrading hadoop2.3.0-cdh5.0.2 to cdh5.7.0
Posted by roger888
The holiday starts the day after tomorrow and I really can't keep my mind on work, so to get through these two dragging days with something to show for them, I decided to take on a big project... PS: why am I so eager for the holiday anyway? Is lying on the couch like a corpse all day really that meaningful...
Current version: hadoop2.3.0-cdh5.0.2
Machines:

Host  | nn  | dn  | jn  | rm  | nm  | jh  | hmaster | hregionserver
mast1 | yes | yes | yes | yes | yes | yes | yes     |
mast2 | yes | yes | yes | yes | yes | yes | yes     |
mast3 | yes | yes | yes | yes | yes |     |         |
Target version: hadoop2.6.0-cdh5.7.0
Upgrade method: Upgrading Unmanaged CDH Using the Command Line
Upgrade notes:
① Upgrading from a release lower than CDH 5.4.0 to CDH 5.4.0 or higher requires an HDFS metadata upgrade.
② Upgrading from a release lower than CDH 5.2.0 additionally requires the following:
  upgrade the HDFS metadata
  upgrade the Sentry database
  upgrade the Hive metastore database
  upgrade the Sqoop 2 database
③ Also make sure to upgrade the Oozie database and shared library, and if the Spark assembly JAR was uploaded to HDFS, upload the latest version of that file. (A quick way to confirm the currently installed release, to decide which of these apply, is sketched below.)
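Before working through these notes, it can help to confirm exactly which CDH release each host is currently running. A minimal sketch, assuming an RPM-based system like the RHEL/CentOS 6 hosts targeted by the repository used later in this post:

# Print the Hadoop build string, which includes the CDH release (e.g. 2.3.0-cdh5.0.2)
hadoop version

# List the installed CDH-related packages and their versions
rpm -qa | grep -i -e hadoop -e hbase -e hive -e oozie -e sqoop -e sentry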
Upgrade steps:
1. Prepare for the upgrade:
① Put the active NameNode into safe mode and save the namespace to checkpoint the fsimage (a quick verification sketch follows the commands):
[[email protected] conf]$ hdfs haadmin -getServiceState mast1
active
[[email protected] conf]$ hdfs dfsadmin -safemode enter
Safe mode is ON
[[email protected] conf]$ hdfs dfsadmin -saveNamespace
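This check is not in the original post, but before moving on, safe mode and the freshly written checkpoint can be verified; the name directory /app/hdp/dfs/name is the one backed up in step ③ below:

# Confirm the NameNodes report safe mode as ON
hdfs dfsadmin -safemode get

# Confirm that -saveNamespace wrote a recent fsimage_<txid> file
ls -lt /app/hdp/dfs/name/current | head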
② Confirm that no Hadoop services are running (a sketch for stopping any that are still up follows the check):
[[email protected] ~]# ps -aef|grep java
root 9540 8838 0 15:34 pts/0 00:00:00 grep java
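If any Hadoop daemons do show up here, they should be stopped on every host before continuing. A minimal sketch, assuming the CDH packages installed their standard init scripts:

# Stop every CDH Hadoop service that has an init script on this host
for x in $(cd /etc/init.d ; ls hadoop-* 2>/dev/null) ; do sudo service "$x" stop ; done

# Re-run the check; the [j] keeps grep itself out of the output
ps -aef | grep "[j]ava"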
③ Back up the metadata on the NameNode (the active NameNode in the HA pair); note that if a lock file is found during this step, start over from the beginning (a check for the lock file follows the tar output):
[[email protected] ~]# cd /app/hdp/dfs/name
[[email protected] name]# tar -cvf /root/nn_backup_data.tar .
./
./edits/
./edits/current/
./edits/current/edits_inprogress_0000000000000000624
./edits/current/edits_0000000000000000413-0000000000000000533
./edits/current/edits_0000000000000000620-0000000000000000621
./edits/current/edits_0000000000000000618-0000000000000000619
./edits/current/edits_0000000000000000062-0000000000000000180
./edits/current/edits_0000000000000000622-0000000000000000623
./edits/current/edits_0000000000000000038-0000000000000000050
./edits/current/edits_0000000000000000534-0000000000000000615
./edits/current/edits_0000000000000000181-0000000000000000182
./edits/current/edits_0000000000000000284-0000000000000000412
./edits/current/edits_0000000000000000051-0000000000000000061
./edits/current/seen_txid
./edits/current/edits_0000000000000000183-0000000000000000283
./edits/current/VERSION
./edits/current/edits_0000000000000000616-0000000000000000617
./current/
./current/fsimage_0000000000000000000
./current/seen_txid
./current/VERSION
./current/fsimage_0000000000000000000.md5
./namesecondary/
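To catch the lock-file case called out in step ③ and to make sure the archive is readable, something like the following can be run; in_use.lock is the lock file a running NameNode keeps in its name directory:

# A hit here means a NameNode still holds the directory: stop it and redo the backup
find /app/hdp/dfs/name -name in_use.lock

# Spot-check that the backup archive lists the expected fsimage and edits files
tar -tvf /root/nn_backup_data.tar | head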
2. Download the CDH 5 "1-click" repository package:
https://archive.cloudera.com/cdh5/one-click-install/redhat/6/x86_64/cloudera-cdh-5-0.x86_64.rpm
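A minimal sketch of fetching and installing that repository package on each host, assuming RHEL/CentOS 6 with sudo access:

# Download the 1-click repository package
wget https://archive.cloudera.com/cdh5/one-click-install/redhat/6/x86_64/cloudera-cdh-5-0.x86_64.rpm

# Install it so yum picks up the CDH 5 repository definition
sudo yum --nogpgcheck localinstall cloudera-cdh-5-0.x86_64.rpm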
3. Update the packages
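The post breaks off at this point, so the following is only a rough sketch of what the update step typically looks like on a yum-based host, not the author's actual commands. The GPG key URL and the repo file /etc/yum.repos.d/cloudera-cdh5.repo are assumptions based on the standard 1-click package layout; to land on 5.7.0 exactly, the repo baseurl may need to be pinned to that release.

# Import the Cloudera GPG key (URL assumed for the redhat/6 CDH repository)
sudo rpm --import https://archive.cloudera.com/cdh5/redhat/6/x86_64/cdh/RPM-GPG-KEY-cloudera

# Refresh repository metadata and upgrade the installed CDH packages on every host
sudo yum clean all
sudo yum update hadoop\* hbase\* hive\* zookeeper\*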