Installing Snappy for Hadoop 2.2.0 and HBase 0.98
1. Install the required dependencies and software

Required dependency packages: gcc, g++, autoconf, automake, libtool.

Required companion software: Java 6 and Maven.

On Ubuntu, install the dependency packages with sudo apt-get install; on CentOS, use sudo yum install.

For installing Java and Maven, see the earlier post 《Linux下Java、Maven、Tomcat的安装》 (Installing Java, Maven, and Tomcat on Linux).
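For example, on Ubuntu the whole set can be installed in one go (the package names below are the usual ones, though they may differ slightly between distribution releases):

$ sudo apt-get install gcc g++ autoconf automake libtool

The CentOS equivalent would be:

$ sudo yum install gcc gcc-c++ autoconf automake libtool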
2. Download snappy-1.1.2

Download locations:

Location 1: https://code.google.com/p/snappy/wiki/Downloads?tm=2

Location 2: http://download.csdn.net/detail/iam333/7725883
3. Build and install the shared library

After downloading, extract the archive somewhere; here we assume it is extracted into the home directory. Then run the following commands:

$ cd ~/snappy-1.1.2
$ sudo ./configure
$ sudo make
$ sudo make install

Then check whether the installation succeeded:

$ cd /usr/local/lib
$ ll libsnappy.*
-rw-r--r-- 1 root root 233506 Aug  7 11:56 libsnappy.a
-rwxr-xr-x 1 root root    953 Aug  7 11:56 libsnappy.la
lrwxrwxrwx 1 root root     18 Aug  7 11:56 libsnappy.so -> libsnappy.so.1.2.1
lrwxrwxrwx 1 root root     18 Aug  7 11:56 libsnappy.so.1 -> libsnappy.so.1.2.1
-rwxr-xr-x 1 root root 147758 Aug  7 11:56 libsnappy.so.1.2.1

If the build produced no errors and the files above exist under /usr/local/lib, the installation succeeded.
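If later steps fail to find the freshly installed library at link or load time, refreshing the shared-library cache usually fixes it (an optional step, not part of the original walkthrough):

$ sudo ldconfig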
4. Build hadoop-snappy from source

1) Download the source. There are two ways:

a) Install svn: on Ubuntu, use sudo apt-get install subversion; on CentOS, use sudo yum install subversion.

b) Check the source out of Google's svn repository:

$ svn checkout http://hadoop-snappy.googlecode.com/svn/trunk/ hadoop-snappy

This places a copy of the hadoop-snappy source in a hadoop-snappy directory under the current directory.

Since Google's services are often unreachable from mainland China, you can also download the source directly: http://download.csdn.net/detail/iam333/7726023
2) Build the hadoop-snappy source

Switch into the hadoop-snappy source directory and run one of the following:

a) If snappy was installed to the default path above:

mvn package

b) If snappy was installed to a custom path:

mvn package [-Dsnappy.prefix=SNAPPY_INSTALLATION_DIR]

where SNAPPY_INSTALLATION_DIR is the snappy installation path.
Problems that may come up during the build:

a) /root/modules/hadoop-snappy/maven/build-compilenative.xml:62: Execute failed: java.io.IOException: Cannot run program "autoreconf" (in directory "/root/modules/hadoop-snappy/target/native-src"): java.io.IOException: error=2, No such file or directory

Solution: this reads like a missing file, but the file lives under target and is generated during the build, so it would not exist beforehand anyway. The real issue is not a missing file: Hadoop Snappy has prerequisites. Install the dependency packages described at the top of this post.
b) The build fails with:

[exec] make: *** [src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.lo] Error 1
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (compile) on project hadoop-snappy: An Ant BuildException has occured: The following error occurred while executing this line:
[ERROR] /home/ngc/Char/snap/hadoop-snappy/hadoop-snappy-read-only/maven/build-compilenative.xml:75: exec returned:

Solution: the official Hadoop Snappy documentation only says gcc is required, without naming a version. In practice, Hadoop Snappy needs gcc 4.4; if your gcc is newer than 4.4, the build fails like this.
On CentOS, run the following commands (on Ubuntu, replace sudo yum install with sudo apt-get install):

$ sudo yum install gcc-4.4
$ sudo rm /usr/bin/gcc
$ sudo ln -s /usr/bin/gcc-4.4 /usr/bin/gcc

Check that the switch took effect:

$ gcc --version
gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-3)
Copyright (C) 2010 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

c) The build fails with:
[exec] /bin/bash ./libtool --tag=CC --mode=link gcc -g -Wall -fPIC -O2 -m64 -g -O2 -version-info 0:1:0 -L/usr/local//lib -o libhadoopsnappy.la -rpath /usr/local/lib src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.lo src/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.lo -ljvm -ldl
[exec] /usr/bin/ld: cannot find -ljvm
[exec] collect2: ld returned 1 exit status
[exec] make: *** [libhadoopsnappy.la] Error 1
[exec] libtool: link: gcc -shared -fPIC -DPIC src/org/apache/hadoop/io/compress/snappy/.libs/SnappyCompressor.o src/org/apache/hadoop/io/compress/snappy/.libs/SnappyDecompressor.o -L/usr/local//lib -ljvm -ldl -O2 -m64 -O2 -Wl,-soname -Wl,libhadoopsnappy.so.0 -o .libs/libhadoopsnappy.so.0.0.1

This happens because the JDK's libjvm.so has not been symlinked into /usr/local/lib. On a 64-bit system, you can find where libjvm.so lives under a path like /root/bin/jdk1.6.0_37/jre/lib/amd64/server/. Create the link:

$ sudo ln -s /usr/local/jdk1.6.0_45/jre/lib/amd64/server/libjvm.so /usr/local/lib/

That resolves the problem.
5. Configure Snappy for Hadoop 2.2.0

After hadoop-snappy builds successfully, the target directory under hadoop-snappy contains, among other files, hadoop-snappy-0.0.1-SNAPSHOT.tar.gz.

1) Unpack hadoop-snappy-0.0.1-SNAPSHOT.tar.gz from target and copy its native libraries:

$ sudo cp -r ~/snappy-hadoop/target/hadoop-snappy-0.0.1-SNAPSHOT/lib/native/Linux-amd64-64/* $HADOOP_HOME/lib/native/Linux-amd64-64/

2) Copy hadoop-snappy-0.0.1-SNAPSHOT.jar from target into $HADOOP_HOME/lib.
3) In $HADOOP_HOME/etc/hadoop/hadoop-env.sh, add:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HADOOP_HOME/lib/native/Linux-amd64-64/:/usr/local/lib/

4) Configure $HADOOP_HOME/etc/hadoop/mapred-site.xml. All the compression-related options in this file are:

<property>
  <name>mapred.output.compress</name>
  <value>false</value>
  <description>Should the job outputs be compressed?</description>
</property>
<property>
  <name>mapred.output.compression.type</name>
  <value>RECORD</value>
  <description>If the job outputs are to compressed as SequenceFiles, how should they be compressed? Should be one of NONE, RECORD or BLOCK.</description>
</property>
<property>
  <name>mapred.output.compression.codec</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec</value>
  <description>If the job outputs are compressed, how should they be compressed?</description>
</property>
<property>
  <name>mapred.compress.map.output</name>
  <value>false</value>
  <description>Should the outputs of the maps be compressed before being sent across the network. Uses SequenceFile compression.</description>
</property>
<property>
  <name>mapred.map.output.compression.codec</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec</value>
  <description>If the map outputs are compressed, how should they be compressed?</description>
</property>

Configure these according to your needs.
The codec types are as follows:

<property>
  <name>io.compression.codecs</name>
  <value>
    org.apache.hadoop.io.compress.GzipCodec,
    org.apache.hadoop.io.compress.DefaultCodec,
    org.apache.hadoop.io.compress.BZip2Codec,
    org.apache.hadoop.io.compress.SnappyCodec
  </value>
</property>

SnappyCodec is the Snappy compression codec.
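For example, to compress intermediate map output with Snappy, you would flip the two map-output options shown above (a minimal sketch using only properties already listed):

<property>
  <name>mapred.compress.map.output</name>
  <value>true</value>
</property>
<property>
  <name>mapred.map.output.compression.codec</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>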
5) Once everything is configured, restart the Hadoop cluster.
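These options can also be set per job. As a quick smoke test that Hadoop can load the codec, you could run one of the bundled example jobs with Snappy map-output compression switched on via -D options (a sketch: the jar path assumes a standard Hadoop 2.2.0 layout, and /input and /output are placeholder HDFS paths):

$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar wordcount \
    -D mapred.compress.map.output=true \
    -D mapred.map.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec \
    /input /output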
6. Configure Snappy for HBase 0.98

1) Set up the native libraries under HBase's lib/native/Linux-amd64-64/.

For simplicity, just copy all the native libraries from $HADOOP_HOME/lib/native/Linux-amd64-64/ into the corresponding HBase directory:

$ sudo cp -r $HADOOP_HOME/lib/native/Linux-amd64-64/* $HBASE_HOME/lib/native/Linux-amd64-64/

2) Configure the HBase environment in hbase-env.sh:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HADOOP_HOME/lib/native/Linux-amd64-64/:/usr/local/lib/
export HBASE_LIBRARY_PATH=$HBASE_LIBRARY_PATH:$HBASE_HOME/lib/native/Linux-amd64-64/:/usr/local/lib/
export CLASSPATH=$CLASSPATH:$HBASE_LIBRARY_PATH

Note: don't forget to set HADOOP_HOME and HBASE_HOME at the top of hbase-env.sh.
3) Once configured, restart HBase.

4) Verify the installation

From the HBase installation directory, run:
$ bin/hbase shell
2014-08-07 15:11:35,874 INFO  [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.98.2-hadoop2, r1591526, Wed Apr 30 20:17:33 PDT 2014

hbase(main):001:0>

Then create a table:

hbase(main):001:0> create 'test_snappy', {NAME => 'cf', COMPRESSION => 'SNAPPY'}
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/q/hbase/hbase-0.98.2-hadoop2/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/q/hadoop2x/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
0 row(s) in 1.2580 seconds

=> Hbase::Table - test_snappy
hbase(main):002:0>

Look at the new test_snappy table:

hbase(main):002:0> describe 'test_snappy'
DESCRIPTION                                                                       ENABLED
 'test_snappy', {NAME => 'cf', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER =>      true
 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'SNAPPY',
 MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false',
 BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
1 row(s) in 0.0420 seconds

As you can see, COMPRESSION => 'SNAPPY'.
Next, try inserting some data:

hbase(main):003:0> put 'test_snappy', 'key1', 'cf:q1', 'value1'
0 row(s) in 0.0790 seconds

Then scan the test_snappy table:

hbase(main):004:0> scan 'test_snappy'
ROW                 COLUMN+CELL
 key1               column=cf:q1, timestamp=1407395814255, value=value1
1 row(s) in 0.0170 seconds

If all of the above runs without errors, the configuration is correct.
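Note that Snappy only takes effect on column families created with COMPRESSION => 'SNAPPY'. To convert an existing table, something like the following should work (a sketch; on 0.98, online schema changes are disabled by default, hence the disable/enable, and a major compaction rewrites the existing store files with the new codec):

hbase(main):005:0> disable 'test_snappy'
hbase(main):006:0> alter 'test_snappy', {NAME => 'cf', COMPRESSION => 'SNAPPY'}
hbase(main):007:0> enable 'test_snappy'
hbase(main):008:0> major_compact 'test_snappy'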
Troubleshooting:

a) After configuring, starting HBase throws:

WARN  [main] util.CompressionTest: Can't instantiate codec: snappy
java.io.IOException: java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
	at org.apache.hadoop.hbase.util.CompressionTest.testCompression(CompressionTest.java:96)
	at org.apache.hadoop.hbase.util.CompressionTest.testCompression(CompressionTest.java:62)
	at org.apache.hadoop.hbase.regionserver.HRegionServer.checkCodecs(HRegionServer.java:660)
	at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:538)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:526)

This means the native libraries are still not set up correctly; go back over the settings in hbase-env.sh and check that everything is configured as described above.
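The stack trace comes from HBase's built-in CompressionTest utility, which you can also run by hand to exercise the Snappy codec outside the region server (the file path here is just an example; the tool reports whether the codec loads and round-trips data):

$ bin/hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/test.txt snappy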
Please credit the source when reposting: http://blog.csdn.net/iAm333