HBase in Action: Operating a Distributed HBase Cluster from Java

Posted by qwangxiao


HBase in Action (4): Operating a Distributed HBase Cluster from Java

    The HBase client program is developed and tested in IDEA on Windows 10, while the Hadoop and HBase cluster runs on Linux inside VMware virtual machines. The local IDEA project on Windows connects to and operates the HBase cluster in the VMs.

    1. Edit the HOSTS file under C:\Windows\System32\drivers\etc:

```
192.168.189.1 master
192.168.189.2 worker1
192.168.189.3 worker2
192.168.189.4 worker3
```

    

```
Microsoft Windows [Version 10.0.16299.371]
(c) 2017 Microsoft Corporation. All rights reserved.

C:\Users\lenovo>ping master

Pinging master [192.168.189.1] with 32 bytes of data:
Reply from 192.168.189.1: bytes=32 time<1ms TTL=64
Reply from 192.168.189.1: bytes=32 time<1ms TTL=64

Ping statistics for 192.168.189.1:
    Packets: Sent = 2, Received = 2, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 0ms, Average = 0ms
Control-C
^C
C:\Users\lenovo>ping worker1

Pinging worker1 [192.168.189.2] with 32 bytes of data:
Reply from 192.168.189.2: bytes=32 time<1ms TTL=64
Reply from 192.168.189.2: bytes=32 time<1ms TTL=64
Reply from 192.168.189.2: bytes=32 time<1ms TTL=64
Reply from 192.168.189.2: bytes=32 time<1ms TTL=64

Ping statistics for 192.168.189.2:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 0ms, Average = 0ms

C:\Users\lenovo>ping worker3

Pinging worker3 [192.168.189.4] with 32 bytes of data:
Reply from 192.168.189.4: bytes=32 time<1ms TTL=64
Reply from 192.168.189.4: bytes=32 time<1ms TTL=64
Reply from 192.168.189.4: bytes=32 time<1ms TTL=64
Reply from 192.168.189.4: bytes=32 time<1ms TTL=64

Ping statistics for 192.168.189.4:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 0ms, Average = 0ms

C:\Users\lenovo>
```
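The same resolution check can be done from Java before attempting any HBase connection. This matters because the HBase client receives RegionServer addresses from ZooKeeper as hostnames, so the client machine must resolve the master/worker names exactly as the cluster does. A minimal sketch (the `HostCheck` class name is my own, not from the original post):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class HostCheck {

    // Resolve a hostname to its IP address, or return null if resolution fails.
    static String resolve(String host) {
        try {
            return InetAddress.getByName(host).getHostAddress();
        } catch (UnknownHostException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        // The cluster hostnames added to the Windows HOSTS file above.
        for (String host : new String[] {"master", "worker1", "worker2", "worker3"}) {
            String ip = resolve(host);
            System.out.println(host + " -> " + (ip == null ? "UNRESOLVED" : ip));
        }
    }
}
```

If any name prints `UNRESOLVED`, fix the HOSTS file before going further; a mis-resolved RegionServer hostname typically shows up later as a connection timeout rather than a clear error.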

2. Create a new Maven project and write the pom.xml so Maven downloads the HBase dependencies.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>noc_hbase_test</groupId>
    <artifactId>noc_hbase_test</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <scala.version>2.11.8</scala.version>
        <spark.version>2.2.1</spark.version>
        <jedis.version>2.8.2</jedis.version>
        <fastjson.version>1.2.14</fastjson.version>
        <jetty.version>9.2.5.v20141112</jetty.version>
        <container.version>2.17</container.version>
        <java.version>1.8</java.version>
        <hbase.version>1.2.0</hbase.version>
    </properties>

    <repositories>
        <repository>
            <id>scala-tools.org</id>
            <name>Scala-Tools Maven2 Repository</name>
            <url>http://scala-tools.org/repo-releases</url>
        </repository>
    </repositories>

    <pluginRepositories>
        <pluginRepository>
            <id>scala-tools.org</id>
            <name>Scala-Tools Maven2 Repository</name>
            <url>http://scala-tools.org/repo-releases</url>
        </pluginRepository>
    </pluginRepositories>

    <dependencies>
        <!-- https://mvnrepository.com/artifact/org.apache.hbase/hbase -->
        <!-- HBase dependencies -->
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-client</artifactId>
            <version>${hbase.version}</version>
            <exclusions>
                <exclusion>
                    <groupId>org.slf4j</groupId>
                    <artifactId>slf4j-log4j12</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-common</artifactId>
            <version>${hbase.version}</version>
            <exclusions>
                <exclusion>
                    <groupId>org.slf4j</groupId>
                    <artifactId>slf4j-log4j12</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-server</artifactId>
            <version>${hbase.version}</version>
            <exclusions>
                <exclusion>
                    <groupId>org.slf4j</groupId>
                    <artifactId>slf4j-log4j12</artifactId>
                </exclusion>
            </exclusions>
        </dependency>

        <!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-common -->
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>2.6.0</version>
        </dependency>

        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>2.6.0</version>
        </dependency>

        <!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-hdfs -->
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>2.6.0</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <configuration>
                    <classifier>dist</classifier>
                    <appendAssemblyId>true</appendAssemblyId>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>

            <plugin>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>1.7</source>
                    <target>1.7</target>
                </configuration>
            </plugin>

            <plugin>
                <groupId>net.alchim31.maven</groupId>
                <artifactId>scala-maven-plugin</artifactId>
                <version>3.2.2</version>
                <executions>
                    <execution>
                        <id>scala-compile-first</id>
                        <phase>process-resources</phase>
                        <goals>
                            <goal>compile</goal>
                        </goals>
                    </execution>
                </executions>
                <configuration>
                    <scalaVersion>${scala.version}</scalaVersion>
                    <recompileMode>incremental</recompileMode>
                    <useZincServer>true</useZincServer>
                    <args>
                        <arg>-unchecked</arg>
                        <arg>-deprecation</arg>
                        <arg>-feature</arg>
                    </args>
                    <jvmArgs>
                        <jvmArg>-Xms1024m</jvmArg>
                        <jvmArg>-Xmx1024m</jvmArg>
                    </jvmArgs>
                    <javacArgs>
                        <javacArg>-source</javacArg>
                        <javacArg>${java.version}</javacArg>
                        <javacArg>-target</javacArg>
                        <javacArg>${java.version}</javacArg>
                        <javacArg>-Xlint:all,-serial,-path</javacArg>
                    </javacArgs>
                </configuration>
            </plugin>

            <plugin>
                <groupId>org.antlr</groupId>
                <artifactId>antlr4-maven-plugin</artifactId>
                <version>4.3</version>
                <executions>
                    <execution>
                        <id>antlr</id>
                        <goals>
                            <goal>antlr4</goal>
                        </goals>
                        <phase>none</phase>
                    </execution>
                </executions>
                <configuration>
                    <outputDirectory>src/test/java</outputDirectory>
                    <listener>true</listener>
                    <treatWarningsAsErrors>true</treatWarningsAsErrors>
                </configuration>
            </plugin>
        </plugins>
    </build>

</project>
```

3. Place the cluster's Hadoop and HBase configuration files, hbase-site.xml and hdfs-site.xml, in the IDEA project.

hbase-site.xml

```xml
<configuration>

    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://master:9000/hbase</value>
    </property>

    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>

    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>192.168.189.1:2181,192.168.189.2:2181,192.168.189.3:2181</value>
    </property>

    <property>
        <name>hbase.master.info.port</name>
        <value>60010</value>
    </property>

</configuration>
```

hdfs-site.xml

```xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/usr/local/hadoop-2.6.0/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/usr/local/hadoop-2.6.0/tmp/dfs/data</value>
    </property>
</configuration>
```
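With these files on the project classpath, `HBaseConfiguration.create()` merges them over the built-in defaults, so the client can already see the ZooKeeper quorum before any property is set in code. A quick sanity check, assuming the hbase-client jar is on the classpath (the `ConfigCheck` class is illustrative, not from the original post):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ConfigCheck {
    public static void main(String[] args) {
        // create() loads hbase-default.xml from the HBase jar, then overlays
        // any hbase-site.xml found on the classpath.
        Configuration conf = HBaseConfiguration.create();
        System.out.println("hbase.rootdir          = " + conf.get("hbase.rootdir"));
        System.out.println("hbase.zookeeper.quorum = " + conf.get("hbase.zookeeper.quorum"));
        System.out.println("distributed mode       = " + conf.get("hbase.cluster.distributed"));
    }
}
```

If the quorum printed here is the default `localhost` instead of the cluster addresses, the XML files are not on the classpath and the client will fail to connect.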

HBase test code:

```java
package HbaseTest;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

import java.io.IOException;

public class HbaseMyTest {
    public static Configuration configuration;
    public static Connection connection;
    public static Admin admin;

    public static void main(String[] args) throws IOException {
        listTables();
    }

    public static void listTables() throws IOException {
        HbaseUtils.init();
        HTableDescriptor[] hTableDescriptors = admin.listTables();
        for (HTableDescriptor hTableDescriptor : hTableDescriptors) {
            System.out.println("HBase table name queried from the local IDEA client: " + hTableDescriptor.getNameAsString());
        }
        HbaseUtils.close();
    }
}
```
```java
package HbaseTest;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.ConnectionFactory;

import java.io.IOException;

public class HbaseUtils {
    public static void init() {
        HbaseMyTest.configuration = HBaseConfiguration.create();
        HbaseMyTest.configuration.set("hbase.zookeeper.property.clientPort", "2181");
        HbaseMyTest.configuration.set("hbase.zookeeper.quorum", "192.168.189.1,192.168.189.2,192.168.189.3");
        HbaseMyTest.configuration.set("hbase.master", "192.168.189.1:60000");

        try {
            HbaseMyTest.connection = ConnectionFactory.createConnection(HbaseMyTest.configuration);
            HbaseMyTest.admin = HbaseMyTest.connection.getAdmin();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static void close() {
        try {
            if (null != HbaseMyTest.admin)
                HbaseMyTest.admin.close();
            if (null != HbaseMyTest.connection)
                HbaseMyTest.connection.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
```
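Beyond listing tables, the same `HbaseUtils` connection pattern supports reads and writes. A sketch of a put/get round trip against the cluster (the table name `test_table` and column family `cf` are assumed examples, not from the original post, and must already exist in the cluster):

```java
package HbaseTest;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

import java.io.IOException;

public class HbasePutGet {
    public static void main(String[] args) throws IOException {
        HbaseUtils.init();
        // Assumes a table 'test_table' with column family 'cf' already exists.
        try (Table table = HbaseMyTest.connection.getTable(TableName.valueOf("test_table"))) {
            // Write one cell: row "row1", column cf:col1, value "value1".
            Put put = new Put(Bytes.toBytes("row1"));
            put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("col1"), Bytes.toBytes("value1"));
            table.put(put);

            // Read the same cell back.
            Result result = table.get(new Get(Bytes.toBytes("row1")));
            byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("col1"));
            System.out.println("row1 cf:col1 = " + Bytes.toString(value));
        }
        HbaseUtils.close();
    }
}
```

Note that `Table` instances are lightweight and closed per use (the try-with-resources above), while the `Connection` is heavyweight and shared, which is why this example reuses the static connection from `HbaseMyTest`.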

 

The output:

[Screenshot: IDEA console output listing the table names found in the HBase cluster]
