HBase in Action (4): Operating a Distributed HBase Cluster with Java
The test program is developed in IntelliJ IDEA on Windows 10, while the Hadoop and HBase clusters are deployed in VMware virtual machines running Linux. The local IDEA project on Windows connects to the HBase cluster inside the virtual machines and operates on it.
1. Edit the HOSTS file under C:\Windows\System32\drivers\etc and map the cluster hostnames:
```
192.168.189.1 master
192.168.189.2 worker1
192.168.189.3 worker2
192.168.189.4 worker3
```
```
Microsoft Windows [Version 10.0.16299.371]
(c) 2017 Microsoft Corporation. All rights reserved.

C:\Users\lenovo>ping master

Pinging master [192.168.189.1] with 32 bytes of data:
Reply from 192.168.189.1: bytes=32 time<1ms TTL=64
Reply from 192.168.189.1: bytes=32 time<1ms TTL=64

Ping statistics for 192.168.189.1:
    Packets: Sent = 2, Received = 2, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 0ms, Average = 0ms
Control-C
^C
C:\Users\lenovo>ping worker1

Pinging worker1 [192.168.189.2] with 32 bytes of data:
Reply from 192.168.189.2: bytes=32 time<1ms TTL=64
Reply from 192.168.189.2: bytes=32 time<1ms TTL=64
Reply from 192.168.189.2: bytes=32 time<1ms TTL=64
Reply from 192.168.189.2: bytes=32 time<1ms TTL=64

Ping statistics for 192.168.189.2:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 0ms, Average = 0ms

C:\Users\lenovo>ping worker3

Pinging worker3 [192.168.189.4] with 32 bytes of data:
Reply from 192.168.189.4: bytes=32 time<1ms TTL=64
Reply from 192.168.189.4: bytes=32 time<1ms TTL=64
Reply from 192.168.189.4: bytes=32 time<1ms TTL=64
Reply from 192.168.189.4: bytes=32 time<1ms TTL=64

Ping statistics for 192.168.189.4:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 0ms, Average = 0ms

C:\Users\lenovo>
```
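Besides ping, hostname resolution can also be verified from Java, which is what the HBase client ultimately relies on (it learns region server hostnames from ZooKeeper and must resolve them locally). A minimal sketch; the class name is made up for illustration:

```java
import java.net.InetAddress;

// Quick check that the cluster hostnames in the HOSTS file resolve from Windows.
public class HostCheck {
    public static void main(String[] args) throws Exception {
        for (String host : new String[]{"master", "worker1", "worker2", "worker3"}) {
            System.out.println(host + " -> " + InetAddress.getByName(host).getHostAddress());
        }
    }
}
```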
2. Create a new Maven project and write its pom.xml; Maven will download the HBase dependencies.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>noc_hbase_test</groupId>
    <artifactId>noc_hbase_test</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <scala.version>2.11.8</scala.version>
        <spark.version>2.2.1</spark.version>
        <jedis.version>2.8.2</jedis.version>
        <fastjson.version>1.2.14</fastjson.version>
        <jetty.version>9.2.5.v20141112</jetty.version>
        <container.version>2.17</container.version>
        <java.version>1.8</java.version>
        <hbase.version>1.2.0</hbase.version>
    </properties>

    <repositories>
        <repository>
            <id>scala-tools.org</id>
            <name>Scala-Tools Maven2 Repository</name>
            <url>http://scala-tools.org/repo-releases</url>
        </repository>
    </repositories>

    <pluginRepositories>
        <pluginRepository>
            <id>scala-tools.org</id>
            <name>Scala-Tools Maven2 Repository</name>
            <url>http://scala-tools.org/repo-releases</url>
        </pluginRepository>
    </pluginRepositories>

    <dependencies>
        <!-- HBase dependencies: https://mvnrepository.com/artifact/org.apache.hbase -->
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-client</artifactId>
            <version>${hbase.version}</version>
            <exclusions>
                <exclusion>
                    <groupId>org.slf4j</groupId>
                    <artifactId>slf4j-log4j12</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-common</artifactId>
            <version>${hbase.version}</version>
            <exclusions>
                <exclusion>
                    <groupId>org.slf4j</groupId>
                    <artifactId>slf4j-log4j12</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-server</artifactId>
            <version>${hbase.version}</version>
            <exclusions>
                <exclusion>
                    <groupId>org.slf4j</groupId>
                    <artifactId>slf4j-log4j12</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <!-- Hadoop dependencies: https://mvnrepository.com/artifact/org.apache.hadoop -->
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>2.6.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>2.6.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>2.6.0</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <configuration>
                    <classifier>dist</classifier>
                    <appendAssemblyId>true</appendAssemblyId>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>${java.version}</source>
                    <target>${java.version}</target>
                </configuration>
            </plugin>
            <plugin>
                <groupId>net.alchim31.maven</groupId>
                <artifactId>scala-maven-plugin</artifactId>
                <version>3.2.2</version>
                <executions>
                    <execution>
                        <id>scala-compile-first</id>
                        <phase>process-resources</phase>
                        <goals>
                            <goal>compile</goal>
                        </goals>
                    </execution>
                </executions>
                <configuration>
                    <scalaVersion>${scala.version}</scalaVersion>
                    <recompileMode>incremental</recompileMode>
                    <useZincServer>true</useZincServer>
                    <args>
                        <arg>-unchecked</arg>
                        <arg>-deprecation</arg>
                        <arg>-feature</arg>
                    </args>
                    <jvmArgs>
                        <jvmArg>-Xms1024m</jvmArg>
                        <jvmArg>-Xmx1024m</jvmArg>
                    </jvmArgs>
                    <javacArgs>
                        <javacArg>-source</javacArg>
                        <javacArg>${java.version}</javacArg>
                        <javacArg>-target</javacArg>
                        <javacArg>${java.version}</javacArg>
                        <javacArg>-Xlint:all,-serial,-path</javacArg>
                    </javacArgs>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.antlr</groupId>
                <artifactId>antlr4-maven-plugin</artifactId>
                <version>4.3</version>
                <executions>
                    <execution>
                        <id>antlr</id>
                        <goals>
                            <goal>antlr4</goal>
                        </goals>
                        <phase>none</phase>
                    </execution>
                </executions>
                <configuration>
                    <outputDirectory>src/test/java</outputDirectory>
                    <listener>true</listener>
                    <treatWarningsAsErrors>true</treatWarningsAsErrors>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
```
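With this pom in place, a plain `mvn clean package` compiles the project and, because the assembly plugin's `single` goal is bound to the `package` phase, also produces a jar-with-dependencies alongside the regular jar, which is convenient for running the test program outside the IDE.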
3. Copy the Hadoop and HBase configuration files from the Linux environment, hbase-site.xml and hdfs-site.xml, into the IDEA project so they are on the classpath.
hbase-site.xml
```xml
<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://master:9000/hbase</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>192.168.189.1:2181,192.168.189.2:2181,192.168.189.3:2181</value>
    </property>
    <property>
        <name>hbase.master.info.port</name>
        <value>60010</value>
    </property>
</configuration>
```
hdfs-site.xml
```xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/usr/local/hadoop-2.6.0/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/usr/local/hadoop-2.6.0/tmp/dfs/data</value>
    </property>
</configuration>
```
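When these files are on the classpath, HBaseConfiguration.create() picks them up automatically; they can also be added explicitly. A minimal sketch to verify what the client actually sees (the explicit file path is an assumption for illustration):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ConfigCheck {
    public static void main(String[] args) {
        // create() loads hbase-site.xml from the classpath if it is present
        Configuration conf = HBaseConfiguration.create();
        // Or add the file explicitly (hypothetical path)
        conf.addResource(new Path("conf/hbase-site.xml"));
        // Print the settings the client will use to locate the cluster
        System.out.println("hbase.zookeeper.quorum = " + conf.get("hbase.zookeeper.quorum"));
        System.out.println("hbase.rootdir = " + conf.get("hbase.rootdir"));
    }
}
```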
HBase test code:
```java
package HbaseTest;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

import java.io.IOException;

public class HbaseMyTest {

    // Shared handles, populated by HbaseUtils.init()
    public static Configuration configuration;
    public static Connection connection;
    public static Admin admin;

    public static void main(String[] args) throws IOException {
        listTables();
    }

    // Connect to the cluster, print every table name, then release resources
    public static void listTables() throws IOException {
        HbaseUtils.init();
        HTableDescriptor[] hTableDescriptors = admin.listTables();
        for (HTableDescriptor hTableDescriptor : hTableDescriptors) {
            System.out.println("HBase table name queried from local IDEA: " + hTableDescriptor.getNameAsString());
        }
        HbaseUtils.close();
    }
}
```
```java
package HbaseTest;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.ConnectionFactory;

import java.io.IOException;

public class HbaseUtils {

    // Build the client configuration and open a connection to the cluster
    public static void init() {
        HbaseMyTest.configuration = HBaseConfiguration.create();
        HbaseMyTest.configuration.set("hbase.zookeeper.property.clientPort", "2181");
        HbaseMyTest.configuration.set("hbase.zookeeper.quorum", "192.168.189.1,192.168.189.2,192.168.189.3");
        HbaseMyTest.configuration.set("hbase.master", "192.168.189.1:60000");
        try {
            HbaseMyTest.connection = ConnectionFactory.createConnection(HbaseMyTest.configuration);
            HbaseMyTest.admin = HbaseMyTest.connection.getAdmin();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    // Close the Admin handle and the connection in reverse order of creation
    public static void close() {
        try {
            if (null != HbaseMyTest.admin) {
                HbaseMyTest.admin.close();
            }
            if (null != HbaseMyTest.connection) {
                HbaseMyTest.connection.close();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
```
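Listing tables only exercises the Admin API. The same Connection can also create a table and read and write cells; the sketch below follows the same pattern with the HBase 1.2 client, but the table name, column family, row key, and values are made up for illustration:

```java
package HbaseTest;

import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

import java.io.IOException;

// A minimal create/put/get round trip; all names are hypothetical
public class HbaseRowDemo {
    public static void main(String[] args) throws IOException {
        HbaseUtils.init();

        TableName tableName = TableName.valueOf("demo_table"); // hypothetical table
        if (!HbaseMyTest.admin.tableExists(tableName)) {
            HTableDescriptor desc = new HTableDescriptor(tableName);
            desc.addFamily(new HColumnDescriptor("cf")); // hypothetical column family
            HbaseMyTest.admin.createTable(desc);
        }

        Table table = HbaseMyTest.connection.getTable(tableName);
        try {
            // Write one cell: row "row1", column cf:name, value "hbase"
            Put put = new Put(Bytes.toBytes("row1"));
            put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("name"), Bytes.toBytes("hbase"));
            table.put(put);

            // Read the same cell back and print it
            Result result = table.get(new Get(Bytes.toBytes("row1")));
            byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("name"));
            System.out.println("cf:name = " + Bytes.toString(value));
        } finally {
            table.close();
        }

        HbaseUtils.close();
    }
}
```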
The run output, a list of the cluster's table names, confirms that the local IDEA program can reach the distributed HBase cluster.