Setting Up a Hadoop + Hive + Spark Development Environment on Mac
Posted by 技术和生活小站
pic by maxrivephotography from Instagram
I have recently been learning the Java big data stack, and the various abstractions in Hadoop, Hive, Spark and friends had me thoroughly confused. I am reading the docs and the books, but without actually running the demos and examples nothing really sticks; the impression stays shallow.
My approach to learning a new technology or framework: step one, read the documentation to build an initial mental model; step two, run the demos to see concretely what the common APIs are and how a program calls them; step three, use it in a real project. The last step, for real depth, is to debug into the source code.
I had mostly finished step one and had a basic grasp of these big data components, so it was time to actually run the demos. There are plenty of online tutorials for setting up a local learning environment, but once you start you still hit all kinds of problems: everyone builds different versions on different operating systems, so everyone runs into different issues. Piecing things together from the official docs and other tutorials, it took me almost two days to get everything working on a Mac.
I have always felt that a Mac has a real advantage over Windows for development work; macOS is much closer to Linux and easier to configure. Since it still took me this long to get the environment up, it is worth summarizing and writing it down.
The goal is mainly to study Spark's examples. The latest Spark release is 2.3.0, which depends on Hive 1.2.1, while brew installs Hadoop 3.0.0 and Hive 2.3.0; with that combination, Spark SQL runs into problems.
The first run of bin/spark-submit --class "org.apache.spark.examples.sql.hive.JavaSparkHiveExample" spark-examples-1.0-SNAPSHOT.jar successfully creates the src table in Hive, but the second run fails with: "org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException: Database 'default' not found;"
Checking the dependencies with mvn dependency:tree shows that Spark SQL depends on Hive 1.2.1:
[INFO] \- org.apache.spark:spark-hive_2.11:jar:2.3.0:compile
[INFO] +- com.twitter:parquet-hadoop-bundle:jar:1.6.0:compile
[INFO] +- org.spark-project.hive:hive-exec:jar:1.2.1.spark2:compile
[INFO] | +- commons-io:commons-io:jar:2.4:compile
// ......
[INFO] +- org.spark-project.hive:hive-metastore:jar:1.2.1.spark2:compile
[INFO] | +- com.jolbox:bonecp:jar:0.8.0.RELEASE:compile
[INFO] | +- commons-cli:commons-cli:jar:1.2:compile
[INFO] | +- commons-logging:commons-logging:jar:1.1.3:compile
// ......
So I downloaded Hive 1.2.1 manually from the official site. Judging by the release dates there, Hive 1.2.1 and 2.3.3 are the two main lines at the moment; what Hive 2 adds over the first generation is something to study later.
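For reference, a quick way to fetch the two releases used below, assuming the usual Apache archive layout (paths and versions as in this post):
$ curl -O https://archive.apache.org/dist/hadoop/common/hadoop-2.6.5/hadoop-2.6.5.tar.gz
$ curl -O https://archive.apache.org/dist/hive/hive-1.2.1/apache-hive-1.2.1-bin.tar.gz
$ tar -xzf hadoop-2.6.5.tar.gz && tar -xzf apache-hive-1.2.1-bin.tar.gz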
Setting up Hadoop 2.6.5
1. Edit the configuration files
etc/hadoop/core-site.xml:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/Users/liaorui/data/hadoop2/hdfs-tmp</value>
</property>
</configuration>
Be sure to set hadoop.tmp.dir: by default Hadoop keeps its data under /tmp, which is lost when the machine reboots.
etc/hadoop/hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
etc/hadoop/mapred-site.xml:
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.application.classpath</name>
<value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
</property>
</configuration>
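Besides mapred-site.xml, the Hadoop single-node setup guide also enables the YARN shuffle service in etc/hadoop/yarn-site.xml; without it, MapReduce jobs will not run on YARN. A minimal version, following that guide:
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>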
2. Format the DFS filesystem and start it
$ bin/hdfs namenode -format
# Start NameNode daemon and DataNode daemon:
$ sbin/start-dfs.sh
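To sanity-check the HDFS daemons, jps should show NameNode, DataNode and SecondaryNameNode, and the NameNode web UI (port 50070 by default on Hadoop 2.x) should respond:
$ jps
# Expect NameNode, DataNode and SecondaryNameNode in the list
$ open http://localhost:50070/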
3. Create the HDFS directories required to run MapReduce jobs.
$ bin/hdfs dfs -mkdir /user
$ bin/hdfs dfs -mkdir /user/<username>
4. Start YARN
#Start ResourceManager daemon and NodeManager daemon:
$ sbin/start-yarn.sh
#Browse the web interface for the ResourceManager; by default it is available at:
* ResourceManager - http://localhost:8088/
#When you’re done, stop the daemons with:
$ sbin/stop-yarn.sh
Setting up Hive 1.2.1
1. Hive configuration
conf/hive-site.xml
<configuration>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost/metastore</value>
<description>the URL of the MySQL database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>hive</value>
</property>
<property>
<name>datanucleus.autoCreateSchema</name>
<value>false</value>
</property>
<property>
<name>datanucleus.fixedDatastore</name>
<value>true</value>
</property>
<property>
<name>datanucleus.autoStartMechanism</name>
<value>SchemaTable</value>
</property>
<property>
<name>hive.metastore.uris</name>
<value>thrift://localhost:9083</value>
<description>IP address (or fully-qualified domain name) and port of the metastore host</description>
</property>
<property>
<name>hive.metastore.schema.verification</name>
<value>true</value>
</property>
</configuration>
2. Create the MySQL database
Hive stores its data on HDFS, but the metastore (databases, tables and other metadata) lives in a separate relational database. By default Hive uses an embedded Derby instance, which cannot serve multiple users, so in practice MySQL or PostgreSQL is used as the metastore backend.
$ mysql -u root -p
Enter password:
mysql> CREATE DATABASE metastore;
mysql> USE metastore;
mysql> CREATE USER 'hive'@'localhost' IDENTIFIED BY 'hive';
...
mysql> REVOKE ALL PRIVILEGES, GRANT OPTION FROM 'hive'@'localhost';
mysql> GRANT ALL PRIVILEGES ON metastore.* TO 'hive'@'localhost';
mysql> FLUSH PRIVILEGES;
mysql> quit;
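Before moving on, a quick check that the hive account works (user and password as created above):
$ mysql -u hive -phive -e "SHOW DATABASES;"
# The metastore database should appear in the output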
3. Copy the MySQL Connector/J driver into hive/lib. Use mysql-connector-java-5.1.22.jar or later; older versions may fail with:
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_SELECT_LIMIT=DEFAULT' at line 1
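For example, assuming Connector/J 5.1.46 and that HIVE_HOME points at the Hive installation (the version and paths here are illustrative):
$ curl -O https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.46/mysql-connector-java-5.1.46.jar
$ cp mysql-connector-java-5.1.46.jar $HIVE_HOME/lib/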
4. Set the HADOOP_HOME environment variable
#Hive uses Hadoop, so you must have Hadoop in your path OR
$ export HADOOP_HOME=<hadoop-install-dir>
5. Create the directories Hive needs in DFS
$ $HADOOP_HOME/bin/hadoop fs -mkdir /tmp
$ $HADOOP_HOME/bin/hadoop fs -mkdir -p /user/hive/warehouse
$ $HADOOP_HOME/bin/hadoop fs -chmod g+w /tmp
$ $HADOOP_HOME/bin/hadoop fs -chmod g+w /user/hive/warehouse
6. Initialize the metastore database
$HIVE_HOME/bin/schematool -dbType mysql -initSchema
HiveServer2 (HS2) is a server interface that enables remote clients to execute queries against Hive and retrieve the results. The current implementation, based on Thrift RPC, is an improved version of HiveServer and supports multi-client concurrency and authentication. It is designed to provide better support for open API clients like JDBC and ODBC.
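This walkthrough only needs the metastore service started in the next step, but if you also want JDBC access, HiveServer2 can be run alongside it; a sketch, assuming the default port 10000:
# Optional: start HiveServer2, then connect with beeline over JDBC
bin/hiveserver2
bin/beeline -u jdbc:hive2://localhost:10000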
7. Run the metastore service
bin/hive --service metastore
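With the metastore service running, a quick smoke test from the Hive CLI in another terminal:
# Should connect to the remote metastore configured in hive-site.xml
bin/hive -e "SHOW DATABASES;"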
Setting up Spark 2.3.3
1. Copy Hadoop's core-site.xml and hdfs-site.xml and Hive's hive-site.xml into Spark's conf directory; Spark will then load the Hive configuration automatically.
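For example, assuming HADOOP_HOME, HIVE_HOME and SPARK_HOME point at the three installations:
$ cp $HADOOP_HOME/etc/hadoop/core-site.xml $HADOOP_HOME/etc/hadoop/hdfs-site.xml $SPARK_HOME/conf/
$ cp $HIVE_HOME/conf/hive-site.xml $SPARK_HOME/conf/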
2. Copy hive-exec-1.2.1.spark2.jar, hive-metastore-1.2.1.spark2.jar, libfb303-0.9.3.jar, libthrift-0.9.3.jar and spark-hive_2.11-2.3.0.jar into Spark's jars directory.
Also copy hive/lib/jline-2.12.jar into hadoop/share/hadoop/yarn/lib and remove the old jline-0.9.x jar from that directory, so the two jline versions do not conflict.
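A sketch of those copy steps, assuming the Hive jars have been collected in the current directory and the same environment variables as above (the exact name of the old jline jar may differ):
$ cp hive-exec-1.2.1.spark2.jar hive-metastore-1.2.1.spark2.jar libfb303-0.9.3.jar libthrift-0.9.3.jar spark-hive_2.11-2.3.0.jar $SPARK_HOME/jars/
$ cp $HIVE_HOME/lib/jline-2.12.jar $HADOOP_HOME/share/hadoop/yarn/lib/
$ rm $HADOOP_HOME/share/hadoop/yarn/lib/jline-0.9.94.jar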
3. Create a new Maven project from the demo sources in Spark's examples directory, with a pom.xml like this:
<dependencies>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.11</artifactId>
<version>2.3.0</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql_2.11</artifactId>
<version>2.3.0</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-mllib_2.11</artifactId>
<version>2.3.0</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
<version>2.3.0</version>
</dependency>
<dependency>
<groupId>com.github.scopt</groupId>
<artifactId>scopt_2.11</artifactId>
<version>3.7.0</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-hive_2.11</artifactId>
<version>2.3.0</version>
</dependency>
</dependencies>
Converting the Spark examples into a Maven project makes it convenient to debug and study them in IDEA or Eclipse, and to compile them into a jar for submission with spark-submit.
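Packaging is then the usual Maven build (the jar name below depends on the artifactId and version in your pom):
# With the project named spark-examples, this produces target/spark-examples-1.0-SNAPSHOT.jar
$ mvn clean package -DskipTests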
4. Submit the Spark job
bin/spark-submit --class "org.apache.spark.examples.sql.hive.JavaSparkHiveExample" spark-examples-1.0-SNAPSHOT.jar
At this point, you can run Spark's example programs.