How should Spark SQL be configured to access the Hive metastore? [duplicate]

Posted: 2015-06-30 17:08:06

【Question】:

I am trying to use Spark SQL to read a table from the Hive metastore, but Spark reports that the table cannot be found. I suspect Spark SQL is creating a brand-new, empty metastore of its own.

I submit the Spark job with this command:

spark-submit --class etl.EIServerSpark --driver-class-path '/opt/cloudera/parcels/CDH/lib/hive/lib/*' --driver-java-options '-Dspark.executor.extraClassPath=/opt/cloudera/parcels/CDH/lib/hive/lib/*' --jars $HIVE_CLASSPATH --files /etc/hive/conf/hive-site.xml,/etc/hadoop/conf/yarn-site.xml --master yarn-client /root/etl.jar

This is the error:

2015-06-30 17:50:51,563 INFO  [main] util.Utils (Logging.scala:logInfo(59)) - Copying /etc/hive/conf/hive-site.xml to /tmp/spark-568de027-8b66-40fa-97a4-2ec50614f486/hive-site.xml
2015-06-30 17:50:51,568 INFO  [main] spark.SparkContext (Logging.scala:logInfo(59)) - Added file file:/etc/hive/conf/hive-site.xml at http://10.136.149.126:43349/files/hive-site.xml with timestamp 1435683051561
2015-06-30 17:50:51,568 INFO  [main] util.Utils (Logging.scala:logInfo(59)) - Copying /etc/hadoop/conf/yarn-site.xml to /tmp/spark-568de027-8b66-40fa-97a4-2ec50614f486/yarn-site.xml
2015-06-30 17:50:51,570 INFO  [main] spark.SparkContext (Logging.scala:logInfo(59)) - Added file file:/etc/hadoop/conf/yarn-site.xml at http://10.136.149.126:43349/files/yarn-site.xml with timestamp 1435683051568
2015-06-30 17:50:51,637 INFO  [sparkDriver-akka.actor.default-dispatcher-5] util.AkkaUtils (Logging.scala:logInfo(59)) - Connecting to HeartbeatReceiver: akka.tcp://sparkDriver@gateway.edp.hadoop:52818/user/HeartbeatReceiver
2015-06-30 17:50:51,756 INFO  [main] netty.NettyBlockTransferService (Logging.scala:logInfo(59)) - Server created on 40198
2015-06-30 17:50:51,757 INFO  [main] storage.BlockManagerMaster (Logging.scala:logInfo(59)) - Trying to register BlockManager
2015-06-30 17:50:51,759 INFO  [sparkDriver-akka.actor.default-dispatcher-2] storage.BlockManagerMasterActor (Logging.scala:logInfo(59)) - Registering block manager localhost:40198 with 265.4 MB RAM, BlockManagerId(<driver>, localhost, 40198)
2015-06-30 17:50:51,761 INFO  [main] storage.BlockManagerMaster (Logging.scala:logInfo(59)) - Registered BlockManager
2015-06-30 17:50:52,840 INFO  [main] parse.ParseDriver (ParseDriver.java:parse(185)) - Parsing command: SELECT id, name FROM eiserver.eismpt
2015-06-30 17:50:53,141 INFO  [main] parse.ParseDriver (ParseDriver.java:parse(206)) - Parse Completed
2015-06-30 17:50:54,041 INFO  [main] metastore.HiveMetaStore (HiveMetaStore.java:newRawStore(502)) - 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
2015-06-30 17:50:54,064 INFO  [main] metastore.ObjectStore (ObjectStore.java:initialize(247)) - ObjectStore, initialize called
2015-06-30 17:50:54,227 WARN  [main] DataNucleus.General (Log4JLogger.java:warn(96)) - Plugin (Bundle) "org.datanucleus.store.rdbms" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/hive/lib/datanucleus-rdbms-3.2.9.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/jars/datanucleus-rdbms-3.2.9.jar."
2015-06-30 17:50:54,268 WARN  [main] DataNucleus.General (Log4JLogger.java:warn(96)) - Plugin (Bundle) "org.datanucleus.api.jdo" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/hive/lib/datanucleus-api-jdo-3.2.6.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/jars/datanucleus-api-jdo-3.2.6.jar."
2015-06-30 17:50:54,274 WARN  [main] DataNucleus.General (Log4JLogger.java:warn(96)) - Plugin (Bundle) "org.datanucleus" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/hive/lib/datanucleus-core-3.2.10.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/jars/datanucleus-core-3.2.10.jar."
2015-06-30 17:50:54,314 INFO  [main] DataNucleus.Persistence (Log4JLogger.java:info(77)) - Property datanucleus.cache.level2 unknown - will be ignored
2015-06-30 17:50:54,315 INFO  [main] DataNucleus.Persistence (Log4JLogger.java:info(77)) - Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
2015-06-30 17:50:56,109 INFO  [main] metastore.ObjectStore (ObjectStore.java:getPMF(318)) - Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
2015-06-30 17:50:56,170 INFO  [main] metastore.MetaStoreDirectSql (MetaStoreDirectSql.java:<init>(110)) - mysql check failed, assuming we are not on mysql: Lexical error at line 1, column 5.  Encountered: "@" (64), after : "".
2015-06-30 17:50:57,315 INFO  [main] DataNucleus.Datastore (Log4JLogger.java:info(77)) - The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
2015-06-30 17:50:57,316 INFO  [main] DataNucleus.Datastore (Log4JLogger.java:info(77)) - The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
2015-06-30 17:50:57,688 INFO  [main] DataNucleus.Datastore (Log4JLogger.java:info(77)) - The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
2015-06-30 17:50:57,688 INFO  [main] DataNucleus.Datastore (Log4JLogger.java:info(77)) - The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
2015-06-30 17:50:57,842 INFO  [main] DataNucleus.Query (Log4JLogger.java:info(77)) - Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
2015-06-30 17:50:57,844 INFO  [main] metastore.ObjectStore (ObjectStore.java:setConf(230)) - Initialized ObjectStore
2015-06-30 17:50:58,113 INFO  [main] metastore.HiveMetaStore (HiveMetaStore.java:createDefaultRoles(560)) - Added admin role in metastore
2015-06-30 17:50:58,115 INFO  [main] metastore.HiveMetaStore (HiveMetaStore.java:createDefaultRoles(569)) - Added public role in metastore
2015-06-30 17:50:58,198 INFO  [main] metastore.HiveMetaStore (HiveMetaStore.java:addAdminUsers(597)) - No user is added in admin role, since config is empty
2015-06-30 17:50:58,376 INFO  [main] session.SessionState (SessionState.java:start(383)) - No Tez session required at this point. hive.execution.engine=mr.
2015-06-30 17:50:58,525 INFO  [main] metastore.HiveMetaStore (HiveMetaStore.java:logInfo(632)) - 0: get_table : db=eiserver tbl=eismpt
2015-06-30 17:50:58,525 INFO  [main] HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(314)) - ugi=root     ip=unknown-ip-addr      cmd=get_table : db=eiserver tbl=eismpt
2015-06-30 17:50:58,567 ERROR [main] metadata.Hive (Hive.java:getTable(1003)) - NoSuchObjectException(message:eiserver.eismpt table not found)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table(HiveMetaStore.java:1569)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

How should Spark SQL be configured to access a Hive metastore deployed on Postgres? I am using CDH 5.3.2.

Thanks

【Comments】:

【Answer 1】:

Configure Spark to use the Hive metastore thrift server:

Edit $SPARK_HOME/conf/hive-site.xml to remove the direct connection details and add this property:

<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <!-- Replace with your hive-metastore service's thrift URI -->
    <value>thrift://localhost:9083</value>
    <description>URI for client to contact metastore server</description>
  </property>
</configuration>
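
This assumes a Hive metastore service is actually listening on that thrift port. If one is not running, a common way to start a standalone metastore service (which listens on port 9083 by default; paths and service management vary by install) is:

# Start a standalone Hive metastore service (default thrift port 9083)
hive --service metastore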

If hive-site.xml does not exist in $SPARK_HOME/conf, then to connect to the Hive metastore you need to copy the hive-site.xml file into the spark/conf directory. Log in as root and run:

cp  /usr/lib/hive/conf/hive-site.xml    /usr/lib/spark/conf/

Create a Hive context

At the scala> REPL prompt, type the following:

import org.apache.spark.sql.hive.HiveContext
val hiveContext = new HiveContext(sc)

Create a Hive table

hiveContext.sql("CREATE TABLE IF NOT EXISTS TestTable (key INT, value STRING)")

List Hive tables

hiveContext.sql("SHOW TABLES").collect().foreach(println)
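
If the metastore is wired up correctly, the query that failed in the original error log should now resolve as well; a minimal sketch, assuming the eiserver.eismpt table from the question exists:

// Run the same query that failed in the question's log
val rows = hiveContext.sql("SELECT id, name FROM eiserver.eismpt")
rows.collect().foreach(println)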

Test the configuration (optional)

- Stop the Spark SQL thrift server with cd $SPARK_HOME; sbin/stop-thriftserver.sh
- Start the Hive metastore thrift server with cd; ./start-thriftserver.sh and check $HIVE_HOME/logs/metastore.out for any errors. The Spark SQL thrift server cannot connect until this server is up, so it must be running.
- Start the Spark SQL thrift server with cd $SPARK_HOME; sbin/start-thriftserver.sh and check the log file indicated in the command's output. You should see lines like:
16/12/29 20:22:19 INFO metastore: Trying to connect to metastore with URI thrift://localhost:9083
16/12/29 20:22:19 INFO metastore: Connected to metastore.

Run $SPARK_HOME/bin/beeline -u 'jdbc:hive2://localhost:10000/' and try the !tables command to make sure you can list the metadata.

【Comments】:

【Answer 2】:

The doc says to put spark.sql.hive.metastore.sharedPrefixes = org.postgresql in the configuration file. Have you tried that?
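
For illustration, two common ways to set this, sketched under the assumption of a default install layout (note that spark.sql.hive.metastore.sharedPrefixes is only honored by newer Spark releases, so verify it exists in your version's documentation):

# In $SPARK_HOME/conf/spark-defaults.conf
spark.sql.hive.metastore.sharedPrefixes  org.postgresql

# Or passed at submission time
spark-submit --conf spark.sql.hive.metastore.sharedPrefixes=org.postgresql ...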

【Comments】:

【Answer 3】:

Make sure the $HIVE_HOME/conf/hive-site.xml configuration points to the full path of the metastore.

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:derby:;databaseName=/home/hive/metastore_db;create=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
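
Since the question's metastore lives in Postgres rather than Derby, the equivalent properties would look something like the sketch below; the host, port, and database name are hypothetical placeholders, and the PostgreSQL JDBC driver jar must be on the classpath:

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <!-- Hypothetical host/port/database; replace with your metastore DB -->
  <value>jdbc:postgresql://metastore-host:5432/hive_metastore</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>org.postgresql.Driver</value>
</property>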

Put the hive-site.xml file in $SPARK_HOME/conf to point SparkR at the same metastore as Hive.

Hope this solves your problem.

【Comments】:
