Connecting to Hive from Apache Spark [duplicate]

Posted: 2017-07-25 14:34:43

Problem description:

I have a simple program running on a standalone Cloudera VM. I created a managed table in Hive and want to read it from Apache Spark, but the initial connection to the Hive metastore is never established. Please advise.

I am running the program from IntelliJ, and I have copied hive-site.xml from /etc/hive/conf into /etc/spark/conf; even so, the Spark job does not connect to the Hive metastore.
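
For context, a minimal sketch of the part of hive-site.xml that matters here; the thrift URI and port 9083 below are assumptions based on typical Cloudera QuickStart defaults, not values taken from the question:

 <property>
   <name>hive.metastore.uris</name>
   <value>thrift://quickstart.cloudera:9083</value> <!-- assumed metastore host:port -->
 </property>
 <property>
   <name>hive.metastore.warehouse.dir</name>
   <value>hdfs://quickstart.cloudera:8020/user/hive/warehouse</value>
 </property>

If Spark cannot see a file like this on its classpath, it silently falls back to a local Derby metastore, which is what the logs further down end up showing.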

 import org.apache.spark.SparkContext;
 import org.apache.spark.sql.AnalysisException;
 import org.apache.spark.sql.SQLContext;
 import org.apache.spark.sql.SparkSession;
 import org.apache.spark.sql.hive.HiveContext;

 public class ConnectToHive {

     public static void main(String[] args) throws AnalysisException {
         String master = "local[*]";

         // Build a Hive-enabled session pointing at the HDFS warehouse directory
         SparkSession sparkSession = SparkSession
                 .builder().appName(ConnectToHive.class.getName())
                 .config("spark.sql.warehouse.dir", "hdfs://quickstart.cloudera:8020/user/hive/warehouse")
                 .enableHiveSupport()
                 .master(master).getOrCreate();

         SparkContext context = sparkSession.sparkContext();
         context.setLogLevel("ERROR");

         SQLContext sqlCtx = sparkSession.sqlContext();

         // HiveContext is deprecated in Spark 2.x; enableHiveSupport() already covers this
         HiveContext hiveContext = new HiveContext(sparkSession);
         hiveContext.setConf("hive.metastore.warehouse.dir", "hdfs://quickstart.cloudera:8020/user/hive/warehouse");

         hiveContext.sql("SHOW DATABASES").show();
         hiveContext.sql("SHOW TABLES").show();

         sparkSession.close();
     }
 }

The output is below. I expected to see the employee table there so that I could query it. Since I am running on the standalone VM, the Hive metastore is in a local MySQL server.

 +------------+
 |databaseName|
 +------------+
 |     default|
 +------------+

 +--------+---------+-----------+
 |database|tableName|isTemporary|
 +--------+---------+-----------+
 +--------+---------+-----------+

jdbc:mysql://127.0.0.1/metastore?createDatabaseIfNotExist=true is the configured Hive metastore database.
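
In hive-site.xml that JDBC URL would normally appear roughly as below; the driver class, user name and password here are placeholders, not values from the question:

 <property>
   <name>javax.jdo.option.ConnectionURL</name>
   <value>jdbc:mysql://127.0.0.1/metastore?createDatabaseIfNotExist=true</value>
 </property>
 <property>
   <name>javax.jdo.option.ConnectionDriverName</name>
   <value>com.mysql.jdbc.Driver</value> <!-- placeholder driver class -->
 </property>
 <property>
   <name>javax.jdo.option.ConnectionUserName</name>
   <value>hive</value> <!-- placeholder -->
 </property>
 <property>
   <name>javax.jdo.option.ConnectionPassword</name>
   <value>cloudera</value> <!-- placeholder -->
 </property>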

 hive> show databases;
 OK
 default
 sxm
 temp
 Time taken: 0.019 seconds, Fetched: 3 row(s)
 hive> use default;
 OK
 Time taken: 0.015 seconds
 hive> show tables;
 OK
 employee
 Time taken: 0.014 seconds, Fetched: 1 row(s)
 hive> describe formatted employee;
 OK
 # col_name             data_type               comment             

 id                     string                                      
 firstname              string                                      
 lastname               string                                      
 addresses              array<struct<street:string,city:string,state:string>>                       

 # Detailed Table Information        
 Database:              default                  
 Owner:                 cloudera                 
 CreateTime:            Tue Jul 25 06:33:01 PDT 2017     
 LastAccessTime:        UNKNOWN                  
 Protect Mode:          None                     
 Retention:             0                        
 Location:              hdfs://quickstart.cloudera:8020/user/hive/warehouse/employee     
 Table Type:            MANAGED_TABLE            
 Table Parameters:       
    transient_lastDdlTime   1500989581          

 # Storage Information       
 SerDe Library:         org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe  
 InputFormat:           org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat    
 OutputFormat:          org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat   
 Compressed:            No                       
 Num Buckets:           -1                       
 Bucket Columns:        []                       
 Sort Columns:          []                       
 Storage Desc Params:        
    serialization.format    1                   
 Time taken: 0.07 seconds, Fetched: 29 row(s)
 hive> 

Spark logs added:

 log4j:WARN No appenders could be found for logger (org.apache.hadoop.util.Shell).
 log4j:WARN Please initialize the log4j system properly.
 log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
 Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
 17/07/25 11:38:30 INFO SparkContext: Running Spark version 2.1.0
 17/07/25 11:38:30 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
 17/07/25 11:38:30 INFO SecurityManager: Changing view acls to: cloudera
 17/07/25 11:38:30 INFO SecurityManager: Changing modify acls to: cloudera
 17/07/25 11:38:30 INFO SecurityManager: Changing view acls groups to: 
 17/07/25 11:38:30 INFO SecurityManager: Changing modify acls groups to: 
 17/07/25 11:38:30 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(cloudera); groups with view permissions: Set(); users  with modify permissions: Set(cloudera); groups with modify permissions: Set()
 17/07/25 11:38:31 INFO Utils: Successfully started service 'sparkDriver' on port 55232.
 17/07/25 11:38:31 INFO SparkEnv: Registering MapOutputTracker
 17/07/25 11:38:31 INFO SparkEnv: Registering BlockManagerMaster
 17/07/25 11:38:31 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
 17/07/25 11:38:31 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
 17/07/25 11:38:31 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-eb1e611f-1b88-487f-b600-3da1ff8353db
 17/07/25 11:38:31 INFO MemoryStore: MemoryStore started with capacity 1909.8 MB
 17/07/25 11:38:31 INFO SparkEnv: Registering OutputCommitCoordinator
 17/07/25 11:38:31 INFO Utils: Successfully started service 'SparkUI' on port 4040.
 17/07/25 11:38:31 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.0.2.15:4040
 17/07/25 11:38:31 INFO Executor: Starting executor ID driver on host localhost
 17/07/25 11:38:31 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 41433.
 17/07/25 11:38:31 INFO NettyBlockTransferService: Server created on 10.0.2.15:41433
 17/07/25 11:38:31 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
 17/07/25 11:38:31 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.0.2.15, 41433, None)
 17/07/25 11:38:31 INFO BlockManagerMasterEndpoint: Registering block manager 10.0.2.15:41433 with 1909.8 MB RAM, BlockManagerId(driver, 10.0.2.15, 41433, None)
 17/07/25 11:38:31 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.0.2.15, 41433, None)
 17/07/25 11:38:31 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.0.2.15, 41433, None)
 17/07/25 11:38:32 INFO SharedState: Warehouse path is 'file:/home/cloudera/works/JsonHive/spark-warehouse/'.
 17/07/25 11:38:32 INFO HiveUtils: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
 17/07/25 11:38:32 INFO deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
 17/07/25 11:38:32 INFO deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
 17/07/25 11:38:32 INFO deprecation: mapred.committer.job.setup.cleanup.needed is deprecated. Instead, use mapreduce.job.committer.setup.cleanup.needed
 17/07/25 11:38:32 INFO deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack
 17/07/25 11:38:32 INFO deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
 17/07/25 11:38:32 INFO deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
 17/07/25 11:38:32 INFO deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
 17/07/25 11:38:32 INFO deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
 17/07/25 11:38:32 INFO HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
 17/07/25 11:38:32 INFO ObjectStore: ObjectStore, initialize called
 17/07/25 11:38:32 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
 17/07/25 11:38:32 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored
 17/07/25 11:38:34 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
 17/07/25 11:38:35 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
 17/07/25 11:38:35 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
 17/07/25 11:38:35 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
 17/07/25 11:38:35 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
 17/07/25 11:38:35 INFO Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
 17/07/25 11:38:35 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
 17/07/25 11:38:35 INFO ObjectStore: Initialized ObjectStore
 17/07/25 11:38:36 INFO HiveMetaStore: Added admin role in metastore
 17/07/25 11:38:36 INFO HiveMetaStore: Added public role in metastore
 17/07/25 11:38:36 INFO HiveMetaStore: No user is added in admin role, since config is empty
 17/07/25 11:38:36 INFO HiveMetaStore: 0: get_all_databases
 17/07/25 11:38:36 INFO audit: ugi=cloudera ip=unknown-ip-addr  cmd=get_all_databases   
 17/07/25 11:38:36 INFO HiveMetaStore: 0: get_functions: db=default pat=*
 17/07/25 11:38:36 INFO audit: ugi=cloudera ip=unknown-ip-addr  cmd=get_functions: db=default pat=* 
 17/07/25 11:38:36 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
 17/07/25 11:38:36 INFO SessionState: Created local directory: /tmp/76258222-81db-4ac1-9566-1d8f05c3ecba_resources
 17/07/25 11:38:36 INFO SessionState: Created HDFS directory: /tmp/hive/cloudera/76258222-81db-4ac1-9566-1d8f05c3ecba
 17/07/25 11:38:36 INFO SessionState: Created local directory: /tmp/cloudera/76258222-81db-4ac1-9566-1d8f05c3ecba
 17/07/25 11:38:36 INFO SessionState: Created HDFS directory: /tmp/hive/cloudera/76258222-81db-4ac1-9566-1d8f05c3ecba/_tmp_space.db
 17/07/25 11:38:36 INFO HiveClientImpl: Warehouse location for Hive client (version 1.2.1) is file:/home/cloudera/works/JsonHive/spark-warehouse/
 17/07/25 11:38:36 INFO HiveMetaStore: 0: get_database: default
 17/07/25 11:38:36 INFO audit: ugi=cloudera ip=unknown-ip-addr  cmd=get_database: default   
 17/07/25 11:38:36 INFO HiveMetaStore: 0: get_database: global_temp
 17/07/25 11:38:36 INFO audit: ugi=cloudera ip=unknown-ip-addr  cmd=get_database: global_temp   
 17/07/25 11:38:36 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
 +------------+
 |databaseName|
 +------------+
 |     default|
 +------------+

 +--------+---------+-----------+
 |database|tableName|isTemporary|
 +--------+---------+-----------+
 +--------+---------+-----------+


 Process finished with exit code 0

Update

/usr/lib/hive/conf/hive-site.xml was not on the classpath, so the table was not being read; after adding it to the classpath it works fine. I only hit this because I was running from IntelliJ; in production the Spark conf folder is linked to hive-site.xml.
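
One way to get that file onto the classpath when running a Maven project from IntelliJ is to register the Hive conf directory as an extra resource directory (a sketch; simply copying hive-site.xml into src/main/resources works just as well):

 <build>
   <resources>
     <resource>
       <directory>src/main/resources</directory>
     </resource>
     <!-- pull hive-site.xml onto the runtime classpath -->
     <resource>
       <directory>/usr/lib/hive/conf</directory>
       <includes>
         <include>hive-site.xml</include>
       </includes>
     </resource>
   </resources>
 </build>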

Comments:

You no longer need to create a HiveContext; calling enableHiveSupport on the SparkSession is enough. Try sparkSession.sql("SHOW DATABASES").show();

No luck. I tried that.

What if you drop .config("spark.sql.warehouse.dir", ...)? Spark should pick up the correct configuration by itself. If not, can you share the logs of the run?

Added the Spark logs to the question.

Does "spark.sql.warehouse.dir" affect the logs? SharedState: Warehouse path is 'file:/home/cloudera/works/JsonHive/spark-warehouse/'. MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY. HiveClientImpl: Warehouse location for Hive client (version 1.2.1) is file:/home/cloudera/works/JsonHive/spark-warehouse/

Answer 1:
 17/07/25 11:38:35 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY

This hints that you are not connected to the remote Hive metastore (which you have configured to use MySQL), and that the XML file is not on your classpath correctly.

You can also do it programmatically, without the XML, before creating the SparkSession:

System.setProperty("hive.metastore.uris", "thrift://METASTORE:9083");
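
Expanded into a minimal runnable sketch, assuming the QuickStart metastore listens on the default thrift port 9083 (the URI is an assumption, not something stated in the question):

 import org.apache.spark.sql.SparkSession;

 public class ConnectToHiveNoXml {
     public static void main(String[] args) {
         // Assumed metastore URI; set it before the first Hive client is created
         System.setProperty("hive.metastore.uris", "thrift://quickstart.cloudera:9083");

         SparkSession spark = SparkSession.builder()
                 .appName(ConnectToHiveNoXml.class.getName())
                 .master("local[*]")
                 .enableHiveSupport()
                 .getOrCreate();

         // If the metastore is reachable, the Hive databases and the employee table show up here
         spark.sql("SHOW DATABASES").show();
         spark.sql("SHOW TABLES").show();

         spark.close();
     }
 }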

How to connect to a Hive metastore programmatically in SparkSQL?

Discussion:

Thanks... in spark_home we have a link pointing to hive-site.xml, in which I specified the metastore and all the other details: /usr/lib/hive/conf/hive-site.xml

Link or copy it, sure, but that is not strictly necessary. Another way is to define the HADOOP_CONF_DIR environment variable.
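
A sketch of the HADOOP_CONF_DIR alternative mentioned above; which directory to point at depends on where hive-site.xml actually lives, and the path here is just the one from this thread:

 # make spark-submit / spark-shell put the directory containing hive-site.xml on the classpath
 export HADOOP_CONF_DIR=/usr/lib/hive/conf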
