Load data from HDFS into Spark 2.1 table using thriftserver and beeline error

Posted: 2018-01-31 21:25:44

【Question】

My development environment:

- Spark version: 2.1.0 (spark-2.1.0-bin-hadoop2.7)
- JDK: 1.7
- Hadoop: hadoop-2.7.2
- Hive: 1.2.1
- Deploy mode: Spark on YARN

I set this in hive-site.xml:

<property>
  <name>hive.exec.stagingdir</name>
  <value>/tmp/hive/spark-$user.name</value>
</property>
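
For reference, this is roughly how I start the Thrift server and connect (the host, port, and user below are defaults I'm assuming for illustration, not copied from my cluster):

# Start the Spark Thrift server on YARN (start-thriftserver.sh ships in Spark's sbin/)
$SPARK_HOME/sbin/start-thriftserver.sh --master yarn

# Connect with Beeline; jdbc:hive2://localhost:10000 is the default Thrift server address (assumed)
beeline -u jdbc:hive2://localhost:10000 -n hadoop

# The failing statement, as it appears in the log below:
#   LOAD DATA INPATH '/user/zxh/testdata/td/dbdata' INTO TABLE at;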
When I use the Thrift server to load data from HDFS into a Spark table, Beeline reports this error:

Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to move source hdfs://hadoop.td.com/user/zxh/testdata/td/dbdata to destination hdfs://hadoop.td.com/warehouse/spark/dmpv3.db/at/dbdata

Is this a bug in Spark 2.1? How can I fix it? Thanks!

Full error log:

17/08/23 17:21:43 INFO thriftserver.SparkExecuteStatementOperation: Running query 'LOAD DATA INPATH '/user/zxh/testdata/td/dbdata' INTO TABLE at' with 3d810df4-55c8-48b7-a889-091b5dabe284
17/08/23 17:21:43 INFO execution.SparkSqlParser: Parsing command: LOAD DATA INPATH '/user/zxh/testdata/td/dbdata' INTO TABLE at
17/08/23 17:21:43 INFO metastore.HiveMetaStore: 53: get_database: dmpv3
17/08/23 17:21:43 INFO HiveMetaStore.audit: ugi=hadoop	ip=unknown-ip-addr	cmd=get_database: dmpv3	
17/08/23 17:21:43 INFO metastore.HiveMetaStore: 53: get_table : db=dmpv3 tbl=at
17/08/23 17:21:43 INFO HiveMetaStore.audit: ugi=hadoop	ip=unknown-ip-addr	cmd=get_table : db=dmpv3 tbl=at	
17/08/23 17:21:43 INFO metastore.HiveMetaStore: 53: get_table : db=dmpv3 tbl=at
17/08/23 17:21:43 INFO HiveMetaStore.audit: ugi=hadoop	ip=unknown-ip-addr	cmd=get_table : db=dmpv3 tbl=at	
17/08/23 17:21:43 INFO parser.CatalystSqlParser: Parsing command: string
17/08/23 17:21:43 INFO parser.CatalystSqlParser: Parsing command: string
17/08/23 17:21:43 INFO parser.CatalystSqlParser: Parsing command: string
17/08/23 17:21:43 INFO metastore.HiveMetaStore: 53: get_database: dmpv3
17/08/23 17:21:43 INFO HiveMetaStore.audit: ugi=hadoop	ip=unknown-ip-addr	cmd=get_database: dmpv3	
17/08/23 17:21:44 INFO metastore.HiveMetaStore: 53: get_table : db=dmpv3 tbl=at
17/08/23 17:21:44 INFO HiveMetaStore.audit: ugi=hadoop	ip=unknown-ip-addr	cmd=get_table : db=dmpv3 tbl=at	
17/08/23 17:21:44 INFO metastore.HiveMetaStore: 53: get_table : db=dmpv3 tbl=at
17/08/23 17:21:44 INFO HiveMetaStore.audit: ugi=hadoop	ip=unknown-ip-addr	cmd=get_table : db=dmpv3 tbl=at	
17/08/23 17:21:44 INFO metastore.HiveMetaStore: 53: get_table : db=dmpv3 tbl=at
17/08/23 17:21:44 INFO HiveMetaStore.audit: ugi=hadoop	ip=unknown-ip-addr	cmd=get_table : db=dmpv3 tbl=at	
17/08/23 17:21:44 ERROR thriftserver.SparkExecuteStatementOperation: Error executing query, currentState RUNNING, 
java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.spark.sql.hive.client.Shim_v0_14.loadTable(HiveShim.scala:716)
	at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadTable$1.apply$mcV$sp(HiveClientImpl.scala:672)
	at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadTable$1.apply(HiveClientImpl.scala:672)
	at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadTable$1.apply(HiveClientImpl.scala:672)
	at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:283)
	at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:230)
	at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:229)
	at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:272)
	at org.apache.spark.sql.hive.client.HiveClientImpl.loadTable(HiveClientImpl.scala:671)
	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadTable$1.apply$mcV$sp(HiveExternalCatalog.scala:741)
	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadTable$1.apply(HiveExternalCatalog.scala:739)
	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadTable$1.apply(HiveExternalCatalog.scala:739)
	at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:95)
	at org.apache.spark.sql.hive.HiveExternalCatalog.loadTable(HiveExternalCatalog.scala:739)
	at org.apache.spark.sql.catalyst.catalog.SessionCatalog.loadTable(SessionCatalog.scala:319)
	at org.apache.spark.sql.execution.command.LoadDataCommand.run(tables.scala:302)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:87)
	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:87)
	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:185)
	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:592)
	at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:699)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:220)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:163)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:160)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:173)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to move source hdfs://hadoop.td.com/user/zxh/testdata/td/dbdata to destination hdfs://hadoop.td.com/warehouse/spark/dmpv3.db/at/dbdata
	at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:2644)
	at org.apache.hadoop.hive.ql.metadata.Hive.copyFiles(Hive.java:2711)
	at org.apache.hadoop.hive.ql.metadata.Hive.loadTable(Hive.java:1645)
	... 47 more
Caused by: java.io.IOException: Filesystem closed
	at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:808)
	at org.apache.hadoop.hdfs.DFSClient.getEZForPath(DFSClient.java:3288)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getEZForPath(DistributedFileSystem.java:2093)
	at org.apache.hadoop.hdfs.client.HdfsAdmin.getEncryptionZoneForPath(HdfsAdmin.java:289)
	at org.apache.hadoop.hive.shims.Hadoop23Shims$HdfsEncryptionShim.isPathEncrypted(Hadoop23Shims.java:1221)
	at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:2607)
	... 49 more
17/08/23 17:21:44 ERROR thriftserver.SparkExecuteStatementOperation: Error running hive query: 
org.apache.hive.service.cli.HiveSQLException: java.lang.reflect.InvocationTargetException
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:258)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:163)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:160)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:173)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)

【Comments】

Could you add the query you tried? The statement shown (LOAD DATA INPATH '/user/zxh/testdata/td/dbdata' INTO TABLE) seems incomplete. — Thanks @mrsrinivas, but my table name is "at".

【Answer 1】

I use the spark-1.6.2 Thrift server and Beeline with a spark-2.1 + YARN core, and the problem does not occur. But I can't guarantee there are no other issues!

【Discussion】

【Answer 2】

This is a bug; it affects Spark 2.1 and 2.2 (I tested both):

https://issues.apache.org/jira/browse/SPARK-21725

https://issues.apache.org/jira/browse/SPARK-11083
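
The nested java.io.IOException: Filesystem closed in your trace means the Hive client closed the shared, cached HDFS FileSystem instance, so any later call that goes through the same cached instance fails. One workaround that is commonly suggested for this class of error (my assumption; the tickets above track the real fix) is to disable the HDFS FileSystem cache in the Hadoop configuration used by the Thrift server, so sessions stop sharing one instance:

<property>
  <!-- Assumed workaround: give each FileSystem.get() call its own client
       instead of a shared cached one that a Hive session can close -->
  <name>fs.hdfs.impl.disable.cache</name>
  <value>true</value>
</property>

The trade-off is extra connection overhead to the NameNode, since each FileSystem.get() now creates a fresh client.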

【Discussion】

For now I use the Spark 1.6.3 Thrift server with Beeline, and will wait to use Spark 2.x for spark-sql.
