Hive AvroSerde on cluster exception
Posted: 2015-05-27 13:12:06

I have an Avro file that I need to map to a Hive table. The best solution is to use AvroSerDe, so I ran the following commands on the cluster:
CREATE EXTERNAL TABLE my_db.new_table
ROW FORMAT SERDE
  'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS INPUTFORMAT
  'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
TBLPROPERTIES (
  'avro.schema.url'='hdfs:///folder/mySchema.avsc');

LOAD DATA INPATH '/folder/myFile.avro' OVERWRITE INTO TABLE my_db.new_table;
All of these commands executed successfully, but when I tried to fetch the data with a Hive query, I got an exception in the Hadoop map task:
SELECT
  user.name AS u_name
FROM my_db.new_table
LATERAL VIEW explode(users) user_table AS user;
Exception:
2015-05-27 13:22:24,838 DEBUG [main] org.apache.hadoop.hive.serde2.avro.AvroSerdeUtils: Failed to open file system for uri hdfs:///folder/mySchema.avsc assuming it is not a FileSystem url
java.io.IOException: Incomplete HDFS URI, no host: hdfs:///folder/mySchema.avsc
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:142)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2596)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
at org.apache.hadoop.hive.serde2.avro.AvroSerdeUtils.getSchemaFromFS(AvroSerdeUtils.java:149)
at org.apache.hadoop.hive.serde2.avro.AvroSerdeUtils.determineSchemaOrThrowException(AvroSerdeUtils.java:110)
at org.apache.hadoop.hive.ql.io.avro.AvroGenericRecordReader.getSchema(AvroGenericRecordReader.java:112)
at org.apache.hadoop.hive.ql.io.avro.AvroGenericRecordReader.<init>(AvroGenericRecordReader.java:70)
at org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat.getRecordReader(AvroContainerInputFormat.java:51)
at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.<init>(CombineHiveRecordReader.java:65)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:298)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.<init>(HadoopShimsSecure.java:259)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getRecordReader(HadoopShimsSecure.java:386)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:652)
at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.<init>(MapTask.java:169)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:429)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Hive version: 0.14

What is the cause of this exception? Thanks!
Answer:

The problem was in

TBLPROPERTIES (
  'avro.schema.url'='hdfs:///folder/mySchema.avsc');

avro.schema.url needs to include the NameNode host and port in the URL, so the correct version is:

TBLPROPERTIES (
  'avro.schema.url'='hdfs://MASTER_NODE_NAME:port/folder/mySchema.avsc');
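The failure makes sense given the log line "Incomplete HDFS URI, no host": in `hdfs:///folder/mySchema.avsc`, the three slashes leave the URI's authority (host) component empty, and `DistributedFileSystem.initialize` refuses such a URI on the map-task side. A rough illustration of the check in Python (modeling only the URI shape, not Hadoop's actual code path; port 8020 is just a common NameNode default, the real value depends on your cluster):

```python
from urllib.parse import urlparse

def hdfs_uri_has_host(uri: str) -> bool:
    """Mimic the condition that trips DistributedFileSystem.initialize:
    an hdfs:// URI must carry a non-empty host (authority) component."""
    parsed = urlparse(uri)
    return parsed.scheme == 'hdfs' and bool(parsed.netloc)

# The URI from the failing table definition: three slashes means empty host.
print(hdfs_uri_has_host('hdfs:///folder/mySchema.avsc'))                       # False
# The corrected form names the NameNode and port explicitly.
print(hdfs_uri_has_host('hdfs://MASTER_NODE_NAME:8020/folder/mySchema.avsc'))  # True
```

If you are unsure of the NameNode address, `hdfs getconf -confKey fs.defaultFS` should print the cluster's default filesystem URI, which you can use as the prefix of `avro.schema.url`.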