Error querying Avro data using Pig: Utf8 cannot be cast to java.lang.String

Posted: 2015-06-12 08:05:50

Question:

I have downloaded Twitter data into HDFS using Flume, but when I try to query it with Pig I get a ClassCastException: Utf8 cannot be cast to String.

grunt> A = LOAD '/apps/hive/warehouse/twtr_uk.db/twitterdata_09062015/' USING AvroStorage('{
>>   "type" : "record",
>>   "name" : "Doc",
>>   "doc" : "adoc",
>>   "fields" : [ {
>>     "name" : "id",
>>     "type" : "string"
>>   }, {
>>     "name" : "user_friends_count",
>>     "type" : [ "int", "null" ]
>>   }, {
>>     "name" : "user_location",
>>     "type" : [ "string", "null" ]
>>   }, {
>>     "name" : "user_description",
>>     "type" : [ "string", "null" ]
>>   }, {
>>     "name" : "user_statuses_count",
>>     "type" : [ "int", "null" ]
>>   }, {
>>     "name" : "user_followers_count",
>>     "type" : [ "int", "null" ]
>>   }, {
>>     "name" : "user_name",
>>     "type" : [ "string", "null" ]
>>   }, {
>>     "name" : "user_screen_name",
>>     "type" : [ "string", "null" ]
>>   }, {
>>     "name" : "created_at",
>>     "type" : [ "string", "null" ]
>>   }, {
>>     "name" : "text",
>>     "type" : [ "string", "null" ]
>>   }, {
>>     "name" : "retweet_count",
>>     "type" : [ "long", "null" ]
>>   }, {
>>     "name" : "retweeted",
>>     "type" : [ "boolean", "null" ]
>>   }, {
>>     "name" : "in_reply_to_user_id",
>>     "type" : [ "long", "null" ]
>>   }, {
>>     "name" : "source",
>>     "type" : [ "string", "null" ]
>>   }, {
>>     "name" : "in_reply_to_status_id",
>>     "type" : [ "long", "null" ]
>>   }, {
>>     "name" : "media_url_https",
>>     "type" : [ "string", "null" ]
>>   }, {
>>     "name" : "expanded_url",
>>     "type" : [ "string", "null" ]
>>   } ]
>> }');
grunt> illustrate A;
2015-06-11 10:07:05,361 [main] INFO  org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://sandbox.hortonworks.com:8020
2015-06-11 10:07:05,382 [main] WARN  org.apache.pig.data.SchemaTupleBackend - SchemaTupleBackend has already been initialized
2015-06-11 10:07:05,382 [main] INFO  org.apache.pig.newplan.logical.optimizer.LogicalPlanOptimizer - RULES_ENABLED=[ConstantCalculator, LoadTypeCastInserter, PredicatePushdownOptimizer, StreamTypeCastInserter], RULES_DISABLED=[AddForEach, ColumnMapKeyPrune, GroupByConstParallelSetter, LimitOptimizer, MergeFilter, MergeForEach, PartitionFilterOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter]
2015-06-11 10:07:05,383 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
2015-06-11 10:07:05,384 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 1
2015-06-11 10:07:05,384 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 1
2015-06-11 10:07:05,385 [main] INFO  org.apache.pig.tools.pigstats.mapreduce.MRScriptState - Pig script settings are added to the job
2015-06-11 10:07:05,385 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
2015-06-11 10:07:05,426 [main] WARN  org.apache.pig.data.SchemaTupleBackend - SchemaTupleBackend has already been initialized
2015-06-11 10:07:05,426 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$Map - Aliases being processed per job phase (AliasName[line,offset]): M: A[123,3] C:  R:
2015-06-11 10:07:05,436 [main] INFO  org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 6
2015-06-11 10:07:05,436 [main] INFO  org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 6
java.lang.ClassCastException: org.apache.avro.util.Utf8 cannot be cast to java.lang.String
        at org.apache.pig.impl.util.avro.AvroTupleWrapper.getMemorySize(AvroTupleWrapper.java:201)
        at org.apache.pig.impl.util.avro.AvroTupleWrapper.getMemorySize(AvroTupleWrapper.java:178)
        at org.apache.pig.pen.util.ExampleTuple.getMemorySize(ExampleTuple.java:97)
        at org.apache.pig.data.DefaultAbstractBag.sampleContents(DefaultAbstractBag.java:101)

ERROR 2997: Encountered IOException. Exception
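The failure happens inside Pig's ILLUSTRATE sampling path (ExampleTuple and DefaultAbstractBag.sampleContents in the trace above), where AvroTupleWrapper.getMemorySize casts Avro's Utf8 values to String. A minimal workaround sketch, assuming the goal is just to eyeball a few records: DUMP does not go through the ExampleTuple sampling machinery shown in the trace, so it may avoid the failing cast entirely. Field names below are taken from the schema above.

-- Inspect a handful of records without ILLUSTRATE:
B = LIMIT A 5;
DUMP B;

-- Another hedge: force the Utf8-to-chararray conversion explicitly
-- before inspecting (harmless if the fields are already chararray):
C = FOREACH A GENERATE (chararray) id, (chararray) user_name, (chararray) text;
DUMP C;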

Comments:

Which version of Pig are you running? Apache Pig version 0.14.0.2.2.0.0-204; I'm using the HDP Sandbox, so the versions should be compatible.

Answer 1:

If your Avro data is in HDFS, you don't need to specify the Avro schema explicitly; try loading it as follows.

A = LOAD '/apps/hive/warehouse/twtr_uk.db/twitterdata_09062015/' USING AvroStorage();
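If that loads, DESCRIBE is a quick way to confirm what was inferred, assuming the .avro files carry their writer schema in the file header (which is what makes the schema-less load possible):

DESCRIBE A;   -- prints the schema AvroStorage read from the Avro file header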

Discussion:

Thanks for taking the time, but I get the same error when running ILLUSTRATE. Loading some other Avro data with the syntax you suggested gives a different error (below): INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 3 2015-06-24 14:38:52,748 [main] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 3 org.apache.avro.AvroRuntimeException: java.io.IOException: Block size invalid or too large for this implementation: -40 at org.apache.avro.file.DataFileStream.hasNextBlock(DataFileStream.java:275)
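The "Block size invalid or too large" message usually indicates that the reader hit a file in the input directory that is not a complete Avro container, for example an in-flight Flume temporary or a file written without an Avro event serializer. A hedged sketch that loads only finished Avro files; the *.avro glob is an assumption about how the HDFS sink names completed files:

A = LOAD '/apps/hive/warehouse/twtr_uk.db/twitterdata_09062015/*.avro' USING AvroStorage();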
