FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask


【Title】FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask 【Posted】2017-09-27 04:03:47 【Question】

I am new to Hadoop and am trying to run some join queries on Hive. I created two tables (table1 and table2) and executed a join query, but I get the following error message:

FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask

However, when I run the same query in the Hive UI, it executes and returns the correct results. Can someone help explain what might be going wrong here?

【Question Comments】:

Hive has no single "UI" as such. Where are you running the query from?

I am running it through the Hive editor at quickstart.cloudera:8888

That is called Hue... So where were you running the query when you got the error? The hive command is deprecated, by the way.

Yes, it is Hue. I was running the query in the terminal. Plain SQL commands run fine; only join queries fail, after which I get this error: 'hive> select t1.Id,t1.Name,t2.Id,t2.Name from table1 t1 join table2 t2 on t1.id=t2.id; Query ID = root_20170926212222_d79b2469-efc1-49db-a2d5-e68a5e1dca87 Total jobs = 1 FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask' In the Hue editor, however, the same query runs fine.

Hue runs queries through HiveServer2. By using the Hive CLI you are bypassing it. blog.cloudera.com/blog/2014/02/…

【Solution 1】:

I just added the following before running the query, and it worked:

SET hive.auto.convert.join=false;
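
For context (an editor's note, not part of the original answer): with hive.auto.convert.join=true, Hive tries to convert a common join into a map join by first building a hash table of the smaller table in a local MapredLocalTask. When that local task dies (often from hitting local memory limits), you get exactly this "return code 1 from ...MapredLocalTask" error; disabling the conversion makes Hive fall back to a plain reduce-side join. A minimal session sketch, reusing table1/table2 from the question:

-- Print the current value of the flag
SET hive.auto.convert.join;

-- Disable automatic map-join conversion for this session only
SET hive.auto.convert.join=false;

-- The join now runs as a regular reduce-side (shuffle) join,
-- skipping the local hash-table build that was failing
select t1.Id, t1.Name, t2.Id, t2.Name
from table1 t1 join table2 t2 on t1.id = t2.id;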

【Comments】:

This is awesome. It worked fine for me :) Thanks.

【Solution 2】:

Just put this command before your query:

SET hive.auto.convert.join=false;

It definitely works!

【Comments】:

Please explain... don't just suggest tweaking a config setting with a three-line "explanation". Some background on what hive.auto.convert.join itself does: docs.qubole.com/en/latest/user-guide/engines/hive/…, cwiki.apache.org/confluence/display/Hive/…

【Solution 3】:

I also hit this problem on the Cloudera Quick Start VM 5.12 and solved it by executing the following statement at the hive prompt:

SET hive.auto.convert.join=false;

I hope the following walkthrough is useful to you:

Step 1: Import all tables from the retail_db database in MySQL

sqoop import-all-tables \
--connect jdbc:mysql://quickstart.cloudera:3306/retail_db \
--username retail_dba \
--password cloudera \
--num-mappers 1 \
--warehouse-dir /user/cloudera/sqoop/import-all-tables-text \
--as-textfile
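
A quick way to confirm the import landed where the external tables will expect it (an added sanity check, not part of the original walkthrough; the dfs command also works from inside the Hive CLI, so no separate shell is needed):

-- List the per-table directories written by sqoop import-all-tables
dfs -ls /user/cloudera/sqoop/import-all-tables-text;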

Step 2: In Hive, create a database named retail_db and the required tables

create database retail_db;
use retail_db;

create external table categories(
  category_id int,
  category_department_id int,
  category_name string)
row format delimited 
  fields terminated by ','
stored as textfile
location '/user/cloudera/sqoop/import-all-tables-text/categories';

create external table customers(
  customer_id int,
  customer_fname string,
  customer_lname string,
  customer_email string,
  customer_password string,
  customer_street string,
  customer_city string,
  customer_state string,
  customer_zipcode string)
row format delimited 
  fields terminated by ','
stored as textfile
location '/user/cloudera/sqoop/import-all-tables-text/customers';

create external table departments(
  department_id int,
  department_name string)
row format delimited
  fields terminated by ','
stored as textfile
location '/user/cloudera/sqoop/import-all-tables-text/departments';

create external table order_items(
  order_item_id int,
  order_item_order_id int,
  order_item_product_id int,
  order_item_quantity int,
  order_item_subtotal float,
  order_item_product_price float)
row format delimited
  fields terminated by ','
stored as textfile
location '/user/cloudera/sqoop/import-all-tables-text/order_items';

create external table orders(
  order_id int,
  order_date string,
  order_customer_id int,
  order_status string)
row format delimited
  fields terminated by ','
stored as textfile
location '/user/cloudera/sqoop/import-all-tables-text/orders';

create external table products(
  product_id int,
  product_category_id int,
  product_name string,
  product_description string,
  product_price float,
  product_image string)
row format delimited
  fields terminated by ','
stored as textfile
location '/user/cloudera/sqoop/import-all-tables-text/products';
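
Before moving on to joins, it can help to spot-check that the external tables actually see the imported files (a hypothetical verification step, not part of the original walkthrough):

use retail_db;
show tables;

-- Both counts should be non-zero if the sqoop import succeeded
select count(*) from orders;
select count(*) from order_items;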

Step 3: Execute the JOIN query

SET hive.cli.print.current.db=true;

select o.order_date, sum(oi.order_item_subtotal)
from orders o join order_items oi on (o.order_id = oi.order_item_order_id)
group by o.order_date 
limit 10;

The above query failed with the following error:

Query ID = cloudera_20171029182323_6eedd682-256b-466c-b2e5-58ea100715fb Total jobs = 1 FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask

Step 4: Solved the above problem by executing the following statement at the hive prompt:

SET hive.auto.convert.join=false;

Step 5: Query results

select o.order_date, sum(oi.order_item_subtotal)
from orders o join order_items oi on (o.order_id = oi.order_item_order_id)
group by o.order_date 
limit 10;

Query ID = cloudera_20171029182525_cfc70553-89d2-4c61-8a14-4bbeecadb3cf
Total jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1509278183296_0005, Tracking URL = http://quickstart.cloudera:8088/proxy/application_1509278183296_0005/
Kill Command = /usr/lib/hadoop/bin/hadoop job  -kill job_1509278183296_0005
Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 1
2017-10-29 18:25:19,861 Stage-1 map = 0%,  reduce = 0%
2017-10-29 18:25:26,181 Stage-1 map = 50%,  reduce = 0%, Cumulative CPU 2.72 sec
2017-10-29 18:25:27,240 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 5.42 sec
2017-10-29 18:25:32,479 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 8.01 sec
MapReduce Total cumulative CPU time: 8 seconds 10 msec
Ended Job = job_1509278183296_0005
Launching Job 2 out of 2
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1509278183296_0006, Tracking URL = http://quickstart.cloudera:8088/proxy/application_1509278183296_0006/
Kill Command = /usr/lib/hadoop/bin/hadoop job  -kill job_1509278183296_0006
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2017-10-29 18:25:38,676 Stage-2 map = 0%,  reduce = 0%
2017-10-29 18:25:43,925 Stage-2 map = 100%,  reduce = 0%, Cumulative CPU 0.85 sec
2017-10-29 18:25:49,142 Stage-2 map = 100%,  reduce = 100%, Cumulative CPU 2.13 sec
MapReduce Total cumulative CPU time: 2 seconds 130 msec
Ended Job = job_1509278183296_0006
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 2  Reduce: 1   Cumulative CPU: 8.01 sec   HDFS Read: 8422614 HDFS Write: 17364 SUCCESS
Stage-Stage-2: Map: 1  Reduce: 1   Cumulative CPU: 2.13 sec   HDFS Read: 22571 HDFS Write: 407 SUCCESS
Total MapReduce CPU Time Spent: 10 seconds 140 msec
OK
2013-07-25 00:00:00.0   68153.83132743835
2013-07-26 00:00:00.0   136520.17266082764
2013-07-27 00:00:00.0   101074.34193611145
2013-07-28 00:00:00.0   87123.08192253113
2013-07-29 00:00:00.0   137287.09244918823
2013-07-30 00:00:00.0   102745.62186431885
2013-07-31 00:00:00.0   131878.06256484985
2013-08-01 00:00:00.0   129001.62241744995
2013-08-02 00:00:00.0   109347.00200462341
2013-08-03 00:00:00.0   95266.89186286926
Time taken: 35.721 seconds, Fetched: 10 row(s)

【Comments】:

【Solution 4】:

Try setting the AuthMech parameter on the connection.

I set it to 2 and defined a username.

That solved my problem with CTAS (CREATE TABLE AS SELECT).

Regards, Okan
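
For reference (an assumption on my part, not stated in the original answer): AuthMech is a setting of the Cloudera Hive JDBC driver, where 2 selects user-name authentication, so the connection URL might look something like this (host, port, and UID are placeholders):

jdbc:hive2://quickstart.cloudera:10000/default;AuthMech=2;UID=cloudera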

【Comments】:

【Solution 5】:

In my case, adding a configuration parameter to the execute call solved this problem. The issue was caused by a write-access violation; you should use configuration to make sure you have write permission.

【Comments】:

【Solution 6】:

In my case, the problem was that no queue had been set, so I did the following (see the sketch after this answer):

set mapred.job.queue.name=<queue name>

This solved my problem. Hope it helps someone.
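
A hedged sketch of what that looks like in a session ("etl" is a placeholder; use a queue that actually exists on your cluster):

-- Route this session's MapReduce jobs to a specific YARN queue.
-- Older MR1-style property name:
set mapred.job.queue.name=etl;
-- MR2/YARN equivalent:
set mapreduce.job.queuename=etl;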

【Comments】:

【Solution 7】:

I ran into the same problem when using the Hue interface; here is the answer. Create /user/admin in HDFS and change its ownership with the following commands:

[root@ip-10-0-0-163 ~]# su - hdfs

[hdfs@ip-10-0-0-163 ~]$ hadoop fs -mkdir /user/admin

[hdfs@ip-10-0-0-163 ~]$ hadoop fs -chown admin /user/admin

[hdfs@ip-10-0-0-163 ~]$ exit

【Comments】:
