Spark job fails to connect to an executor at runtime: org.apache.spark.shuffle.FetchFailedException: Failed to connect to xxxx/x

Posted by shaozhiqi


Error: org.apache.spark.shuffle.FetchFailedException: Failed to connect to xxxx/xx.xx.xx.xx:xxxx

Troubleshooting back and forth ruled out the firewall and similar network suspects. Repeatedly reviewing the logs turned up:

2019-09-30 11:00:46,521 | WARN | [dispatcher-event-loop-50] | Lost task 5.0 in stage 1.2 (TID 24441, dggsafe0321-cm, executor 7): ExecutorLostFailure (executor 7 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 4.6 GB of 4.5 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead. | org.apache.spark.internal.Logging$class.logWarning(Logging.scala:66)
2019-09-30 11:00:46,521 | INFO | [dag-scheduler-event-loop] | Resubmitted ShuffleMapTask(6, 25830), so marking it as still running | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2019-09-30 11:00:46,522 | WARN | [dispatcher-event-loop-50] | Lost task 4.0 in stage 1.2 (TID 24440, dggsafe0321-cm, executor 7): ExecutorLostFailure (executor 7 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 4.6 GB of 4.5 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead. | org.apache.spark.internal.Logging$class.logWarning(Logging.scala:66)
2019-09-30 11:00:46,522 | INFO | [dag-scheduler-event-loop] | Resubmitted ShuffleMapTask(6, 15603), so marking it as still running | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)

The logs show the real cause: the executor's container exceeded its memory limit and was killed by YARN, which left the node looking hung and unreachable, so subsequent shuffle fetches against the dead executor surfaced as FetchFailedException. On YARN the container cap is the executor heap plus spark.yarn.executor.memoryOverhead (by default the larger of 384 MB and 10% of the executor memory), which is where the 4.5 GB limit in the log comes from. Increasing the executor memory and resubmitting the job fixes it:

--driver-memory 4g --executor-memory 6g 
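
For reference, a fuller invocation might look like the sketch below. This is only a sketch: the master, deploy mode, overhead value, and jar name are placeholder assumptions rather than details from the original job, and it assumes Spark 2.x on YARN, where the setting is named spark.yarn.executor.memoryOverhead (renamed spark.executor.memoryOverhead in Spark 3.x). Boosting the overhead directly, as the log message itself advises, is an alternative when the heap is already large enough:

# A minimal sketch, assuming Spark 2.x on YARN; the master, deploy mode,
# overhead value, and application jar below are hypothetical placeholders.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --driver-memory 4g \
  --executor-memory 6g \
  --conf spark.yarn.executor.memoryOverhead=1024 \
  your-app.jar

With --executor-memory 6g the default overhead alone (10% of 6 GB, about 614 MB) already pushes the container request to roughly 6.6 GB, comfortably above the 4.5 GB cap that was being breached.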

 

That covers the Spark runtime failure to connect to an executor (org.apache.spark.shuffle.FetchFailedException: Failed to connect to xxxx/x). If it did not solve your problem, the following articles may help:

A walkthrough of Spark partition count, task count, core count, worker node count, and executor count

Kafka partitions, Spark executors, tasks, and RDDs

Spark job tuning: allocating resources sensibly

How to determine the number of Spark partitions, tasks, cores, worker nodes, and executors in a job

Spark resource optimization