Hadoop HA standby NameNode startup hangs in safe mode because of a memory problem

Posted: 2019-07-26 14:32:18


After NN-A (the active NameNode) crashed from running out of memory (too many blocks/files), we upgraded NN-A with more memory, but did not immediately upgrade NN-B (the standby).

Because the two heap sizes now differed, we deleted some files (from about 80 million down to 70 million); then NN-B crashed and NN-A became active.

Then we upgraded NN-B and started it. It got stuck in safe mode, with log messages like:

The reported blocks 4620668 needs additional 62048327 blocks to reach the threshold 0.9990 of total blocks 66735729.
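While a NameNode is stuck like this, its safe-mode status can be polled from the command line (a minimal sketch, not taken from the original post; the NameNode address is illustrative):

# Poll safe-mode status; in an HA pair, -fs can point at a specific NameNode
hdfs dfsadmin -safemode get
hdfs dfsadmin -fs hdfs://nn-b.example.com:8020 -safemode get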

The reported-blocks count X in that message grew very slowly, so I looked at the heap usage:

Attaching to process ID 11598, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 24.79-b02

using parallel threads in the new generation.
using thread-local object allocation.
Concurrent Mark-Sweep GC

Heap Configuration:
   MinHeapFreeRatio = 40
   MaxHeapFreeRatio = 70
   MaxHeapSize      = 107374182400 (102400.0MB)
   NewSize          = 2006515712 (1913.5625MB)
   MaxNewSize       = 2006515712 (1913.5625MB)
   OldSize          = 4013096960 (3827.1875MB)
   NewRatio         = 2
   SurvivorRatio    = 8
   PermSize         = 21757952 (20.75MB)
   MaxPermSize      = 85983232 (82.0MB)
   G1HeapRegionSize = 0 (0.0MB)

Heap Usage:
New Generation (Eden + 1 Survivor Space):
   capacity = 1805910016 (1722.25MB)
   used     = 1805910016 (1722.25MB)
   free     = 0 (0.0MB)
   100.0% used
Eden Space:
   capacity = 1605304320 (1530.9375MB)
   used     = 1605304320 (1530.9375MB)
   free     = 0 (0.0MB)
   100.0% used
From Space:
   capacity = 200605696 (191.3125MB)
   used     = 200605696 (191.3125MB)
   free     = 0 (0.0MB)
   100.0% used
To Space:
   capacity = 200605696 (191.3125MB)
   used     = 0 (0.0MB)
   free     = 200605696 (191.3125MB)
   0.0% used
concurrent mark-sweep generation:
   capacity = 105367666688 (100486.4375MB)
   used     = 105192740832 (100319.61520385742MB)
   free     = 174925856 (166.82229614257812MB)
   99.83398526179955% used
Perm Generation:
   capacity = 68755456 (65.5703125MB)
   used     = 41562968 (39.637535095214844MB)
   free     = 27192488 (25.932777404785156MB)
   60.45042883578577% used

14501 interned Strings occupying 1597840 bytes.

Meanwhile, NN-A's heap:

Attaching to process ID 6061, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 24.79-b02

using parallel threads in the new generation.
using thread-local object allocation.
Concurrent Mark-Sweep GC

Heap Configuration:
   MinHeapFreeRatio = 40
   MaxHeapFreeRatio = 70
   MaxHeapSize      = 107374182400 (102400.0MB)
   NewSize          = 1134100480 (1081.5625MB)
   MaxNewSize       = 1134100480 (1081.5625MB)
   OldSize          = 2268266496 (2163.1875MB)
   NewRatio         = 2
   SurvivorRatio    = 8
   PermSize         = 21757952 (20.75MB)
   MaxPermSize      = 85983232 (82.0MB)
   G1HeapRegionSize = 0 (0.0MB)

Heap Usage:
New Generation (Eden + 1 Survivor Space):
   capacity = 1020723200 (973.4375MB)
   used     = 643184144 (613.3881988525391MB)
   free     = 377539056 (360.04930114746094MB)
   63.01259185644061% used
Eden Space:
   capacity = 907345920 (865.3125MB)
   used     = 639407504 (609.7865142822266MB)
   free     = 267938416 (255.52598571777344MB)
   70.47009193582973% used
From Space:
   capacity = 113377280 (108.125MB)
   used     = 3776640 (3.6016845703125MB)
   free     = 109600640 (104.5233154296875MB)
   3.3310377528901736% used
To Space:
   capacity = 113377280 (108.125MB)
   used     = 0 (0.0MB)
   free     = 113377280 (108.125MB)
   0.0% used
concurrent mark-sweep generation:
   capacity = 106240081920 (101318.4375MB)
   used     = 42025146320 (40078.30268859863MB)
   free     = 64214935600 (61240.13481140137MB)
   39.55677138092327% used
Perm Generation:
   capacity = 51249152 (48.875MB)
   used     = 51131744 (48.763031005859375MB)
   free     = 117408 (0.111968994140625MB)
   99.77090742886828% used

16632 interned Strings occupying 1867136 bytes.

We tried restarting both. NN-A started and became active within 10 minutes, but NN-B stayed stuck forever.

Finally I dumped a heap object histogram:

 num     #instances         #bytes  class name
----------------------------------------------
   1:     185594071    13362773112  org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$BlockProto
   2:     185594071    13362773112  org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$ReceivedDeletedBlockInfoProto
   3:     101141030    10550504248  [Ljava.lang.Object;
   4:     185594072     7423762880  org.apache.hadoop.hdfs.protocol.Block
   5:     185594070     7423762800  org.apache.hadoop.hdfs.server.protocol.ReceivedDeletedBlockInfo
   6:      63149803     6062381088  org.apache.hadoop.hdfs.server.namenode.INodeFile
   7:      23241035     5705267888  [B

It shows an extremely large number of ReceivedDeletedBlockInfo objects (roughly 185 million). But why?
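For reference, the heap summaries and the histogram above are the kind of output jmap produces when attached to the NameNode JVM (a sketch; the PID is the one from the NN-B output above):

# Heap configuration and per-generation usage, as in the summaries above
jmap -heap 11598

# Class histogram of live objects; the top entries shown above come from this
jmap -histo:live 11598 | head -n 20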


Answer 1:

I solved this by changing dfs.blockreport.initialDelay to 300; the failure was caused by a block report storm.
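For completeness, a minimal sketch of how that change is usually applied (the property belongs in hdfs-site.xml on the DataNodes; the verification command is an assumption, not something mentioned in the original answer):

# dfs.blockreport.initialDelay makes each DataNode wait a random delay of up to
# this many seconds before sending its first full block report, so a freshly
# started NameNode is not hit by every report at the same moment.
#
# hdfs-site.xml on the DataNodes (restart them afterwards):
#
#   <property>
#     <name>dfs.blockreport.initialDelay</name>
#     <value>300</value>
#   </property>
#
# Verify the value the cluster actually picks up:
hdfs getconf -confKey dfs.blockreport.initialDelay

Staggering the initial reports keeps the standby NameNode's queue of pending block reports from growing without bound, which is what the huge ReceivedDeletedBlockInfo count in the histogram above suggests was happening.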

