Hadoop: data loss after a sudden power failure
Posted by 疯吻IT
HDFS - Could not obtain block
MapReduce Total cumulative CPU time: 33 seconds 380 msec
Ended Job = job_201308291142_4635 with errors
Error during job, obtaining debugging information...
Job Tracking URL: http://xxx/jobdetails.jsp?jobid=job_201308291142_4635
Examining task ID: task_201308291142_4635_m_000019 (and more) from job job_201308291142_4635
Examining task ID: task_201308291142_4635_m_000007 (and more) from job job_201308291142_4635
Examining task ID: task_201308291142_4635_m_000009 (and more) from job job_201308291142_4635
Task with the most failures(5):
-----
Task ID:
  task_201308291142_4635_m_000009

URL:
-----
Diagnostic Messages for this Task:
java.io.IOException: java.io.IOException: org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-1555036314-10.115.5.16-1375773346340:blk_-2678705702538243931_541142 file=/user/hive/warehouse/playtime/dt=20131119/access_pt.log.2013111904.log
        at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
        at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
        at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:330)
        at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:246)
        at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:215)
        at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:200)
        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:48)
- Reason

  A sudden power outage can corrupt or destroy block replicas on the DataNodes, so HDFS can no longer serve the block and reads fail with BlockMissingException.

- Solution

  HDFS file - if an HDFS block is missing:

  1. Confirm status

     Confirm whether any block is actually missing. If the missing-block count is one or more, the affected file can no longer be read.
$ hadoop dfsadmin -report

Configured Capacity: 411114887479296 (373.91 TB)
Present Capacity: 411091477784158 (373.89 TB)
DFS Remaining: 411068945908611 (373.87 TB)
DFS Used: 22531875547 (20.98 GB)
DFS Used%: 0.01%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Datanodes available: 20 (20 total, 0 dead)
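The report above already shows a healthy state (Missing blocks: 0). As a rough sketch of automating step 1, the relevant line can be parsed out of the report; the `report` variable below holds a sample line so the snippet runs anywhere, and in practice you would capture the real output of `hadoop dfsadmin -report` instead (the variable names are my own).

```shell
# Sketch: extract the missing-block count from a dfsadmin report.
# `report` is a sample; for real use: report=$(hadoop dfsadmin -report)
report="Missing blocks: 0"
missing=$(printf '%s\n' "$report" | awk -F': *' '/^Missing blocks/ {print $2}')
if [ "${missing:-0}" -gt 0 ]; then
  echo "ALERT: $missing missing blocks - affected files are unreadable"
else
  echo "OK: no missing blocks"
fi
```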
2. Inspect the block details with fsck

$ hadoop fsck / -files -blocks

...
Status: HEALTHY
 Total size:    4056908575 B (Total open files size: 3505453 B)
 Total dirs:    533
 Total files:   15525 (Files currently being written: 2)
 Total blocks (validated):      15479 (avg. block size 262091 B) (Total open file blocks (not validated): 2)
 Minimally replicated blocks:   15479 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       0 (0.0 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    3
 Average block replication:     3.0094967
 Corrupt blocks:                0
 Missing replicas:              0 (0.0 %)
 Number of data-nodes:          20
 Number of racks:               1
FSCK ended at Tue Nov 19 10:17:19 KST 2013 in 351 milliseconds

The filesystem under path '/' is HEALTHY
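Step 2 can likewise be summarized mechanically from fsck's summary section. A minimal sketch, assuming the fsck output was saved to a variable (the sample string below stands in for real `hadoop fsck / -files -blocks` output); when corrupt blocks do show up, `hadoop fsck / -list-corruptfileblocks` prints the names of the affected files.

```shell
# Sketch: read the corrupt-block count out of fsck's summary.
# fsck_out is a sample; for real use: fsck_out=$(hadoop fsck / -files -blocks)
fsck_out=' Corrupt blocks:                0
 Missing replicas:              0 (0.0 %)'
corrupt=$(printf '%s\n' "$fsck_out" | awk -F': *' '/Corrupt blocks/ {print $2}')
if [ "${corrupt:-0}" -eq 0 ]; then
  echo "filesystem healthy: no corrupt blocks"
else
  echo "$corrupt corrupt blocks - list them with: hadoop fsck / -list-corruptfileblocks"
fi
```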

3. Remove the corrupted file

...
Status: HEALTHY
 Total size:    4062473881 B (Total open files size: 3505453 B)
 Total dirs:    533
 Total files:   15525 (Files currently being written: 2)
 Total blocks (validated):      15479 (avg. block size 262450 B) (Total open file blocks (not validated): 2)
 Minimally replicated blocks:   15479 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       0 (0.0 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    3
 Average block replication:     3.0094967
 Corrupt blocks:                0
 Missing replicas:              0 (0.0 %)
 Number of data-nodes:          20
 Number of racks:               1
FSCK ended at Tue Nov 19 10:21:41 KST 2013 in 294 milliseconds

The filesystem under path '/' is HEALTHY
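Step 3 above does not show the command that was used; a common way to remove a file whose blocks are unrecoverable is `hadoop fsck <path> -delete` (or `-move`, which relocates it to /lost+found instead). The sketch below wraps the call in a dry-run guard because the deletion is irreversible; the path is the file named in the error at the top, and `DRY_RUN` is my own variable, not part of the original post.

```shell
# Hedged sketch: delete a corrupted file once fsck has confirmed its
# blocks are gone. Guarded so nothing is deleted by default.
DRY_RUN=1
corrupt_file=/user/hive/warehouse/playtime/dt=20131119/access_pt.log.2013111904.log
if [ "$DRY_RUN" -eq 1 ]; then
  echo "would run: hadoop fsck $corrupt_file -delete"
else
  hadoop fsck "$corrupt_file" -delete   # irreversible: removes the file
fi
```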
Hive file

- If a block backing a Hive table file is missing:

  Drop the affected partition with ALTER TABLE ... DROP PARTITION, then re-load that partition's data from the original source.
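The post only names the statement; a concrete form, inferred from the file path in the error above (table playtime, partition dt=20131119) and run non-interactively through `hive -e`, might look like the dry-run-guarded sketch below. The table and partition values are inferences from the error log, not confirmed by the author.

```shell
# Sketch: drop the Hive partition whose backing file lost blocks, so the
# partition can be re-created by re-loading the original log data.
DRY_RUN=1
hql="ALTER TABLE playtime DROP IF EXISTS PARTITION (dt='20131119');"
if [ "$DRY_RUN" -eq 1 ]; then
  echo "would run: hive -e \"$hql\""
else
  hive -e "$hql"
fi
```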