Filesystem Repair on Linux

Posted by jks212454

I. Partition the disk

[root@node1 ~]# fdisk /dev/sdb

Welcome to fdisk (util-linux 2.32.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): n
Partition number (3-128, default 3): 
First sector (16779264-41943006, default 16779264): 
Last sector, +sectors or +size{K,M,G,T,P} (16779264-41943006, default 41943006): +2G

Created a new partition 3 of type 'Linux filesystem' and of size 2 GiB.

Command (m for help): print
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 054D4516-ACE2-4C1A-BB38-1B93407B26A9

Device        Start      End  Sectors Size Type
/dev/sdb1      2048 12584959 12582912   6G Linux filesystem
/dev/sdb2  12584960 16779263  4194304   2G Linux filesystem
/dev/sdb3  16779264 20973567  4194304   2G Linux filesystem

Command (m for help): w
The partition table has been altered.
Syncing disks.

[root@node1 ~]# 
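
The `+2G` size request maps directly onto the sector counts fdisk prints: with 512-byte sectors, 2 GiB is 4194304 sectors, matching the Sectors column for /dev/sdb2 and /dev/sdb3. The arithmetic can be checked by hand:

```shell
# 2 GiB expressed in 512-byte sectors; matches the fdisk "Sectors" column.
echo $(( 2 * 1024 * 1024 * 1024 / 512 ))   # 4194304
# Cross-check against the partition table: End - Start + 1 for /dev/sdb3.
echo $(( 20973567 - 16779264 + 1 ))        # 4194304
```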

II. Format as an XFS filesystem

1. Check the partition device

[root@node1 ~]# ll /dev/sdb3
brw-rw---- 1 root disk 8, 19 Sep 24 11:13 /dev/sdb3
[root@node1 ~]# 

2. Format the partition

[root@node1 ~]# mkfs.xfs /dev/sdb3
meta-data=/dev/sdb3              isize=512    agcount=4, agsize=131072 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=524288, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@node1 ~]# 
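
The mkfs.xfs summary is internally consistent and can be cross-checked: data blocks times block size gives the filesystem capacity, and the four allocation groups together cover every block:

```shell
# blocks=524288 and bsize=4096 from the mkfs.xfs output above:
echo $(( 524288 * 4096 ))   # 2147483648 bytes = 2 GiB
# agcount=4 at agsize=131072 blocks spans the whole filesystem:
echo $(( 4 * 131072 ))      # 524288 blocks
```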

III. Mount the partition

1. Find the partition's UUID (for example with blkid /dev/sdb3)

UUID="c4477c1b-5ce7-4257-a44f-5e43e1d2a18e"

2. Add the mount entry to /etc/fstab

# 
# /etc/fstab
# Created by anaconda on Fri Mar 19 22:21:55 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
UUID=b7190d80-906f-4b9d-9ab4-5a503ecaea2c /                       xfs     defaults        0 0
UUID=525a30a7-d484-4ed5-9f38-f827f54e29ff /boot                   xfs     defaults        0 0
UUID=e6cf8733-5eec-4942-9429-c3e9087b6ff0 swap                    swap    defaults        0 0
UUID="deff8218-3389-4245-a6bf-1716010fd6d4" /mnt/lv01 xfs   defaults        0 0
UUID=7b7937af-408b-4370-9bd9-baa0cb5d1c6b swap swap defaults 0 0
UUID="c4477c1b-5ce7-4257-a44f-5e43e1d2a18e" /mnt/vdb3 xfs defaults 0 0  
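
Before running mount -a, it is worth confirming that the new line has the six fields fstab(5) expects (device, mount point, type, options, dump, pass): a malformed line can keep other filesystems from mounting at boot. A minimal check, run here on a literal copy of the new line rather than the live /etc/fstab:

```shell
# Count the fields in an fstab-style line; a valid entry has exactly 6.
line='UUID="c4477c1b-5ce7-4257-a44f-5e43e1d2a18e" /mnt/vdb3 xfs defaults 0 0'
printf '%s\n' "$line" | awk 'NF == 6 { print "ok: 6 fields" }
                             NF != 6 { print "bad: " NF " fields" }'
```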

3. Mount the partition

[root@node1 ~]# mount -a
[root@node1 ~]# df -h
Filesystem             Size  Used Avail Use% Mounted on
devtmpfs               970M     0  970M   0% /dev
tmpfs                  984M     0  984M   0% /dev/shm
tmpfs                  984M  9.4M  974M   1% /run
tmpfs                  984M     0  984M   0% /sys/fs/cgroup
/dev/nvme0n1p3          18G   11G  7.3G  59% /
/dev/mapper/vg01-lv01  7.0G   83M  7.0G   2% /mnt/lv01
/dev/nvme0n1p1         495M  140M  356M  29% /boot
overlay                 18G   11G  7.3G  59% /var/lib/docker/overlay2/3851b60316c4c9b3d888c4e6133589bee2882b3e231cf2c4d9ff42eca7a4a390/merged
overlay                 18G   11G  7.3G  59% /var/lib/docker/overlay2/8c7e59c24a0b2648c82f41eeddba522e58e06c6809ff702f641f6377b60e8d1f/merged
tmpfs                  197M  4.0K  197M   1% /run/user/0
/dev/sdb3              2.0G   47M  2.0G   3% /mnt/vdb3

IV. Damage the filesystem

1. Write files into /mnt/vdb3

[root@node1 ~]# cp -a /etc/passwd /etc/profile /mnt/vdb3/
[root@node1 ~]# ls /mnt/vdb3/
passwd  profile
[root@node1 ~]#

2. Handle the "target is busy" error when unmounting /mnt/vdb3

[root@node1 ~]# umount /dev/sdb3 
umount: /mnt/vdb3: target is busy.


[root@node1 ~]# fuser -m /dev/sdb3
/dev/sdb3:           36728
[root@node1 ~]# ps aux |grep 36728
root      36728  0.0  0.1  45796  3908 ?        Ss   11:08   0:00 /usr/libexec/openssh/sftp-server
root      48587  0.0  0.0  12112   976 pts/0    R+   11:37   0:00 grep --color=auto 36728

[root@node1 ~]# kill -9 36728
[root@node1 ~]# ps aux |grep 36728
root      75342  0.0  0.0  12112  1068 pts/0    R+   13:00   0:00 grep --color=auto 36728
[root@node1 ~]# 
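
The fuser-then-umount sequence above can be wrapped in a small helper. This is a hypothetical sketch, shown only in dry-run mode: kill -9 on a process holding the mount (here an sftp-server) risks losing unsaved data, so in practice prefer letting the process exit cleanly before unmounting.

```shell
# Hypothetical helper: show (or run) the steps to free a busy mountpoint.
# With DRY_RUN=1 it only prints the commands instead of executing them.
free_mountpoint() {
  local mnt="$1"
  for cmd in "fuser -m $mnt" "umount $mnt"; do
    if [ "${DRY_RUN:-0}" = "1" ]; then
      echo "would run: $cmd"
    else
      $cmd
    fi
  done
}

DRY_RUN=1 free_mountpoint /mnt/vdb3
```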

3. Unmount the partition, then damage it (the dd below overwrites the start of the device, which holds the XFS primary superblock)

[root@node1 ~]# dd if=/dev/zero of=/dev/sdb3 bs=10k count=1 
1+0 records in
1+0 records out
10240 bytes (10 kB, 10 KiB) copied, 0.00211146 s, 4.8 MB/s

4. Mounting the partition now fails

[root@node1 ~]# mount -a
mount: /mnt/vdb3: can't find UUID="c4477c1b-5ce7-4257-a44f-5e43e1d2a18e".
[root@node1 ~]# 
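
The error means the UUID referenced in /etc/fstab no longer exists on any device: dd wiped the superblock that stored it. Each UUID in fstab can be looked up with blkid -U <uuid> to see whether it still resolves; the extraction step is sketched here on a sample fragment rather than the live file:

```shell
# Pull the UUIDs out of an fstab-style fragment (sample data, not the
# live /etc/fstab), so each one can be checked with `blkid -U <uuid>`.
cat <<'EOF' > /tmp/fstab.sample
UUID=b7190d80-906f-4b9d-9ab4-5a503ecaea2c / xfs defaults 0 0
UUID="c4477c1b-5ce7-4257-a44f-5e43e1d2a18e" /mnt/vdb3 xfs defaults 0 0
EOF
awk '$1 ~ /^UUID/ { u=$1; sub(/^UUID=/,"",u); gsub(/"/,"",u); print u }' \
    /tmp/fstab.sample
```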

V. Repair the partition

[root@node1 ~]# xfs_repair /dev/sdb3
Phase 1 - find and verify superblock...
bad primary superblock - bad magic number !!!

attempting to find secondary superblock...
.found candidate secondary superblock...
verified secondary superblock...
writing modified primary superblock
sb realtime bitmap inode 18446744073709551615 (NULLFSINO) inconsistent with calculated value 129
resetting superblock realtime bitmap ino pointer to 129
sb realtime summary inode 18446744073709551615 (NULLFSINO) inconsistent with calculated value 130
resetting superblock realtime summary ino pointer to 130
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
Metadata CRC error detected at 0x56526b345b72, xfs_agf block 0x1/0x200
Metadata CRC error detected at 0x56526b372862, xfs_agi block 0x2/0x200
bad magic # 0x0 for agf 0
bad version # 0 for agf 0
bad length 0 for agf 0, should be 131072
bad uuid 00000000-0000-0000-0000-000000000000 for agf 0
bad magic # 0x0 for agi 0
bad version # 0 for agi 0
bad length # 0 for agi 0, should be 131072
bad uuid 00000000-0000-0000-0000-000000000000 for agi 0
reset bad agf for ag 0
reset bad agi for ag 0
bad agbno 0 for btbno root, agno 0
bad agbno 0 for btbcnt root, agno 0
bad agbno 0 for refcntbt root, agno 0
bad agbno 0 for inobt root, agno 0
bad agbno 0 for finobt root, agno 0
agi unlinked bucket 0 is 0 in ag 0 (inode=0)
... (identical "agi unlinked bucket N is 0 in ag 0" lines for buckets 1 through 63 omitted) ...
sb_fdblocks 521704, counted 390638
root inode chunk not found
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
correcting imap
correcting imap
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
Note - stripe unit (0) and width (0) were copied from a backup superblock.
Please reset with mount -o sunit=<value>,swidth=<value> if necessary
done
[root@node1 ~]# 
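
xfs_repair refuses to run on a mounted filesystem, which is why the unmount in section four is a precondition; running xfs_repair -n first reports problems without modifying the device. A minimal pre-flight guard (a hypothetical helper that checks /proc/self/mounts for the device) might look like:

```shell
# Check whether a device appears as a mount source before repairing it.
is_mounted() { grep -q "^$1 " /proc/self/mounts; }

if is_mounted /dev/sdb3; then
  echo "still mounted - unmount first"
else
  echo "safe to run: xfs_repair /dev/sdb3"
fi
```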

VI. Check the files

1. Remount the partition

[root@node1 ~]# mount -a
[root@node1 ~]# df -h
Filesystem             Size  Used Avail Use% Mounted on
devtmpfs               970M  1.0M  969M   1% /dev
tmpfs                  984M     0  984M   0% /dev/shm
tmpfs                  984M   22M  962M   3% /run
tmpfs                  984M     0  984M   0% /sys/fs/cgroup
/dev/nvme0n1p3          18G   11G  7.3G  59% /
/dev/mapper/vg01-lv01  7.0G   83M  7.0G   2% /mnt/lv01
/dev/nvme0n1p1         495M  140M  356M  29% /boot
overlay                 18G   11G  7.3G  59% /var/lib/docker/overlay2/3851b60316c4c9b3d888c4e6133589bee2882b3e231cf2c4d9ff42eca7a4a390/merged
overlay                 18G   11G  7.3G  59% /var/lib/docker/overlay2/8c7e59c24a0b2648c82f41eeddba522e58e06c6809ff702f641f6377b60e8d1f/merged
tmpfs                  197M  4.0K  197M   1% /run/user/0
/dev/sdb3              2.0G   47M  2.0G   3% /mnt/vdb3
[root@node1 ~]# 

2. View the files

[root@node1 ~]# ls /mnt/vdb3/
passwd  profile
[root@node1 ~]# cat /mnt/vdb3/passwd 
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
adm:x:3:4:adm:/var/adm:/sbin/nologin
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
sync:x:5:0:sync:/sbin:/bin/sync

3. Repairing ext4 filesystems

For ext4, the repair tool is fsck (which dispatches to e2fsck) rather than xfs_repair. Note that /dev/sdb3 in this article holds XFS; point fsck at a device that actually carries an ext filesystem:

[root@node1 ~]# fsck -v /dev/sdb3
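
The same break-and-repair exercise can be rehearsed for ext4 without touching a real disk, by running the tools against a throwaway image file (this assumes e2fsprogs is installed; mkfs.ext4 -F lets it operate on a regular file):

```shell
# Create a small ext4 image and check it; no block device is needed.
truncate -s 64M /tmp/ext4.img
mkfs.ext4 -q -F /tmp/ext4.img          # -F: allow a regular file as target
fsck.ext4 -fp /tmp/ext4.img && echo "filesystem clean"
```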
