The XFS Filesystem


Reference: http://blog.chinaunix.net/uid-522675-id-4665059.html (a summary of XFS filesystem usage)

 

1.3 Common XFS commands
xfs_admin: adjust various parameters of an XFS filesystem
xfs_copy: copy the contents of an XFS filesystem to one or more targets in parallel
xfs_db: debug or inspect an XFS filesystem (e.g. examine fragmentation)
xfs_check: check the integrity of an XFS filesystem
xfs_bmap: show the block mapping of a file
xfs_repair: attempt to repair a damaged XFS filesystem
xfs_fsr: defragment files on an XFS filesystem
xfs_quota: manage disk quotas on an XFS filesystem
xfs_metadump: dump the metadata of an XFS filesystem to a file
xfs_mdrestore: restore metadata from such a file back onto an XFS filesystem
xfs_growfs: resize an XFS filesystem (it can only grow)
xfs_freeze: suspend (-f) and resume (-u) access to an XFS filesystem
xfs_logprint: print the log of an XFS filesystem
xfs_mkfile: create a preallocated, zero-filled file on an XFS filesystem
xfs_info: show detailed filesystem geometry
xfs_ncheck: generate pathnames from inode numbers for XFS
xfs_rtcp: copy files to the realtime section of an XFS filesystem
xfs_io: debug the XFS I/O path
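A few of these in action, as a minimal sketch; the device /dev/sdb1, mount point /data, and file names below are placeholder assumptions:

# xfs_info /data                               (show filesystem geometry)
# xfs_bmap -v /data/somefile.dat               (block mapping of one file)
# xfs_freeze -f /data                          (suspend writes, e.g. before a snapshot)
# xfs_freeze -u /data                          (resume)
# xfs_metadump /dev/sdb1 /tmp/sdb1.metadump    (filesystem must be unmounted or mounted read-only)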

2.2 Calculating block alignment
We want to use MySQL on /dev/sda3, but how can we ensure that it is aligned with the RAID stripes? It takes a small amount of math:

    Start with your RAID stripe size. Let's use 64k, which is a common default. In this case 64K = 2^16 = 65536 bytes.
    Get your sector size from fdisk. In this case it is 512 bytes.
    Calculate how many sectors fit in a RAID stripe: 65536 / 512 = 128 sectors per stripe.
    Get the start boundary of the mysql partition from fdisk: 27344896.
    See whether that start boundary falls on a stripe boundary by dividing the start sector of the partition by the sectors per stripe: 27344896 / 128 = 213632. This is a whole number, so we are good. If it had a remainder, the partition would not start on a RAID stripe boundary. The same check can be scripted, as sketched below.
    
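A minimal sketch of that check, assuming the whole disk is /dev/sda, the partition is /dev/sda3, and the stripe size is 64 KiB (all taken from the example above):

STRIPE=65536                              # RAID stripe size in bytes
SECTOR=$(blockdev --getss /dev/sda)       # logical sector size, usually 512
START=$(cat /sys/block/sda/sda3/start)    # start sector of the partition
if [ $((START % (STRIPE / SECTOR))) -eq 0 ]; then
    echo "partition starts on a stripe boundary"
else
    echo "partition is NOT stripe-aligned"
fi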
Create the Filesystem

XFS requires a little massaging (or a lot). For a standard server, it's fairly simple. We need to know two things:

    RAID stripe size
    Number of unique, utilized disks in the RAID. This turns out to be the same as the size formulas given above:
        RAID 1+0: a set of mirrored drives, so the number here is num drives / 2.
        RAID 5: striped drives plus one drive's worth of parity, so the number here is num drives - 1.
In our case, it is RAID 1+0 with a 64k stripe over 8 drives. Since each drive has a mirror, there are really 4 sets of unique drives striped over the top. Using these numbers, we set the 'su' and 'sw' options of mkfs.xfs to those two values respectively.
 
2.3 Formatting the filesystem
Putting the numbers above together, the command is: mkfs.xfs -d su=64k,sw=4 /dev/sda3
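After formatting, the geometry can be verified with xfs_info (run against the mount point on older xfsprogs, or the device on newer ones). With the default 4 KiB block size, su=64k should appear as sunit=16 blks (65536 / 4096) and swidth=64 blks (16 x 4 data disks):

# xfs_info /dev/sda3 | grep -E 'sunit|swidth'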

3. xfs文件系统的创建
3.1 默认方法
#mkfs.xfs /dev/sdc1
meta-data=/dev/sdc1 isize=256    agcount=18, agsize=1048576 blks
data     =                       bsize=4096   blocks=17921788, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=0
naming   =version 2              bsize=4096  
log      =internal log           bsize=4096   blocks=2187, version=1
         =                       sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0

3.2 Specifying the block size and internal log size

# mkfs.xfs -b size=1k -l size=10m /dev/sdc1
meta-data=/dev/sdc1 isize=256    agcount=18, agsize=4194304 blks
data     =                       bsize=1024   blocks=71687152, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=0
naming   =version 2              bsize=4096  
log      =internal log           bsize=1024   blocks=10240, version=1
         =                       sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0
3.3 Using a logical volume as an external log device
# mkfs.xfs -l logdev=/dev/sdh,size=65536b /dev/sdc1
meta-data=/dev/sdc1              isize=256    agcount=4, agsize=76433916 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=305735663, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =/dev/sdh               bsize=4096   blocks=65536, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
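An external log must also be specified at mount time; a minimal sketch, assuming the same devices as above and /mnt as a placeholder mount point:

# mount -o logdev=/dev/sdh /dev/sdc1 /mnt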

3.4 Directory block size

# mkfs.xfs -b size=2k -n size=4k /dev/sdc1
meta-data=/dev/sdc1              isize=256    agcount=4, agsize=152867832 blks
         =                       sectsz=512   attr=2
data     =                       bsize=2048   blocks=611471327, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=2048   blocks=298569, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

3.5 Growing the filesystem
Growing does not modify the files already on the filesystem; the added space simply becomes available as additional file storage.
XVM supports growing XFS filesystems. On Linux with LVM, a typical sequence is sketched after the example output below.
# xfs_growfs /mnt
meta-data=/mnt                   isize=256    agcount=30, agsize=262144 blks
data     =                       bsize=4096   blocks=7680000, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=0
naming   =version 2              bsize=4096  
log      =internal               bsize=4096   blocks=1200, version=1
         =                       sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0
data blocks changed from 7680000 to 17921788
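
A minimal sketch of the LVM sequence mentioned above (the volume group name, size, and mount point are assumptions; note that xfs_growfs operates on a mounted filesystem and takes the mount point):

# lvextend -L +10G /dev/centos/root
# xfs_growfs /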

4. Filesystem maintenance
4.1 Defragmentation
Inspect a file's block layout: xfs_bmap -v file.tar.bz2
Check filesystem fragmentation: xfs_db -c frag -r /dev/sda1
Defragment: xfs_fsr /dev/sda1
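xfs_fsr can also target a single file, or be time-limited when run across all mounted XFS filesystems; a minimal sketch (the file path is a placeholder):

# xfs_fsr -v /var/lib/mysql/ibdata1
# xfs_fsr -t 3600

The first form reorganizes one file verbosely; the second walks the XFS filesystems listed in /etc/mtab and stops after 3600 seconds.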


Note that some commands take a mount point while others take a device.

Mount point:
# xfs_info /root
meta-data=/dev/mapper/centos-root isize=256    agcount=4, agsize=3110656 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=12442624, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=6075, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Device (the following command produces lengthy output):
# xfs_logprint /dev/mapper/centos-root | more

# xfs_bmap /var/log/messages
/var/log/messages:
        0: [0..119]: 6304..6423
        1: [120..127]: 6440..6447
        2: [128..135]: 6464..6471
# xfs_bmap /var/log/secure
/var/log/secure:
        0: [0..7]: 6424..6431
        1: [8..15]: 6456..6463
        2: [16..23]: 6592..6599
# xfs_bmap -v /var/log/messages
/var/log/messages:
 EXT: FILE-OFFSET      BLOCK-RANGE      AG AG-OFFSET        TOTAL
   0: [0..119]:        6304..6423        0 (6304..6423)       120
   1: [120..127]:      6440..6447        0 (6440..6447)         8
   2: [128..135]:      6464..6471        0 (6464..6471)         8


# xfs_db -c frag -r /dev/xvda1
actual 326, ideal 324, fragmentation factor 0.61%

# xfs_db -c frag -r /dev/xvda2
xfs_db: /dev/xvda2 is not a valid XFS filesystem (unexpected SB magic number 0x00000000)
Use -F to force a read attempt.
This fails because /dev/xvda2 is an LVM physical volume; it does not itself contain a filesystem.

# xfs_db -c frag -r /dev/mapper/centos-root
actual 20226, ideal 20092, fragmentation factor 0.66%
# xfs_db -c frag -r /dev/centos/root
actual 20239, ideal 20103, fragmentation factor 0.67%
# xfs_db -c frag -r /dev/dm-0
actual 20239, ideal 20103, fragmentation factor 0.67%
(/dev/mapper/centos-root, /dev/centos/root, and /dev/dm-0 are three names for the same device-mapper volume; the small differences come from running the checks at different times on a live filesystem.)

