Linux Learning: KVM

Posted by 丢爸

Installing a virtual machine

#Create the disk image file
[root@localhost vm]# qemu-img create -f qcow2 centos6.10b-disk0.qcow2 10G
Formatting 'centos6.10b-disk0.qcow2', fmt=qcow2 size=10737418240 encryption=off cluster_size=65536 lazy_refcounts=off 
[root@localhost vm]# ll
total 3900072
-rw------- 1 root root 8591507456 Sep 17 20:53 centos6.10a-disk0
-rw-r--r-- 1 root root     197120 Sep 18 03:37 centos6.10b-disk0.qcow2
-rw-r--r-- 1 qemu qemu 3991928832 Sep 17 20:16 CentOS-6.10-x86_64-bin-DVD1.iso
drwx------ 2 root root      16384 Sep 17 20:01 lost+found
drwxr-xr-x 2 root root       4096 Sep 17 20:41 winvm1
[root@localhost vm]# ll -h
total 3.8G
-rw------- 1 root root 8.1G Sep 17 20:53 centos6.10a-disk0
-rw-r--r-- 1 root root 193K Sep 18 03:37 centos6.10b-disk0.qcow2
-rw-r--r-- 1 qemu qemu 3.8G Sep 17 20:16 CentOS-6.10-x86_64-bin-DVD1.iso
drwx------ 2 root root  16K Sep 17 20:01 lost+found
drwxr-xr-x 2 root root 4.0K Sep 17 20:41 winvm1
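#A hedged aside: besides plain images, qcow2 also supports copy-on-write overlays on a backing file
#(both file names below are made up; newer qemu-img versions additionally require -F to state the backing format)
[root@localhost vm]# qemu-img create -f qcow2 -b /vm/base.qcow2 /vm/overlay.qcow2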
#Install the VM from the command line with virt-install
[root@localhost vm]# virt-install --name=centos6b --disk path=/vm/centos6.10b-disk0.qcow2 --vcpus=1 --ram=1024 --cdrom=/vm/CentOS-6.10-x86_64-bin-DVD1.iso --network network=default --graphics vnc,listen=0.0.0.0 --os-type=linux --os-variant=rhel6
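#A hedged follow-up: with --graphics vnc the installer can be reached from any VNC client;
#vncdisplay prints the display number (e.g. :0, i.e. TCP port 5900 on the host)
[root@localhost vm]# virsh vncdisplay centos6b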

Using virsh

Virtual machine operations

[root@localhost ~]# virsh
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit
#List running virtual machines
virsh # list
 Id    Name                           State
----------------------------------------------------
 1     centos6.10-a                   running
 2     centos6b                       running
#List all virtual machines
virsh # list --all
 Id    Name                           State
----------------------------------------------------
 -     centos6.10-a                   shut off
 -     centos6b                       shut off

#Start a virtual machine
virsh # start centos6b
Domain centos6b started
virsh # list
 Id    Name                           State
----------------------------------------------------
 1     centos6b                       running
#Force off a virtual machine (hard power-off)
virsh # destroy 1
Domain 1 destroyed
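#destroy is a hard power-off; a gentler alternative (assuming the guest responds to ACPI) is a graceful shutdown
virsh # shutdown centos6b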
#Mark the VM to start automatically when the host boots
virsh # autostart centos6b
Domain centos6b marked as autostarted
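#A hedged check: with the qemu driver, autostart works by creating the symlink
#/etc/libvirt/qemu/autostart/centos6b.xml, which can be verified by listing that directory
[root@localhost ~]# ls /etc/libvirt/qemu/autostart/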
#Suspend a VM; a suspended VM still consumes host resources
virsh # suspend centos6b
Domain centos6b suspended
virsh # list --all
 Id    Name                           State
----------------------------------------------------
 2     centos6b                       paused
 -     centos6.10-a                   shut off
#Get the VM's UUID
virsh # domuuid centos6b
f9023616-5d63-4a15-a343-7e5740e13c11
#Resume the VM (the UUID can be used in place of the name)
virsh # resume f9023616-5d63-4a15-a343-7e5740e13c11
Domain f9023616-5d63-4a15-a343-7e5740e13c11 resumed

virsh # list --all
 Id    Name                           State
----------------------------------------------------
 2     centos6b                       running
 -     centos6.10-a                   shut off
#Show VM information
virsh # dominfo centos6b
Id:             -
Name:           centos6b
UUID:           f9023616-5d63-4a15-a343-7e5740e13c11
OS Type:        hvm
State:          shut off
CPU(s):         1
Max memory:     1048576 KiB
Used memory:    1048576 KiB
Persistent:     yes
Autostart:      enable
Managed save:   no
Security model: none
Security DOI:   0
#Show the VM's block devices
virsh # domblklist centos6b
Target     Source
------------------------------------------------
vda        /vm/centos6.10b-disk0.qcow2
hda        -
#Create a snapshot and list the image's snapshots
[root@localhost vm]# qemu-img snapshot -c s1 centos6.10b-disk0.qcow2 
[root@localhost vm]# qemu-img snapshot -l centos6.10b-disk0.qcow2 
Snapshot list:
ID        TAG                 VM SIZE                DATE       VM CLOCK
1         s1                        0 2021-09-27 10:20:37   00:00:00.000
#qemu-img info also shows the snapshot information
[root@localhost vm]# qemu-img info centos6.10b-disk0.qcow2 
image: centos6.10b-disk0.qcow2
file format: qcow2
virtual size: 10G (10737418240 bytes)
disk size: 1.2G
cluster_size: 65536
Snapshot list:
ID        TAG                 VM SIZE                DATE       VM CLOCK
1         s1                        0 2021-09-27 10:20:37   00:00:00.000
Format specific information:
    compat: 1.1
    lazy refcounts: false
#Roll back to a snapshot
[root@localhost vm]# qemu-img snapshot -a s1 centos6.10b-disk0.qcow2 
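#A snapshot can likewise be deleted with -d; note that -a and -d should only be run while no VM is using the image
[root@localhost vm]# qemu-img snapshot -d s1 centos6.10b-disk0.qcow2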

Storage pool operations

Libvirt can manage storage uniformly as storage pools, which simplifies administration.
Storage pools and volumes are not strictly required for running virtual machines.
The following storage pool types are supported:

  • dir:Filesystem Directory
  • disk:Physical Disk Device
  • fs:Pre-Formatted Block Device
  • gluster:Gluster FileSystem
  • iscsi:iSCSI Target
  • logical:LVM Volume Group
  • mpath:Multipath Device Enumerator
  • netfs:Network Export Directory
  • rbd:RADOS Block Device/Ceph
  • scsi:SCSI Host Adapter
  • sheepdog:Sheepdog FileSystem

Storage pool commands in virsh

  • find-storage-pool-sources-as: find potential storage pool sources from command-line parameters
  • find-storage-pool-sources: discover potential storage pool sources from an XML description
  • pool-autostart: autostart a pool
  • pool-build: build a pool
  • pool-create-as: create a pool from a set of arguments
  • pool-create: create a pool from an XML file
  • pool-define-as: define a pool from a set of arguments
  • pool-define: define a pool from an XML file, or modify an existing pool
  • pool-delete: delete a pool
  • pool-destroy: destroy (stop) a pool
  • pool-dumpxml: dump pool information as XML
  • pool-edit: edit the XML configuration of a storage pool
  • pool-info: show storage pool information
  • pool-list: list pools
  • pool-name: convert a pool UUID to a pool name
  • pool-refresh: refresh a pool
  • pool-start: start a previously defined inactive pool
  • pool-undefine: undefine an inactive pool
  • pool-uuid: convert a pool name to a pool UUID
#List storage pools
[root@localhost autostart]# virsh
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # pool-list
 Name                 State      Autostart 
-------------------------------------------
 default              active     yes       
 iso                  active     yes       
 vm                   active     yes       
 winvm1               active     yes   
#Show information about a pool
virsh # pool-info iso
Name:           iso
UUID:           7e1bd9c8-0b20-49fa-8a60-8ab02c70832d
State:          running
Persistent:     yes
Autostart:      yes
Capacity:       16.99 GiB
Allocation:     13.71 GiB
Available:      3.28 GiB

#Directory-based storage pool (dir: Filesystem Directory)
virsh # pool-define-as guest_images dir --target "/guest_images"
Pool guest_images defined
#Start the storage pool
virsh # pool-start guest_images
Pool guest_images started
#Stop the storage pool
virsh # pool-destroy guest_images
Pool guest_images destroyed
#Delete the storage pool (removes the underlying data)
virsh # pool-delete guest_images
Pool guest_images deleted
#Remove the pool's configuration
virsh # pool-undefine guest_images
Pool guest_images has been undefined
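#A hedged extra: a defined pool can also be marked to start automatically with libvirtd
virsh # pool-autostart guest_images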
Partition-based storage pool (fs: Pre-Formatted Block Device)
[root@localhost ~]# fdisk /dev/sdc
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x84a5b3d9.

Command (m for help): m
Command action
   a   toggle a bootable flag
   b   edit bsd disklabel
   c   toggle the dos compatibility flag
   d   delete a partition
   g   create a new empty GPT partition table
   G   create an IRIX (SGI) partition table
   l   list known partition types
   m   print this menu
   n   add a new partition
   o   create a new empty DOS partition table
   p   print the partition table
   q   quit without saving changes
   s   create a new empty Sun disklabel
   t   change a partition's system id
   u   change display/entry units
   v   verify the partition table
   w   write table to disk and exit
   x   extra functionality (experts only)

Command (m for help): p

Disk /dev/sdc: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x84a5b3d9

   Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 
First sector (2048-41943039, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039): 
Using default value 41943039
Partition 1 of type Linux and of size 20 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@localhost ~]# fdisk -l /dev/sdc

Disk /dev/sdc: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x84a5b3d9

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048    41943039    20970496   83  Linux
#Create a filesystem
[root@localhost ~]# mkfs.ext4 /dev/sdc1
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
1310720 inodes, 5242624 blocks
262131 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2153775104
160 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
	4096000

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done  

#Define the pool: --target is the directory to mount on, --source-dev the device to be mounted
virsh # pool-define-as guest_images_fs fs --source-dev "/dev/sdc1" --target "/guest_images2"
Pool guest_images_fs defined
#If the target directory does not exist, run pool-build first, then start the pool
virsh # pool-build guest_images_fs
Pool guest_images_fs built
virsh # pool-start guest_images_fs
Pool guest_images_fs started
Disk-based storage pool (disk: Physical Disk Device)
#Create an XML file under /tmp

<pool type="disk">
  <name>guest_images_disk</name>
  <source>
    <device path="/dev/sdc"/>
    <format type="gpt"/>
  </source>
  <target>
    <path>/dev</path>
  </target>
</pool>
#Define the storage pool from the XML file
virsh # pool-define /tmp/guest_images_disk.xml 
Pool guest_images_disk defined from /tmp/guest_images_disk.xml
virsh # pool-start guest_images_disk 
Pool guest_images_disk started
LVM-based storage pool (logical: LVM Volume Group)
  • An LVM-based storage pool requires dedicating whole disk partitions to it
  • There are two ways to create one:
    • use an existing VG
    • create a new VG
  • Target Path: the name of the new volume group
  • Source Path: the location of the storage device
  • Build Path: builds (creates) the new VG
#First create a PV on the partition, then create the VG
[root@localhost tmp]# pvcreate /dev/sdc1
WARNING: ext4 signature detected on /dev/sdc1 at offset 1080. Wipe it? [y/n]: y
  Wiping ext4 signature on /dev/sdc1.
  Physical volume "/dev/sdc1" successfully created.   
[root@localhost tmp]# pvdisplay /dev/sdc1
  "/dev/sdc1" is a new physical volume of "<20.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdc1
  VG Name               
  PV Size               <20.00 GiB
  Allocatable           NO
  PE Size               0   
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               boY1Dw-ZQ8Q-kf0j-YtKe-E87X-1iHt-sInrDp
   
[root@localhost tmp]# vgcreate guest_images_lvm /dev/sdc1
  Volume group "guest_images_lvm" successfully created
[root@localhost tmp]# vgscan
  Reading volume groups from cache.
  Found volume group "vmvg" using metadata type lvm2
  Found volume group "guest_images_lvm" using metadata type lvm2
  Found volume group "centos" using metadata type lvm2
#Create a logical pool from the VG: --source-name is the VG name, --target the directory location
virsh # pool-define-as guest-images_lvm3 logical --target=/dev/libvirt_lvm --source-name=guest_images_lvm
Pool guest-images_lvm3 defined

virsh # pool-start guest-images_lvm3 
Pool guest-images_lvm3 started
iSCSI storage pool (iscsi: iSCSI Target)

In a SAN, hosts generally act as Initiators, and storage devices as Targets.

  • Initiator
    • Initiates the SCSI session
    • Requests LUNs from the Target and sends read/write commands to them
  • Target
    • Accepts the SCSI session
    • Receives commands from the Initiator, provides the LUNs, and performs the actual reads and writes on them

Open-source Linux Target projects:
  • Linux SCSI Target - STGT/tgt
  • Linux-IO Target (LIO), merged into the mainline kernel since Linux 2.6.38
  • SCST - Generic SCSI Subsystem for Linux
  • http://scst.sourceforge.net/comparison.html

Linux-IO Target implements a variety of SCSI Targets in software inside the Linux kernel:
  • Front ends: FC, FCoE, iSCSI, 1394, InfiniBand, USB, vHost…
  • Back ends: SATA, SAS, SCSI, SSD, flash, DVD, USB, ramdisk
  • Architecture:
    • High performance with SSE4.2 support; multithreaded
    • Runs on x86, ia64, Alpha, Cell, PPC, ARM, MIPS and other CPU architectures
    • Supports high-availability and load-balancing clusters

Installing a Linux storage server

  • Do a minimal Linux install
  • Install the targetcli package
[root@localhost yum.repos.d]# yum install -y targetcli
  • Configure storage with targetcli
[root@localhost yum.repos.d]# targetcli
Warning: Could not load preferences file /root/.targetcli/prefs.bin.
targetcli shell version 2.1.53
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/> ls
o- / .................................................................................... [...]
  o- backstores ......................................................................... [...]
  | o- block ............................................................. [Storage Objects: 0]
  | o- fileio ............................................................ [Storage Objects: 0]
  | o- pscsi ............................................................. [Storage Objects: 0]
  | o- ramdisk ........................................................... [Storage Objects: 0]
  o- iscsi ....................................................................... [Targets: 0]
  o- loopback .................................................................... [Targets: 0]
/> cd backstores/block
/backstores/block> create block1 dev=/dev/sdb1   #the partition /dev/sdb1 must be created beforehand
Created block storage object block1 using /dev/sdb1.
/backstores/fileio> create fileio1 /tmp/foo1.img 50M
Created fileio fileio1 with size 52428800
#The backing file is created sparse; du shows no blocks allocated yet
[root@localhost ~]# du -h /tmp/foo1.img
0	/tmp/foo1.img
/backstores/ramdisk> create ramdisk1 1M
Created ramdisk ramdisk1 with size 1M.
/backstores/ramdisk> ls /
o- / .................................................................................... [...]
  o- backstores ......................................................................... [...]
  | o- block ............................................................. [Storage Objects: 1]
  | | o- block1 .................................. [/dev/sdb1 (40.0GiB) write-thru deactivated]
  | |   o- alua .............................................................. [ALUA Groups: 1]
  | |     o- default_tg_pt_gp .................................. [ALUA state: Active/optimized]
  | o- fileio ............................................................ [Storage Objects: 1]
  | | o- fileio1 ............................. [/tmp/foo1.img (50.0MiB) write-back deactivated]
  | |   o- alua .............................................................. [ALUA Groups: 1]
  | |     o- default_tg_pt_gp .................................. [ALUA state: Active/optimized]
  | o- pscsi ............................................................. [Storage Objects: 0]
  | o- ramdisk ........................................................... [Storage Objects: 1]
  |   o- ramdisk1 ...................................................... [(1.0MiB) deactivated]
  |     o- alua .............................................................. [ALUA Groups: 1]
  |       o- default_tg_pt_gp .................................. [ALUA state: Active/optimized]
  o- iscsi ....................................................................... [Targets: 0]
  o- loopback .................................................................... [Targets: 0]
#Create the iSCSI target (IQN)
/iscsi> pwd
/iscsi
/iscsi> create
Created target iqn.2003-01.org.linux-iscsi.localhost.x8664:sn.43559404d2b8.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.
/iscsi> ls
o- iscsi ......................................................................... [Targets: 1]
  o- iqn.2003-01.org.linux-iscsi.localhost.x8664:sn.43559404d2b8 .................... [TPGs: 1]
    o- tpg1 ............................................................ [no-gen-acls, no-auth]
      o- acls ....................................................................... [ACLs: 0]
      o- luns ....................................................................... [LUNs: 0]
      o- portals ................................................................. [Portals: 1]
        o- 0.0.0.0:3260 .................................................................. [OK]
 /iscsi/iqn.20...2b8/tpg1/luns> pwd
/iscsi/iqn.2003-01.org.linux-iscsi.localhost.x8664:sn.43559404d2b8/tpg1/luns
/iscsi/iqn.20...2b8/tpg1/luns> create /backstores/block/block1 
Created LUN 0.
/iscsi/iqn.20...2b8/tpg1/luns> create /backstores/fileio/fileio1  
Created LUN 1.
/iscsi/iqn.20...2b8/tpg1/luns> create /backstores/ramdisk/ramdisk1  
Created LUN 2.
/iscsi/iqn.20...2b8/tpg1/luns> ls
o- luns ............................................................................. [LUNs: 3]
  o- lun0 ....................................... [block/block1 (/dev/sdb1) (default_tg_pt_gp)]
  o- lun1 ................................. [fileio/fileio1 (/tmp/foo1.img) (default_tg_pt_gp)]
  o- lun2 ............................................... [ramdisk/ramdisk1 (default_tg_pt_gp)]
/iscsi/iqn.20...2b8/tpg1/acls> pwd
/iscsi/iqn.2003-01.org.linux-iscsi.localhost.x8664:sn.43559404d2b8/tpg1/acls
#Look up the IQN on the Initiator side
[root@localhost iscsi]# pwd
/etc/iscsi
[root@localhost iscsi]# vim initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:e8effc2de2e5
#Create an ACL for the Initiator's IQN
/iscsi/iqn.20...2b8/tpg1/acls> create iqn.1994-05.com.redhat:e8effc2de2e5
Created Node ACL for iqn.1994-05.com.redhat:e8effc2de2e5
Created mapped LUN 2.
Created mapped LUN 1.
Created mapped LUN 0.
/iscsi/iqn.20...2b8/tpg1/acls> ls
o- acls ............................................................................. [ACLs: 1]
  o- iqn.1994-05.com.redhat:e8effc2de2e5 ..................................... [Mapped LUNs: 3]
    o- mapped_lun0 ................................................... [lun0 block/block1 (rw)]
    o- mapped_lun1 ................................................. [lun1 fileio/fileio1 (rw)]
    o- mapped_lun2 ............................................... [lun2 ramdisk/ramdisk1 (rw)]
#Save the configuration
/> saveconfig
Configuration saved to /etc/target/saveconfig.json
#Start the target service and enable it at boot
[root@localhost ~]# systemctl start target
[root@localhost ~]# systemctl enable target
Created symlink from /etc/systemd/system/multi-user.target.wants/target.service to /usr/lib/systemd/system/target.service.
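#A hedged reminder: if firewalld is active on the Target host, port 3260/tcp must be opened before Initiators can connect
[root@localhost ~]# firewall-cmd --permanent --add-port=3260/tcp
[root@localhost ~]# firewall-cmd --reload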
#Connect to the Target from the Initiator
#Check that the Target can be discovered
[root@localhost iscsi]# iscsiadm --mode discovery --type sendtargets --portal 192.168.0.102
192.168.0.102:3260,1 iqn.2003-01.org.linux-iscsi.localhost.x8664:sn.43559404d2b8
#Test logging in to the Target
[root@localhost iscsi]# iscsiadm -d2 -m node --login
iscsiadm: Max file limits 1024 4096
iscsiadm: default: Creating session 1/1
Logging in to [iface: default, target: iqn.2003-01.org.linux-iscsi.localhost.x8664:sn.43559404d2b8, portal: 192.168.0.102,3260] (multiple)
Login to [iface: default, target: iqn.2003-01.org.linux-iscsi.localhost.x8664:sn.43559404d2b8, portal: 192.168.0.102,3260] successful.
#The exported LUNs now appear as local disks in fdisk -l
[root@localhost iscsi]# fdisk -l

Disk /dev/sdb: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x230dcdf9

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048   104857599    52427776   8e  Linux LVM
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sdc: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
Disk identifier: 61682A1D-C460-45ED-ACF1-08B02D172AF9


#         Start          End    Size  Type            Name
 1         2048     41943006     20G  Linux LVM       

Disk /dev/sda: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000af6a1

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200    41943039    19921920   8e  Linux LVM

Disk /dev/mapper/centos-root: 18.2 GB, 18249416704 bytes, 35643392 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/centos-swap: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vmvg-lvvm1: 53.7 GB, 53682896896 bytes, 104849408 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sdd: 42.9 GB, 42948624384 bytes, 83884032 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 4194304 bytes


Disk /dev/sde: 1 MB, 1048576 bytes, 2048 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sdf: 52 MB, 52428800 bytes, 102400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 4194304 bytes
#Log out from the Target to disconnect the session
[root@localhost iscsi]#  iscsiadm -d2 -m node --logout
iscsiadm: Max file limits 1024 4096
Logging out of session [sid: 1, target: iqn.2003-01.org.linux-iscsi.localhost.x8664:sn.43559404d2b8, portal: 192.168.0.102,3260]
Logout of [sid: 1, target: iqn.2003-01.org.linux-iscsi.localhost.x8664:sn.43559404d2b8, portal: 192.168.0.102,3260] successful.

#Define an iSCSI pool in virsh
virsh # pool-define-as --name store2 --type iscsi --source-host 192.168.0.102 --source-dev iqn.2003-01.org.linux-iscsi.localhost.x8664:sn.0d675eb8f860 --target /dev/disk/by-path
Pool store2 defined

virsh # pool-list --all
 Name                 State      Autostart 
-------------------------------------------
 default              active     yes       
 iso                  active     yes       
 store2               inactive   no        
 vm                   active     yes       
 winvm1               active     yes       

virsh # pool-start store2
Pool store2 started

virsh # pool-list
 Name                 State      Autostart 
-------------------------------------------
 default              active     yes       
 iso                  active     yes       
 store2               active     no        
 vm                   active     yes       
 winvm1               active     yes       
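#A hedged check: the LUNs exported by the Target should now appear as volumes of the iscsi pool
virsh # vol-list store2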
NFS-based storage pool (netfs: Network Export Directory)
#Install and configure NFS
[root@localhost ~]# yum install -y nfs-utils
[root@localhost ~]# mkdir /nfsshare
[root@localhost ~]# vim /etc/exports
/nfsshare *(rw)
#Start the services
[root@localhost ~]# systemctl start rpcbind
[root@localhost ~]# systemctl status rpcbind
● rpcbind.service - RPC bind service
   Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2021-09-30 09:24:02 EDT; 18min ago
 Main PID: 682 (rpcbind)
   CGroup: /system.slice/rpcbind.service
           └─682 /sbin/rpcbind -w

Sep 30 09:24:01 localhost.localdomain systemd[1]: Starting RPC bind service...
Sep 30 09:24:02 localhost.localdomain systemd[1]: Started RPC bind service.
[root@localhost ~]# systemctl start nfs
[root@localhost ~]# showmount -e 192.168.0.102
Export list for 192.168.0.102:
/nfsshare *
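#The pool definition itself is not shown above; a hedged sketch that would match the mount output below
#(the pool name nfspool is assumed from the mount point)
virsh # pool-define-as nfspool netfs --source-host 192.168.0.102 --source-path /nfsshare --target /var/lib/libvirt/images/nfspool
virsh # pool-start nfspool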

#Once the NFS pool is created and started, the export is mounted automatically
[root@localhost ~]# mount | grep 192.168.
192.168.0.102:/nfsshare on /var/lib/libvirt/images/nfspool type nfs4 (rw,nosuid,nodev,noexec,relatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.0.104,local_lock=none,addr=192.168.0.102)
#Dump the XML definition of a storage pool
virsh # pool-dumpxml vm
<pool type='dir'>
  <name>vm</name>
  <uuid>4a4de567-81a4-4ab7-91b8-d6a365d208eb</uuid>
  <capacity unit='bytes'>52706115584</capacity>
  <allocation unit='bytes'>6733983744</allocation>
  <available unit='bytes'>45972131840</available>
  <source>
  </source>
  <target>
    <path>/vm</path>
    <permissions>
      <mode>0755</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</pool>

Storage volumes

  • A storage pool is divided into storage volumes (Storage Volume)
  • A storage volume can be:
    • a file
    • a block device (such as a physical partition or an LVM logical volume)
    • an abstraction over other storage types managed by libvirt

Storage volume commands in virsh

  • vol-clone: clone a volume
  • vol-create: create a volume from an XML file
  • vol-create-from: create a volume, using another volume as input
  • vol-create-as: create a volume from a set of arguments
  • vol-delete: delete a volume
  • vol-wipe: wipe a volume
  • vol-dumpxml: dump volume information as XML
  • vol-info: show storage volume information
  • vol-list: list volumes
  • vol-pool: return the storage pool for a given volume key or path
  • vol-path: return the volume path for a given volume name or key
  • vol-name: return the volume name for a given volume key or path
  • vol-key: return the volume key for a given volume name or path

Storage volume management

  • Managing volumes in a directory-based storage pool
virsh # vol-list vm
 Name                 Path                                    
------------------------------------------------------------------------------
 base-centos6-disk0.qcow2 /vm/base-centos6-disk0.qcow2            
 CentOS-6.10-x86_64-bin-DVD1.iso /vm/CentOS-6.10-x86_64-bin-DVD1.iso     
 centos6.10a-disk0    /vm/centos6.10a-disk0                   
 centos6.10b-disk0.qcow2 /vm/centos6.10b-disk0.qcow2             
 crm-disk0.qcow2      /vm/crm-disk0.qcow2                     
 erp-disk0.qcow2      /vm/erp-disk0.qcow2                     
 lost+found           /vm/lost+found                          
 oa-disk0.qcow2       /vm/oa-disk0.qcow2                      
 t1.img               /vm/t1.img                              
 winvm1               /vm/winvm1 
#Create a storage volume
virsh # vol-create-as vm test1.qcow2 1G --format qcow2
Vol test1.qcow2 created
#Show volume info (by path)
virsh # vol-info /vm/test1.qcow2
Name:           test1.qcow2
Type:           file
Capacity:       1.00 GiB
Allocation:     196.00 KiB
#Show volume info (by name and pool)
virsh # vol-info test1.qcow2 --pool vm
Name:           test1.qcow2
Type:           file
Capacity:       1.00 GiB
Allocation:     196.00 KiB
#Inspect the new volume with qemu-img info
[root@localhost ~]# qemu-img info /vm/test1.qcow2 
image: /vm/test1.qcow2
file format: qcow2
virtual size: 1.0G (1073741824 bytes)
disk size: 196K
cluster_size: 65536
Format specific information:
    compat: 0.10
#Check actual disk usage
[root@localhost ~]# du -h /vm/test1.qcow2
196K	/vm/test1.qcow2

  • Managing volumes in an LVM-based storage pool
#Create the VG first, then define and start the pool
virsh # pool-define-as guest_image_lvm2 logical --source-name=guest_kvm_vg --target=/dev/guest_kvm_vg
Pool guest_image_lvm2 defined

virsh # pool-start guest_image_lvm2 
Pool guest_image_lvm2 started

virsh # pool-list
 Name                 State      Autostart 
-------------------------------------------
 default              active     yes       
 guest_image_lvm2     active     no        
 iso                  active     yes       
 vm                   active     yes       
 winvm1               active     yes     
#With the pool in place, create volumes
virsh # vol-create-as guest_image_lvm2 lvvol1 1G
Vol lvvol1 created

virsh # vol-create-as guest_image_lvm2 lvvol2 2G
Vol lvvol2 created

virsh # vol-create-as guest_image_lvm2 lvvol3 3G
Vol lvvol3 created
#List the volumes in the pool
virsh # vol-list guest_image_lvm2 
 Name                 Path                                    
------------------------------------------------------------------------------
 lvvol1               /dev/guest_kvm_vg/lvvol1                
 lvvol2               /dev/guest_kvm_vg/lvvol2                
 lvvol3               /dev/guest_kvm_vg/lvvol3  
#lvscan now shows the three new LVs
[root@localhost ~]# lvscan
  ACTIVE            '/dev/vmvg/lvvm1' [<50.00 GiB] inherit
  ACTIVE            '/dev/guest_kvm_vg/lvvol1' [1.00 GiB] inherit
  ACTIVE            '/dev/guest_kvm_vg/lvvol2' [2.00 GiB] inherit
  ACTIVE            '/dev/guest_kvm_vg/lvvol3' [3.00 GiB] inherit
  ACTIVE            '/dev/centos/swap' [2.00 GiB] inherit
  ACTIVE            '/dev/centos/root' [<17.00 GiB] inherit
#Clone a storage volume
virsh # vol-list vm
 Name                 Path                                    
------------------------------------------------------------------------------
 base-centos6-disk0.qcow2 /vm/base-centos6-disk0.qcow2            
 CentOS-6.10-x86_64-bin-DVD1.iso /vm/CentOS-6.10-x86_64-bin-DVD1.iso     
 centos6.10a-disk0    /vm/centos6.10a-disk0                   
 centos6.10b-disk0.qcow2 /vm/centos6.10b-disk0.qcow2             
 crm-disk0.qcow2      /vm/crm-disk0.qcow2                     
 erp-disk0.qcow2      /vm/erp-disk0.qcow2                     
 lost+found           /vm/lost+found                          
 oa-disk0.qcow2       /vm/oa-disk0.qcow2                      
 t1.img               /vm/t1.img                              
 test1.qcow2          /vm/test1.qcow2                         
 winvm1               /vm/winvm1         
virsh # vol-clone test1.qcow2 test2.qcow2 --pool=vm
Vol test2.qcow2 cloned from test1.qcow2

virsh # vol-list vm
 Name                 Path                                    
------------------------------------------------------------------------------
 base-centos6-disk0.qcow2 /vm/base-centos6-disk0.qcow2            
 CentOS-6.10-x86_64-bin-DVD1.iso /vm/CentOS-6.10-x86_64-bin-DVD1.iso     
 centos6.10a-disk0    /vm/centos6.10a-disk0                   
 centos6.10b-disk0.qcow2 /vm/centos6.10b-disk0.qcow2             
 crm-disk0.qcow2      /vm/crm-disk0.qcow2                     
 erp-disk0.qcow2      /vm/erp-disk0.qcow2                     
 lost+found           /vm/lost+found                          
 oa-disk0.qcow2       /vm/oa-disk0.qcow2                      
 t1.img               /vm/t1.img                              
 test1.qcow2          /vm/test1.qcow2                         
 test2.qcow2          /vm/test2.qcow2                         
 winvm1               /vm/winvm1      

#Clone a storage volume (clones the underlying LV)
virsh # vol-clone lvvol1 lvvol4 --pool=guest_image_lvm2 
Vol lvvol4 cloned from lvvol1

virsh # vol-list guest_image_lvm2 
 Name                 Path                                    
------------------------------------------------------------------------------
 lvvol1               /dev/guest_kvm_vg/lvvol1                
 lvvol2               /dev/guest_kvm_vg/lvvol2                
 lvvol3               /dev/guest_kvm_vg/lvvol3                
 lvvol4               /dev/guest_kvm_vg/lvvol4   
#Verify with lvscan
[root@localhost ~]# lvscan
  ACTIVE            '/dev/vmvg/lvvm1' [<50.00 GiB] inherit
  ACTIVE            '/dev/guest_kvm_vg/lvvol1' [1.00 GiB] inherit
  ACTIVE            '/dev/guest_kvm_vg/lvvol2' [2.00 GiB] inherit
  ACTIVE            '/dev/guest_kvm_vg/lvvol3' [3.00 GiB] inherit
  ACTIVE            '/dev/guest_kvm_vg/lvvol4' [1.00 GiB] inherit
  ACTIVE            '/dev/centos/swap' [2.00 GiB] inherit
  ACTIVE            '/dev/centos/root' [<17.00 GiB] inherit
#Delete a storage volume
virsh # vol-delete lvvol4 --pool guest_image_lvm2 
Vol lvvol4 deleted
[root@localhost ~]# lvscan
  ACTIVE            '/dev/vmvg/lvvm1' [<50.00 GiB] inherit
  ACTIVE            '/dev/guest_kvm_vg/lvvol1' [1.00 GiB] inherit
  ACTIVE            '/dev/guest_kvm_vg/lvvol2' [2.00 GiB] inherit
  ACTIVE            '/dev/guest_kvm_vg/lvvol3' [3.00 GiB] inherit
  ACTIVE            '/dev/centos/swap' [2.00 GiB] inherit
  ACTIVE            '/dev/centos/root' [<17.00 GiB] inherit

Attaching volumes to a virtual machine

  • attach-device
    Attach a new device described in an XML file

  • attach-disk
    Attach a disk using command-line arguments

#Three ways to create a volume
#1. With dd
[root@localhost ~]# dd if=/dev/zero of=/vm/test2.img count=1024 bs=1024k
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 4.25214 s, 253 MB/s
#2. With qemu-img
[root@localhost ~]# qemu-img create -f qcow2 /vm/test3.qcow2 1G
Formatting '/vm/test3.qcow2', fmt=qcow2 size=1073741824 encryption=off cluster_size=65536 lazy_refcounts=off 
#3. With virsh vol-create-as
[root@localhost ~]# virsh vol-create-as vm test4.img 1G
Vol test4.img created

#XML describing the disk to add
<disk type="file" device="disk">
  <driver name="qemu" type="raw" cache="none"/>
  <source file="/vm/test2.img"/>
  <target dev="vdc"/>
</disk>
#Attach the disk to the VM from the XML file
virsh # attach-device oa /tmp/disks.xml --persistent
Device attached successfully

#Check the VM's block devices
virsh # domblklist oa
Target     Source
------------------------------------------------
vda        /vm/oa-disk0.qcow2
vdb        /vm/t1.img

#Note: the VM must be running when attaching the disk
virsh # attach-disk --domain oa --source=/vm/t1.img --target=vdb --cache=none
Disk attached successfully
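#The counterpart operation: a disk attached this way can be removed again with detach-disk
virsh # detach-disk oa vdb --persistent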
