Host Environment

  • OS: CentOS Linux release 7.8.2003 (Core)
  • CPU: 8 cores, 2 sockets, 2 threads
  • Memory: 64 GB

Host Configuration

Set the hostname

vi /etc/hostname
linuxfdc

Configure the hosts file

vi /etc/hosts
127.0.0.1 localhost localhost.localdomain
::1 localhost localhost6 localhost6.localdomain6

192.168.31.62 linuxfdc
Disable the firewall
systemctl stop firewalld && systemctl disable firewalld && systemctl status firewalld

Disable SELinux

Temporarily:

setenforce 0

Permanently:

vi /etc/selinux/config
SELINUX=disabled
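The permanent switch above can also be applied non-interactively with sed; a minimal sketch, demonstrated here on a scratch copy rather than the real /etc/selinux/config:

```shell
# Flip SELINUX=... to SELINUX=disabled in place, as the vi edit above does.
# Demonstrated on a temporary copy; point sed at /etc/selinux/config on a real host.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"
grep '^SELINUX=' "$cfg"   # -> SELINUX=disabled
rm -f "$cfg"
```

The anchored `^SELINUX=` pattern leaves SELINUXTYPE= untouched.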
Enable IPv4 forwarding

  • Method 1: temporary (takes effect immediately, lost on reboot)

echo 1 > /proc/sys/net/ipv4/ip_forward

  • Method 2: add a kernel parameter

vi /etc/sysctl.conf
net.ipv4.ip_forward=1
Configure the following as needed:

net.ipv4.ip_local_reserved_ports=30000-32767
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-arptables=1
net.bridge.bridge-nf-call-ip6tables=1

  • Apply the kernel settings

sysctl -p
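Note that the net.bridge.bridge-nf-call-* keys above only exist once the br_netfilter kernel module is loaded; a sketch of loading it and making that persistent (assuming CentOS 7's modules-load.d mechanism):

```shell
# The bridge-nf-call-* sysctls are provided by the br_netfilter module;
# without it, sysctl -p fails on those keys with "No such file or directory".
modprobe br_netfilter
# Load the module automatically on every boot.
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
```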
Install KVM and libvirt

  • Check the current repos and make sure EPEL is installed

epel.repo
epel-testing.repo
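If those repo files are missing, EPEL can be installed first; a hedged sketch:

```shell
# List the configured repo files and install EPEL if it is absent.
ls /etc/yum.repos.d/ | grep -i epel || yum install -y epel-release
```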

  • Install KVM

yum install qemu-kvm -y

  • Load the KVM kernel modules

  • Check whether the KVM modules are loaded

[root@ZL201947 ~]# lsmod | grep kvm
kvm_intel 183737 0
kvm 615914 1 kvm_intel
irqbypass 13503 1 kvm

  • Load the KVM kernel modules manually

modprobe kvm
modprobe kvm-intel

  • Install the virtualization management tools

yum install virt-install virt-manager libvirt libvirt-python python-virtinst bridge-utils net-tools -y
Start libvirtd and enable it at boot

systemctl start libvirtd && systemctl enable libvirtd && systemctl status libvirtd
After startup, ifconfig shows an extra network device: a bridge named virbr0 with the default IP 192.168.122.1/24, plus a virbr0-nic device that is attached to the virbr0 bridge.

Platform Building: Managing KVM with Virsh

Check host virtualization support
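The check itself is not spelled out above; a common way to do it (Intel VT-x shows up as the vmx CPU flag, AMD-V as svm):

```shell
# A non-zero count means the CPU advertises hardware virtualization support.
egrep -c '(vmx|svm)' /proc/cpuinfo
# Once the KVM modules are loaded, /dev/kvm should also exist.
ls -l /dev/kvm
```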

Copy the installation ISOs to the libvirt boot directory:

cp /root/iso/CentOS-7-x86_64-Minimal-2003.iso /var/lib/libvirt/boot/
cp /root/iso/CentOS-7-x86_64-DVD-2003.iso /var/lib/libvirt/boot/
Create disks

  • RAW format

qemu-img create -f raw /var/lib/libvirt/images/CentOS-7-x86_64-Minimal-2003.raw 64G

  • QCOW2 format

qemu-img create -f qcow2 /var/lib/libvirt/images/CentOS-7-x86_64-Minimal-2003.qcow2 64G
qemu-img create -f qcow2 /root/kvm/template/CentOS-7-x86_64-Minimal-2003.qcow2 20G
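The freshly created image can be verified with qemu-img info; note that a qcow2 image is thin-provisioned, so its size on disk stays small until data is written:

```shell
# Show format, virtual size, and actual on-disk size of the new image.
qemu-img info /root/kvm/template/CentOS-7-x86_64-Minimal-2003.qcow2
```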
Create the virtual machine
virt-install --virt-type kvm --name COS7-Minimal-2003 --ram 1024 \
--vcpus=2 --cdrom=/var/lib/libvirt/boot/CentOS-7-x86_64-Minimal-2003.iso \
--disk /root/kvm/template/CentOS-7-x86_64-Minimal-2003.qcow2,format=qcow2 \
--network network=default --graphics vnc,listen=0.0.0.0 --noautoconsole
By default the VM definition file is created at /etc/libvirt/qemu/COS7-Minimal-2003.xml.

  • Troubleshooting

cannot access storage file (as uid:107, gid:107): Permission denied

By default, disk files live under /var/lib/libvirt/images/ and VM creation succeeds. If a disk file is placed in another directory, e.g. /root/kvm/template/, creation fails with: cannot access storage file (as uid:107, gid:107): Permission denied.

Solution:

vi /etc/libvirt/qemu.conf
....

# The user for QEMU processes run by the system instance. It can be
# specified as a user name or as a user id. The qemu driver will try to
# parse this value first as a name and then, if the name doesn't exist,
# as a user id.
#
# Since a sequence of digits is a valid user name, a leading plus sign
# can be used to ensure that a user id will not be interpreted as a user
# name.
#
# Some examples of valid values are:
#
#       user = "qemu"   # A user named "qemu"
#       user = "+0"     # Super user (uid=0)
#       user = "100"    # A user named "100" or a user with uid=100
#
user = "root"

# The group for QEMU processes run by the system instance. It can be
# specified in a similar way to user.
group = "root"
....

Then run:
systemctl restart libvirtd.service
CPU and Memory in KVM

  • A KVM guest is a single Linux qemu-kvm process, scheduled by the Linux process scheduler like any other process.
  • A KVM guest consists of virtual memory, virtual CPUs, and virtual I/O devices. Memory and CPU virtualization are implemented by the KVM kernel module; I/O device virtualization is implemented by QEMU.
  • The guest's memory is part of the qemu-kvm process's address space.
  • Each vCPU of a KVM guest runs as a thread in the context of the qemu-kvm process.

Configure Networking in the VM

KVM commonly offers four network models:

  • Isolated mode: guests form a network among themselves; they cannot talk to the host or to any other network, as if they were plugged into a standalone switch.
  • Routed mode: guests behave as if attached to a router (the physical NIC), which forwards their traffic without changing the source address.
  • NAT mode: in routed mode, guests can reach other hosts, but replies from those hosts cannot reach the guests. NAT mode rewrites the source address to the router's (physical NIC's) address, so other hosts can reply; this is what Docker environments commonly use.
  • Bridged mode: a virtual NIC created on the host takes over as the host's NIC, while the physical NIC acts as a switch.

The default KVM network definition file is /etc/libvirt/qemu/networks/default.xml. The sections below focus on NAT mode and bridged mode.

Configure NAT Networking
Guests reach the outside world through the host's IP and present a single external address; IPv4 forwarding must be enabled for this mode.


Using VNC-Viewer-6.17.1113-Linux-x64, connect to Server 192.168.31.62:5900, enter the VM console, log in as root, and configure the network:

  • Method 1: manual setup (lost on reboot)

  • List the guest's network devices

ip addr show

  • Bring up the network device

ip link set dev eth0 up

  • Assign eth0 an IP on the default bridge's subnet

ip addr add 192.168.122.2/24 dev eth0

  • Set the default route

ip route add default via 192.168.122.1 dev eth0

  • Set DNS

vi /etc/resolv.conf, and add:

nameserver 192.168.122.1
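The manual settings can then be sanity-checked from inside the guest; a quick sketch, assuming the default libvirt NAT network configured above:

```shell
ip addr show eth0          # the 192.168.122.2/24 address should be present
ip route                   # default via 192.168.122.1 dev eth0
ping -c 2 192.168.122.1    # the virbr0 gateway should answer
```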

  • Method 2: persistent configuration file

vi /etc/sysconfig/network-scripts/ifcfg-eth0
...
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=eth0
UUID=0a43b578-b24b-4c6a-8615-8fdaf813bce4
DEVICE=eth0
ONBOOT=yes
IPADDR=192.168.122.2
PREFIX=24
GATEWAY=192.168.122.1
PEERDNS=no
DNS1=192.168.122.1
Then reboot the VM.

Configure Bridged Networking
Bridge the VM onto the host's NIC; both the guest and the host go out through the bridge and present different external IPs.

Bridged mode connects the guest through a virtual network bridge so that the guest and other machines on the subnet can talk to each other directly; the VM becomes a host on the network with its own independent IP.

Bridged networking (also called physical device sharing) dedicates a physical device to a virtual machine. Bridges are mostly used in advanced setups, especially on hosts with multiple network interfaces.


  • Back up the network device file

cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth0.bak

  • Create the bridge configuration file

vi /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
HWADDR=00:14:5E:C2:1E:40
IPADDR=192.168.31.62
PREFIX=24
ONBOOT=yes

  • Modify the ifcfg-eth0 file

DEVICE=eth0
TYPE=Ethernet
HWADDR=00:14:5E:C2:1E:40
ONBOOT=yes
BRIDGE=br0

  • Restart the network service

systemctl restart network

  • Check the result

brctl show

  • Configure the VM's XML file

<interface type="bridge"><!-- network connection type for the VM -->
<source bridge="br0"/><!-- name of the bridge on the host -->
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
<mac address="00:16:e4:9a:b3:6a"/><!-- MAC address assigned to the VM; it must be unique, otherwise DHCP hands out the same IP and causes a conflict -->
</interface>

  • Configure networking inside the VM

vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=00:16:e4:9a:b3:6a
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=dhcp

  • Troubleshooting network configuration

  • A MAC address conflict makes eth0 fail to start or fail to obtain an IP address.
    Solution: delete the persistent naming rules file persistent-net.rules under /etc/udev/rules.d/.

Managing VMs with Virsh
Network Connectivity
The host should be able to SSH into the guest nodes without a password.

  • Press Enter three times and the key pair is generated

ssh-keygen -t rsa -m PEM

  • Copy the key to the other nodes

ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.122.xxx
Note: each node's SSH key must be unique; if a VM was cloned from a KVM template, delete the /root/.ssh/id_rsa* files on it first.
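Passwordless login can be confirmed non-interactively; a sketch, where the xxx placeholder stands for a real guest address:

```shell
# BatchMode makes ssh fail immediately instead of prompting
# for a password when key authentication is not working.
ssh -o BatchMode=yes root@192.168.122.xxx hostname
```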

Viewing Help

  • virsh help | less
  • virsh help | grep reboot
Network Management

  • List networks

  • virsh net-list --all
  • nmcli  # another way to inspect detailed network configuration on CentOS 7
  • Delete the default network

  • virsh net-destroy default
  • virsh net-undefine default
Define a network

vi test.xml
<network>
<name>test</name>
<uuid>721cddd4-e4de-4ed0-8907-15981411a842</uuid>
<forward dev='em2' mode='nat'>
<interface dev='em2'/>
</forward>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='52:54:00:b9:37:3f'/>
<domain name='test'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>

virsh net-define test.xml

  • Create the network (transient; it starts immediately)

  • virsh net-create test.xml
  • Start the network

  • virsh net-start test
  • Mark the network to start automatically

  • virsh net-autostart test
  • Edit the default network

  • virsh net-edit default

Domain Monitoring

  • virsh list --all  # list all local VMs, active and inactive

  • virsh dominfo <domain>  # show a VM's basic information

For example:

virsh dominfo 2
virsh dominfo COS7-Minimal-2003
Host and Hypervisor

  • virsh version
  • virsh sysinfo
Domain Management

  • virsh create COS7-Minimal-2003.xml  # create a transient VM (it starts running immediately)

  • virsh start COS7-Minimal-2003  # start the inactive VM named COS7-Minimal-2003

  • virsh shutdown COS7-Minimal-2003  # shut the VM down gracefully

  • virsh destroy COS7-Minimal-2003  # force the VM off

  • virsh autostart COS7-Minimal-2003  # start the VM automatically when the host boots

  • virsh reboot COS7-Minimal-2003  # reboot the VM

  • virsh reset COS7-Minimal-2003  # hard-reset the VM without a guest shutdown

  • virsh define COS7-Minimal-2003.xml  # define a persistent VM from an XML file

  • virsh undefine COS7-Minimal-2003  # remove the VM definition

  • virsh dumpxml COS7-Minimal-2003 > /root/COS7-Minimal-2003.xml  # back up the VM configuration to a file

  • virsh dumpxml COS7-Minimal-2003  # print the VM's current configuration

  • virsh suspend COS7-Minimal-2003  # pause the VM

  • virsh resume COS7-Minimal-2003  # resume a paused VM

  • virsh setmem COS7-Minimal-2003 51200  # set the memory size (KiB) of an inactive VM

  • virsh setvcpus COS7-Minimal-2003 4  # set the vCPU count of an inactive VM

  • virsh edit COS7-Minimal-2003  # edit the configuration file (typically right after defining a VM)

Image Management
Taking an Ubuntu system as an example, first install the dependencies:

  • apt-get install libguestfs-tools
  • virt-edit -a vm/gitlab-docker/disk.raw /etc/fstab
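Besides virt-edit, the same libguestfs-tools package provides read-only inspection tools; a hedged example with virt-cat, assuming the same disk image:

```shell
# Print a file from inside the guest image without booting the VM.
virt-cat -a vm/gitlab-docker/disk.raw /etc/fstab
```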
Create a New VM from XML

Using PROD-CEPH-M1 as the template, produce PROD-CEPH-M2.

Create the PROD-CEPH-M2 directory
mkdir -p /root/vm/PROD-CEPH-M2
Copy PROD-CEPH-M1's disk files
Before copying, delete 70-persistent-net.rules under /etc/udev/rules.d inside PROD-CEPH-M1.

cp /root/vm/PROD-CEPH-M1/* /root/vm/PROD-CEPH-M2
Rename the image under PROD-CEPH-M2
mv PROD-CEPH-M1.qcow2 PROD-CEPH-M2.qcow2
Update the VM definition file and change the options below
Copy /etc/libvirt/qemu/PROD-CEPH-M1.xml (or /root/vm/PROD-CEPH-M1/PROD-CEPH-M1.xml) to PROD-CEPH-M2.xml in the PROD-CEPH-M2 directory, then modify the following options:

  • Change the VM name
  • Remove the uuid
  • Point the disk at the new image
  • Remove the mac
    <name>PROD-CEPH-M2</name> // change the name
    <uuid>cb0299fd-6c17-472f-bd7d-088d7145dc4a</uuid> // remove this uuid line
    <memory unit='KiB'>4194304</memory> <currentMemory unit='KiB'>4194304</currentMemory>
    <vcpu placement='static'>2</vcpu>

<disk type='file' device='disk'>
<driver name='qemu' type='raw'/>
<source file='/var/lib/libvirt/images/PROD-CEPH-M2/PROD-CEPH-M2.qcow2'/> // point at the new disk
<target dev='vda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>

<interface type='network'>
<mac address='52:54:00:c2:6c:af'/> // remove this mac line
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>

  • Define the VM

virsh define PROD-CEPH-M2.xml

  • Start the VM

virsh start PROD-CEPH-M2

  • Update the hostname and hosts file

vi /etc/hostname
vi /etc/hosts

  • Change the IP address

vi /etc/sysconfig/network-scripts/ifcfg-eth0

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=eth0
#UUID=0a43b578-b24b-4c6a-8615-8fdaf813bce4 // remove the UUID
DEVICE=eth0
ONBOOT=yes
IPADDR=192.168.122.2 // change the IP address
PREFIX=24
GATEWAY=192.168.122.1
PEERDNS=no
DNS1=192.168.122.1 // point DNS at the gateway
Multi-Disk Configuration for a VM
Undefine the VM that will receive the extra disk
virsh undefine PROD-CEPH-M1
Create another disk
qemu-img create -f raw /root/kvm/PROD-CEPH-M1/journal.raw 50G
qemu-img create -f qcow2 /root/kvm/PROD-CEPH-M1/journal.qcow2 50G
Add the disk to the VM definition file
vi /var/lib/libvirt/images/COS7-Minimal-2003/COS7-Minimal-2003.xml

Original disk

<disk type='file' device='disk'>
<driver name='qemu' type='raw'/>
<source file='/var/lib/libvirt/images/COS7-Minimal-2003/vda.raw'/>
<target dev='vda' bus='virtio'/>
<!-- <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> --> # remove this line
</disk>

Disk to add

<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/root/kvm/COS7-Minimal-2003/journal.qcow2'/>
<target dev='vdb' bus='virtio'/>
<!--<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>--> # remove this line
</disk>
Re-define the VM and start it
virsh define COS7-Minimal-2003.xml
virsh start COS7-Minimal-2003
Another Way to Create Virtual Disks
The following commands emulate multiple disk devices (untested):

fallocate -l 4096M /opt/mydriver0.img
mkfs -t xfs /opt/mydriver0.img
mkdir /media/mydriver0
mount -t auto -o loop /opt/mydriver0.img /media/mydriver0/
umount /media/mydriver0/

fallocate -l 4096M /opt/mydriver1.img
mkfs -t xfs /opt/mydriver1.img
mkdir /media/mydriver1
mount -t auto -o loop /opt/mydriver1.img /media/mydriver1/
umount /media/mydriver1/

Check the system disk information:
fdisk -l
df -h
Adding a NIC to a VM
Enter the network configuration directory
cd /etc/libvirt/qemu/networks
Create the bridge definition file
Copy default.xml to create a new bridge definition file, cluster.xml:

<network>
<name>cluster</name> # change the name
<!--<uuid>721cddd4-e4de-4ed0-8907-15981411a842</uuid>--> # remove the UUID
<forward mode='nat'/>
<bridge name='virbr1'/> # change the bridge name
<mac address='52:54:00:f6:ba:7e'/> # change the MAC address
<ip address='10.0.0.1' netmask='255.255.255.0'>
<dhcp>
<range start='10.0.0.2' end='10.0.0.254'/>
</dhcp>
</ip>
</network>
Define the network
virsh net-define cluster.xml
Start the network
virsh net-start cluster
Set the network to start automatically
virsh net-autostart cluster
Check the network
virsh net-list --all  # you should now see the new cluster network
ip a  # shows the new virtual NIC virbr1
Destroy the VM
virsh destroy <domain>  # back up the VM's XML definition first
Add the NIC configuration
Edit the VM's configuration file and add another interface below the existing one:

<interface type='network'>
<mac address='52:54:00:9b:3d:15'/> # change the MAC, or omit it to have one auto-generated
<source network='cluster'/> # change to the name of your new network
<model type='virtio'/>
</interface>
Define the VM
virsh define <xml-file>
Start the VM
virsh start <domain>
Check the NICs inside the VM
Run ip a inside the VM; an eth1 interface should appear.

Configure the New NIC
In /etc/sysconfig/network-scripts/, copy ifcfg-eth0 to a new ifcfg-eth1 file:

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
NAME=eth1
UUID=3ab61577-8705-4ca8-abf8-980041c2f396 # generate a random UUID, or omit it and one will be generated after reboot
DEVICE=eth1
ONBOOT=yes
IPADDR=10.0.0.2
PREFIX=24
GATEWAY=10.0.0.1
PEERDNS=no
DNS1=10.0.0.1
Resizing a Disk in KVM
Shut down the VM
[root@linuxfdc KVM]# virsh shutdown K8S-HARBOR
Domain K8S-HARBOR is being shutdown
Check the Guest OS Disk
Locate your guest OS disk path:

[root@linuxfdc KVM]# virsh domblklist K8S-HARBOR
Target     Source
------------------------------------------------
vda        /data/kvm/K8S-HARBOR/K8S-HARBOR.qcow2
hda        -
Or:

[root@linuxfdc KVM]# virsh dumpxml K8S-HARBOR | egrep 'disk type' -A 5
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/data/kvm/K8S-HARBOR/K8S-HARBOR.qcow2'/>
<target dev='vda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<target dev='hda' bus='ide'/>
<readonly/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
Check the disk information:

[root@linuxfdc KVM]# qemu-img info /data/kvm/K8S-HARBOR/K8S-HARBOR.qcow2
image: /data/kvm/K8S-HARBOR/K8S-HARBOR.qcow2
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 16G
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: false
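The virtual size line above is internally consistent: 20 GiB is 20 × 1024³ bytes, which shell arithmetic confirms:

```shell
# 20 GiB expressed in bytes, matching "virtual size: 20G (21474836480 bytes)".
echo $((20 * 1024 * 1024 * 1024))   # -> 21474836480
```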
Extend the Guest VM Disk
Note: qemu-img cannot resize an image that has snapshots; delete all VM snapshots before resizing.

[root@linuxfdc KVM]# virsh snapshot-list K8S-HARBOR
Name Creation Time State

snapshot1 2020-04-16 08:54:24 +0300 shutoff

[root@linuxfdc KVM]# virsh snapshot-delete --domain K8S-HARBOR --snapshotname snapshot1
Domain snapshot snapshot1 deleted
Resize the disk, growing it by 10 GB:

[root@linuxfdc KVM]# qemu-img resize /data/kvm/K8S-HARBOR/K8S-HARBOR.qcow2 +10G
Image resized.

Or resize with virsh; note that blockresize takes an absolute target size (and operates on a running domain), so to reach 30 GB:

virsh blockresize K8S-HARBOR /data/kvm/K8S-HARBOR/K8S-HARBOR.qcow2 30G
Check the disk information again:

[root@linuxfdc KVM]# qemu-img info /data/kvm/K8S-HARBOR/K8S-HARBOR.qcow2
image: /data/kvm/K8S-HARBOR/K8S-HARBOR.qcow2
file format: qcow2
virtual size: 30G (32212254720 bytes)
disk size: 16G
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: false
Adjusting an LVM Partition in KVM

  • Start the VM

virsh start K8S-HARBOR

  • Check the disk information

[root@K8S-HARBOR ~]# fdisk -l /dev/vda

Disk /dev/vda: 32.2 GB, 32212254720 bytes, 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0005d7f6

Device Boot Start End Blocks Id System
/dev/vda1 * 2048 1050623 524288 83 Linux
/dev/vda2 1050624 41943039 20446208 8e Linux LVM

  • Grow the / partition

Add the 10 GB just given to the KVM disk to the / partition, as follows:

[root@K8S-HARBOR ~]# fdisk /dev/vda
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): d  # delete a partition
Partition number (1,2, default 2): 2  # delete partition 2: /dev/vda2
Partition 2 is deleted

Command (m for help): p  # print the partition table after the deletion

Disk /dev/vda: 32.2 GB, 32212254720 bytes, 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0005d7f6

Device Boot Start End Blocks Id System
/dev/vda1 * 2048 1050623 524288 83 Linux

Command (m for help): n  # add a new partition
Partition type:
p primary (1 primary, 0 extended, 3 free)
e extended
Select (default p): p  # choose a primary partition
Partition number (2-4, default 2): 2  # keep partition number 2
First sector (1050624-62914559, default 1050624): 1050624  # accept the default
Last sector, +sectors or +size{K,M,G} (1050624-62914559, default 62914559): 62914559  # grow the partition end from 41943039 to 62914559
Partition 2 of type Linux and of size 29.5 GiB is set

Command (m for help): p  # print the enlarged partition

Disk /dev/vda: 32.2 GB, 32212254720 bytes, 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0005d7f6

Device Boot Start End Blocks Id System
/dev/vda1 * 2048 1050623 524288 83 Linux
/dev/vda2 1050624 62914559 30931968 83 Linux

Command (m for help): t  # change the partition type
Partition number (1,2, default 2): 2  # select partition 2
Hex code (type L to list all codes): 8e  # 8e is the Linux LVM type
Changed type of partition 'Linux' to 'Linux LVM'

Command (m for help): p  # print the table

Disk /dev/vda: 32.2 GB, 32212254720 bytes, 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0005d7f6

Device Boot Start End Blocks Id System
/dev/vda1 * 2048 1050623 524288 83 Linux
/dev/vda2 1050624 62914559 30931968 8e Linux LVM

Command (m for help): w  # write the changes to the partition table
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
Reboot the VM
[root@linuxfdc data]# virsh reboot K8S-HARBOR
Domain K8S-HARBOR is being rebooted
Resize the LVM Logical Volume

  • Resize the PV

Before running the command below, make sure there is still free space under the / filesystem, otherwise you will hit:

Couldn't create temporary archive name.
0 physical volume(s) resized or updated / 1 physical volume(s) not resized

Resize the PV (/dev/vda2):

[root@K8S-HARBOR ~]# pvresize /dev/vda2
Physical volume "/dev/vda2" changed
1 physical volume(s) resized or updated / 0 physical volume(s) not resized

  • PV size after the resize

[root@K8S-HARBOR ~]# pvdisplay
--- Physical volume ---
PV Name /dev/vda2
VG Name centos
PV Size <29.50 GiB / not usable 2.00 MiB  # grown from the default 19.50 GiB to 29.50 GiB
Allocatable yes
PE Size 4.00 MiB
Total PE 7551
Free PE 2560  # free PEs went from 0 to 2560
Allocated PE 4991
PV UUID IU656N-d3Cv-qW63-dFsM-Jkdv-ojYS-ll2yPM

  • Resize the LV

Before the resize:

[root@K8S-HARBOR ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
home centos -wi-ao---- 4.00g
root centos -wi-ao---- <13.50g
swap centos -wi-ao---- 2.00g
Resize the LV (centos/root):

[root@K8S-HARBOR ~]# lvextend -r centos/root /dev/vda2
Size of logical volume centos/root changed from <13.50 GiB (3455 extents) to <23.50 GiB (6015 extents).
Logical volume centos/root successfully resized.
meta-data=/dev/mapper/centos-root isize=512 agcount=4, agsize=884480 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0 spinodes=0
data = bsize=4096 blocks=3537920, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 3537920 to 6159360
The -r option grows the filesystem as well, so there is no need to run resize2fs or xfs_growfs separately.

After the resize:

[root@K8S-HARBOR ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
home centos -wi-ao---- 4.00g
root centos -wi-ao---- <23.50g
swap centos -wi-ao---- 2.00g
Troubleshooting
Error getting authority
Error initializing authority: Could not connect: No such file or directory (g-io-error-quark, 1)

Solution: https://unix.stackexchange.com/questions/352745/centos-cannot-mount-home-error-getting-authority-error-initializing-authority

Slow SSH Connections to a VM Created from XML
Solution:

  • Make sure IPv4 forwarding is enabled on the host

echo 1 > /proc/sys/net/ipv4/ip_forward

  • Disable DNS lookups in the VM's sshd configuration

vi /etc/ssh/sshd_config
UseDNS no

systemctl restart sshd
CPU Feature Mismatch When Migrating a KVM Guest
Error message:

error: the CPU is incompatible with host CPU: Host CPU does not provide required features: fma, x2apic, movbe, tsc-deadline, xsave, avx, f16c, rdrand, fsgsbase, bmi1, hle, avx2, smep, bmi2, erms, invpcid, rtm, mpx, rdseed, adx, smap, xsaveopt, xsavec, xgetbv1, abm, 3dnowprefetch
Solution:

Modify the cpu section of the VM definition file:

<cpu mode='custom' match='exact' check='partial'>
<model fallback='allow'>Skylake-Client</model>
</cpu>
Change it to:

<cpu mode='host-passthrough' check='none'>
<model fallback='allow'>Broadwell-noTSX-IBRS</model>
<feature policy='require' name='md-clear'/>
<feature policy='require' name='spec-ctrl'/>
<feature policy='require' name='ssbd'/>
</cpu>
Example: VM with Multiple Disks and NICs
<domain type='kvm'>
<name>host80</name>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu placement='static'>4</vcpu>
<os>
<type arch='x86_64' machine='pc-i440fx-trusty'>hvm</type>
<boot dev='hd'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough'/>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<devices>
<emulator>/usr/bin/kvm-spice</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='raw'/>
<source file='/root/vm/host80/vda.raw'/>
<target dev='vda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw'/>
<source file='/root/vm/host80/vdb.raw'/>
<target dev='vdb' bus='virtio'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw'/>
<source file='/root/vm/host80/vdc.raw'/>
<target dev='vdc' bus='virtio'/>
</disk>
<controller type='usb' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<interface type='network'>
<mac address='00:00:de:be:ce:01'/>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</interface>
<interface type='network'>
<mac address='00:00:de:be:ee:01'/>
<source network='storage_nat'/>
<model type='virtio'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0'>
<listen type='address' address='0.0.0.0'/>
</graphics>
<video>
<model type='cirrus' vram='16384' heads='1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</video>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</memballoon>
</devices>
</domain>