Live migration of virtual machines provided by Xen requires the following conditions:
1. Enable migration (relocation) in the xend configuration file (/etc/xen/xend-config.sxp):
# (xend-relocation-hosts-allow '')
# (xend-relocation-port 8002)
# (xend-relocation-address '')
# (xend-relocation-server yes)
2. Both dom0 hosts must be able to access the domU's files at the same path, because the migrated state refers to them by absolute path.
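After editing the xend configuration, xend has to be restarted so the relocation server actually starts listening; a quick check, assuming the default init script path and the default port 8002:
/etc/init.d/xend restart          # restart xend so the relocation settings take effect
netstat -lnt | grep 8002          # the relocation server should now be listening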
When migrating with the xm command, only the guest's current running state is transferred; its block devices are not handled at all. A shared storage back end such as iSCSI, DRBD, or NFS is therefore normally required.
Note: after migration, the destination host has no corresponding configuration file for the guest.
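With shared storage in place, the migration itself is a single xm command; the guest and destination names below are placeholders:
xm migrate --live vm01 dom0-b     # "vm01" and "dom0-b" are example names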
I found a method online that performs live migration by synchronizing the guest's LVM volumes; it is excerpted below:
The problem
Currently, there is no support for providing automatic remote access to filesystems stored on local disk when a domain is migrated. Administrators should choose an appropriate storage solution (i.e. SAN, NAS, etc.) to ensure that domain filesystems are also available on their destination node. GNBD is a good method for exporting a volume from one machine to another. iSCSI can do a similar job, but is more complex to set up.
This does not mean that it is impossible though. Live migration is just a more efficient form of migration, and a migration can be seen as a save on one node and a restore on another. Normally, if you save a VM on one machine and try to restore it on another machine, it will fail when it is unable to read its filesystems. But what would happen if you copied the filesystem to the other node between the save and restore? If done right, it works pretty well.
The solution?
The solution is simple:
- Save running image
- Sync disks
- copy image to other node, restore
This can be somewhat sped up by syncing the disks twice (a rough shell sketch follows the list):
- Sync disks
- Save running image
- Sync disks - only the changes from the last few seconds need to be transferred
- copy image to other node, restore
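A rough shell sketch of the whole sequence, using the same names as the transcript further down (guest "test", destination 192.168.1.2, LVs under /dev/xen); the disk sync step is whatever tool fits your storage:
#!/bin/sh
DOM=test
DST=192.168.1.2
blocksync.py /dev/xen/${DOM}-root $DST        # first pass while the guest is still running
xm save $DOM $DOM.dump                        # guest is now stopped, state saved to a file
blocksync.py /dev/xen/${DOM}-root $DST        # second pass: only the recent changes
scp $DOM.dump $DST:                           # copy the saved image
ssh $DST "xm restore $DOM.dump && rm $DOM.dump"
rm $DOM.dump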
Synchronizing block devices
File backed
If you are using plain files as vbds, you can sync the disks using rsync.
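For example, with --inplace so rsync updates the existing image on the destination rather than rewriting it from scratch (paths and host are placeholders):
rsync -av --inplace /var/lib/xen/images/vm-root.img root@192.168.1.2:/var/lib/xen/images/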
Raw devices
If you are using raw devices, rsync can not be used. I wrote a small utility called [[blocksync|/programs/blocksync.py]] which can synchronize two block devices over the network. In my testing it was easily able to max out the network on an initial sync, and max out the disk read speed on a resync.
$ blocksync.py /dev/xen/vm-root 1.2.3.4
Will sync /dev/xen/vm-root onto 1.2.3.4. The device should already exist on the destination and be the same size.
Solaris ZFS
If you are using ZFS, it should be possible to use zfs send to sync the block devices before migration. This would give an almost instantaneous sync time.
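A sketch of what that could look like; the pool and dataset names are made up, and an incremental send of a second snapshot keeps the final sync very short:
zfs snapshot tank/vm-root@pre-migrate
zfs send tank/vm-root@pre-migrate | ssh 192.168.1.2 zfs receive -F tank/vm-root
zfs snapshot tank/vm-root@final
zfs send -i tank/vm-root@pre-migrate tank/vm-root@final | ssh 192.168.1.2 zfs receive -F tank/vm-root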
Automation
A simple script [[/programs/xen_migrate.sh]] and its helper [[/programs/xen_vbds.py]] will migrate a domain to another host. File and raw vbds are supported. ZFS send support is not yet implemented.
Notes:
Migration requirements: both hosts must have a VG with the same name, and that VG must have enough free space.
Building on his work, I also added creation of the LVs on the destination host to the script, roughly as sketched below.
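A hedged sketch of that addition (the VG name "xen" matches the /dev/xen/... paths used in the example; LV names and the destination address are placeholders):
for lv in test-root test-swap; do
    size=$(lvs --noheadings --nosuffix --units b -o lv_size /dev/xen/$lv | tr -d ' ')
    ssh 192.168.1.2 "lvcreate -L ${size}b -n $lv xen"   # create a same-sized LV in VG "xen"
done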
Tested between hosts 52 and 53, migrating a guest with a 5 GB system disk, 1 GB of swap, and a 4 GB data disk took 24 minutes.
See the code in the attachment for details.
Example:
#migrating a 1G / + 128M swap over the network
#physical machines are 350mhz with 64M of ram,
#total downtime is about 3 minutes
xen1:~# time ./migrate.sh test 192.168.1.2
+ '[' 2 -ne 2 ']'
+ DOMID=test
+ DSTHOST=192.168.1.2
++ xen_vbds.py test
+ FILES=/dev/xen/test-root
/dev/xen/test-swap
+ main
+ check_running
+ xm list test
Name Id Mem(MB) CPU State Time(s) Console
test 87 15 0 -b--- 0.0 9687
+ sync_disk
+ blocksync.py /dev/xen/test-root 192.168.1.2
ssh -c blowfish 192.168.1.2 blocksync.py server /dev/xen/test-root -b 1048576
same: 942, diff: 82, 1024/1024
+ blocksync.py /dev/xen/test-swap 192.168.1.2
ssh -c blowfish 192.168.1.2 blocksync.py server /dev/xen/test-swap -b 1048576
same: 128, diff: 0, 128/128
+ save_image
+ xm save test test.dump
+ sync_disk
+ blocksync.py /dev/xen/test-root 192.168.1.2
ssh -c blowfish 192.168.1.2 blocksync.py server /dev/xen/test-root -b 1048576
same: 1019, diff: 5, 1024/1024
+ blocksync.py /dev/xen/test-swap 192.168.1.2
ssh -c blowfish 192.168.1.2 blocksync.py server /dev/xen/test-swap -b 1048576
same: 128, diff: 0, 128/128
+ copy_image
+ scp test.dump 192.168.1.2:
test.dump 100% 16MB 3.2MB/s 00:05
+ restore_image
+ ssh 192.168.1.2 'xm restore test.dump && rm test.dump'
(domain
(id 89)
[domain info stuff cut out]
)
+ rm test.dump
real 6m6.272s
user 1m29.610s
sys 0m30.930s