RHEV upgrade saga: Creating VMs on Open vSwitch
Wednesday, 30 January 2013
http://www.itworld.com/virtualization/336623/rhev-upgrade-saga-rhel-kvm-creating-vms-open-vswitch
In last week's post, we discussed how we created our network by integrating Open vSwitch into RHEL KVM. Now we need to create some virtual machines to run the workloads. (A virtual environment is of little use without VMs, so we need an easy way to create them.) Once more we will approach this from both a RHEL 6 and a RHEL 5 box, as the steps are somewhat different.
The libvirt that comes with stock RHEL 6 (and RHEV, actually) is version 0.9.10-21, which, lucky for us, contains support for Open vSwitch; the libvirt that ships with RHEL 5, however, is version 0.8.2, which does not. This means that for RHEL 5 we have to take some extra steps to manage our networks, and it implies that we can't use virt-manager to create our VMs. It also means that on RHEL 5 we can't import our Open vSwitch networks into virsh to make using virt-manager and other tools easier.
Even so, I feel that libvirt v1.0.1 is a better way to go, so I downloaded the source RPM from libvirt.org and rebuilt it on my RHEL 6 machine. This did require me to rebuild libssh2 (needed >= v1.4) and sanlock (needed >= v2.4) to get the proper versions of those tools to support libvirt 1.0.1.
# Get libssh2 >= v1.4, which is available from the Fedora 18 repository
# rpmbuild --rebuild libssh2-1.4.3-1.fc18.src.rpm
# rpm -Uvh /root/rpmbuild/RPMS/x86_64/{libssh2,libssh2-devel}-1.4.3-1.el6.x86_64.rpm
# Get sanlock >= 2.4, which is available from the Fedora 18 repository as well
# rpmbuild --rebuild sanlock-2.6-4.fc18.src.rpm
# rpm -Uvh /root/rpmbuild/RPMS/x86_64/{sanlock,sanlock-devel,sanlock-lib,sanlock-python,fence-sanlock}-2.6-4.el6.x86_64.rpm
# wget http://libvirt.org/sources/libvirt-1.0.1-1.fc17.src.rpm
# rpmbuild --rebuild libvirt-1.0.1-1.fc17.src.rpm
# rm /root/rpmbuild/RPMS/x86_64/libvirt*debuginfo*rpm
# rpm -Uvh /root/rpmbuild/RPMS/x86_64/libvirt*rpm
# service libvirtd restart
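To confirm that the rebuilt libvirt is the one actually running, a quick check such as the following can be used (the exact output will depend on your build):
# virsh version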
While this upgrade works for RHEL 6, it will NOT work on RHEL 5 as it would require installing so many new packages that it is far easier to just upgrade to RHEL 6. So if you are using RHEL 5, you should continue down the path to use libvirt 0.8.2.
Without a tool to manage multiple KVM nodes, it is very hard to do a rolling upgrade of libvirt. I am still looking for a good tool for this. RHEV may be the only usable interface, but I could also use OpenStack -- a discussion for another time.
For RHEL 6
Once libvirtd has been restarted, we can import our networks into libvirt for use. To do that we need to write a proper libvirt network XML file; the one I used is named ovsbr1.xml. The key lines are the name of the bridge, the virtualport type, and the portgroup. While I do not use VLANs, we want a default portgroup that includes all VMs; it has no VLANs defined. So we need to define the network in libvirt, verify it is defined, start it, and then verify it is active.
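A minimal definition of this kind, assuming the Open vSwitch bridge created last week is named ovsbr1, looks something like this:
<network>
  <name>ovsbr1</name>
  <!-- attach guests to an existing bridge device rather than a libvirt-managed one -->
  <forward mode='bridge'/>
  <bridge name='ovsbr1'/>
  <!-- tell libvirt this bridge is managed by Open vSwitch -->
  <virtualport type='openvswitch'/>
  <!-- default portgroup for all VMs, with no VLAN tag defined -->
  <portgroup name='default' default='yes'/>
</network>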
# virsh net-define ovsbr1.xml
# virsh net-list --all
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes
 ovsbr1               inactive   no            no
# virsh net-start ovsbr1
# virsh net-info ovsbr1
Name:           ovsbr1
UUID:           ffffff-ffffffff-ffffffffff-ffffffffff….
Active: yes
Persistent: no
Autostart: no
Bridge: ovsbr1
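If you want the network to be started automatically whenever libvirtd starts, it can also be marked for autostart (this assumes the network was defined persistently with net-define rather than created as a transient network):
# virsh net-autostart ovsbr1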
Building VMs
Before we make some VMs we need to place them on our storage. There are multiple types of storage pools we can use: physical disk device (disk), pre-formatted block device (fs), logical volume manager volume group (logical), iSCSI target (iscsi), multipath device (mpath), network directory (netfs), SCSI host adapter (scsi), or directory (dir). For our example we will be using a directory; however, for best performance a logical storage pool is recommended.
# virsh pool-create-as VMs dir - - - - "/mnt/KVM"
# virsh pool-list
Name                 State      Autostart
-----------------------------------------
default              active     yes
VMs                  active     yes
For an LVM-based pool where the volume group already exists:
# virsh pool-define-as vg_kvm logical --target /dev/vg_kvm
# virsh pool-start vg_kvm
Pool vg_kvm started
# virsh pool-autostart vg_kvm
Pool vg_kvm marked as autostarted
# virsh pool-list
Name                 State      Autostart
-----------------------------------------
default              active     yes
vg_kvm               active     yes
In general, we do not want to use the default location, because it ends up in an inconvenient spot within the root filesystem. You may wish to delete the default pool so that VMs don't accidentally end up there. A block storage device disk type such as iSCSI will perform better than a filesystem approach if the iSCSI server is running over a high-speed network such as 10G; if all you have is 1G, your mileage may vary.
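As a rough sketch, an iSCSI-backed pool can be defined much like the LVM pool above; the portal address and IQN here are placeholders that would need to be replaced with your own values:
# virsh pool-define-as iscsipool iscsi --source-host 192.168.0.10 --source-dev iqn.2013-01.com.example:kvmstore --target /dev/disk/by-path
# virsh pool-start iscsipool
# virsh pool-autostart iscsipool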
I did this using a simple script that assigns the proper values for my VMs: the base memory, number of vCPUs, a disk carved from a pool, the networks to use (in this case two Open vSwitch bridges), where to find the installation media, and finally the use of VNC to do the install.
# cat mkvm
#!/bin/sh
# $1 = VM name, $2 = disk size in GB
set -x
virt-install --name $1 --ram 2048 --vcpus=2 --disk pool=VMs,size=$2 --network bridge=ovsbr0 --network bridge=ovsbr1 --cdrom /home/kvm/CentOS-5.8-x86_64-bin-DVD-1of2.iso --noautoconsole --vnc --hvm --os-variant rhel5
This makes it an easily repeatable process; the script takes two arguments, the VM name and the size in gigabytes of the disk. Once I have a VM installed, I can then clone it as necessary. Run it as follows for a 12G VM named vmname:
# ./mkvm vmname 12
During the install you will have to configure your networks. To determine which MAC addresses go with which bridge, use the following command:
# virsh dumpxml vmname
…
…
What you are looking for is which interface goes with which bridge via its MAC address, as the Linux installer lists network adapters by MAC address, not by bridge; it does not even know there is a bridge there. Using the above script works on RHEL 6 and RHEL 5 and does not require you to go in and edit any XML files.
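For reference, each interface stanza in the dumpxml output looks roughly like the following (the MAC address shown is only a placeholder); matching the mac element to the source bridge tells you which adapter is which:
<interface type='bridge'>
  <mac address='52:54:00:xx:xx:xx'/>
  <source bridge='ovsbr0'/>
  <virtualport type='openvswitch'/>
</interface>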
If you do have to edit the XML file containing the VM definition you can do so using:
# vi /etc/libvirt/qemu/vmname.xml
And once you finish editing
# virsh define /etc/libvirt/qemu/vmname.xml
If you do not run the define command mentioned above, the changes may not be picked up.
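Alternatively, libvirt also provides virsh edit, which opens the definition in an editor and re-defines it for you in one step, removing the need for the separate define command:
# virsh edit vmname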
Next we will clone some VMs from a gold master.