Import regular kvm image to oVirt or RHEV

I recently replaced a couple of servers in a friend's business with an oVirt virtualisation setup, and I'm really pleased with the whole configuration. It consists of a single engine host and two hypervisor nodes, with the storage shared across the two hosts using GlusterFS. The guests running on the platform replace the services that previously ran on a couple of physical servers: a LAMP stack for the intranet, an Asterisk PBX, a Postfix/Dovecot mail server, a Squid proxy cache, BIND DNS, and a DHCP server.

The big problem I saw with the setup was the Windows XP virtual machine running on the existing server as a libvirt/KVM guest. It was an emergency configuration, provisioned to fulfil a temporary need which, as usual, became permanent. When I originally decided on the oVirt configuration I presumed that, it being KVM based, an import would be a simple case of importing an image file. Unfortunately, with the current version that is not the case, although I believe it is planned for future releases. That didn't help me now, so with Google being my friend I searched around a bit. I found clumsy solutions using cat (which I tried without luck), and other solutions such as v2v required the original guest to be running, which wasn't an option for me. So I had a little play around and actually ended up with a working image.

First thing I did was convert the qcow2 image to a raw image using qemu-img convert (-p gives a progress report):

qemu-img convert -p -O raw windowsxp.img windowsxp-raw.img

You can confirm the details of the new image, or look at the existing one, using the command:

qemu-img info windowsxp-raw.img
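
The output should look something like this (the sizes here are purely illustrative, yours will differ); the important line is that the file format is now raw:

image: windowsxp-raw.img
file format: raw
virtual size: 20G (21474836480 bytes)
disk size: 12G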

I worked this out by watching the process that occurred during an export/import within oVirt; this was on a temporary oVirt machine I used to pre-build the servers before they arrived at their office.

Next I created a guest within oVirt and created a new disk for the machine. At this point the disk files didn't exist, so I powered on the virtual machine, and once I was happy the files had been created I powered the VM off. I verified that the files existed by browsing the datastore from a console, using the UUID of the disk image which was created and looking in the directory of that name. For example, if the disk image had a UUID of abcdefgh-1234-5678-90ab-abcdefghjklmn on a datastore called vmstore, the path would be something like:

/data/vmstore/uuidofdatastore/images/abcdefgh-1234-5678-90ab-abcdefghjklmn

In this directory there will be several files, but the one without an extension is your disk image; you can probably work this out just by looking at the sizes of the files.
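
For example, a quick listing (using the placeholder paths and UUID from above) should make the image file obvious from its size:

ls -lh /data/vmstore/uuidofdatastore/images/abcdefgh-1234-5678-90ab-abcdefghjklmn/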

Working on the same principle as the cat method I mentioned earlier (whose sanity I wasn't too keen on), I decided to try trusty old dd:

dd if=/windowsxp-raw.img of=/data/vmstore/uuidofdatastore/images/abcdefgh-1234-5678-90ab-abcdefghjklmn/abcdefgh-1234-5678-90ab-abcdefghjklmn bs=4M
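
Once the dd has finished it is worth checking that the ownership of the file is still what oVirt expects; on my setup the image files are owned by vdsm:kvm, so if anything looks off, something along these lines (same placeholder path as above) should put it right:

chown vdsm:kvm /data/vmstore/uuidofdatastore/images/abcdefgh-1234-5678-90ab-abcdefghjklmn/abcdefgh-1234-5678-90ab-abcdefghjklmn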

Once this completed I powered on the virtual machine, and to my surprise I was eventually presented with the Windows XP desktop I was expecting.

I downloaded the virtio drivers ISO from http://alt.fedoraproject.org/pub/alt/virtio-win/latest/, attached it to the virtual machine, and allowed the hardware to be detected and reinstalled correctly.

I hope this helps anyone else in a similar situation.

New hosting for my blog

After one of my LUG colleagues mentioned the BigV service from Bytemark, I just had to try it. So today I prised open the wallet and set myself up an account.

For £10+VAT per month you get a VM with 25GB of disk space, 1GiB of RAM, 1 CPU (a 2.2GHz AMD, it seems) and 200GB of bandwidth per month. All seems pretty reasonable. Their software is a breeze to use, and being command line driven it's my kind of thing. The underlying tech is KVM/QEMU, and I'm guessing some kind of OpenStack(y) type goodness, as you can spin up a VM from their selection of images or connect your own ISO.

10 mins or so later and I have my very own install with a public IP. Awesomeness.

Now to figure out what to do with it…

Migrate my blog over of course, and so here we are.

I ran through a few steps to secure the host (iptables etc.) and also updated the system; I noticed that the fastest mirror was Bytemark – no surprises there.

Seems pretty good so far, we’ll see how it goes.

If you are interested in setting up your own account, Bytemark's BigV pages have the details.


GlusterFS Quickstart Howto on Fedora

Here's a (very) quick howto showing how to get GlusterFS up and running on Fedora. It's probably better suited to a distro like CentOS/RHEL, Ubuntu Server LTS or Debian stable, but where's the fun in knowing it won't break? Most of these commands are transferable to other distros though; it's Fedora-centric due to the use of yum, SELinux and systemd (systemctl).

Pre-requisites:
2x (or more) servers running Fedora. I used Fedora 18 in this example, but I'm sure it shouldn't change a great deal for newer releases; if it does I'll try to update this doc. The idea behind this setup is to use 2 servers as hypervisors (KVM) with local storage but resilience. I won't be covering the virtualisation side, purely storage, so VMs will be adequate for this setup.

So at this point we should have 2 clean installs of Fedora on 2 servers, fully updated.
For argument's sake we'll call them host1 and host2, with IP addresses of 192.168.1.50 and 192.168.1.51 respectively.
(You will need to add the hostnames and IPs to /etc/hosts if you don't use DNS.)

Let's disable SELinux and the firewall for now to make this process easier:
sed -i s/SELINUX=enforcing/SELINUX=disabled/g /etc/selinux/config
setenforce 0
systemctl stop firewalld.service
systemctl disable firewalld.service

Now install the Gluster server packages (plus nfs-utils) and start the services:

yum install nfs-utils glusterfs-server
systemctl start glusterd.service
systemctl start rpcbind.service
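
If you want glusterd and rpcbind to come back after a reboot, enable them as well:

systemctl enable glusterd.service
systemctl enable rpcbind.service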

OK, so now that everything is installed we're ready to start setting up Gluster. Let's create a directory on both servers:

root@host1 ~ # mkdir /gluster
root@host2 ~ # mkdir /gluster

Now let's get a volume created (do this on only one host):

root@host1 ~ # gluster peer probe host2
root@host1 ~ # gluster volume create vol1 replica 2 host1:/gluster host2:/gluster

These commands tell the 2 hosts to become “friends” and then create a GlusterFS volume called vol1 with 2 replicas (hosts); you will need to change this to match the number of hosts you run, and the paths used on each host.
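
If you want to confirm the peering worked before going any further, this should show the other host as a connected peer:

root@host1 ~ # gluster peer status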

When you run the last command above it will tell you that the volume creation has been successful and that the volume needs to be started before the data can be accessed. Let's do that:

root@host1 ~ # gluster volume start vol1
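
At this point the volume should show as started, along with its two bricks:

root@host1 ~ # gluster volume info vol1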

So now that we have a functioning Gluster cluster, we need to mount it somewhere.

root@host1 ~ # yum install glusterfs-fuse glusterfs

This installs the relevant software to allow us to mount the volume (do it on both hosts). Let's create the directories and mount:

root@host1 ~ # mkdir /store
root@host2 ~ # mkdir /store

root@host1 ~ # mount -t glusterfs host1:/vol1 /store
root@host2 ~ # mount -t glusterfs host2:/vol1 /store

You should now be able to create files in /store on host1 and have them be visible in /store on host2. Notice how we mounted the volume on the same machine that hosts it; this way we are always writing to local storage and syncing out.
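
A quick sanity check (the filename is just an example):

root@host1 ~ # touch /store/hello
root@host2 ~ # ls /store

If you want the mounts to come back at boot, an /etc/fstab entry along these lines on each host should do the job (_netdev makes sure the network is up before mounting):

host1:/vol1  /store  glusterfs  defaults,_netdev  0 0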

Update: the same instructions will work on CentOS/RHEL/Scientific Linux; you will just need to install the EPEL yum repository first – http://fedoraproject.org/wiki/EPEL

Post Virtualisation Talk

Well, I finally got there: after a bout of illness caused me to postpone the talk, I delivered it last night.

The equipment performed flawlessly, thankfully. After my Asterisk talk you'd think I'd have learned to turn off DNS lookups in SSH, though.

The only downside was that when I ran through the talk beforehand I was installing packages, so I was actually tight on time; on the night I didn't think about the fact they were already installed, so I finished a good 20 minutes early. That would have given me time to talk about something I really wanted to include: GlusterFS.

Must say I’m looking forward to the Christmas do on the 10th, and even more so to Tim’s roll your own distro in January.

s’all for now.

Virtualisation talk

So this coming Monday will be the 2 year anniversary of the Rossendale Linux User Group, not too shabby really. Not that I'm marking the occasion or anything, but I'm going to be running a talk/demo on virtualisation under Linux. It seems to be the pet project I've worked on the most, so I have a fairly polished setup to talk about.
But why make it easy on myself? I normally use CentOS for server builds but just for a change, as it seems to be the way I’m heading, I decided to give Ubuntu a shot.

Turned out quite well, and I seem to have the same polished end product.

Monday will be part one of the talk, and I'm sure to come up with a part 3. I've already decided that part 2 will be on OpenStack, but I may change that to part 3, as the natural progression of virtualisation would have something in between, such as heartbeat monitoring for high availability or better clustering techniques.

I guess it's a case of seeing how well part 1 goes first!

Oh, and if you are wondering, part 1 covers building a server, creating VMs, bringing a second server online, using shared storage, migrating VMs between hosts, and clustering the storage.

I guess another element to the middle part would be to automate the migrations etc…

See http://rosslug.org.uk/doku.php?id=meetings:12_november_2012 for a fairly comprehensive run-through of the talk.

If anyone reading this is from the Rossendale or East Lancs area and fancies coming along to said talk, then please do, all are welcome!
You can find details of the location of the meetings here: http://rosslug.org.uk/doku.php?id=meetings:venue