
SSH known hosts verification failure one liner



Those who regularly build and rebuild machines or virtual machines on a DHCP network will probably be faced with this quite often. It happens because the known fingerprint recorded for the previous host differs from that of a new host which has acquired the same IP address.

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
Please contact your system administrator.
Add correct host key in /root/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /root/.ssh/known_hosts:66
ECDSA host key for has changed and you have requested strict checking.
Host key verification failed.

There is an option to have SSH ignore these when connecting; however, I find cleaning out the old line before connecting far quicker, and I do this with a sed one-liner.

The line number in the known_hosts file we are interested in can be found at the end of the error message:

Offending ECDSA key in /root/.ssh/known_hosts:66

66 in this case, so we can get sed to simply delete that line using:

sed -i '66d' ~/.ssh/known_hosts

An SSH session can now be opened without Host key verification failure.
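To see the whole thing end to end without touching a real known_hosts, here is a sketch against a throwaway file with made-up host entries; it also pulls the line number out of the error text rather than hard-coding it. (As an aside, `ssh-keygen -R <hostname>` removes entries by hostname instead of line number.)

```shell
# A stand-in for the real ssh error line (the path and line number here are made up)
msg='Offending ECDSA key in /tmp/known_hosts.demo:2'

# Everything after the final colon is the line number
lineno="${msg##*:}"

# Build a throwaway known_hosts with three fake entries
printf 'hostA ssh-ed25519 AAAA1\nhostB ssh-ed25519 AAAA2\nhostC ssh-ed25519 AAAA3\n' > /tmp/known_hosts.demo

# Delete that line in place, exactly as with ~/.ssh/known_hosts above
sed -i "${lineno}d" /tmp/known_hosts.demo
```

Running this leaves the hostA and hostC entries in the file, with the hostB entry gone.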

Hope this helps someone.

Monit – monitor your processes and services simply


Monit is an application I’ve been meaning to set up for a while. I was first made aware of it by a chap I had the pleasure of talking to at OggCamp this year; he seemed to use it to the nth degree, monitoring files and services within Docker containers to ensure a development environment was as it should be. This was far more than I really needed, but the monitoring of services definitely caught my attention, so I set about installing and configuring it. I was pleasantly surprised with the result, and with how simple the whole process was.

Scenario: small hosting server with low spec, occasionally gets hit with a large amount of traffic resulting in either apache or mysql dying.

Configuration: In this instance a CentOS 6 server with a standard LAMP stack, but I’m sure this will work with other distributions such as Fedora or CentOS 7, just replacing the relevant commands with their systemd-based equivalents.


First off, let’s install Monit. It comes from the RPMforge (http://repoforge.org/) repositories, so if you haven’t already got them installed, do so:

yum localinstall http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.3-1.el6.rf.x86_64.rpm

It would be worth checking the website to ensure that rpm version is correct (http://repoforge.org/use/)

Once that’s installed we can install the Monit software:

yum install monit

Let’s enable the service to start on boot, and also start it to ensure it works OK before configuring:

chkconfig monit on

service monit start

Note: if using a systemd-based distro such as Fedora or CentOS 7, then systemctl commands will need to be used instead of the above (systemctl enable monit and systemctl start monit).

If all is good, we can now tailor the configuration to our needs. Monit takes the common approach to config files: a master config at /etc/monit.conf, which also reads in files from /etc/monit.d/. The only directive I changed in the master config file was to uncomment the following line:

set logfile syslog facility log_daemon

which turns on logging. Whether this is needed further down the line is to be decided, but for now it’s great to have during configuration.

Next we can create some config files in /etc/monit.d/ for our services (Apache httpd and MySQL in this case):

vi /etc/monit.d/mysqld.conf

check process mysqld with pidfile /var/run/mysqld/mysqld.pid
start program = "/sbin/service mysqld start"
stop program = "/sbin/service mysqld stop"
if failed host port 3306 then restart
if 5 restarts within 5 cycles then timeout

vi /etc/monit.d/httpd.conf

check process httpd with pidfile /var/run/httpd/httpd.pid
start program = "/sbin/service httpd start"
stop program = "/sbin/service httpd stop"
if failed host port 80 then restart
if 5 restarts within 5 cycles then timeout

These two config files watch the pid files for activity outside Monit’s control (namely, if the process stops without Monit stopping it) and take action based on the status. They also monitor the respective TCP ports for the applications: 3306 for mysqld and 80 for Apache.
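If you want to be told when Monit takes action, it can also send mail. A minimal sketch for /etc/monit.conf follows; the SMTP host and address are placeholders, and this assumes a reachable mail relay:

```
set mailserver smtp.example.com port 25
set alert admin@example.com
```

With a global set alert, every monitored service sends its events to that address; alerts can also be scoped per check block if that gets too noisy.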

Note: these configurations should also work with Debian based distributions but check the location of the pid files, also the service names are slightly different (mysql and apache2 if memory serves correctly).

Let’s restart Monit and run some tests. For this I will run a tail on the log file while stopping services and killing processes:

tailf /var/log/messages

service monit restart

[root@web1 monit.d]# service monit restart
Stopping monit: Dec 31 12:20:56 web1 monit[5338]: Shutting down monit HTTP server
Dec 31 12:20:56 web1 monit[5338]: monit HTTP server stopped
Dec 31 12:20:56 web1 monit[5338]: monit daemon with pid [5338] killed
Dec 31 12:20:56 web1 monit[5338]: 'web1' Monit stopped
[ OK ]
Starting monit: Starting monit daemon with http interface at [localhost:2812]
[ OK ]
Dec 31 12:20:57 web1 monit[6232]: Starting monit daemon with http interface at [localhost:2812]
[root@web1 monit.d]# Dec 31 12:20:57 web1 monit[6236]: Starting monit HTTP server at [localhost:2812]
Dec 31 12:20:57 web1 monit[6236]: monit HTTP server started
Dec 31 12:20:57 web1 monit[6236]: 'web1' Monit started

Lets stop mysqld

service mysqld stop

[root@web1 monit.d]# service mysqld stop
Stopping mysqld: [ OK ]


Within seconds an entry in the log file is presented:

Dec 31 12:22:57 web1 monit[6236]: 'mysqld' process is not running
Dec 31 12:22:57 web1 monit[6236]: 'mysqld' trying to restart
Dec 31 12:22:57 web1 monit[6236]: 'mysqld' start: /sbin/service

[root@web1 monit.d]# service mysqld status
mysqld (pid 6526) is running...

OK, so that worked nicely. Let’s try something a little less clean:

[root@web1 monit.d]# ps -ef|grep mysqld
root 6679 1 0 12:23 ? 00:00:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --socket=/var/lib/mysql/mysql.sock --pid-file=/var/run/mysqld/mysqld.pid --basedir=/usr --user=mysql
mysql 6867 6679 1 12:23 ? 00:00:00 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --log-error=/var/log/mysqld.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/lib/mysql/mysql.sock

[root@web1 monit.d]# kill 6867
[root@web1 monit.d]# service mysqld status
mysqld dead but subsys locked

And as if by magic:

Dec 31 12:25:59 web1 monit[6236]: 'mysqld' process is not running
Dec 31 12:25:59 web1 monit[6236]: 'mysqld' trying to restart
Dec 31 12:25:59 web1 monit[6236]: 'mysqld' start: /sbin/service

Brilliant, it performed exactly as expected. I won’t bore you with the detail, but Apache restarted just the same.
And that is it: a really easy-to-configure monitoring solution. Here, however, I was just scratching the surface of the monitoring capabilities. Take a look at the Monit website and wiki for more details on the vast array of configurables: http://mmonit.com/monit/documentation/ http://mmonit.com/monit/

Gluster, CIFS, ZFS – kind of part 2

A while ago I put together a post detailing the installation and configuration of 2 hosts running GlusterFS, which was then presented as CIFS-based storage.


This post gained a bit of interest through the comments and social networks. One of the comments was from John Mark Walker, suggesting I look at the samba-gluster VFS method instead of mounting the filesystem using FUSE (directly accessing the volume from Samba, instead of mounting then presenting it). On top of this I’ve also been looking quite a bit at ZFS, whereas previously I had a Linux RAID as the base filesystem. So here is a slightly different approach to my previous post.

Getting prepared

As before, we’re looking at 2 hosts, virtual in the case of this build but more than likely physical in a real world scenario, either way it’s irrelevant. Both of these hosts are running CentOS 6 minimal installs (I’ll update to 7 at a later date), static IP addresses assigned and DNS entries created. I’ll also be running everything under a root session, if you don’t do the same just prefix the commands with sudo. For purposes of this I have also disabled SELINUX and removed all firewall rules. I will one day leave SELINUX enabled in this configuration but for now lets leave it out of the equation.

In my case these names and addresses are as follows:

arcstor01 –

arcstor02 –

First off, let’s get the relevant repositories installed (EPEL, ZFS and Gluster):

yum localinstall --nogpgcheck http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
yum localinstall --nogpgcheck http://archive.zfsonlinux.org/epel/zfs-release.el6.noarch.rpm
curl -o /etc/yum.repos.d/gluster.repo http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo
curl -o /etc/yum.repos.d/glusterfs-samba-epel.repo http://download.gluster.org/pub/gluster/glusterfs/samba/EPEL.repo/glusterfs-samba-epel.repo

Local filesystem

As previously mentioned, this configuration will be hosted from 2 virtual machines; each will have 3 disks: 1 for the OS, and the other 2 to be used in a ZFS pool.

First off we need to install ZFS itself. Once you have the above zfs-release repo installed, this can be done with the following command:

yum install kernel-devel zfs

Perform this on both hosts.

We can now create a ZFS pool. In my case the disk device names are vdX, but they could be sdX;

fdisk -l

can help you identify the device names; whatever they are, just replace them in the following commands.

Create a ZFS pool

zpool create -f  -m /gluster gluster mirror /dev/vdb /dev/vdc

This command will create a ZFS pool mounted at /gluster. Without -m /gluster it would mount at /{poolname}; while in this case that is the same thing, I added the option for clarity. The volume name is gluster, and the redundancy level is mirrored, which is similar to RAID1. There are a number of RAID levels available in ZFS, all best explained here: http://www.zfsbuild.com/2010/05/26/zfs-raid-levels/. The final element of the command is which devices to build the pool from, in our case /dev/vdb and /dev/vdc. The -f option forces creation of the pool; this removes the need to create partitions prior to creating the pool.

Running the command

zpool status

will return the status of the created pool, which if successful should look something similar to:

[root@arcstor01 ~]# zpool status
  pool: gluster
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	gluster     ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    vdb1    ONLINE       0     0     0
	    vdc1    ONLINE       0     0     0

errors: No known data errors

A quick ls and df will also show us that the /gluster mountpoint is present and the pool is mounted. The df should show the size as being half the sum of both drives in the pool:

[root@arcstor01 ~]# ls /
 bin boot cgroup dev etc gluster home lib lib64 lost+found media mnt opt proc root sbin selinux srv sys tmp usr var
 [root@arcstor01 ~]# df -h
 Filesystem Size Used Avail Use% Mounted on
 /dev/vda1 15G 1.2G 13G 9% /
 tmpfs 498M 0 498M 0% /dev/shm
 gluster 20G 0 20G 0% /gluster

If this is the case, rinse and repeat on host 2. If that is also successful, we now have a resilient base filesystem on which to host our gluster volumes. There is a bucketload more to ZFS and its capabilities, but that is way outside the confines of this configuration; well worth looking into, though.

Glusterising our pool

So now we have a filesystem, let’s make it better. Next up: installing GlusterFS, enabling it, then preparing the directories. This part is pretty much identical to the previous post:

yum install glusterfs-server -y

chkconfig glusterd on

service glusterd start

mkdir  -p /gluster/bricks/share/brick1

This needs to be done on both hosts.

Now, on host1 only, let’s make the two nodes friends, then create and start the gluster volume:

# gluster peer probe arcstor02
peer probe: success.

# gluster vol create share replica 2 arcstor01:/gluster/bricks/share/brick1 arcstor02:/gluster/bricks/share/brick1
volume create: share: success: please start the volume to access data

# gluster vol start share
volume start: share: success

[root@arcstor01 ~]# gluster vol info share

Volume Name: share
Type: Replicate
Volume ID: 73df25d6-1689-430d-9da8-bff8b43d0e8b
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: arcstor01:/gluster/bricks/share/brick1
Brick2: arcstor02:/gluster/bricks/share/brick1

If all goes well above, we should have a gluster volume ready to go; this volume will be presented via Samba directly. This configuration also requires a locally available shared area, so we will create another gluster volume to mount locally, in which to store lockfiles and shared config files.

mkdir  -p /gluster/bricks/config/brick1
gluster vol create config replica 2 arcstor01:/gluster/bricks/config/brick1 arcstor02:/gluster/bricks/config/brick1
gluster vol start config
mkdir  /opt/samba-config
mount -t glusterfs localhost:config /opt/samba-config

The share volume could probably be used by specifying a different path in the samba config, but for simplicity we’ll keep them separate for now.
The mountpoint for /opt/samba-config will need to be added to fstab to ensure it mounts at boot time.

echo "localhost:config /opt/samba-config glusterfs defaults,_netdev 0 0" >>/etc/fstab

should take care of that; remember this needs doing on both hosts.

Samba and CTDB

We now have a highly resilient datastore which can withstand both disk and host downtime, but we need to make that datastore available for consumption, and highly available in the process. For this we will use CTDB, as in the previous post. CTDB is a clustered version of the TDB database which sits under Samba. The majority of this section is the same as the previous post, except for the extra packages and a slightly different Samba config. Let’s install the required packages:

yum -y install ctdb samba samba-common samba-winbind-clients samba-client samba-vfs-glusterfs

For the majority of config files, we will create them in our shared config volume and symlink them to their expected locations. The first file we need to create is /etc/sysconfig/ctdb, but we will do this as /opt/samba-config/ctdb and link it afterwards.

Note: The files which are created in the shared area should be done only on one host, but the linking needs to be done on both.

vi /opt/samba-config/ctdb
 #CIFS only

We’ll need to remove the existing file in /etc/sysconfig then we can create the symlink

rm /etc/sysconfig/ctdb
ln -s /opt/samba-config/ctdb /etc/sysconfig/ctdb
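For reference, a typical CIFS-only /etc/sysconfig/ctdb for a two-node setup like this looks something like the following. This is a sketch rather than the exact file from this build; in particular the recovery lock path is an assumption, it just needs to live somewhere both nodes can see, such as the shared config volume:

```
CTDB_RECOVERY_LOCK=/opt/samba-config/lockfile
CTDB_NODES=/etc/ctdb/nodes
CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
CTDB_MANAGES_SAMBA=yes     #CIFS only
CTDB_MANAGES_WINBIND=yes   #CIFS only
```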

Although we are using Samba, the service we will be running is CTDB, which provides the extra clustering components. We need to stop and disable the Samba services and enable the CTDB ones:

service smb stop
chkconfig smb off
chkconfig ctdb on

With this configuration being a cluster presenting essentially a single datapoint, we should really use a single entry point. For this, a third “floating” or virtual IP address is employed; more than one could be used, but let’s keep this simple. We also need to create a ctdb config file containing a list of all the nodes in the cluster. Both of these files need to be created in the shared location:

vi /opt/samba-config/public_addresses
vi /opt/samba-config/nodes
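For illustration (these addresses are placeholders, not the ones from this build): nodes holds one cluster-node IP per line, and public_addresses holds the floating IP with its netmask and interface:

```
# /opt/samba-config/nodes
192.168.0.11
192.168.0.12

# /opt/samba-config/public_addresses
192.168.0.10/24 eth0
```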

They both then need to be linked to their expected locations; neither of these exists already, so nothing needs removing first.

ln -s /opt/samba-config/nodes /etc/ctdb/nodes
ln -s /opt/samba-config/public_addresses /etc/ctdb/public_addresses

The last step is to modify the Samba configuration to present the volume via CIFS. I seemed to have issues using a linked file for Samba, so I will only use the shared area for storing a copy of the config, which can then be copied to both nodes to keep them identical.

cp /etc/samba/smb.conf /opt/samba-config/

Let’s edit that file:

vi /opt/samba-config/smb.conf

Near the top, add the following options:

clustering = yes
idmap backend = tdb2
private dir = /opt/samba-config/

These turn the clustering (CTDB) features on and specify the shared directory where Samba will create lockfiles. You can test starting CTDB at this point to ensure all is working, on both hosts:

cp /opt/samba-config/smb.conf /etc/samba/
service ctdb start

It should start OK; the health status of the cluster can then be checked with:

ctdb status

At this point I was finding that CTDB was not starting correctly. After a little bit of log watching I found an error in the Samba logs suggesting:

Failed to create pipe directory /run/samba/ncalrpc - No such file or directory

Also, to be search-engine friendly, the CTDB logfile was outputting:

50.samba OUTPUT:ERROR: Samba tcp port 445 is not responding

This was a red herring: the port wasn’t responding because the Samba part of CTDB wasn’t starting. 50.samba is a script in /etc/ctdb/events.d/ which actually starts the smb process.

So I created the directory /run/samba and restarted CTDB, and the issue disappeared.

Now we have a started service, we can go ahead and add the configuration for the share. A regular Samba share would look something like:

 [share]
 comment = just a share
 path = /share
 read only = no
 guest ok = yes
 valid users = jon

In the previous post this would have been ideal, as our gluster volume was mounted at /share; but here we are removing a layer and want Samba to talk directly to gluster rather than via the FUSE layer. This is achieved using a VFS object; we installed the samba-vfs-glusterfs package earlier. The configuration within the smb.conf file is also slightly different. Adding the following to our file should enable access to the share volume we created:

 [share]
 comment = gluster vfs share
 path = /
 read only = No
 guest ok = Yes
 kernel share modes = No
 vfs objects = glusterfs
 glusterfs:loglevel = 7
 glusterfs:logfile = /var/log/samba/glusterfs-testvol.log
 glusterfs:volume = share

Notice the glusterfs: options near the bottom; these are specific to the glusterfs VFS object which is called further up (vfs objects = glusterfs). Another point to note is that the path is /; this is relative to the volume rather than the filesystem, so a path of /test would be a test directory inside the gluster volume.

We can now reload the Samba config; let’s restart for completeness (on both nodes):

service ctdb restart

From a cifs client you should now be able to browse to \\\share (or whatever IP you specified as the floating IP).



All done!

To conclude: here we have created a highly resilient, highly available, very scalable storage solution using some fantastic technologies. We have created a single access method (CIFS on a floating IP) to a datastore which is stored on multiple hosts, which in turn store upon multiple disks. Talk about redundancy!


Upgrade CentOS 6 to 7 with Upgrade Tools

I decided to try the upgrade process from EL 6 to 7 on the servers I used in my previous blog post “Windows (CIFS) fileshares using GlusterFS and CTDB for Highly available data”

Following the instructions here I found the process fairly painless. However, there were one or two little niggles which caused various issues, which I will detail here.

The servers were minimal CentOS 6.5 installs, with Gluster volumes shared via CTDB. The extra packages installed had mostly come from the EPEL or GlusterFS repositories, and I believe this is where the issues arise: third-party repositories.

My initial attempt saw me running:

preupg -l

which gave me the output: CentOS6_7

This meant that I had CentOS 6 to 7 upgrade content available to me; this could now be utilised by running:

preupg -s CentOS6_7

which then ran through the pre-upgrade checks and produced a report on whether my system could, or should, be upgraded.

The results came back with several informational items, but more importantly 4 “needs_action” items.

These included “Packages not signed by CentOS”, “Removed RPMs”, “General” and “Content for enabling and disabling services based on CentOS 6 system”.

Firing up the links text browser and pointing it at the output preupgrade/result.html file, I took a deeper look into the above details.

“Packages not signed by CentOS”, as expected, covered the third-party installed applications (in my case the glusterfs RPMs and epel-release), which were to be expected. The other sections didn’t present any great worries, so I pressed on with the upgrade:

centos-upgrade-tool-cli --network 7 --instrepo=http://mirror.centos.org/centos/7/os/x86_64/

Running this takes the data from the previous report and runs an upgrade process based on it. Interestingly, the first part of the process (redhat_upgrade_tool.yum) checks the yum repos that are configured; EPEL “seems OK”, whereas the glusterfs-epel ones don’t. This called for a little more investigation, as on my first upgrade trial these packages failed to upgrade; luckily I took a snapshot of the machine before upgrading, so I could try again.

Strangely, even though the $basearch and $releasever variables were used in the config file, manually changing $releasever to 7 (as $releasever translates to 7.0) seemed to do the trick. I manually edited the EPEL file too, as this contained epel-6 in the URL. After this I also noticed that the gluster services were no longer listed in the INPLACERISK: HIGH category but had been moved to MEDIUM.

Continue with upgrade [Y/N]?

yes please!

The upgrade tool then goes through the process of downloading the boot images and packages ready for the upgrade. For some reason I got a message about the CentOS 7 GPG key being listed but not installed, so while I hunted out the key to import, I re-ran the upgrade tool with the --nogpgcheck switch to skip that check. The tool then finished successfully and prompted me with:

Finished. Reboot to start upgrade.

Ok then, here goes….

Bringing up the console to that machine showed it booting into the images downloaded in preparation for the upgrade: mostly a screen of RPM package updates and reconfiguration. The update completed fairly quickly, then automatically rebooted.

As mentioned above, this was the second attempt at an upgrade on this machine; the first time, I was presented with the emergency login screen after reboot. This turned out, strangely, to be because the glusterfs packages hadn’t been upgraded, so I logged onto the console, brought up eth0 and ran yum update. After a reboot I was faced with a working system.

On the second attempt I made sure the gluster packages were included in the upgrade, so after crossing my fingers, the reboot ended with a login prompt. Great news!

The only issue I faced was Gluster volumes not mounting at boot time, but I was sure this was a systemd configuration issue which could easily be rectified, and it doesn’t really change the success of the upgrade process.

All in all, good work from the Red Hat and CentOS teams; I’m happy with the upgrade process. It’s not too far removed from FedUp in Fedora, on which I’m sure it’s based.

Update: The issues I faced with my gluster volumes not mounting locally were resolved by adding the _netdev option after defaults in fstab, e.g.:

localhost:data1 /data/data1 glusterfs defaults,_netdev 0 0

All that was occurring was that systemd was trying to mount the device as a local filesystem, which would run before the glusterd service had started. Adding this option essentially delays the mounting until the network target is complete.

The other issue that became apparent after I resolved the gluster mounting issue was the CTDB service not running once boot had completed. This was due to the CTDB service trying to start before the filesystems were active; I modified the ctdb.service file to ensure that it only started after gluster had started, which seemed to be enough. I guess that getting it to start after the filesystems had mounted would be better, but for now it works. To do this I modified the /usr/lib/systemd/system/ctdb.service file and changed the line:


in the [Unit] section to

After=network.target glusterd.service
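As an aside, rather than editing the packaged unit file (which a later ctdb package update would overwrite), the same dependency could be carried in a systemd drop-in. This is a sketch using the dependency line above; the drop-in filename is arbitrary:

```
# /etc/systemd/system/ctdb.service.d/gluster.conf
[Unit]
After=network.target glusterd.service
```

A systemctl daemon-reload is needed after creating the drop-in.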


Installing dig on a CentOS or Red Hat machine

Gone are the days when we reach for nslookup for DNS resolution testing; the new(ish) kid on the block is dig. Although maybe not installed by default, it can be installed quite easily from yum; however, it comes bundled with a number of tools, so the package name isn’t all that obvious.

[root@server ~]# yum install bind-utils

will do the trick. Now, how to use it?

[root@server ~]# dig @nameserver address.com

Replace nameserver with your DNS nameserver of choice. For example:

[root@server ~]# dig @ google.com

will use Google’s DNS server to resolve google.com.

GlusterFS Quickstart Howto on Fedora

GlusterHere’s a (very) quick howto showing how to get GlusterFS up and running on Fedora. Its probably better situated on a distro like CentOS/RHEL, Ubuntu Server LTS or Debian stable but where’s the fun in knowing it won’t break? Most of these commands are transferrable to other distros though, its Fedora centric due to the use of yum, selinux and systemd (systemctl).

You’ll need 2x (or more) servers running Fedora. I used 18 in this example, but I’m sure it shouldn’t change a great deal for newer releases; if it does, I’ll try to update this doc. The idea behind this setup is to use 2 servers as hypervisors (KVM) with local storage but resilience. I won’t be covering the virtualisation side, purely storage, so VMs will be adequate for this setup.

So at this point we should have 2 clean installs of Fedora on 2 servers fully updated.
For argument’s sake we’ll call them host1 and host2, with IP addresses of and respectively.
(you will need to add hostnames and IPs to /etc/hosts if you don’t use DNS)

Let’s disable SELinux and the firewall for now to make this process easier:
sed -i s/SELINUX=enforcing/SELINUX=disabled/g /etc/selinux/config
setenforce 0
systemctl stop firewalld.service
systemctl disable firewalld.service

Then install GlusterFS and start the required services:

yum install nfs-utils glusterfs-server
systemctl start glusterd.service
systemctl start rpcbind.service

OK so now we’re installed we’re ready to start setting up Gluster, lets create a directory on both servers

root@host1 ~ # mkdir /gluster
root@host2 ~ # mkdir /gluster

Now let’s get a volume created. Do this on only 1 host:

root@host1 ~ # gluster peer probe host2
root@host1 ~ # gluster volume create vol1 replica 2 host1:/gluster host2:/gluster

These commands tell the 2 hosts to become “friends”, then create a glusterfs volume called vol1 with 2 replicas (hosts). You will need to change this to match the number of hosts you run, and the paths to the volume on each host.

When you run the last command above, it will tell you that your volume creation has been successful and that the volume needs to be started to access data. Let’s do that:

root@host1 ~ # gluster volume start vol1

So now we have a functioning gluster cluster; we need to mount it somewhere.

root@host1 ~ # yum install glusterfs-fuse glusterfs

installs the relevant software to allow us to mount the volume. Let’s create directories and mount:

root@host1 ~ # mkdir /store
root@host2 ~ # mkdir /store

root@host1 ~ # mount -t glusterfs host1:/vol1 /store
root@host2 ~ # mount -t glusterfs host2:/vol1 /store

You should now be able to create files in /store on host1 and have them be visible in /store on host2. Notice how we mounted the volume on the same machine that hosts it; this way we are always writing to local storage and syncing out.
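To make these mounts survive a reboot, an fstab entry per host does the job. A sketch for host1 follows (host2 would reference host2:/vol1); the _netdev option is an assumption here, added so mounting waits until networking is up:

```
# /etc/fstab
host1:/vol1 /store glusterfs defaults,_netdev 0 0
```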

Update: the same instructions will work on CentOS/RHEL/Scientific Linux; you will just need to install the EPEL yum repositories first: http://fedoraproject.org/wiki/EPEL

Virtualisation talk

So this coming Monday will be the 2-year anniversary of the Rossendale Linux User Group; not too shabby really. We’re not marking the occasion or anything, but I’m going to be running a talk/demo on virtualisation under Linux. It seems to be the pet project I’ve worked on the most, so I have a fairly polished setup to talk about.
But why make it easy on myself? I normally use CentOS for server builds, but just for a change, as it seems to be the way I’m heading, I decided to give Ubuntu a shot.

It turned out quite well, and I seem to have the same polished end product.

Monday will be part one of the talk, and I’m sure to come up with a part 3. I’ve already decided that part 2 will be on OpenStack, but I may change that to part 3, as the natural progression of virtualisation would have something in between, such as heartbeat monitoring for high availability or better clustering techniques.

I guess it’s a case of seeing how well part 1 goes first!

Oh, and if you are wondering, part 1 contains: building a server, creating VMs, bringing a second server online, using shared storage, migrating VMs between hosts, and clustering the storage.

I guess another element of the middle part would be automating the migrations.

See here http://rosslug.org.uk/doku.php?id=meetings:12_november_2012 for a fairly comprehensive run-through of the talk.

If anyone reading this is from the Rossendale or East Lancs area and fancies coming along to said talk, then please do; all are welcome!
You can find details of the location of the meetings here: http://rosslug.org.uk/doku.php?id=meetings:venue