
Monit – monitor your processes and services simply


Monit is an application I’ve been meaning to set up for a while. I was first made aware of it by a chap I had the pleasure of talking to at OggCamp this year; he seemed to use it to the nth degree, monitoring files and services within Docker containers to ensure a development environment was as it should be. That was far more than I really needed, but the monitoring of services definitely caught my attention, so I set about installing and configuring it. I was pleasantly surprised with the result, and with how simple the whole process was.

Scenario: a small, low-spec hosting server that occasionally gets hit with a large amount of traffic, resulting in either Apache or MySQL dying.

Configuration: in this instance a CentOS 6 server with a standard LAMP stack, but I’m sure this will work with other distributions such as Fedora or CentOS 7, replacing the relevant commands with their systemd equivalents.


First off let’s install Monit. It comes from the RPMforge (http://repoforge.org/) repositories, so if you haven’t already got them installed, do so:

yum localinstall http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.3-1.el6.rf.x86_64.rpm

It’s worth checking the website (http://repoforge.org/use/) to ensure that RPM version is still current.

Once that’s installed we can install the Monit software itself:

yum install monit

Let’s enable the service to start on boot, and also start it now to make sure it works before configuring:

chkconfig monit on

service monit start

Note: if using a systemd-based distro such as Fedora or CentOS 7, systemctl commands will need to be used instead (systemctl enable monit and systemctl start monit).

If all is good, we can now tailor the configuration to our needs. Monit uses the common approach for config files: a master config at /etc/monit.conf, which also reads in files from /etc/monit.d/. The only directive I changed in the master config file was to uncomment the following line:

set logfile syslog facility log_daemon

This turns on logging; whether it’s needed further down the line is to be decided, but for now it’s great to have during configuration.
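For reference, the drop-in behaviour relies on the include directive in the stock CentOS master config; if your /etc/monit.conf is missing it, one line restores it (a sketch, check your packaged config first):

```
include /etc/monit.d/*
```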

Next we can create some config files in /etc/monit.d/ for our services (Apache httpd and MySQL in this case):

vi /etc/monit.d/mysqld.conf

check process mysqld with pidfile /var/run/mysqld/mysqld.pid
start program = "/sbin/service mysqld start"
stop program = "/sbin/service mysqld stop"
if failed host localhost port 3306 then restart
if 5 restarts within 5 cycles then timeout
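The port check could equally be done against MySQL’s unix socket, which avoids relying on TCP networking being enabled in MySQL at all. A hedged variant of the last two lines, assuming the default CentOS socket path:

```
if failed unixsocket /var/lib/mysql/mysql.sock then restart
if 5 restarts within 5 cycles then timeout
```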

vi /etc/monit.d/httpd.conf


check process httpd with pidfile /var/run/httpd/httpd.pid
start program = "/sbin/service httpd start"
stop program = "/sbin/service httpd stop"
if failed host localhost port 80 then restart
if 5 restarts within 5 cycles then timeout

These two config files watch the pid files for activity outside Monit (namely, the process stopping without Monit stopping it) and take action based on the status. They also monitor the applications’ respective TCP ports: 3306 for mysqld and 80 for Apache.
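Monit can also notify as well as restart. As a sketch (the mail relay and recipient below are placeholders, not from this setup), adding these directives to the master config would email an alert on every monitored event:

```
set mailserver smtp.example.com port 25
set alert admin@example.com
```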

Note: these configurations should also work with Debian-based distributions, but check the location of the pid files; the service names are also slightly different (mysql and apache2, if memory serves).

Let’s restart Monit and run some tests. For this I’ll tail the log file while stopping services and killing processes:

tailf /var/log/messages

service monit restart

[root@web1 monit.d]# service monit restart
Stopping monit: Dec 31 12:20:56 web1 monit[5338]: Shutting down monit HTTP server
Dec 31 12:20:56 web1 monit[5338]: monit HTTP server stopped
Dec 31 12:20:56 web1 monit[5338]: monit daemon with pid [5338] killed
Dec 31 12:20:56 web1 monit[5338]: 'web1' Monit stopped
[ OK ]
Starting monit: Starting monit daemon with http interface at [localhost:2812]
[ OK ]
Dec 31 12:20:57 web1 monit[6232]: Starting monit daemon with http interface at [localhost:2812]
[root@web1 monit.d]# Dec 31 12:20:57 web1 monit[6236]: Starting monit HTTP server at [localhost:2812]
Dec 31 12:20:57 web1 monit[6236]: monit HTTP server started
Dec 31 12:20:57 web1 monit[6236]: 'web1' Monit started

Let’s stop mysqld:

service mysqld stop

[root@web1 monit.d]# service mysqld stop
Stopping mysqld: [ OK ]

Within seconds an entry appears in the log file:

Dec 31 12:22:57 web1 monit[6236]: 'mysqld' process is not running
Dec 31 12:22:57 web1 monit[6236]: 'mysqld' trying to restart
Dec 31 12:22:57 web1 monit[6236]: 'mysqld' start: /sbin/service

[root@web1 monit.d]# service mysqld status
mysqld (pid 6526) is running...

OK, so that worked nicely. Let’s try something a little less clean:

[root@web1 monit.d]# ps -ef|grep mysqld
root 6679 1 0 12:23 ? 00:00:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --socket=/var/lib/mysql/mysql.sock --pid-file=/var/run/mysqld/mysqld.pid --basedir=/usr --user=mysql
mysql 6867 6679 1 12:23 ? 00:00:00 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --log-error=/var/log/mysqld.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/lib/mysql/mysql.sock

[root@web1 monit.d]# kill 6867
[root@web1 monit.d]# service mysqld status
mysqld dead but subsys locked

And as if by magic:

Dec 31 12:25:59 web1 monit[6236]: 'mysqld' process is not running
Dec 31 12:25:59 web1 monit[6236]: 'mysqld' trying to restart
Dec 31 12:25:59 web1 monit[6236]: 'mysqld' start: /sbin/service

Brilliant, it performed exactly as expected. I won’t bore you with the detail, but Apache restarted just the same.
And that’s it: a really easy-to-configure monitoring solution. Here I was only scratching the surface of its capabilities, though. Take a look at the Monit website and documentation for the vast array of configurables: http://mmonit.com/monit/documentation/ and http://mmonit.com/monit/

Barcamp Manchester


I’ve been meaning to write this post for some time, but things have been a little hectic recently. That said, I really wanted to write something, even if it is a little short, about Barcamp Manchester. The event took place over the weekend of 18th & 19th October and was just a fantastic weekend.

After a fairly decent break from the Barcamp scene, Manchester really came back and did it justice. It was set in the fantastic SpacePort building on Lever Street, which is a meeting and workspace. I arrived earlyish on the Saturday morning with fellow members of RossLUG; carting in my bundle of swag, I was shown my table in the main space and set up the Fedora table. As most will know, I am a proud ambassador for the Fedora project, and prouder still that we were able to sponsor the event.


The table looked great and we had plenty of swag and disks to give to the myriad of folks visiting the table throughout the day. There was great conversation with many people, and the traditional barrage of questions from everyone’s friend Gino.



@tommybobbins and I set up the timelapse cameras and live stream for the weekend; the timelapses can be viewed:

Saturday Main room:

Sunday Main room:

Other Room:


I also finally got to meet fellow ambassador Dave ‘Kubblai’ McNulty, who helped man the table and gave me the opportunity to attend a few talks. There was plenty of great content in the talks I attended, so if they are anything to go by, the quality of the content over the weekend was exceptionally high.

All in all a great weekend. Seeing the regular attendees is always a highlight for me, but there was also a high level of interest in the Fedora project at the table, from people who had never heard of it right up to those at expert level. The organisers deserve a huge pat on the back for this event, and I will certainly look forward to next year’s!

Gluster, CIFS, ZFS – kind of part 2

A while ago I put together a post detailing the installation and configuration of two hosts running GlusterFS, which was then presented as CIFS-based storage.


That post gained a bit of interest through the comments and social networks. One of the comments came from John Mark Walker, suggesting I look at the samba-gluster VFS method instead of mounting the filesystem using FUSE (directly accessing the volume from Samba, instead of mounting then presenting it). On top of this I’ve also been looking quite a bit at ZFS, whereas previously I had Linux RAID as the base filesystem. So here is a slightly different approach to my previous post.

Getting prepared

As before, we’re looking at two hosts, virtual in the case of this build but more than likely physical in a real-world scenario; either way it’s irrelevant. Both hosts are running CentOS 6 minimal installs (I’ll update to 7 at a later date), with static IP addresses assigned and DNS entries created. I’ll also be running everything in a root session; if you don’t do the same, just prefix the commands with sudo. For the purposes of this post I have also disabled SELinux and removed all firewall rules. One day I’ll leave SELinux enabled in this configuration, but for now let’s leave it out of the equation.

In my case these names and addresses are as follows:

arcstor01 –

arcstor02 –

First off let’s get the relevant repositories installed (EPEL, ZFS and Gluster):

yum localinstall --nogpgcheck http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
yum localinstall --nogpgcheck http://archive.zfsonlinux.org/epel/zfs-release.el6.noarch.rpm
curl -o /etc/yum.repos.d/gluster.repo http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo
curl -o /etc/yum.repos.d/glusterfs-samba-epel.repo http://download.gluster.org/pub/gluster/glusterfs/samba/EPEL.repo/glusterfs-samba-epel.repo

Local filesystem

As previously mentioned, this configuration is hosted on two virtual machines, each with three disks: one for the OS, and the other two to be used in a ZFS pool.

First off we need to install ZFS itself, once you have the above zfs-release repo installed this can be done with the following command:

yum install kernel-devel zfs

Perform this on both hosts.

We can now create a ZFS pool. In my case the disk device names are vdX, but they could be sdX;

fdisk -l

can help you identify the device names; whatever they are, just substitute them in the following commands.

Create a ZFS pool

zpool create -f  -m /gluster gluster mirror /dev/vdb /dev/vdc

This command creates a ZFS pool mounted at /gluster. Without -m /gluster it would mount at /{poolname}; in this case that’s the same path, I just added the option for clarity. The pool name is gluster and the redundancy level is mirror, which is similar to RAID 1; there are a number of RAID levels available in ZFS, all best explained here: http://www.zfsbuild.com/2010/05/26/zfs-raid-levels/. The final elements of the command are the devices to build the pool on, in our case /dev/vdb and /dev/vdc. The -f option forces creation of the pool, which removes the need to create partitions prior to creating it.

Running the command

zpool status

will return the status of the created pool, which if successful should look something similar to:

[root@arcstor01 ~]# zpool status
  pool: gluster
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        gluster     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            vdb1    ONLINE       0     0     0
            vdc1    ONLINE       0     0     0

errors: No known data errors

A quick ls and df will also show us that the /gluster mountpoint is present and the pool is mounted; df should show the size as roughly half the sum of both drives in the pool:

[root@arcstor01 ~]# ls /
 bin boot cgroup dev etc gluster home lib lib64 lost+found media mnt opt proc root sbin selinux srv sys tmp usr var
 [root@arcstor01 ~]# df -h
 Filesystem Size Used Avail Use% Mounted on
 /dev/vda1 15G 1.2G 13G 9% /
 tmpfs 498M 0 498M 0% /dev/shm
 gluster 20G 0 20G 0% /gluster

If this is the case, rinse and repeat on host 2. If that’s also successful, we now have a resilient base filesystem on which to host our Gluster volumes. There is a bucketload more to ZFS and its capabilities, but that’s way outside the confines of this configuration; well worth looking into though.

Glusterising our pool

So now we have a filesystem, let’s make it better. Next up: installing GlusterFS, enabling it, then preparing the directories. This part is pretty much identical to the previous post:

yum install glusterfs-server -y

chkconfig glusterd on

service glusterd start

mkdir  -p /gluster/bricks/share/brick1

This needs to be done on both hosts.

Now, on host 1 only, let’s make the two nodes friends, then create and start the Gluster volume:

# gluster peer probe arcstor02
peer probe: success.

# gluster vol create share replica 2 arcstor01:/gluster/bricks/share/brick1 arcstor02:/gluster/bricks/share/brick1
volume create: share: success: please start the volume to access data

# gluster vol start share
volume start: share: success

[root@arcstor01 ~]# gluster vol info share

Volume Name: share
Type: Replicate
Volume ID: 73df25d6-1689-430d-9da8-bff8b43d0e8b
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Brick1: arcstor01:/gluster/bricks/share/brick1
Brick2: arcstor02:/gluster/bricks/share/brick1

If all goes well, we should now have a Gluster volume ready to go; this volume will be presented via Samba directly. This configuration also requires a locally available shared area, so we will create another Gluster volume to mount locally, in which to store lock files and shared config files.

mkdir  -p /gluster/bricks/config/brick1
gluster vol create config replica 2 arcstor01:/gluster/bricks/config/brick1 arcstor02:/gluster/bricks/config/brick1
gluster vol start config
mkdir  /opt/samba-config
mount -t glusterfs localhost:config /opt/samba-config

The share volume could probably be used for this by pointing the Samba config at a different path, but for simplicity we’ll keep them separate for now.
The mountpoint for /opt/samba-config will need to be added to fstab to ensure it mounts at boot time.

echo "localhost:config /opt/samba-config glusterfs defaults,_netdev 0 0" >>/etc/fstab

That should take care of it; remember it needs to be done on both hosts.

Samba and CTDB

We now have a highly resilient datastore which can withstand both disk and host downtime, but we need to make it available for consumption, and highly available in the process. For this we will use CTDB, as in the previous post. CTDB is a clustered version of the TDB database which sits underneath Samba. The majority of this section is the same as the previous post, except for the extra packages and a slightly different Samba config. Let’s install the required packages:

yum -y install ctdb samba samba-common samba-winbind-clients samba-client samba-vfs-glusterfs

For the majority of config files we will create them in our shared config volume and symlink them to their expected locations. The first file we need is /etc/sysconfig/ctdb, but we will create it as /opt/samba-config/ctdb and link it afterwards.

Note: The files which are created in the shared area should be done only on one host, but the linking needs to be done on both.

vi /opt/samba-config/ctdb
 #CIFS only
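Only a placeholder comment survives above; for reference, a typical CIFS-only /etc/sysconfig/ctdb for this layout might contain something like the following (an assumption based on stock CTDB settings, with the recovery lock placed on the shared volume created earlier):

```
CTDB_RECOVERY_LOCK=/opt/samba-config/lockfile
CTDB_NODES=/etc/ctdb/nodes
CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
CTDB_MANAGES_SAMBA=yes
```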

We’ll need to remove the existing file in /etc/sysconfig, then we can create the symlink:

rm /etc/sysconfig/ctdb
ln -s /opt/samba-config/ctdb /etc/sysconfig/ctdb

Although we are using Samba, the service we will run is CTDB, which provides the extra clustering components. We need to stop and disable the Samba services and enable the CTDB ones:

service smb stop
chkconfig smb off
chkconfig ctdb on

With this configuration being a cluster with essentially a single data point, we should really use a single entry point. For this a third “floating” or virtual IP address is employed; more than one could be used, but let’s keep this simple. We also need to create a CTDB config file containing a list of all the nodes in the cluster. Both these files need to be created in the shared location:

vi /opt/samba-config/public_addresses
vi /opt/samba-config/nodes
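The contents of these two files aren’t shown above; the usual formats are one private node IP per line in nodes, and address/netmask plus interface in public_addresses. The addresses below are placeholders for illustration only:

```
# /opt/samba-config/nodes - one cluster node per line
10.0.0.11
10.0.0.12

# /opt/samba-config/public_addresses - floating IP, netmask and interface
10.0.0.100/24 eth0
```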

They both then need to be linked to their expected locations; neither of these exists yet, so nothing needs removing first.

ln -s /opt/samba-config/nodes /etc/ctdb/nodes
ln -s /opt/samba-config/public_addresses /etc/ctdb/public_addresses

The last step is to modify the Samba configuration to present the volume via CIFS. I seemed to have issues using a symlinked file for Samba, so I’ll only use the shared area to store a copy of the config, which can then be copied to both nodes to keep them identical.

cp /etc/samba/smb.conf /opt/samba-config/

Let’s edit that file:

vi /opt/samba-config/smb.conf

Near the top, add the following options:

clustering = yes
idmap backend = tdb2
private dir = /opt/samba-config/

These turn on the clustering (CTDB) features and specify the shared directory where Samba will create lock files. You can test starting CTDB at this point to ensure all is working, on both hosts:

cp /opt/samba-config/smb.conf /etc/samba/
service ctdb start

It should start OK; the health status of the cluster can then be checked with:

ctdb status

At this point I found that CTDB was not starting correctly. After a little log-watching I found an error in the Samba logs:

Failed to create pipe directory /run/samba/ncalrpc - No such file or directory

Also, to be search-engine friendly, the CTDB logfile was outputting:

50.samba OUTPUT:ERROR: Samba tcp port 445 is not responding

This was a red herring: the port wasn’t responding because the Samba part of CTDB wasn’t starting. 50.samba is a script in /etc/ctdb/events/ which actually starts the smb process.

So I created the directory /run/samba and restarted ctdb and the issue seems to have disappeared.

Now that we have a started service, we can go ahead and add the configuration for the share. A regular Samba share would look something like:

[share]
 comment = just a share
 path = /share
 read only = no
 guest ok = yes
 valid users = jon

In the previous post this would have been ideal, with our Gluster volume mounted at /share, but here we are removing a layer and want Samba to talk directly to Gluster rather than via the FUSE layer. This is achieved using a VFS object; we installed the samba-vfs-glusterfs package earlier. The configuration within the smb.conf file is also slightly different. Adding the following to our file should enable access to the share volume we created:

[share]
 comment = gluster vfs share
 path = /
 read only = No
 guest ok = Yes
 kernel share modes = No
 vfs objects = glusterfs
 glusterfs:loglevel = 7
 glusterfs:logfile = /var/log/samba/glusterfs-testvol.log
 glusterfs:volume = share

Notice the glusterfs: options near the bottom; these are specific to the glusterfs VFS object called further up (vfs objects = glusterfs). Another point to note is that the path is /. This is relative to the volume rather than the filesystem, so a path of /test would be a test directory inside the Gluster volume.

We can now reload the Samba config; let’s restart for completeness (on both nodes):

service ctdb restart

From a CIFS client you should now be able to browse to \\\share (or whatever IP you specified as the floating IP).



All done!

To conclude: here we have created a highly resilient, highly available, very scalable storage solution using some fantastic technologies. We have created a single access method (CIFS on a floating IP) to a datastore which is stored on multiple hosts, which in turn store it on multiple disks. Talk about redundancy!

Useful links:






Upgrade CentOS 6 to 7 with Upgrade Tools

I decided to try the upgrade process from EL 6 to 7 on the servers I used in my previous blog post, “Windows (CIFS) fileshares using GlusterFS and CTDB for Highly available data”.

Following the instructions here, I found the process fairly painless. However, there were one or two little niggles which caused various issues, which I will detail here.

The servers were minimal CentOS 6.5 installs, with Gluster volumes shared via CTDB. The extra packages installed had mostly come from the EPEL or GlusterFS repositories, and I believe this is where the issues arise: third-party repositories.

My initial attempt saw me running:

preupg -l

which gave me the output: CentOS6_7

This meant that I had CentOS 6 to 7 upgrade content available to me, which could now be utilised by running:

preupg -s CentOS6_7

which then ran through the pre-upgrade checks and produced a report on whether my system could, or should, be upgraded.

The results came back with several informational items, but more importantly 4 “needs_action” items.

These included “Packages not signed by CentOS”, “Removed RPMs”, “General” and “Content for enabling and disabling services based on CentOS 6 system”.

Firing up links and pointing it at the output preupgrade/result.html file, I took a deeper look at the above details.

“Packages not signed by CentOS”, as expected, covered the third-party installed applications, in my case the GlusterFS RPMs and the epel-release package. The other sections didn’t present any great worries, so I pressed on with the upgrade:

centos-upgrade-tool-cli --network 7 --instrepo=http://mirror.centos.org/centos/7/os/x86_64/

Running this takes the data from the previous report and runs an upgrade process based on it. Interestingly, the first part of the process (redhat_upgrade_tool.yum) checks the configured yum repos: EPEL “seems OK”, whereas the glusterfs-epel ones don’t. This called for a little more investigation, as on my first trial run these packages failed to upgrade; luckily I took a snapshot of the machine before upgrading, so I could try again.

Strangely, even though the $basearch and $releasever variables were used in the config file, manually changing $releasever to 7 (it otherwise translates to 7.0) seemed to do the trick. I manually edited the EPEL file too, as this contained epel-6 in the URL. After this I also noticed that the Gluster services were no longer listed in the INPLACERISK: HIGH category but had moved to MEDIUM.
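For anyone wanting to script that edit rather than doing it by hand, a sed substitution does it. This is a sketch against a sample line (the example.org URL is a placeholder; the real files live in /etc/yum.repos.d/ and you would run sed -i over them):

```shell
# Substitute the yum $releasever variable with a literal 7 in a repo baseurl.
# Single quotes keep $releasever literal; the backslash escapes it for sed.
echo 'baseurl=http://example.org/epel-$releasever/$basearch/' \
  | sed 's/\$releasever/7/g'
# prints: baseurl=http://example.org/epel-7/$basearch/
```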

Continue with upgrade [Y/N]?.

yes please!

The upgrade tool then goes through the process of downloading the boot images and packages ready for the upgrade. For some reason I got a message about the CentOS 7 GPG key being listed but not installed, so while I hunted out the key to import, I re-ran the upgrade tool with the --nogpgcheck switch to skip that check. The tool then finished successfully and prompted me with:

Finished. Reboot to start upgrade.

Ok then, here goes….

Bringing up the console to that machine showed it booting into the images downloaded in preparation for the upgrade: mostly a screen of RPM package updates and reconfiguration. The update completed fairly quickly, then automatically rebooted.

As mentioned above, this was the second attempt at an upgrade on this machine. The first time, I was presented with the emergency login screen after reboot. This turned out, strangely, to be because the GlusterFS packages hadn’t been upgraded, so I logged onto the console, brought up eth0 and ran yum update. After a reboot I was faced with a working system.

On the second attempt I made sure the Gluster packages were included in the upgrade, so after crossing my fingers the reboot ended with a login prompt. Great news!

The only issues I faced were Gluster volumes not mounting at boot time, but I was sure this was a systemd configuration issue which could easily be rectified, and it doesn’t really detract from the success of the upgrade process.

All in all, good work from the Red Hat and CentOS teams; I’m happy with the upgrade process. It’s not too far removed from FedUp in Fedora, on which I’m sure it’s based.

Update: the issues I faced with my Gluster volumes not mounting locally were resolved by adding the _netdev option after defaults in fstab, e.g.:

localhost:data1 /data/data1 glusterfs defaults,_netdev 0 0

All that was happening was that systemd was trying to mount the device as a local filesystem, which ran before the glusterd service had started. Adding this option essentially delays the mounting until the network target is complete.

The other issue that became apparent after resolving the mounting problem was the CTDB service not running once boot had completed. This was due to CTDB trying to start before the filesystems were active, so I modified the ctdb.service file to ensure it only started after Gluster had started, which seemed to be enough. I guess getting it to start after the filesystems had mounted would be better, but for now it works. To do this I modified the /usr/lib/systemd/system/ctdb.service file and changed the line:


After=network.target

in the [Unit] section to:

After=network.target glusterd.service
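Editing the packaged unit under /usr/lib will be undone by package updates; a drop-in override under /etc would survive them. A sketch of the same ordering fix as a drop-in:

```
# /etc/systemd/system/ctdb.service.d/order.conf
[Unit]
After=network.target glusterd.service
```

After creating the file, run systemctl daemon-reload so systemd picks it up.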


Windows (CIFS) fileshares using GlusterFS and CTDB for Highly available data

This tutorial will walk through the setup and configuration of GlusterFS and CTDB to provide highly available file storage via CIFS. GlusterFS is used to replicate data between multiple servers. CTDB provides highly available CIFS/Samba functionality.


2 servers (virtual or physical) with RHEL 6 or a derivative (CentOS, Scientific Linux). When installing, create a partition for root of around 16GB, but leave a large amount of disk space available for the shared data (you can add this in the installer, but ensure the partition type is XFS and the mountpoint is /gluster/bricks/data1). Once you have an installed system, ensure networking is configured and running. In this example the two servers will be:

server1 = storenode1 –

server2 = storenode2 –

Let’s add host entries (unless you have DNS available, in which case add an entry for both hosts there):

echo " storenode1" >> /etc/hosts

echo " storenode2" >> /etc/hosts

Next make sure both of your systems are completely up to date:

yum -y update

Reboot if there are any kernel updates.

Filesystem layout

Now we have two fully updated working installs, it’s time to start laying out the filesystem. In this instance we will have a partition dedicated to the underlying Gluster volume.

If you didn’t add a partition for /gluster/bricks/data1 during the install, do this now:

Create a partition on the disk with fdisk (/dev/sda3 in this example):

fdisk /dev/sda

mkfs.xfs /dev/sda3

If mkfs.xfs isn’t installed, yum install xfsprogs will add it to your system. If you are running Red Hat you will need to subscribe to the Scalable Filesystem channel to get this package.

Now create the directory where this partition will be mounted:

mkdir /gluster/bricks/data1 -p

mount /dev/sda3 /gluster/bricks/data1

If the mount command worked correctly, let’s add it to fstab so it mounts at boot time:

echo "/dev/sda3 /gluster/bricks/data1 xfs defaults 0 0" >> /etc/fstab

You need to repeat the above steps to partition and mount the volume on server 2.

Introducing Gluster to the equation

Now we have a couple of working filesystems, we are ready to bring Gluster into the mix. We are going to use /gluster/bricks/data1 as the location to store the brick for our Gluster volume. A Gluster volume is made up of many bricks; these bricks are essentially directories on one or more servers that are grouped together to provide a storage array similar to RAID.

In our configuration we will have two servers, each with a directory used as a brick, to create a replicated Gluster volume. For simplicity I have disabled both SELinux and iptables for this build; however, it’s fairly straightforward to get both working correctly with Gluster. I may revisit this at some point to add that configuration, but for now I’m taking the stance that these servers are tucked away safely inside your network behind at least one firewall.

Let’s install Gluster; on both servers run the following:

cd /etc/yum.repos.d/

wget http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo

yum install glusterfs-server -y

chkconfig glusterd on

service glusterd start

Woohoo, we have Gluster up and running. Oh wait, it’s not doing anything…

Let’s get both servers talking to each other; on the first server run:

gluster peer probe storenode2


We now need a directory which we will use for the brick in our Gluster volume, run this command on both servers:

mkdir -p /gluster/bricks/data1/brick1

Continue reading… “Windows (CIFS) fileshares using GlusterFS and CTDB for Highly available data”

Raspberry Pi Wildlife Camera

A while ago I built a Raspberry Pi based nature camera, sometimes known as a trail camera. Normally I cover most of my projects on here, but this one has been a little different as it was featured in this month’s Linux Voice magazine. For that reason I won’t feature a write-up here, just a few images and videos captured with it and a couple of pointers to the software used in the project.

The software stack simply consisted of:


RPi Cam Web Interface

GitHub repo for above interface software (My fork of the repo)

Here are a few captures and pics of the components:





If you would like to read the full article, or better still the whole magazine, head over to LinuxVoice. Please support the chaps there by subscribing 🙂

Installing dig on a CentOS or Red Hat machine

Gone are the days when we installed nslookup for DNS resolution testing; the new(ish) kid on the block is dig. Although maybe not installed by default, it can be installed quite easily from yum; however, it comes bundled with a number of tools, so the package name isn’t all that obvious.

[root@server ~]# yum install bind-utils

will do the trick. Now, how to use it?

[root@server ~]# dig @nameserver address.com

Replace nameserver with your DNS nameserver of choice, for example:

[root@server ~]# dig @ google.com

will use Google’s DNS server to resolve google.com.

Arduino based Electricity monitor

Over the past 12 months or so I’ve been looking to add various “smart house” components to my home. Rather than doing this in the traditional sense of buying something off the shelf, I’ve been experimenting and building my own. One of the real plus points in doing this is that all the data passed around is in an open standard and format chosen by me, not some kludge to try and extract information in a format decided by a-n-other manufacturer.

This post will cover the process I’ve gone through to build an Arduino-based energy monitor; in the first instance this is only monitoring electricity usage.

The first step was to look at what information I could glean from the meter itself. There are a number of ways to extract information here, one being a clamp which sits around the wire between the meter and the consumer unit:


The other ways generally involve retrieving a pulse of some kind from the meter, either via a screw-in terminal which sends a pulse over wire, or a flashing LED. In my case it was a flashing LED, as you can see in the following image.


This LED sends out a pulse every time a watt-hour is used, i.e. 1000 flashes for every kilowatt-hour. Knowing this, I could create a device to read these flashes and convert them into usable data.
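Each flash represents 1/1000 of a kWh, i.e. 3600 joules, so the gap between two flashes gives the instantaneous load: watts = 3,600,000 / (flashes per kWh × seconds between flashes). A quick sanity check of the arithmetic with plain awk (nothing here is meter-specific):

```shell
# 1000 flashes/kWh and a 1-second gap between flashes => a 3600 W load
awk -v fpkwh=1000 -v gap=1 'BEGIN { printf "%.0f\n", 3600000 / (fpkwh * gap) }'
# prints 3600
```

At 1600 flashes/kWh the same one-second gap would instead indicate 2250 W.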



The configuration of the above circuit is really simple: a TSL261 light-to-voltage sensor connected to an Arduino with an Ethernet shield.

How to connect the TSL261:
Looking at the component, the sensor has a raised part above it; this is the front. From the front, the pins run from left to right: pin 1 is GND, pin 2 is V+ and pin 3 is output. In my circuit I connected the voltage and ground appropriately (3V) and attached the output to digital pin 2.


My current incarnation has pins soldered to one end of a length of wire and the sensor at the other (I used strands from CAT5e cable), allowing me to position the Arduino and then attach the sensor to the front of the electric meter; the attachment was a complicated process of sticking it on with some trusty old duct tape. I still need to get some heat-shrink tubing to ensure the pins don’t short, but it’s OK for now.


While researching this project I found this site, http://www.airsensor.co.uk/component/zoo/item/energy-monitor.html, which took the data collected and stored it in a file on an SD card. This wasn’t exactly what I was looking for, as I wanted to graph the data, preferably from a live feed. I thought the best way to do this was to utilise one of my favourite current tools, which I use in a lot of my other projects: MQTT. I wanted the Arduino to simply detect a pulse and send out information which I could retrieve on any device subscribed to a particular topic. More research led me to a page on Nicegear, where I found Hadley Rich doing pretty much exactly what I was looking for. His Arduino sketch created an output on a particular MQTT topic when a pulse was detected, but to top this he had also created a function to output the current watt usage every second. This proved to be more useful than the original idea of outputting each pulse on detection.

You can find the sketch at Hadley’s website, or on my GitHub page. The only real difference between his sketch and the one on my Arduino is the count of LED flashes per kWh: his meter flashes 1600 times per kWh, mine flashes 1000 times, i.e. one flash per watt-hour.

So subscribing to the topic
house/power/meter/1/current would receive the number of watts currently being used


while the other topic in the sketch
would output a 1 every time a pulse occurred.
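To sanity-check the feed you can subscribe from any machine on the network. A minimal Python sketch of a subscriber, assuming the paho-mqtt client library (the helper function, broker address and port are placeholders of mine, not from my actual setup):

```python
TOPIC = "house/power/meter/1/current"

def format_reading(payload):
    """Turn a raw MQTT payload (bytes) into a printable reading."""
    return "current usage: %s W" % payload.decode()

# To run against a live broker (assumes the paho-mqtt library is
# installed and a broker is reachable -- adjust the address to suit):
#
#   import paho.mqtt.client as mqtt
#   client = mqtt.Client()
#   client.on_connect = lambda c, u, f, rc: c.subscribe(TOPIC)
#   client.on_message = lambda c, u, m: print(format_reading(m.payload))
#   client.connect("192.168.1.10", 1883)
#   client.loop_forever()

print(format_reading(b"250"))  # -> current usage: 250 W
```

Any MQTT client would do here; `mosquitto_sub` from the command line works just as well for a quick look.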


Now what to do with this data?
My plan was to graph the data, so I had to figure out some way of getting it into a graphing application or service. More research ensued; Graphite/Carbon seems an ideal choice to pursue, but at this point in time I haven’t got anything functional. A lot of posts around the internet suggest the use of Pachube, which became Cosm, which then became Xively. Using a Python script to listen to the MQTT usage topic and pipe the readings to the Xively API, I ended up with a nice graph.
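The glue script is just an MQTT subscriber whose message callback pushes each reading to the Xively feed, so the part worth sketching is the payload it sends. This is an assumption-heavy sketch: the `usage` datastream name is my own choice, and the v2 feed-update JSON shape (`version`/`datastreams`/`current_value`) is from memory rather than from my actual script, so check the Xively docs before relying on it:

```python
import json

DATASTREAM = "usage"  # datastream id -- my own choice of name

def xively_payload(watts, datastream=DATASTREAM):
    """Build the JSON body for a Xively v2 feed update.

    The structure below matches the v2 API as I recall it; note that
    current_value is sent as a string.
    """
    return json.dumps({
        "version": "1.0.0",
        "datastreams": [
            {"id": datastream, "current_value": str(watts)},
        ],
    })

# The full script calls this from the MQTT message callback and PUTs
# the result to the feed URL with an X-ApiKey header.
print(xively_payload(250))
```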


You can see my live feed here: xively.com/feeds/73975854

All in all, I’m really pleased with this project. It’s still rough around the edges in places, both physically and software-wise. I’d like to move the project to an Arduino Pro Mini with one of the smaller Ethernet shields, build a nice box to house it, and get a Graphite instance working and retrieving the data. First on the next-steps list, however, is to try to extract the same information from the gas meter; I believe there is a pulse output on the front of the meter, but this will require some more research, so watch this space.


Import regular kvm image to oVirt or RHEV

I recently replaced a couple of servers within a friend’s business with an oVirt virtualisation setup. I’m really pleased with the whole configuration, which consists of a single engine host and two hypervisor nodes; the storage is shared across the two hosts with GlusterFS. The guests which run on the platform replace the services that previously ran on a couple of physical servers: a LAMP stack for the intranet, an Asterisk PBX, a Postfix/Dovecot mail server, a Squid proxy cache, BIND DNS, and a DHCP server.

The big problem I saw with the setup was the Windows XP virtual machine which was running on the existing server as a libvirt/KVM guest. This was an emergency config provisioned to fulfil a temporary need which, as usual, became permanent. When I originally decided on the oVirt configuration, I presumed that, it being KVM-based, an import would be a simple case of importing an image file. Unfortunately with the current version this is not the case, although I believe it is planned for future releases. That doesn’t help me now, so with Google being my friend I searched around a bit. I found clumsy solutions using cat (which I tried without luck), and other solutions such as v2v required the original guest to be running, which wasn’t an option for me. So I had a little play around and actually ended up with a working image.

First I converted the qcow2 image to a raw image using qemu-img convert (-p gives a progress report):

qemu-img convert -p windowsxp.img -O raw windowsxp-raw.img

You can confirm the details of the converted image (or inspect any existing image) using:

qemu-img info windowsxp-raw.img

I worked this out from watching the process which occurred during an export/import within oVirt; this was on a temporary oVirt machine I used to pre-build the servers before they arrived at the office.

Next I created a guest within oVirt and created a new disk for this machine. At this point the disk files didn’t exist, so I powered on the virtual machine; happy the files had been created, I then powered the VM off. I verified that the files existed by browsing to the datastore from a console, using the UUID of the disk image which was created and looking in the directory of that name. For example, if the disk image had a UUID of abcdefgh-1234-5678-90ab-abcdefghjklmn on a datastore vmstore, the path would be something like

/data/vmstore/uuidofdatastore/images/abcdefgh-1234-5678-90ab-abcdefghjklmn/
In this directory there will be several files, but the one without an extension is your disk image; you can also work this out by looking at the sizes of the files.
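The rule of thumb above can be expressed as a tiny helper. This is a sketch under my own assumptions about the directory contents (on my setup the image file sat alongside metadata files with extensions, e.g. `.meta`); the function names are mine:

```python
import os

def pick_disk_image(filenames):
    """Given the contents of an oVirt image directory, pick the disk
    image: it's the file without an extension (metadata files such as
    the .meta file sit alongside it)."""
    no_ext = [f for f in filenames if "." not in f]
    return no_ext[0] if no_ext else None

def find_disk_image(image_dir):
    """Apply the same rule to a real directory on the datastore."""
    return pick_disk_image(os.listdir(image_dir))

print(pick_disk_image(["abcd.meta", "abcd.lease", "abcd"]))  # -> abcd
```

If the naming convention ever differs, comparing file sizes (the disk image will be by far the largest) is the fallback.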

On the same principle as the cat method I mentioned earlier (which I wasn’t too keen on the sanity of), I decided to try trusty old dd:

dd if=/windowsxp-raw.img of=/data/vmstore/uuidofdatastore/images/abcdefgh-1234-5678-90ab-abcdefghjklmn/abcdefgh-1234-5678-90ab-abcdefghjklmn bs=4M

Once this completed I powered on the virtual machine, and to my surprise I was eventually presented with the Windows XP desktop I was expecting.

I downloaded the virtio drivers ISO from http://alt.fedoraproject.org/pub/alt/virtio-win/latest/, attached it to the virtual machine, and allowed the hardware to be detected and reinstalled correctly.

I hope this helps anyone else in a similar situation.

2013 – A good year

I thought I’d finish off the year with a bit of reflection. Overall it’s been a pretty good year in both camps of my life, the geek/tech side and the family side. Obvious highs of the year include:

  • Birth of my second child, Alfie.
  • OggCamp 13
  • LinuxCon Europe
  • Barcamp Blackpool
  • RossLUG’s 3rd year – some fantastic meetings this year.

It certainly has been a full on year.

It’s been a really tech-filled year; since moving house last September I’ve had my own space for all my tech, which is a real bonus. It’s allowed me to really get back into electronics with Arduino: building the home automation system, the electric meter monitor (still to be finished) and, more recently, bringing a snowman Christmas decoration back to life:

As my job heavily entails virtualisation and storage, I’ve been getting heavily into oVirt, GlusterFS and OpenStack (more specifically RDO), making commits upstream too, to both code and documentation.

One of my other tech highs of this year was establishing a presence on GitHub. I’ve uploaded most, if not all, of the code I’ve worked on this year and licensed it under the GPL, with the great reward of folks actually looking at my code. I feel like I’ve really given something back there.

On the topic of giving back, I finally became a Fedora ambassador this year. I’ve thought about it in previous years, as I’ve always used the distro and given back where I can. After an experiment of using Ubuntu solely for a while, I reverted back to my much-cherished comfort zone, but decided to go the whole hog and really get involved in what has turned out to be a great community. Attending events on behalf of the project, both ones I regularly go to and ones I don’t, has been a rewarding experience so far.

So what’s in store for 2014? Well, hopefully I’ll continue on this track: more open virtualisation, more Arduino and Raspberry Pi, more Fedora. But also coming in 2014 will be another track: STEM. I recently became a STEM ambassador, which will allow me to impart some of my knowledge and skills and help bring a better quality of tech education to children. I’m hoping to get involved with, and also run, Arduino and Raspberry Pi coding sessions throughout 2014, so watch this space.

All in all 2013 has been an excellent year; let’s hope 2014 is as good, if not better. All the very best to you all.