
Gluster, CIFS, ZFS – kind of part 2


A while ago I put together a post detailing the installation and configuration of two hosts running GlusterFS, which was then presented as CIFS-based storage.

http://jonarcher.info/2014/06/windows-cifs-fileshares-using-glusterfs-ctdb-highly-available-data/

That post gained a fair bit of interest through the comments and social networks. One comment, from John Mark Walker, suggested I look at the Samba glusterfs VFS module instead of mounting the filesystem using FUSE (Samba accesses the volume directly, rather than the volume being mounted and then re-exported). On top of this I've also been looking quite a bit at ZFS, whereas previously I had Linux software RAID as the base filesystem. So here is a slightly different approach to my previous post.

Getting prepared

As before, we're looking at two hosts: virtual in the case of this build, but more than likely physical in a real-world scenario; either way it's irrelevant. Both hosts are running CentOS 6 minimal installs (I'll update to 7 at a later date), with static IP addresses assigned and DNS entries created. I'll also be running everything in a root session; if you don't do the same, just prefix the commands with sudo. For the purposes of this build I have also disabled SELinux and removed all firewall rules. I will one day leave SELinux enabled in this configuration, but for now let's leave it out of the equation.
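For reference, on both hosts that amounts to something along these lines (the sed line is just one way of making the SELinux change persist across reboots; skip the iptables lines if you'd rather keep a firewall and add rules for Gluster and Samba instead):

setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
service iptables stop
chkconfig iptables off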

In my case these names and addresses are as follows:

arcstor01 – 192.168.1.210

arcstor02 – 192.168.1.211

First off, let's get the relevant repositories installed (EPEL, ZFS and Gluster):

yum localinstall --nogpgcheck http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
yum localinstall --nogpgcheck http://archive.zfsonlinux.org/epel/zfs-release.el6.noarch.rpm
curl -o /etc/yum.repos.d/gluster.repo http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo
curl -o /etc/yum.repos.d/glusterfs-samba-epel.repo http://download.gluster.org/pub/gluster/glusterfs/samba/EPEL.repo/glusterfs-samba-epel.repo

Local filesystem

As previously mentioned, this configuration will be hosted on two virtual machines, each with three disks: one for the OS, and the other two to be used in a ZFS pool.

First off we need to install ZFS itself; once you have the zfs-release repo above installed, this can be done with the following command:

yum install kernel-devel zfs

Perform this on both hosts.

We can now create a ZFS pool. In my case the disk device names are vdX, but they could be sdX;

fdisk -l

can help you identify the device names; whatever they are, just replace them in the following commands.
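If the fdisk output is a bit noisy, a couple of other views that should be present even on a minimal install give a similar picture:

cat /proc/partitions
ls -l /dev/disk/by-path/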

Create a ZFS pool

zpool create -f  -m /gluster gluster mirror /dev/vdb /dev/vdc

This command creates a ZFS pool mounted at /gluster; without -m /gluster it would mount at /{poolname}, which in this case is the same, I just added the option for clarity. The pool name is gluster and the redundancy level is mirror, which is similar to RAID1; there are a number of RAID levels available in ZFS, all best explained here: http://www.zfsbuild.com/2010/05/26/zfs-raid-levels/. The final element of the command is the devices that make up the pool, in our case /dev/vdb and /dev/vdc. The -f option forces creation of the pool, which removes the need to create partitions beforehand.
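Just as an illustration, and not used in this build, a RAIDZ pool across three hypothetical disks would look along these lines:

zpool create -f -m /gluster gluster raidz /dev/vdb /dev/vdc /dev/vdd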

Running the command

zpool status

will return the status of the created pool, which if successful should look something similar to:

[root@arcstor01 ~]# zpool status
  pool: gluster
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        gluster     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            vdb1    ONLINE       0     0     0
            vdc1    ONLINE       0     0     0

errors: No known data errors

A quick ls and df will also show us that the /gluster mountpoint is present and the pool is mounted; df should show the size as half the sum of the two drives in the pool (i.e. the capacity of a single drive, since they are mirrored):

[root@arcstor01 ~]# ls /
 bin boot cgroup dev etc gluster home lib lib64 lost+found media mnt opt proc root sbin selinux srv sys tmp usr var
 [root@arcstor01 ~]# df -h
 Filesystem Size Used Avail Use% Mounted on
 /dev/vda1 15G 1.2G 13G 9% /
 tmpfs 498M 0 498M 0% /dev/shm
 gluster 20G 0 20G 0% /gluster

If this is the case, rinse and repeat on host 2. If that is also successful, we now have a resilient base filesystem on which to host our Gluster volumes. There is a bucket-load more to ZFS and its capabilities, but that's way outside the confines of this configuration; it's well worth looking into, though.
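As a small, entirely optional taster of those capabilities, properties such as compression can be toggled per pool or dataset; assuming your ZFS on Linux release ships the lz4 algorithm, something like this enables it on our pool:

zfs set compression=lz4 gluster
zfs get compression gluster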

Glusterising our pool

So now we have a filesystem, let's make it better. Next up: installing GlusterFS, enabling it, then preparing the directories. This part is pretty much identical to the previous post:

yum install glusterfs-server -y

chkconfig glusterd on

service glusterd start

mkdir  -p /gluster/bricks/share/brick1

This needs to be done on both hosts.

Now, only on host1, let's make the two nodes friends, then create and start the Gluster volume:

# gluster peer probe arcstor02
peer probe: success.

# gluster vol create share replica 2 arcstor01:/gluster/bricks/share/brick1 arcstor02:/gluster/bricks/share/brick1
volume create: share: success: please start the volume to access data

# gluster vol start share
volume start: share: success

[root@arcstor01 ~]# gluster vol info share

Volume Name: share
Type: Replicate
Volume ID: 73df25d6-1689-430d-9da8-bff8b43d0e8b
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: arcstor01:/gluster/bricks/share/brick1
Brick2: arcstor02:/gluster/bricks/share/brick1

If all goes well above, we should have a Gluster volume ready to go; this volume will be presented via Samba directly. The configuration also needs a locally available shared area, so we will create another Gluster volume to mount locally, in which to store lockfiles and shared config files.

mkdir  -p /gluster/bricks/config/brick1
gluster vol create config replica 2 arcstor01:/gluster/bricks/config/brick1 arcstor02:/gluster/bricks/config/brick1
gluster vol start config
mkdir  /opt/samba-config
mount -t glusterfs localhost:config /opt/samba-config

The share volume could probably have been used for this by pointing the Samba config at a different path, but for simplicity we'll keep them separate for now.
The mountpoint for /opt/samba-config will need to be added to fstab to ensure it mounts at boot time.

echo "localhost:config /opt/samba-config glusterfs defaults,_netdev 0 0" >>/etc/fstab

should take care of that; remember it needs doing on both hosts.
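If you want a quick sanity check that the fstab entry behaves before carrying on, something like this will do it:

umount /opt/samba-config
mount -a
df -h /opt/samba-config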

Samba and CTDB

We now have a highly resilient datastore which can withstand both disk and host downtime, but we need to make it available for consumption, and highly available in the process. For this we will use CTDB, as in the previous post. CTDB is a clustered version of the TDB database which sits underneath Samba. The majority of this section is the same as the previous post, except for the extra packages and a slightly different Samba config. Let's install the required packages:

yum -y install ctdb samba samba-common samba-winbind-clients samba-client samba-vfs-glusterfs

For the majority of the config files we will create them in our shared config volume and symlink them to their expected locations. The first file we need to create is /etc/sysconfig/ctdb, but we will create it as /opt/samba-config/ctdb and link it afterwards.

Note: the files in the shared area should be created on one host only, but the linking needs to be done on both.

vi /opt/samba-config/ctdb
CTDB_RECOVERY_LOCK=/opt/samba-config/lockfile
 #CIFS only
 CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
 CTDB_MANAGES_SAMBA=yes
 #CIFS only
 CTDB_NODES=/etc/ctdb/nodes

We'll need to remove the existing file in /etc/sysconfig, then we can create the symlink:

rm /etc/sysconfig/ctdb
ln -s /opt/samba-config/ctdb /etc/sysconfig/ctdb


Although we are using Samba, the service we will actually be running is CTDB, which provides the extra clustering components. We need to stop and disable the Samba services and enable the CTDB ones:

service smb stop
chkconfig smb off
chkconfig ctdb on

With this configuration being a cluster with essentially a single datapoint, we should really use a single entry point, so a third, "floating" or virtual, IP address is employed. More than one could be used, but let's keep this simple: 192.168.1.212. We also need to create a CTDB config file which contains a list of all the nodes in the cluster. Both of these files need to be created in the shared location:

vi /opt/samba-config/public_addresses
192.168.1.212/24 eth0
vi /opt/samba-config/nodes
192.168.1.210
192.168.1.211

They both then need to be linked to their expected locations; neither of these exists yet, so nothing needs removing first.

ln -s /opt/samba-config/nodes /etc/ctdb/nodes
ln -s /opt/samba-config/public_addresses /etc/ctdb/public_addresses

The last step is to modify the Samba configuration to present the volume via CIFS. I seemed to have issues using a symlinked file for Samba, so I will only use the shared area to store a master copy of the config, which can then be copied to both nodes to keep them identical.

cp /etc/samba/smb.conf /opt/samba-config/

Let's edit that file:

vi /opt/samba-config/smb.conf

Near the top add the following options:

clustering = yes
idmap backend = tdb2
private dir = /opt/samba-config/

These turn the clustering (CTDB) features on and specify the shared directory where Samba will create lockfiles. You can test starting CTDB at this point to ensure all is working; on both hosts:

cp /opt/samba-config/smb.conf /etc/samba/
service ctdb start

It should start OK; the health status of the cluster can then be checked with

ctdb status

At this point I found that CTDB was not starting correctly. After a little bit of log-watching I found an error in the Samba logs:

Failed to create pipe directory /run/samba/ncalrpc - No such file or directory

Also, to be search-engine friendly, the CTDB logfile was outputting:

50.samba OUTPUT:ERROR: Samba tcp port 445 is not responding

This was a red herring: the port wasn't responding because the Samba part of CTDB wasn't starting. 50.samba is a script in /etc/ctdb/events/ which actually starts the smb process.

So I created the /run/samba directory and restarted CTDB, and the issue disappeared.
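In other words, on both hosts, something along these lines; note that if /run is a tmpfs on your build the directory won't survive a reboot, so you may also want to recreate it from somewhere like /etc/rc.local:

mkdir -p /run/samba
service ctdb restart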

Now that we have a running service, we can go ahead and add the configuration for the share. A regular Samba share would look something like:

[share]
 comment = just a share
 path = /share
 read only = no
 guest ok = yes
 valid users = jon

In the previous post this would have been ideal, with our Gluster volume mounted at /share, but here we are removing a layer and want Samba to talk directly to Gluster rather than going via the FUSE layer. This is achieved using a VFS object, provided by the samba-vfs-glusterfs package we installed earlier; the configuration within smb.conf is slightly different too. Adding the following to our file should enable access to the share volume we created:

[share]
 comment = gluster vfs share
 path = /
 read only = No
 guest ok = Yes
 kernel share modes = No
 vfs objects = glusterfs
 glusterfs:loglevel = 7
 glusterfs:logfile = /var/log/samba/glusterfs-testvol.log
 glusterfs:volume = share

Notice the glusterfs: options near the bottom; these are specific to the glusterfs VFS object which is called further up (vfs objects = glusterfs). Another point to note is that the path is /: this is relative to the volume rather than the filesystem, so a path of /test would be a test directory inside the Gluster volume.
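Purely as an illustration of that, a second share exposing only a hypothetical projects directory at the top of the same volume might look something like this (the directory would obviously need to exist inside the volume first):

[projects]
 comment = projects directory inside the gluster volume
 path = /projects
 read only = No
 kernel share modes = No
 vfs objects = glusterfs
 glusterfs:volume = share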

We can now reload the Samba config; let's restart CTDB for completeness (on both nodes):

service ctdb restart

From a CIFS client you should now be able to browse to \\192.168.1.212\share (or whatever IP you specified as the floating IP).
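From a Linux client with the Samba/CIFS utilities installed, a quick smoke test might look something like this:

smbclient -L 192.168.1.212 -N
mount -t cifs //192.168.1.212/share /mnt -o guest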


All done!

To conclude: here we have created a highly resilient, highly available and very scalable storage solution using some fantastic technologies. We have a single access method (CIFS on a floating IP) to a datastore which is stored across multiple hosts, which in turn store it across multiple disks. Talk about redundancy!

Useful links:

http://www.centos.org

http://zfsonlinux.org/

http://www.gluster.org/

http://ctdb.samba.org/

 

Upgrade CentOS 6 to 7 with Upgrade Tools

I decided to try the upgrade process from EL 6 to 7 on the servers I used in my previous blog post “Windows (CIFS) fileshares using GlusterFS and CTDB for Highly available data”

Following the instructions here I found the process fairly painless; however, there were one or two little niggles which caused various issues, and I will detail them here.

The servers were minimal CentOS 6.5 installs, with Gluster volumes shared via CTDB. The extra packages installed had mostly come from the EPEL or GlusterFS repositories, and I believe this is where the issues arise: third-party repositories.

My initial attempt saw me running:

preupg -l

which gave me the output: CentOS6_7

This meant that I had CentOS 6 to 7 upgrade content available to me, which could now be utilised by running:

preupg -s CentOS6_7

which then ran through the pre-upgrade checks and produced a report on whether my system could, or should, be upgraded.

The results came back with several informational items, but more importantly 4 “needs_action” items.

These included "Packages not signed by CentOS", "Removed RPMs", "General" and "Content for enabling and disabling services based on CentOS 6 system".

Firing up links and pointing it at the generated preupgrade/result.html file, I took a deeper look into the above items.

"Packages not signed by CentOS", as expected, covered the third-party installed applications, in my case the GlusterFS RPMs and epel-release. The other sections didn't present any great worries, so I pressed on with the upgrade:

centos-upgrade-tool-cli --network 7 --instrepo=http://mirror.centos.org/centos/7/os/x86_64/

Running this takes the data from the previous report and runs an upgrade process based on it. Interestingly, the first part of the process (redhat_upgrade_tool.yum) checks the yum repos that are configured: EPEL "seems OK" whereas the glusterfs-epel ones don't. This called for a little more investigation, as on my first upgrade trial run these packages failed to upgrade; luckily I took a snapshot of the machine before upgrading, so I could try again.

Strangely, even though the $basearch and $releasever variables were used in the repo file, manually changing $releasever to 7 (as $releasever translates to 7.0) seemed to do the trick. I manually edited the EPEL file too, as this contained epel-6 in the URL. After this I also noticed that the Gluster services were no longer listed in the INPLACERISK: HIGH category but had been moved to MEDIUM.
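Roughly speaking, that edit looks something like the following; the exact repo filenames will vary, so check /etc/yum.repos.d/ and keep backups before letting sed loose:

cp /etc/yum.repos.d/glusterfs-epel.repo{,.bak}
sed -i 's/\$releasever/7/g' /etc/yum.repos.d/glusterfs-epel.repo
sed -i 's/epel-6/epel-7/g' /etc/yum.repos.d/epel.repo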

Continue with upgrade [Y/N]?.

yes please!

The upgrade tool then goes through the process of downloading the boot images and packages ready for the upgrade. For some reason I got a message about the CentOS 7 GPG key being listed but not installed, so while I hunted out the key to import I re-ran the upgrade tool with the --nogpgcheck switch to skip that check. The tool then finished successfully and prompted me with:

Finished. Reboot to start upgrade.

Ok then, here goes….

Bringing up the console to that machine showed it booting into the images downloaded in preparation for the upgrade: mostly a screen of RPM package updates and reconfiguration. The update completed fairly quickly and then automatically rebooted.

As mentioned above, this was the second attempt at an upgrade on this machine. The first time it was upgraded I was presented with the emergency login screen after reboot. This turned out, strangely, to be because the GlusterFS packages hadn't been upgraded, so I logged onto the console, brought up eth0 and ran yum update. After a reboot I was faced with a working system.

On the second attempt I made sure the Gluster packages were included in the upgrade, so after crossing my fingers the reboot ended with a login prompt. Great news!

The only issue I faced was Gluster volumes not mounting at boot time, but I was sure this was a systemd configuration matter which could easily be rectified, and it doesn't really change the success of the upgrade process.

All in all, good work from the Red Hat and CentOS teams; I'm happy with the upgrade process. It's not too far removed from fedup in Fedora, on which I'm sure it's based.

Update: the issues I faced with my Gluster volumes not mounting locally were resolved by adding the _netdev option after defaults in fstab, e.g.:

localhost:data1 /data/data1 glusterfs defaults,_netdev 0 0

All that was happening was that systemd was trying to mount the device as a local filesystem, which it would attempt before the glusterd service had started. Adding this option essentially delays the mount until the network is up.

The other issue that became apparent after I resolved the Gluster mounting issue was the CTDB service not running once boot had completed. This was due to the CTDB service trying to start before the filesystems were active, so I modified the ctdb.service file to ensure it only started after glusterd had started, which seemed to be enough. I guess getting it to start after the filesystems had mounted would be better, but for now it works. To do this I modified the /usr/lib/systemd/system/ctdb.service file and changed the line:

After=network.target

in the [Unit] section to

After=network.target glusterd.service
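As an alternative that survives package updates overwriting the unit under /usr/lib, a systemd drop-in should achieve the same thing, something like:

mkdir -p /etc/systemd/system/ctdb.service.d
cat > /etc/systemd/system/ctdb.service.d/gluster.conf <<EOF
[Unit]
After=network.target glusterd.service
EOF
systemctl daemon-reload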

 

Windows (CIFS) fileshares using GlusterFS and CTDB for Highly available data

This tutorial will walk through the setup and configuration of GlusterFS and CTDB to provide highly available file storage via CIFS. GlusterFS is used to replicate data between multiple servers. CTDB provides highly available CIFS/Samba functionality.

Prerequisites:

2 servers (virtual or physical) with RHEL 6 or a derivative (CentOS, Scientific Linux). When installing, create a partition for root of around 16GB, but leave a large amount of disk space available for the shared data (you can add this in the installer, but ensure the partition type is XFS and that the mountpoint is /gluster/bricks/data1). Once you have an installed system, ensure networking is configured and running. In this example the two servers will be:

server1 = storenode1 – 192.168.1.15

server2 = storenode2 – 192.168.1.16

Let's add host entries (unless you have DNS available, in which case add an entry for both hosts there):

echo "192.168.1.15 storenode1" >> /etc/hosts

echo "192.168.1.16 storenode2" >> /etc/hosts

Next make sure both of your systems are completely up to date:

yum -y update

Reboot if there are any kernel updates.

Filesystem layout

Now that we have two fully updated working installs, it's time to start laying out the filesystem. In this instance we will have a partition dedicated to the underlying Gluster volume.

If you didn't add a partition for /gluster/bricks/data1 during the install, do this now:

Create a partition on the disk with fdisk (/dev/sda3 in this example) and format it as XFS:

fdisk /dev/sda

mkfs.xfs /dev/sda3

If mkfs.xfs isn't installed, yum install xfsprogs will add it to your system. If you are running Red Hat you will need to subscribe to the Scalable File System channel to get this package.

Create the directory where this partition will be mounted, then mount it:

mkdir /gluster/bricks/data1 -p

mount /dev/sda3 /gluster/bricks/data1

If the mount command worked correctly, let's add it to our fstab so it mounts at boot time:

echo "/dev/sda3 /gluster/bricks/data1 xfs defaults 0 0" >> /etc/fstab

You need to repeat the above steps to partition and mount the volume on server 2.

Introducing Gluster to the equation

Now that we have a couple of working filesystems we are ready to bring Gluster into the mix. We are going to use /gluster/bricks/data1 as the location to store the brick for our Gluster volume. A Gluster volume is made up of one or more bricks; these bricks are essentially directories on one or more servers that are grouped together to provide a storage array similar to RAID.

In our configuration we will have two servers, each with a directory used as a brick, to create a replicated Gluster volume. For simplicity I have disabled both SELinux and iptables for this build; however, it's fairly straightforward to get both working correctly with Gluster, and I may revisit this at some point to add that configuration. For now I'm taking the stance that these servers are tucked away safely inside your network behind at least one firewall.

Let's install Gluster; on both servers run the following:

cd /etc/yum.repos.d/

wget http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo

yum install glusterfs-server -y

chkconfig glusterd on

service glusterd start

Woohoo, we have Gluster up and running. Oh wait, it's not doing anything…

Let's get both servers talking to each other; on the first server run:

gluster peer probe storenode2


We now need a directory which we will use for the brick in our Gluster volume; run this command on both servers:

mkdir -p /gluster/bricks/data1/brick1


Import regular kvm image to oVirt or RHEV

I recently replaced a couple of servers within a friend's business with an oVirt virtualisation setup. I'm really pleased with the whole configuration, which consists of a single engine host and two hypervisor nodes; the storage is shared across the two hosts with GlusterFS. The guests which run on the platform replace the services that previously ran separately on a couple of physical servers: a LAMP stack for the intranet, an Asterisk PBX, a postfix/dovecot mailserver, a squid proxy cache, BIND DNS and a DHCP server.

The big problem I saw with the setup was the Windows XP virtual machine which was running on the existing server as a libvirt/KVM guest. This was an emergency config which was provisioned to fulfil a temporary need and which, as usual, became permanent. Originally, when I decided on the oVirt configuration, I presumed that with it being KVM-based an import would be a simple case of importing an image file. Unfortunately with the current version this is not the case, although I believe it is planned for future releases. That didn't help me now, so with Google being my friend I searched around a bit. I found clumsy solutions using cat (which I tried without luck), and other solutions such as v2v required the original guest to be running, which wasn't an option for me. So I had a little play around and actually ended up with a working image.

First I converted the qcow2 image to a raw image using qemu-img convert (-p gives a progress report):

qemu-img convert -p -O raw windowsxp.img windowsxp-raw.img

You can confirm the image details or even look at the existing image details using the command:

qemu-img info windowsxp-raw.img

I worked this out from watching the process which occurred during an export/import within oVirt; this was on a temporary oVirt machine I used to pre-build the servers before they arrived at the office.

Next I created a guest within oVirt and created a new disk for this machine. At this point the disk files didn't exist, so I powered on the virtual machine; happy that the files had been created, I then powered the VM off. I verified that the files existed by browsing to the datastore from a console, using the UUID of the disk image which was created and looking in the directory of that name. For example, if the disk image had a UUID of abcdefgh-1234-5678-90ab-abcdefghjklmn on a datastore called vmstore, the path would be something like:

/data/vmstore/uuidofdatastore/images/abcdefgh-1234-5678-90ab-abcdefghjklmn

In this directory there will be several files, but the one without an extension is your disk image; you can probably work this out by looking at the sizes of the files.
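Listing the directory sorted by size makes the candidate fairly obvious (using the made-up path from above):

ls -lhS /data/vmstore/uuidofdatastore/images/abcdefgh-1234-5678-90ab-abcdefghjklmn/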

Working on the same principle as the cat method I mentioned previously (whose sanity I wasn't too keen on), I decided to try trusty old dd:

dd if=/windowsxp-raw.img of=/data/vmstore/uuidofdatastore/images/abcdefgh-1234-5678-90ab-abcdefghjklmn/abcdefgh-1234-5678-90ab-abcdefghjklmn bs=4M

Once this completed I powered on the virtual machine, and to my surprise I was eventually presented with the Windows XP desktop I was expecting.

I downloaded the virtio drivers ISO from http://alt.fedoraproject.org/pub/alt/virtio-win/latest/, attached it to the virtual machine and allowed the hardware to be detected and reinstalled correctly.

I hope this helps anyone else in a similar situation.

2013 – A good year

I thought I'd finish off the year with a bit of reflection. Overall it's been a pretty good year in both camps of my life, the geek/tech side and the family side. Obvious highs of the year include:

  • Birth of my second child, Alfie.
  • OggCamp 13
  • LinuxCon Europe
  • Barcamp Blackpool
  • RossLUG's 3rd year – some fantastic meetings this year.

It certainly has been a full on year.

It's been a really tech-filled year, as since moving house last September I've had my own space for all my tech, which is a real bonus. It's allowed me to really get back into electronics with Arduino: building the home automation system, the electric meter monitor (still to be finished) and, more recently, bringing a snowman Christmas decoration back to life.

As my job heavily entails virtualisation and storage, I've been getting heavily into oVirt, GlusterFS and OpenStack (more specifically RDO), making commits upstream too, to both code and documentation.

One of my other tech highs of this year was establishing a presence on GitHub. I've uploaded most, if not all, of the code I've worked on this year and licensed it under the GPL, with the great reward of folks actually looking at my code. I feel like I've really given something back there.

On the topic of giving back, I finally became a Fedora ambassador this year. I've thought about it in previous years, as I've always used the distro and given back where I can. After an experiment of using Ubuntu solely for a while, I reverted back to my much-cherished comfort zone, but decided to go the whole hog and really get involved in what has turned out to be a great community. Attending events on behalf of the project, both ones I regularly go to and ones I don't, has been a rewarding experience so far.

So what's in store for 2014? Well, hopefully I'll continue on this track: more open virtualisation, more Arduino, more Raspberry Pi, more Fedora. Also coming in 2014 will be another track: STEM. I recently became a STEM ambassador, which will allow me to impart some of my knowledge and skills and help bring a better quality of education in the tech sector to children. I'm hoping to get involved with, and also run, Arduino and Raspberry Pi coding sessions throughout 2014, so watch this space.

All in all 2013 has been an excellent year; let's hope 2014 is as good, if not better. All the very best to you all.

 

OggCamp and LinuxCon Europe: Part 2 LinuxCon Europe 2013

Whoa I’m getting a bit slow here!

After the full-on weekend of OggCamp, my marathon continued up in Edinburgh for LinuxCon Europe 2013. Unfortunately my plan of heading up straight from OggCamp was scuppered, so I set off first thing on Monday morning. I decided to stick with driving after toying with the idea of getting the train; glad I did, as the Edinburgh park and ride system is brilliant! I parked up at Sheriffhall, which allowed me to stay for up to 7 days. Perfect.

I managed to make it to the exhibition centre at around 2pm that afternoon, which I didn't think was bad going. After meeting the team of Jiri, Keiran and Tony, I quickly got the banner I'd had made erected. The booth was looking good already, but I think that just added the finishing touch.


I spent most of the rest of Monday chatting to the other team members and the various folks that passed by. It was interesting to see the other guys on the team interacting with attendees, as this was the first event I have been to where the booth had more than just myself running it. Having a bit of a wander around the exhibition floor revealed an interesting point about the Linux Foundation: with the likes of Intel, HP, Samsung and other massive names in the technology game not only present but promoting open source, it just proves how prevalent Linux and the open source movement are.

With it being the Cloud Open Expo as well as LinuxCon, there was a massive cloud-based presence in the exhibition hall and on the talk roster. Interestingly (or embarrassingly) Oracle also had a presence, touting their wares(z) with their RHEL clone Unbreakable Linux, but we don't need to say any more about them. One surprising thing that occurred to me was that there wasn't a booth from Canonical or Ubuntu, though Kubuntu were there in all their glory. Nice to see a SUSE booth also; it's always nice to see the cuddly chameleon.


Next I headed over to grab my complimentary swag bag (more t-shirts to show off my allegiance!), and while there I was swayed into purchasing a baby vest for the upcoming arrival of mini-geek number 2.


Over the course of the event I got to know my fellow ambassadors quite well; enjoying a few beers in the evenings was great, although thankfully not as heavy a session as OggCamp. I spent a lot of time having some really interesting conversations with the likes of (name-dropping time) Richard Morell of Red Hat, John Mark Walker, also of Red Hat and the GlusterFS project, and Dave Neary of Red Hat. I also recruited a new ambassador to the project, Elidh McAddam, who was really keen to get on board.

Our booth was in a really cool position, right next to the sister projects GlusterFS and oVirt, both of which I am a really big fan of.

Over the course of the exhibition we were asked several times about the upcoming Fedora 20, so I decided to hammer the wifi and run an in-place upgrade on my laptop. Amazingly everything just worked; if there was ever a time or place where an upgrade was bound to fail it was there, but no, all was good! Shame the same couldn't be said for our attempts at running Wayland: no trackpad support caused a fail at the first hurdle.


I only attended two talks during the week: Linus Torvalds' keynote and the follow-up talk from Mikko Hypponen entitled "Living in a surveillance state". I hadn't intended on going to the second talk, but it kind of followed on, and I'm so glad I stayed as it was one of the most thought-provoking talks I have seen. You can watch the full talk on YouTube, as he repeated it as a TED talk.

After these talks I managed to bump into the man himself (photo credit: Keiran Smith); notice also that behind Linus is Greg Kroah-Hartman.


All in all it was a fantastic week: plenty of Linuxy swag gained and experiences had. I was really proud to represent and be a part of the Fedora Project; it's a really good place to be, with a lot of good work going on. Interestingly, on the drive home a BMW X5 passed me with the registration 9TUX. I wonder if they had been at the conference, and who it was.


Highlights of the conference:

Being part of the Fedora Project.

Seeing Linus in the flesh (starstruck much).

Lennart Poettering reconfiguring Keiran's Gnome desktop back to default "as that is how it should be".

Amazing curry at the Bombay Bicycle Club.

Meeting fellow ambassadors and members of the community.

Mikko Hypponen's talk.

Seeing a picture tweeted by Mikko and spotting myself in it (two rows behind).


 

Lowlights:

Being late.

The drive home.

 

GlusterFS Quickstart Howto on Fedora

Here's a (very) quick howto showing how to get GlusterFS up and running on Fedora. It's probably better suited to a distro like CentOS/RHEL, Ubuntu Server LTS or Debian stable, but where's the fun in knowing it won't break? Most of these commands are transferable to other distros though; it's Fedora-centric due to the use of yum, SELinux and systemd (systemctl).

Pre-requisites:
2x (or more) servers running Fedora. I used 18 in this example, but I'm sure it shouldn't change a great deal for newer releases; if it does I'll try to update this doc. The idea behind this setup is to use the two servers as hypervisors (KVM) with local storage but resilience. I won't be covering the virtualisation side, purely storage, so VMs will be adequate for this setup.

So at this point we should have two clean, fully updated installs of Fedora on two servers.
For argument's sake we'll call them host1 and host2, with IP addresses of 192.168.1.50 and 192.168.1.51 respectively.
(You will need to add the hostnames and IPs to /etc/hosts if you don't use DNS.)

Let's disable SELinux and the firewall for now to make this process easier:
sed -i s/SELINUX=enforcing/SELINUX=disabled/g /etc/selinux/config
setenforce 0
systemctl stop firewalld.service
systemctl disable firewalld.service

yum install nfs-utils glusterfs-server
systemctl start glusterd.service
systemctl start rpcbind.service

OK, so now we're installed we're ready to start setting up Gluster. Let's create a directory on both servers:

root@host1 ~ # mkdir /gluster
root@host2 ~ # mkdir /gluster

Now let's get a volume created. Do this on only one host:

root@host1 ~ # gluster peer probe host2
root@host1 ~ # gluster volume create vol1 replica 2 host1:/gluster host2:/gluster

These commands told the two hosts to become "friends", then created a GlusterFS volume called vol1 with 2 replicas (hosts); you will need to change this to match the number of hosts you run, and the paths to the bricks on each host.

When you run the last command above it will tell you that the volume creation has been successful and that it needs to be started to access data. Let's do that:

root@host1 ~ # gluster volume start vol1

So now we have a functioning Gluster cluster; we need to mount it somewhere.

root@host1 ~ # yum install glusterfs-fuse glusterfs

installs the relevant software to allow us to mount the volume; do this on both hosts. Let's create directories and mount:

root@host1 ~ # mkdir /store
root@host2 ~ # mkdir /store

root@host1 ~ # mount -t glusterfs host1:/vol1 /store
root@host2 ~ # mount -t glusterfs host2:/vol1 /store

You should now be able to create files in /store on host1 and have them visible in /store on host2. Notice how we mounted the volume on the same machine that hosts it; this way we are always writing to local storage and letting Gluster sync it out.
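To make those mounts survive a reboot, fstab entries along these lines (with _netdev so they wait for the network, as in the posts above) should do the job:

root@host1 ~ # echo "host1:/vol1 /store glusterfs defaults,_netdev 0 0" >> /etc/fstab
root@host2 ~ # echo "host2:/vol1 /store glusterfs defaults,_netdev 0 0" >> /etc/fstab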

Update: the same instructions will work on CentOS/RHEL/Scientific Linux; you will just need to install the EPEL yum repository first – http://fedoraproject.org/wiki/EPEL

Post Virtualisation Talk

Well, I finally got there: after a bout of illness caused me to postpone the talk, I delivered it last night.

The equipment performed flawlessly, thankfully. After my Asterisk talk you'd think I'd have learned to turn off DNS lookups in SSH though.

The only downside was that when I rehearsed the talk I was installing packages as I went, so I was actually tight on time; I didn't account for the fact that on the night they were already installed, so I finished a good 20 minutes early. That would have given me time to talk about something I really wanted to include: GlusterFS.

Must say I'm looking forward to the Christmas do on the 10th, and even more so to Tim's roll-your-own-distro session in January.

s’all for now.