
Windows (CIFS) fileshares using GlusterFS and CTDB for Highly available data


This tutorial will walk through the setup and configuration of GlusterFS and CTDB to provide highly available file storage via CIFS. GlusterFS is used to replicate data between multiple servers. CTDB provides highly available CIFS/Samba functionality.

Prerequisites:

2 servers (virtual or physical) running RHEL 6 or a derivative (CentOS, Scientific Linux). When installing, create a root partition of around 16 GB but leave a large amount of disk space available for the shared data (you can create this partition in the installer, but make sure the filesystem type is XFS and the mount point is /gluster/bricks/data1). Once you have an installed system, ensure networking is configured and running. In this example the two servers will be:

server1 = storenode1 – 192.168.1.15

server2 = storenode2 – 192.168.1.16

Let's add host entries (unless you have DNS available, in which case add an entry for both hosts there):

echo "192.168.1.15 storenode1" >> /etc/hosts

echo "192.168.1.16 storenode2" >> /etc/hosts

Next make sure both of your systems are completely up to date:

yum -y update

Reboot if there are any kernel updates.

Filesystem layout

Now that we have two fully updated working installs, it's time to start laying out the filesystem. In this instance we will have a partition dedicated to the underlying Gluster volume.

If you didn't add a partition for /gluster/bricks/data1 during the install, do this now:

Create a new partition on the disk with fdisk (in this example it will be /dev/sda3), then format it as XFS:

fdisk /dev/sda
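Inside fdisk the exact keystrokes depend on your existing partition layout, but creating a third primary partition from the remaining free space typically looks something like this:

n          # new partition
p          # primary
3          # partition number
(enter)    # accept the default first sector
(enter)    # accept the default last sector, using all remaining space
w          # write the partition table and exit

If the kernel doesn't pick up the new partition straight away, run partprobe (or reboot) before formatting it.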

mkfs.xfs /dev/sda3

If mkfs.xfs isn't installed, yum install xfsprogs will add it to your system. If you are running Red Hat you will need to subscribe to the Scalable File System add-on channel to get this package.

Create the directory where this partition will be mounted, then mount it:

mkdir /gluster/bricks/data1 -p

mount /dev/sda3 /gluster/bricks/data1
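If the mount succeeded you should see the new filesystem listed (sizes will depend on your disk):

df -h /gluster/bricks/data1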


If the mount command worked correctly, let's add it to our fstab so it mounts at boot time.

echo "/dev/sda3 /gluster/bricks/data1 xfs defaults 0 0" >> /etc/fstab

You need to repeat the above steps to partition and mount the volume on server 2.

Introducing Gluster to the equation

Now that we have a couple of working filesystems, we are ready to bring Gluster into the mix. We are going to use /gluster/bricks/data1 as the location for the brick in our Gluster volume. A Gluster volume is made up of many bricks; a brick is essentially a directory on one or more servers, and these bricks are grouped together to provide a storage array similar to RAID.

In our configuration we will have two servers, each with a directory used as a brick, to create a replicated Gluster volume. Also, for simplicity I have disabled both SELinux and iptables for this build (a quick way to do that is shown below). It's fairly straightforward to get both working correctly with Gluster, and I may revisit this at some point to add that configuration, but for now I'm taking the stance that these servers are tucked away safely inside your network behind at least one firewall.
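If you want to take the same shortcut, something along these lines (run on both servers) puts SELinux into permissive mode and stops iptables; skip this if you intend to configure them properly instead:

setenforce 0

sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config

service iptables stop

chkconfig iptables off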

Let's install Gluster. On both servers run the following:

cd /etc/yum.repos.d/

wget http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo

yum install glusterfs-server -y

chkconfig glusterd on

service glusterd start

Woohoo, we have Gluster up and running… oh wait, it's not doing anything yet.

Let's get both servers talking to each other. On the first server run:

gluster peer probe storenode2

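Assuming the probe reports success, you can confirm both nodes see each other from either server:

gluster peer status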

We now need a directory which we will use for the brick in our Gluster volume, run this command on both servers:

mkdir -p /gluster/bricks/data1/brick1
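For context, the usual next step with GlusterFS is to create and start a replicated volume across the two bricks. A minimal sketch (the volume name data1 is my own choice here, not taken from the article):

gluster volume create data1 replica 2 storenode1:/gluster/bricks/data1/brick1 storenode2:/gluster/bricks/data1/brick1

gluster volume start data1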


Raspberry Pi Wildlife Camera

A while ago I built a Raspberry Pi based nature camera, sometimes known as a trail camera. Normally I cover most of my projects on here, but this one has been a little different as it was featured in this month's Linux Voice magazine. For that reason I won't post a full write-up here, just a few images and videos captured with it and a couple of pointers to the software used in the project.

The software stack simply consisted of:

Raspbian

RPi Cam Web Interface

GitHub repo for the above interface software (my fork of the repo)

Here are a few captures and pics of the components:

(Images: a deer capture, two photos of the camera hardware, and a woodpecker capture.)

If you would like to read the full article, or better still the whole magazine, head over to LinuxVoice. Please support the chaps there by subscribing 🙂