Directory mirroring and replication via Glusterfs

Glusterfs is a very flexible and easy-to-use network filesystem that offers a number of must-have features, one of which is the clustering/mirroring of volumes/directories across multiple servers over TCP/IP.

Perhaps the most common and useful replication configuration one would want to set up is a directory mirrored across two physical servers. The traditional and frankly rather primitive way of accomplishing this is with an rsync mirror: you run rsync as a cron job every few minutes, and it compares the local and remote copies of the directory and updates whatever files have changed. While this method works reasonably well, there is a lot to set up and keep track of. You have to deal with at least three pieces of software: rsync, ssh/sshd and cron, and from a logical point of view it is not a very transparent scheme at all.
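For comparison, the rsync approach typically boils down to a crontab entry along these lines (the hostname, directory and 5-minute interval here are only placeholders, and passwordless SSH keys are assumed to be already in place):

# push local changes to the mirror every 5 minutes via rsync over ssh
# (mirror.example.com and /storagedir are hypothetical)
*/5 * * * * rsync -az --delete -e ssh /storagedir/ mirror.example.com:/storagedir/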

A network filesystem like Glusterfs abstracts away these underlying concerns. By dealing with just a pair of configuration files (glusterfs.vol and glusterfsd.vol), you can set up different kinds of distributed configurations behind a directory mount. It can be as simple as a single directory export à la NFS, a 2-server mirror like the rsync method above (but considerably more elegant), or complex multi-server clusters with striping.

Below is the bare minimum configuration for a 2-server mirroring setup as of Glusterfs-2.0.4. For the sake of clarity, no performance tuning settings are shown. A more comprehensive version can be found at http://gluster.org/docs/index.php/Automatic_File_Replication_(Mirror)_across_Two_Storage_Servers


Fig. 1 glusterfsd.vol - configures what gets exported by the glusterfsd server

# export the local directory /storagedir as a POSIX storage volume
volume posix
  type storage/posix
  option directory /storagedir
end-volume

# posix-locks seems to be required in glusterfs-2.0.4 even
# though docs claim that it is already activated in storage/posix
volume brick
  type features/posix-locks
  subvolumes posix
end-volume

# serve the brick to clients over TCP; the wildcard allows connections
# from any address, so consider restricting it on a production network
volume server
  type protocol/server
  option transport-type tcp
  subvolumes brick
  option auth.addr.brick.allow *
end-volume
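On each of the two servers, the export side is brought up by pointing the glusterfsd daemon at this file. Something like the following should work (the volfile path is only an example location):

# start the Glusterfs server daemon using the export configuration above
glusterfsd -f /etc/glusterfs/glusterfsd.vol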
          


Fig. 2 glusterfs.vol - configures what is intended to be mounted as a glusterfs filesystem (e.g. mount -t glusterfs /path-to/glusterfs.vol /mount-dir )

volume remote
  type protocol/client
  option transport-type tcp
  option remote-host w.x.y.z  # IP of remote server, e.g. the mirror
  option remote-subvolume brick
end-volume

# I opted to use storage/posix instead of protocol/client for
# the local copy, if someone has a comment on this, please
# email me at andy dot sy @ neotitans dot com
volume posix
  type storage/posix
  option directory /storagedir
end-volume

volume local
  type features/posix-locks
  subvolumes posix
end-volume

# mirror all operations across the local and remote bricks
volume replicate
  type cluster/replicate
  subvolumes local remote
end-volume
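With both files in place, the replicated volume can then be mounted as shown in the Fig. 2 caption. A concrete invocation might look like this (the volfile path and mount point are only examples):

# mount the mirrored volume; writes are replicated to both servers
mount -t glusterfs /etc/glusterfs/glusterfs.vol /mnt/mirror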
          

© 2009 by Andy Sy
last updated: 2009-07-20

