[Sysadmins] Upscaling my hosting environment.
Owen O' Shaughnessy
owen.oshaughnessy at gmail.com
Sat Apr 17 12:59:20 IST 2010
I've had a few servers in colo for a few years now providing basic
virtual hosting services and I'd like to put a few quid into scaling
the system up, especially on the email front: splitting everything away
from a single server and separating the POP / IMAP / SMTP / webmail /
mailbox storage onto separate servers. The services I run are
typically all email services, apache web hosting, some CVS
repositories and on the back end to these there are ldap services and
mysql services. Each frontend service is currently running off a
single rackmount server.
I've been looking at RAID filers and my plan is to replicate data
across two filers and have the MDAs deliver from their respective
machines onto the filers allowing me to put multiple front end email
hosts in. Similarly have the webservers with no local storage, so that
I can have multiple front end webservers. Ditto for CVS etc. Just
upscale to remove single points of failure and build a bit of overdue
redundancy in. The key decision that I'm looking for help with is the
shared / redundant storage setup.
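One way I can see the filer-to-filer replication working is DRBD
between the two boxes; a rough sketch of a resource file, with
hostnames, disks and addresses all placeholders:

```
# /etc/drbd.d/mailstore.res -- names and addresses are placeholders
resource mailstore {
    protocol C;                      # synchronous replication
    on filer1 {
        device    /dev/drbd0;
        disk      /dev/sda3;
        address   10.0.0.1:7788;
        meta-disk internal;
    }
    on filer2 {
        device    /dev/drbd0;
        disk      /dev/sda3;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}
```

Protocol C keeps the two copies synchronous, which seems right for
mailboxes, though I'd welcome opinions on that too.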
There are a lot of options out there for how to present the storage to
a bunch of servers, and until you go with one plan you won't see where
it falls down, and I thought I'd ping the request out to you guys in
the hopes that someone here has the experience to warn me off a
particular route. To set the scene, I don't have huge volumes of data
going in and out (as you might expect, since I'm hosting everything
off single machines at the moment), but I'd like to put in a solution
that will scale beyond single points of failure and allow for
significant volume growth before I have to spend again. The following
are some of my technology choices and my thinking on them, being
offered here for critique!
The storage devices can be directly network attached, or attached to a
fileserver which shares them out. I'm currently on the side of
directly network attached, as I don't see the point in introducing an
extra single point of failure, and would rather just let the NAS boxes
do their own sharing.
Sharing the storage on the backend to the servers can be done over
TCP/IP on copper, Fibre Channel over copper, or Fibre Channel over
fibre. I'm currently thinking TCP/IP on copper. I think that solution
will better handle 8 to
10 servers using the filer simultaneously at the expense of a
relatively small chunk of data bandwidth, but this is one area where
I've no experience and am probably siding with what I know best.
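To put a rough number on that trade-off (back-of-envelope only; the
80% wire-efficiency figure is a guess):

```shell
# Back-of-envelope: usable MB/s from a nominal line rate,
# assuming roughly 80% wire efficiency (an assumption, not a measurement)
usable_mb_s() {
    # $1 = line rate in bits per second
    echo "$(( $1 * 80 / 100 / 8 / 1000000 ))"
}
usable_mb_s 1000000000    # gigabit Ethernet    -> 100
usable_mb_s 4000000000    # 4 Gb Fibre Channel  -> 400
```

So even with 8 ~ 10 servers hammering the filer at once, a single
gigabit link still gives each of them on the order of 10 MB/s
sustained, which is well beyond my current volumes.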
Protocol to map the shared storage onto the servers so they can use
the storage locally. My hosts will all be FreeBSD or Debian Linux
depending on the particular app. There are a variety of options here
such as iSCSI, SMB, NFS, CIFS and AFP. I think NFS is the only show in
town where I'm sharing a volume among multiple servers, and iSCSI
where I'm presenting a dedicated volume to a single machine.
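Concretely, on the Debian side I'd picture something like this (the
filer name, export path and iSCSI target name are all placeholders):

```
# /etc/fstab -- NFS for a volume shared by several frontends
filer1:/export/mail    /var/mail    nfs    rw,hard,intr,noatime    0 0

# open-iscsi for a volume dedicated to one host (run once on that host):
#   iscsiadm -m discovery -t sendtargets -p filer1
#   iscsiadm -m node -T iqn.2010-04.ie.example:mysql-vol -p filer1 --login
```

I'm told hard NFS mounts are the safer default for mail storage, since
a soft mount can time out and silently lose a write if the filer
blips.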
The filesystem on the actual storage device will probably depend on
what is on offer from the chosen NAS solution, if I go that route, but
I'll be planning to run XFS all the way.
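If I do end up controlling the filesystem myself, the setup is
straightforward enough (device and mount point are placeholders):

```
# Create and mount an XFS filesystem on the exported volume
mkfs.xfs /dev/sdb1
mount -t xfs -o noatime /dev/sdb1 /export/mail

# XFS can be grown online later if the underlying volume is expanded:
#   xfs_growfs /export/mail
```

The online grow is part of what attracts me to XFS, given the volume
growth I'm trying to plan for.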
So... am I on the right track or will I be kicking myself further down
the road? Any guidance from anyone who has walked this path before
would be much appreciated.