[BBLISA] Colocation questions

Miles Fidelman mfidelman at meetinghouse.net
Thu Dec 10 23:17:29 EST 2015


On 12/10/15 11:40 AM, Rob Taylor wrote:
> Hi guys. I asked a question about co-location last night, and I got a few responses.
> To the people who responded, if you wouldn't mind, could you reiterate why you chose to go that way?
> ROI, scalability, re-purposing of space, etc.? Also, if you don't mind, can you let me know the scale of the migration? 2 racks, 15 racks,
> how long you've been there, how hard it was to move to the facility, experiences with the colo staff for "remote hands", etc.
>

Well....

Back when I decided to co-locate, our little hosting business was 
outgrowing virtual servers (this was before AWS and the like).  And I just 
like having our own hardware and complete control of our environment.  I've 
been doing this for about a decade (maybe more, I can't really remember).

These days, the servers are still in the data center, serving as a 
combination of development sandbox and host for some pro-bono email lists 
and web sites for local non-profits and community groups.  The hosting 
business (and the associated web development practice) were shut down long 
ago, but I have some new ventures in mind, and I just like the level of 
control.

Configuration:

Four 1U servers in rented rack space, running Debian Linux, set up with 
DRBD for disk mirroring and Xen for virtual machines, and dual Ethernet 
connections with failover.
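
In case it's useful, here's a rough sketch of what that kind of setup 
looks like - not our actual configs; the hostnames, addresses, and device 
names are made up.  A minimal DRBD 8.x resource mirroring one partition 
between two nodes, plus a Debian-style active-backup bond for the dual 
Ethernet links:

  # /etc/drbd.d/r0.res - mirror one partition between two nodes (DRBD 8.x)
  resource r0 {
      protocol C;                     # synchronous replication
      on node1 {
          device    /dev/drbd0;
          disk      /dev/sda3;
          address   10.0.0.1:7788;
          meta-disk internal;
      }
      on node2 {
          device    /dev/drbd0;
          disk      /dev/sda3;
          address   10.0.0.2:7788;
          meta-disk internal;
      }
  }

  # /etc/network/interfaces - active-backup bonding (needs the ifenslave
  # package); traffic fails over from eth0 to eth1 if the link drops
  auto bond0
  iface bond0 inet static
      address 192.0.2.10
      netmask 255.255.255.0
      gateway 192.0.2.1
      bond-slaves eth0 eth1
      bond-mode active-backup
      bond-miimon 100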

Installed in what was the Boston Data Center (Hood Building, Charlestown 
MA); then Hosted Solutions; then Windstream; now somebody new.  Still 
the same contract, price, rack, and staff.

Re. specific questions:
- ROI is high - $1,000 or so per 1U server, $300/mo. for space, power, 
lots of bandwidth, and server monitoring/restart - the servers have been 
averaging six-year lifespans (with disk replacements more frequently) - I 
just can't see beating those costs with cloud hosting, and I just 
don't like metered services (too unpredictable).  [Hint: stay one 
generation behind in server technology, use an off-brand server builder, 
and servers are dirt cheap - I've had great luck with rackmountsetc, who 
build Supermicro servers.]

- scalability:
-- easy enough to add disk space or more powerful CPUs (next time around, 
I might split the disks out into a SAN and go with blade servers)
-- I've been looking at hybrid cloud to absorb short-term growth, 
allowing time to plug in more servers
-- there's plenty of empty space in the rack I'm in

- re-purposing space - not sure what you mean here - it's just a rack: 
pull old equipment out, slide new equipment in.  I use servers with 
slides, but a lot of people seem to just stack stuff on the shelves

- scale of migration: before this we were running on virtual servers 
from some random provider; we basically shoved the new machines into the 
rack, connected them to power and network, and shoved disk images across 
the net - no big deal
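
For what it's worth, that kind of transfer can be as simple as the 
following (made-up hostnames and paths, and it assumes the old VMs were 
shut down first so the images were quiescent):

  # copy a raw disk image to the new Xen host, compressed over ssh
  dd if=/dev/vg0/oldvm bs=1M | gzip -c | \
      ssh admin@newhost "gunzip -c > /var/lib/xen/images/oldvm.img"

  # or, for file-backed images, just rsync them across
  rsync -avP /srv/images/ admin@newhost:/var/lib/xen/images/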

- how hard:
-- no big deal
-- had to find a place I liked & negotiate a contract (the place started 
as a DSL hub, was bought by some folks who turned it into a hosting center, 
and has since been acquired 4 times - same contract, same rack, same staff, 
same service)
-- purchased servers and had them shipped to the data center; bought some 
cables and Ethernet switches at the local Micro Center and carried them in; 
mounted and wired things up; loaded the OS; configured things; and we 
were off and running - basically a long weekend in the data center.  The 
hardest thing was figuring out how to mount the rack slides; after that, 
configuring a high-availability system is a bit complicated, and it takes a 
lot of waiting while RAID arrays build and sync (and it gets cold in the 
data center) - there's a quick sync status-check sketch just below
-- once the basic infrastructure was in place, I could do everything 
else remotely (build VMs, install application servers and software, etc.)
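
Re. the waiting for syncs mentioned above - checking progress is just a 
matter of watching a couple of /proc files (assuming Linux software RAID 
and DRBD 8.x, as above):

  # software RAID build/resync progress
  watch -n 10 cat /proc/mdstat

  # DRBD replication status and initial-sync progress (DRBD 8.x)
  cat /proc/drbd

  # kicking off the initial DRBD sync from the node with the good data
  drbdadm -- --overwrite-data-of-peer primary r0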

- colo staff & "remote hands":
-- always been very easy - I've only used them for remote reboots on rare 
occasions - call or file a web-based trouble ticket, and a few minutes 
later someone walks over to the machine and hits the button
-- I also have them monitoring the machines, and I get a text if 
something goes down
-- note: everything is set up to fail over to a hot-spare backup
-- really, the only time this has been important is when a server 
reaches end of life and starts to crash intermittently for no 
obvious reason (just before it crashes and won't come back up) - in 
those cases, where a machine wouldn't reboot, the staff have gone the 
extra mile, plugged in a crash cart, and looked at the console output - 
and pretty quickly determined that the servers were shot
-- note: servers also have IPMI boards for remote control - but it's 
always proven easier to have the staff do things
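
For completeness: remote IPMI control is typically just ipmitool over the 
management network (made-up address and credentials here):

  # check power state / power-cycle a hung box
  ipmitool -I lanplus -H 10.0.1.21 -U admin -P secret chassis power status
  ipmitool -I lanplus -H 10.0.1.21 -U admin -P secret chassis power cycle

  # watch the console via serial-over-LAN
  ipmitool -I lanplus -H 10.0.1.21 -U admin -P secret sol activate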

I expect your mileage may vary, but with a good facility that you can 
physically visit (or reach via remote IPMI), things work great.

Miles Fidelman




----
In theory, there is no difference between theory and practice.
In practice, there is.  .... Yogi Berra


