[OSM-dev] The future of sysadmin

Frederik Ramm frederik at remote.org
Thu Jul 19 09:19:28 BST 2007


Steve,

> Despite handing off tile to jburgess and www to TomH, and sysadmin in
> general to NickH I'm still accused of centralising control and being
> evil

First of all, thank you for shouldering that load. Every project  
needs a few evil guys bent on domination, whom you can point the  
finger at when something doesn't go like you want it to, and whose  
fault it all is. Thank you for filling that role so competently. I  
know you'd much rather hug us all and it really takes all your  
roleplaying skills to swear at us now and then. You're doing an  
excellent job ;-)

I also want to say that I have the utmost respect for the work Nick  
Hill has done. His technical skills are first-class. It is not his  
fault that good people always rise into positions where they're a  
single point of failure; that's just how things go, always.

I want to say something about the sysadmin/server situation.

Administering servers, hardware-wise, is not our core competency, and  
neither is it UCL's core competency to offer remote administration  
rack space. Given the financial means, I believe we should really  
farm out our servers to professional hosting companies, where clients  
not being able to physically access the servers is the *norm* and not  
the *exception*. These people have mechanisms in place that will make  
it possible for our admins to work with the servers, wherever they are.

Such mechanisms include, but are not limited to, remote consoles.

I currently have two servers at a German low-cost hosting provider;  
they're dual-core Athlon 64 machines with 2 Gig of RAM and 2x300GB  
SATA hard disks, unlimited traffic, and cost 79 Euros per month  
(each). This is not high-quality hardware; things can break. I once  
had a hard disk die, called support, and had a new disk installed  
within a few hours; of course it was my responsibility to re-install  
stuff from backup and so on, and, this being a low-cost provider, if  
the breakage had happened on a Friday night it might well have been  
Monday before they replaced it. Above all, this is not custom hardware  
- they do offer added memory or added hard disks, but that's where it  
starts to become expensive.

If we invested some work into how we can distribute our storage and  
processing requirements to a slightly larger number of standard  
hardware components, achieving reliability by some kind of hot-spare  
mechanism, then we could drop that requirement of "people having to  
be able to travel to Central London".

It would also, presumably, become much easier to take up the offers  
of storage, bandwidth, and processing power that people often make.  
How many times have people offered something like that on the lists  
or in person? I witnessed at least four such occasions and I believe  
there must have been far more - but we never manage to utilise these  
offers because we're so fixated on our Central London facilities.

I don't know if the right time for this is now, before we buy a lot  
of extra hardware and install it at UCL, or if we should give it  
another generation (half a year...). Working out a good way of  
distributing things will not be very easy but I think there are high  
rewards.

If we stay with UCL, then I second 80n's opinion that we need remote  
consoles (ideally linked to a modem and telephone line, independent  
of the Internet!) - but as I said, there are hosting providers  
offering that as a standard component with all their hosting...

Bye
Frederik

-- 
Frederik Ramm  ##  eMail frederik at remote.org  ##  N49°00.09' E008°23.33'
