Sunday 11 October 2009 at 2:21 pm
A nice tidbit over at Data Center Knowledge - Rackspace Says 'No More Servers' - which points to a new online community titled, funnily enough, NoMoreServers. The focus is on removing the need for dedicated in-house servers and moving to a 'hosted' environment (different again from 'housing').
Software as a Service, Hosted Virtual Servers and Cloud Computing have been around for several years now, but all three technologies have reached a point of maturity sufficient to warrant a serious re-appraisal (pervasive broadband and mobile computing have certainly helped too).
Wednesday 20 August 2008 at 6:19 pm
It's been a while since a non-linky, factual-type post, so here goes . . .
The amount of gear going into our datacenter with a serial console management interface was increasing, and the number of miscellaneous serial cables running to our servers to manage them was getting out of hand (even the datacenter people had started to notice the cable mess - not a good look, as datacenter staff can be notoriously picky when it comes to cable tidiness). Plus we'd put some switches into our DMZ to handle gear going into a 'blue' (infrastructure internet gear) zone and a 'red' (public-facing services) zone as legs off our firewall appliance. So being able to tweak and check settings on those switches, and on the firewall appliance itself should the management web interface be unavailable, was pretty important.
So I went looking for options and discovered the Sena PS 810 - it's not a full-on 'Console Server' but it does just enough to be useful in terms of providing a telnet console to a serial interface. Sena do make the VTS series, which is pitched as a Console Server with more features, but from what I can gather it's quite a bit more expensive - unless you need all the extras you might find the PS series sufficient for your needs.
Serial and console servers are available from a number of vendors - they used to be more common, but as the serial port has become less important they've become a little more obscure and harder to find. I've heard the HP ones are pretty good and I'm sure the other major vendors have them available too.
What the Sena PS 810 lets you do -
- Manage and set up the Sena via its web interface
- Map serial ports to telnet ports (ie map each of serial ports 1 to 8 to its own telnet port number) - telnet to the Sena on that port number and you'll connect to the serial device (see the sketch at the end of this post)
- Convert serial to IP connections and do com port redirection - ie if you have a bunch of serial devices in a factory you can plug them into this device and remotely control/monitor them as if they were local com ports on a remote computer. We don't need that capability, but it could be useful if you need to monitor several UPS devices, for example.
What it doesn't let you do -
- Control the serial device via a web interface (a full-blown 'Console Server' will usually let you do this)
I have a feeling it also supports ssh-to-serial mapping, but I haven't had a chance to experiment with that. We've put this device on our back-end, non-routed management network, so there is an element of security provided.
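For what it's worth, here's a minimal sketch of driving one of those mapped ports from Python's standard telnetlib - the Sena's address and the port-to-serial mapping below are made up, so substitute your own:

    import telnetlib

    # Hypothetical values - the Sena's address on our management
    # network, and the telnet port we've mapped to serial port 1.
    SENA_HOST = "192.168.100.50"
    MAPPED_PORT = 7001

    # Open the telnet session; whatever is plugged into serial port 1
    # (a switch console, say) answers on the other end.
    tn = telnetlib.Telnet(SENA_HOST, MAPPED_PORT, timeout=10)
    tn.write(b"\r\n")  # nudge the device for a prompt
    print(tn.read_until(b">", timeout=5).decode("ascii", "replace"))
    tn.close()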
Wednesday 04 July 2007 at 3:49 pm
So we installed our first datacenter 'beach-head' last week. It was actually mostly painless, thanks to all the advance prep work put in over the previous months. We have a 'feed & water' hosting contract, so we own all our gear but our host looks after the power and environmentals (including a certain number of tape changes).
Our initial 'beach-head' consisted of a diverse fibre data connection (100Mb), a router, an out-of-band management switch (for the IP-KVM & ILO interfaces), a data switch (separate vlans for data & SAN traffic), a firewall (even though it's all internal, traffic falls into different security zones to keep the auditors happy) and a domain controller. We'll supplement this with our prod SAN, a bunch of app & database servers, our backup server and tape drive, plus another telco comms circuit.
Some interesting tips if you're thinking of shipping gear offsite -
If you're in a metro area, diverse fibre is cheap and fast (two leads into the building, coming in from different directions via different physical circuits).
Set up your equipment as if it were off-site - spin off a vlan at your existing location to simulate the entire off-site network so you can fully test everything before sending it away. That way you won't have to change IP addresses and spend the next few hours re-establishing your connectivity because you missed something.
Label up absolutely everything and note down all the interfaces and port connections. Keep track of this information in a spreadsheet or Visio diagram so you can talk your host site engineers through things should they need to troubleshoot anything on your behalf (a quick connectivity check driven off that list is sketched below).
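As a minimal sketch, something like this run from inside the simulated vlan will tell you whether every documented management interface actually answers - the addresses and ports here are invented, so feed it your own inventory:

    import socket

    # Hypothetical inventory - in practice this comes straight from the
    # spreadsheet of interfaces and port connections mentioned above.
    ENDPOINTS = [
        ("10.20.0.1", 22),     # router management ssh
        ("10.20.0.2", 443),    # firewall web interface
        ("10.20.0.10", 3389),  # domain controller RDP
    ]

    for host, port in ENDPOINTS:
        try:
            with socket.create_connection((host, port), timeout=3):
                print(f"OK   {host}:{port}")
        except OSError as exc:
            print(f"FAIL {host}:{port} ({exc})")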
If you're allowed (many hosts require you to leave your phone, PDA or camera at the door), take a bunch of photos to complement your diagrams.
Most datacenters have a colour-code for their cables - make sure you follow it, or specify that they stick to your existing scheme.
Your host will have engineers that can rack and cable everything up much more tidily than you could, so leave them to it. As long as you tell them where you want stuff they'll take care of the rest. In fact, get them your rack layout in advance and they may even have some suggestions about what to put where.
Unless you're filthy rich, you can run all your management traffic (IP KVM and ILO) through another switch (a good use for all those old non-PoE 10/100Mb Ciscos). Run your server data & SAN traffic through a good non-blocking switch (we went with a Cisco 4948, as a big Catalyst enterprise chassis would have been overkill). Ideally we'd have two switches for redundancy and multi-pathing, but the cost would have been prohibitive and, let's face it, a $10 power supply on a media convertor is more likely to die than a $15k switch.
IP KVMs are cool and supplement ILO/LOM (Integrated Lights Out/Lights Out Management) - if you move to a totally hands-off approach to server provisioning you can get hardware delivered straight to the datacenter and hooked up to the KVM, then configure the rest remotely. In fact IBM's RSA II ILO card even lets you boot off a file or remote CD.
You can pick up a multi-port serial adaptor fairly cheaply - stick it into your management server and hook up your switch and SAN console ports for an extra level of low-level access.
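By way of illustration, here's a rough sketch of reading one of those console ports using the pyserial library - the device path and settings are assumptions (9600 8N1 is a common console default), so adjust for your adaptor and kit:

    import serial  # pyserial - pip install pyserial

    # Hypothetical device path for the first port on a multi-port
    # USB/serial adaptor; your switch or SAN console may differ.
    console = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=2)
    console.write(b"\r\n")  # nudge the console for a prompt
    print(console.read(256).decode("ascii", "replace"))
    console.close()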
Diesel goes 'stale' - make sure your host cycles their tanks regularly, in addition to running regular generator and UPS tests.
Don't forget to phase your deployment - start small and allow time to bed down your infrastructure. There's no point throwing lots of critical gear out in the initial push and discovering a crappy patch lead causes you grief after a couple of days - make sure the basics work well before sending application servers offsite!
Most hosts will charge by the rack - make sure you think carefully about what you send to the datacenter. It might be a good opportunity to consolidate your servers. If you have lots of blades (or storage arrays) you may get hit up for more $$$ as they really suck down power. As your rack fills the host will take regular measurements of the amount of power you're pulling down - if you exceed the 'draw' for a standard rack you may be charged extra.
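A back-of-the-envelope power budget before you ship can save a surprise bill - the wattages and rack allowance below are invented, so plug in your own nameplate or measured figures and whatever your contract specifies:

    # Rough power-budget check before racking - all figures assumed.
    RACK_ALLOWANCE_W = 3000  # per-rack draw limit from the contract

    gear_w = {
        "router": 150,
        "data switch": 200,
        "management switch": 100,
        "firewall": 120,
        "domain controller": 350,
        "SAN shelf": 600,
    }

    total = sum(gear_w.values())
    print(f"Total draw: {total} W of {RACK_ALLOWANCE_W} W allowed")
    if total > RACK_ALLOWANCE_W:
        print("Over the allowance - expect an excess-power charge")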
If you tour the datacenter make sure it has all the good stuff you'd want out of a custom built server hosting facility - hot & cold aisles (so the hot air from one rack doesn't get sucked into the opposite rack), iso-base earthquake damping (nothing like watching the rack jiggle), raised floors, 2+1 (two units plus a spare) redundancy for power, aircon, adequate filtering, UPS, comms etc.
Be sure to go over the financials with a fine-tooth comb - you'll find some variation in price and in what is and isn't included. If you're anything like us, you'll find the host with the simplest pricing schema is often the best.
It's interesting to look for little things that make life easier - for example a separate tape library room off the main server room. This means datacenter operators can do their tape changes without having to go anywhere near the servers themselves (we switched from SCSI to fibre channel to accommodate the 12m cable run from the backup server to the tape drive). Another hosting provider was looking at rack hoods for blade servers to ensure the air flow wasn't dissipated.
Look out for procedural aspects of datacenter operation that may affect how you currently do things. For example, does the datacenter have existing relationships with archive companies so you can cycle your tapes to and from offsite storage? Do they have a relationship with a specialist courier for shipping IT gear? Do they have an acclimatisation period for new gear before they rack and power it up (some like 12 hours for new kit to adjust to the datacenter temperature & humidity)? Do you need to put contractors on an authorised access list for the site?
Zoning your internal network seems to be popular with the auditors - use different firewall NICs to access different parts of your LAN and lock down the rules. We're starting with a very simple configuration - we've split out our management, data and telco traffic. When we shift our DMZ out there we'll add another zone. We'll also have an inter-datacenter circuit, primarily for SAN replication to our DR/UAT site (due to earthquake risk most NZ datacenters have a presence in a couple of different locations). A recent external security assessment recommended fourteen different zones, which was frankly insane for an organisation our size, so we'll start small.
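To make the idea concrete, here's a toy default-deny zone model in Python - the zone names match ours, but the flows and services are purely illustrative, not our actual firewall rules:

    # Toy zone model: anything not explicitly listed is denied.
    ALLOWED_FLOWS = {
        ("management", "data"): ["ssh", "https"],  # admin access to servers
        ("data", "telco"): ["san-replication"],    # inter-datacenter SAN link
    }

    def is_allowed(src_zone: str, dst_zone: str, service: str) -> bool:
        """Default-deny: a flow is permitted only if explicitly listed."""
        return service in ALLOWED_FLOWS.get((src_zone, dst_zone), [])

    print(is_allowed("management", "data", "ssh"))  # True
    print(is_allowed("data", "management", "ssh"))  # False - not listed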
Will add updates if anything else of use comes along.
Tuesday 30 January 2007 at 2:23 pm
There seems to be very little information on picking a data-center to host your infrastructure servers - plenty of information on colocation, website hosting etc, but not a whole lot to help you pick someone to entrust with your core kit.
If you're interested in a DIY data-center these sites contain some useful information:
* Good guidelines (if a little dated) for data-center requirements (cooling, power, security, connectivity and staffing/accessibility)
* Data-center Resource Site
* Sun's guide to Planning a Data-center
Some great remote management kit is available too - remote-control your systems via web browser:
* Raritan 64 Port IP KVM - even allows you to dial in via modem in an emergency when your LAN link dies
* Raritan 20 Port Power Strip
* OpenGear 8 Port Serial Console Server - make out-of-band adjustments to your switches, Unix and SAN gear