Servers are the backbone of enterprise computing today. Most websites, for example, run on either Apache or IIS, and those will be running on a server of some description.
Perhaps because of servers' ubiquity it is easy to become complacent about them, especially as they are rarely seen, but understanding what a server can offer you, and its limitations, will definitely help with software development.
Servers on the whole are built for practicality rather than aesthetics; there is no need to make them look pretty if they are going to spend most of their time in a data centre, rarely seen. A server will only get a visit if something goes physically wrong; everything else should be looked after remotely. Ultimately the success of a server is measured by how little time you spend looking after it and how well it performs the task(s) you have set it.
The Differences Between Servers and Desktops
The main difference between servers and desktops is that servers are designed to run 24/7 and desktops aren't. The components in a server can usually handle higher stress, and redundancy is provided by having two or more of the critical components, so that if one fails there is still another keeping the server up and the service it is running available.
The Different Types of Server
There are several different types of server; the main ones are:
Pedestal/Tower
A Pedestal Server (often known as a Tower) isn't rack mounted, as the main intention of its design is to work in an office environment rather than a data centre. These machines are usually found acting as file or print servers.
Rack Mounted
As the name suggests, a rack-mounted server lives in a rack. Stacking servers in a rack, which supports their weight and provides them with services such as power and networking, means you can fit lots of servers into less room, and room in a data centre is at a premium. A typical full-height rack is 42U tall (I will explain later what a U is).
Enterprise Class
Enterprise Class servers are free-standing like pedestals, but you won't find them in an office! Because of their size, value and special requirements, such as multi-phase power supplies, larger than domestic voltages or specialist cooling, they can only really live in a data centre. As you can imagine, this type of server doesn't come cheap. Enterprise Class machines would have been called 'mainframes' in years gone by, but they can happily run operating systems such as Linux, Unix and Windows (the IBM P Series pictured does not run Windows). Their main function today is as a consolidation platform: one of these servers can run the equivalent of several hundred Linux servers, saving space and power.
Blade
In a typical rack you might only fit around ten conventional rack-mounted servers once switches, power and cabling are accounted for. Blade servers, on the other hand, are designed for high density, i.e. more servers in the same amount of space. Take, for example, the HP C-Class blades: a 10U C7000 Blade Enclosure can hold up to 16 BL460c G5 blade servers, so a normal rack can have 32 servers in it rather than 10. Blade servers make ideal candidates for hosting virtualisation platforms such as VMware, Hyper-V or XenSource.
The World's Most Popular Server
HP estimates that the world's most popular server is the ProLiant DL380 G5. As I haven't seen this claim disputed, I will use the DL380 as an example of a typical server, since it is the one you are most likely to meet. This next section gives you a detailed overview of the machine.
HP's Official ProLiant DL380 Overview
Front View:
1. Eight sockets for PC2-5300 Fully Buffered DIMMs (DDR2-667) - up to 64GB RAM is possible
2. Hot-plug fans, full redundancy - air is sucked through the machine from front to back.
3. Systems Insight Display
4. Quick release lever for rapid server access - lifting these levers will allow you to quickly pull the machine out of a rack.
5. Support for eight Small Form Factor hot-plug hard drive bays - typically SAS drives (discussed in the article on Storage) are installed here.
6. Front LEDs (show server status) and Unit Identification button/LED (for easy in rack server identification)
7. Two front USB ports (2 rear USB ports, 1 internal USB port)
8. Intel Xeon Processor (Performance models include two processors)
9. Hot plug power supply, redundancy option (High performance models include redundant power supply) - yes, two power supplies. One can be replaced while the server is still working, for maximum up-time.
10. Three full-size PCI-E expansion slots in standard expansion cage (or optional mixed PCI-X/E expansion cage). Two additional low-profile PCI-E slots embedded on the system board. Four slots available for use; one consumed by Smart Array controller (Base and Performance models)
11. Quick removal access panel
Rear View:
1. Torx service tool - HP exclusively use a standard screwdriver head in their machines, the Torx T15, to help avoid screwdriver cam-out.
2. Optional pass through cable door
3. Two Embedded NC373i Multifunction Gigabit Server Adapters with TCP/IP Offload Engine
4. Hot plug power supply bays, redundancy option (High performance models include redundant power supply)
5. Integrated Lights-Out 2 (iLO 2) remote management port. iLO is a remote management console that can turn a server on and off and take over the screen, amongst other features (see the sketch after this list). iLO usually has its own dedicated VLAN; I will talk more about what that is in 'A Developer's Guide to Networking: What you need to know but were afraid to ask'.
6. Video Port
7. Two USB 2.0 Ports
8. Serial Port
9. Keyboard Port
10. Mouse PS/2 Port
11. Two low-profile PCI Express x8 slots. Slot 1 is consumed by the P400 Smart Array controller. (Base and Performance models)
12. Three full-size PCI Express slots in standard cage (or optional mixed PCI-X/E expansion cage)
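The iLO 2 in this generation of machine is managed through its web interface or HP's XML scripting; later iLO generations (iLO 4 onwards) also expose the industry-standard Redfish REST API. As a minimal sketch, assuming a Redfish-capable iLO and made-up address and credentials, here is how a script could read a server's power state and power it on remotely:

```python
# Minimal sketch: remote management over the DMTF Redfish REST API.
# Assumptions: a Redfish-capable management processor (iLO 4 or later; the iLO 2 in
# the DL380 G5 predates Redfish), a hypothetical address and credentials, and a
# self-signed certificate (hence verify=False). The system member id "1" may differ.
import requests

ILO_HOST = "https://ilo.example.internal"   # hypothetical management address
AUTH = ("admin", "password")                # hypothetical credentials

# Read the current power state of the managed system.
resp = requests.get(f"{ILO_HOST}/redfish/v1/Systems/1/", auth=AUTH, verify=False)
resp.raise_for_status()
print("PowerState:", resp.json().get("PowerState"))

# Power the server on remotely (standard Redfish ComputerSystem.Reset action).
requests.post(
    f"{ILO_HOST}/redfish/v1/Systems/1/Actions/ComputerSystem.Reset",
    json={"ResetType": "On"},
    auth=AUTH,
    verify=False,
)
```

Being able to power-cycle and watch a box like this, without walking to the data centre, is exactly what the management port is there for.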
Specifications
- Processors:
  - Quad-Core and Dual-Core Intel® Xeon® processors - systems support up to 2 processors
  - Intel® 5000P chipset
- Memory:
  - Up to 64 GB of PC2-5300 Fully Buffered DIMMs (DDR2-667)
- Storage Controller:
  - Performance Models: 512MB cache controller (RAID 0/1/1+0/5/6)
  - High Efficiency and Base Models: 256MB cache controller (RAID 0/1/1+0/5)
  - Entry Models: 64MB cache controller (RAID 0/1/1+0)
- Internal Drive Support:
  - Eight small form factor (SFF) hot-plug drive bays supporting Serial Attached SCSI (SAS) and Serial ATA (SATA) drives
  - Slimline media bay supporting an optical or floppy drive
- Network Controller:
  - Two embedded Gigabit network adapters
- Expansion Slots:
  - Four PCI Express slots
- USB Ports:
  - USB 2.0 support
  - 5 ports in total: 2 at the front, 2 at the back, 1 internal
- Integrated Hypervisors (Optional):
  - VMware & Citrix XenServer virtualisation technology ... sadly not Microsoft yet!
- Redundancy:
  - Fully redundant hot-plug fans (N+1)
  - Hot-plug power supply with optional redundancy (included in Performance models)
- Form Factor:
  - Rack (2U, 3.5 inches high); depth 26 inches (66 cm)
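The RAID levels listed against the storage controller decide how much of those eight drive bays you actually get to use. A minimal sketch of the arithmetic, assuming eight identical drives of an illustrative 146 GB each (not a figure from the spec sheet):

```python
# Minimal sketch: usable capacity of an 8-drive array at the RAID levels the
# Smart Array controller supports. The drive size is an illustrative assumption.
DRIVES = 8
DRIVE_GB = 146          # hypothetical SFF SAS drives

usable = {
    "RAID 0":   DRIVES * DRIVE_GB,          # striping, no redundancy
    "RAID 1+0": DRIVES * DRIVE_GB // 2,     # mirrored pairs, half the raw space
    "RAID 5":   (DRIVES - 1) * DRIVE_GB,    # one drive's worth of parity
    "RAID 6":   (DRIVES - 2) * DRIVE_GB,    # two drives' worth of parity
}

for level, gb in usable.items():
    print(f"{level:8} -> {gb} GB usable of {DRIVES * DRIVE_GB} GB raw")
```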
What is a U?
A 'U', or Rack Unit, is an Electronic Industries Alliance (EIA-310) standard height measure for rack-mounted equipment. The reason there is a standard is so that machines from multiple vendors can all use the same racking.
One U equates to 1.75 inches, or 44.45 millimetres. Rack widths are standardised at 19 or 23 inches, and a typical full-height rack is 42U tall. Our example machine, the HP DL380 G5, is 2U (see Form Factor in the previous section).
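To get a feel for the numbers, here is a minimal sketch of the arithmetic, assuming a common 42U full-height rack with nothing in it but servers:

```python
# Minimal sketch: rack unit arithmetic. Assumes a 42U rack devoted entirely to servers.
U_INCHES = 1.75
U_MM = 44.45

RACK_U = 42      # a common full-height rack
SERVER_U = 2     # the DL380 G5 is 2U

print(f"Rack height: {RACK_U * U_INCHES:.1f} in / {RACK_U * U_MM / 1000:.2f} m")
print(f"2U servers per rack (ignoring switches, PDUs, etc.): {RACK_U // SERVER_U}")
```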
Good Questions to Ask
One of the most frequent questions there is, is 'why is the system running slowly?'
Often the answer is that one of the four resources on a server is being maxed out. CPU and memory are the obvious ones, but the problems usually lie with I/O, in the form of networking and disk. So understand what networks the server is connected to, what storage it has and how that storage is configured. Recently I found a SQL Server database that was running like a dog because its mdf file was sitting on the same physical disk as many others, so the disk was working flat out! Meanwhile the developers were off writing better SQL and .NET code, which wouldn't have helped at all.
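So before rewriting any code, check the four resources. A minimal sketch of a spot-check, assuming the third-party psutil library is installed and the script is run on the server in question:

```python
# Minimal sketch: spot-check the four resources that usually explain a "slow" server.
# Assumes the third-party psutil library (pip install psutil).
import psutil

cpu = psutil.cpu_percent(interval=1)     # % CPU over a 1-second sample
mem = psutil.virtual_memory().percent    # % RAM in use
disk = psutil.disk_io_counters()         # cumulative disk reads/writes since boot
net = psutil.net_io_counters()           # cumulative bytes sent/received since boot

print(f"CPU: {cpu}%  Memory: {mem}%")
print(f"Disk: {disk.read_bytes >> 20} MB read, {disk.write_bytes >> 20} MB written")
print(f"Network: {net.bytes_recv >> 20} MB in, {net.bytes_sent >> 20} MB out")
```

If the disk or network figures are climbing while the CPU sits idle, better SQL or .NET code is unlikely to be the fix.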
Is the Live server the same as the Dev? If so, what else is the Live server running?
'It runs OK in development' is frequently heard, and we all know that if development and testing environments could accurately simulate the Live environment then more problems would be found before go-live. A far easier thing to do is measure how much resource your new development consumes and profile it over a working cycle, such as a day or a month-end process. Then examine the servers it is going to go live on and watch them through the same cycle; you will get a much better idea of whether your new development will fit. This is easier than trying to reproduce a Live simulation in test or dev, but if you can do that, that's great!
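As a minimal sketch of that kind of profiling, again assuming psutil, the sampler below appends CPU, memory and disk figures to a CSV once a minute; run it on dev and on the Live server over the same working cycle and compare the two files:

```python
# Minimal sketch: sample CPU, memory and disk activity once a minute and append to a CSV,
# so the same profile can be captured on dev and Live and compared.
# Assumes the third-party psutil library; the filename and interval are arbitrary choices.
import csv
import os
import time
from datetime import datetime

import psutil

INTERVAL_SECONDS = 60                  # sample once a minute
OUTPUT_FILE = "server_profile.csv"     # hypothetical output path

new_file = not os.path.exists(OUTPUT_FILE)
with open(OUTPUT_FILE, "a", newline="") as f:
    writer = csv.writer(f)
    if new_file:
        writer.writerow(["timestamp", "cpu_percent", "memory_percent",
                         "disk_read_mb", "disk_write_mb"])
    while True:
        disk = psutil.disk_io_counters()
        writer.writerow([
            datetime.now().isoformat(timespec="seconds"),
            psutil.cpu_percent(interval=1),    # % CPU over a 1-second sample
            psutil.virtual_memory().percent,   # % RAM in use
            disk.read_bytes >> 20,             # cumulative MB read since boot
            disk.write_bytes >> 20,            # cumulative MB written since boot
        ])
        f.flush()
        time.sleep(INTERVAL_SECONDS)
```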
In Summary
I do like hardware! So I am biased, but I do believe that getting to know hardware can help bridge divides and solve problems, because if programs don't work well in front of business users the whole of IT looks like schmucks and the 'his fault, not mine' excuse looks extra lame.