Web giants and mega-size cloud-computing providers garner most of the attention when it comes to highly tuned and optimized data center designs. In April, Facebook shared the specifications for the servers it builds as part of an effort it's calling the Open Compute Project. More recently, Facebook engineers have written about testing an extreme multi-core chip design from Tilera. Google has long been known for taking unique approaches to server and data center operations and design, although the company is generally secretive about the specifics.
This sort of hyper-optimization around scale was supposedly going to rapidly drive all computing to a small number of very large providers. The economies of scale, so the reasoning went, would render anything smaller cost-prohibitive.
It hasn't played out that way--for a variety of reasons. And one of those reasons is that enterprises can play the data center optimization game too--and, in fact, may be better off with an approach that optimizes for their unique situation rather than for a mass audience. GE Appliances & Lighting's announcement yesterday that it's opening a new data center at its Louisville, Ky., Appliance Park headquarters offers a nice illustration of this trend.
GE's data center includes 128 cabinets of high-density servers. (Credit: GE)
The new facility reuses most of the walls, floor, and roof of existing factory space at Appliance Park. The location is historically interesting because it's where the first commercial UNIVAC computer went in 1954. (For the computer history buffs, the UNIVAC I had about 5,200 vacuum tubes, used mercury delay lines--basically big columns of mercury--for storage, and ran at 2.25 MHz.) The systems in the new facility are rather more advanced, and even cutting-edge, compared with the rack-mount servers that are the norm in typical data centers.
Two 27,000-gallon thermal storage tanks that are part of the cooling system (Credit: GE)
The most common servers today are 1U (1.75 inches) or 2U (3.5 inches) high, contain two multi-core processors, and are packaged into 42U-high cabinets. By the time networking equipment and other gear is added, a cabinet typically draws about 4 to 7 kilowatts of power and dissipates an equivalent amount of heat. This latter point is important because that sort of power density was, for a long time, considered to be about the limit for conventional air cooling. And few companies wanted to deal with the complexity of more sophisticated cooling techniques.
However, GE's data center houses servers designed to operate in the range of 18 to 24 kilowatts per cabinet. Higher density obviously means the servers take up less space. Combined with high-efficiency cooling systems, it also means less energy is needed to cool them. This is one reason the new GE facility is among the 6 percent of LEED-certified buildings globally to achieve Platinum certification. (LEED is an internationally recognized green building certification system.)
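To get a rough feel for what those per-cabinet numbers imply, here's a back-of-the-envelope sketch. The server counts and per-server wattages are illustrative assumptions, not GE's actual configuration, and the helper function is just for this example:

```python
# Back-of-the-envelope rack power estimate.
# Server counts and wattages below are assumptions for illustration,
# not GE's actual hardware configuration.

def rack_power_kw(servers_per_rack, watts_per_server, overhead_watts=500):
    """Estimate total rack draw in kW: servers plus networking/other gear."""
    return (servers_per_rack * watts_per_server + overhead_watts) / 1000.0

# A conventional rack: ~20 2U two-socket servers at roughly 300 W each.
conventional = rack_power_kw(servers_per_rack=20, watts_per_server=300)

# A high-density rack: ~40 1U servers at roughly 500 W each.
high_density = rack_power_kw(servers_per_rack=40, watts_per_server=500)

print(f"Conventional rack: ~{conventional:.1f} kW")  # ~6.5 kW, in the 4-7 kW range
print(f"High-density rack: ~{high_density:.1f} kW")  # ~20.5 kW, in the 18-24 kW range
```

Whatever the exact mix of gear, the point is the same: packing three to four times the load into each cabinet only works if the cooling system can carry away a correspondingly larger amount of heat, which is where the facility design comes in.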
For a long time, optimization and technical innovation happened mostly at the chip and server level, while data centers were more about real estate and a fairly standardized set of power and cooling infrastructure. That's changing.