IHS Markit, in March 2016, forecast that more than 20 billion chip-embedded IoT devices would be “connected” by 2020. But connected to what? That is an incredible amount of future digital information, and where does it all go? Yes, it all ends up in your data center.
Brace for a cacophony of devices and applications producing a “tsunami of watts,” and that means power and cooling will have to look drastically different.
Whether you own your data center or rely on a colocation provider, there is no free lunch: someone has to pay the bill.
The exciting promises of hybrid cloud, analytics, hyperconverged platforms, and other emerging workloads will hit your power and cooling infrastructure hard. Cooling, typically accomplished with large industrial equipment such as pumps, chillers, and cooling towers, is one of your main power consumers.
This isn’t a new topic. It started years ago, when blade servers became a “cool thing to have,” partly to support server virtualization and partly because they can be more efficient than traditional servers. But it’s time to do some math.
There are many ways to improve data center efficiency. You can move to Iceland, the data center paradise: its position close to the Arctic Circle provides outside-air cooling year-round. You can improve your hot- or cold-aisle containment system, or spread devices across rooms. However, it takes expensive and complex technology to achieve higher average rack-cooling density on the facility side.
Increasing server and storage density by filling a rack with high-density hardware doesn’t necessarily mean you get more servers into fewer racks. Even with modern hyperconverged or dense blade racks, you run out of available power and cooling long before you run out of physical space. And even where high density is achievable, it costs a fortune: Cisco noted in a recent blog entry that delivering cooling and power to a cabinet populated to 20kW costs twice as much as doing the same for twice as many cabinets populated to 10kW.
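The density trade-off above reduces to simple arithmetic. The sketch below is illustrative only: the dollar figure is a hypothetical placeholder, and the only number taken from the text is the 2x ratio attributed to Cisco.

```python
# Facility-cost comparison implied by the cited 2x ratio.
# The 50,000 build-out cost is purely hypothetical, not a quoted price.
cost_two_cabinets_10kw = 2 * 50_000           # two cabinets provisioned to 10 kW each
cost_one_cabinet_20kw = 2 * cost_two_cabinets_10kw  # same 20 kW total, twice the cost

# Normalize to cost per kW of delivered power and cooling.
per_kw_low_density = cost_two_cabinets_10kw / 20
per_kw_high_density = cost_one_cabinet_20kw / 20
print(f"Low density: ${per_kw_low_density:,.0f}/kW, "
      f"high density: ${per_kw_high_density:,.0f}/kW")
```

Whatever the absolute numbers, the ratio is the point: per kilowatt of IT load, the 20kW cabinet costs twice as much to support as the two 10kW cabinets.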
Power is often the main bottleneck, but in denser locations space can be a problem too. It’s not that easy to move walls or migrate data centers (believe me, I’ve been through that). So the Holy Grail is improving density without increasing power. It’s called efficiency.
At Quantum, we hear your concerns. We keep investing in product enhancements to improve the efficiency of our deduplication appliances. Isn’t the main purpose of deduplication to do more with less? That rule applies to data center efficiency as well. We are the first and only company to use 8TB self-encrypting, low-power drives in our latest DXi6900-S deduplication appliance.
What does this mean? It means that, at the same usable capacity, the DXi6900-S requires 48% less data center space, generates 33% less heat (Btu/h), and draws 37% less power (watts) than its direct competitor (don’t ask for names—we don’t do that).
In terms of savings (here is where my friends in the industry helped), it means:
- USD 42,000 in space, power, and cooling savings over five years for a single 544TB DXi6900-S versus the direct competitor in a US-based data center
- EUR 46,000 in savings if your data center is in Paris, France
- AUD 91,000 if your colocation facility is in Sydney, Australia
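You can run a back-of-the-envelope version of this math for your own facility. The model below is a rough sketch under stated assumptions: the PUE, electricity rate, rack-space rate, wattage, and rack-unit figures are all hypothetical placeholders, and only the 37% power and 48% space ratios come from the comparison above.

```python
# Rough five-year space, power, and cooling cost model for one appliance.
# All rates and sizes below are illustrative assumptions, not quoted prices.

def five_year_cost(watts, rack_units, kwh_rate, space_rate_per_ru_month,
                   pue=1.6, years=5):
    """Estimate the cost of power (grossed up by PUE to account for cooling
    overhead) plus colocation rack space over the given number of years."""
    hours = years * 365 * 24
    power_cost = (watts / 1000) * pue * hours * kwh_rate
    space_cost = rack_units * space_rate_per_ru_month * 12 * years
    return power_cost + space_cost

# Hypothetical appliance: 2,000 W in 12U, at $0.12/kWh and $30/RU/month.
ours = five_year_cost(watts=2000, rack_units=12,
                      kwh_rate=0.12, space_rate_per_ru_month=30)

# A competitor drawing 37% more power in 48% more space (the text's ratios).
theirs = five_year_cost(watts=2000 / (1 - 0.37), rack_units=12 / (1 - 0.48),
                        kwh_rate=0.12, space_rate_per_ru_month=30)

print(f"Estimated five-year savings: USD {theirs - ours:,.0f}")
```

Plug in your own electricity tariff, colocation rates, and PUE; the gap between the two totals is what the percentage differences translate to in your facility.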
And finally, some more good news: because the DXi6900-S doubles efficiency, you can run two appliances in the same rack.
Data is growing.
Power and cooling costs are real.
Moving to Iceland probably isn’t an option.
We’re here to help evaluate your system efficiency and look for alternatives.
Based on colocation MSRP prices; street price may be lower.