Quantum just introduced a new deduplication appliance for small to medium-sized sites that I think will revolutionize the way IT departments handle growth. It’s called the DXi4601, and it provides a system for scaling that is unique in the industry and overcomes the problems users report when they try to scale backup capacity by adding nodes in a so-called “grid” system.
The “grid” I’m talking about is the one where users start with a node to protect around a TB of data, then add one, two, or many more nodes to create a larger system. As many users discover, this approach runs into problems once several nodes are in play. Aside from managing the physical changes during scaling, the basic issue is that the nodes are essentially independent: the size and speed of a backup job are limited by the performance and capacity of a single node, and each node has to reserve enough empty space to hold the entire backup set in an undeduplicated state. Deduplication only occurs for successive backup jobs going to the same node, and restores have to go back through the same node they came in through.
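A quick back-of-the-envelope sketch shows why that reserved landing space hurts. The numbers below are hypothetical, chosen only to illustrate the point: in a grid, each node keeps free space equal to the undeduplicated backup set, so raw capacity never fully pools.

```python
# Hedged sketch with made-up numbers: in a grid, every node must hold
# back empty "landing" space for the full undeduplicated backup set,
# so usable capacity does not pool across nodes.

NODE_RAW_TB = 2.0     # hypothetical raw capacity per grid node
BACKUP_SET_TB = 1.0   # hypothetical undeduplicated backup set a node must land

def grid_usable(nodes: int) -> float:
    """Usable TB across a grid: each node reserves landing space."""
    return nodes * (NODE_RAW_TB - BACKUP_SET_TB)

def pooled_usable(raw_tb: float) -> float:
    """Single inline-dedupe pool: no landing area is reserved."""
    return raw_tb

print(grid_usable(4))                  # four grid nodes
print(pooled_usable(4 * NODE_RAW_TB))  # same raw disk, one pool
```

With these illustrative numbers, four grid nodes yield only 4 TB usable out of 8 TB raw, while a single inline-dedupe pool keeps all 8 TB available to any job.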
So does the grid do anything? Yes. It allows older, deduped data to be moved from one node to another out the back door. But, users tell us, if you really need that feature, you probably don’t want it. Since the data has to be rerouted back to the node it came from (better have room!) before it can be reconstructed and read, restore performance degrades by about 90%.
The better alternative is the new capacity-on-demand scalability offered by the DXi4601. It’s a deduplication appliance that ships pre-populated with capacity, which the user licenses only when it is needed. Users can start with 4 TB of usable capacity and add more whenever they need to, simply by purchasing a license authorization code. That means there’s no re-racking or re-wiring, no downtime, and no service visit; no IT person has to be on site at all!
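Conceptually, capacity on demand works like a software cap on hardware that is already installed. The sketch below models that idea; the class, method names, and the 12 TB installed figure are illustrative assumptions, not the DXi4601’s actual API or configuration.

```python
# Hypothetical model of capacity-on-demand licensing: the chassis ships
# fully populated with disk, and a license code simply raises the
# usable-capacity cap. Names and sizes here are illustrative only.

class CapacityOnDemandAppliance:
    def __init__(self, installed_tb: float, licensed_tb: float):
        self.installed_tb = installed_tb  # disks already in the chassis
        self.licensed_tb = licensed_tb    # capacity the user has paid for

    def apply_license(self, additional_tb: float) -> None:
        """Entering an authorization code: no re-racking, no downtime."""
        self.licensed_tb = min(self.installed_tb,
                               self.licensed_tb + additional_tb)

# Start at 4 TB usable (per the article); 12 TB installed is assumed.
appliance = CapacityOnDemandAppliance(installed_tb=12.0, licensed_tb=4.0)
appliance.apply_license(4.0)  # buy a code, capacity grows in place
print(appliance.licensed_tb)  # 8.0
```

The key design point is that expansion is a state change, not a hardware change, which is why no one has to visit the site.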
The expanded capacity immediately becomes part of a single deduplication pool, and all data from all sources is deduplicated against all other data. There’s no need for a landing area or reserved space, because the dedupe is completely inline, and the entire performance and capacity of the system is available to any set of backup jobs. By the way: the performance is 1.7 TB an hour (nearly 500 MB/sec), and that is achievable on both backup and restore. That is roughly twice the speed of competitive products.
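For the curious, the unit conversion behind that figure checks out (using decimal units, 1 TB = 1,000,000 MB):

```python
# Arithmetic behind the quoted throughput: 1.7 TB/hour in MB/sec.
TB_PER_HOUR = 1.7
mb_per_sec = TB_PER_HOUR * 1_000_000 / 3600  # MB per TB / seconds per hour
print(round(mb_per_sec))  # 472, i.e. "nearly 500 MB/sec"
```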
OK. You ask: “So it’s faster and easier and gets more global dedupe than grid systems. What about the cost?” It’s actually far lower than grid pricing; after all, the expansion is storage, not processors and memory. Depending on the model, it can be as low as half the price of competitors. The economics are especially attractive when you realize that the base price includes everything you need: deduplication, CIFS and NFS interfaces, support for the Symantec OST API, and replication licenses. If you’re interested, check out my video on the topic.