How do they go so fast?

We are continually battling competitors who state ingest performance numbers that defy logic. That is, we compete against systems with four 10GbE ports that supposedly ingest at 100TB/hour. Stealing with pride from Jim Barbera – Solutions Architect for the Federal Region – the math for the physical ports is as follows:

4 x 10GbE

10GbE = 1,250MB/sec = 4.5TB/hour per port; 4 ports x 4.5TB/hour = 18TB/hour at theoretical maximum speeds.
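If you want to sanity-check that arithmetic yourself, here is a minimal sketch of it (the port count and 10GbE line rate are the figures above; the variable names are just ours):

```python
# Back-of-the-envelope check on what four 10GbE ports can physically carry.
GBITS_PER_PORT = 10                                            # 10GbE line rate, gigabits per second
BYTES_PER_SEC_PER_PORT = GBITS_PER_PORT * 1_000_000_000 / 8    # ~1.25 GB/sec per port

mb_per_sec_per_port = BYTES_PER_SEC_PER_PORT / 1_000_000                    # ~1,250 MB/sec
tb_per_hour_per_port = BYTES_PER_SEC_PER_PORT * 3600 / 1_000_000_000_000    # ~4.5 TB/hour

ports = 4
theoretical_max = ports * tb_per_hour_per_port                               # ~18 TB/hour

print(f"Per port:  {mb_per_sec_per_port:,.0f} MB/sec, {tb_per_hour_per_port:.1f} TB/hour")
print(f"All ports: {theoretical_max:.1f} TB/hour theoretical maximum")
```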

So how the heck do competitors claim 80-100TB/hour on those same four 10GbE ports?

In these cases, the numbers are measured (some might say “fabricated”) using source side deduplication writing to a deduplication target device. Behind the scenes, the target must already have a mature blockpool containing multiple full backup images, so that when source side deduplication is employed there is very little new/unique data that actually needs to be sent to the target. The other part of the scheme/scam is that the backup policy must be Full, Full, Full, so they’re dealing with the smallest rate of change (ROC) possible.

The following example is not meant to create debate about specific mathematical accuracy, but to educate you on how they are “cooking the books”.

Example: a 64TB Full image with a total time to write and store of 8 hours, representing roughly 8TB/hour. Not bad. We can attain that number, and it is quite easy to see how the physical ports can support this.

After the 64TB is backed up and stored on the target device, let’s assume we apply the equivalent of 2% new/unique data to the backup source. This would work out to 1.28TB of data that needs to be sent and stored during a subsequent backup.

If we were to just do an incremental backup job of that 1.28TB and it takes 10 minutes, we again see an ingest rate of roughly 8TB/hour.

What if, instead of the incremental backup of 1.28TB, I ran another Full backup of 65.28TB?

The reality is that by employing source side deduplication, I will be performing the handshaking to update pointers for the 64TB and actually sending only 1.28TB of new and unique data. If this entire process takes only 30 minutes, I have essentially realized a full 65TB backup in half an hour. Therefore, if I can write a full 65TB backup in 30 minutes, I can claim a “logical performance” rating of roughly 130TB/hour.
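To see how far apart the two numbers really are, here is a minimal sketch of the same scenario (the 64TB image, 2% change rate, 10-minute incremental, and 30-minute window are the figures from the example above; nothing here is measured data):

```python
# Reproduce the "logical performance" math from the example above.
full_image_tb = 64.0           # size of the original Full backup image
change_rate = 0.02             # 2% new/unique data applied to the source
incremental_hours = 10 / 60    # the plain incremental completes in 10 minutes
synthetic_full_hours = 0.5     # the dedupe-assisted "Full" completes in 30 minutes

unique_tb = full_image_tb * change_rate            # 1.28 TB actually crosses the wire
logical_tb = full_image_tb + unique_tb             # 65.28 TB the backup app reports as protected

incremental_rate = unique_tb / incremental_hours   # ~7.7 TB/hour for the honest incremental
physical_rate = unique_tb / synthetic_full_hours   # ~2.6 TB/hour of real ingest during the "Full"
logical_rate = logical_tb / synthetic_full_hours   # ~130 TB/hour of claimed "logical performance"

print(f"Data actually sent:     {unique_tb:.2f} TB")
print(f"Incremental rate:       {incremental_rate:.1f} TB/hour")
print(f"Real ingest rate:       {physical_rate:.1f} TB/hour")
print(f"Claimed 'logical' rate: {logical_rate:.1f} TB/hour")
```

The wire never moves faster than the ~18TB/hour physical ceiling computed earlier; only the accounting does.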

By now you’re saying, “that is all bull*%it.”  And you’re right.

But the numbers don’t lie.

So will we be better positioned to compete if we apply the same liberal math to the situation and make the same claims? Or are we going to explain how, with enough data in the blockpool, very little change rate, and source side deduplication in play, we can make a lump of coal turn into a diamond right before your very eyes?

For customers who buy into the competitors’ claims, we need them to ask other vendors what the measured performance for the first Full backup will be. At that point, the vendor has to send all of the data and cannot rely on only updating pointers to existing blocks.

Quantum’s DXi products deliver the fastest native ingest performance and also offer options for Accent, Optimized Synthetic Full, and vmPRO, all features that send only new and unique data. In the end, this reduces network loading and also creates very high ingest rates when measured as “logical performance”.

Check out real-world case studies from Quantum customers. Visit our Customer Story Portal to see how our DXi and vmPRO solutions solve the biggest data management problems.
