|by Jordan Woods|
Few things come along that alter the world of filesystems and make them exciting, especially for folks in the Media and Entertainment industry, who face a multitude of distractions, tight turnaround schedules, and small budgets. That’s why Quantum dove deep into its treasured StorNext product to re-invent what can be considered the most modern and developed shared filesystem to date: StorNext 5.
The vast majority of the enhancements were made at the very low level of the metadata and journaling layer, but this is not to be overlooked as trivial or a “nice to have.” The changes made to this layer have profound consequences for overall system performance and scalability. Looking at this holistically, we can see a handful of the most significant changes: multithreaded metadata operations, new layering of extents on a contiguous range of blocks, online metadata copy, and what I like to call “intelligent” affinities. These select few updates are just some of the major changes that will have a great effect on media and entertainment workflows.
So, what makes this exciting?
We have now opened the door to new possibilities in workflow optimization. The…
Great time in NY at our first stop on the StorNext 5 Next Generation Tour! Special thanks to everyone who came out and asked so many great questions about how StorNext 5 is different and exactly what problems StorNext 5 Appliances can help you solve. From End-to-End Workflows to Object Storage to how Quantum’s Data Center offerings can even help solve rich content and big data problems for Media and Entertainment environments, we had too much to talk about.
We’ll see you there.
For more on Quantum and StorNext, check out:
|by Dan Duperron|
Technology is a moving target. That’s part of the fun of being in this business, but for a technology company – like Quantum – it can present some challenges. Sure, there are technological challenges but I’m not talking about those. Quantum has been around for decades and has a strong brand. We’re constantly evolving and changing and we strive to understand and anticipate the changes happening in the data center to best meet the challenges our customers face now. It’s okay with me if someone has fond memories of a Quantum disk drive from 20 years ago, but I want our partners and customers to understand what Quantum does today, and how we can help them save money and better leverage their data to make money.
And that, my friends, is how I ended up at Lincoln Financial Field in Philly last week hanging out with some of the Eagles cheerleaders. Sometimes this job is tough, but I love it anyway.
We held a series of events at the stadium as part of our “reTHINK Backup and Archive” roadshow, now touring the US. New tools are available that can be valuable for anyone looking to keep more data, more accessible,…
|by Casey Burns|
With the introduction of the cloud there has been a lot of talk, including jokes, about how one can get started in the cloud. We see customers all the time trying to figure out what they can do from a cloud strategy perspective and how it will impact their current infrastructure, positively or negatively, mainly around budgets. The cloud certainly has the ability to provide some financial relief, allowing you and your team to focus on more strategic projects. So why not get started with cloud technologies, particularly when it comes to backup?
Quantum recently announced a cloud based backup program for MSPs and VARs that delivers a number of fantastic benefits:
• Minimal upfront costs for the MSP/VAR or end user
• Unique subscription-based pricing that lets costs scale with revenue
• High-margin BaaS for organizations of any size
• Use of existing infrastructure via a virtual dedupe appliance and backup software
So what’s this offering all about?
There are two main aspects: the technology and a unique capacity-based licensing methodology.
Let’s talk about the technology. In September, we announced a big brother to our DXi V-Series of virtual dedupe appliances, the DXi V4000. The DXi…
Not all Tiered Storage is created equal.
In our new Data Center Solution videos we’re showing the world how we’re rethinking the future of backup, archive, and tiered storage. With a tailored, customized approach we help customers store their data in the right place, at the right time, for the right use case. Our Data Center solutions incorporate a customized mix of storage technologies including deduplication-optimized RAID, next-gen object storage, public and private cloud deployments, and tape technologies including LTFS.
Check out the video below and get in touch to see how our tiered storage solutions can help you build a 21st-century data protection strategy. When it comes to data protection and archiving, one size does not fit all.
|by Jon Gacek|
I read a good article in SearchDataBackup recently on an interview Sarah Wilson conducted with Jon Toigo on LTFS (Linear Tape File System). LTFS is an open standard technology that allows you to use tape like NAS – drag and drop files to and from the tape, quickly access them from a directory on your screen, easily exchange them between different operating systems and software, etc. The interview provides a good overview of LTFS and where it’s being used, and Toigo also shoots down some of the misconceptions about tape that I often hear.
LTFS demonstrates the innovation that we continue to see in tape, one of the points highlighted by the Tape Storage Council in a memo issued last week. The Council – made up of tape providers from across the industry, including Quantum – talked about the “pivotal and expanding role” that tape plays in today’s data centers for long-term data retention. Among other things, the memo referenced:
* Interesting uses of tape by different organizations, including the National Institutes of Health, Major League Baseball and USC.
* Tape capacity shipments reaching a record level in 2012 of more than 20,000 PB (that’s 20 EB), with…
As the volume of data has increased, there has been a shift in the way that companies use and access that data. That means it’s time to change the way you think about data protection, retention, and accessibility.
Organizations of all sizes recognize that data can help gain competitive advantages and even support new revenue streams, but this is placing a demand on IT to store and preserve access to that data. Companies need new solutions and technologies to support unpredictable, on-demand access and incorporate new approaches to backup and archiving.
It’s time to reTHINK Backup & Archive.
Click on the picture below and check out how Quantum is creating the next generation of solutions to support the unpredictable, on demand access of today’s data.
Go beyond the marketing and read about what reTHINK can mean for you:
Since we announced our next generation StorNext 5 Appliances three weeks ago, we’ve been getting requests for more background about how we’ve achieved such significant increases in performance, scalability and flexibility. To dive deeper into how we built StorNext 5 from the ground up, I’ve tapped Skip Levens, director of technical marketing, to detail some of its core design features.
A new standard for metadata efficiency. The footprint of file metadata in StorNext has always been small, but with StorNext 5, the engineering team was able to shrink the size of metadata volumes even further. That means StorNext metadata controllers can use the space dedicated to file metadata more efficiently, allowing StorNext 5 file systems to scale to 5 billion files. That’s really important since modern workflows are more likely to combine large streaming files with smaller files like audio, proxy and content-based metadata files.
Smaller metadata means large-scale performance. More compact metadata and better caching techniques also unleash hidden performance in existing metadata controllers, especially in larger deployments with frequent metadata reads and writes. With StorNext 5, large-scale file creation and deletion can also occur concurrently with high-performance access. That means customers won’t need to set…
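The scaling math behind a smaller metadata footprint is simple enough to sketch. In this back-of-the-envelope illustration (the per-file footprints and volume size are hypothetical assumptions, not published StorNext figures), halving the per-file metadata cost doubles the number of files the same metadata volume can describe:

```python
# Back-of-the-envelope: how the per-file metadata footprint caps
# the number of files a metadata volume can describe.
# All figures below are ILLUSTRATIVE assumptions, not StorNext specs.

def max_files(metadata_volume_bytes: int, bytes_per_file: int) -> int:
    """Files a metadata volume can describe at a given per-file cost."""
    return metadata_volume_bytes // bytes_per_file

TB = 10**12

files_before = max_files(2 * TB, 1024)  # assumed 1 KB of metadata per file
files_after  = max_files(2 * TB, 512)   # assumed 512 B per file

# Halving the footprint doubles addressable file count.
print(files_before, files_after)
```

The same logic explains why shrinking metadata also helps caching: more file records fit in the controller's RAM, so fewer metadata reads hit disk.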
This article originally appeared on Wired.com’s Innovation Insights blog.
The creation and acquisition of massive amounts of content has become easier than ever.
With the introduction of new digital acquisition technologies (from video cameras to sensors) and increasingly sophisticated data analysis tools, the way we handle and save our data is changing. The true value of information will evolve over time. For example, real-time data and historical data can reveal unexpected results. Old video footage can be compiled and digitized from archives to capture a previously insignificant moment in time. For businesses that rely on data to identify trends or repurpose content for monetization, there is a need to keep all of this forever.
The amount of data captured and shared between applications is staggering. In our personal lives it is not inconceivable for an individual to have a hundred thousand or more photos and thousands of home videos (my wife has 151K photos as of this week). While the world has seemingly embraced a streaming model, the desire to accumulate personal data stores has continued to grow as we find ourselves caught in the middle of changing technologies and hoarding content. We are moving from one computing…
|by Jon Gacek|
A federal jury in Seattle recently ruled for Microsoft in a patent dispute with Google’s Motorola Mobility division, closing off a summer in which patents have been a hot topic. The continuing Apple-Samsung battle has attracted a lot of attention, and President Obama’s proposals for cracking down on patent trolls are being followed closely in the technology, legal and VC communities. It’s the issue of patent trolls that I want to focus on here. These are companies that exist solely for the purpose of buying patents and then suing others for infringing on “their” technology. A few months ago, Quantum had a resounding legal victory against a patent troll, and it’s a good example of how absurd these lawsuits can be.
Before I get into that, I want to be clear that I think patent protection is very important. Advances in all kinds of fields are made possible by innovators being able to protect their intellectual property and get a return on their investment. It’s one of the foundations of our knowledge economy, and at Quantum, we understand it well—really well. As a long-time technology leader, we’ve been issued more than 1,000 patents, including the foundational patent for the most…
Being the “Cloud Guy” at Quantum, I get to talk to a wide variety of people about what’s happening in the cloud, from the wildly optimistic visionaries to the skeptics wondering, “Is my data really safe?” This week the visionaries got a hard reality check when Nirvanix abruptly announced plans to shut down their cloud service, giving customers and partners just two weeks to find another place for their petabytes of data.
The cloud still offers enormous benefits, but I think the Nirvanix example is a great reminder that not all clouds are created equal and there are key considerations companies need to thoroughly evaluate.
Keep Data On-Site
Data protection best practice, and our recommendation, is to keep a full copy of your data offsite while also retaining a full copy on-premises. The Nirvanix shutdown underscores the importance of that on-premises copy: it enables fast restores and, combined with the offsite copy, gives you full protection in any situation.
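To see why the on-premises copy matters for restores, consider a rough restore-time comparison. The data size and link speeds below are illustrative assumptions, not measurements of any particular environment:

```python
# Rough restore-time comparison: local copy vs. pulling the same
# data back over a WAN link. Figures are ILLUSTRATIVE assumptions.

def restore_hours(data_tb: float, gbits_per_sec: float) -> float:
    """Hours to move data_tb terabytes at a sustained link rate."""
    bits = data_tb * 8 * 10**12          # terabytes -> bits
    return bits / (gbits_per_sec * 10**9) / 3600

lan = restore_hours(10, 10)    # 10 TB over a 10 Gb/s local network
wan = restore_hours(10, 0.1)   # same 10 TB over a 100 Mb/s WAN link

print(f"LAN: {lan:.1f} h, WAN: {wan:.1f} h")
```

Even before accounting for egress fees or provider throttling, the 100x bandwidth gap turns an overnight restore into a week-plus ordeal, which is exactly the scenario the on-premises copy avoids.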
Even though the cloud offers many benefits, customers need to maintain diversity in their data protection portfolio, and use the right technology,…
Earlier today at IBC in Amsterdam, Quantum announced StorNext 5 Appliances, a new generation for StorNext that delivers dramatic new levels of performance, scalability and flexibility for the industry’s leading file system, tiered storage and archive. Newly engineered from the ground up, StorNext 5 is the result of over two years of development with our team of file system experts—many from the original StorNext team— examining virtually every line of code with the thought: “How can we optimize StorNext for our customers’ modern, evolving workflows?”
Over the past two decades of managing digital content for the world’s most demanding workflows, including the world’s top post and broadcast facilities, we have watched the digital workflow change. Video, image and sensor data are now 100% digital, from capture to ingest to processing to delivery to archive. Production teams are now globally distributed and need immediate, shared access to collaborate effectively. Digital content is now more than just large streaming video files; it’s a mix of files of various sizes that must be managed together, efficiently. And the network topologies and interconnects that users and applications use to access content have evolved beyond the SAN to include IP/NAS and even cloud access.
|by Dan Duperron|
As my colleague Terry Grulke pointed out earlier, there is a lot of funny math used by deduplication vendors to try to convince you that their system can go fast. With our DXi systems we don’t have to hire Cirque du Soleil to generate our performance numbers. We can keep it simple because DXi systems are just really, really fast – natively.
That’s what I’m going to talk about here: “native” performance. That is, the capability of the DXi system itself vs. some manufactured “logical” number like the ones Terry wrote about.
Apparently, our high performance is confusing to some of our competitors. Frequently when we are up against EMC or Data Domain, we get this question forwarded to us from the prospect, with this exact wording every time (Hmmm):
“The head unit only supports X disks and the expansion array supports Y, how does Quantum guarantee Z TB per hour (according to the documentation) with so few disks?” (insert X, Y, and Z from the appropriate DXi datasheet)
Well, we don’t “guarantee” performance; nobody does. But our datasheet numbers are repeatable and justifiable, and created with real data – the same data types you have – Exchange, MS-Office…
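To make the native-vs-logical distinction concrete, here is a toy sketch of the core deduplication idea: split data into chunks, fingerprint each chunk, and store each unique chunk only once. This uses simple fixed-size chunks for clarity; production appliances such as the DXi use more sophisticated variable-length blocking, and nothing here reflects Quantum's actual implementation:

```python
import hashlib

def dedupe_ratio(data: bytes, chunk_size: int = 4096) -> float:
    """Split data into fixed-size chunks, store each unique chunk
    once (keyed by its SHA-256 fingerprint), and return the
    logical-to-physical size ratio."""
    store = {}
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        store[hashlib.sha256(chunk).hexdigest()] = len(chunk)
    physical = sum(store.values())
    return len(data) / physical if physical else 0.0

# Highly repetitive data (think weekly full backups) dedupes well.
repetitive = b"A" * 4096 * 100   # 100 identical chunks
print(dedupe_ratio(repetitive))  # 100.0
```

The "logical" numbers Terry wrote about multiply native throughput by a ratio like this one, which is why they depend entirely on how repetitive the test data is.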
|by Casey Burns|
Recently there have been a number of publications talking about “virtual appliances” and how they can provide a lot of benefits to customers, particularly those that are highly virtualized. SearchStorage Virtualization even made it their “word of the day.” In short, these virtual appliances are analogous to physical appliances: they are pre-configured (OS and software application), purpose-built, self-contained appliances that deploy easily and seamlessly into your environment.
From a data protection perspective, physical appliances have been around for years. Think about Quantum’s DXi, a physical backup appliance designed to drop right into your environment with minimal disruption and without the need for anything else to be added. Completely purpose built. In March of 2012, Quantum introduced the industry’s first and only Virtual Backup Appliance using deduplication and replication technologies, the DXi V1000, providing up to 30TB of storage capacity in a single virtual machine instance.
In May of 2013, we announced the DXi V4000, delivering up to 360TB of storage capacity in a single virtual machine instance. Wow!
The benefits of physical appliances have been widely known for some time — given how long they have…
|by Mark Pastor|
Disk isn’t dead.
With all the love virtualization, flash memory and cloud are getting these days, allow me to be the first to jump to the defense of disk. It still has a future — of that I am certain. What gives me this tremendous confidence is the new use case I’m seeing that truly enables corporations to store everything and to access everything in ways previously not possible. The use case I am referring to is a simple one:
Use object storage disk for retaining unstructured archive data. All of it.
By moving all of your unstructured archive content to disk-based object storage you accomplish several things at once. You reduce the burden on your primary storage resources, you improve the performance of your backup operation, and you free up network bandwidth that is involved in your backup processes.
What enables this use case better now than ever before are two key ingredients. First, today’s next-generation object storage technology includes forward-error-correction technology that enables objects to be spread across components to achieve durability, much like what RAID technology does for drives within an enclosure. However, some implementations of object storage can go way beyond…
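The simplest illustration of the forward-error-correction idea is single-parity XOR, the same scheme RAID-5 applies at the drive level: store a parity chunk alongside the data chunks, and any one lost chunk can be rebuilt from the survivors. Real object stores typically use stronger erasure codes (such as Reed–Solomon) that tolerate multiple simultaneous failures; this sketch covers only the single-failure case:

```python
from functools import reduce

def xor_parity(chunks):
    """Compute a parity chunk as the bytewise XOR of equal-length
    data chunks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))

def recover(surviving, parity):
    """Rebuild the one missing data chunk: XOR of the surviving
    chunks and the parity cancels everything except the lost chunk."""
    return xor_parity(surviving + [parity])

data = [b"object-a", b"object-b", b"object-c"]
parity = xor_parity(data)

# Lose the middle chunk, then rebuild it from the other two + parity.
rebuilt = recover([data[0], data[2]], parity)
assert rebuilt == data[1]
```

Spread those four chunks across four separate enclosures and the object survives the loss of any one of them, which is the durability property the post is describing.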