Your Data, Our Object Storage, Everyone’s Future

The challenge with data in life sciences today? Managing the sheer volume of it. The first genome took 15 years and 4 billion dollars to sequence. Today’s next-gen sequencers can sequence a genome in days for less than $1,000. More genomes are being sequenced, which means more data is being analyzed—and it all has to be stored somewhere. In fact, the Public Library of Science (PLOS) estimates that genomic data could soon surpass YouTube as the biggest generator of data. It’s clear that life sciences teams have their work cut out for them. However, the storage challenge goes beyond managing a flood of data. Teams of scientists often need to work on the same data at the same time, even if collaborators are in a lab half a world away. When these researchers access large genome data sets or high-res medical images, they need fast access. And research takes time—some studies can last for decades. Data generated at the beginning of a study needs to remain accessible over the lifespan of the entire project. To scientists, it’s not just data. It’s their life’s work. It’s work that is building a better future.

From the Dalet Academy: Shared Storage Workflows, Pt. 2

Video editing has always placed higher demands on storage than any other file-based application, and today’s higher-resolution formats demand even more performance from storage systems: 4K raw requires 1210 MB/sec per stream—7.3 times more throughput than raw HD. In the early days of non-linear editing, this level of performance could only be achieved with direct-attached storage (DAS). As technology progressed, we were able to add shared collaboration even with many HD streams. Unfortunately, with the extreme demands of 4K and beyond, many workflows are resorting to DAS again, despite its drawbacks. With DAS, sharing large media files between editors and moving content through the workflow means copying files across the network or onto removable media such as individual USB and Thunderbolt-attached hard drives. That’s not only expensive because it duplicates the storage capacity required; it also diminishes user productivity and can break version control protocols. In this blog, we'll look at the key differences between the major storage technologies, as well as general usage recommendations.
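Those throughput figures are format-dependent. As a rough check, here is a minimal back-of-the-envelope sketch that computes uncompressed per-stream bandwidth from frame parameters; the resolutions, bit depths and frame rates used are illustrative assumptions, not the exact source figures behind the numbers quoted above, but they land in the same ballpark.

```python
# Back-of-the-envelope stream bandwidth: bytes per frame x frames per second.
# The resolutions, bit depths and frame rates below are illustrative
# assumptions; actual raw formats vary, so exact figures will differ slightly.

def stream_mb_per_sec(width, height, bits_per_pixel, fps):
    """Uncompressed video bandwidth in MB/s for a single stream."""
    bytes_per_frame = width * height * bits_per_pixel / 8
    return bytes_per_frame * fps / 1e6

# Raw HD: 1920x1080, 10-bit 4:2:2 (~20 bits/pixel), 30 fps
hd = stream_mb_per_sec(1920, 1080, 20, 30)

# Raw 4K: 4096x2160, 12-bit 4:4:4 (~36 bits/pixel), 30 fps
uhd = stream_mb_per_sec(4096, 2160, 36, 30)

print(f"HD raw: ~{hd:.0f} MB/s per stream")
print(f"4K raw: ~{uhd:.0f} MB/s per stream ({uhd / hd:.1f}x HD)")
```

With slightly different packing, bit-depth or frame-rate assumptions you arrive at figures like the 1210 MB/sec and 7.3x cited in the post; the point is the order of magnitude, not the decimal.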

Why Use a MAM to Archive When I Can Automate Using an Archive Manager’s Policies?

One of the challenges administrators at content companies face when beginning to implement a MAM-based workflow with a tiered storage solution is deciding which one should lead. It’s almost like new dance partners stumbling over each other’s feet. Even in well-integrated solutions, where the MAM vendor has coded to the archive vendor’s APIs, there is room for conflict. Does one rely solely on the MAM software itself to drive the archival and retrieval of content to and from the repository? Or does one complement this with policies in the storage archive software itself to automate the archival? Folks in the Hollywood area have an opportunity to learn more about MAM, archive and workflow storage at a live demo event at MelroseMac’s offices on June 9 with BlackMagic Cameras, Cantemo and StorNext. Read on and RSVP for the event.
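To illustrate the choice, here is a rough, hypothetical sketch contrasting the two approaches: an explicit, MAM-driven archive request versus a scheduled policy sweep run by the archive manager. The function names, the 90-day threshold, the file paths and the archive_to_tape placeholder are assumptions for illustration only, not any vendor’s API.

```python
# Hypothetical contrast of the two archival approaches discussed above.
# Names, paths and thresholds are placeholders, not any vendor's API.
from datetime import datetime, timedelta

def archive_to_tape(asset_path):
    """Placeholder for the actual archive operation (vendor-specific)."""
    print(f"archiving {asset_path}")

# 1) MAM-driven: the MAM decides per asset when to archive (for example,
#    when an editor marks a project finished) and issues an explicit request.
def mam_archive_request(asset_path):
    archive_to_tape(asset_path)

# 2) Policy-driven: the archive manager sweeps the namespace on a schedule
#    and archives anything left untouched longer than a threshold.
def policy_sweep(last_access_by_asset, max_idle=timedelta(days=90)):
    now = datetime.utcnow()
    for path, last_access in last_access_by_asset.items():
        if now - last_access > max_idle:
            archive_to_tape(path)

# Example: the MAM archives one finished project explicitly, while the
# nightly sweep catches anything the MAM never requested.
mam_archive_request("/media/projects/ep01/final.mxf")
policy_sweep({"/media/projects/ep02/raw.mxf":
              datetime.utcnow() - timedelta(days=200)})
```

The conflict the post describes arises when both paths act on the same asset; in practice one of them has to be the system of record.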

Pt. 2 of Three Trends That Could Disrupt Your Workflow in 2015: Extended Online

There’s never been a bigger rush to transcode and deliver content worldwide to more distribution channels than today. A broad range of new delivery platforms and new audiences can bring new value to legacy content, but only if your workflow supports it. If you’re not ready to release content quickly when a new distribution opportunity arises, re-process it for special features, or even re-use content in a new project, you’re leaving money on the table. And that’s not so easy to explain to your boss or your investors. Unfortunately, most workflows are poorly set up to access, transcode and deliver content created years ago. The good news is that StorNext 5 workflows built with Quantum Lattus have specific capabilities that enable real-time and non-real-time operations to occur efficiently in the same storage infrastructure. Here's how.

Western Digital/HGST Acquires Amplidata: Object Storage is the Place to Be

As you may have heard already, there’s exciting news today in the object storage marketplace: Western Digital Corp., a leader in storage technology, announced that its HGST subsidiary is acquiring Amplidata, Quantum’s object storage technology partner. We’re happy for Amplidata and looking forward to expanded partnership opportunities with WDC and the HGST group. As a reminder, Quantum announced in 2012 that we were leveraging the performance and availability of Amplidata’s object storage technology by embedding it in our Lattus family of unique active archive solutions. Since that time, many Quantum customers have been able to increase the value of their data by extending cost-effective online access to massive volumes (PBs) of information. Let’s look at why this announcement is great news, for three major reasons.

Extending Online Storage

I consider the attention industry analysts pay to emerging technologies to be an interesting barometer for the market. Not long ago I attended the Next Gen Storage Summit, where object storage was a key focus, and met with a long list of industry luminaries to discuss object storage and where it is headed. There were lots of probing discussions about Lattus, as well as observations about use cases in various industries that stand to benefit from more cost-efficient, scalable and accessible storage. These analysts have also consistently echoed one sentiment: demand for capacity growth is real industry-wide, and a clear mix shift toward unstructured content is driving it.

Will Legal Ruling on APIs Put Gas in the OpenStack Storage Engine?

Historically, the storage industry has, simply put, sucked at agreeing on – and deploying – open standards for anything. This makes perfect sense when you consider that the “standardized” segments of the storage business (e.g., raw disk storage) have survived for years on razor-thin margins per disk, while the software and system value-add floating on top of this core architecture has been priced at anywhere from rational margins to excess profits. Nobody wants to give up those margins! Startups need those margins to innovate, while the major system and storage houses that exert a level of market control simply love the ROI. Nobody really wants an open standard – unless by some chance it is constructed to allow customers to move off a competitor’s offerings and onto “mine.” This win-lose mentality results in a lot of talk (and meetings) about open standards and products, but very little action. SNIA is a tissue-paper tiger. Enter – the cloud.

The Boundary Between Primary Data and Archive Data Has Blurred

Contrary to popular belief, how you archive matters more than what or why you archive. For the broad market, the notion of non-archived data has become antiquated. Getting rid of old data means taking the time or investing in resources required to decide what data can be deleted, and most data managers do not feel comfortable making those decisions. So today virtually everything is being stored forever, generating huge repositories of data and content, and creating a great urgency to establish a data storage architecture that will thrive in this new “store everything forever” era.

The Playoffs for Storage

I know a lot of folks think the big contest this time of year is the Super Bowl playoffs. In Quantum’s Denver, Bay Area and Seattle offices we’re sporting the colors of the Broncos, 49ers and Seahawks, with just a bit of friendly trash talk to kick off conference calls. Perhaps you know someone rooting for New England – I don’t. But if you care about data storage, the other big contest is Storage Magazine’s Product of the Year Awards. The award serves as an annual reminder of what the storage community found important, promising, and profitable. This year’s finalists include a cross-section of Quantum products spanning scale-out shared storage and the data center, highlighting the breadth of the company’s innovation over the last year. For 2013, four Quantum products – more than from any other vendor among the finalists – were selected across three award categories.

Data in the Goldilocks Zone

Astronomers searching for life outside of our solar system speak of The Goldilocks Zone – the region around a star where conditions are suitable for sustaining life: not too close and hot, and not too distant and cold. Initially these “just right” conditions appeared to be almost impossibly rare, but researchers over the years have found organisms that can exist in more conditions than previously imagined. It turns out that the Goldilocks Zone is wider than we thought, increasing the possibility of finding other planets capable of sustaining life. Today a similar recognition is happening in data centers. While IT has long thought of data storage as “hot” and requiring immediate access in flash memory or primary disk, or “cold” and suitable for backup and archive to tape, there weren’t many choices for a “warm” tier of data that required a more nuanced cost/latency balance. The expanding range of choices such as public and private cloud, object storage and LTFS tape has in effect created a wider Goldilocks Zone for data centers. The refreshed thinking about the capabilities of both established and emerging technologies for these different tiers of storage has been getting a lot of attention lately.
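To make the hot/warm/cold idea concrete, here is a minimal sketch of a tier chooser driven by how recently data was accessed. The tier names, idle-time thresholds and latency notes are illustrative assumptions, not a description of any particular product’s policy engine.

```python
# Hypothetical tier selection by access age. The tiers, thresholds and
# latency notes are illustrative assumptions only.
from datetime import datetime, timedelta

# (tier name, maximum days since last access, typical retrieval latency)
TIERS = [
    ("flash / primary disk", 7,    "milliseconds"),
    ("object storage",       180,  "seconds"),
    ("LTFS tape / cold",     None, "minutes"),
]

def choose_tier(last_access):
    """Pick a storage tier from how long ago the data was last accessed."""
    idle_days = (datetime.utcnow() - last_access).days
    for name, max_idle_days, _latency in TIERS:
        if max_idle_days is None or idle_days <= max_idle_days:
            return name
    return TIERS[-1][0]

print(choose_tier(datetime.utcnow() - timedelta(days=2)))    # flash / primary disk
print(choose_tier(datetime.utcnow() - timedelta(days=60)))   # object storage
print(choose_tier(datetime.utcnow() - timedelta(days=900)))  # LTFS tape / cold
```

The “warm” middle row is the newly widened zone the post describes: data that no longer justifies primary disk but still needs to come back in seconds, not hours.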

PetaScale Storage Gets Real: Are You Ready?

The creation and acquisition of massive amounts of content have become easier than ever. With the introduction of new digital acquisition technologies (from video cameras to sensors) and increasingly sophisticated data analysis tools, the way we handle and save our data is changing. The true value of information evolves over time. For example, combining real-time data with historical data can reveal unexpected results, and old video footage can be digitized and compiled from archives to capture a moment in time that once seemed insignificant. For businesses that rely on data to identify trends or repurpose content for monetization, there is a need to keep all of this data forever.

Why Application Data Movers May Have Been Slow to the Object Storage Party

After publishing my blog yesterday on the need for application support of object storage to break the logjam in adoption, it occurred to me that some of you may be asking the question: “Janae, if object storage really is so cool, and the gap in object storage adoption is data mover application providers writing to this new technology, why haven’t these developers quickly moved to fill this gap?”
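For readers wondering what “writing to this new technology” actually involves for a data mover developer, here is a minimal sketch of storing and retrieving content through an S3-compatible object interface using boto3. The endpoint, credentials, bucket, key and file name are placeholders; the post itself does not prescribe any particular object API, and many object stores expose different but broadly similar interfaces.

```python
# Minimal sketch of application-level object storage I/O over an
# S3-compatible API (boto3). Endpoint, credentials, bucket, key and the
# local file name are placeholders for illustration only.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.com",  # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Instead of writing a file into a filesystem path, the application PUTs an
# object into a flat namespace addressed by bucket + key.
with open("clip0001.mxf", "rb") as media_file:
    s3.put_object(Bucket="media-archive",
                  Key="projects/ep01/clip0001.mxf",
                  Body=media_file)

# Retrieval is likewise a GET by key, not a filesystem open().
obj = s3.get_object(Bucket="media-archive", Key="projects/ep01/clip0001.mxf")
data = obj["Body"].read()
```

The shift from open/read/write/close semantics to PUT/GET by key is exactly the rewrite work the question above is asking data mover vendors to take on.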
