It’s no secret that the stakes are high in sports broadcasting. As Quantum’s Skip Levens said, there are “no second takes, millions of highly discriminating and knowledgeable customers scrutinizing your every move, and every play has the potential to make history.” There’s a lot of money to be made, but the competition between networks can be as fierce as anything on the road, field, court or diamond. So it’s no surprise that sports production pushes the envelope when it comes to adopting new technology. We’re only halfway through 2015, and we’ve already seen some amazing leaps forward this year in five key areas: Higher Definition Content, Camera Ubiquity, Real-time Data Analysis, In-Stadium Screens, and Live In-Home Experiences.
Today, everyone seems to understand the ever-growing importance of data protection, often viewing it as a superset of backup combined with snapshots and replication. Typically, a conversation about data protection includes the assumption of a “gold standard” centered on using secondary disk for rapid recovery and tertiary tape for long-term retention. Of course, “the cloud” is also always a consideration as part of the next generation of the solution. It’s still all under the banner of “data protection” (DP): the collection of activities, methods, and media used to help recover or restore business information after a crisis or other IT disruption. According to research, primary storage is growing around 40% annually, with secondary storage used for data protection growing at similar rates. Budgets aren’t growing nearly that fast. Meanwhile, IT organizations are being asked to do more (i.e., inject more agility, functionality, and resiliency into their operations) while spending as little budget money as possible. In actuality, data protection budgets are growing around 4.6% annually according to ESG research, but that level of increase won’t even let you keep doing what you have been doing at a larger scale. Therefore, you have to do something different. What you should do: ARCHIVE!
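To see how quickly those two growth rates diverge, here is a minimal back-of-the-envelope sketch (the 40% and 4.6% figures come from the ESG research cited above; the three-year horizon is an illustrative assumption):

```python
def compound(base, rate, years):
    """Value after compounding `rate` annual growth for `years` years."""
    return base * (1 + rate) ** years

# Assumed horizon of 3 years, using the growth rates cited in the post.
years = 3
storage_growth = compound(1.0, 0.40, years)   # data you must protect
budget_growth = compound(1.0, 0.046, years)   # budget you have to do it

print(f"After {years} years: {storage_growth:.2f}x the data, "
      f"but only {budget_growth:.2f}x the budget")
# At those rates, the cost per protected TB would need to fall by roughly
# 1 - (1.046 / 1.40), about 25%, every single year just to break even.
```

Three years in, you are protecting almost triple the data on a budget that has grown by about a seventh, which is the gap archiving is meant to close.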
Video editing has always placed higher demands on storage than any other file-based application, and with today’s higher resolution formats, streaming video content demands even more performance from storage systems, with 4K raw requiring 1210 MB/sec per stream—7.3 times more throughput than raw HD. In the early days of non-linear editing, this level of performance could only be achieved with direct attached storage (DAS). As technology progressed, we were able to add shared collaboration even with many HD streams. Unfortunately, with the extreme demands of 4K and beyond, many workflows are resorting to DAS again, despite its drawbacks. With DAS, sharing large media files between editors and moving the content through the workflow means copying the files across the network or on reusable media such as individual USB and Thunderbolt-attached hard drives. That’s not only expensive because it duplicates the storage capacity required; it also diminishes user productivity and can break version control protocols. In this blog, we'll look at the key differences between major storage technologies, as well as general usage recommendations.
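Uncompressed bandwidth figures like these follow directly from frame size, bit depth, and frame rate. As a minimal sanity-check sketch (the post doesn't state the exact format behind its 1210 MB/sec figure; 16-bit RGB at 4K DCI resolution and 24 fps is one assumed configuration that lands in that range):

```python
def stream_bandwidth(width, height, bytes_per_pixel, fps):
    """Bandwidth of one uncompressed video stream, in bytes per second."""
    return width * height * bytes_per_pixel * fps

# Assumed format: 4K DCI (4096x2160), 16-bit RGB = 6 bytes/pixel, 24 fps.
bps = stream_bandwidth(4096, 2160, 6, 24)
print(f"{bps / 2**20:.0f} MB/sec per stream")
```

That works out to roughly 1215 MB/sec for a single stream, and a multi-editor facility needs that per concurrent stream, which is why DAS keeps reappearing in 4K workflows.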
Media content consists of both essence (the content itself) and its associated metadata. Everybody acknowledges that the metadata is important to classifying and locating content, so media companies tend to put a lot of thought into collecting and managing metadata — what type of information will be collected, where it will be entered and how often, etc. The idea is to ensure consistent, thorough metadata collection so that users can find and remonetize specific pieces of content. Metadata-gathering is a critical part of the metadata management process, to be sure, but it’s only half the process. What people tend to ignore is the other piece of metadata management — ensuring that the metadata is secure and archived. Why do they ignore it? Because media companies tend to focus so much on securing the actual content that they put little if any thought into securing the associated metadata, which is often stored in a database separate from the content itself. Let's look at the key best practice for protecting your metadata: ensuring that, while you’re backing up your content, you’re also backing up and archiving your metadata database.
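As a minimal sketch of that practice, suppose the metadata catalog lives in a SQLite database (a hypothetical setup — real media asset managers often use other databases, but the principle is the same): the same job that archives the content files should also snapshot the catalog. Python's built-in `sqlite3` backup API can take a consistent copy even while the catalog is in use:

```python
import sqlite3

def backup_metadata_db(live_path, backup_path):
    """Copy a live SQLite metadata catalog to a backup file.

    sqlite3's Connection.backup() takes a consistent snapshot of the
    source database, so the catalog can stay online during the backup.
    """
    src = sqlite3.connect(live_path)
    dst = sqlite3.connect(backup_path)
    with dst:
        src.backup(dst)
    src.close()
    dst.close()

# Hypothetical usage, run alongside the content archive job:
# backup_metadata_db("mam_catalog.db", "/archive/mam_catalog_snapshot.db")
```

The point is not the specific database engine; it's that the metadata snapshot is scheduled and retained on the same cadence as the content backup, so a restored asset never comes back without the information needed to find it.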
If you were building a police department from the ground up, where would you begin? Where do you go to stock up on holsters, handcuffs, badges, flashlights, guns, dispatch centers, in-car computers, police cars and the myriad other gear required by modern law enforcement? A good place to start is the Police Security Expo, held this year in Atlantic City. It’s like a superstore for police. And Quantum was there, because police departments increasingly need to include storage on their shopping list. This was my chance to check out the latest on-body cameras that have been in the news so much lately. Think of something about the size of a GoPro, but even more heavy-duty, and sophisticated enough to actually begin recording 10 seconds before you press Record. The vendors selling these cameras typically had a good crowd of officers being educated on what it’s like to live with them on a daily basis, and the question of storage always came up. It’s a good thing, because law enforcement agencies are routinely generating over 1PB of data a year.
The first time I edited any media, I did it with a razor and some sticky tape. It wasn’t a complicated edit – I was stitching together audio recordings of two movements of a Mozart piano concerto. It also wasn’t that long ago, and I confess that on every subsequent occasion I used a DAW (Digital Audio Workstation). I’m guessing that there aren’t many (or possibly any) readers of this blog who remember splicing video tape together (that died off with helical scan), but there are probably a fair few who have, in the past, performed a linear edit with two or more tape machines and a switcher. Today, however, most media operations (even down to media consumption) are non-linear; this presents some interesting challenges when storing, and possibly more importantly, recalling media. To understand why this is so challenging, we first need to think about the elements of the media itself and then the way in which these elements are accessed.