Every year CRN – the top industry outlet for all things channel – highlights its leading voices with its annual Channel Chiefs issue. CRN poses a number of questions to each luminary, and the result is an interesting mosaic that offers insights into the state of the industry.
Branded video content is all the rage – and for good reason. Already accounting for over half of consumer internet traffic, video will be over 80% of the internet as we know it by 2019. Study after study has confirmed that video is undoubtedly the most powerful medium to connect with customers, employees, and investors (a.k.a. everyone who’s anyone to a business). What’s more, as the Association of National Advertisers report on the rise of the in-house agency observed, companies are increasingly bringing creative work, including video production, in-house. Some of the most progressive brands, like Red Bull and Marriott, run full-blown corporate studios. This trend is expected to continue, as marketing leaders cite several factors for the shift, most notably speed of project turnaround and a desire to own the engagement and conversion data surrounding branded video content. So, in essence: content is king, video content is everything, and the leading brands that get this are taking video content management in-house.
I love working with customers that we’ve watched grow and evolve. One of my favorite success stories is Matt Silverman, who founded motion design studio Swordfish in San Francisco with one Mac Mini and some direct attached storage. These days, Swordfish has an impressive client list that includes Sony, Microsoft, and Apple, and their growing team works on projects where 4K footage and large 3D renders are commonplace. Data is the core of Swordfish’s business – as with so many other companies, lost data means lost business. As a motion design studio, however, Swordfish has different storage needs than your traditional post house. They needed a robust, redundant network backbone that was compatible with many professional graphics and video software applications.
The demands of media delivery have evolved drastically in recent years, with more and more organizations producing ever-larger volumes of video content for a range of uses. As that media landscape evolves, so too must the enabling tools, such as MAM systems. Naturally, with increasing numbers of media files being created, effective storage solutions are becoming increasingly important. Quantum StorNext provides great flexibility and performance for media workflows. By integrating Cantemo Portal with StorNext, our users can automate the migration of content, based on policies, to more cost-effective long-term storage within Quantum.
Whether it’s adding search and edit capabilities for captions and subtitles, enhancing chat and messaging modules, or publishing directly to Facebook and Twitter, Quantum partner Dalet is continuously working to streamline the content creation process with their media asset management (MAM) solutions. Quantum StorNext is tightly integrated with Dalet MAM solutions to further streamline workflows by seamlessly and automatically moving assets between disk, tape, object storage and cloud resources. Today, we’re sharing what’s new with Dalet and what they’ll be showing at IBC.
Another year, another new format – or ten! Broadcasters are now surrounded by a sea of formats: everything from HD-SDI and streaming formats to 4K/UHD and, last but by no means least, the many variants of IP-based transport mechanisms like J2K or SMPTE 2022. It’s an increasing challenge for the industry to handle the mixture of all these sources, especially when distributing content to the many output channels a broadcaster needs to address on a daily basis. Adding custom graphics and branding to the video forces us to keep multiple versions of the same clip, eating up storage space and increasing the need for video management. Meanwhile, broadcasters are often still stuck with specialised devices for singular tasks – video servers for video, graphics servers, audio systems and vision mixers, to name just a few. This isn’t anything new, but a solution is urgently needed as new formats arrive constantly. As we gear up for IBC 2015, let’s take a look at these basic concepts, consider how an efficient broadcaster should ideally operate, and dive into the creation of a new video, graphics and audio workflow centered on Viz Engine as a powerful video playback system.
It’s no secret that the stakes are high in sports broadcast. As Quantum’s Skip Levens said, there are “no second takes, millions of highly discriminating and knowledgeable customers scrutinizing your every move, and every play has the potential to make history.” There’s a lot of money to be made, but the competition between networks can be as fierce as anything on the road, field, court or diamond. So it’s no surprise that sports production pushes the envelope when it comes to adopting new technology. We’re only halfway through 2015 and we’ve seen some amazing leaps forward this year, in five key areas: Higher Definition Content, Camera Ubiquity, Real-time Data Analysis, In-Stadium Screens, and Live In-Home Experiences.
Video editing has always placed higher demands on storage than any other file-based application, and with today’s higher-resolution formats, streaming video content demands even more performance from storage systems, with 4K raw requiring 1210 MB/sec per stream – 7.3 times more throughput than raw HD. In the early days of non-linear editing, this level of performance could only be achieved with direct attached storage (DAS). As technology progressed, we were able to add shared collaboration even with many HD streams. Unfortunately, with the extreme demands of 4K and beyond, many workflows are resorting to DAS again, despite its drawbacks. With DAS, sharing large media files between editors and moving content through the workflow means copying the files across the network or on removable media such as individual USB and Thunderbolt-attached hard drives. That’s not only expensive because it duplicates the storage capacity required; it also diminishes user productivity and can break version control protocols. In this blog, we’ll look at the key differences between the major storage technologies, as well as general usage recommendations.
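Per-stream throughput figures like those above come straight from the uncompressed-video arithmetic: width × height × bytes per pixel × frames per second. Here is a back-of-envelope sketch; the frame geometry, bit depths, and frame rates are illustrative assumptions on my part (exact published figures vary with active-picture dimensions, chroma sampling, and frame rate, which is why this sketch lands near, but not exactly on, the 1210 MB/sec and 7.3× numbers quoted above).

```python
# Back-of-envelope uncompressed video bandwidth: width x height x bytes/pixel x fps.
# The specific resolutions, bit depths, and frame rates below are assumptions
# for illustration, not authoritative broadcast specifications.

def stream_mb_per_sec(width, height, bytes_per_pixel, fps):
    """Required throughput in MB/s (1 MB = 10**6 bytes) for one uncompressed stream."""
    return width * height * bytes_per_pixel * fps / 1e6

# 4K raw, assuming 4096x2160 at 12-bit RGB (4.5 bytes/pixel), 30 fps
four_k = stream_mb_per_sec(4096, 2160, 4.5, 30)   # roughly 1194 MB/s

# HD raw, assuming 1920x1080 at 10-bit 4:2:2 (2.5 bytes/pixel), 30 fps
hd = stream_mb_per_sec(1920, 1080, 2.5, 30)       # roughly 156 MB/s

print(f"4K: {four_k:.0f} MB/s, HD: {hd:.0f} MB/s, ratio: {four_k / hd:.1f}x")
```

Under these particular assumptions the ratio works out to about 7.7×; with other common bit-depth and frame-rate combinations it lands closer to the 7.3× figure, but the order of magnitude – roughly a gigabyte per second per 4K stream – is the point.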
Media content consists of both essence (the content itself) and its associated metadata. Everybody acknowledges that metadata is important for classifying and locating content, so media companies tend to put a lot of thought into collecting and managing it — what type of information will be collected, where it will be entered and how often, etc. The idea is to ensure consistent, thorough metadata collection so that users can find and remonetize specific pieces of content. Metadata-gathering is a critical part of the metadata management process, to be sure, but it’s only half the process. What people tend to ignore is the other piece of metadata management — ensuring that the metadata is secure and archived. Why do they ignore it? Because media companies tend to focus so much on securing the actual content that they put little if any thought into securing the associated metadata, which is often stored in a database separate from the content itself. Let’s look at best practices for protecting your metadata, starting with the most important one: while you’re backing up your content, make sure you’re also backing up and archiving your metadata database.
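The core practice — snapshotting the metadata database and archiving it together with the content it describes, so the two can be restored as a unit — can be sketched as follows. This is a minimal illustration using SQLite and a tarball; a production MAM database (MySQL, Oracle, etc.) would use its own dump tooling, and the function and file names here are my own hypothetical choices.

```python
# Minimal sketch: bundle content files with a consistent snapshot of the
# metadata database, so content and metadata are backed up (and restored)
# together. SQLite stands in for a real MAM database purely for illustration.
import os
import sqlite3
import tarfile

def archive_with_metadata(content_dir, db_path, archive_path):
    """Snapshot the metadata DB and archive it alongside the content directory."""
    snapshot = db_path + ".snapshot"
    src = sqlite3.connect(db_path)
    dst = sqlite3.connect(snapshot)
    src.backup(dst)          # online backup API: consistent copy even if DB is in use
    dst.close()
    src.close()
    with tarfile.open(archive_path, "w:gz") as tar:
        tar.add(content_dir, arcname="content")       # the essence
        tar.add(snapshot, arcname="metadata.db")      # the metadata, side by side
    os.remove(snapshot)
```

The detail that matters is the pairing: because the database snapshot travels inside the same archive as the content, a restore can never hand you essence without the metadata needed to find and monetize it.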
The first time I edited any media, I did it with a razor and some sticky tape. It wasn’t a complicated edit – I was stitching together audio recordings of two movements of a Mozart piano concerto. It also wasn’t that long ago, and I confess that on every subsequent occasion I’ve used a DAW (Digital Audio Workstation). I’m guessing that there aren’t many (or possibly any) readers of this blog who remember splicing video tape together (that died off with helical-scan), but there are probably a fair few who have, in the past, performed a linear edit with two or more tape machines and a switcher. Today, however, most media operations (even down to media consumption) are non-linear; this presents some interesting challenges when storing, and possibly more importantly, recalling media. To understand why this is so challenging, we first need to think about the elements of the media itself and then the way in which these elements are accessed.
If you’ve worked in storage for decades as I have, you’ve heard all the debates about which storage works best for each step in media workflows. But one thing that’s clear is that not every step has the same storage requirements, and that some kind of tiered storage strategy is needed. With ever-expanding digital asset libraries, storing it all on high-performance disk isn’t practical or cost-effective. Traditional tiered storage is straightforward: store the most active, most recently used data on the fastest, most expensive disk storage, and store the less active, older data on slower, less expensive storage – generally tape or lower-cost disk arrays. Hierarchical storage management (HSM) software was built to automate data migration between tiers and make file system access transparent regardless of where the data is stored. When the primary storage filled to a capacity watermark – for example, 90% of capacity – the HSM system would find the files that were least recently used and move them to the secondary tape tier until the disk storage had sufficient available capacity. This model of tiered storage was built for business data, where the focus was containing costs. Disk storage was expensive, tape was cheap, and older business data was rarely relevant except for an occasional audit. The criterion was simply performance vs. cost. But media workflows don’t manage business data. Here are the three biggest considerations for developing a new approach to workflow storage.
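The classic watermark-driven HSM policy described above can be sketched in a few lines: once primary usage crosses a high watermark, evict least-recently-used files until usage drops back below a lower watermark. The tier representation and watermark values here are illustrative assumptions, not any particular HSM product’s implementation.

```python
# Sketch of the classic HSM watermark policy described above: when the primary
# tier crosses its high watermark, select least-recently-used files to migrate
# to the secondary tier until usage falls below a low watermark.
# The tuple-based file model and default watermarks are illustrative only.

def select_lru_for_migration(primary_files, used_bytes, capacity_bytes,
                             high_watermark=0.90, low_watermark=0.80):
    """Return paths to migrate, least recently accessed first.

    primary_files: list of (path, size_bytes, last_access_timestamp) tuples.
    """
    if used_bytes / capacity_bytes < high_watermark:
        return []                           # below the trigger point: nothing to do
    target = low_watermark * capacity_bytes
    to_move = []
    # Oldest access time first, mirroring the least-recently-used rule.
    for path, size, _atime in sorted(primary_files, key=lambda f: f[2]):
        if used_bytes <= target:
            break
        to_move.append(path)
        used_bytes -= size
    return to_move
```

The cost-driven simplicity is visible in the code: the only inputs are capacity and access time. Nothing in it knows whether a file is a mezzanine master due for remonetization next week – which is exactly why, as the rest of this post argues, media workflows need richer criteria than performance vs. cost.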
Video production is entering yet another major transition – the move to 4K. Much like the move to high definition (HD) several years ago, the new ultra-high definition (UHD) 4K-resolution formats have the potential to disrupt workflows, strain existing infrastructure and require costly unplanned upgrades. Those who remember how bumpy the change from SD to HD was are understandably nervous about what this looming 4K transition will bring. With lessons learned from the past, the industry is ready to make the change from HD to 4K. The technology has evolved, the tools have evolved and workflows have evolved. The challenge, however, is to make sense of all this change and put the right pieces together to enable a successful transition. The following five key tips will help you to make a smooth transition to full 4K production.
Remember those heady days of standing up your first SAN? In those days SANs were small, and likely built with 2Gb Fibre Channel and 250GB hard drives. We found a way to make those small SANs work because we were likely ingesting from camera tape systems – and writing finished project files back to tape as well. It was chaotic – but it worked – and we evolved ever more elaborate file and folder structures to keep track of projects, customers and assets, along with a growing shelf of tapes that we hoped were cataloged and tracked correctly. As simple file-based workflows gave way to the modern, content-centric workflow model, several key lessons emerged. Here are the biggest lessons and how to understand them so you can "evolve beyond the ad hoc SAN."