Category Archives: Workflow

A framework for evaluating asset management needs

A robust media asset management (MAM) strategy is universally understood as mission critical, yet it is often neglected in today’s small and medium-sized media enterprises. The implementation challenges can be daunting, but they become manageable with a phased approach that meets the most immediate needs first while building a foundation for the end game. I recommend these three phases as a framework for MAM adoption.

  1. Protect the assets you need today and in the short term to meet your deliverables deadlines.
  2. Preserve media and metadata with an understanding of how you will use these assets in the future.
  3. Optimize your asset management workflow to streamline complex, non-creative processes. If it doesn’t need thinking, automate it.

Protect: Always have a backup plan

Your media asset management strategy should be in gear before the first camera card is removed from the camera and handed to the DIT. Scores of documents and images are created during pre-production and production. Relative to the media files and proxies used in post, these are small files, amounting to only a few gigabytes. Back everything up. Storage is cheap; doing it again isn’t.

A common misconception is that having critical files in Dropbox, Box, Google Drive, or iCloud negates the need for a dedicated backup. The primary job of a sync service is to provide the latest version of a file to multiple devices. In a collaborative environment it’s important that everyone work with the latest version of a file, but at some point work on that file is complete and it must be protected from further modification. Leaving it in the collaboration space is a risky proposition: that file is only one careless mouse click away from ruin.

Each of the aforementioned services except iCloud features version control, but for version control to be useful the document owner must know to revert to a previous version before the version history expires, and has to figure out which version to restore. Often the discovery that a file has been damaged occurs after the 30-day window these services provide for version control. So, just back it up already.

Cloud backups make the most sense (and are fully buzzword compliant). Since the backup resides offsite, it is immune to floods, fire, and physical theft. Cloud backup services such as CrashPlan, iDrive, and Carbonite have been around for some time, and they share near feature parity. All of them work and are reasonably priced. My personal preference is CrashPlan. It is the only vendor to offer a free option (provided you supply the remote storage and computer). The remote host can reside on the LAN or WAN and is simple to set up, or you can just pay an annual fee to CrashPlan to back up to their cloud. I have about a terabyte of personal photographs on CrashPlan and mirror what I back up to the cloud to an external drive on a local Mac. If my main drive becomes corrupted, I have a backup a few steps away. If the worst happens and I lose my photos to flood or fire, CrashPlan will ship me a USB drive with my files (for a fee), so I don’t have to wait for my files to download and I keep the original directory structure.

Don’t stop using Dropbox and the like; implement a backup plan to complement them. The moment any asset reaches a state that should be preserved, move or copy it out of Dropbox into a backed-up directory.
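
As a minimal sketch of that hand-off, the script below copies finished files out of a local Dropbox folder into an archive directory that a backup agent (CrashPlan or similar) watches, verifying each copy before trusting it. The folder paths and the assumption that finished files live in a “finals” subfolder are illustrative, not prescriptive.

```python
import hashlib
import shutil
from pathlib import Path

# Hypothetical locations; point these at your own sync folder and backed-up archive.
DROPBOX_DONE = Path.home() / "Dropbox" / "ProjectX" / "finals"
ARCHIVE_DIR = Path("/Volumes/Archive/ProjectX/finals")  # watched by the backup agent

def sha256(path: Path) -> str:
    """Checksum a file so the copy can be verified before we trust it."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def protect_finished_assets() -> None:
    ARCHIVE_DIR.mkdir(parents=True, exist_ok=True)
    for src in DROPBOX_DONE.iterdir():
        if not src.is_file():
            continue
        dst = ARCHIVE_DIR / src.name
        if dst.exists():
            continue  # already protected
        shutil.copy2(src, dst)  # copy2 preserves timestamps
        if sha256(src) != sha256(dst):
            dst.unlink()
            raise IOError(f"Checksum mismatch copying {src.name}; copy discarded")
        print(f"Protected {src.name}")

if __name__ == "__main__":
    protect_finished_assets()
```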

Asset preservation

Post production is another story. There are the native media files from the camera, the proxy editing-format files, rendered effects, and the final distribution masters. Backing up everything takes a lot of storage. Take the example of a season of reality television: the example below illustrates how much storage space is required to preserve a season’s worth of footage. For those with standard business bandwidth, cloud storage is not an option; it would simply take too long to get everything up to and back down from the cloud. I’ve made the spreadsheet available here. (For those interested in learning what going all-4K will cost in storage, change the bit rate to something around 600 Mbps.)
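
For a rough, back-of-the-envelope version of that spreadsheet, the arithmetic looks like this. The episode count, shooting ratio, and camera bit rate below are illustrative assumptions, not figures from the spreadsheet; plug in your own numbers.

```python
# Back-of-the-envelope storage estimate for a season of reality television.
EPISODES = 13              # finished hours in the season
SHOOTING_RATIO = 400       # hours shot per finished hour (assumed)
CAMERA_BITRATE_MBPS = 50   # native camera bit rate; try ~600 for 4K acquisition

raw_hours = EPISODES * SHOOTING_RATIO
# Mbps -> TB/hour: megabits/s * 3600 s / 8 bits-per-byte / 1,000,000 MB-per-TB
tb_per_hour = CAMERA_BITRATE_MBPS * 3600 / 8 / 1_000_000
print(f"{raw_hours} raw hours -> {raw_hours * tb_per_hour:.0f} TB of native media")
# -> 5200 raw hours -> 117 TB of native media, the same order as the
#    ~100 TB figure used in the hardware comparison below
```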

The workflow is simple.

  1. Bring in the native media on card or USB disk.
  2. Convert it to the edit proxy format.
  3. Park a copy of the native media on a NAS or LTO.
  4. Edit with the proxy material. (Don’t forget to back up your project files daily until you are ready to proceed to a full media asset management solution.)
  5. Bring back the native and convert it to the master format as needed.
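
As a minimal sketch of steps 2 and 3, assuming ffmpeg is installed and using DNxHD 36 as the proxy flavor, here is one way to script the ingest. The paths, codec settings, and .mxf extension are placeholders to adapt to your own cameras, NLE, and storage.

```python
import shutil
import subprocess
from pathlib import Path

CARD = Path("/Volumes/CARD_001")           # native media straight off the camera card
PROXY_DIR = Path("/Volumes/Edit/proxies")  # low-bit-rate copies for the editors
PARK_DIR = Path("/Volumes/NAS/native")     # nearline parking for the native media

def make_proxy(src: Path, dst_stem: Path) -> None:
    """Transcode one clip to an edit-friendly proxy (DNxHD 36, 1080p, as an example)."""
    subprocess.run([
        "ffmpeg", "-i", str(src),
        "-c:v", "dnxhd", "-b:v", "36M", "-s", "1920x1080",
        "-r", "30000/1001", "-pix_fmt", "yuv422p",
        "-c:a", "pcm_s16le",
        str(dst_stem.with_suffix(".mov")),
    ], check=True)

def ingest() -> None:
    PROXY_DIR.mkdir(parents=True, exist_ok=True)
    PARK_DIR.mkdir(parents=True, exist_ok=True)
    for clip in CARD.rglob("*.mxf"):
        make_proxy(clip, PROXY_DIR / clip.stem)   # step 2: proxy for editing
        shutil.copy2(clip, PARK_DIR / clip.name)  # step 3: park the native copy

if __name__ == "__main__":
    ingest()
```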

As with nearly everything else in business, the most crucial factor to consider is price vs. performance. That tradeoff comes into play mostly in steps 3 and 5. The cost per terabyte of LTO 7 is about half that of spinning disk (~$40/TB in early 2016), but the time required to pull material off LTO vs. spinning disk can be significantly higher. Below is what to consider in your nearline/parking decision:

  • The initial hardware outlay. Using the example above for a 13-hour season of reality television, about 100 TB will be needed. A 25-slot LTO library that will allow the whole season to be loaded will run just under $40,000. An inexpensive 100 TB NAS will cost around $16,000.
  • Performance. LTO 7 has a maximum throughput of 300 MB per second. Tape has come a long way, but the NAS is up to 10x faster.
  • Some inexpensive LTO systems do not support partial restore of media files. That means it doesn’t matter whether ten seconds or ten minutes of the original clip is needed: the whole clip gets restored. That can eat up storage space and take serious time. Some cost-effective LTO solutions, such as StorageDNA, handle partial restores to the editing environment quite well.
  • Options proliferate for proxy creation and parking. My personal favorite is Root6’s ContentAgent. It works with every NLE, and fits standalone setups, shared storage and project environments, and full asset management systems. Other reliable solutions include Telestream Vantage and MOG mxfSPEEDRAIL.
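
To make the price-vs-performance tradeoff concrete, here is a tiny comparison built from the figures above. The 5 TB restore size is hypothetical, and LTO 7 is assumed to sustain its rated speed, which real libraries rarely do.

```python
# LTO-vs-NAS comparison using the estimates cited above.
LTO_COST, LTO_MBPS = 40_000, 300    # 25-slot LTO 7 library; rated MB/s
NAS_COST, NAS_MBPS = 16_000, 3_000  # 100 TB NAS; "up to 10x faster"

RESTORE_TB = 5  # hypothetical: pulling back one episode's selects
for name, cost, speed in [("LTO 7", LTO_COST, LTO_MBPS), ("NAS", NAS_COST, NAS_MBPS)]:
    hours = RESTORE_TB * 1_000_000 / speed / 3600
    print(f"{name}: ${cost:,} up front, {hours:.1f} h to restore {RESTORE_TB} TB")
# -> LTO 7: $40,000 up front, 4.6 h to restore 5 TB
# -> NAS:   $16,000 up front, 0.5 h to restore 5 TB
# LTO's edge comes later: expanding capacity costs about half as much per TB.
```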

Both ContentAgent and Vantage have robust orchestration capabilities that in many facilities delay the need for a full-blown asset management solution.

Optimize: The case for full asset management

A backup and preservation plan assists the facility’s current workflows, but does very little to streamline them beyond the orchestration of backup and media transcoding. In five years as product manager for the most successful media asset management systems in the history of the planet, I saw a lot of asset management systems sold. I also saw a lot of unhappy customers. Did our products do everything we claimed? Unequivocally, I say yes. Were all our customers satisfied? Most were, but many weren’t. Every asset management system promises to streamline workflows, connect disparate systems, and cut production time. Profits will increase. For the most part, I found that asset management worked best in fast-turnaround broadcast or broadcast-like environments such as recorded studio productions. It was not nearly as successful in traditional post, both scripted and unscripted. Going into the asset management selection and deployment process with clear goals and expectations will greatly increase the odds of success.

Ask yourself the following questions. The answers will help you determine what you need from an asset management system.

Do I own my content?

This is the most important question in the process. If you don’t own the content, there’s a good chance you don’t need an archive. Without the ability to repurpose the content for your own needs, the only reason to maintain an archive is as a client service. Odds are the content owner already has an archive solution for redistribution and repurposing. You likely won’t need to store finished masters. Perhaps you’ll be asked to store the production assets for future re-edits or for use next season. Tread carefully. Many clients think they want that option, but are unwilling to pay enough to make it a worthy investment for you. At 400:1 shooting ratios, storing production assets is far more costly than maintaining an archive of masters: a 13-hour season means over 5,000 hours of raw footage versus 13 hours of finished masters. Even at the lower shooting ratios of news, most operations only save the raw footage from the most important events.

It comes down to this: if you own the content and are looking to repurpose it for future monetization, you’re going to be most interested in media asset management, an enterprise system that connects your library to your OTT, VOD, web CMS, and other enterprise systems. If you don’t own the content, you’ll be exploring production asset management (PAM) solutions. A PAM’s role is limited to the production of the work-for-hire piece, so you are tracking much more granular metadata with less need for complex rights management and hooks into the enterprise systems.

Does my staff consist of full-time employees or freelance contractors?

This is the number one contributing factor when asset management doesn’t deliver on expectations in post. Post workflows are most often executed by freelancers. They work for you for six to twelve weeks, and then they’re gone. No matter how intuitive vendors have made asset management systems, they are different from the standard Media Composer / ISIS (Unity) workflows that permeate greater Los Angeles. Change is tough. People resist. And training people is neither easy nor cheap. The more invisible the system is to editors and assistants, the higher the likelihood of success. Solicit input from editors, assistant editors, and producers.

Where is my workflow bottlenecked?

Look for manual, serial processes that can be automated and parallelized. It’s not just the raw time saved by a performance boost that benefits the workflow; the lower error rate will save additional time and valuable online storage space. Typical areas in need of optimization include transcode and ingest, logging, and review and approval. Each of these problems can be alleviated without a full asset management solution. Sometimes one or two point products targeted at the issue will be a better value than a complete system. We’ve already discussed transcode. Other point solutions, such as Media Silo and Aframe for logging and review and approval, might fill the gap and save tens of thousands of dollars doing so.
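
As one illustration of “automate and parallelize,” here is a sketch that fans a transcode queue out across a few workers instead of running clips one at a time. The ffmpeg invocation and watch folder are placeholders for whatever serial process is actually clogging your pipeline.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def transcode(clip: Path) -> Path:
    """Placeholder single-clip job; swap in your real transcode or upload command."""
    out = clip.with_suffix(".proxy.mov")
    subprocess.run(["ffmpeg", "-y", "-i", str(clip), str(out)], check=True)
    return out

clips = sorted(Path("/Volumes/ingest").glob("*.mxf"))
# Four clips in flight at once instead of one; ffmpeg does the heavy lifting in
# separate processes, so worker threads are enough to keep the queue moving.
with ThreadPoolExecutor(max_workers=4) as pool:
    for done in pool.map(transcode, clips):
        print(f"finished {done.name}")
```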

Do our needs fluctuate from week to week?

Asset management systems are typically deployed to solve specific workflow challenges. The traditional transcode and ingest server is not easily repurposed as a logging station when shooting is complete and everyone is ramping up for editing. An asset management system whose components can be virtualized, however, affords the flexibility to put more compute horsepower behind the services that need it, when they need it, without complex installs. Even a facility looking for an entry-level asset management system should only consider solutions that can be deployed in a VM.

Final Words

After a long evaluation of your workflow and a look at the various vendor offerings, you have decided you need a digital asset management system. I leave you with these final words of advice.

  • Keep it small. All modern MAMs are modular. You can always add capabilities later. Don’t overwhelm your team with too much change all at once. Settle on one or two pain points to ease in the first phase.
  • Minimize customizations. They are expensive to build and always take longer than expected. The cost is not just in the professional services, though those hourly rates add up quickly. Customizations are costlier to support and add complexity to staff training. If the out-of-the-box solution is almost what you want, it’s probably all you need.
  • Have a bake-off. Insist on seeing your current workflow demonstrated live, every aspect of it. Allow a vendor to show an alternative approach, but make sure they fully understand what you need to get done at every phase of production.
  • Go over the statement of work thoroughly with the vendor prior to your first payment. Insist on a single point of contact on the vendor team, and only allow a single point of contact on your team to sign off on change orders.
  • Don’t skimp on training. This should go without saying, but it always needs saying. MAMs are damn expensive; even “small” systems can run into the six figures. It’s very tempting to scale back on training, especially in an industry with such high turnover, but it is crucial that good habits are instilled from Day One. Also, have one person on staff trained extensively enough to train others and write the production bible for the facility.

Implemented properly, a MAM will increase productivity by eliminating mistakes and increasing the number of people able to access and work with the content in real time. If you have questions or comments send them my way, and I’ll do my best to answer them promptly.

The exobrain

Back in 2009, Scott Adams of Dilbert fame described his concept of the exobrain in a blog post. He argued that his smartphone was an extension of his brain, used for offloading data and outsourcing simple mental tasks. When Dilbert speaks, the world listens, and the post is often cited. In later posts he expanded the concept into organizational learning: the organization’s culture is a data store. Pairing organizational culture with data storage and retrieval is nothing new. It’s called knowledge management.

As technology advances and culture evolves, the idea of the exobrain as a physical device becomes outdated. Though my laptop, tablet, and smartphone have some specialized capabilities that give them individual exobrain duties, their duties converge more often than they diverge. I can communicate via text, voice, and video with all three. All three can search the Internet. And all three can store text and rich media. Most interestingly, and most importantly in the case of a scatterbrain like me, no one of those devices represents a single point of failure. I can get through most days without any one of them, and many days without any two.

The same cannot be said of the services these devices access. Going a day without Gmail, Skype, or Dropbox is not so simple. The crucial data I’ve either uploaded to these services or chosen to download only as needed must be available 24/7. I can’t get anywhere without GPS. I wouldn’t even know where to go without access to my calendar. The implications of this shift from dependence on device to dependence on service are pretty astounding. Big players get this. Google is building a network of services that devices must access to be viable.

He who owns the platform wins. As device-dependent as Apple’s business model is, it has invested heavily to make sure the services its devices need, such as iTunes and MobileMe, are available to make those devices useful.

The ideal services are those like Gmail and Dropbox that are device agnostic. They become indispensable to the user very quickly. All data is available on all devices and is always current. The second tier of services are those like Evernote. It’s a great piece of note-taking software with web tools and local apps for all the major device OSes, but the functionality of the apps varies from device to device. I can take notes on my PC, formatting them to be easily read, but should I access that note from my iPad, I have to sacrifice rich text formatting – forever. Livescribe is a more distant second. It uses different data models on the Mac and PC, so simply syncing your notebooks on the cloud only works if you stay within the same OS family. The utility of an application or service diminishes exponentially as the number of data files the user must manage increases. Thus, Livescribe is flirting with irrelevance if it’s unable to solve this problem.

The Holy Grail, which the top-tier services are approaching, is to become a Rosetta Stone: the user needs the service as a bridge between workflows and devices. For example, I can read a Microsoft Word document on a Blackberry without any additional software purchases as long as I store it on Google Documents.

As software and services evolve in the media and entertainment space, the tools that act as a platform – accepting all formats from all devices and making their data available for viewing on the widest variety of devices – will win. The standalone editor seat will become a museum piece. Frictionless collaboration is where we are heading.

Fuze Movie announced

Years ago, when we were launching Xprove, we met Michael Buday, who was working on a very impressive synchronous online video review and approval system. SyncVue might have been a little ahead of its time, but in its latest incarnation as Fuze Movie it might gain traction. Here’s the PR announcement from my friend Kevin Bourke, with links.

Avid posts RED workflow paper

Credit my colleague Michael Phillips for authoring this RED workflow paper. Avid’s RED support continues to evolve, so stay tuned for further announcements.

Mastering codecs revisited

A lot of folks were excited by the release of the Windows ProRes 422 decoders for QuickTime last week. The decoders solve a workflow issue for editors needing to get ProRes material into third-party Windows applications, but they don’t allow for roundtripping.

The lack of a Windows encoder is only part of the problem with ProRes. Its lack of alpha channel support remains a dealbreaker for many editors and motion graphics pros.

Looking for a reasonable-bandwidth, all-I-frame mastering codec? DNxHD remains the best cross-platform mastering solution.
