Transcoding, exporting, and mixing down media can all be massive time sucks during post, which teams can ill afford when deadlines are tight. Beyond the hours lost, these bottlenecks also cost creativity, pulling editors, colorists, and other post professionals out of the creative mindset and flow. Enter distributed processing.
So, what is distributed processing and how can it transform your post-production workflow?
Distributed processing essentially refers to using several compute cores at the same time, which makes rendering dramatically faster than if it were performed on one lone computer. According to Chris Lawrence, CG supervisor on the seven-time Academy Award®-winning Gravity, it makes all the difference in the world: "To render Gravity on a single core machine with a single processor in it and be ready for 2013 [the year the film was released], you would need to start before the dawn of Egyptian civilization." Even in 2019, it took Pixar, which has one of the largest render farms in the world, anywhere between 60–160 hours to render a single frame of the highly detailed Toy Story 4, according to Insider.
These macro examples illustrate the power of distributed processing and why it is critical for post-production professionals working at any scale. It allows production houses to leverage all of their available computing resources to speed up their workflows and maximize their creative artists' most limited asset: their time.
But how does this all work, and how can you harness distributed processing even if you don't have access to a massive render farm?
What Is Distributed Processing?
Distributed processing takes a complex computing task and divides it among a network of individual machines (or nodes), which then complete their part of the task and send it back to be compiled into one seamless output.
Most of modern life relies on distributed processing, as anything that runs in the cloud is effectively using a network of servers to compute specific tasks. More colorful examples include blockchain farms, massively multiplayer online games (MMOs), and virtual reality communities, not to mention the render farms of visual effects and animated movies.
Most post-production software today can make use of distributed processing in some way, either through access to machines via a local shared network or through uploading jobs to render on a cloud-based rendering system and downloading the results. The ability to coordinate these processes as well as modify, track, and prioritize them provides even more benefits and control for any facility or content creation team.
How Does Distributed Processing Work?
Rather than sequentially running through a specific job, distributed processing breaks a task up into segments and distributes them through the network to be executed in parallel, saving a lot of time in the process.
Distributed processing essentially requires:
- A network of individual workers or nodes;
- Each with access to the same shared storage; and
- Organized and commanded by distribution management software.
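To make the idea concrete, here is a minimal Python sketch of the split-and-reassemble pattern described above. It uses a local process pool as a stand-in for a network of nodes, and `render_segment` is a hypothetical placeholder for whatever work each node would actually do (real distribution management software handles networking, storage, and failures on top of this):

```python
from concurrent.futures import ProcessPoolExecutor

def render_segment(frame_range):
    """Stand-in for a node's share of the work: process a
    contiguous slice of frames and return it for reassembly."""
    start, end = frame_range
    return [f"frame_{n:04d}" for n in range(start, end)]

def split_job(total_frames, num_workers):
    """Divide a job into contiguous segments, one per worker node."""
    step = -(-total_frames // num_workers)  # ceiling division
    return [(i, min(i + step, total_frames))
            for i in range(0, total_frames, step)]

if __name__ == "__main__":
    # Break the task into segments and execute them in parallel.
    segments = split_job(total_frames=100, num_workers=4)
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = pool.map(render_segment, segments)
    # Compile the segments, in order, into one seamless output.
    final = [frame for segment in results for frame in segment]
```

The key point is that the segments are independent, so four workers can each take a quarter of the frames, and the manager only has to stitch the ordered results back together at the end.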
Each node could be a dedicated "headless" machine, basically a computing core and/or GPU as in a render farm, or simply an idle computer somewhere in your post-production facility. Each node is assessed and configured so that its resources are matched to the task at hand. So, for example, a machine with a fast GPU would be best used for complex video encoding and rendering, while a less capable machine can still be put to good use with lighter tasks, such as compiling audio mixdowns or consolidating media.
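The capability-matching logic described above can be sketched in a few lines. This is an illustrative example only; the node names, task labels, and routing rule are all hypothetical, and real render managers use far richer scheduling heuristics:

```python
# Hypothetical node inventory: capabilities drive task assignment.
nodes = [
    {"name": "render-01", "has_gpu": True, "cores": 32},
    {"name": "edit-bay-3", "has_gpu": False, "cores": 8},
]

def pick_node(task, nodes):
    """Route GPU-heavy tasks to GPU nodes; send lighter tasks
    (audio mixdowns, media consolidation) to lesser machines,
    keeping the GPU nodes free for heavy work when possible."""
    gpu_heavy = task in ("video_encode", "effects_render")
    if gpu_heavy:
        candidates = [n for n in nodes if n["has_gpu"]]
    else:
        candidates = [n for n in nodes if not n["has_gpu"]] or nodes
    # Among the eligible nodes, prefer the one with the most cores.
    return max(candidates, key=lambda n: n["cores"])["name"]
```

With this inventory, a video encode lands on the GPU machine while an audio mixdown goes to the idle edit bay, which mirrors how a facility keeps every box earning its keep.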
One thing to be aware of is that every node will need access to all the same media and third-party plugins used by the originating machine on the project in order to contribute to the job.
How Distributed Processing Can Turbocharge Your Post-Production Workflow
There are several benefits to working with a distributed processing system in post-production, given that it can speed up your entire workflow at every point where time-intensive computing tasks are involved. This includes everything from ingesting and transcoding media into edit-friendly codecs, to rendering complex effects and plugins on the timeline, to encoding final exports, mixdowns, and project consolidation.
All of these "time-suck" tasks, which would otherwise slow down the creative process, can be completed dramatically faster through distributed processing. The result is that your creative talent's valuable time isn't wasted: they can focus on and stay in the creative flow of a project instead of being interrupted to manage these kinds of technical tasks.
Most often the details of these tasks—for example, the priority of the job in the queue or which available machines to make use of on the network—can be set up once and then simply selected per job and executed in the most optimal fashion.
A further benefit of distributed processing is that it can reduce e-waste in your post-production ecosystem, as older machines that might not be suitable for the latest and greatest projects can still be put to good use in a distributed processing network, rather than sitting, at best, redundantly on a shelf or, at worst, in a landfill.
Ultimately, the ability to leverage all of the available processing power of every machine in your facility, and to speed up every project that passes through it, will enable you to make the most of your available resources, saving both time and money.