What is VFX?
VFX, or visual effects, are digitally created or manipulated effects added to footage to enhance the storyline or particular visual elements. Generally these effects are impossible to create on set and require hours of careful editing and compositing to be added to a scene. CGI (computer-generated imagery) is added in post-production, whereas SFX (special effects) are practical and typically take place either on set with actors or in miniature form, created by an SFX team.
Types of Visual Effects...
"I don't use film cameras. I don't do visual effects the same way. We don't use miniature models; it's all CG now, creating worlds in CG. It's a completely different toolset. But the rules of storytelling are the same." - James Cameron
Advertising
In advertising, VFX is used to attract the attention of passive viewers and tell a story in a short amount of time; it can also be used to enhance or exaggerate elements that help sell a product.
Film
VFX in feature film usually sticks to a consistent tone or style depending on the genre and storyline. Special effects are often used alongside it, both as a guide for actors and to bridge the gap between the raw footage and post-production effects.
Television
In television the VFX pipeline is significantly shorter, but the VFX must still live up to the quality viewers are used to seeing in big-budget films. Netflix's Stranger Things has a reported budget of £6-8 million per episode.
Gaming
Both console and PC games have evolved over time to push the limits of the hardware's processing power and graphics capabilities. Motion capture is often used to help developers and animators create realistic cutscenes and character behaviour.
VFX Techniques Used in Media
Modern motion capture is used frequently in games to incorporate full actor performances into cutscenes and gameplay. In this game, the main character Ellie explores post-apocalyptic America on horseback and on foot; motion capture footage accurately tracks the actor as they traverse obstacles and fight off infected characters. There are a few reasons this method works well for producing modern video games: animating every frame of a detailed conversation, minor body language and realistic movement by hand is incredibly time-consuming for the CG departments. Using detailed data capture software, motion capture technicians can record actor performances to be used later by CG staff, who rig and tidy up the movement before placing it in a virtual scene. It is important that accurate mathematics and physics are used when recreating environments in-game. Basic motion capture can also be used as part of a pre-vis for the CG department and artists to use for reference. The basic footage needs to be lit and placed into a scene, the captured performances need to be rigged and reskinned using the art department's designs, and the 3D art department works to bring the characters to life with far more detail.
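To illustrate the idea, here is a minimal, hypothetical sketch in Python of how captured joint rotations might be applied to a simple limb chain with forward kinematics. Real pipelines use full 3D skeletons and dedicated software; the joint names, angles and bone lengths below are invented for the example.

```python
import numpy as np

def rot_z(degrees):
    """2D rotation matrix, enough to demonstrate the idea in a single plane."""
    r = np.radians(degrees)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, -s],
                     [s,  c]])

def solve_frame(joint_angles, bone_lengths):
    """Forward kinematics: walk a simple hip -> knee -> ankle chain and
    return each joint's 2D position for one frame of captured data."""
    position = np.zeros(2)
    rotation = np.eye(2)
    positions = [position.copy()]
    for angle, length in zip(joint_angles, bone_lengths):
        rotation = rotation @ rot_z(angle)                  # accumulate parent rotations
        position = position + rotation @ np.array([0.0, -length])
        positions.append(position.copy())
    return positions

# One hypothetical frame of capture data: hip, knee and ankle angles in degrees.
print(solve_frame([20.0, 35.0, -10.0], bone_lengths=[0.45, 0.42, 0.15]))
```

In production the same principle is applied to a full 3D skeleton for every captured frame, which is exactly the kind of detail that would be so slow to animate by hand.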
Motion capture, and especially facial motion capture, is now used in huge motion pictures to give animated characters more detailed expressions that match the original actor's performance. In this clip Andy Serkis (Gollum in The Lord of the Rings trilogy, among others) explains how human expressions and emotions can be adapted and used across non-humanoid characters to enhance the power of the character. ILM has developed several ground-breaking facial capture techniques (Medusa, Anyma and Flux) that no longer need tracking dots and, in some cases, not even a head-mounted camera.
Compositing 2D and 3D Elements
In the clip above, DNEG explain the VFX breakdown for the critically acclaimed series Chernobyl. In this scene, power plant workers are forced to climb onto the roof of the plant to shovel pieces of highly radioactive graphite back into the exposed reactor core below. The whole scene had to be divided into six separate shots for tracking purposes; each time the camera looks down at the rubble on the roof, the scene is stitched together. Actors were filmed on a physical rooftop set in Lithuania surrounded by blue screens, which were later replaced with simulated views over Pripyat. These views needed to be tracked in and stabilized to look realistic; the view down into the exposed core was also blue-screened and built in 3D to be composited in at a later stage.
The actors were able to follow a rubble path through the carnage, and CG rocks and graphite pieces were added to fill the gaps and add more texture to the roof. During one section of the scene the camera also tracks into a virtual move to look up at a CG tower above the actors, then moves back into the real camera move seamlessly, so audiences get the impression the tower is really in the scene. To achieve the smoke effects, months were spent designing and researching smoke and particle simulations. The smoke needed to change at different stages of the explosion and aftermath, and had to seem realistic yet deadly enough to convince viewers. DNEG created around 550 shots for the series, including those using complex 3D building models based on the very real reactors on location (some 3D models covering around 12 km of the site). These models had to be accurately tracked into scenes filmed with actors and firefighters and blended in at a believable scale. They also needed to seem realistic from multiple camera angles and in different lighting conditions.
Once the VFX team had worked out where virtual cameras needed to be placed for exterior shots (usually 'dangerous' places like the top of the chimneys, or a helicopter looking down at the reactor core), they had to make sure the high-resolution 3D scenes looked just as realistic as the shots filmed in real locations. The VFX supervisor (Max Dennison) was careful, however, to base camera placement in reality, so he avoided modern floating virtual camera moves, instead placing cameras on rooftops or wherever a tripod could stand so the scenes would seem as realistic as possible. Due to the time period in which the series is set (1986), period-appropriate cars, helicopters and other background elements like crowds also had to be created and integrated. Aerial shots over the location in Lithuania were edited to remove satellite dishes and other modern elements to look more like 1986 Pripyat, and the scale of the location was also reduced to seem more accurate. Colour grading throughout ensures that the scenes all match up and adds a dark green/blue, cold, uncomfortable atmosphere.
Videos from (Chernobyl - DNEG, 2021)
Alternative Production Methods...
In The Mandalorian series for Disney+, the production team at ILM (Industrial Light & Magic) utilized the StageCraft production volume as a way of streamlining the VFX process for television. The StageCraft screen is a 20-foot-high, 270-degree semi-circular LED video wall and ceiling with a 75-foot-diameter performance space, allowing huge set pieces and props to have a place within the composition; these can blend with the background when lined up correctly and help actors and crew visualize what is taking place as they go.
Through Epic's Unreal Engine software, the screen tracks the camera's movements to correctly display the pixel-accurate scene, rendered in real time and viewable from a tablet or laptop device. Virtual locations are built from scratch digitally, using location inspiration and artist concept renders, before being displayed on the screen in real time with live depth and shadows. This greatly reduces VFX work in post-production and means that all the reflective silver ships, set pieces and armour correctly reflect the environment. This set-up means that actors do not need to be filmed in front of green screens and keyed out in post-production; compositors are able to work with much more accurate lighting, and concept, pre-visualisation, environment and lighting artists can adapt their vision in real time to suit the mise-en-scène. More power is in the hands of the production crew and the VFX supervisor, who can visualize how a full composition will look before it enters post-production. Similar stages are now under development for other studios.
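The core idea behind the tracked camera can be sketched with simple geometry: the wall must display each virtual object at the point where the line from the real, tracked camera to that object meets the screen, so parallax stays correct as the camera moves. Below is a minimal, hypothetical Python sketch that treats the wall as a flat plane; a real volume like StageCraft uses a curved wall and a full off-axis projection inside Unreal Engine, and all coordinates here are invented.

```python
import numpy as np

def project_onto_wall(camera_pos, scene_point, wall_origin, wall_normal):
    """Intersect the ray from the tracked camera through a virtual scene
    point with the LED wall plane; that is where the wall must draw the
    point for the in-camera perspective to hold up."""
    direction = scene_point - camera_pos
    denom = np.dot(wall_normal, direction)
    if abs(denom) < 1e-9:
        return None                              # ray is parallel to the wall
    t = np.dot(wall_normal, wall_origin - camera_pos) / denom
    return camera_pos + t * direction

# Hypothetical set-up: the wall is the plane y = 5 m, camera tracked near the origin.
camera   = np.array([0.0, 0.0, 1.8])             # tracked camera position (metres)
landmark = np.array([40.0, 300.0, 25.0])         # a distant virtual landmark
print(project_onto_wall(camera, landmark,
                        wall_origin=np.array([0.0, 5.0, 0.0]),
                        wall_normal=np.array([0.0, 1.0, 0.0])))
```

Because the wall point depends on the camera position, the background is re-rendered as the camera moves, which is what lets reflective props and armour pick up believable reflections in-camera.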
Compositing Techniques in VFX Software...
A compositor must understand how a scene is meant to be lit in order for it to appear realistic; the colours, composition/layout and camera perspective of the shot all contribute to this. Compositing software such as Nuke and Adobe After Effects, alongside 3D and FX tools like Houdini and Maya, is used to manipulate and combine footage to achieve incredibly convincing composites for anything from television to blockbuster movies. Depending on the way the data is captured in camera, different procedures and techniques are used to extract and manipulate it. Below is a screenshot from Nuke (13.v4) showing the workspace of a compositor in the VFX industry:
The empty black area at the top of the screen is the main viewer workspace; when a video file is plugged into a Viewer node it appears here to be viewed and scrubbed through. The node graph is directly below; here the compositor organizes various chains of nodes to form a composition. Typically a piece of footage is imported via a Read node and connected to the Viewer, and Transform nodes may be used to resize or rotate the footage as well as adjust basic motion blur. If the footage was filmed on a blue/green screen it will need to be extracted using a keyer node (such as ChromaKeyer), which removes a specific colour background, leaving the actor isolated. Other filters can be added through nodes, and assets like 3D models can be brought into the comp; all these components are connected via nodes which lead to a Write node and Viewer so the result can be exported as a full composition.
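As a rough illustration, the chain described above could also be built through Nuke's Python scripting rather than by hand in the node graph. The sketch below is only indicative: it assumes it is run from Nuke's Script Editor, the file paths are placeholders, and in practice a more capable keyer such as Keylight or Primatte would usually be preferred over the basic Keyer node.

```python
import nuke  # only available inside Nuke's own Python interpreter

plate = nuke.nodes.Read(file="/shots/sc010/greenscreen_plate.mov")   # import the footage
bg    = nuke.nodes.Read(file="/shots/sc010/cg_background.exr")

reframe = nuke.nodes.Transform(scale=1.1)      # resize/rotate the plate
reframe.setInput(0, plate)

key = nuke.nodes.Keyer()                       # pull a matte from the screen colour
key.setInput(0, reframe)

comp = nuke.nodes.Merge2(operation="over")     # layer the keyed actor over the CG background
comp.setInput(0, bg)                           # B input: background
comp.setInput(1, key)                          # A input: foreground

out = nuke.nodes.Write(file="/shots/sc010/comp_v001.####.exr")       # export the result
out.setInput(0, comp)
```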
Not all compositing software uses nodes; some use a layering technique and are frequently used for compositions with more 2D assets and animation. Adobe After Effects allows compositors to layer different clips of footage above one another, and masks are used to block out certain areas like a traditional painted matte. After Effects can camera track, chromakey and offers many of the same capabilities as Nuke, but is considered a slightly more approachable tool for new users.
Compositors are notified of the aspect ratio and colour space for the comp, which need to be set up prior to editing. Depending on how the footage was filmed (camera type, aspect ratio, shutter speed, etc.), the artist may need to alter the footage before or after bringing it into the program. A good compositor will be able to detect any issues with footage compatibility early on, and should also be able to match each piece of footage to a consistent colour space.
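As a simple example of what matching a colour space involves, the snippet below converts sRGB-encoded pixel values to scene-linear values using the standard sRGB transfer function; in a real pipeline this would normally be handled by OCIO or the compositing software's own colour management rather than by hand.

```python
import numpy as np

def srgb_to_linear(img):
    """Convert sRGB-encoded values in the 0-1 range to scene-linear values
    using the standard sRGB transfer function."""
    img = np.asarray(img, dtype=np.float64)
    return np.where(img <= 0.04045,
                    img / 12.92,
                    ((img + 0.055) / 1.055) ** 2.4)

print(srgb_to_linear([0.0, 0.5, 1.0]))   # sRGB mid-grey 0.5 is roughly 0.214 in linear
```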
Compositing in film with VFX assets (e.g. smoke, explosions, blood, 3D models, etc.) requires the compositor to grade each element with the correct lighting and colour levels so that it looks like it belongs in the scene alongside the other elements.
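A much-simplified sketch of that kind of grade is shown below: a lift/gamma/gain adjustment on an element's RGB values, loosely in the spirit of a Grade node. The element and the numbers are hypothetical and chosen only to show the shape of the operation.

```python
import numpy as np

def grade(rgb, lift=0.0, gain=1.0, gamma=1.0):
    """Simplified lift/gamma/gain: remap blacks and whites, then apply a
    gamma curve, so a CG element can be pushed towards the plate's levels."""
    rgb = np.asarray(rgb, dtype=np.float64)
    out = rgb * (gain - lift) + lift               # lift raises blacks, gain scales whites
    return np.clip(out, 0.0, None) ** (1.0 / gamma)

# Hypothetical: knock back an over-bright smoke element to sit into a darker plate.
print(grade([0.80, 0.75, 0.70], lift=0.02, gain=0.85, gamma=1.1))
```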
This is an example of a Nuke node graph used to 3D-track footage. A Read node allows the software to read a piece of footage and display it in the viewport above (provided it is connected to a Viewer node). Other nodes are linked to the footage to manipulate its properties; the footage is converted to a linear colour space so that the alterations behave correctly before it is rendered in the correct format for the client. Compositors may be working with green screen footage that needs to be keyed, or footage with alpha channels that need to be separated and edited. File formats like MOV, JPEG and OpenEXR are generally used in Nuke, as well as HDR footage with much greater colour depth and data. Nuke also supports plug-ins and scripts for specific tasks, which are useful for compositors.
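Because OpenEXR files can carry far more than plain RGB, compositors often inspect what a render actually contains before bringing it into a comp. Below is a small sketch using the OpenEXR Python bindings (assuming they are installed); the file path is a placeholder.

```python
import OpenEXR

exr = OpenEXR.InputFile("/shots/sc010/cg_background.exr")
header = exr.header()

# An EXR can hold RGBA plus extra channels such as depth, motion vectors or AOVs.
print("channels:", sorted(header["channels"].keys()))

# The data window gives the stored pixel resolution of the image.
dw = header["dataWindow"]
width  = dw.max.x - dw.min.x + 1
height = dw.max.y - dw.min.y + 1
print("resolution: {} x {}".format(width, height))
```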