Interview with Gaël Honorez on Nozon's work on Minuscule: Valley of the Lost Ants
We are thankful to the multi-talented Gaël Honorez for talking to us about his use of Arnold on the feature film Minuscule: Valley of the Lost Ants.
In the beginning, Minuscule was a short film that was further developed by Futurikon for film and television. Broadcast on France Télévisions and in more than 100 countries, with more than 650,000 DVDs sold in France alone, the series of short episodes has become a favorite across cultures and for all age groups. After a long creative process, with real footage shot in national parks in beautiful Haute-Provence, Thomas Szabo and Hélène Giraud have directed an 89-minute feature film, an epic adventure movie full of surprises and unusual plot twists... a bit like Lord of the Rings in the insect world!
Can you tell us a bit about yourself and the history behind Nozon?
I'm the lead of the lighting/shading department at Nozon, where I also do R&D. Nozon is a Belgian company specializing in visual effects and 3D animation. It was founded in 1998, and for 10 years we worked almost exclusively on high-end commercials, both local and foreign. Around 2010 we expanded our business into fiction, working on feature-film special effects, an animated series, and the animated feature film Minuscule. We are now working on another animated feature film, Astérix: Le domaine des Dieux, which will also be rendered in Arnold, through Katana. We have three sites: Brussels, Liège and Paris.
What was the size of the team that worked on Minuscule?
Nozon did the modeling, texturing, rigging, lookdev, crowd simulations, and all the special effects (water, smoke, fire, dust...), as well as all the rendering and compositing. In total 70 people worked on the project, with about 20-25 at the same time. Half of the keyframe animation was done by another French studio: 2d3d-animations.
Which modelling, animation and texturing packages were used?
ZBrush and Maya for modeling, Maya for animation, and Mari/Photoshop for texturing. Houdini and Softimage were used for particles and volumetric effects, and Massive for the crowd simulations.
Why did you use Arnold for this project?
Before Arnold, we relied on two render engines: Maxwell for its realistic results and a RenderMan-compliant engine (AIR) for its versatility. Arnold, being by far the best of both worlds, has been our only renderer for years now. While season two of the animated series was done with a RenderMan-compliant renderer, switching to Arnold for the movie was a no-brainer.
What was the biggest challenge that you had to overcome in this film?
Stereoscopy was the big one. Working at such small scales on live-action plates was a challenge for the tracking, and a lot of plates were re-spatialized in Nuke. Almost all of the movie uses live-action plates, but there are also full-CG sets (the doll house and the river bed). We had to be fully realistic on those to avoid any transition gap for the viewer, which was quite challenging for the river bed. Some Massive shots were also a challenge simulation-wise.
There are some really complex scenes, how did you approach rendering so many ants?
The Massive shots were rendered using Massive2Arnold, the geometry procedural DSO from Javier Gonzalez Gabriel. I would like to tell you a story of how challenging it was and how we overcame it, but Arnold chewed through the geometry like it was nothing. We never had to think about splitting things into layers or groups unless it was easier for compositing.
There was unique geometry for every ant on screen. The largest Massive simulation had 24,994,868 polygons per set, and 175,700,693 unique triangles across 16 sets. Comparing a scene with two sets, Arnold 4.1 (the latest used on the show) uses 1.88 GB for the polymesh data alone, and the newly released Arnold 4.2 uses 1.2 GB. All of the Massive shots were done in a single pass, except for the shot with 16 sets, which didn't fit into memory. We had to split that scene into several passes back then to fit the memory (16 GB of RAM on our render farm); given these new numbers, we could probably have rendered it in only one pass in 4.2!
The initial setup to render Massive in Arnold through Massive2Arnold needed some development (mainly processing RIB files to make them more convenient), but once in place it went smoothly. Javier was also very responsive in fixing bugs and implementing new features, like level of detail (LOD) based on the camera. We never had to use the low-definition level.
Did you have to write any custom shaders to achieve a particular effect?
We have a lot of custom shaders, but our main shading tool is the built-in Standard shader. Specifically for Minuscule, we did some development around stereoscopy, like outputting disparity maps. We also experimented with re-rendering only the occluded parts of the second eye, but as the render times were not that huge, it never went into production.
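As an illustration of what such a disparity map encodes, here is the textbook formula for a simple parallel stereo rig (a generic sketch, not Nozon's specific implementation): the horizontal disparity of a point follows from its camera-space depth, the interaxial distance and the focal length.

```python
def disparity_pixels(depth, interaxial, focal_length, sensor_width, image_width):
    """Horizontal disparity (in pixels) between the two eyes of a
    parallel stereo rig for a point at the given camera-space depth.
    focal_length and sensor_width share one unit (e.g. millimetres);
    depth and interaxial share another (e.g. scene centimetres)."""
    # Horizontal shift on the sensor plane, by similar triangles.
    shift = interaxial * focal_length / depth
    # Convert the sensor-plane shift to image pixels.
    return shift * image_width / sensor_width

# Example: 6.5 cm interaxial, 35 mm lens, 36 mm sensor, 1920 px wide frame,
# a point 100 cm away -> roughly 121 px of disparity.
print(disparity_pixels(100.0, 6.5, 35.0, 36.0, 1920))
```

Nearby points produce large disparities and points at infinity converge to zero, which is why macro-scale stereo like Minuscule's is so sensitive to the interaxial setting.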
On the other hand, we rely a lot on procedural DSOs for generating geometry, instances and particles at render time. The Arnold API is a wonderful sandbox. I went a little crazy on a firework shot, instantiating lights on a particle simulation through a procedural, ending up with about a hundred small point lights illuminating a Massive simulation.
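The idea behind that firework setup can be sketched outside a compiled DSO (a hypothetical illustration, not Nozon's actual procedural): generate an Arnold .ass fragment declaring one point_light per particle, fading intensity with particle age.

```python
# Minimal sketch: emit Arnold .ass point_light declarations from a
# particle list. Node and parameter names follow Arnold's built-in
# point_light; the particle format and fade rule are assumptions.
def particles_to_ass(particles, base_intensity=1.0):
    """particles: list of (x, y, z, age) tuples, age normalized to [0, 1]."""
    chunks = []
    for i, (x, y, z, age) in enumerate(particles):
        intensity = base_intensity * max(0.0, 1.0 - age)  # fade out over life
        chunks.append(
            "point_light\n"
            "{\n"
            f" name spark_light_{i}\n"
            f" position {x} {y} {z}\n"
            f" intensity {intensity}\n"
            " color 1.0 0.6 0.2\n"
            "}\n"
        )
    return "\n".join(chunks)

if __name__ == "__main__":
    sparks = [(0.0, 1.0, 0.0, 0.1), (0.5, 1.2, -0.3, 0.8)]
    print(particles_to_ass(sparks))
```

In production this generation would happen inside the procedural at render time, so the lights never have to exist in the Maya scene.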
Volume rendering was implemented in Arnold during the production, so we incorporated OpenVDB into the pipeline for Minuscule. We are quite happy that every new feature in Arnold is always immediately production-ready.
Can you talk a bit about the shading for the ants and their displacement detail?
We already had all the characters from the TV show, but they had been built for a RenderMan-compliant renderer. The first step was to redo all the shaders while preserving the same look, this time in a physically correct renderer.
Displacement was done inside ZBrush by our amazing look-dev department. As the models were locked from the TV series, they had to be very subtle when enhancing them. They worked very closely with the directors to match their vision.
Which bug was the most difficult to shade?
The ladybug was probably the most challenging bug, because of the translucent wings and some specific specular highlights on the elytra that we had to keep under any lighting condition. But the most complicated character was probably the fish.
Did you use a lot of sub-surface scattering?
It was an artistic choice by the art director not to deviate too much from the simpler look of the first season of the show, so the ants don't have much sub-surface scattering (SSS). But all the props and new characters (the fish, the frog and the lizard) have SSS, e.g. the sugar pieces, composed of thousands of instanced sugar crystals. The leaves, cherries, berries... all of them have SSS, of course.
How important is 3D motion blur in a project like this, and how practical is it in Arnold?
It's not just important, it's essential to us. We use 3D motion blur only, on every project (and we love the recent Arnold enhancements in this area).
We use a minimum of three motion steps, or any other odd number, to avoid losing animation detail. With two or four steps, you don't even export the frame the animator sees in their viewport, and that can be a huge problem with a fast-moving object like a bug: if a bug bounces off an object within a single frame, you never see it touching the surface.
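A quick sketch of why odd step counts matter, assuming a shutter centered on the frame (sample times are frame-relative, with 0.0 being the pose the animator sees):

```python
def motion_step_times(num_steps, shutter_open=-0.5, shutter_close=0.5):
    """Evenly spaced motion sample times across the shutter interval."""
    if num_steps < 2:
        return [0.0]
    span = shutter_close - shutter_open
    return [shutter_open + span * i / (num_steps - 1) for i in range(num_steps)]

# Odd counts sample the exact frame; even counts straddle it.
print(motion_step_times(3))  # [-0.5, 0.0, 0.5]
print(motion_step_times(2))  # [-0.5, 0.5] -- frame 0.0 is never exported
```

With an even number of steps the keyed pose at 0.0 falls between two samples, so a contact that exists only on that frame is interpolated away.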
Independently of Minuscule, bringing motion-blur data correctly into Arnold was a challenge that I'm happy to have solved, though not because of Arnold itself. It was quite hard getting caches coming from different applications (Maya, Shave and a Haircut, Yeti, Naiad...), as well as velocity vectors from particle simulations, to behave the same way, and realistically. We export the motion data across a whole frame in every case, using the camera settings exclusively. Not only is it more logical (a sequence doesn't have any animation gap), but it suits Arnold's camera shutter controls quite well.
Depending on the shot, the AA samples with motion blur vary from 4 up to 10, or even 12 for very fast-moving objects. When we have to increase the AA samples, we have tools to re-compute all the light samples accordingly so as not to affect render times.
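The idea behind such a tool can be sketched as follows (a simplified illustration, not Nozon's actual tool): in Arnold, a light's shadow rays per pixel scale roughly as (AA samples × light samples)², so keeping that product constant keeps the per-light cost stable when AA goes up.

```python
def compensate_light_samples(light_samples, old_aa, new_aa):
    """Arnold fires about (AA * light_samples)^2 shadow rays per pixel
    for a light, so hold AA * light_samples constant when changing AA."""
    return max(1, round(light_samples * old_aa / new_aa))

# Raising AA from 4 to 8: halve the per-light samples to hold render cost.
print(compensate_light_samples(4, 4, 8))  # 2
```

In practice the tool would walk every light in the scene and rewrite its samples attribute before submitting the job to the farm.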
Within MtoA, did you use Arnold's IPR for interactive shading and lighting?
I personally use the Arnold IPR all the time, even to render an image without tweaking it, as the progressive rendering lets me spot any problem almost instantly. Some of our artists still have post-traumatic stress from years of Maya Software Render/mental ray IPR and are more cautious, but they always end up using it a lot after realizing how stable it is with MtoA/Arnold.
How did the MtoA plugin improve since the beginning of this project and how much custom code did you add to it?
I think the project started while MtoA was still in alpha, so the improvements were enormous over the course of the production. A lot of bugs were squashed during the production of the movie, but no CG bugs were harmed in the process. Generally speaking, updating Arnold or MtoA was never a big problem for us. Volumetric support and ray-traced SSS were also introduced during the production, and we used both immediately. Most of the smoke/fire effects are done using OpenVDB or Maya Fluids inside Arnold.
Almost all the code I wrote for MtoA was during the alpha period, and those patches made it into the official version, so I am not sure it still qualifies as custom code. Currently, other than custom translators, we still have some custom additions. One of the latest is bending the rays of a camera to make any shader think they come from another camera, for stereoscopic rendering. But most of the current changes are in fact removing options I don't really like as a lighting TD, like changing light decay, or having a light affect specular/diffuse/SSS/GI differently. We are very pedantic about our lighting modus operandi; I find these options usually create more problems than they solve. [ed: see this light linking blog post from Gaël]
How big is your render farm and what were your render times?
Not very big in the end. Minuscule was mainly rendered at two sites (Nozon Paris and Nozon Brussels). Brussels handled the heaviest shots, on a render farm of 20 blades (12 AMD cores @ 2.8 GHz each). Paris did more shots, but over a longer run on more or less the same render power.
Render times depended on scene complexity: the simplest scenes are wide shots of small insects, and the largest are full-CG with thousands of insects in them. According to the render manager database, times vary from a minute up to two hours, averaging 16 minutes, including both eyes.
What version of Arnold did you use for this film? Have you tried the latest 4.2 release?
We used several versions of Arnold during the production of Minuscule, the latest being the one that introduced volume rendering, around Arnold 4.1. For MtoA, I'm really unable to say, as we are using custom builds.
We have been using Arnold 4.2 in production for two or three months now. Without doing anything, the average speed-up is at least 20%. When back-lighting is used heavily, it simply cuts render times in half. The new thread scheduler works flawlessly; we have 99% CPU occupancy on the render farm across every job. And our look-dev artists are quite happy to see the displacement process going much faster too. This is probably the most amazing release since the introduction of ray-traced sub-surface scattering.