Image courtesy of Vine FX

Q&A session between Vine FX and Marcos Fajardo discussing their work on Merlin

[N.B. See our previous article on Merlin.]


Vine It seems that Arnold is, in a sense, "lazy" about light. That is, as in the real world, we only see light because there is something for it to bounce off, such as objects, the ground or dust. In space, which is empty, everything looks dark even when lots of light is streaming from the stars. If Arnold behaves the same way, the result depends entirely on the quality of the surfaces the artist creates in their scene. How does this differ from the way other renderers work? In other words, how does Arnold use textures and surfaces more effectively?

Marcos Fajardo That's right. Most renderers nowadays can compute bounce lighting in one way or another. Arnold just does it in a simpler, more "brute force" way, by solving a Monte Carlo approximation to the radiative transfer equations that govern light transport. In practice this means tracing more rays: the more rays traced, the lower the noise in the solution and, obviously, the longer the image takes to render. The beauty of this is that it solves the problem for the user without tricks. It just works; you only have to worry about the render time. And that's again what makes Arnold special: we have optimized the raytracing machinery under the hood so that those rays are much faster to compute, while using much less memory, than in other renderers. In addition, we have implemented many Monte Carlo variance reduction techniques, which means fewer rays are needed to achieve a given level of noise (i.e. faster convergence).
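
The ray-count-versus-noise tradeoff Fajardo describes can be sketched with a toy Monte Carlo estimator. This is our own illustration in Python, not Arnold's code: it estimates the light arriving at a surface point from a uniform sky of radiance 1, where the exact answer is known to be pi. Statistically, quadrupling the ray count roughly halves the noise.

```python
import math
import random

def estimate_irradiance(n_rays, seed=0):
    """Monte Carlo estimate of irradiance from a uniform sky of radiance 1.

    The exact answer is pi (the integral of cos(theta) over the hemisphere).
    Rays are drawn uniformly over the hemisphere, so the pdf is 1 / (2*pi).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_rays):
        # For a uniform hemisphere direction, cos(theta) is uniform in [0, 1].
        cos_theta = rng.random()
        total += cos_theta * 2.0 * math.pi  # radiance * cos(theta) / pdf
    return total / n_rays

exact = math.pi
for n in (16, 256, 4096):
    est = estimate_irradiance(n)
    print(n, "rays -> estimate", round(est, 4), "error", round(abs(est - exact), 4))
```

There is no cached data and only one quality control, the ray count, which is exactly the "brute force" simplicity being described.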

Vine Would you say that an artist can achieve more by first perfecting their surfaces in modelling and texturing software and then letting Arnold handle the light at render time, rather than waiting to refine the image in compositing?

Marcos Fajardo Yes. But Arnold still allows you to do crazy things in compositing if that's what you really want. You can do it either way.

Vine Could you explain the differences between relying on bounced lighting and using the earlier ambient and diffuse occlusion techniques mentioned in this release? How does the bounced lighting model hold up for a complex surface like the translucent skin in Vine’s work?

Marcos Fajardo The ambient occlusion technique is a sort of poor man's global illumination: it basically gives you contact shadows and darker corners, but it doesn't compute bounce lighting at all. It's a shading trick that still requires the user to know how and when to apply it, and it's easy to get wrong; quite often people overdial the intensity or contrast of the ambient occlusion and it just looks horrible. There's absolutely no reason to use ambient occlusion in a renderer that does the right thing, other than legacy and people being used to it. It takes years for people to shake off old habits.
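
For readers unfamiliar with the technique, ambient occlusion boils down to a pure visibility test, as this toy Python sketch (our illustration, not Arnold's code) shows: it measures what fraction of the hemisphere above a point is unblocked, and computes no bounce lighting at all.

```python
import math
import random

def ambient_occlusion(occluded, n_rays=1024, seed=0):
    """Fraction of the hemisphere above a surface point that is NOT blocked.

    `occluded` is a visibility test: it takes a direction (x, y, z) with
    z >= 0 and returns True if a short ray that way hits nearby geometry.
    Note this is only a shading trick: no bounced light is gathered.
    """
    rng = random.Random(seed)
    visible = 0
    for _ in range(n_rays):
        # Uniform direction on the upper hemisphere (z is "up").
        z = rng.random()
        phi = 2.0 * math.pi * rng.random()
        r = math.sqrt(max(0.0, 1.0 - z * z))
        d = (r * math.cos(phi), r * math.sin(phi), z)
        if not occluded(d):
            visible += 1
    return visible / n_rays

# A point on a floor right next to a tall wall: every direction leaning
# into the wall (x < 0) is blocked, so about half the hemisphere is open.
ao = ambient_occlusion(lambda d: d[0] < 0.0)
print(ao)  # close to 0.5
```

The result is just a grey multiplier; how strongly it darkens the image is the very knob that, as Fajardo says, people routinely overdial.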

Vine Why doesn’t Arnold have to pre-calculate secondary data - point clouds, shadow maps and irradiance caches - before starting a render? Does it not need this information at all, or are the calculations made at a different point?

Marcos Fajardo As mentioned above, Arnold solves the radiative transfer equation (aka the "rendering equation" or "radiance equation") directly, without approximations, on the fly. Older techniques required a preprocessing pass to store intermediate shading information (in point clouds, caches or textures) simply because they didn't have a fast raytracer, so their proponents claimed these preprocessing techniques were 10x faster than raytracing (instead of biting the bullet and optimizing their raytracers). Yes, preprocessing can still be faster in raw render time, but keeping track of that intermediate information is so painful that you end up with a horrible mess of files, out-of-sync caches and tons of boring parameters controlling the speed-quality-size tradeoff of those caches. Why go through all that pain when there's a much easier way that gives you exact results with only one control (the number of rays)? As in many other areas of engineering, "the simpler system wins".
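
Solving the rendering equation "on the fly" just means evaluating each light path as it is traced, with nothing stored between frames. Here is a deliberately simplified toy model (our illustration, not Arnold's algorithm): each ray either escapes to a sky of radiance 1 or bounces off a surface and keeps going, and the Monte Carlo average matches the analytic answer with no point clouds or caches anywhere.

```python
import random

def trace(rng, p_escape=0.5, albedo=0.8):
    """Evaluate one random light path on the fly, with no cached data.

    Toy model: at each bounce the ray escapes to a sky of radiance 1 with
    probability p_escape; otherwise it hits a surface that reflects a
    fraction `albedo` of whatever light the rest of the path gathers.
    The analytic expectation is p / (1 - (1 - p) * a).
    """
    throughput = 1.0
    while True:
        if rng.random() < p_escape:
            return throughput  # the ray reached the sky
        throughput *= albedo   # bounced: attenuate and keep tracing
        # Cut off extremely deep paths (negligible bias in this toy).
        if throughput < 1e-3:
            return 0.0

def render(n_rays=100000, seed=0):
    rng = random.Random(seed)
    return sum(trace(rng) for _ in range(n_rays)) / n_rays

print(render())  # close to 0.5 / (1 - 0.5 * 0.8) = 0.8333...
```

The one speed/quality control is `n_rays`, mirroring the "only one control" point above.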

Vine Where does the time-saving factor mainly come from?

Marcos Fajardo Because there are no precomputations, you get faster iterations. If you move a light, there's no need to recompute shadow maps, point clouds or caches, which could take minutes (or, in huge productions, hours). You just move the light, start tracing rays immediately, and get a rough preview of the image in almost real time. The artist is liberated. And as mentioned above, there are fewer controls, so it's more natural. Lighters suddenly do the job they are supposed to do: use their artistic skills to light a scene, as opposed to tweaking lots of knobs just to get something out of the renderer, keeping an eye on all the temporary files generated on disk, and generally not knowing what the hell is going on unless you have a PhD in computer science.
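
This workflow can be illustrated with a running average (a hypothetical sketch in Python, not Arnold's code): the preview is simply an accumulator over traced rays, so "moving a light" means resetting the accumulator and tracing again immediately, not rebuilding maps or caches on disk.

```python
import random

class ProgressiveEstimate:
    """Running Monte Carlo average: the preview sharpens as rays accumulate."""
    def __init__(self):
        self.total = 0.0
        self.count = 0
    def add(self, sample):
        self.total += sample
        self.count += 1
    def value(self):
        return self.total / self.count if self.count else 0.0

rng = random.Random(1)

est = ProgressiveEstimate()
for _ in range(10000):
    est.add(rng.random())  # stand-in for one traced ray (true mean 0.5)
preview_before = est.value()

# "Moving a light" invalidates the image, but nothing needs rebuilding:
# reset the accumulator and keep tracing rays under the new lighting.
est = ProgressiveEstimate()
for _ in range(10000):
    est.add(0.5 + 0.5 * rng.random())  # new lighting, true mean 0.75
preview_after = est.value()

print(round(preview_before, 2), "->", round(preview_after, 2))
```

The first few samples already give a rough (noisy) preview; waiting longer only reduces noise, which is the "almost real time" feedback loop described above.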

Vine Regarding pipeline integration, what kind of collaboration has Solid Angle had with software companies to ensure format compatibility? I read a comment by one user who said that in some ways Arnold acts as an API that programs can access for rendering models and animations created earlier in production. Would you say that this is true? Could you explain a little further, please?

Marcos Fajardo We support common formats out of the box like OpenEXR, Alembic, etc. In addition, Arnold has a powerful C++/Python SDK (or API) that allows pipeline programmers to extend Arnold or plug their own formats and tools into Arnold or vice-versa. They can write their own shaders for example, or their own display drivers.
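
The general pattern of such an SDK can be sketched as a registry of user-supplied callables. The names below are hypothetical and do not reflect Arnold's actual C++/Python API; consult Solid Angle's SDK documentation for the real interfaces.

```python
# Hypothetical sketch of the plugin pattern a renderer SDK exposes.
# These names are illustrative only, not Arnold's actual API.
from typing import Callable, Dict

class RendererAPI:
    """Registry mapping plugin names to user-supplied callables."""
    def __init__(self):
        self.shaders: Dict[str, Callable] = {}
        self.display_drivers: Dict[str, Callable] = {}

    def register_shader(self, name: str, fn: Callable) -> None:
        self.shaders[name] = fn

    def register_display_driver(self, name: str, fn: Callable) -> None:
        self.display_drivers[name] = fn

api = RendererAPI()

# A user shader: given shading inputs, return a colour.
api.register_shader("flat_red", lambda hit: (1.0, 0.0, 0.0))

# A display driver: receive finished pixels and send them somewhere
# (a file, a compositing package, an in-house review tool, ...).
api.register_display_driver("count_pixels", lambda pixels: len(pixels))

print(api.shaders["flat_red"]({"N": (0, 0, 1)}))
```

In this shape, the renderer calls back into studio code at well-defined points, which is what lets pipeline programmers plug their own formats and tools in, or plug Arnold into their tools.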