One of the worst things to sample for any brute-force ray tracer is specular highlights / reflections with a low roughness in motion blur. It gets even worse on fine displacement or bump maps, and worse still when there are lots of small highlights. When all of these things come together, sampling these highlights in motion blur becomes really hard, and with conventional methods you end up having to rely on extremely high AA samples. Even then the highlight streaks will most likely still be dotty… And you won’t make any friends if the Comp team has to paint these streaks smooth :) So during the crunch time of a recent project I was brainstorming with some of my colleagues about how this could potentially be fixed without needing too many samples, and I’ve been working on implementing that idea, which seems to work quite nicely.
Judging from some messages I received recently, there seems to be an ongoing interest in using Linux at home on the desktop. Some people might want to try it out because they are searching for an alternative to Windows or OS X, or just because they want to try something new and explore the possibilities of Linux. But there are a few things to consider before getting into the world of Linux, and most people are a bit overwhelmed about where to start, so I will try to give some tips that helped me personally in setting up a working system, alongside some examples of why Linux is my personal favorite.
A few months back Digital Tutors/Pluralsight picked me up to release a training series with them. It’s entitled Intermediate to Advanced AOV Manipulation Techniques in NUKE. In it I show some tips and tricks for working as efficiently as possible with multipass renders. It starts with a brief introduction to AOVs for newcomers, just so that everyone is on the same page, but quickly ramps up into more advanced topics.
After the introduction I prepared a small project to integrate a CG car into a live-action environment. Tweaking it to make it look good in the shot and an in-depth rundown of why I do what I do are just a few of the things I will be discussing.
Also check out some before/after screenshots of the CG slapcomp vs the final composite to get a rough idea:
Iridescence on surfaces is an interesting effect and can be a challenge to get looking right. It occurs, for example, on some animal skins as well as on surfaces covered in oil under certain conditions. Because it is a very specific look, it can often require lots of iterations until the client is happy with the result, so quick turnarounds are necessary. Luckily it is also often quite a subtle effect, so it’s a good candidate to try and look-develop in 2D instead of constantly re-rendering your CG.
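To get a feel for why iridescent colors shift with angle and surface variation, here is a tiny thin-film interference sketch. This is purely my own illustration, not anything from the post or a production shader: it ignores Fresnel terms and phase shifts at the interfaces and just models two-beam interference at three representative wavelengths.

```python
import math

def thin_film_rgb(thickness_nm, cos_theta, n_film=1.4):
    """Very simplified two-beam thin-film interference: returns an RGB
    triple in [0, 1] for one film thickness and one viewing angle.
    cos_theta is (approximately) the cosine of the refraction angle
    inside the film. Illustrative sketch only -- real shaders also
    account for Fresnel reflectance and interface phase shifts."""
    wavelengths = (650.0, 532.0, 450.0)  # rough R, G, B wavelengths in nm
    rgb = []
    for lam in wavelengths:
        # optical path difference between the two reflected beams
        opd = 2.0 * n_film * thickness_nm * cos_theta
        phase = 2.0 * math.pi * opd / lam
        rgb.append(0.5 * (1.0 + math.cos(phase)))  # two-beam interference
    return tuple(rgb)
```

Because the phase depends on both thickness and angle, even small variations across a surface sweep through very different hues, which is exactly what makes the look so sensitive to tweak in 3D and tempting to dial in 2D instead.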
The new RIS mode introduced in RenderMan 19 is a completely new render engine that is very different from REYES. Being a brute-force path tracer (with uni- and bidirectional modes), it works much more like other renderers that follow a similar approach (e.g. Arnold). That approach aims to make the render process simpler and more interactive. And while I personally don’t fully like it yet, it seems to be getting widely adopted in the film industry, and we all have to adjust to it sooner or later :)
In the process of trying to streamline my day-to-day workflow a bit more, I have recently been working on a handful of small helper tools. One of them is a small set of scripts that handles the conversion of a selection of files to Arnold’s & RenderMan’s native .tx/.tex formats straight from the filebrowser… because people like the GUI, right? :)
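The core of such a conversion script can be sketched roughly like this (a hypothetical sketch, not the actual tool): Arnold ships `maketx` (from OpenImageIO) and RenderMan ships `txmake`, so the script mainly has to build the right command line per file and run it. Flags beyond the bare input/output arguments are assumptions.

```python
import subprocess
from pathlib import Path

def build_conversion_cmd(src, renderer="arnold"):
    """Return the command line that converts one texture to a tiled,
    mip-mapped format for the given renderer."""
    src = Path(src)
    if renderer == "arnold":
        # maketx writes a tiled, mip-mapped .tx; -o sets the output file
        dst = src.with_suffix(".tx")
        return ["maketx", "-o", str(dst), str(src)]
    elif renderer == "renderman":
        # txmake takes input then output
        dst = src.with_suffix(".tex")
        return ["txmake", str(src), str(dst)]
    raise ValueError(f"unknown renderer: {renderer}")

def convert_selection(paths, renderer="arnold"):
    """Convert every file in the selection, one process per file."""
    for p in paths:
        subprocess.run(build_conversion_cmd(p, renderer), check=False)
```

From there, hooking the script into the file browser’s context menu is what turns it into the GUI-friendly one-click tool.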
One of the great things about Katana is having a simple compositor right within your lighting/lookdev environment. I have been using its 2D features more or less extensively throughout the last couple of projects. It’s quite handy to tweak the HDR you’re using right before you plug it into an environment light, without having to go back into Nuke for smaller tweaks. Unfortunately, Katana’s 2D interface lacks quite a few things that would make it a valuable alternative to Nuke for preparation tasks: you have to wire a lot more things together yourself. So I have been working on a few ‘gizmos’ to make the process a bit less cumbersome.
Integrating CG into uneven soil is an interesting challenge. On a recent show we had to integrate CG characters into plates that often had a grassy ground. For Comp it’s beneficial to get masks from Rendering to aid the blending of the CG elements into the grass. If the ground the characters are walking on is flat, this can easily be achieved with a standard PWorld AOV. It gets a bit trickier, though, if the characters are walking up a hill or a similarly uneven surface, because the coordinates at which a character comes in contact with the ground are constantly changing. I’ve been trying to come up with a system to make a Comper’s life easier and found a relatively simple solution.
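For the flat-ground case mentioned above, the PWorld trick can be sketched like this (a hypothetical NumPy snippet, not the actual pipeline code): since the ground sits at a constant world height, the Y channel of the world-position AOV directly gives distance above the ground, which a soft threshold turns into a contact mask for blending feet into grass. The function name and falloff shape are my own assumptions.

```python
import numpy as np

def contact_mask(pworld, ground_y=0.0, falloff=5.0):
    """Soft mask that is 1.0 at ground level and fades to 0.0
    `falloff` units above it; pworld is an (H, W, 3) world-position AOV."""
    height = pworld[..., 1] - ground_y          # distance above the ground plane
    t = np.clip(height / falloff, 0.0, 1.0)     # normalize over the falloff range
    return (1.0 - t) ** 2                       # gentle ease-out toward zero
```

The uneven-ground problem is exactly that `ground_y` stops being a single constant per shot, which is what the system hinted at above has to work around.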