I received a question the other day asking how I would approach painting out lights from LatLong HDRIs that are near the bottom or the top of the image. Anything near those poles is usually heavily distorted and a pain to remove if you try to paint on the original map. However, there is quite an easy way to remove them in Nuke and at the same time get the lights into a nice rectangular format for use on area lights.
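The underlying trick is a simple reprojection, roughly what a SphericalTransform node does in Nuke. As a hedged sketch (function name, field of view, and latlong sign conventions are all my own assumptions, not the post's actual setup), this NumPy snippet reprojects the top pole of a latlong image into a rectilinear "looking straight up" view, where pole lights flatten into easy-to-paint rectangles:

```python
# Hypothetical sketch: reproject the top pole of a latlong HDRI into a
# rectilinear view looking straight up (+Y), so pole lights become
# undistorted rectangles that are easy to paint out.
import numpy as np

def latlong_pole_to_rectilinear(img, out_size=512, fov_deg=90.0):
    """img: HxWx3 float latlong image. Returns a rectilinear view up the pole."""
    h, w, _ = img.shape
    half = np.tan(np.radians(fov_deg) / 2.0)
    # Pixel grid on an image plane one unit above the origin, camera looking up.
    u = np.linspace(-half, half, out_size)
    x, z = np.meshgrid(u, u)
    y = np.ones_like(x)
    norm = np.sqrt(x * x + y * y + z * z)
    dx, dy, dz = x / norm, y / norm, z / norm
    # Direction -> latlong UV (longitude across, latitude down).
    lon = np.arctan2(dx, dz)                 # -pi .. pi
    lat = np.arcsin(np.clip(dy, -1.0, 1.0))  # -pi/2 .. pi/2
    px = ((lon / (2.0 * np.pi) + 0.5) * (w - 1)).astype(int)
    py = ((0.5 - lat / np.pi) * (h - 1)).astype(int)
    return img[py, px]  # nearest-neighbour sample; filter properly in production
```

After painting in the flat view you would project back with the inverse transform and merge only the pole region over the original map.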
I have recently started getting into Shader Writing in OSL (or rather Pattern Writing, just so I don’t get frustrated too quickly :-) ). To start things off I wrote a simple pattern that lets you create a live 3D mask in your shading network, which can be used to alter different shading effects in very particular areas of an object without the need to paint texture masks. On top of that, being OSL, it should work in any renderer that supports it (e.g. PRMan/RIS, Cycles in Blender, V-Ray, etc.).
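To make the idea concrete, here is an illustrative Python port of what such a pattern might compute per shading point (the real thing would be OSL; the shape, parameter names, and falloff choice here are my own assumptions): a soft spherical mask around a point in object space, with a smoothstep falloff at the edge.

```python
# Illustrative sketch of a "live" 3D mask, evaluated per shading point P.
# A soft sphere around `center`: 1.0 inside, fading to 0.0 over `falloff`.
def smoothstep(edge0, edge1, x):
    """Hermite interpolation, clamped to [0, 1] (matches OSL's smoothstep)."""
    t = min(max((x - edge0) / (edge1 - edge0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def sphere_mask(P, center=(0.0, 0.0, 0.0), radius=1.0, falloff=0.25):
    """P: (x, y, z) shading position in object space. Returns mask in [0, 1]."""
    d = sum((p - c) ** 2 for p, c in zip(P, center)) ** 0.5
    return 1.0 - smoothstep(radius - falloff, radius, d)
```

Plugged into a shading network, the returned value would blend between two shading effects, exactly as a painted texture mask would, but without any UVs or painting involved.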
Earlier this year I was lucky enough to get my hands on a Ricoh Theta S, which I have been playing around with to quickly capture 360° environments. Initially I didn’t expect much, but I was more than pleasantly surprised by the quality I could achieve in such a short amount of time and realized its potential usability for image-based lighting. Shooting up to 10 bracketed images in less than 2 minutes plus automatic stitching is hard to beat at that price tag.
One of the worst things to sample for any brute-force ray tracer is specular highlights/reflections with a low roughness in motion blur. It gets even worse on fine displacements or bump, and worse still with lots of small highlights. When all of these things come together, sampling those highlights in motion blur becomes really hard, and with conventional methods you end up having to rely on extremely high AA samples; even then the highlight streaks will most likely still be dotty… And you won’t make any friends if someone has to paint those streaks smooth in comp :) So during the crunch time of a recent project I was brainstorming with some of my colleagues about how this could potentially be fixed without needing too many samples, and I have been working on implementing that idea, which seems to work quite nicely.
Judging from some messages I received recently, there seems to be an ongoing interest in using Linux at home on the desktop. Some people might want to try it out because they are searching for an alternative to Windows or OS X, or just because they want to try something new and explore the possibilities of Linux. But there are a few things to consider before getting into the world of Linux, and most people are a bit overwhelmed as to where exactly to start. So I will try to give some tips that helped me personally in setting up a working system, alongside some examples of why Linux is my personal favorite.
A few months back Digital Tutors/Pluralsight picked me up to release a training series with them. It’s entitled Intermediate to Advanced AOV Manipulation Techniques in NUKE. In it I will be showing some tips and tricks for working as efficiently as possible with multipass renders. It starts by giving a brief introduction to AOVs for newcomers, just so that everyone is on the same page, but quickly ramps up into more advanced topics.
After the introductions I prepared a small project to integrate a CG car into a live-action environment. Tweaking it to make it look good in the shot and an in-depth rundown of why I do what I do are just a few of the things I will be discussing.
Also check out some before/after screenshots of the CG slapcomp vs the final composite to get a rough idea:
Iridescence on surfaces is an interesting effect and can be a challenge to get looking correct. It occurs, for example, on some animal skins as well as on surfaces covered in oil under certain conditions. Because it is a very specific look, it can often require lots of iterations until the client is happy with the result, so quick turnarounds are often necessary. Luckily it is also often quite a subtle effect, so it’s a good candidate for look development in 2D instead of constantly re-rendering your CG.
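One common 2D approach (a hedged sketch of the idea, not necessarily the workflow the post describes) is to drive a rainbow ramp with a facing-ratio or normals pass and layer the result over the beauty. Here `colorsys` stands in for a Nuke colour lookup, and all parameter names are made up for illustration:

```python
# Sketch: map a facing-ratio value in [0, 1] to a thin-film-style rainbow
# tint by cycling the hue. In comp this tint would be screened/added over
# the beauty render, masked to the iridescent areas.
import colorsys

def iridescence_tint(facing, hue_offset=0.0, cycles=1.5, saturation=0.6):
    """facing: 0 = grazing, 1 = facing camera. Returns an (r, g, b) tint."""
    hue = (hue_offset + facing * cycles) % 1.0
    return colorsys.hsv_to_rgb(hue, saturation, 1.0)
```

Because every parameter is a cheap 2D tweak, iterating with the client takes seconds instead of another render pass.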
The new RIS mode introduced in RenderMan 19 is a completely new render engine that is very different from REYES. Being a brute-force path tracer (with uni- and bidirectional modes), it works much more like other renderers that follow a similar approach (e.g. Arnold). That approach aims to make the render process simpler and more interactive. And while I personally am not completely sold on it yet, it seems to be getting widely adopted in the film industry, and we all have to adjust to it sooner or later :)
In the process of trying to streamline my day-to-day workflow a bit more, I have recently been working on a handful of small helper tools. One of them is a small set of scripts that handles converting a selection of files to Arnold’s and RenderMan’s native .tx/.tex formats straight from the file browser… because people like a GUI, right? :)
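Stripped of the file-browser integration, the core of such a tool is just a loop over the selection that shells out to each renderer's converter: `maketx` (shipped with Arnold, from OpenImageIO) for .tx and RenderMan's `txmake` for .tex. A minimal sketch, assuming both CLIs are on `$PATH` (function names and the bare-bones invocations are my own simplification):

```python
# Minimal sketch of the batch conversion: build the converter command for
# each source texture, then run it. Error handling kept deliberately thin.
import os
import subprocess

def build_cmd(src, renderer="arnold"):
    """Return the CLI invocation that converts `src` for the given renderer."""
    base, _ = os.path.splitext(src)
    if renderer == "arnold":
        return ["maketx", src, "-o", base + ".tx"]   # OIIO's maketx
    return ["txmake", src, base + ".tex"]            # RenderMan's txmake

def convert_textures(paths, renderer="arnold"):
    for src in paths:
        subprocess.run(build_cmd(src, renderer), check=True)
```

A file-browser frontend then only has to pass the selected paths to `convert_textures`; both tools also accept extra flags for mipmapping and tiling options that a production version would expose.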