<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Julius Ihle &#187; lookdev</title>
	<atom:link href="http://julius-ihle.de/?feed=rss2&#038;tag=lookdev" rel="self" type="application/rss+xml" />
	<link>http://julius-ihle.de</link>
	<description>LookDev/Lighting TD &#124; Compositor</description>
	<lastBuildDate>Sun, 18 Jun 2023 07:39:58 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=4.1.41</generator>
	<item>
		<title>HDR Prep Tip #1: Painting out Elements near Poles</title>
		<link>http://julius-ihle.de/?p=2283</link>
		<comments>http://julius-ihle.de/?p=2283#comments</comments>
		<pubDate>Sat, 14 Jan 2017 16:17:23 +0000</pubDate>
		<dc:creator><![CDATA[Julius]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[cgi]]></category>
		<category><![CDATA[computer graphics]]></category>
		<category><![CDATA[hdr]]></category>
		<category><![CDATA[hdri]]></category>
		<category><![CDATA[high dynamic range]]></category>
		<category><![CDATA[lighting]]></category>
		<category><![CDATA[look development]]></category>
		<category><![CDATA[lookdev]]></category>
		<category><![CDATA[maya]]></category>
		<category><![CDATA[nuke]]></category>
		<category><![CDATA[prep]]></category>
		<category><![CDATA[shading]]></category>
		<category><![CDATA[vfx]]></category>
		<category><![CDATA[visual effects]]></category>

		<guid isPermaLink="false">http://julius-ihle.de/?p=2283</guid>
		<description><![CDATA[I received a question the other day asking how I would approach painting out lights from LatLong HDRI&#8217;s that are near the bottom&#8230;<p><a href="http://julius-ihle.de/?p=2283" class="more-link post-excerpt-readmore"><span class="more-link-inner">Read more</span><span class="more-link-brd"></span></a></p>]]></description>
				<content:encoded><![CDATA[<p><a href="http://julius-ihle.de/?p=2283"><img class="lazyload  wp-image-2290 alignleft" data-original="http://julius-ihle.de/wp-content/uploads/2017/01/hdrpreptips1.png" alt="hdrpreptips1" width="800" height="333" /></a>I received a question the other day asking how I would approach painting out lights from LatLong HDRIs that are near the bottom or the top of the image. Anything near those poles is usually heavily distorted and a pain to remove if you try to paint on the original map. However, there&#8217;s quite an easy way to remove such elements in Nuke and, at the same time, get the lights into a nice rectangular format for use on area lights.</p>
<p><span id="more-2283"></span></p>
<p>The original question came up under the post of my HDR Prepper Gizmo for Nuke:</p>
<blockquote><p>I have a question, how do you adjust your lights that are skewed and need to be formatted to fit your area lights? Do you adjust your arealights scale to match your bounding boxes? This example is kind of made for this scenario however there are many lights that are distorted and need to be flattened for your arealights. Im curious as to how your workflow solves this.</p></blockquote>
<p>It&#8217;s a more than valid question, as I used a very simple example when I was demoing this Gizmo. More often than not, however, the HDRs you receive in production will have lights or other objects near the poles of the LatLong environment map that you need to remove.</p>
<p>A good example is this HDR map, which I got from <a title="example HDR" href="https://hdrihaven.com/hdri.php?hdri=garage&amp;dl=no" target="_blank">HERE</a>:</p>
<p><img class="lazyload  size-full wp-image-2284 aligncenter" data-original="http://julius-ihle.de/wp-content/uploads/2017/01/00baseHDR.png" alt="00baseHDR" width="1025" height="515" /></p>
<p>&nbsp;<br />
All the lights on the ceiling (especially the one circled in red) would be very hard to paint out like this, and you would need to distort them heavily to get them straight enough to map onto an area light.</p>
<p>Luckily, Nuke has a node called SphericalTransform. With it you can convert between different environment-map conventions (LatLong, mirror ball, etc.). One thing you can do is convert your input Latitude/Longitude map to a cubic map. Visually, you can imagine that it puts you in the middle of the environment map (at the original position of the camera) and allows you to look around from that point of view. The output will be undistorted from the camera&#8217;s point of view (provided you set your output format to a square format):</p>
<p><a href="http://julius-ihle.de/wp-content/uploads/2017/01/01st_ll_to_cubic.png"><img class="lazyload  size-full wp-image-2285 aligncenter" data-original="http://julius-ihle.de/wp-content/uploads/2017/01/01st_ll_to_cubic.png" alt="01st_ll_to_cubic" width="1277" height="597" /></a></p>
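<p>For intuition, the mapping a LatLong image encodes can be sketched in a few lines of Python. This is only a hedged illustration of the standard equirectangular mapping &#8211; Nuke&#8217;s exact axis convention may differ by sign or axis order:</p>

```python
import math

def latlong_to_dir(u, v):
    """Map normalized LatLong coordinates (u, v in [0, 1]) to a unit
    direction vector (x right, y up, z forward -- an assumed convention)."""
    lon = (u - 0.5) * 2.0 * math.pi  # full 360 degrees across the width
    lat = (v - 0.5) * math.pi        # 180 degrees over the height
    return (math.cos(lat) * math.sin(lon),
            math.sin(lat),
            math.cos(lat) * math.cos(lon))
```

<p>The pole distortion follows directly from this: an entire row of pixels near v = 1 collapses onto directions close to straight up, which is why painting there on the original map is so painful.</p>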
<p>As you can see, in this example I rotated the input so that it looks at the ceiling, directly at the light I am trying to remove. That light is now nice and square and very easy to paint out:</p>
<p><a href="http://julius-ihle.de/wp-content/uploads/2017/01/02st_paintout.png"><img class="lazyload  size-full wp-image-2286 aligncenter" data-original="http://julius-ihle.de/wp-content/uploads/2017/01/02st_paintout.png" alt="02st_paintout" width="1282" height="598" /></a></p>
<p>Or you could also use this handy little thingy: <a title="HDR Prepper for Nuke" href="http://julius-ihle.de/?p=156" target="_blank">HDR Prepper Gizmo</a> ;)</p>
<p>Once that is done you can convert it back from <em>Cube</em> to <em>Lat Long map</em> with another SphericalTransform node. There are two things to watch out for to make sure it matches your original HDR&#8217;s orientation:</p>
<ul>
<li>Make sure your <em>Input Rotation Order</em> is the reverse of your first SphericalTransform&#8217;s (e.g. ZXY -&gt; YXZ)</li>
<li>Negate all the numbers from your first Spherical Transform (e.g. 90 becomes -90 and vice-versa)</li>
</ul>
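<p>The two rules can be sanity-checked with a bit of Python. This is a hand-rolled sketch using 3&#215;3 rotation matrices (the convention that the order string equals the multiplication order is an assumption here), but the underlying algebra &#8211; reversed order plus negated angles inverts a rotation &#8211; holds regardless:</p>

```python
import math

def axis_rot(axis, deg):
    """3x3 rotation matrix about a single axis, angle in degrees."""
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    if axis == "X":
        return [[1, 0, 0], [0, c, -s], [0, s, c]]
    if axis == "Y":
        return [[c, 0, s], [0, 1, 0], [-s, 0, c]]
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]  # "Z"

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rotation(order, rx, ry, rz):
    """Compose the per-axis rotations in the given order string, e.g. 'ZXY'."""
    angles = {"X": rx, "Y": ry, "Z": rz}
    m = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    for axis in order:
        m = matmul(m, axis_rot(axis, angles[axis]))
    return m

# forward transform: order ZXY with some rotation values
fwd = rotation("ZXY", 30, -45, 90)
# undo it: reversed order YXZ with all angles negated
inv = rotation("YXZ", -30, 45, -90)
ident = matmul(fwd, inv)  # numerically the identity matrix
```

<p>Multiplying the forward and reversed-negated matrices gives the identity &#8211; which is exactly what you want: the second SphericalTransform undoes the rotation of the first.</p>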
<p>&nbsp;</p>
<p><a href="http://julius-ihle.de/wp-content/uploads/2017/01/03st_cubic_to_ll_final_paintover.png"><img class="lazyload aligncenter size-full wp-image-2288" data-original="http://julius-ihle.de/wp-content/uploads/2017/01/03st_cubic_to_ll_final_paintover.png" alt="03st_cubic_to_ll_final_paintover" width="1920" height="875" /></a></p>
<p>&nbsp;</p>
<p>You can then either make sure your alpha is completely opaque before converting back to LatLong and merge the result over your original, or Keymix it back in exactly where you need it.</p>
<p>&nbsp;</p>
<p>&nbsp;</p>
<p>&nbsp;</p>
<p>&nbsp;<br />
<span style="color: #808080;">_________________________________________________</span><br />
<span style="color: #808080;"><span style="font-size: small;"> If this post has helped you in any way you can express your gratefulness by using the <em>Donate </em>Button below to buy me a coffee! :)</span></span><br />
<a href="https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&amp;hosted_button_id=VH5RW3YDWDJSQ" target="_blank" rel="nofollow"><img class="lazyload" data-original="https://www.paypal.com/en_US/i/btn/x-click-but21.gif" alt="" /></a></p>
<div class="moz-text-html" lang="x-unicode">
<div><span style="font-size: small;"><span style="color: #808080;">3M5xNSV7g2NzVpHMqzYKhgoKJZ8CWa644m</span></span><br />
<img class="lazyload alignleft size-full wp-image-2576" data-original="http://julius-ihle.de/wp-content/uploads/2018/03/bc_ldgr_qrcode.png" alt="bc_ldgr_qrcode" width="128" height="128" /></div>
</div>
]]></content:encoded>
			<wfw:commentRss>http://julius-ihle.de/?feed=rss2&#038;p=2283</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>Playing with OSL #1: 3D Shading Masks</title>
		<link>http://julius-ihle.de/?p=2266</link>
		<comments>http://julius-ihle.de/?p=2266#comments</comments>
		<pubDate>Tue, 03 Jan 2017 21:35:36 +0000</pubDate>
		<dc:creator><![CDATA[Julius]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[arnold]]></category>
		<category><![CDATA[blender]]></category>
		<category><![CDATA[cgi]]></category>
		<category><![CDATA[computer graphics]]></category>
		<category><![CDATA[lighting]]></category>
		<category><![CDATA[look development]]></category>
		<category><![CDATA[lookdev]]></category>
		<category><![CDATA[maya]]></category>
		<category><![CDATA[open shading language]]></category>
		<category><![CDATA[shading]]></category>
		<category><![CDATA[vfx]]></category>
		<category><![CDATA[visual effects]]></category>
		<category><![CDATA[vray]]></category>

		<guid isPermaLink="false">http://julius-ihle.de/?p=2266</guid>
		<description><![CDATA[I have recently started getting into Shader Writing in OSL (actually rather Pattern Writing just to not get frustrated too quickly :-) ). To start&#8230;<p><a href="http://julius-ihle.de/?p=2266" class="more-link post-excerpt-readmore"><span class="more-link-inner">Read more</span><span class="more-link-brd"></span></a></p>]]></description>
				<content:encoded><![CDATA[<p><a href="http://julius-ihle.de/?p=2266"><img class="lazyload alignleft wp-image-2267" data-original="http://julius-ihle.de/wp-content/uploads/2017/01/osl_pmask.jpg" alt="osl_pmask" width="800" height="333" /></a></p>
<p>I have recently started getting into shader writing in OSL (or rather pattern writing, so as not to get frustrated too quickly :-) ). To start things off I wrote a simple pattern that lets you create a live 3D mask in your shading network, which can be used to alter different shading effects in very particular areas of an object without the need to paint texture masks. And since it&#8217;s OSL, it should work in any renderer that supports it (e.g. PRMan/RIS, Cycles in Blender, V-Ray, etc.).</p>
<p><span id="more-2266"></span></p>
<p>The Pattern itself only has a few simple controls:</p>
<div id="attachment_2268" style="width: 453px" class="wp-caption aligncenter"><a href="http://julius-ihle.de/wp-content/uploads/2017/01/osl_PMask_UI.png"><img class="lazyload wp-image-2268 size-full" data-original="http://julius-ihle.de/wp-content/uploads/2017/01/osl_PMask_UI.png" alt="osl_pmask_ui" width="443" height="570" /></a><p class="wp-caption-text">PMask OSL Pattern inside Maya with RenderMan (PxrOSL Node)</p></div>
<p>&nbsp;</p>
<p>Here&#8217;s a quick explanation:<br />
<span style="text-decoration: underline;">Mapping</span> &#8211; determines whether to use world- or object-space for the 3d mask (choose <em>PWorld</em> or <em>PRef</em>). Unfortunately the PxrOSL node in Maya ignores shader metadata for creating a nice dropdown menu here (Blender users might be luckier :)).<br />
<span style="text-decoration: underline;">Radius</span> &#8211; The X, Y, Z (-&gt; R, G, B) scale of the spherical mask.<br />
<span style="text-decoration: underline;">Whitepoint</span> &#8211; Values below 1 will move the core towards the outside edge.<br />
<span style="text-decoration: underline;">Blackpoint</span> &#8211; Values above 0 will move the outside edge towards the core.<br />
<span style="text-decoration: underline;">Gamma</span> &#8211; Can be used to increase or decrease the falloff.</p>
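<p>To make the controls concrete, here&#8217;s what a pattern like this might compute, sketched in Python. The actual OSL source isn&#8217;t reproduced in this post, so the names and exact math below are illustrative, not the shipped code:</p>

```python
def pmask(P, pos, radius=(1.0, 1.0, 1.0),
          whitepoint=1.0, blackpoint=0.0, gamma=1.0):
    """Spherical 3D mask: 1.0 at the centre `pos`, falling to 0.0 at the
    per-axis `radius`, remapped by black-/whitepoint and shaped by gamma."""
    # normalized distance from the mask centre, scaled per axis (-> R, G, B)
    d = sum(((p - c) / r) ** 2 for p, c, r in zip(P, pos, radius)) ** 0.5
    m = max(0.0, 1.0 - d)  # linear falloff, 1 at the centre
    # levels-style remap: whitepoint < 1 pushes the core outwards,
    # blackpoint > 0 pulls the outside edge inwards
    m = min(max((m - blackpoint) / (whitepoint - blackpoint), 0.0), 1.0)
    return m ** gamma       # gamma increases or decreases the falloff
```

<p>Here P would be the shading point in <em>PWorld</em> or <em>PRef</em> space, depending on the Mapping setting.</p>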
<p>All of these parameters are mappable, too! For this tool, that has some very nice benefits.</p>
<p>The first one is that you can create a locator, for example, and map its translation to the Pos input. If you are using <em>PWorld</em> as your Mapping type, it will use world-space coordinates, so wherever you move the locator, the mask will follow. However, assuming the locator is not animated, this will be a static mask in world space, and any animated objects will swim through it.<br />
If you want the mask to follow your object, set the Mapping to <em>PRef</em>. This uses the coordinates of the object&#8217;s translation and deformation relative to a reference position (zero&#8217;d-out transforms) and makes the mask stick to the object wherever it goes. Keep in mind that the origin for the position (0,0,0) is always wherever the object has zero&#8217;d-out transforms, which means you might need an offset if you have frozen your transforms at some point. Another alternative is to parent or constrain a locator to a moving object and leave the Mapping set to <em>PWorld</em>.<br />
To double-check the coordinates, the OSL node allows you to output <em>PWorld</em> and <em>PRef</em> for debugging purposes (you can take the RGB color values of a point in the render as your Pos value).</p>
<p>Also, a simple sphere is often not very desirable, so you can map the radius with other procedurals or textures to break up its shape.</p>
<p>Because this is based on position data, it&#8217;s completely independent of the object&#8217;s UVs.</p>
<p>You can download it <a title="download compiled .oso" href="http://53035544.de.strato-hosting.eu/data/PMask.oso">HERE</a>.</p>
<p>Here&#8217;s a quick demo video:<br />
<iframe src="https://player.vimeo.com/video/197934849" width="640" height="338" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p><a href="https://vimeo.com/197934849">3D Position Masks in OSL for Shading Networks</a> from <a href="https://vimeo.com/julsvfx">Julius Ihle</a> on <a href="https://vimeo.com">Vimeo</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://julius-ihle.de/?feed=rss2&#038;p=2266</wfw:commentRss>
		<slash:comments>4</slash:comments>
		</item>
		<item>
		<title>Digital Tutors/Pluralsight Training Release</title>
		<link>http://julius-ihle.de/?p=641</link>
		<comments>http://julius-ihle.de/?p=641#comments</comments>
		<pubDate>Sat, 20 Feb 2016 14:20:23 +0000</pubDate>
		<dc:creator><![CDATA[Julius]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[aov]]></category>
		<category><![CDATA[aovs]]></category>
		<category><![CDATA[cgi]]></category>
		<category><![CDATA[compositing]]></category>
		<category><![CDATA[computer graphics]]></category>
		<category><![CDATA[diffuse]]></category>
		<category><![CDATA[lighting]]></category>
		<category><![CDATA[lookdev]]></category>
		<category><![CDATA[multipass]]></category>
		<category><![CDATA[refraction]]></category>
		<category><![CDATA[shading]]></category>
		<category><![CDATA[specular]]></category>
		<category><![CDATA[texturing]]></category>
		<category><![CDATA[vfx]]></category>
		<category><![CDATA[visual effects]]></category>

		<guid isPermaLink="false">http://julius-ihle.de/?p=641</guid>
		<description><![CDATA[A few months back, Digital Tutors/Pluralsight picked me up to release a training series with them. It&#8217;s entitled Intermediate to Advanced AOV Manipulation Techniques&#8230;<p><a href="http://julius-ihle.de/?p=641" class="more-link post-excerpt-readmore"><span class="more-link-inner">Read more</span><span class="more-link-brd"></span></a></p>]]></description>
				<content:encoded><![CDATA[<p><a href="http://julius-ihle.de/?p=641"><img class="lazyload alignleft wp-image-642" data-original="http://julius-ihle.de/wp-content/uploads/2016/02/dt.png" alt="dt" width="800" height="333" /></a></p>
<p>A few months back, Digital Tutors/Pluralsight picked me up to release a training series with them. It&#8217;s entitled <strong><span style="text-decoration: underline;"><a title="Intermediate-to-Advanced-AOV-Manipulation-Techniques-in-NUKE" href="https://www.pluralsight.com/courses/intermediate-advanced-aov-manipulation-techniques-nuke-2397" target="_blank">Intermediate to Advanced AOV Manipulation Techniques in NUKE</a></span></strong>. In it I show some tips and tricks for working as efficiently as possible with multipass renders. It starts with a brief introduction to AOVs for newcomers, just so everyone is on the same page, but quickly ramps up into more advanced topics.<br />
After the introductions I prepared a small project: integrating a CG car into a live-action environment. Tweaking it to look good in the shot and an in-depth rundown of why I do what I do are just a few of the things I discuss.</p>
<p>Also check out some before/after screenshots of the CG slapcomp vs the final composite to get a rough idea:<span id="more-641"></span></p>
<p style="text-align: left;">  <a href="http://julius-ihle.de/wp-content/uploads/2016/02/dt_training_slap.png" target="_blank"><img class="lazyload alignleft wp-image-644" data-original="http://julius-ihle.de/wp-content/uploads/2016/02/dt_training_slap.png" alt="dt_training_slap" width="600" height="338" /></a> <a href="http://julius-ihle.de/wp-content/uploads/2016/02/dt_training_final.png"><img class="lazyload alignleft wp-image-643" data-original="http://julius-ihle.de/wp-content/uploads/2016/02/dt_training_final.png" alt="dt_training_final" width="600" height="338" /></a></p>
]]></content:encoded>
			<wfw:commentRss>http://julius-ihle.de/?feed=rss2&#038;p=641</wfw:commentRss>
		<slash:comments>5</slash:comments>
		</item>
		<item>
		<title>More Katana Macros/Tools</title>
		<link>http://julius-ihle.de/?p=492</link>
		<comments>http://julius-ihle.de/?p=492#comments</comments>
		<pubDate>Sat, 28 Feb 2015 10:12:44 +0000</pubDate>
		<dc:creator><![CDATA[Julius]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[arnold]]></category>
		<category><![CDATA[cookie]]></category>
		<category><![CDATA[gobo]]></category>
		<category><![CDATA[katana]]></category>
		<category><![CDATA[lighting]]></category>
		<category><![CDATA[lookdev]]></category>
		<category><![CDATA[rendering]]></category>
		<category><![CDATA[renderman]]></category>
		<category><![CDATA[slidemap]]></category>
		<category><![CDATA[vfx]]></category>
		<category><![CDATA[workflow]]></category>

		<guid isPermaLink="false">http://julius-ihle.de/?p=492</guid>
		<description><![CDATA[During the last year I luckily had some time to spend on digging a bit deeper into Katana. From working on assets and shots I&#8230;<p><a href="http://julius-ihle.de/?p=492" class="more-link post-excerpt-readmore"><span class="more-link-inner">Read more</span><span class="more-link-brd"></span></a></p>]]></description>
				<content:encoded><![CDATA[<p><a href="http://julius-ihle.de/?p=492"><img class="lazyload alignleft wp-image-493" data-original="http://julius-ihle.de/wp-content/uploads/2015/02/katana_dev.jpg" alt="katana_dev" width="800" height="333" /></a></p>
<p style="text-align: justify;">During the last year I luckily had some time to dig a bit deeper into Katana. While working on assets and shots, I kept running into moments where I thought it&#8217;d be great to have some basic tools to get the job done just a little bit quicker. So whenever I had some spare time, I developed a few macros/gizmos/tools to help artists with common problems.<span id="more-492"></span></p>
<p><iframe src="//player.vimeo.com/video/120869259" width="500" height="281" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p><a href="https://vimeo.com/120869259">Katana Macro/Tool Dev</a> from <a href="https://vimeo.com/julsvfx">Julius Ihle</a> on <a href="https://vimeo.com">Vimeo</a>.</p>
<p>&nbsp;</p>
<p>One of the tools is a macro that helps manage multiple lights. Based on a CEL statement, you can freely modify exposed shading and transform parameters. This comes in especially handy in scenes with bigger environments that have to be lit with multiple light sources. It also features randomization controls for light color, rotation &amp; scale.</p>
<p><a href="http://julius-ihle.de/wp-content/uploads/2015/02/lmanager.png"><img class="lazyload aligncenter  wp-image-494" data-original="http://julius-ihle.de/wp-content/uploads/2015/02/lmanager.png" alt="lmanager" width="479" height="580" /></a></p>
<p>&nbsp;</p>
<p>Next, we occasionally had shots that required very accurate light cookies/gobos based on the plate. On recent shows, for example, we had shots of characters walking through the woods in quite direct lighting, which caused very complex, noticeable shadows from the trees&#8217; branches and leaves. In some scenes you could get away with putting random light cookies as projectors (slidemaps) on the lights, but others needed a more accurate solution. This is a macro I have been working on that automates the whole process:</p>
<p><a href="http://julius-ihle.de/wp-content/uploads/2015/02/cookiemaker.png"><img class="lazyload aligncenter  wp-image-495" data-original="http://julius-ihle.de/wp-content/uploads/2015/02/cookiemaker.png" alt="cookiemaker" width="480" height="413" /></a>It basically works this way: you project the plate through the render camera onto the set geometry and render that result back through the light source, which should give you a proper light gobo. There are usually a few things to consider, like the camera&#8217;s focal length and pre/post transformations of the plate to match the light&#8217;s scale, all of which depend on how the slidemap is implemented in the shader. With this macro nobody has to worry about any of that, because it&#8217;s all handled automatically :) The only drawback is that generating the resulting cookie involves a 2-step rendering process (projection onto geo and post-tweaks), which makes it not as interactive as it could be &#8211; but at least those renders go through really quickly.</p>
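<p>The heart of the projection step is an ordinary pinhole projection: take a world-space point on the set geometry and find where it lands on the plate. A hedged sketch of that math (the parameter names are illustrative, not the macro&#8217;s actual interface):</p>

```python
def project_to_plate(p_world, cam_xform_inv, focal, h_aperture, v_aperture):
    """Pinhole projection of a world-space point into normalized plate
    coordinates (0-1), the core of reprojecting a plate through a camera.
    cam_xform_inv: function mapping world space -> camera space.
    focal and the apertures must be in the same units."""
    x, y, z = cam_xform_inv(p_world)  # camera looks down -Z
    if z >= 0.0:
        return None                   # point is behind the camera
    u = (focal * x / -z) / h_aperture + 0.5
    v = (focal * y / -z) / v_aperture + 0.5
    return (u, v)
```

<p>The second render then does the same thing from the light&#8217;s point of view, which is why the focal length and aperture of the stand-in camera matter so much when matching the light&#8217;s coverage.</p>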
<p>Here&#8217;s a quick run-through of a previous setup based on the same technique; it&#8217;s a bit more manual and goes through Nuke. This should also work with just about any setup that requires building accurate light cookies.<br />
Say this is your light setup (in this case in Katana, but it could be Maya, Houdini, or whatever):</p>
<p><a href="http://julius-ihle.de/wp-content/uploads/2015/02/katana_lightsetup.png"><img class="lazyload aligncenter  wp-image-496" data-original="http://julius-ihle.de/wp-content/uploads/2015/02/katana_lightsetup.png" alt="katana_lightsetup" width="754" height="547" /></a><br />
You replicate the same thing in Nuke by importing the same camera and set geometry. You project the plate through the render camera onto the set geometry, copy the light&#8217;s transformations from your 3D package onto a new camera, and then render through that camera:</p>
<p><a href="http://julius-ihle.de/wp-content/uploads/2015/02/nuke_lightsetup.png"><img class="lazyload aligncenter  wp-image-497" data-original="http://julius-ihle.de/wp-content/uploads/2015/02/nuke_lightsetup.png" alt="nuke_lightsetup" width="999" height="382" /></a><br />
The tricky part is getting the correct settings (focal length, etc.) on the camera that stands in for the light, which involves a bit of trial and error. You can then desaturate the map and grade, paint, and do all the fancy comp stuff to make it do what you want, and then use it on your lights:</p>
<p><a href="http://julius-ihle.de/wp-content/uploads/2015/02/katana_cookie_arealight2.jpg"><img class="lazyload   wp-image-498 aligncenter" data-original="http://julius-ihle.de/wp-content/uploads/2015/02/katana_cookie_arealight2.jpg" alt="katana_cookie_arealight2" width="498" height="280" /></a> <a href="http://julius-ihle.de/wp-content/uploads/2015/02/plate_720.jpg"><img class="lazyload   wp-image-499 aligncenter" data-original="http://julius-ihle.de/wp-content/uploads/2015/02/plate_720.jpg" alt="plate_720" width="500" height="281" /></a></p>
<p>When the FX folks do their magic and blow things up into thousands of pieces, that&#8217;s always awesome. Anyone in comp is usually happy if you provide IDs for just about anything, so, provided the FX cache contains separate geometry pieces or face sets, that&#8217;s pretty straightforward with this ID randomizer:</p>
<p><a href="http://julius-ihle.de/wp-content/uploads/2015/02/idassgner.png"><img class="lazyload aligncenter size-full wp-image-500" data-original="http://julius-ihle.de/wp-content/uploads/2015/02/idassgner.png" alt="idassgner" width="508" height="329" /></a><br />
When lookdev&#8217;ing assets it always helps to ensure some consistency between renders. While we had templates that managed chrome and grey balls and render cameras, we didn&#8217;t have a setup for the turntable itself&#8230; Well, now we do:</p>
<p><a href="http://julius-ihle.de/wp-content/uploads/2015/02/turntable.png"><img class="lazyload aligncenter size-full wp-image-501" data-original="http://julius-ihle.de/wp-content/uploads/2015/02/turntable.png" alt="turntable" width="723" height="296" /></a></p>
]]></content:encoded>
			<wfw:commentRss>http://julius-ihle.de/?feed=rss2&#038;p=492</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Katana 2D Helpers</title>
		<link>http://julius-ihle.de/?p=299</link>
		<comments>http://julius-ihle.de/?p=299#comments</comments>
		<pubDate>Sun, 24 Aug 2014 18:03:35 +0000</pubDate>
		<dc:creator><![CDATA[Julius]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[2d]]></category>
		<category><![CDATA[hdr]]></category>
		<category><![CDATA[hdri]]></category>
		<category><![CDATA[ibl]]></category>
		<category><![CDATA[image based lighting]]></category>
		<category><![CDATA[katana]]></category>
		<category><![CDATA[lighting]]></category>
		<category><![CDATA[lookdev]]></category>
		<category><![CDATA[macro]]></category>
		<category><![CDATA[rendering]]></category>
		<category><![CDATA[tools]]></category>
		<category><![CDATA[workflow]]></category>

		<guid isPermaLink="false">http://julius-ihle.de/?p=299</guid>
		<description><![CDATA[One of the great things about Katana is the ability to have a simple compositor right within your lighting/lookdev environment. I have been using its&#8230;<p><a href="http://julius-ihle.de/?p=299" class="more-link post-excerpt-readmore"><span class="more-link-inner">Read more</span><span class="more-link-brd"></span></a></p>]]></description>
				<content:encoded><![CDATA[<p><a href="http://julius-ihle.de/?p=299"><img class="lazyload alignnone wp-image-260" data-original="http://julius-ihle.de/wp-content/uploads/2014/08/katana_2dhelpers.jpg" alt="katana_2dhelpers" width="800" height="333" /></a></p>
<p>One of the great things about Katana is having a simple compositor right within your lighting/lookdev environment. I have been using its 2D features more or less extensively throughout the last couple of projects. It&#8217;s quite handy to tweak your HDR right before you plug it into an environment light, without having to go back into Nuke for smaller adjustments. Unfortunately, Katana&#8217;s 2D interface lacks quite a few things that would make it a truly valuable alternative to Nuke for preparation tasks, because you have to wire more things together yourself. So I worked on a few &#8216;gizmos&#8217; to make the process a bit less cumbersome.</p>
<p style="text-align: justify;"><span id="more-299"></span></p>
<p style="text-align: justify;">When I&#8217;m doing lookdev I usually try a lot of different environments to test my shading in. Now it could be that you would want to rotate each HDR slightly differently because maybe the main lightsource should come from a different direction. As rotating the envlight everytime you test a new HDR isn&#8217;t non-destructive at all I built a simple group that imitates Photoshop&#8217;s offset feature:</p>
<div style="width: 640px; " class="wp-video"><!--[if lt IE 9]><script>document.createElement('video');</script><![endif]-->
<video class="wp-video-shortcode" id="video-299-1" width="640" height="360" preload="metadata" controls="controls"><source type="video/ogg" src="http://julius-ihle.de/wp-content/uploads/2014/08/katana_2d_offset.ogv?_=1" /><a href="http://julius-ihle.de/wp-content/uploads/2014/08/katana_2d_offset.ogv">http://julius-ihle.de/wp-content/uploads/2014/08/katana_2d_offset.ogv</a></video></div>
<p style="text-align: justify;">The maximum offset distance is defined by the image width/height. So if the input image is 2048 pixels wide, you cannot offset it by more than +/-2048 pixels in x.</p>
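<p>Conceptually, the wrap-around offset is just a modulo on pixel coordinates. A minimal sketch (a hypothetical helper, not the macro&#8217;s internals):</p>

```python
def offset_wrapped(x, y, dx, dy, width, height):
    """Wrap-around offset of a pixel coordinate, like Photoshop's Offset
    filter: pixels pushed off one edge re-enter on the opposite edge."""
    return ((x + dx) % width, (y + dy) % height)
```

<p>For a LatLong map, offsetting in x like this is equivalent to rotating the environment around its vertical axis, which is exactly why it works for re-aiming the main light source.</p>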
<p style="text-align: justify;">Also, in case you decide to go for an environment-only lighting workflow, you still want good control over the individual lights in your HDR. This workflow gives you less control, as you cannot move individual lights around, but it is much quicker to set up. When going this route it helps if you can still tweak the intensity of the individual lights within the HDR and maybe slightly adjust their color. So I mashed together a very simple group that emulates Nuke&#8217;s default Keyer node. It outputs the resulting key into the alpha while leaving RGB untouched. The controls should be self-explanatory.</p>
<p style="text-align: justify;"><a href="http://julius-ihle.de/wp-content/uploads/2014/08/simple_keyer_katana.png"><img class="lazyload wp-image-302 aligncenter" data-original="http://julius-ihle.de/wp-content/uploads/2014/08/simple_keyer_katana.png" alt="simple_keyer_katana" width="444" height="188" /></a></p>
<p style="text-align: justify;">This might work for some cases, but as the name implies it&#8217;s just a simple keyer. Here&#8217;s another example HDR:</p>
<p style="text-align: justify;"><a href="http://julius-ihle.de/wp-content/uploads/2014/08/hue_keyer_baseHDR.png"><img class="lazyload aligncenter  wp-image-446" data-original="http://julius-ihle.de/wp-content/uploads/2014/08/hue_keyer_baseHDR.png" alt="hue_keyer_baseHDR" width="499" height="264" /></a></p>
<p style="text-align: justify;">Let&#8217;s say one would like to grade the sky, because we are a bit picky about the blue shade of it :) If you were to try to use the simple keyer you would not be able to properly isolate just the sky because the sun and the bright screen-left building also have very high values in the blue channel. So I created a slightly more sophisticated keyer for those kind of purposes:</p>
<p style="text-align: justify;"><a href="http://julius-ihle.de/wp-content/uploads/2014/08/hue_keyer.png"><img class="lazyload aligncenter size-full wp-image-447" data-original="http://julius-ihle.de/wp-content/uploads/2014/08/hue_keyer.png" alt="hue_keyer" width="470" height="273" /></a></p>
<p style="text-align: justify;">Hm, interface-wise it looks pretty much the same as the other keyer&#8230; So what&#8217;s so fancy about it? This one does not work on one channel at a time, but instead looks at the difference between channels. If you set it to blue, as in this example, it will basically isolate the areas in which the blue channel has higher values than the average of red and green. Apart from that it also features the option to pull a saturation key, pretty much identical to Nuke (Colorspace RGB-&gt;HSV or a Keyer set to &#8220;saturation&#8221;), except that it works in linear space instead of sRGB.<br />
Here&#8217;s an example matte overlay created with this keyer &#8211; as you can see, the sky is quite well isolated and does not include the sun or other bright bits of the image:</p>
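<p style="text-align: justify;">The channel-difference idea can be sketched in numpy (a hedged illustration, not the actual Katana group &#8211; the names and the linear ramp are my own assumptions):</p>

```python
import numpy as np

# Minimal channel-difference keyer: key areas where one channel exceeds
# the average of the other two. Function name and ramp are illustrative.
def hue_key(rgb, channel=2, low=0.0, high=0.5):
    """Matte from the difference of `channel` vs. the other two.

    rgb: float array of shape (H, W, 3) with linear values.
    """
    others = [c for c in range(3) if c != channel]
    diff = rgb[..., channel] - rgb[..., others].mean(axis=-1)
    # A bright neutral sun has high blue but equally high red and green,
    # so its difference is ~0 and it drops out of the key.
    return np.clip((diff - low) / (high - low), 0.0, 1.0)

sky = np.array([[[0.1, 0.3, 0.9]]])  # bluish sky pixel
sun = np.array([[[5.0, 5.0, 5.0]]])  # very bright neutral pixel
```

The sky pixel keys fully while the much brighter but neutral sun pixel keys to zero, which is exactly why this works where the single-channel keyer fails.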
<p style="text-align: justify;"><a href="http://julius-ihle.de/wp-content/uploads/2014/08/hue_keyer_matteHDR.png"><img class="lazyload aligncenter  wp-image-448" data-original="http://julius-ihle.de/wp-content/uploads/2014/08/hue_keyer_matteHDR.png" alt="hue_keyer_matteHDR" width="498" height="225" /></a></p>
<p style="text-align: justify;">So without having to wire up a whole lot of stuff one can get masks comparatively quickly. What I personally like to do is keep one base image to use for mask inputs. I wire the mattes for the individual lights together so that the different lights are isolated in R, G, B and A. After pulling my keys I can crop out the areas I want and recombine them using my &#8220;channel_shuffle&#8221; group. The controls are quite simple: you specify which channel to use as an input and into which channel it should be output (similar to Nuke&#8217;s Shuffle node&#8230; with a less pretty interface :) ). That way you can quickly build one RGBA matte, using crops to isolate the different keys:</p>
<p style="text-align: justify;"><a href="http://julius-ihle.de/wp-content/uploads/2014/08/channel_shuffle_katana.png"><img class="lazyload wp-image-303 aligncenter" data-original="http://julius-ihle.de/wp-content/uploads/2014/08/channel_shuffle_katana.png" alt="channel_shuffle_katana" width="828" height="261" /></a></p>
<p style="text-align: justify;">Then it&#8217;s just a matter of using them on a gain-node for example to tweak the exposure of the individual lights.</p>
<p style="text-align: justify;">Example file is <span style="text-decoration: underline;"><a title="HERE" href="http://53035544.de.strato-hosting.eu/data/misc_2d_v006.katana" target="_blank">HERE</a></span> to play with.</p>
]]></content:encoded>
			<wfw:commentRss>http://julius-ihle.de/?feed=rss2&#038;p=299</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="http://julius-ihle.de/wp-content/uploads/2014/08/katana_2d_offset.ogv" length="2111954" type="video/ogg" />
		</item>
		<item>
		<title>HDR Prepper Nuke Gizmo for IBL (Updated!)</title>
		<link>http://julius-ihle.de/?p=156</link>
		<comments>http://julius-ihle.de/?p=156#comments</comments>
		<pubDate>Sun, 10 Nov 2013 17:55:17 +0000</pubDate>
		<dc:creator><![CDATA[Julius]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[compositing]]></category>
		<category><![CDATA[hdr]]></category>
		<category><![CDATA[hdri]]></category>
		<category><![CDATA[ibl]]></category>
		<category><![CDATA[image based lighting]]></category>
		<category><![CDATA[lighting]]></category>
		<category><![CDATA[lookdev]]></category>
		<category><![CDATA[nuke]]></category>
		<category><![CDATA[rendering]]></category>
		<category><![CDATA[vfx]]></category>

		<guid isPermaLink="false">http://julius-ihle.de/?p=156</guid>
		<description><![CDATA[Image based lighting often times simplifies getting started with a good lighting setup when trying to integrate CG into live action. Often times it&#8217;s beneficial&#8230;<p><a href="http://julius-ihle.de/?p=156" class="more-link post-excerpt-readmore"><span class="more-link-inner">Read more</span><span class="more-link-brd"></span></a></p>]]></description>
				<content:encoded><![CDATA[<p><a href="http://julius-ihle.de/?p=156"><img class="lazyload alignleft wp-image-2179" data-original="http://julius-ihle.de/wp-content/uploads/2013/11/hdrprepper.jpg" alt="hdrprepper" width="800" height="333" /></a></p>
<p>Image based lighting oftentimes simplifies getting started with a good lighting setup when trying to integrate CG into live action. It is often beneficial to have the IBL sphere contain just the ambient light from the environment, and therefore to paint out any actual light sources from the set. Those can then be mapped onto separate lights that provide the main illumination from the set lights, while the IBL sphere contributes ambient environment light. Now, if you have a lot of different environments and consequently a lot of different HDRs, it can be tedious to paint out the lights and output them afterwards&#8230; <span id="more-156"></span>Apart from all the other work that needs to be done anyway (merge to HDR, stitch, match to plate, etc.). Sooo, let&#8217;s try to speed things up a bit&#8230; once again :)</p>
<p><strong>//01.04.2014 Update</strong></p>
<p>I was working on a Nuke gizmo that takes care of removing lights from an HDR with as little effort as possible. Upon creating the gizmo you will be presented with a bounding box. This area should contain a light within the scene that you want to extract &#8211; just put it tightly around a light source. If the HDR contains a light whose shape doesn&#8217;t fit neatly into the rectangle, you can plug an alpha into the &#8220;Mask&#8221; input. After you have cleaned up one light you can proceed by adding another HDR_Prepper node, removing as many lights as you wish.<br />
The gizmo UI is divided into 3 sections:</p>
<p><a href="http://julius-ihle.de/wp-content/uploads/2013/11/hdr_prepper_v02_properties.png"><img class="lazyload aligncenter size-full wp-image-2277" data-original="http://julius-ihle.de/wp-content/uploads/2013/11/hdr_prepper_v02_properties.png" alt="hdr_prepper_v02_properties" width="604" height="670" /></a></p>
<p>The &#8220;Light Removal Settings&#8221; tab contains settings to ease the removal of light sources. It basically works by smearing the edge pixels of the crop area inwards. Most of the knobs should be more or less self-explanatory; either refer to the tooltips or watch the example demo further down this post :)<br />
The topmost part of the gizmo deals with outputting both the extracted lights and the cleaned-up HDR. The output path is the root path to which both the HDR and the lights will be rendered. If you have multiple light sources and consequently many HDR_Prepper nodes in your script, you can hit the &#8220;Set all to this folder&#8221; button to make all other HDR_Prepper nodes use the same output path.<br />
The &#8220;Light Name&#8221; is the name of the output for the given light you are removing. Each light therefore needs a unique name, because otherwise they will just overwrite each other. If you have multiple lights of the same type (e.g. multiple computer monitors that illuminate the scene) you can call one of them &#8220;screen&#8221; or &#8220;screen1&#8221; for example, select all the other nodes used for the other screens and hit &#8220;Set selected to this name&#8221;. That way the selected HDR_Preppers will be named like the current one, numbered sequentially.<br />
The &#8220;Env HDR Output&#8221; section deals with outputting the cleaned HDR. You can give it a name, specify a format of your choice and choose whether to convolve the output or not. Once your settings are set you just have to hit &#8220;Create Env Output&#8221; and the corresponding nodes will be created with the settings you have chosen. Keep in mind that you need to have the <a title="EnvConvolve" href="http://www.nukepedia.com/gizmos/filter/envconvolve" target="_blank">EnvConvolve gizmo</a> installed, otherwise this section won&#8217;t work.</p>
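<p>The edge-pixel smearing mentioned above can be illustrated with a toy numpy version (the real gizmo is more involved &#8211; this only shows the principle of blending the crop&#8217;s border pixels across the region; the function name is mine):</p>

```python
import numpy as np

# Toy version of the light-removal idea: fill the crop region by linearly
# blending ("smearing") between the columns bordering it on the left and
# right. The real gizmo is more involved; this only shows the principle.
def smear_fill(img, x0, x1, y0, y1):
    """Replace img[y0:y1, x0:x1] with a blend of the bordering columns."""
    out = img.copy()
    left = img[y0:y1, x0 - 1:x0]      # column just left of the crop
    right = img[y0:y1, x1:x1 + 1]     # column just right of the crop
    t = np.linspace(0.0, 1.0, x1 - x0)[None, :, None]
    out[y0:y1, x0:x1] = left * (1.0 - t) + right * t
    return out

img = np.zeros((3, 6, 3), dtype=np.float32)
img[:, :2] = 1.0                      # bright area left of the "light"
filled = smear_fill(img, 2, 5, 0, 3)  # smear across columns 2..4
```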
<p>Aaaand again, demos explain it best, so here you go :) :</p>
<p><iframe src="//player.vimeo.com/video/90675497" width="500" height="293" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p><a href="http://vimeo.com/90675497">HDR Prepper Update (Nuke Gizmo)</a> from <a href="http://vimeo.com/julsvfx">Julius Ihle</a> on <a href="https://vimeo.com">Vimeo</a>.</p>
<p>The gizmo itself can be obtained <a title="DOWNLOAD" href="http://53035544.de.strato-hosting.eu/perso/HDR_Prepper.gizmo" target="_blank"><span style="text-decoration: underline;">HERE</span></a>.</p>
<p>(So far this has only been tested on Linux, but it should work regardless of the OS.)</p>
]]></content:encoded>
			<wfw:commentRss>http://julius-ihle.de/?feed=rss2&#038;p=156</wfw:commentRss>
		<slash:comments>4</slash:comments>
		</item>
		<item>
		<title>Simple PrimVar Helper Script (updated)</title>
		<link>http://julius-ihle.de/?p=101</link>
		<comments>http://julius-ihle.de/?p=101#comments</comments>
		<pubDate>Sat, 14 Sep 2013 15:09:04 +0000</pubDate>
		<dc:creator><![CDATA[Julius]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[lighting]]></category>
		<category><![CDATA[lookdev]]></category>
		<category><![CDATA[maya]]></category>
		<category><![CDATA[primitive variables]]></category>
		<category><![CDATA[primvars]]></category>
		<category><![CDATA[prman]]></category>
		<category><![CDATA[python]]></category>
		<category><![CDATA[rendering]]></category>
		<category><![CDATA[renderman]]></category>
		<category><![CDATA[vfx]]></category>

		<guid isPermaLink="false">http://julius-ihle.de/?p=101</guid>
		<description><![CDATA[Primitive Variables are a really cool way to get variation in Renderman Shading Networks. As it requires adding Attributes to your Shape Nodes within Maya&#8230;<p><a href="http://julius-ihle.de/?p=101" class="more-link post-excerpt-readmore"><span class="more-link-inner">Read more</span><span class="more-link-brd"></span></a></p>]]></description>
				<content:encoded><![CDATA[<p><a href="http://julius-ihle.de/?p=101"><img class="lazyload alignleft wp-image-2184" data-original="http://julius-ihle.de/wp-content/uploads/2013/09/primvar_helper.jpg" alt="primvar_helper" width="800" height="333" /></a></p>
<p>Primitive Variables are a really cool way to get variation into Renderman shading networks. As controlling the variation requires adding attributes to your shape nodes within Maya, it can be quite time consuming to set up. So I attempted to build an interface that helps a bit with the setup.<span id="more-101"></span>The GUI lets you choose a name, the PrimVar type and a min/max range from which it will randomly pick floating-point numbers. It is also possible to restrict it to integers with the corresponding checkbox. The GUI is far from being pretty or well organized, but it does the job. :)</p>
<p>&nbsp;</p>
<p>As I am a bit short on time, however, I only got around to properly supporting Float PrimVars, since that&#8217;s what I need for a current project I&#8217;m doing. As with all my scripts it is not written very well, but it should be kind of readable, so feel free to modify it as needed. I hope I will find the time soon to make the other types work properly.<br />
For example, using 3 PrimVars with just one shader on all objects results in loads of variation&#8230; which is always a good thing :)</p>
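<p>Stripped of the Maya/GUI parts, what the helper does per object is essentially this (a sketch &#8211; names and signature are illustrative, not the actual script):</p>

```python
import random

# What the helper does per object, minus the Maya parts: pick a random
# value within [min_val, max_val] for each shape, optionally snapped to
# integers. Names and signature are illustrative, not the actual script.
def make_primvar_values(shapes, min_val, max_val, integers=False, seed=None):
    """Return a {shape: value} mapping of randomized primvar values."""
    rng = random.Random(seed)
    values = {}
    for shape in shapes:
        v = rng.uniform(min_val, max_val)
        values[shape] = round(v) if integers else v
    return values

vals = make_primvar_values(["sphereShape1", "sphereShape2"], 0.0, 1.0, seed=7)
```

In Maya each value would then become an attribute on the shape node so Renderman can pick it up as a primvar.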
<p>&nbsp;</p>
<p>Additionally, as you can see, there is also an &#8220;Object ID&#8221; part in the GUI. It lets you add integer PrimVars with increasing values for each object. At the same time it will add an AOV with the same name as the primvar (of course this custom AOV has to be set up in Slim beforehand).<br />
An alternative approach to Object IDs is to assign a distinct floating-point value to each object, so that the value assigned by the script corresponds to the value rendered in the Object ID AOV. For example, object 3, which has an Object ID primvar value of 3 assigned, renders with a value of 3,3,3,3 in RGBA. The good part is that this is very easy to set up, but one may run into aliasing issues when dealing with very thin geometry, for example. To make use of this technique I also prepared a <strong><span style="text-decoration: underline;"><a href="http://53035544.de.strato-hosting.eu/data/ObjectID.gizmo">Nuke Gizmo </a></span></strong>that handles these kinds of Object IDs and has a slider to separate them.</p>
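<p>In comp, pulling a matte from such a float Object ID AOV boils down to keying values near the target ID &#8211; roughly what the gizmo&#8217;s separation slider does (a sketch with illustrative names):</p>

```python
import numpy as np

# Rough comp-side counterpart: key pixels whose flat ID value is close to
# the target -- essentially what the separation slider does. Names are
# illustrative, not the actual gizmo's.
def id_matte(id_aov, target, tolerance=0.5):
    """Binary matte of pixels whose ID lies within tolerance of target."""
    return (np.abs(id_aov - target) < tolerance).astype(np.float32)

id_aov = np.array([[1.0, 2.0, 3.0, 3.0]])  # constant value per object
matte = id_matte(id_aov, 3.0)
```

The aliasing caveat above applies here: filtered pixels on thin geometry carry averaged IDs, which such a hard threshold cannot cleanly separate.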
<p>Also here&#8217;s a quick demo:<br />
<iframe src="//player.vimeo.com/video/75132311" width="500" height="313" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p><a href="http://vimeo.com/75132311">Simple PrimVar Helper Script</a> from <a href="http://vimeo.com/julsvfx">Julius Ihle</a> on <a href="https://vimeo.com">Vimeo</a>.</p>
<p>So hopefully I will find some time to complete the script, but for now if anyone&#8217;s interested feel free to grab it from <strong><span style="text-decoration: underline;"><a title="HERE" href="http://53035544.de.strato-hosting.eu/data/primvar_helper_v002.py">HERE</a></span></strong>.</p>
<p>&nbsp;</p>
]]></content:encoded>
			<wfw:commentRss>http://julius-ihle.de/?feed=rss2&#038;p=101</wfw:commentRss>
		<slash:comments>7</slash:comments>
		</item>
		<item>
		<title>Dirt AOVs with Yeti &amp; PRMan</title>
		<link>http://julius-ihle.de/?p=78</link>
		<comments>http://julius-ihle.de/?p=78#comments</comments>
		<pubDate>Sat, 03 Aug 2013 17:04:21 +0000</pubDate>
		<dc:creator><![CDATA[Julius]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[aov]]></category>
		<category><![CDATA[lighting]]></category>
		<category><![CDATA[lookdev]]></category>
		<category><![CDATA[passes]]></category>
		<category><![CDATA[peregrine labs]]></category>
		<category><![CDATA[prman]]></category>
		<category><![CDATA[rendering]]></category>
		<category><![CDATA[renderman]]></category>
		<category><![CDATA[vfx]]></category>
		<category><![CDATA[yeti]]></category>

		<guid isPermaLink="false">http://julius-ihle.de/?p=78</guid>
		<description><![CDATA[CG fur always has the tendency to look very clean and soft. To break up the structure and to give a bit more realism I&#8230;<p><a href="http://julius-ihle.de/?p=78" class="more-link post-excerpt-readmore"><span class="more-link-inner">Read more</span><span class="more-link-brd"></span></a></p>]]></description>
				<content:encoded><![CDATA[<p><a href="http://julius-ihle.de/?p=78"><img class="lazyload alignleft wp-image-2186" data-original="http://julius-ihle.de/wp-content/uploads/2013/08/yeti_dirt.jpg" alt="yeti_dirt" width="800" height="333" /></a></p>
<p>CG fur always has a tendency to look very clean and soft. To break up the structure and add a bit more realism I like to mix in a bit of dirt, e.g. some leaves or sticks in the fur of a character walking through the woods. Here&#8217;s a quick rundown of what I do to achieve this using Yeti and Renderman within Maya.<span id="more-78"></span>Let&#8217;s start with a simple sphere and some clumpy fur on it. This is my base fur setup.</p>
<p>&nbsp;</p>
<p>What I&#8217;m doing next is to create another YetiNode on the same mesh. This will be (one of) my dirt layers. In this example I&#8217;d like to have some leaves between the fur strands, for which I&#8217;m going to use polyPlanes with textures on them. To distribute those planes along my sphere I first of all need some guide strands laid out. To make the planes take the direction of the fur I have to create a custom comb attribute, which I&#8217;ll call &#8220;dirt_dir&#8221; in this example. So after combing the grown fur with the groom, I&#8217;m going to create yet another comb carrying a comb attribute called &#8220;dirt_dir&#8221;.</p>
<p>&nbsp;</p>
<p>Next I&#8217;ll import the planes which are supposed to hold the leaves and instance them onto the fur strands. For the alignment I&#8217;m going to choose my &#8220;dirt_dir&#8221; attribute to make the planes align along the fur strands. For this to work I have to set &#8220;Instance To&#8221; to &#8220;Elements&#8221;.</p>
<p>&nbsp;</p>
<p>To get a bit more variation and randomness one can also change the alignment, scale and twist variation, as well as tick the &#8220;Deform&#8221; checkbox in the &#8220;Objects&#8221; tab.<br />
Alright, now that this is set up you can assign a simple shader holding a leaf texture, for example, hit render, and you&#8217;ll have a bit of dirt between the fur. However, I personally like to have a bit more control over things in comp, so let&#8217;s take this a step further. I&#8217;m going to set up an AOV which holds the color and alpha of the leaves respectively, while the actual fur render will not contain the leaves at all. To set this up, I just map the leaf&#8217;s color to the diffuse and the leaf&#8217;s alpha to the mask input of a GPSurface shader. I will also turn the transparency up to full white, because I don&#8217;t want the leaves to appear in the actual beauty render.</p>
<p>&nbsp;</p>
<p>For more information on how to set up custom AOV&#8217;s in Slim there&#8217;s a nice explanation <a title="using aovs" href="http://renderman.pixar.com/view/using-aovs" target="_blank">over at pixar.</a></p>
<p>When rendered with everything set up correctly, something like this should come out:</p>
<p>&nbsp;</p>
<p>Of course, with something a bit more sophisticated than just planes with textures on them, one can get really nice results while still maintaining a lot of control. One thing I might add is that the color AOV of the dirt should actually be rendered without the fur being visible. Otherwise, as you can see, it will be held out by the fur as well, which could lead to premultiplication issues in comp. In this example it&#8217;s not really obvious, but there might be cases where you should be aware of it.</p>
]]></content:encoded>
			<wfw:commentRss>http://julius-ihle.de/?feed=rss2&#038;p=78</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
	</channel>
</rss>
