Replacing a specular pass in comp

I woke up today to a question floating in a Skype conversation, which can more or less be translated as:

If I create a sphere in Nuke
plug in an HDR map
with the wP, can we make a "fake spec"?
with the HDR?

Which made me think of a technique I came up with last year, almost to the day, when I needed to replace the reflections on the front lenses of a pair of binoculars. I didn't ask about the specifics of today's question, but I can only guess that they are the same as the ones that prompted me to do what I will explain below.

The problem was that a specific part of an object needed to display a reflection of the 3D environment. This would not come up if you were rendering direct reflections, or at least using a map of your 3D set as a reflection map; but it can happen when you are using a generic spherical HDRI for whatever reason (cost and simplicity in our case).

Solution

The solution I came up with uses the Relight node.

spec_relight1.png

As shown in the figure, I am plugging an Environment light into the Relight node, along with a Specular material.

  • The Relight node is left with default parameters, I've only ticked use alpha.

  • The Environment node has some interesting parameters. Color, intensity and blur size directly affect the output look, while mirror image and rotate affect the position on the environment map.
    (I've chosen to handle the rotations using a SphericalTransform instead.)

  • The Specular shader is pretty important. Leaving white at 1 will render the correct colors. Min shininess and max shininess affect the look of the specular and, to be perfectly frank, I'm not sure how. I chose not to use them and went below the interface's minimum values by setting them to 0; this renders a perfect reflection over the whole surface.

The next step is more or less just a matter of replacing the rendered specular pass. Here I used the rendered pass to make a mask before merging my new reflection on top of the diffuse pass. You could choose not to use a mask of the previous pass and rely on the min shininess and max shininess values instead. How you want to merge the new reflection is heavily dependent on the look you're after.
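Outside of Nuke, the mask-and-merge step can be sketched with NumPy. Everything here is illustrative: the tiny arrays stand in for the diffuse pass, the rendered specular pass and the Relight output, and the additive merge is just one of the possible merge choices, not the exact node graph.

```python
import numpy as np

# Toy stand-ins for the passes (2x2 RGB float images).
diffuse = np.full((2, 2, 3), 0.3)               # diffuse pass
old_spec = np.zeros((2, 2, 3))                  # rendered specular pass...
old_spec[0, 0] = 0.8                            # ...with one bright pixel
new_spec = np.full((2, 2, 3), 0.5)              # new reflection (Relight output)

# Build a mask from the luminance of the original specular pass,
# then merge the new reflection over the diffuse pass through it.
mask = np.clip(old_spec.mean(axis=-1, keepdims=True), 0.0, 1.0)
comp = diffuse + new_spec * mask                # a simple "plus" merge

print(comp[0, 0])  # masked pixel: diffuse + new reflection
print(comp[1, 1])  # unmasked pixel: diffuse only
```

In Nuke terms, the `mask` line plays the role of the matte extracted from the old pass, and the last line is whatever Merge operation suits the look.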

Results

In the short clip below, I've displayed the original render, a render of my comp, and a render of the same 3D scene that produced the original render but lit with the environment map I used in Nuke, to serve as a comparison.

I also opted not to replace the indirect specular, as most of it was the result of rays hitting the ground.

It is obviously not as good as finding a solution within your render engine and all its physical accuracy, but it's fairly quick and decent looking.

I've made the comp available for download here: download the comp.

Eyes Ping from World Position Pass

Let me preface this post by stating that I'm fairly confident that the following, used as is, is a pretty bad idea, but it is interesting nonetheless, at least for the math behind it.

As I was watching Star Wars Rebels a few days ago, I noticed that the ping in the eyes was pointing towards a light source, or at least a fixed point in space.

On the left, a gif extracted from the season 3, episode 2 preview, Star Wars Rebels: The Antilles Extraction, https://youtu.be/E0M2RC5ENLI

On the shows I worked on, we used two techniques to create the ping, a texture or a mesh. In both cases, the ping followed the eye and not the "light source", which is something that always bugged me.

On the right, a gif extracted from Skylanders Academy's trailer, https://youtu.be/FeMStkCW2LY

Maya has a Closest Point constraint that lets you attach a locator to a mesh surface; it moves the locator to the closest point to the target. On paper, it could be used to move the ping mesh toward the light. In practice, I was somewhat disappointed by the results I got.

The movements produced by that constraint are jittery and favor the vertices over the rest of the surface.
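For a perfectly spherical eye, the point that constraint is approximating actually has a smooth closed form: the closest surface point to the target lies on the line from the center towards the target, at one radius out. A minimal plain-Python sketch (the function name and the sphere assumption are mine, not Maya's):

```python
import math

def closest_point_on_sphere(center, radius, target):
    """Closest point on a sphere's surface to a target point:
    walk from the center toward the target and stop at the radius."""
    d = [t - c for t, c in zip(target, center)]
    norm = math.sqrt(sum(v * v for v in d))
    return tuple(c + radius * v / norm for c, v in zip(center, d))

# Eye at the origin with radius 1, light along +X:
print(closest_point_on_sphere((0, 0, 0), 1.0, (2, 0, 0)))  # (1.0, 0.0, 0.0)
```

Because it depends only on the center and the target, not on the mesh's vertices, this moves smoothly as the target moves, which is exactly what the constraint fails to do.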


The idea

The pretty bad one, but the interesting one.

What Maya does is basically check whether the coordinates of the surface match the coordinates of the line passing through the center of the sphere and the target. Or at least, that's what it looks like to me. It also sounds like something that could be done with a world position pass and some locators.

I'm not entirely sure my method is the simplest one, but here is how I get the equation of a line in 3D space knowing the coordinates of two of its points.

We know that the line \(D\) can be described by the parametric equation \({\displaystyle \left\{{\begin{matrix}x=at+x_{A}\\y=bt+y_{A}\\z=ct+z_{A}\end{matrix}}\right.\quad t\in \mathbb {R} }\) where \({\displaystyle A\left(x_{A},y_{A},z_{A}\right)}\) is a point of \(D\) and \({\vec{u}}{\begin{pmatrix}a\\b\\c\end{pmatrix}}\) is one of its direction vectors.

What I have at my disposal in Nuke is two Axis nodes (one given by an FBX export of the eye's position, one of the light source's position) and the world position pass.

I can use one of the Axis nodes as point \(A\), but I still need the direction vector, which is pretty easy to get as it is \({\vec{u}}{\begin{pmatrix}x_B-x_A\\y_B-y_A\\z_B-z_A\end{pmatrix}}\), with point \(B\) being the second Axis node.
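As a sanity check, the direction vector and the parametric equation can be transcribed in plain Python (function names are illustrative):

```python
def direction(a, b):
    """Direction vector u = B - A of the line through A and B."""
    return tuple(bi - ai for ai, bi in zip(a, b))

def point_on_line(a, u, t):
    """Point of the parametric line A + t*u for a given t."""
    return tuple(ai + t * ui for ai, ui in zip(a, u))

A, B = (1.0, 2.0, 3.0), (4.0, 6.0, 3.0)
u = direction(A, B)
print(u)                          # (3.0, 4.0, 0.0)
print(point_on_line(A, u, 1.0))   # t = 1 lands exactly on B
```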

Now, rather than checking each pixel of the world position pass and getting a single white pixel when one lies exactly on the line (which would be pretty rare; we would need to approximate to get more results), I decided to draw the distance between the line and each pixel.

To do so, I need to first find the projection of each point of the world position pass on the line.

With \({\displaystyle C\left(x_{C},y_{C},z_{C}\right)}\) a point of the world position pass and \({\displaystyle H\left(x_{H},y_{H},z_{H}\right)}\) the projection of \(C\) on \(D\), the segment length \({\overline {{\mathrm {AH}}}}\) is $${\overline {{\mathrm {AH}}}}={\frac {(x_{{\mathrm {C}}}-x_{{\mathrm {A}}})x_{u}+(y_{{\mathrm {C}}}-y_{{\mathrm {A}}})y_{u}+(z_{{\mathrm {C}}}-z_{{\mathrm {A}}})z_{u}}{{\sqrt {x_{u}^{2}+y_{u}^{2}+z_{u}^2}}}}$$ which gives the coordinates of \(H\) as $$\left\{{\begin{aligned}x_{{\mathrm {H}}}=\ &x_{{\mathrm {A}}}+{\frac {\overline {{\mathrm {AH}}}}{{\sqrt {x_{u}^{2}+y_{u}^{2}+z_{u}^2}}}}x_{u}\\y_{{\mathrm {H}}}=\ &y_{{\mathrm {A}}}+{\frac {\overline {{\mathrm {AH}}}}{{\sqrt {x_{u}^{2}+y_{u}^{2}+z_{u}^2}}}}y_{u}\\z_{{\mathrm {H}}}=\ &z_{{\mathrm {A}}}+{\frac {\overline {{\mathrm {AH}}}}{{\sqrt {x_{u}^{2}+y_{u}^{2}+z_{u}^2}}}}z_{u}\\\end{aligned}}\right.$$ The distance between the point of the world position pass and its projection is: $${\overline {{\mathrm {CH}}}}={\sqrt {(x_{C}-x_{H})^{2}+(y_{C}-y_{H})^{2}+(z_{C}-z_{H})^{2}}}$$
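These three formulas transcribe directly into Python. The function name is hypothetical; \(C\) comes from the world position pass, \(A\) and \(\vec{u}\) from the Axis nodes:

```python
import math

def dist_to_line(c, a, u):
    """Distance from point C to the line through A with direction u,
    via the projection H of C onto the line."""
    norm_u = math.sqrt(sum(ui * ui for ui in u))
    # Signed length AH = (AC . u) / |u|
    ah = sum((ci - ai) * ui for ci, ai, ui in zip(c, a, u)) / norm_u
    # H = A + (AH / |u|) * u
    h = tuple(ai + ah / norm_u * ui for ai, ui in zip(a, u))
    # CH = |C - H|
    return math.sqrt(sum((ci - hi) ** 2 for ci, hi in zip(c, h)))

# Line along the X axis; a point one unit above it is one unit away:
print(dist_to_line((0.5, 1.0, 0.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)))  # 1.0
```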

The alpha channel expression is :

clamp(r==0 && g==0 && b==0 ? 0 : 1 - sqrt((r-x)**2 + (g-y)**2 + (b-z)**2) / radius)

Which is one minus the equation above divided by the radius. This limits the size of the circle and puts the whitest point at the shortest distance instead of the farthest.
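Read outside of Nuke, the expression amounts to this small Python function (assuming, as the expression does, that r/g/b hold the pixel's world position and x/y/z the projection point \(H\) for that pixel; the black-pixel test skips the empty background of the world position pass):

```python
def ping_alpha(world_pos, h, radius):
    """Equivalent of the Nuke alpha expression: background pixels get 0,
    otherwise 1 - CH / radius, clamped to [0, 1]."""
    r, g, b = world_pos
    if r == 0 and g == 0 and b == 0:      # empty world-position pixel
        return 0.0
    ch = ((r - h[0]) ** 2 + (g - h[1]) ** 2 + (b - h[2]) ** 2) ** 0.5
    return max(0.0, min(1.0, 1.0 - ch / radius))

print(ping_alpha((1.0, 0.0, 0.0), (1.0, 0.0, 0.0), 2.0))  # on the line: 1.0
print(ping_alpha((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), 2.0))  # background: 0.0
```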

Results

Specular pass from the render engine

Calculated specular from the expression

Difference between the two speculars

And that is it. I never put it to the test in production, nor did I create a proper gizmo. I hope that you have found some interest in this first blog post!


EDIT : Cyril from 2019 here!

Here are the comp files, in case you're interested.