Eyes Ping from World Position Pass

Let me preface this post by stating that I'm fairly confident that the following, used as it is, is a pretty bad idea, but it is interesting nonetheless, at least for the math behind it.

As I was watching Star Wars Rebels a few days ago, I noticed that the ping in the eyes was pointing towards a light source, or at least a fixed point in space.

On the left, a gif extracted from the preview of season 3, episode 2, Star Wars Rebels: The Antilles Extraction, https://youtu.be/E0M2RC5ENLI

On the shows I worked on, we used one of two techniques to create the ping: a texture or a mesh. In both cases, the ping followed the eye and not the "light source", which is something that always bugged me.

On the right, a gif extracted from Skylanders Academy's trailer, https://youtu.be/FeMStkCW2LY

Maya has a Closest Point constraint that lets you attach a locator to a mesh surface and moves it to the closest point to the target. On paper, it could be used to move the ping mesh toward the light. In practice, I was somewhat disappointed by the results I got.

The movements produced by that constraint are jittery and favor the vertices over the rest of the surface.


The idea

The pretty bad one, but the interesting one.

What Maya does is basically check whether the coordinates of a point on the surface match the coordinates of the line passing through the center of the sphere and the target. Or at least, that's what it looks like to me. It also sounds like something that could be done with a world position pass and some locators.

I'm not entirely sure my method is the simplest one, but here is how I get the coordinates of a line in 3D space knowing the coordinates of two of its points.

We know that the coordinates of the line \(D\) can be described by the parametric equation \({\displaystyle \left\{{\begin{matrix}x=at+x_{A}\\y=bt+y_{A}\\z=ct+z_{A}\end{matrix}}\right.\quad t\in \mathbb {R} }\) where \({\displaystyle A\left(x_{A},y_{A},z_{A}\right)}\) is a point of \(D\) and \({\vec{u}}{\begin{pmatrix}a\\b\\c\end{pmatrix}}\) is one of its direction vectors.

What I have at my disposal in Nuke is two Axis nodes, one carrying the position of the eye and one the position of the light source (both from an FBX export), plus the world position pass.

I can use one of the Axis nodes as point \(A\), but I still need the direction vector, which is pretty easy to get as it is \({\vec{u}}{\begin{pmatrix}x_B-x_A\\y_B-y_A\\z_B-z_A\end{pmatrix}}\), with point \(B\) being the second Axis node.
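As a quick sanity check, here is how I would build that direction vector and evaluate the parametric equation in Python. This is only a minimal sketch; the eye and light coordinates below are made-up placeholders, not values from an actual scene.

import math

# Point A: center of the eye, point B: light source (placeholder values)
A = (0.0, 15.0, 2.0)
B = (10.0, 20.0, -5.0)

# Direction vector u = B - A
u = (B[0] - A[0], B[1] - A[1], B[2] - A[2])

def point_on_line(t):
    """Evaluate the parametric equation x = a*t + xA, y = b*t + yA, z = c*t + zA."""
    return (u[0] * t + A[0], u[1] * t + A[1], u[2] * t + A[2])

print(point_on_line(0.0))  # returns A
print(point_on_line(1.0))  # returns B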

Now, rather than checking each pixel of the world position pass and outputting a single white pixel when it lies exactly on the line (which would be pretty rare; we would need to approximate to get more results), I decided to draw the distance between the line and each pixel.

To do so, I first need to find the projection of each point of the world position pass on the line.

With \({\displaystyle C\left(x_{C},y_{C},z_{C}\right)}\) a point of the world position pass and \({\displaystyle H\left(x_{H},y_{H},z_{H}\right)}\) the projection of \(C\) on \(D\), the segment length \({\overline {\mathrm {AH}}}\) is $${\overline {\mathrm {AH}}}={\frac {(x_{\mathrm {C}}-x_{\mathrm {A}})x_{u}+(y_{\mathrm {C}}-y_{\mathrm {A}})y_{u}+(z_{\mathrm {C}}-z_{\mathrm {A}})z_{u}}{\sqrt {x_{u}^{2}+y_{u}^{2}+z_{u}^{2}}}}$$ which gives the coordinates of \(H\) as $$\left\{{\begin{aligned}x_{\mathrm {H}}=\ &x_{\mathrm {A}}+{\frac {\overline {\mathrm {AH}}}{\sqrt {x_{u}^{2}+y_{u}^{2}+z_{u}^{2}}}}x_{u}\\y_{\mathrm {H}}=\ &y_{\mathrm {A}}+{\frac {\overline {\mathrm {AH}}}{\sqrt {x_{u}^{2}+y_{u}^{2}+z_{u}^{2}}}}y_{u}\\z_{\mathrm {H}}=\ &z_{\mathrm {A}}+{\frac {\overline {\mathrm {AH}}}{\sqrt {x_{u}^{2}+y_{u}^{2}+z_{u}^{2}}}}z_{u}\\\end{aligned}}\right.$$ The distance between the point of the world position pass and its projection is: $${\overline {\mathrm {CH}}}={\sqrt {(x_{C}-x_{H})^{2}+(y_{C}-y_{H})^{2}+(z_{C}-z_{H})^{2}}}$$
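Translated to Python, the projection and the distance look like this. This is a sketch with the same names as the math above (\(A\), \(u\), \(C\)), where C stands in for a single pixel of the world position pass.

import math

def distance_to_line(C, A, u):
    """Distance from point C to the line through A with direction vector u."""
    norm_u = math.sqrt(u[0] ** 2 + u[1] ** 2 + u[2] ** 2)
    # Signed length AH: projection of the vector AC onto u
    AH = ((C[0] - A[0]) * u[0] + (C[1] - A[1]) * u[1] + (C[2] - A[2]) * u[2]) / norm_u
    # Coordinates of H, the projection of C on the line
    H = (A[0] + AH / norm_u * u[0],
         A[1] + AH / norm_u * u[1],
         A[2] + AH / norm_u * u[2])
    # Distance CH
    return math.sqrt((C[0] - H[0]) ** 2 + (C[1] - H[1]) ** 2 + (C[2] - H[2]) ** 2)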

The alpha channel expression is:

clamp(r==0 && g==0 && b==0 ? 0 : 1 - sqrt((r-x)**2 + (g-y)**2 + (b-z)**2) / radius)

This is one minus the equation above divided by the radius, clamped between zero and one. It limits the size of the circle and puts the whitest point at the shortest distance instead of the farthest (the ternary simply keeps the empty, black pixels of the world position pass at zero).
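For reference, the same thing applied to a whole world position image with NumPy would look roughly like this. It is only a sketch, assuming wpp is a float array of shape (height, width, 3) holding the world positions, and A, B, radius correspond to the eye position, the light position and the radius knob; it is not the actual gizmo.

import numpy as np

def ping_alpha(wpp, A, B, radius):
    """Alpha = clamp(1 - distance_to_line / radius), with black pixels kept at 0."""
    A = np.asarray(A, dtype=np.float32)
    u = np.asarray(B, dtype=np.float32) - A          # direction vector of the line
    norm_u = np.linalg.norm(u)

    AC = wpp - A                                     # vector from A to each pixel's world position
    AH = (AC @ u) / norm_u                           # signed length of the projection on the line
    H = A + (AH / norm_u)[..., None] * u             # projection of each point on the line
    dist = np.linalg.norm(wpp - H, axis=-1)          # distance CH per pixel

    alpha = np.clip(1.0 - dist / radius, 0.0, 1.0)
    alpha[np.all(wpp == 0.0, axis=-1)] = 0.0         # keep empty (black) pixels at zero
    return alpha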

Results

Specular pass from the render engine

Calculated specular from the expression

Difference between the two speculars

And that is it. I never put it to the test in production, nor did I create a proper gizmo. I hope you have found some interest in this first blog post!

 

EDIT: Cyril from 2019 here!

Here are the comp files, in case you’re interested.