Dealing with frequency in procedural animation

Back to the bouncing ball..

Over the past few days we had to create a procedural animation for the wings of a Tinkerbell-like fairy. The final animation consisted of the wings flapping very fast, with the flapping speed changing over the course of the animation.
Easy task, isn’t it?
To get a continuous animation, we can drive a parameter of our object with a trigonometric function, e.g.:
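As a minimal Python sketch of the idea (`wing_angle`, `freq` and `amp` are hypothetical names, not from any specific package), a flap angle driven by a sine of constant frequency could look like:

```python
import math

def wing_angle(t, freq=2.0, amp=45.0):
    """Wing angle (degrees) at time t (seconds), driven by a sine
    of constant frequency `freq` (Hz) and amplitude `amp`."""
    return amp * math.sin(2.0 * math.pi * freq * t)
```

At t=0 the angle is 0, and a quarter of a period later it reaches the full amplitude.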

But what if we animate the frequency value over time? Here's what we may get:

This translates into instantaneous jumps and direction changes in the motion of our 3d object. The reason for this behaviour is that by changing the frequency of the sine curve we are squashing/stretching its base period, but the instantaneous phase is not preserved, i.e. at the same instant we 'see' different angles between the two curves. Thus, to fix the problem, we have to impose that when we change the frequency, the instantaneous phase is kept the same, i.e.:

As we can see, the new instantaneous phase carries an additional phase term p1, coming from the previous phase. This term must keep the same property as the new instantaneous phase:

Please note that t is not the same between the first and the second term.

In general, to keep continuity at each time t (in discrete time, e.g. our frames), the instantaneous phase at t=tn can be written as:

Here's how our final sinusoid will look:

And here’s our example with the new formula applied:

Note that in this case the curve has 'broken tangents' at the points where the frequency changes. This is because the resulting function is not C1-continuous, i.e. its derivative is discontinuous. To get smoother transitions, simply avoid instantaneous changes of frequency.

We can use this formula in an expression. Because of the iterative sum, a smart way to evaluate the expression is to store the partial sums along the playback of the animation:
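Here is a Python sketch of that stateful approach (hypothetical names; a real Maya expression would keep the running sum in an attribute): the phase is advanced frame by frame by the current frequency.

```python
import math

class PhaseAccumulator:
    """Keeps the running (partial-sum) instantaneous phase across
    evaluations; correct only if samples are evaluated in order."""
    def __init__(self):
        self.phase = 0.0
        self.last_t = 0.0

    def sample(self, t, freq):
        # advance the phase by the current frequency over the elapsed time,
        # so changing `freq` never causes a jump in the instantaneous phase
        self.phase += 2.0 * math.pi * freq * (t - self.last_t)
        self.last_t = t
        return math.sin(self.phase)
```

Calling `sample` with a different `freq` at each step keeps the curve continuous, since each increment starts from the previously accumulated phase.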

Though this method is very efficient, it has some limitations. The partial sum is calculated from the previous expression evaluation, which means that if we scrub the timeline or jump back and forth during playback (as animators usually do while working), we lose the exact count of the partial sum and the result is wrong. To fix that, at each expression evaluation we have to recalculate the whole sum, so the expression becomes state-independent:
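A Python sketch of the state-independent version (`freq_of` is a hypothetical callback returning the animated frequency at a given frame):

```python
import math

def phase_at_frame(n, freq_of, fps=24.0):
    """State-independent instantaneous phase at frame n: re-sum the
    per-frame phase increments from frame 0 at every evaluation."""
    dt = 1.0 / fps
    return sum(2.0 * math.pi * freq_of(k) * dt for k in range(1, n + 1))

def sample(n, freq_of, fps=24.0):
    # safe to call for any frame in any order, at O(n) cost per evaluation
    return math.sin(phase_at_frame(n, freq_of, fps))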

So far so good… for now.

What about setting high frequencies? We might experience ugly animations, jumps and even… no animation at all.

The reason is the downsampling of the motion described by our high-frequency sinusoid with respect to the frame rate chosen for playback, which is usually 24 fps.

Let’s consider this case:

Here the blue curve is our expression, the vertical red lines mark the frames, and the red circles are the values of the expression sampled at each frame.

In this case it is easy to understand the movement of our object. Now let's progressively increase the sine frequency:

The jumps between frames are slightly more marked, but we can still perceive the motion. Keep increasing the frequency:

The movement starts to become fuzzy..

In this case the period of the sinusoid equals the time between 2 frames. Even though the expression describes a movement, the final result after sampling is no movement at all. The same effect occurs when the period equals 2 frames.

Keep increasing the frequency:

In this case the movement is perceived as a lower-frequency signal, i.e. we perceive the object as if it were slowing down.

All these effects are well known in signal theory and, as anticipated, they are the result of incorrectly sampling the equation describing the motion. They can also be seen in the classic reversed wheel rotation (wagon-wheel) effect.

Now the question is, what is our maximum frequency bound?

According to the Nyquist-Shannon theorem, the maximum frequency we can represent for a given fs (sampling frequency, in our case the fps) cannot be higher than fs/2. That is, if we animate our object at a frequency higher than 12 Hz while sampling at 24 fps, we perceive a distorted motion.
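We can check this numerically with a quick Python sketch: a 23 Hz sine sampled at 24 fps produces exactly the same samples as a 1 Hz sine played in reverse, which is the aliased motion the viewer would perceive.

```python
import math

fps = 24.0
for n in range(48):
    t = n / fps
    hi = math.sin(2.0 * math.pi * 23.0 * t)   # the animated 23 Hz signal
    lo = -math.sin(2.0 * math.pi * 1.0 * t)   # its perceived alias: 1 Hz, reversed
    assert abs(hi - lo) < 1e-6                # identical at every frame
```

The two signals are indistinguishable at the frame samples, so the renderer has no way to recover the intended 23 Hz motion.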

Vector motion blur algorithms give us a somewhat higher range of action in computer graphics. In this case, even if the object 'jumps' too much between adjacent frames, the blurring helps the viewer perceive the motion. Moreover, the subsampling performed by this algorithm captures the motion in between frames.

But even motion blur subsampling has its bounds. If the frequency of our object is higher than (scene fps/2) * #subsamples, the algorithm receives an aliased motion as input, exactly as described above.

In the visual effects industry, for extreme frequencies unreachable by the renderer even with motion blur subsampling, a common technique consists of animating multiple copies of the object with lower frequencies and different phases, and compositing the copies in post-production.

A more accurate but computationally expensive way to solve the problem would be 'supersampling' the frames, e.g. rendering at 48 fps and 'downsampling' in post-production.

Merging vertices at corner


Today a modeler at work asked me if there is a tool in Maya to merge two vertices from 2 different meshes at the corner (not just by averaging their positions), as depicted in the figure:

I was unsure about the toolset for modeling provided by Maya, but I thought it was a good chance to write out some code and have fresh material for the blog.

My idea was to create a script that, given some selected vertices or edges from different (or the same) meshes, would move the external points right to the corner.

This very simple task presents some non-trivial challenges.

First of all, what we 'see' programmatically in a mesh is just a cloud of points, their connections (the edges), the faces and so on, so we have to figure out what we need as input from the user and how to process it to achieve the result.

When developing tools, one of the crucial aspects I focus on most is keeping the tool usable, meaning simple and easy to understand for the user.

Here’s my approach:

Imagine if we could extend the longitudinal edges of each mesh: the intersection points between these edges would give us the final positions at which to place the border vertices of each mesh.

Mathematically speaking, our problem can be reduced to calculating the intersection point between 2 lines representing 2 adjacent edges from the different meshes.

Now we can remodel the problem in this way: given 2 edges, calculate their intersection and move the closer points to the intersection.

Given 2 lines R and S in 3d space (in our case constructed by extending 2 mesh edges to infinity), we might have the following cases:

- R intersects S in P(x', y', z');

- R || S: they are parallel, so S never intersects R, or R=S everywhere;

- R and S are skew: they never intersect and are not parallel.

Depending on the case, we might take different approaches. I opted for averaging the border vertices when R=S, and in the other cases finding the 2 closest points P and Q, with P on R and Q on S, so we keep the direction of each edge. In case R intersects S, it follows that P=Q.

Once 2 edges are selected, we can find their end vertices using the polyInfo command:

sel = cmds.ls(selection=True, flatten=True)
if len(sel) != 2:
    OpenMaya.MGlobal.displayError("Select exactly 2 edges")
    return

vtxsA = cmds.polyInfo(sel[0], edgeToVertex=1)[0].split()
vtxsB = cmds.polyInfo(sel[1], edgeToVertex=1)[0].split()

Now we have a set of four points. We need to know which 2 of these points to move once we find the intersection. We might match point by point and find the 2 closest. I used a different approach: for each point of an edge, find the one closer to the average of the 2 vertices of the other edge (A and B represent the 2 edges):

if distance(posA[1], avgB) < distance(posA[0], avgB):
    closerA = vtxsA[1]
else:
    closerA = vtxsA[0]

if distance(posB[1], avgA) < distance(posB[0], avgA):
    closerB = vtxsB[1]
else:
    closerB = vtxsB[0]

We can compute the intersection between 2 lines in 3d space using different approaches, according to the way we choose to represent them: e.g. intersection between planes, parametric equations and so on.

I went for parametric equations:

For S:

X(t) = x0 + (x1-x0)*t

Y(t) = y0 + (y1-y0)*t

Z(t) = z0 + (z1-z0)*t

For R:

X’(h) = x’0 + (x’1-x’0)*h

Y’(h) = y’0 + (y’1-y’0)*h

Z’(h) = z’0 + (z’1-z’0)*h

with l=x1-x0, l'=x'1-x'0, m=y1-y0, m'=y'1-y'0, n=z1-z0, n'=z'1-z'0 as the direction parameters of the lines.

From the parametric equations we can compute the vector difference:

D(t,h) = [Dx(t,h), Dy(t,h), Dz(t,h)]


Dx = X'(h) - X(t), Dy = Y'(h) - Y(t), Dz = Z'(h) - Z(t)

We can set the gradient of the squared norm |D(t,h)|^2 to zero to find its minimum, i.e.:

This gives us a system of 2 equations in 2 variables (t and h).

Depending on the solutions of the system, we can see whether the lines intersect, are parallel or are skew. The rest of the code is raw math translating all these concepts.
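Here is a sketch of that math in Python (a hypothetical helper, not the exact code from the script): given each line as a point plus a direction vector, solving the 2x2 system yields t and h, and from them the two closest points.

```python
def closest_points(p0, u, q0, v, eps=1e-9):
    """Closest points between lines P(t) = p0 + t*u and Q(h) = q0 + h*v.
    Returns the pair (P, Q); returns None for parallel/coincident lines.
    If the lines intersect, P == Q is the intersection point."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    w0 = [p - q for p, q in zip(p0, q0)]
    a, b, c = dot(u, u), dot(u, v), dot(v, v)
    d, e = dot(u, w0), dot(v, w0)
    den = a * c - b * b
    if abs(den) < eps:            # directions are parallel: no unique minimum
        return None
    # solution of the 2x2 system from the zero-gradient condition
    t = (b * e - c * d) / den
    h = (a * e - b * d) / den
    P = [p + t * x for p, x in zip(p0, u)]
    Q = [q + h * x for q, x in zip(q0, v)]
    return P, Q
```

For intersecting edges the two returned points coincide; for skew edges they are the points realizing the minimum distance, which is exactly where we want to move the border vertices.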

The final lines assign the corresponding coordinates given by h and t to the closest vertices to the corner:

PA_X = qA + t*lA

PA_Y = uA + t*mA

PA_Z = wA + t*nA

PB_X = qB + h*lB

PB_Y = uB + h*mB

PB_Z = wB + h*nB


cmds.xform(closerA, t=[PA_X, PA_Y, PA_Z], ws=1)
cmds.xform(closerB, t=[PB_X, PB_Y, PB_Z], ws=1)

You can download the sourcecode here:

To test it, source the contained .py file, select 2 edges as shown in the picture and call the function mergeAngle().

The code is pretty rough but effective… it was coded in just half an hour. It is meant as a starting point to build upon. A better implementation would iterate along all the border vertices to get the corresponding edges and fix them in one pass… any suggestion, request or improvement is very welcome.

Thanks for reading

Hello world!

Hello world.. a familiar sentence for programmers..

In these months my experience with programming and vfx has been growing very fast, and I feel the need to share my knowledge with the world. This is why I decided to open a Blog section on the website.

In this blog I will post random thoughts about programming and 3d. My goal is to push my (and your) knowledge always one step forward.

Thus, you won't find general basic tutorials here (there's no need to reinvent the wheel), except for advanced topics, where a different reading key can help demystify complex concepts.

Stay tuned