A moment in the sun Journals

It’s time for 17 to 23 July 2016. After last week, I’m back in the saddle. This week’s entry is mostly about automation.

Pointy’s new eyebrow means using Blender nightlies for production. The nightlies have an option to switch on the new and improved dependency graph. Sadly, the new and improved dependency graph breaks my scenes – specifically, the constraint relationship between Pointy and his sweet hat. Remember this from last week?

The good news is that I found a fix! (Also there’s dialogue on that shot now.)

The bad news is that for every single affected scene, I’d have to splat the old
Pointy, load the new Pointy, apply the old Pointy’s action, splat the
old sombrero, load the new sombrero, constrain the new sombrero to
Pointy’s head and apply a new sombrero action to make the sombrero the right shape. Assuming I even do these things correctly the first time, that’s hours of tedious busywork standing in the way of making progress. Every time I open an old shot to work on it, I have to do that again.

I decided to teach Blender how to do that so that I don’t have to.

Blender has a full scripting language (Python) built in and an application programming interface (API) for making Blender do stuff automagically. Five hours later, I can make Blender do the
aforementioned loady-switchy-constrainy-deletey drudge work. I can even
put the script behind a convenient button on the interface so I don’t have to go
looking for it. Yay!
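I'm not going to publish the actual script (it's welded to my scene), but the shape of it looks roughly like this. All the object, action and bone names below are placeholders, not my real ones, and the API calls are 2.7x-era:

```python
def swap_rig_and_prop(old_rig="Pointy_old", new_rig="Pointy",
                      old_prop="Sombrero_old", new_prop="Sombrero"):
    """Splat the old rig and prop, carry the action across to the new rig
    and re-constrain the prop. Names here are placeholders.
    (Linking the new assets in from the library .blend happens beforehand,
    e.g. via bpy.ops.wm.append.)"""
    import bpy  # only available when running inside Blender

    scn = bpy.context.scene
    old = bpy.data.objects[old_rig]
    action = old.animation_data.action if old.animation_data else None

    # splat the old versions
    for name in (old_rig, old_prop):
        obj = bpy.data.objects.get(name)
        if obj is not None:
            scn.objects.unlink(obj)
            bpy.data.objects.remove(obj)

    # re-apply the old rig's action to the new rig
    new = bpy.data.objects[new_rig]
    if action is not None:
        new.animation_data_create()
        new.animation_data.action = action

    # constrain the new prop to the head bone (bone name is a placeholder)
    prop = bpy.data.objects[new_prop]
    con = prop.constraints.new('CHILD_OF')
    con.target = new
    con.subtarget = "head"
```

The convenient button is then just a matter of wrapping the function in a `bpy.types.Operator` and drawing that operator in a `bpy.types.Panel` — that part is pure boilerplate.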

I’ve also made some extra buttons to control “Only Render” and “Background Images” from the 3D View without diving into the properties panel, plus buttons for choosing which kind of keyframe I’m adding. These are very handy for layout.
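For the curious: the toggles themselves are one-liners once you have the 3D View’s space data — the operator-and-panel wrapping around them is the boilerplate part. A minimal sketch, using the 2.7x property names:

```python
def toggle_only_render(context):
    """Flip the 3D View's 'Only Render' display option (2.7x property name)."""
    space = context.space_data  # the SpaceView3D the button lives in
    space.show_only_render = not space.show_only_render

def toggle_background_images(context):
    """Flip the 3D View's 'Background Images' display option."""
    space = context.space_data
    space.show_background_images = not space.show_background_images
```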

It’s nice being able to code. 🙂

I made progress on another task this week – making a procedural pebbly texture for the ground. I did tests back in February 2015, but they used geometry. Scattering millions of stones across the ground, however, would make my renders way slower. Here’s the version with actual geometry:

And here again is the texture-only version based on Voronoi noise:

The geometry version is more realistic and the procedural version is more impressionistic. I like them both. Ultimately it’s an art direction call which one I go with, but the procedural texture could win out with a few bits of actual geometry on top.
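The material itself is built from Cycles nodes, but if you’re curious how pebbles and cracks fall out of Voronoi noise, here’s the underlying maths as a plain-Python sketch (function names and the gap threshold are mine, purely illustrative):

```python
import math
import random

def voronoi_f1_f2(x, y, seed=0):
    """Distances to the nearest (F1) and second-nearest (F2) feature points
    of a jittered grid with one random point per unit cell -- the same idea
    as Blender's Voronoi texture. Checking the 3x3 cell neighbourhood is
    enough for a sketch."""
    cx, cy = math.floor(x), math.floor(y)
    dists = []
    for ix in range(cx - 1, cx + 2):
        for iy in range(cy - 1, cy + 2):
            rng = random.Random(hash((ix, iy, seed)))  # deterministic per cell
            px, py = ix + rng.random(), iy + rng.random()
            dists.append(math.hypot(x - px, y - py))
    dists.sort()
    return dists[0], dists[1]

def pebble_mask(x, y, gap=0.08, seed=0):
    """1.0 inside a stone, 0.0 in the crack: near a cell border the two
    nearest points are almost equidistant, so F2 - F1 gets small there."""
    f1, f2 = voronoi_f1_f2(x, y, seed)
    return 1.0 if (f2 - f1) > gap else 0.0
```

Feed F1 into a bump map on top of that mask and you get the rounded-pebble look.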

Thanks for reading and I’ll see you next week!


Using sound to drive animation in Blender – video tutorial

It’s here! In this tutorial, I go over Blender’s “Bake Sound to F-Curves” function and some of its options, demonstrate ways of using the resulting data within objects and armatures, and provide a whirlwind tour of the audio spectrum.

The second half of the tutorial shows you how to identify and extract data from specific frequencies in order to make a drummer armature play the drums using automatically triggered actions – no need to keyframe each individual hit or even open the NLA Editor!

Software you’ll need to follow along at home: Blender 2.70 or higher, Audacity or any DAW.
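The same bake can also be driven from Python rather than the menu. A minimal sketch — the kick-drum band numbers are just example values, tune them by ear or against a spectrogram:

```python
def bake_band(filepath, low, high):
    """Bake one frequency band of a sound file onto the selected F-Curves.
    Must run inside Blender with the Graph Editor as the active area."""
    import bpy  # only available when running inside Blender
    bpy.ops.graph.sound_bake(filepath=filepath, low=low, high=high)

# e.g. isolate a kick drum (rough band, purely an example):
#   bake_band("//audio/drums.wav", low=40.0, high=120.0)
```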


I’m working on a Blender function which generates a pseudo-random sequence of numbers guaranteed to be as similar or dissimilar as you need. The current “random” in F-Curves sometimes spits out identical consecutive values and that annoys me. This function makes sure things keep moving when you tell them to.

The values generated are between 0.0 and 1.0. It’s available in Blender as a driver function so you can use it to “jiggle” whatever animation-ready value you want. (Or maybe I should rename it “jitter”. If I say “jiggle”, people might think of “jiggle physics”…)

Inputs are currently an arbitrary channel name (“x”, “y”, “rotz”, etc.), minimum percent dissimilarity as a float from 0.0 to 1.0, and maximum percent dissimilarity as a float with the same bounds. The inputs are sanitised and sanity-checked on the way through to make sure they won’t break anything or crash Blender.

Because it’s in driver-land, you can send in variables from objects which can then be keyed, meaning you could start something fairly still then turn the jitter factor up and down over time by multiplying the function’s output with a variable that starts at 0.0, rises to 1.0 and goes back down to 0.0.

The channels allow the function to drive multiple things using the same iteration of the random variable. In the case of scaling, the first time you execute it on the “scale” channel, it’ll generate a random value. On immediately subsequent calls, the function just returns whatever value it last generated on that channel.
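The function itself isn’t published yet, but the core idea can be sketched like this. Everything here — the name, the frame argument, the exact step logic — is illustrative, not my real implementation:

```python
import random

_cache = {}  # channel -> (frame, value)

def jiggle(channel, frame, min_d=0.1, max_d=1.0):
    """Pseudo-random value in [0, 1] that differs from this channel's
    previous value by at least min_d and at most max_d (sketch only)."""
    # sanitise inputs so a bad driver expression can't blow anything up
    min_d = min(max(float(min_d), 0.0), 1.0)
    max_d = min(max(float(max_d), min_d), 1.0)

    cached = _cache.get(channel)
    if cached is not None and cached[0] == frame:
        # same frame, same channel: return the last value generated, so
        # several drivers sharing a channel all see the same number
        return cached[1]

    prev = cached[1] if cached is not None else random.random()
    feasible = max(prev, 1.0 - prev)   # biggest step that stays in [0, 1]
    step = random.uniform(min_d, max(min_d, min(max_d, feasible)))
    step = min(step, feasible)         # shrink if min_d itself won't fit

    candidates = [prev + step, prev - step]
    in_range = [c for c in candidates if 0.0 <= c <= 1.0]
    # clamp only as a last resort against float rounding at the edges
    value = random.choice(in_range) if in_range else min(max(prev + step, 0.0), 1.0)

    _cache[channel] = (frame, value)
    return value

# inside Blender you would expose it to drivers with something like:
#   bpy.app.driver_namespace["jiggle"] = jiggle
# and call it from a driver expression, e.g.  jiggle("rotz", frame, 0.2, 0.8)
```

Multiplying the result by a keyed object property is what gives you the “start still, jitter up, settle back down” control described above.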

It would be nice (and probably not that hard) to add a smoothing function which keeps a cache for any channel and averages it. Maybe I’ll do a wrapper function with smoothing or something. It would also be better to make the “return last value on channel” thing something that has to be explicitly asked for instead of a default behaviour.
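One way such a smoothing wrapper could look — again a sketch of the idea, not an announced feature; it keeps a short per-channel history and returns the running average:

```python
from collections import defaultdict, deque

_history = defaultdict(lambda: deque(maxlen=5))

def smoothed(channel, raw_value, window=5):
    """Running average of the last `window` values seen on a channel --
    the per-channel smoothing cache described above, as a wrapper you
    would feed the jitter function's output through."""
    hist = _history[channel]
    if hist.maxlen != window:           # window changed: rebuild the cache
        hist = deque(hist, maxlen=window)
        _history[channel] = hist
    hist.append(float(raw_value))
    return sum(hist) / len(hist)
```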

Looking ahead I’d like to offer it as an add-on but I’m still in the middle of fighting add-on API dragons for the first time. I’ll suss it out. 🙂