Giving up right

It’s been 17 to 23 September 2017 but I’m posting this at dawn on Sunday the 24th. I can’t seem to shake this cough and I don’t think I’ve had a full night of sleep in a week. Tonight will probably be more of the same.

This would have been the weekend that AAAAAAAAAAA wrapped up, this being the Queen’s Birthday long weekend, but bouts of illness and dayjob stress ground its precious momentum to a halt before it reached any kind of releasable state. There’s still no soundtrack and it’s still three shots long.

Truth is, I’m not inclined to pick it up and keep going with it, either. I’ve lost my taste for doing any animated film stuff, honestly. Doing the solo animated film thing isn’t viable for me right now anyway, so it makes sense to switch to something else.

Something like learning how to use all these nifty plug-ins I’ve bought (e.g. Retopoflow), or getting to grips with bits of Blender I avoid (e.g. hair system and physics), or just smashing through the model-rig-texture workflow over and over without trying to fit the result into an overall project. Something like honing skills and getting out there: entering competitions; drawing (or speedsculpting) during work breaks; identifying and filling skill gaps; generally keeping the juices flowing while banking a lot of short focussed work. I went through some of that process with music so I already know the value of it.

“I’m grinding” doesn’t sound as grandly impressive as “I’m making an animated film”, but I can live with that. My ambitions are more practical now.

Thanks for reading!

A first time for everything

It’s been 10 to 16 September 2017. I was down with a respiratory infection all week and I still haven’t shaken it completely. I’m tired of coughing. 😐

Submissions for the Suzanne Awards 2017 opened earlier this week. (The Suzannes are the Blender Conference’s little Blender-centric film competition.) I’ve entered RYGCBMK◯ with no expectations at all of it getting anywhere. It’s my first time entering anything into any kind of animation festival. Hopefully its fluffy good times lift the spirits of all who consider it for an award. 🙂

I’ve been listening to a ton of Belgian new beat and eurodance this week like a sad old bastard. Here’s a classic from the era and the very first CD single I ever bought: L.A. Style’s “James Brown Is Dead”.

An even more off week

The week just gone was 3 to 9 September 2017. I didn’t start up Blender or switch on my modular synth all week.

I appear to be having a run of creatively unproductive weeks where I’m completely out of patience, stamina and focus. This week has been the worst one so far: either I didn’t even want to make anything or I got five minutes into half-arsing something then decided it was all too hard and gave up.

So that’s where I’ve been this week. I’d like to be somewhere different next week, within reason.

Speaking of funk in an entirely different (and better) sense, I’ve been listening to a lot of James Brown lately. Here’s a long form session from 1973 called “Doing It To Death” where James is super happy to see Maceo Parker. I hope you like it!

Patch of the day – Augmented vocoder

 

I got an Expert Sleepers Disting Mk 4 module on Friday. It is Eurorack’s designated Swiss Army chainsaw. Amongst its 76 modes, it can be a vocoder. I took it for a spin on Friday evening, using a John Cage lecture as the modulator signal, and the above is what came out.

Analogue and analogue-style vocoders are built from band-pass filters, envelope followers and amplifiers. Two signals enter the vocoder: a carrier signal, which supplies the timbre of the output sound, and a modulator signal, which supplies frequency and amplitude information.

A bank of band-pass filters cuts each signal into strips of frequency (think bass and treble, but much more precise). The vocoder tracks how loud the modulator signal is in each strip, then adjusts the volume of the corresponding carrier strip to follow it. The carrier’s strips are mixed back together into a whole signal and out come the Cylons.

One of the challenges with vocoders is getting output that’s as intelligible as the input. The intelligibility of the Mk 4’s vocoder was not great, so I mixed in a filtered version of the original speech, with the filter tuned to the sibilant sounds (like s, f and th) that were getting lost in the vocoding process. It’s an old trick – the EMS Vocoder used for the Cylons has a cleverer version which lets sibilance through as required – but a dumb version works in a pinch, and as a result you can understand Mr Cage just fine.
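
If you’d rather see the idea in code than in patch cables, here’s a rough Python/NumPy sketch of a channel vocoder plus the dumb sibilance trick. It illustrates the general technique only (it has nothing to do with how the Disting actually implements its vocoder), and the band count, crossover frequencies and mix amount are numbers I’ve made up. It assumes modulator and carrier are float arrays at the same sample rate and length.

```python
# A toy channel vocoder: band-pass both signals, follow the modulator's loudness
# in each band, and use it to control the carrier's loudness in the same band.
import numpy as np
from scipy.signal import butter, sosfilt

SR = 48000  # assumed sample rate

def bandpass(lo, hi, sr=SR, order=4):
    """One band of the filter bank."""
    return butter(order, [lo, hi], btype="bandpass", fs=sr, output="sos")

def envelope(x, sr=SR, cutoff=50.0):
    """Envelope follower: rectify, then smooth with a low-pass filter."""
    sos = butter(2, cutoff, btype="lowpass", fs=sr, output="sos")
    return sosfilt(sos, np.abs(x))

def vocode(modulator, carrier, n_bands=16, f_lo=80.0, f_hi=8000.0, sr=SR):
    """Each carrier strip follows the loudness of the matching modulator strip."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)  # log-spaced band edges
    out = np.zeros(len(carrier))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = bandpass(lo, hi, sr)
        out += sosfilt(sos, carrier) * envelope(sosfilt(sos, modulator), sr)
    return out

def add_sibilance(vocoded, speech, sr=SR, cutoff=5000.0, amount=0.3):
    """The 'dumb version' of the trick above: high-pass the original speech and
    mix a little of it back in so the s/f/th sounds stay intelligible."""
    sos = butter(4, cutoff, btype="highpass", fs=sr, output="sos")
    return vocoded + amount * sosfilt(sos, speech)
```

Feed it speech as the modulator and something harmonically rich (a sawtooth drone, say) as the carrier and the familiar robot voice falls out.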

For further fun, I’m modulating the pitch of the carrier signal with a random signal. As the carrier drifts up and down in pitch, Mr Cage sounds much more floaty than usual, which gives him a rather more vague delivery.

Potato

It’s been 27 August to 2 September 2017.

Lately I’ve been hit with a mixture of bad sleep, mid-project boredom, dayjob stress and general restlessness. AAAAAAAAAAAAAA has stalled and I’m even finding it hard to get my head together to write a blog entry. This mental fuzz is truly brutal.

So let’s forget about all that stuff this week. Here’s the long version of Marvin Gaye’s “Got To Give It Up” which I could happily listen to until the heat death of the universe.

I’ve also been watching a ton of episodes of a YouTube puppet show called Glove and Boots. It’s a mix of comedy sketches, inept let’s plays and live streams. You may know them from their Vertical Video Syndrome PSA, their helpful product testing videos or their song parodies (from which I borrowed this week’s blog title). Lately they have also been doing a brisk trade in catchphrases.

I got super inspired by watching these jokers having too much fun, so I set up OBS Studio (Open Broadcaster Software) and figured out how to get my recording setup and modular synth streaming to YouTube. I also briefly fantasised about doing my own puppet show, because I miss doing voice acting and animation is slow, but then I remembered I have too many hobbies as it is.

Sorry for the abrupt ending but I’m about to be late for something and I’ve tried about thirty times to finish the blog properly. As promised, here’s an image of a potato which has nothing to do with anything other than the blog title. See you next week, hopefully with no need for a potato.

I hereby potato this blog entry for its own good. Potato potato.

The cactus speaks!

It’s been 20 to 26 August 2017. In short: I got the rest of the shot rendered, then I got sick.

Here’s an ungraded version of that render with a draft sound effect for the magic cactus. I don’t think it’s quiiiite there yet. See what you think!

 

I don’t have an idea for where to go with Shot 4 yet, except that there should be no more spinning for a bit. The last shot had spinning in it too. Enough spinning for now, I think.

Since I’m focusing on sound, some kind of sound-related gag would work. I don’t even have an actual “AAAAAAAAAAAAAAAAAAAAAA” sound recorded yet, just ideas. Having that sound could lead to all kinds of silly and fun places. I can certainly give that some thought while I’m back-filling the rest of the shots with sound design.

Patch of the day: They interrupt ladies and gentlemen

Have some sound art.

The original sample is a snatch of the Mercury Theatre’s infamous “War of the Worlds” broadcast, looped to twist its meaning. The chief glitch mangler here is a PT2399 digital delay chip living inside a Befaco Crush Delay v2. Its delay length is automatically modulated by four stacked modulation sources – two random, two cyclic – via a Befaco A*B+C attenuverter/offsetter. The strength and offset of that delay modulation, along with the delay’s feedback amount, were manipulated by hand during the recording.
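
If you want to play with the idea without the hardware, here’s a loose software take on the same trick in Python/NumPy: a feedback delay whose delay time is wobbled by a stack of cyclic and random modulators. It’s not a PT2399 emulation, the noise loop is just a stand-in for your own looped sample, and every number in it is invented.

```python
# Feedback delay with a modulated delay time: the wobble in the read position
# smears and glitches whatever loops through it.
import numpy as np

SR = 44100
SECONDS = 10
n = SR * SECONDS
rng = np.random.default_rng(1)

# Stand-in input: a looped burst of smoothed noise. Swap in your own sample here.
loop_len = SR // 2
loop = np.convolve(rng.normal(size=loop_len), np.ones(32) / 32, mode="same")
x = np.tile(loop, n // loop_len + 1)[:n]

# Four stacked modulation sources (two cyclic, two random walks), summed, then
# scaled and offset (the attenuverter/offsetter's job) into a delay time.
t = np.arange(n) / SR
cyclic = np.sin(2 * np.pi * 0.13 * t) + np.sin(2 * np.pi * 0.047 * t + 1.0)
walks = np.cumsum(rng.normal(scale=5e-4, size=(2, n)), axis=1).sum(axis=0)
base, depth = 0.12 * SR, 0.05 * SR                       # offset and scale, in samples
delay_samples = np.clip(base + depth * (cyclic + walks), 1, 0.4 * SR)

FEEDBACK = 0.55
buf_len = int(0.5 * SR)
buf = np.zeros(buf_len)
out = np.zeros(n)
write = 0

for i in range(n):
    read = (write - delay_samples[i]) % buf_len          # fractional read position
    j = int(read)
    frac = read - j
    delayed = (1 - frac) * buf[j] + frac * buf[(j + 1) % buf_len]
    buf[write] = x[i] + FEEDBACK * delayed               # feedback path
    out[i] = 0.5 * x[i] + 0.5 * delayed                  # dry/wet mix
    write = (write + 1) % buf_len

out /= max(1e-9, np.abs(out).max())
```

Push FEEDBACK higher and the modulation depth wider for nastier smears.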

tl;dr = it’s like autechre but funnier

Sample source from http://mercurytheatre.info/

Hurry up and wait, it’s rendering time!

It’s been 13 to 19 August 2017. The animation for Shot 3 is done and I’m midway through 36 hours of rendering as I type this. There are 286 frames to render in total, with each 16-bit 1920×1080 OpenEXR frame taking 11 minutes on my desktop and 19 minutes on my laptop. As of right this minute, there are about 37 frames left to do. It’s all looking fine so far, with nothing obvious to fix up in composite.
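
For the curious, the 36-hour figure lines up roughly with both machines pulling frames from the same queue in parallel; here’s the back-of-the-envelope version, ignoring overhead and the stretches where only one machine was running.

```python
# Rough parallel render-time estimate: two machines sharing one frame queue.
frames = 286
desktop_min = 11   # minutes per frame on the desktop
laptop_min = 19    # minutes per frame on the laptop

rate = 1 / desktop_min + 1 / laptop_min   # combined throughput, frames per minute
hours = frames / rate / 60
print(f"~{hours:.1f} hours")              # ~33 hours; add overhead and it lands near 36
```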

No need to wait for this though – here’s a boomsmash of the final (unrendered) animation, based on last week’s blocking. I might tweak a few frames in final edit, but it’s render-ready. 🙂

 

Not all cactuses levitate when you scream at them, but you should always try it just in case.

Instead of hopping directly into Shot 4 once Shot 3 is rendered, I want to jump over to sound editing for the existing shots so I don’t get too backlogged there. Also I just took delivery of a nifty frequency shifter which should be super helpful for creating the hum of a levitating cactus…

Speaking of rendering, Blender 2.79 is shaping up to be a big release: I’m already using the new render denoising and filmic colour LUTs for AAAAAAAAAA; surface deform will likely come in very handy down the track; and Blender users have been patiently awaiting a principled shader and shadow catcher for literally years. Also, the simplifications to the video encoder panel are awesome. If you’re a Blenderhead, go grab the release candidate, read the release notes on what’s changed and report any bugs you find! 🙂

See you next week!

Patch of the day – frequency splatter

Finally took delivery of a Synthesis Technology E560 Deflector Shield today! This first patch tweaks the speed of the frequency shift in reaction to the volume of John Cage’s speech. The louder John Cage speaks, the more bubbly and swirly he sounds. Sounds wicked sci-fi in headphones. 🙂
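
As a rough software impression of the same idea, here’s a Python sketch using a Hilbert-transform single-sideband shifter, with the shift amount driven by an envelope follower on the speech. To be clear about the assumptions: this is not how the E560 works internally, I’m reading “speed of the frequency shift” as the shift amount in Hz, and the input filename, shift range and envelope settings are all made up.

```python
# Envelope-controlled frequency shift: louder speech pushes the spectrum further.
import numpy as np
from scipy.signal import hilbert, butter, sosfilt
from scipy.io import wavfile

SR, speech = wavfile.read("cage_lecture.wav")   # hypothetical input file
if speech.ndim > 1:
    speech = speech.mean(axis=1)                # fold to mono if needed
speech = speech.astype(float) / np.abs(speech).max()

# Envelope follower: rectify, low-pass, normalise to 0..1.
env = sosfilt(butter(2, 20, btype="lowpass", fs=SR, output="sos"), np.abs(speech))
env = np.clip(env / env.max(), 0.0, 1.0)

# Louder speech -> bigger shift. Integrate the time-varying shift to get a phase ramp.
shift_hz = 300.0 * env
phase = 2 * np.pi * np.cumsum(shift_hz) / SR

# Single-sideband shift: analytic signal times a complex exponential, keep the real part.
shifted = np.real(hilbert(speech) * np.exp(1j * phase))

shifted /= np.abs(shifted).max()
wavfile.write("cage_shifted.wav", SR, (0.9 * shifted * 32767).astype(np.int16))
```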

Patch of the day: Space drips

It’s rainy here so I made some cosmic trickling noises.

I start with a clock signal. That clock goes into a random shift register, which spits out a new voltage every time the clock ticks. The voltage goes into a voltage-controlled filter with its resonance set high, so the filter rings whenever the voltage holds still for more than a fraction of a second.

The bandpass output of that filter goes into the aux path of a (crunchy) digital delay – this means there’s a feedback path between the delay and the filter, which “remembers” what noise the filter was just making and feeds it back to the filter as an input. This creates a subtle fade from one “drip” to the next.
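
If you’d like to poke at this one in software, here’s a loose Python/NumPy re-creation of the chain: a clocked random step signal excites a resonant band-pass filter, and a delay line feeds the filter’s output back into its own input. The clock rate, cutoff, resonance, delay time and feedback amount are all guesses rather than the actual hardware settings.

```python
# Clocked random steps -> ringing band-pass filter -> delay, with the delayed
# signal fed back into the filter input so each "drip" fades into the next.
import numpy as np
from scipy.io import wavfile

SR = 48000
SECONDS = 8
CLOCK_HZ = 4                 # a new random voltage four times a second
CUTOFF_HZ = 900.0            # filter centre frequency
RESONANCE = 80.0             # high Q so each step "pings" the filter into ringing
FEEDBACK = 0.8 / RESONANCE   # keep loop gain under 1 so echoes fade rather than run away
DELAY_S = 0.23

rng = np.random.default_rng(0)
n = SR * SECONDS

# Stepped random "voltage": hold a new random value until the next clock tick.
hold = SR // CLOCK_HZ
stepped = np.repeat(rng.uniform(-1.0, 1.0, size=n // hold + 1), hold)[:n]

# Chamberlin state-variable filter, run sample by sample so the delay's output
# can be mixed into the filter input as we go.
f = 2.0 * np.sin(np.pi * CUTOFF_HZ / SR)
q = 1.0 / RESONANCE
lp = bp = 0.0

delay_len = int(DELAY_S * SR)
delay_buf = np.zeros(delay_len)
out = np.zeros(n)

for i in range(n):
    delayed = delay_buf[i % delay_len]   # what the filter was doing DELAY_S ago
    x = stepped[i] + FEEDBACK * delayed  # filter input: steps plus feedback
    lp += f * bp
    hp = x - lp - q * bp
    bp += f * hp                         # band-pass output
    delay_buf[i % delay_len] = bp        # band-pass output feeds the delay
    out[i] = bp + 0.3 * delayed          # listen to the filter plus a little delay

out /= max(1e-9, np.abs(out).max())
wavfile.write("space_drips.wav", SR, (0.9 * out * 32767).astype(np.int16))
```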

Enjoy! 🙂