Categories
Journals

This is how I know I’m getting the hang of this OSL malarkey – this procedural yin yang symbol took about 15 minutes to do.

Basically, any number of points which are all an equal distance from another point form a circle. When they’re plotted, anything inside the circle forms a disc and anything outside the circle is… well… something else.

The draw function checks if the point is within the smaller top circle first (less than 0.125 from X:0.0, Y:0.5 – Z isn’t relevant). Then it checks if the point is within the smaller bottom circle (less than 0.125 from X:0.0, Y:-0.5). Then it checks if it’s within the larger bottom circle (less than 0.5 from X:0.0, Y:-0.5). Then it checks if it’s within the larger top circle (less than 0.5 from X:0.0, Y:0.5). Then it checks if it’s in the right half of the circle (Po[0] is the X position). If it’s not in the right half where X is greater than 0, it must be in the left half where X is less than zero.

The shader itself checks whether the point being plotted is further than 1 away from point X:0.0, Y:0.0. If it is, the Alpha output is set to 1 – in the rest of the material setup, this makes everything outside the disc transparent. If it’s closer than 1, it goes down to another checkpoint: if we’re drawing a point less than 0.98 from the origin, it uses the draw function and returns either the Yin or Yang colour. If we’re not (we’re between 0.98 and 1), we return the Border colour.

Little bit rough I suppose but it’ll do. 🙂 It would be possible to hook input variables into things like the size of the dots and then slave those to an F-Curve driven by music to make them pump in time to music, but that would just be crass, wouldn’t it?

Here’s the code. You can figure out how to do the crass part yourself. (Hint: You need to pass a variable from the shader to the draw function to affect the value that’s currently set at 0.125 – maybe calculate the size of the smaller discs as 0.125 + (0.125 * Scale) before doing the region check.)

float draw(point Po) {
    // Small top dot -> Yang (1)
    if (distance(Po, point(0.0, 0.5, 0.0)) < 0.125) {
        return 1;
    }
    // Small bottom dot -> Yin (0)
    else if (distance(Po, point(0.0, -0.5, 0.0)) < 0.125) {
        return 0;
    }
    // Larger bottom circle -> Yang
    else if (distance(Po, point(0.0, -0.5, 0.0)) < 0.5) {
        return 1;
    }
    // Larger top circle -> Yin
    else if (distance(Po, point(0.0, 0.5, 0.0)) < 0.5) {
        return 0;
    }
    // Right half -> Yang
    else if (Po[0] > 0) {
        return 1;
    }
    // Left half -> Yin
    else {
        return 0;
    }
}

shader yinyang(
    point Po = P,
    color Yin = color(0.0),
    color Yang = color(1.0),
    color Border = color(0.0),
    output color Col = 0.0,
    output float Alpha = 0.0
) {
    float Dist = distance(Po, point(0.0, 0.0, 0.0));
    // Outside the unit circle: flag as transparent.
    Alpha = (Dist > 1) ? 1.0 : 0.0;
    if (Dist < 0.98) {
        // Inside the symbol proper: Yin or Yang from the region check.
        Col = draw(Po) ? Yang : Yin;
    }
    else {
        // Between 0.98 and 1.0 (and beyond): border colour.
        Col = Border;
    }
}
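(Oh, alright – purely as a sketch, the hinted tweak might look something like the following. The shader name and the DotScale input are names I’ve made up here, DotScale is meant to be fed from a baked F-Curve, and I haven’t rendered this exact version, so treat it as a starting point rather than gospel.)

float draw(point Po, float DotRadius) {
    // Same region checks as before, but the two small dots use a variable radius.
    if      (distance(Po, point(0.0,  0.5, 0.0)) < DotRadius) return 1;
    else if (distance(Po, point(0.0, -0.5, 0.0)) < DotRadius) return 0;
    else if (distance(Po, point(0.0, -0.5, 0.0)) < 0.5)       return 1;
    else if (distance(Po, point(0.0,  0.5, 0.0)) < 0.5)       return 0;
    else if (Po[0] > 0)                                       return 1;
    else                                                      return 0;
}

shader yinyang_pumping(
    point Po = P,
    float DotScale = 0.0,    // hook the baked F-Curve up to this
    color Yin = color(0.0),
    color Yang = color(1.0),
    color Border = color(0.0),
    output color Col = 0.0,
    output float Alpha = 0.0
) {
    float Dist = distance(Po, point(0.0, 0.0, 0.0));
    // Small dots grow from 0.125 to 0.25 as DotScale goes from 0.0 to 1.0.
    float DotRadius = 0.125 + (0.125 * DotScale);
    Alpha = (Dist > 1) ? 1.0 : 0.0;
    if (Dist < 0.98) {
        Col = draw(Po, DotRadius) ? Yang : Yin;
    }
    else {
        Col = Border;
    }
}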

Categories
Journals

As promised, here’s how I did the talking circles. The material setup is pretty simple – most of the heavy lifting takes place in the script which I’ve included below.

In the node setup, LoFreqInput, MidFreqInput, and HiFreqInput are all driven by an F-Curve baked from the sound file. (Important tip: when baking f-curves from sound, ensure your cursor is at frame 1 – it’ll start baking the sound from wherever the cursor is placed.)

Bake Sound to F-Curves turns the amplitude of a sound file into a series of values from 0.0 to 1.0. It has the following useful parameters:

  • Lowest Frequency: Any activity below this sound frequency will be ignored.
  • Highest Frequency: Any activity above this sound frequency will be ignored.
  • Attack time: How long it takes for a stronger signal to register as a change in value – a larger attack time means more smoothed-out values, while a shorter attack time means the F-Curve responds quicker to changes in amplitude, so jitter is more likely.
  • Release time: How long it takes for a change in signal to tail off – larger release times mean peaks are held for longer, shorter release times mean the peaks drop away quicker.
  • Threshold: How loud the signal has to be before it registers on the F-Curve.

There are also Accumulate, Additive, Square and Square Threshold options, but I haven’t played with those yet. Incidentally, you can call the function from the F-Curve view by pressing Space and typing Bake Sound to. 🙂

I went with the default attack/release values when baking all the curves. In the material setup, LoFreqInput was baked with the Highest Frequency set to 500.0, HiFreqInput with the Lowest Frequency set to 5000.0, and MidFreqInput covered everything in between. Amplitude used all frequencies – basically the defaults.

Here are the relevant bits of the script itself:

shader voiceSpec(
    point Po = P,
    float Scale = 1.0,
    float Offset = 1.0,
    float TwitchScale = 1.0,
    float LoFreqCarrier = 1.0,
    float LoFreqMultiplier = 1.0,
    float MidFreqCarrier = 1.0,
    float MidFreqMultiplier = 1.0,
    float HiFreqCarrier = 1.0,
    float HiFreqMultiplier = 1.0,

    output color Colour = 0.5,
    output float LoFac = 0.0,
    output float MidFac = 0.0,
    output float HiFac = 0.0
) {
    // Distance from the centre, scaled by the Scale input.
    float Distance = distance(point(0.0, 0.0, 0.0), Po) * Scale;

    // One ring factor per frequency band: sine bands that wiggle with the
    // carrier (the baked F-Curve) and fade to black as it drops towards 0.0.
    LoFac = sin(Offset + (Distance * LoFreqMultiplier) + (LoFreqMultiplier * TwitchScale * LoFreqCarrier)) * LoFreqCarrier;
    MidFac = sin(Offset + (Distance * MidFreqMultiplier) + (MidFreqMultiplier * TwitchScale * MidFreqCarrier)) * MidFreqCarrier;
    HiFac = sin(Offset + (Distance * HiFreqMultiplier) + (HiFreqMultiplier * TwitchScale * HiFreqCarrier)) * HiFreqCarrier;

    // Mid goes to red, high to green, low to blue.
    Colour = color(MidFac, HiFac, LoFac);
}

So, once all that stuff is fed into the OSL script, the maths happens. First, it figures out how far from the centre the calculation’s happening with float Distance = distance(point(0.0,0.0,0.0), Po) * Scale; – distance(point1, point2) returns the distance between the two points, and we scale that distance with the Scale parameter into the variable Distance.

We then perform three fairly similar operations to get our red, green and blue rings: sin(Offset + (Distance * FreqMultiplier) + (FreqMultiplier * TwitchScale * FreqCarrier)) * FreqCarrier;

We use an Offset to make sure the factor value stays above zero, then we add together Distance multiplied by FreqMultiplier – the higher the multiplier, the quicker it scales. We then add that to FreqMultiplier multiplied by TwitchScale and FreqCarrier. (FreqCarrier receives FreqInput.) This is the element that makes the rings wiggle back and forth – as the value of FreqCarrier changes with the baked F-Curve, it scales TwitchScale and FreqMultiplier. All of this gets fed into a sine function which creates bands of brightness as the values increase. The quicker the input value increases (thanks to FreqMultiplier), the more bands we get.

(Just so it’s absolutely clear, this combination of variables as well as the actual ring numbers was sheer trial and error on the day. Cycles’ realtime pathtracing mode made them much easier to discover than it would otherwise have been.)

We then multiply the result of the sine function by FreqCarrier so that the bands are brightest at high volume and dark when the F-Curve is at or near 0.0. The reason that we get circular bands from the sine function and not some other shape is that the input to the sine function is being calculated (partly) based off any given position’s distance from a central point. I discovered it’s possible to feed other things into the Po input to warp that relationship to interesting effect – Musgrave noise was pretty spectacular.
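(If you’d rather do the warping inside the script instead of plugging a texture into the Po socket, something along these lines ought to work – WarpAmount is a made-up float input you’d add to the signature, and OSL’s built-in noise() is standing in for the Musgrave texture, so this is a sketch of the idea rather than what I actually rendered. It would replace the Distance line in the shader above.)

    // Nudge the sample position before measuring distance from the centre;
    // the rings then wobble wherever the noise pushes them.
    vector Warp = (vector) noise("perlin", Po * 4.0);
    float Distance = distance(point(0.0, 0.0, 0.0), Po + (Warp * WarpAmount)) * Scale;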

Doing this for low, mid and high frequencies, we combine the results together into a single colour at the end with Colour = color(MidFac, HiFac, LoFac); – the high result goes into the green channel, the mid result goes into the red channel and the low result goes into the blue channel.
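(Purely as a tidy-up, the three near-identical lines plus that final combination could be collapsed into a little helper – this is just the code above restated, sketched rather than tested:)

float ringFac(float Dist, float Offset, float Mult, float Twitch, float Carrier) {
    // Sine bands that get denser as Mult rises, wiggle with Twitch * Carrier,
    // and fade towards black as Carrier drops to 0.0.
    return sin(Offset + (Dist * Mult) + (Mult * Twitch * Carrier)) * Carrier;
}

// ...and then inside the shader body:
// LoFac  = ringFac(Distance, Offset, LoFreqMultiplier,  TwitchScale, LoFreqCarrier);
// MidFac = ringFac(Distance, Offset, MidFreqMultiplier, TwitchScale, MidFreqCarrier);
// HiFac  = ringFac(Distance, Offset, HiFreqMultiplier,  TwitchScale, HiFreqCarrier);
// Colour = color(MidFac, HiFac, LoFac);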

The shader outputs include a colour output as well as separate outputs for each frequency band. (There are also a couple of testing outputs which I don’t really need for a production version.)

Of course, frequency bands are just one possible input. I could bake F-Curves from the overall amplitude of drums, guitar and vocals for the inputs, for instance. Or even kick drum, snare drum and hi-hats if I just wanted to visualise drums.

Anyway. The colour was then given to Emission, the material was applied to a plane with the camera pointing straight down at it, and I set render samples to 5 and bounces to 1. The frames rendered out in under a second each.

I put the sound and still frames together in Blender’s VSE and rendered it directly out to video from there, and that’s that. 🙂

Categories
Journals

Quiet One. I have a transmission from your home planet. Would you like to hear it?

I had a yen to mess around with making concentric circles look like they’re talking. I did this in about half an hour with Open Shading Language. The blue rings are driven by frequencies below 500Hz, red is driven by 500-5000Hz, green is driven by 5000Hz and above.

I wrote a technical breakdown here.

Categories
Journals

After finding this excellent guide to converting GLSL shader routines to OSL, I thought I’d dig up a couple of the simpler but still spiffy-looking ones on GLSL Sandbox for conversion practice.

The successes and failures have both been pretty great. I don’t know what I’m doing; I’m fairly sure I found a bug, but I’ll need to test a bit more to be sure. But yeah. Shaders. Cool stuff. 🙂

Haven’t really got many tips on how to convert – suffice it to say it helps to know how vectors and scalars behave. Also worth knowing is the following.

In GLSL, vec2 pos = (gl_FragCoord.xy / resolution.xy) means that pos contains members pos.x and pos.y with ranges from 0.0 to 1.0.

In OSL, there’s a global point P which you can put into the shader signature as point Po = P. Po will then contain three read-only indices – Po[0], Po[1] and Po[2]. These are floats for X, Y and Z respectively, containing values from 0.0 to 1.0 or close enough.
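(A quick way to sanity-check that range is a throwaway gradient shader – the name here is arbitrary – which dumps Po straight into the red and green channels, much like the classic GLSL trick of writing gl_FragCoord out as colour:)

shader uv_test(
    point Po = P,
    output color Col = 0.0
) {
    // Red ramps with X, green ramps with Y. If the plane shades from black in one
    // corner to yellow in the opposite corner, the 0.0-1.0 range is what you expect.
    Col = color(Po[0], Po[1], 0.0);
}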

You can factor in time by putting float FTime = 1.0 in the shader signature. Once you’ve compiled the shader, hover the mouse over FTime and hit I to insert a keyframe. Then hit up the F-Curves Editor, add a Generator modifier and you’ll have time as frames. If you want the time fed in as seconds instead of frames, set the first-order coefficient (aka the parameter next to the X) to 1/24 or 1/yourframerate. (Drivers do not appear to work for OSL node parameters.)
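(To make the time hookup concrete, a little sketch like this – the shader name and numbers are arbitrary – gives you rings that march outwards once FTime is keyframed and driven by a Generator as described:)

shader ripple(
    point Po = P,
    float FTime = 1.0,    // keyframe this, then drive it with a Generator F-Curve
    output color Col = 0.0
) {
    float Dist = distance(point(0.0, 0.0, 0.0), Po);
    // As FTime climbs, the sine bands slide outwards from the centre.
    Col = 0.5 + 0.5 * sin((Dist * 20.0) - FTime);
}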

Think I might render out a video or two for tomorrow. This stuff is cool in motion.

Originals: http://glsl.heroku.com/e#12424.0 and http://glsl.heroku.com/e#12485.1 – thanks to Stefan Hintz and the anonymous contributor of the other GLSL shader.