Tuesday, December 22, 2020

Reaper clipping, limiting, and loudness for mastering.

Reaper includes some tools for managing loudness and peaks. Let's look at them and what they do.

Background.

Mastering is the process of preparing completed songs for distribution: in particular, adjusting the overall EQ and managing the loudness and peaks. For digital distribution, you generally want to target a specific loudness, such as -14 LUFS, and keep peaks below -1dB.

Hard clipping happens when the peaks of a loud signal are clamped. There is a sharp bend in the waveform, which introduces aliasing and harsh distortion. For content such as sine waves, piano, and voice, this distortion will be very audible. For thick and dirty pop and rock, hard clipping might not be noticed, or might even yield a pleasing result.

Soft clipping is any method that squishes the peaks down as they approach or exceed the 0dB limit. Soft clipping avoids the sharp bend and harsh distortion of hard clipping, but it still affects the sound: there might be some subtle harmonic distortion, or you might notice that some of the sharpness of the mix is reduced.
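To make the difference concrete, here is a toy sketch in JavaScript (not any particular plugin's curve; tanh is just a common stand-in for a soft-clip shape), with 1.0 linear amplitude standing for 0dBFS:

```javascript
// Hard clip: clamp the sample at the ceiling, leaving a flat top
// and a sharp corner in the waveform.
function hardClip(x, ceiling) {
  return Math.max(-ceiling, Math.min(ceiling, x));
}

// Soft clip: tanh bends peaks smoothly toward +/-1 instead of
// slamming them into a corner.
function softClip(x) {
  return Math.tanh(x);
}

// A sample 6dB over full scale (about 2.0 linear):
console.log(hardClip(2.0, 1.0)); // 1 -- clamped flat, sharp bend
console.log(softClip(2.0));      // ~0.96 -- rounded over smoothly
console.log(softClip(0.1));      // ~0.0997 -- quiet signals pass nearly untouched
```

The flat top and sharp corner are where the aliasing and harsh distortion come from; the smooth curve trades them for the milder coloration described above.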

When mastering, it is common that after using gain to achieve the desired loudness, there will be peaks above -1dB that need to be clipped, using hard and/or soft clipping (among the many other methods in the DAW toolkit). Some of these plugins include controls for loudness (gain into the clipping) and peak ceiling (gain after the clipping). These are generally the same as what you would get by adding gain plugins before and after. The only thing that ultimately matters is the final output: is it loud enough, are the peaks below -1dB, and is it free of undesired audible artifacts?
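A sketch of that equivalence, assuming a bare hard clip at 0dBFS with the loudness and ceiling controls modeled as plain dB gain stages around it:

```javascript
// dB to linear multiplier: gain = 10^(dB/20).
const dbToGain = (db) => Math.pow(10, db / 20);
// Hard clip at 0dBFS (1.0 linear).
const clip = (x) => Math.max(-1, Math.min(1, x));

// "Loudness" is just gain before the clip; "ceiling" is just gain after it,
// exactly what separate gain plugins around a clipper would do.
function loudnessClipCeiling(x, loudnessDb, ceilingDb) {
  return clip(x * dbToGain(loudnessDb)) * dbToGain(ceilingDb);
}

// A 0dBFS peak driven 6dB hotter gets clamped at 1.0, then the -1dB
// ceiling guarantees the final peak sits at -1dBFS (~0.891 linear):
console.log(loudnessClipCeiling(1.0, 6, -1)); // ~0.891
// A quiet sample just passes through the two gain stages:
console.log(loudnessClipCeiling(0.1, 0, 0)); // 0.1
```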

Setup

A note at C3 into ReaSynth (triangle wave) at 0dB, into an oscilloscope. This shows the mapping of input level to output level.


Event Horizon Clipper

"Threshold" boosts gain and introduces hard clipping

"Ceiling" is just a volume duck applied after the hard clipping

"Soft Clip" is a buggy mess that introduces additional unnecessary discontinuities.

Never put this in your mastering chain!


Event Horizon Limiter/Clipper

"Threshold" boosts gain and introduces hard clipping where peaks hit 0dB

"Ceiling" is a gain attenuation applied after the hard clipping

"Release" has no visible effect, even at relevant time scales

"Ceiling" by itself is just a gain attenuation. Moving "Threshold" down loudens at the expense of hard clipping. Hard clipping will be very audible on many types of content, and there are probably better ways to louden. It might be useful at "Threshold -1, Ceiling -1" as a last line of defense to guarantee no peaks above -1dB [common advice to avoid problems that digital music services might have when encoding a file that hits 0dB]. But ideally you want to have your peaks under control before hitting this plugin... in which case you wouldn't need it at all. If the input meter recorded a peak above -1dB, then clipping happened somewhere, and it would be audible on a sine wave. If you see an input peak of +6dB, this plugin probably introduced aliasing artifacts.


JS Soft Clipper / Limiter

Bends the tops of peaks so they don't stick as far up. Imposes a hardclipping limit on the output signal. Below -6dB the response is linear, so it will only impact the louder parts of the peaks.

"Boost" boosts gain going into the curve, increasing loudness, pushing peaks further into the bend, and possibly introducing or increasing hard clipping.

"Output Brickwall" moves the hardclipping threshold down, without decreasing loudness. This makes the bend more extreme, and makes it more likely that your peaks hit the hardclipping threshold. e.g. if my peaks are exactly 0dB and I pull the brickwall down to -1dB, the top 1dB of my peaks is now hardclipped.

With the controls at 0, if the input peak is 0dB, the output peak will be bent down to -1.9dB, giving you more headroom. From there, I can use Boost to add 4dB of fairly transparent loudness, which brings the output peaks back up to 0dB. At this point any further boost, pulling down the brickwall, or higher input peaks will introduce hardclipping.

This plugin is a nice option because it allows some loudening before hardclipping. If the input peaks are not under control, hardclipping will be introduced.
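A hypothetical curve in this style (not Reaper's actual formula): linear below 0.5 linear (about -6dB), then a smooth bend that approaches 1.0, with the brickwall clamp applied last:

```javascript
// Illustrative only: below 0.5 the curve is the identity; above it,
// y = 1 - 0.25/x joins with matching slope at x = 0.5 and approaches
// 1.0 asymptotically. The brickwall is a plain hard clamp at the end.
function softClipper(x, brickwall = 1.0) {
  const s = Math.sign(x), a = Math.abs(x);
  const bent = a <= 0.5 ? a : 1 - 0.25 / a;
  return s * Math.min(bent, brickwall);
}

console.log(softClipper(0.3));      // 0.3 -- below -6dB, untouched
console.log(softClipper(1.0));      // 0.75 -- peak bent down, headroom gained
console.log(softClipper(1.0, 0.6)); // 0.6 -- lower brickwall hard-clips the bend
```

This matches the behavior described above: boosting pushes samples further up the bend, and pulling the brickwall down re-introduces hard clipping at the top of the curve.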


JS ReaLoud

Implements a curve map which soft clips, guaranteeing output peaks never exceed 0dB, and adds 3dB of loudness in the process. The curve is linear until around -3dB, so only the highest peaks are modified.

"Mix" fades the effect in. At 0% the signal is untouched, at 50% there will be 1.5dB of loudness added. Note that the 0dB clipping is always applied, and the lower the mix value the harder the clipping (the sharper the angle).

There is no free lunch; the bending of the curve on peaks will color the sound somewhat, and it kicks in pretty hard at -3dB (compared to the -6dB of the soft clipping part of JS Soft Clipper). But even though extreme peaks will end up completely flat (which is not a natural place for the speaker cone to sit), the curve bends smoothly into flat, so it doesn't introduce sharp aliasing artifacts the way that hardclipping does.

The 3dB of loudness gain is not only arbitrary but almost beside the point. This plugin gives you a nice soft clipping to tame your peaks. Put a -3dB gain in front of it, and you cancel out the loudness boost. If you need more loudness, add a linear gain in front of it and drive it to the desired loudness.

There is also a version with a lowpass filter.

Recommended. Set mix to 100% to avoid hardclipping. Follow with a -1dB attenuation to avoid 0dB peaks in the output file. Put a pre-gain in front of it to adjust the desired loudness. Start with -3dB to avoid driving the curve. Measure the LUFS of the output file, and increase the pre-gain as much as necessary to hit -14 LUFS. Always double-check with your ears.
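Since adding X dB of clean gain raises integrated loudness by the same X LU, the pre-gain correction is simple arithmetic. This is only a first-order estimate (pushing peaks further into the curve shifts the result slightly), so re-measure after adjusting:

```javascript
// How much to add to the existing pre-gain, given a loudness measurement.
function preGainAdjustment(measuredLufs, targetLufs = -14) {
  return targetLufs - measuredLufs; // in dB
}

// Render measured at -17 LUFS while targeting -14 LUFS:
console.log(preGainAdjustment(-17)); // 3 -- add 3dB of pre-gain and re-render
console.log(preGainAdjustment(-14)); // 0 -- already on target
```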



JS Louderizer

"Mix" fades the effect in.

"Drive" adjusts the curve.

Mix with no Drive does nothing, including no clipping.

Drive with no Mix does nothing, including no clipping.

With Drive at 100%, slowly bringing the Mix in bends the curve.

Drive 100 Mix 100 looks the same as ReaLoud for inputs less than 0dB, but overdriving it produces a negative response!

Strictly inferior to ReaLoud and rather useless for mastering.







Friday, June 26, 2020

Musical Time in After Effects (BPM, beat, phase, loop)


Much of modern music is generated to a strict metronome, or on a grid in a music editor. So if we know the precise BPM and where the first beat is, we can predict the exact location of every beat in the song, and extrapolate counters for bars, loops, and cycles.

Editing keyframes to music by hand is time-consuming and error-prone. It is easy enough to create a phrase of keyframes and add a loopOut() expression. However, the keyframes and the looping are limited to the accuracy of a single frame, and a frame is pretty large in musical time. You might have found yourself having to choose between the frame just before or just after a sound happens, searching for the best alignment. Now imagine that amount of slack multiplied tens or hundreds of times as the frame rounding errors accumulate every loop. At 90 BPM and 30fps, each beat is 20 frames. Assume half a frame of rounding error for a one-beat loop (imagine something throbbing with a 4/4 kick drum) and after a few seconds it is visibly out of sync. You can move to larger 8- and 16-bar loops, but the drift will still be ruinous by the end of the song.
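The arithmetic behind that example:

```javascript
// At 90 BPM and 30fps, each beat spans a whole number of frames...
const bpm = 90, fps = 30;
const framesPerBeat = fps * 60 / bpm;
console.log(framesPerBeat); // 20

// ...but assume the looped phrase is off by half a frame. Repeating a
// one-beat loop accumulates the error linearly:
const errorFrames = 0.5;
const loops = 8; // 8 beats at 90 BPM is just over 5 seconds
console.log(errorFrames * loops); // 4 -- four frames out of sync already
```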

A solution to this is to calculate (or be told) the BPM to 4 or 5 significant digits (e.g. 98.265). In a music editor you can visually and audibly verify that every single beat of the song is accurately located by this BPM. Then use some scripting in After Effects to calculate everything in floating point from this very accurate BPM.

"Music BPM.ffx" is a preset that does some basic calculations to provide accurate musical beat and bar count and phase. You can then pick-whip these and use simple expressions to design musical movements. Because everything is calculated in floating point from your very accurate BPM, everything will stay perfectly in sync.

Using the preset

Get a very accurate BPM value, and if the beat doesn't start at 0:00 also a frame-accurate time value for where the beat drops. Put the audio file in your project at 0:00, add the "Music BPM" preset. You will get the following fields:
  • input_BPM: put your very accurate BPM here
  • input_Downbeat: put here where the beat drops, in seconds
  • input_BeatsPerBar: most music works well with 4, but 8 or 16 might be better defaults for some music, or you can put odd numbers here for odd time signatures
  • BeatCount: outputs a beat counter, which starts negative, reaches 0 when the downbeat happens, then counts up from there. You can use this to mark different sections of the song, and to calculate different cycles and rhythms.
  • BeatPhase: ramps from 0 to 1 every beat
  • BarPhase: ramps from 0 to 1 every bar
  • BarBeats: ramps from e.g. 0 to 3 every bar. Sometimes easier to think about than pure phase.
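To be concrete about what these outputs mean, here is a sketch of the calculations such a preset performs. This is my assumption of the internals, written as plain functions of time rather than slider expressions:

```javascript
// All outputs derive from BPM and the downbeat time in floating point,
// so no frame-rounding error ever accumulates.
function musicTime(time, bpm, downbeat, beatsPerBar) {
  const beatCount = (time - downbeat) * bpm / 60; // negative before the drop
  const mod = (x, n) => ((x % n) + n) % n;        // true modulo, safe for negatives
  return {
    beatCount: beatCount,
    beatPhase: mod(beatCount, 1),                       // 0..1 every beat
    barBeats: mod(beatCount, beatsPerBar),              // e.g. 0..3 every bar
    barPhase: mod(beatCount, beatsPerBar) / beatsPerBar // 0..1 every bar
  };
}

// 120 BPM, downbeat at 4.0s, 4/4 -- half a second (one beat) after the drop:
console.log(musicTime(4.5, 120, 4.0, 4).beatCount); // 1
// One beat before the drop, the counter is negative but barBeats still cycles:
console.log(musicTime(3.5, 120, 4.0, 4).barBeats);  // 3
```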

Some recipes

Throb with the beat

Add a size expression to your target layer. Pick-whip the "BeatPhase" slider, and edit the expression to the following. Use linear() to set the limits for the response, because:
a) size is on a scale of 100 while phase is on a scale of 1
b) it looks bad to go all the way to 0, so pick tasteful endpoints
c) we want to map the beginning of the phase (0) to a larger scale.
Since size is 2D and the phase is 1D, we build [s, s] at the end.

s = linear(thisComp.layer("song").effect("BeatPhase")("Slider"), 120, 100); [s, s]

Punchy Throb

"1 - phase" gives a ramp down from 1 to 0; taking it to the third power pulls values less than 1 quickly towards 0. Since we already have a ramp down, we don't need to swap the min and max arguments to linear(). This gives us a curve that stays mostly near 100 with a strong punch to 120 on the beat. If instead we take Math.pow(BeatPhase, 3) and use linear(s, 120, 100), the curve spends most of its time near 120, only falling off towards 100 right before the next beat hits. Two different vibes, each good in their own way.

s = Math.pow(1-thisComp.layer("song").effect("BeatPhase")("Slider"), 3);
s = linear(s, 100, 120);
[s, s]

Alternate

s = Math.pow(thisComp.layer("song").effect("BeatPhase")("Slider"), 3);
s = linear(s, 120, 100);
[s, s]

Rotate once per bar

360 * thisComp.layer("song").effect("BarPhase")("Slider") 

Rock back and forth with the beat, smoothly

Use cos() so it starts at an extreme (sin will start centered) and -10 so it starts at the left.
-10 * Math.cos(2*Math.PI*thisComp.layer("song").effect("BarPhase")("Slider") )

Rock back and forth with the beat, jumping from left lean to right lean

Use the (condition ? true_value : false_value) ternary operator to pick one of two values. In this case sin() is a better fit for the timing we want.
-10 * (Math.sin(2*Math.PI*thisComp.layer("song").effect("BarPhase")("Slider") ) > 0 ? 1 : -1)

Change color with the beat

Reduce the BeatCount modulo 4 into the range [0, 4). This ends up being the same as BarBeats, but for more complicated motions you will often end up working directly with BeatCount, so that's what we do here. Use this number to cycle white-yellow-red-black at full opacity. I decided that I didn't want the colors to flash during the intro, so I added some logic to keep the color black until the beat dropped. For a more complicated song, I will use an expression controller named "HasBeat" and keyframe it on and off as the drums come in and out, so I can pick-whip it into expressions to disable them when there is no beat.

beatCount = thisComp.layer("song").effect("BeatCount")("Slider");
beatMod = (beatCount % 4 + 4) % 4;
c = [0, 0, 0, 1];
if (beatCount >= 0) {
  if        (beatMod < 1) { c = [1,1,1, 1]; }
  else if (beatMod < 2) { c = [1,1,0, 1]; }
  else if (beatMod < 3) { c = [1,0,0, 1]; }
}
c

Draw a custom keyframe curve, and have it repeat in perfect time

In this case the song had a 16-bar piano loop, and I wanted to respond to a few swells without the jitter of converting the audio directly to keyframes. I created a layer "Piano Curve" and made sure I had the piano loop lined up at 0:00 on the timeline. Looking at the waveform and listening to the loop, I keyframed the size to swell with the piano line. I then hid the "Piano Curve" layer, deleted the audio layer I had used as a guide, and put the following expression on my target layer.

loopBeats = 16;
loopSec = loopBeats * 60 / thisComp.layer("song").effect("input_BPM")("Slider");
downbeat = thisComp.layer("song").effect("input_Downbeat")("Slider");
loopedTime = ((time-downbeat) % loopSec + loopSec) % loopSec;
thisComp.layer("Piano Curve").transform.scale.valueAtTime(loopedTime)



Notes

% is remainder, and will go negative for negative numbers. For true modulo, use (x % N + N) % N.

beatLen = 60 / bpm
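Both notes in expression form. After Effects expressions are JavaScript, where % keeps the sign of the left operand:

```javascript
// % is remainder, not modulo: negative inputs stay negative.
console.log(-3 % 4); // -3 -- not what you want for a repeating cycle

// True modulo wraps negatives into [0, N):
const mod = (x, n) => ((x % n) + n) % n;
console.log(mod(-3, 4)); // 1

// Beat length in seconds from BPM:
const beatLen = (bpm) => 60 / bpm;
console.log(beatLen(120)); // 0.5
```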


Saturday, June 13, 2020

Modul8 / Modul8 Module Best Practices

Best Practices for Writing Modul8 Modules:

  • Variables are camelCase
  • Put all your code in Init(). Define functions like handleKeyword(keyword, param, layer) and the only code in the KeywordEvent() block should be to call your handleKeyword() function. Keeping all the code on one screen makes it easy to audit, rename variables, and understand what the code is doing. You can also use "return" to skip code, instead of indenting the rest of the method.
  • Add a "module active" toggle button which defaults to off and auto-serializes. If this button is off, your module does nothing. Modul8 does not save which modules are active on a per-project basis. So if users have a lot of different projects using different modules, it is a common problem that an old project won't work until they figure out which modules to turn on and off. Adding an active toggle to your module lets them save that information as part of the project and there is no need to actually turn off the modules at the modul8 level. This is especially important for modules that have periodical, direct event, or keyword actions, because those might screw up someone's project. But it is also nice for performance reasons, if your module does a lot of GUI updates, to be able to skip them when the module is turned off.
  • Use versions on your modules, and put a changelog in the description. You can use option-Enter to add newlines to the description box.
  • Use the names of GUI elements rather than the messages. Specifically, in MessageEvent() ignore 'msg' and use param['NAME'] instead. The names are easier to find in the GUI editor, and you have to use the name when modifying the GUI from code. Having different values for the message in the "Script Connect" area, or having to keep the messages and names in sync, is bothersome and an easy source of bugs. You do have to put something in the "script connect" box, but as long as you ignore the 'msg' variable in your code it doesn't matter what it is.
  • Logging script output is very CPU-intensive, so comment out all your debug messages when done. If your module outputs to the console as part of its feature set, then it should have a "module active" button that defaults to False.
  • If your module will have a (global) and (layer) version, define a variable "global = False" and write your code to handle both modes. Anytime you update the code, copy it from the (layer) module into the (global) module and change the value to True. Alternatively, copy the .m8m file on disk, which will include the layout as well as the code; then you only need to update the 'global' variable to True.
  • Consider ignoring GUI changes during startup. If you have a fader that drives some keywords, but might be out of sync because the user changed those keywords through the main modul8 GUI, then you should ignore the fader position on load. Otherwise, you will override the values in the stored project: "hey, why are these values different than I left them??". This is especially important if your module doesn't have an [active] button which defaults to off. Define "finishedInit = False" in Init() and then set it true in PeriodicalEvent(); empirically, MessageEvent of saved state will run before PeriodicalEvent, so if not finishedInit you know it is due to project load and not an explicit user interaction. Turning off auto-serialize for certain GUI elements is another way to address this, although you will then lose the information about that element's value at time of save.
Best Practices for Modul8:
  • Use LB Notes and write down how your projects are structured, which modules are being used, and which MIDI and keyboard bindings you use.