These days I’m trying to turn pretty much anything into a time-variant system, so here is my experience with a 1-pole filter. What follows is a short blog post about a couple of days of work in which I experimented with different designs, looking for the best compromise for a filter whose cutoff frequency can be driven by signals.

A 1-pole filter, which is called so because its transfer function has a single pole, is described by the following difference equation:

y[n] = a * x[n] + b * y[n - 1].

Here, y[n] is the output, a is the input coefficient, x[n] is the input, b is the feedback coefficient, and y[n - 1] is the output one sample in the past.
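
For reference, here is a minimal C sketch of that difference equation (the struct and function names are mine, not taken from any of the sources mentioned below, and this is not any particular Pd object):

    /* one-pole filter state and per-sample processing,
       a direct translation of the difference equation above */
    typedef struct { float a, b, y1; } onepole;

    float onepole_tick(onepole *f, float x)
    {
        float y = f->a * x + f->b * f->y1; /* y[n] = a * x[n] + b * y[n - 1] */
        f->y1 = y;                         /* remember y[n - 1] for the next call */
        return y;
    }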

The sign of the feedback coefficient determines the type of the filter and the position of the pole in the spectrum. Positive values result in a low-pass (LP) with a resonance at 0Hz; negative values result in a high-pass (HP) with a resonance at Nyquist, which is half the samplerate (SR).

In order for the filter to be stable, the absolute value of the feedback coefficient must not exceed 1, and the input coefficient is calculated as 1 - |b| so that we have unity gain at the pole.

The cutoff of a filter is defined as the frequency at which the input is attenuated by about 3.01dB, i.e. the half-power point (20 * log10(1/sqrt(2)) ≈ -3.01dB).

The larger the feedback coefficient, the closer the cutoff of the filter will be to its pole. Of course, the relationship between the cutoff and the feedback coefficient is nonlinear, so this is the tricky aspect of the design of this filter.

Let’s consider the LP type.

To determine whether the coefficient calculation is accurate, we can set the cutoff of the filter and the frequency of a sinusoid that we use as input, then calculate the RMS of the output and check whether the attenuation matches the expected value. We will consider the frequency response good if it stays within a ±3% error from the expected attenuation.
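
As a rough sketch of that test in C (not the actual Pd patch I used; the test length, the transient skipping and the choice of feeding a sinusoid exactly at the cutoff, where we expect -3.01dB, are my own arbitrary choices):

    #include <math.h>
    #include <stdio.h>

    /* feed a sinusoid at the cutoff frequency (normalised f_n) through the
       filter and compare the output RMS against the expected -3.01dB */
    void test_cutoff(float a, float b, float f_n)
    {
        double sum = 0.0;
        float y1 = 0.f;
        int n = 1 << 16;
        for (int i = 0; i < n; i++) {
            float x = sinf(2.f * (float)M_PI * f_n * (float)i);
            float y = a * x + b * y1;       /* the difference equation */
            y1 = y;
            if (i >= n / 2)                 /* skip the initial transient */
                sum += (double)y * y;
        }
        double rms = sqrt(sum / (n / 2));
        double expected = 0.5;              /* RMS of a unit sine (1/sqrt(2)) attenuated by 3.01dB */
        printf("error: %+.2f%%\n", 100.0 * (rms - expected) / expected);
    }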

The first experiments that I performed used the implementation from The Audio Programming Book by Boulanger and Lazzarini. The formula for the calculation of the feedback coefficient uses trigonometric functions and is the following:

sqrt((2 - cos(w))^2 - 1) - 2 + cos(w)

where w is the angular frequency defined as 

2 * π * f_Hz / SR.

Please note that the design above produces negative coefficients. That is because it is based on a difference equation prototype where the feedback term carries a negative sign, so to use the coefficients in the difference equation above we simply flip them back into positive values.
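
In C, and with the sign already flipped so that the result plugs into the difference equation above, that calculation could look like this (the function name is mine):

    #include <math.h>

    /* Boulanger/Lazzarini-style LP feedback coefficient, sign-flipped to positive */
    float coef_lp_trig(float f_hz, float sr)
    {
        float w = 2.f * (float)M_PI * f_hz / sr;   /* angular frequency */
        float c = cosf(w);
        float b = -(sqrtf((2.f - c) * (2.f - c) - 1.f) - 2.f + c);
        return b;                                  /* input coefficient: a = 1 - b */
    }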

The frequency response of a digital filter is dependent on the SR, so we can use the normalised frequency (let’s call it f_n) defined as

f_Hz / SR

to have a unit which consistently describes the system regardless of its SR. Given that the maximum frequency in a digital system is Nyquist, the normalised frequency will be in the [0;.5] range.

With the design described above, the filter implemented in Pure Data Vanilla has a good frequency response from about .00010721 (~4.7281Hz at 44.1kHz SR) all the way up to .5 (Nyquist), although the closer we get to the lower value, the more the coefficient values quantise into a staircase, leaving less resolution for the fractional part of the cutoff. Below that value the response of the filter worsens, and once the cutoff drops below ~.000067303 (~2.96807Hz) the coefficients are clipped to 1, meaning that from that point downward the filter will just output DC (0Hz).

Another design that I tried, which also uses trigonometric functions, is the one from Cliff Sparks. The formula that he uses is the following:

2 - cos(w) - sqrt((cos(w) - 3) * (cos(w) - 1)).

This filter behaves very similarly to the one from Boulanger and Lazzarini, although the cutoff below which the coefficients are clipped to 1 is here .0000389273 (~1.7167Hz).
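
A sketch of Sparks’ formula in C (again, the function name is mine); note that it is algebraically equivalent to the previous one, just arranged so that the result comes out positive:

    #include <math.h>

    /* Cliff Sparks' LP feedback coefficient: same maths, already positive */
    float coef_lp_sparks(float f_hz, float sr)
    {
        float c = cosf(2.f * (float)M_PI * f_hz / sr);
        return 2.f - c - sqrtf((c - 3.f) * (c - 1.f));
    }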

Miller Puckette in his implementation uses a formula that is an approximation:

1 - w

where w, like we said above, is the angular frequency.

That is a computationally very efficient calculation which actually behaves rather well in the very low frequency range, roughly from .00000023692 (~.0104Hz at 44.1kHz) – which corresponds to the largest possible coefficient below unity in PD (.999999) – up to .00644731 (~284.326Hz). However, the coefficients need to be clipped to 0 above ~.15915, or they would turn negative.
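
A sketch of that approximation, with the clipping described above (the function name and the way the clipping is written are mine, not Puckette’s code):

    #include <math.h>

    /* Puckette-style approximation: b = 1 - w, clipped so it never goes negative;
       for extremely low cutoffs one may also want to keep it just below 1 */
    float coef_lp_approx(float f_hz, float sr)
    {
        float b = 1.f - 2.f * (float)M_PI * f_hz / sr;
        return b < 0.f ? 0.f : b;
    }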

After a look online, I found a blog post by Nigel Redmon who proposes the implementation from the book Musical Applications of Microprocessors by Hal Chamberlin. In a comment, Nigel also describes the reasoning of the book’s author, who proposed this implementation:

“He notes that a one-pole filter is a leaky integrator, which is equivalent to an analog R-C low-pass filter. He also notes that it’s well known that capacitor voltage discharge follows an inverse exponential curve: E = E0 * exp(-T/RC), where E is the next voltage after period T, given the initial voltage E0; R and C are the resistor and capacitor values in ohms and farads. The cutoff frequency is defined by Fc = 1 / (2πRC); substituting, we get E = E0 * exp(-2πFcT). The time change, T, is the sample period in the digital system, so converting to sampling frequency, Fs = 1/T, we get E = E0 * exp(-2πFc/Fs). To adjust for a gain of 1 at DC (0 Hz), we set the input multiplier to 1 minus that value.”

Fc/Fs is the normalised frequency (f_n), so the final formula for the feedback coefficient is:

exp(-2 * π * f_n).
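
Which, in C, is simply (function name mine):

    #include <math.h>

    /* Chamberlin-style feedback coefficient: b = exp(-2*pi*f_n), with f_n = f_Hz / SR */
    float coef_lp_exp(float f_n)
    {
        return expf(-2.f * (float)M_PI * f_n);
    }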

This is by far the best compromise as, like Puckette’s implementation, it has a very good response in the very low range while keeping a reasonably good response throughout the whole spectrum.

Are the formulae using trigonometric functions wrong? No. They would actually give the most accurate response in a different environment.

Pure Data is an amazing piece of software: it is open-source, reliable and efficient, although its internal resolution is (as of yet) single-precision, and that results in inaccurate coefficients when using trigonometric functions. Besides, if one wanted to implement the filter in the audio domain, the cosine function ([cos~]) is table-based, and that would result in even less accurate values than those given by the message-domain one ([cos]).

The math operators involved in Puckette’s and Chamberlin’s implementations are consistent in both the message and audio domains, so they are good candidates for the time-variant version of the filter.

Puckette’s formula is the least computationally expensive, so it is probably the best one if all you care about is filtering in the very low range.

Chamberlin’s implementation involves a call to an exponential function, but it is still a reasonable option considering that the filter works across the entire spectral range.

And considering that flipping the sign of the feedback coefficient mirrors the frequency response around the centre of the spectrum (SR/4), a HP filter based on that implementation can easily be obtained by changing the sign of the coefficient while mirroring the cutoff in the same way:

-exp(-2 * π * (.5 - f_n)).
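
Or, as a C sketch mirroring the LP version above (function name mine):

    #include <math.h>

    /* HP variant: negative feedback coefficient, cutoff mirrored around SR/4;
       the input coefficient stays a = 1 - |b| */
    float coef_hp_exp(float f_n)
    {
        return -expf(-2.f * (float)M_PI * (.5f - f_n));
    }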