LIES (sysmap) 1.0

The photo above (kindly taken by Athena Zhu) is from the premiere of the LIES (sysmap) 1.0 project, performed at the Edinburgh College of Art on the 28th of March 2016.

This project builds on my previous performance LIES (topology/nodes) 2.0, although there are some substantial differences which make them two independent works.

The first thing to mention is that, in the new project, some of the DSP techniques in LIES (t/n) 2.0 have been replaced with new ones, while the others are of the same type but have been modified and improved. For example, two of the new DSP units are based on pulse-width modulation and filter modulation, the latter being a simple yet effective process where the type (high-pass or low-pass) of a filter and its cutoff change according to the input signal. Granulators and samplers have been modified and now implement variable buffer sizes and windowing functions, and nested FM modules have a variable nonlinear transfer function. Furthermore, each DSP unit consists of up to eight cascaded subunits and is thus capable of generating different streams for multi-channel setups.
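As a rough illustration of the filter-modulation idea (a minimal sketch, not the actual code of the piece; the envelope follower, the coefficient update and all constants are my own assumptions), a one-pole filter can have its cutoff driven by the input's amplitude envelope while the output crossfades between its low-pass and high-pass responses:

```python
import numpy as np

def filter_modulation(x, sr=44100, min_fc=80.0, max_fc=8000.0):
    """Hypothetical sketch: the cutoff and the type (low-pass vs. high-pass)
    of a one-pole filter are both derived from the input signal itself."""
    y = np.zeros_like(x)
    lp = 0.0    # one-pole low-pass state
    env = 0.0   # amplitude envelope state
    for n, s in enumerate(x):
        env = 0.999 * env + 0.001 * abs(s)            # crude envelope follower
        drive = min(env * 4.0, 1.0)                   # normalised control value
        fc = min_fc + (max_fc - min_fc) * drive       # envelope -> cutoff
        a = 1.0 - np.exp(-2.0 * np.pi * fc / sr)      # one-pole coefficient
        lp += a * (s - lp)                            # low-pass output
        hp = s - lp                                   # complementary high-pass
        y[n] = (1.0 - drive) * lp + drive * hp        # envelope also picks the type
    return y

# Example: a swelling noise burst gradually turns the filter from LP to HP
noise = np.random.uniform(-1, 1, 44100) * np.linspace(0, 1, 44100)
out = filter_modulation(noise)
```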

In general, another fundamental difference is that the DSP algorithms have been designed so that all their parameters operate at sample rate: the system is now fully time-varying and every state variable of each subunit depends on the incoming signal.
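To make the sample-rate aspect concrete, here is a minimal and entirely hypothetical sketch of a feedback delay in which the delay time and the feedback gain are recomputed on every sample from the incoming signal, rather than once per control block; none of this is the actual code of the piece.

```python
import numpy as np

def sample_rate_delay(x, sr=44100, max_delay=0.05):
    """Hypothetical sketch: every parameter (delay time, feedback gain)
    is updated at sample rate from the input, so the unit is time-varying."""
    buf = np.zeros(int(sr * max_delay) + 1)   # circular delay buffer
    y = np.zeros_like(x)
    env = 0.0
    w = 0                                     # write index
    for n, s in enumerate(x):
        env = 0.9995 * env + 0.0005 * abs(s)        # per-sample envelope
        delay = (0.2 + 0.8 * env) * max_delay * sr  # input-dependent delay time
        fb = 0.3 + 0.65 * env                       # input-dependent feedback gain
        r = (w - int(delay)) % len(buf)             # read index
        y[n] = buf[r]
        buf[w] = s + fb * y[n]                      # feedback path
        w = (w + 1) % len(buf)
    return y
```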

Lastly, and most importantly, this project implements the idea of systemic mapping, which essentially consists in establishing positive and negative feedback relationships between the parameters of each DSP module. This idea is realised through a one-to-many mapping strategy: a single fader with a [0;1] range, which I control during the performance, is connected to the parameters of the DSP units according to their characteristics, so that different states of the system can be explored while keeping the same kind of relationships between the parameters.
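A minimal sketch of how such a one-to-many mapping could be organised, under the assumption that each parameter is described by its range and by the sign of its relationship to the fader (parameter names and values are hypothetical, not those of the piece):

```python
def systemic_map(fader, params):
    """Map one [0, 1] fader onto many parameters: a '+' relationship
    follows the fader, a '-' relationship opposes it."""
    out = {}
    for name, (lo, hi, sign) in params.items():
        t = fader if sign == '+' else 1.0 - fader   # invert for negative relationships
        out[name] = lo + t * (hi - lo)
    return out

# Hypothetical parameter set for one granulator subunit
granulator = {
    "grain_density": (2.0, 200.0, '+'),   # grains per second
    "grain_size":    (0.005, 0.5, '-'),   # seconds
    "feedback_gain": (0.0, 1.2, '+'),
}

print(systemic_map(0.75, granulator))
```

Moving the single fader then pushes some parameters up and others down at the same time, which is where the positive and negative feedback relationships between them come from.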

One of the focuses of my work is the implementation of autonomous systems with complex behaviours. A necessary aspect of achieving autonomy in feedback systems is self-oscillation. Such a condition depends on two main factors, among others: the amount of energy that the processes allow to recirculate in order to establish a self-sustaining state (given that the feedback coefficients are greater than one), and the spectral content of that energy, namely whether it is located in the same areas of the spectrum as the poles/resonances of the system (at that specific time). Particularly in feedback systems, these two factors are closely interrelated and basically all DSP parameters can affect both of them, so it is possible to focus on either of the two to establish the counterbalancing or imbalancing mechanisms among pairs of variables. Intuitively, the criterion for the first factor might be that the higher the amount of energy, the higher the chances of self-oscillation. For the second factor, things are less straightforward considering that the system is nonlinear and time-varying, so it is not always easy to know the exact position of the poles, although we can assume that a wider spectral content results in higher chances for the system to resonate and self-oscillate. This criterion would also be consistent with von Foerster’s order-from-noise principle, or Prigogine’s order-out-of-chaos principle, according to which non-periodic and large fluctuations, as in rich spectra, can trigger self-organisation more easily. It is important to note that a self-oscillating feedback system needs to be stable. In some cases, power-preserving matrices are used for this purpose. I have not implemented that technique yet, although I am quite happy with the results achieved using look-ahead limiters.
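On the stability side, a toy look-ahead limiter along these lines (my own reduced sketch, not the limiter used in the performance) is enough to keep a loop with gain above one from blowing up: the signal is attenuated before a peak arrives because the gain computer can see a few milliseconds ahead in the buffer.

```python
import numpy as np

def lookahead_limit(x, sr=44100, ceiling=0.9, lookahead_ms=5.0, release=0.9995):
    """Toy look-ahead limiter: gain reduction reacts to peaks found within a
    short window ahead of the current sample."""
    la = int(sr * lookahead_ms / 1000.0)
    padded = np.concatenate([x, np.zeros(la)])   # so the window never runs out
    gain = 1.0
    y = np.zeros_like(x)
    for n in range(len(x)):
        peak = np.max(np.abs(padded[n:n + la + 1]))            # peak in the look-ahead window
        target = 1.0 if peak <= ceiling else ceiling / peak
        if target < gain:
            gain = target                                      # attack instantly
        else:
            gain = release * gain + (1.0 - release) * target   # release slowly
        y[n] = x[n] * gain
    return y

# Example: a signal pushed above unity stays below the ceiling
out = lookahead_limit(1.5 * np.sin(2 * np.pi * 440 * np.arange(8820) / 44100))
```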

Now, to give a practical example of what a counterbalancing or imbalancing mechanism could be with regard to the energy amount or the spectral content, let’s consider two parameters in a granulator: grain density and grain size. Concerning the energy amount, density and size are both directly proportional to the energy flowing in the system. Concerning the spectral content, density is directly proportional to the richness of the spectrum, whereas size is inversely proportional (the time-bandwidth inverse proportion dates back to 1947 with Gabor’s seminal paper on “acoustical quanta”). Thus, in the first case, counterbalancing translates into having parameters that move in opposite directions, while in the second case it corresponds to the parameters moving in the same direction. From this, it is clear that some inconsistencies may arise: for example, it would not be possible to have a counterbalancing behaviour for both the time domain (energy per unit time) and the spectral content. This, though, is not necessarily a problem, as having opposite relationships in a pair of variables with regard to the two domains could be the desired effect, so it is a matter of keeping track of these constraints in order to reach an overall configuration which satisfies the desired effects.
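Sticking with the granulator, the two criteria lead to opposite sign assignments for the same pair of parameters. The sketch below (hypothetical names and ranges, reusing the same mapping idea as above) spells out the two configurations:

```python
def map_fader(fader, params):
    """Same one-to-many mapping as in the earlier sketch."""
    return {name: lo + ((fader if sign == '+' else 1.0 - fader) * (hi - lo))
            for name, (lo, hi, sign) in params.items()}

# Counterbalancing with respect to the energy amount: density and size oppose
# each other, so the energy injected per unit of time stays roughly constant.
energy_counterbalance = {
    "grain_density": (2.0, 200.0, '+'),   # grains per second
    "grain_size":    (0.005, 0.5, '-'),   # seconds
}

# Counterbalancing with respect to the spectral content: density and size move
# together, as longer grains are narrower in bandwidth and compensate for the
# enrichment brought by a denser stream.
spectrum_counterbalance = {
    "grain_density": (2.0, 200.0, '+'),
    "grain_size":    (0.005, 0.5, '+'),
}

print(map_fader(0.8, energy_counterbalance))
print(map_fader(0.8, spectrum_counterbalance))
```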

Empirically, my attempt for the performance at ECA was to have a roughly equal number of positive and negative feedback relationships in each DSP unit. This way, the interplay between these relationships should keep the system on the edges of its state transitions, which, in turn, should enhance variety and complexity. A more deterministic approach to connecting the parameters while obtaining a roughly equal number of the two types of relationships could be the following. Assuming that a DSP unit has the parameters A, B, C, D, E, F, G, and that we base the connections on only one of the two main factors determining self-oscillation (energy amount or spectral content), we can apply the counterbalancing relationship to successive pairs: A-B, B-C, …, F-G. This way, the relationship which started at A-B will flip between negative and positive each time the function is applied and, relative to each parameter, roughly half of the remaining parameters will have a negative feedback relationship with it, while the other half will have a positive one.
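One possible reading of that chaining rule, as a tiny sketch (parameter names purely illustrative): adjacent pairs all get the counterbalancing relationship, so the implied relationship between any two parameters is the product of the signs along the chain, i.e. negative for odd distances and positive for even ones.

```python
params = ["A", "B", "C", "D", "E", "F", "G"]

def pair_sign(i, j):
    """Implied relationship between params[i] and params[j]: the product of
    the alternating signs along the chain, negative for odd distances."""
    return '-' if abs(j - i) % 2 == 1 else '+'

# Print the full sign matrix to check how the relationships are distributed
for i, p in enumerate(params):
    row = [pair_sign(i, j) if i != j else '.' for j in range(len(params))]
    print(p, ' '.join(row))
```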

Finally, a further improvement of this idea might be that of using a bidimensional control interface, for example the x-y coordinates of the mouse cursor rather than a single fader, so that the time-domain-based relationships could be mapped onto one axis and the frequency-domain-based ones onto the other, with a weighted interpolation between the two when exploring the bidimensional area.
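A sketch of that two-axis variant, again with hypothetical names and weights: the x coordinate drives the time-domain (energy) relationships, the y coordinate the frequency-domain ones, and each parameter blends the two axes according to a per-parameter weight.

```python
def bidimensional_map(x, y, params):
    """Map cursor coordinates in [0, 1] x [0, 1] onto parameters. Each parameter
    has a sign for the energy (x) axis, a sign for the spectral (y) axis, and a
    weight deciding how much each axis contributes."""
    out = {}
    for name, (lo, hi, sign_x, sign_y, weight) in params.items():
        tx = x if sign_x == '+' else 1.0 - x
        ty = y if sign_y == '+' else 1.0 - y
        t = weight * tx + (1.0 - weight) * ty     # weighted interpolation of the two axes
        out[name] = lo + t * (hi - lo)
    return out

# Hypothetical granulator parameters: (min, max, energy sign, spectral sign, x-weight)
granulator_2d = {
    "grain_density": (2.0, 200.0, '+', '+', 0.5),
    "grain_size":    (0.005, 0.5, '-', '+', 0.5),
}

print(bidimensional_map(0.3, 0.8, granulator_2d))
```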