Meduza, Vilnius, 2023

Bitumen floor (500 kg), 250 cm neon light, 4 speakers, PA, sound, 2023

In the late 1980s, Andy Hildebrand, a former classical flute student, changes his career path and moves on to study advanced digital signal processing. Having become a specialist in stochastic estimation theory (and by now a doctor), Hildebrand starts working for Exxon, one of the world’s largest oil and gas companies. There he develops a very idiosyncratic solution to the one simple question the corporation keeps asking him: where is the oil? Hildebrand writes a program that processes data from reflection seismology, using seismic waves to locate hidden gas and oil deposits. By sending sonic signals into the earth and using advanced algorithms to correlate and predict (in other words, to attune the data), Dr. Hildebrand could finally locate the oil as if by magic.

Around ten years later, a small software company called Silent Talker begins developing its own algorithm to detect facial micro-expressions. Over the decades that follow, the company becomes the world-leading developer of automated deception recognition systems, or, put simply, algorithmic lie detectors. Then a few former employees come up with their own version of the software, which they call iBorderCtrl. Developed together with the border patrols of Spain, Greece and the UK, the software is commissioned and funded through the European Union’s Horizon 2020 programme. The algorithm scans the facial micro-expressions of migrants and asylum seekers entering the EU and, based on their facial movements alone, determines their level of veracity: essentially, whether they are “lying” or not.

One day over dinner, Dr. Hildebrand is jokingly challenged by a friend to “invent something”. The friend then suggests that Andy should invent something that would make him sing in tune. It dawns on Dr. Hildebrand that oil extraction and voice tuning technologies have a lot in common: correlation (statics determination), linear predictive coding (deconvolution), synthesis (forward modeling), formant analysis (spectral enhancement) are all techniques shared between music and geophysical applications.

He rushes back home and soon after patents a new technology called Auto-Tune, software that automatically tunes the human voice in an organic and imperceptible manner. It becomes the dream tool of music studios and singers, as it eliminates the possibility of a note ever going out of tune again. The algorithm also becomes a dirty secret of the music industry: nobody admits to using it in public until the 1998 pop hit Believe by Cher, which rode the now ubiquitous Auto-Tune sound effect to massive recognition and commercial success.

Having no scientifically sound way to train their algorithm in the art of lying, the iBorderCtrl team trained it on 32 hired actors, who played out simulated deceptive or truthful situations in a lab setting. This data was then used to teach the algorithm to determine whether a migrant is being deceitful about their intentions.

Synthetic Exercises uses a sound library which I compiled while working on another piece, in which I subverted and retrained the iBorderCtrl algorithm. In Synthetic Exercises, this library, originally used to train the algorithm to “vocalize” the data, became itself my main point of fascination. As I trained the algorithm, it started spitting out early, noisy variants: not quite the human voice, not quite the finished sonority either. It produced a library of its own, of artefacts, vocal errors and sonic byproducts, all documenting the birth of a particular digital vocalise. Synthetic Exercises is an arrangement of these sonic events, rendered heavily through Auto-Tune, exercising speech without words.

Made with the support of ZKM | Center for Art and Media, BALTIC Centre for Contemporary Art, and The Creative Industries Fund NL

Andrius Arutiunian · Synthetic Exercises (part 1)