HYBRID-Box, Dresden, 2023
Werkleitz, Halle, 2022
Stroom, The Hague, 2023
Installation, 2-channel video, 7-channel audio, 2020

The Irresistible Powers of Silent Talking
is an installation investigating border violence through AI-powered technologies. It is based on the notorious iBorderCtrl software – developed together with the border patrol agencies of Spain, Greece, and the UK, the system claimed to be an automated deception-recognition algorithm. Commissioned and funded through the European Union’s Horizon 2020 program, the algorithm was supposed to scan the facial micro-expressions of migrants entering the EU.

The migrant would be asked to answer a set of innocuous questions posed by a white male police officer in the form of a digital avatar. The avatar, designed to look menacing and authoritative, would be programmed to speak in Standard British English and would not interact responsively with the person entering at the border. During the interview, the algorithm would then render a decision on veracity – essentially deciding whether the person had been “lying” or not – using an undisclosed set of criteria. iBorderCtrl has not published these criteria in any public form and maintains them as a commercial secret. Essentially, it makes a coded decision based on algorithmic assumptions about truth or deceit.

However, through recent findings we now know that it produces most of its decisions using reductive and, at best, speculative methods. As an example of this reductive approach, the iBorderCtrl team trained their algorithm on 32 hired actors, who played out simulated deceptive or truthful situations in a lab setting. Aside from theatre of Brechtian proportions entering into decisions over the lives of refugees and migrants, the algorithm itself is based on an outdated notion of what can be rendered as “truth”.*

In The Irresistible Powers of Silent Talking, this system is recreated and put under duress itself. In the installation, the iBorderCtrl policeman avatar is rendered voiceless and exposed to the viewer. The avatar’s face is scanned for expressions of deceit or truth, using the same principles behind the iBorderCtrl. The installation maps the supposed deviations in veracity and renders them audible through sound. But the results are murky at best, the scans of the policeman producing ambiguous and undetermined sonic results. Just as at the border crossings, the effect is simply that of a state of violence.

First commissioned by FACT Liverpool and EMARE/EMAP, it was also made with the support of Hertz Lab and ZKM | Center for Art and Media Karlsruhe, BALTIC Centre for Contemporary Art, Gateshead, The Creative Industries Fund NL, and Stroom Den Haag.

* The normalization of affect recognition (and its later automatization via lie detectors and, subsequently, AI-driven systems) largely stems from the theoretical writings of Paul Ekman. Affect-recognition systems function on the assumption that there exist non-verbal facial (micro) gestures, labelled the “biomarkers of deceit”. This assumption is based on the theories of Ekman, who claims that lying is an emotionally demanding task that may leave non-verbal behavioural traces (Ekman & Rosenberg, 2005). This idea has led to the development of a multi-million-dollar industry purportedly based on deception recognition, recently enhanced by so-called machine-learning-assisted software and fueled by private and governmental entities more than eager to capitalize on the notion of quantifiable truth, deception, and human nature.

At the same time, this type of scientific method is often based on colonial and racially biased principles (for example, Ekman’s studies used freshman college students as the subject group, comparing their results with those of illiterate subjects from New Guinea; Silent Talker, the precursor company of iBorderCtrl, was partially inspired by David Efron’s research (1941), which was conducted on non-consenting migrants of Sicilian and Lithuanian-Jewish origin), as well as on questionable methods, among them the famous case of Ekman refusing to have his research peer-reviewed, claiming it might reveal matters of state security.

Various academic studies have repeatedly demonstrated that Ekman’s theory has little if any credibility in terms of reliability in real-life forensic situations and in the context of randomized and rigorous experiments. However, Ekman’s ideas continue to live on not only in the fantasies of AI surveillance systems’ developers, but also in our shared imagination, through shows like Lie to Me (2009–11, Fox network), his contribution to the popular Pixar film Inside Out (2015), and his inclusion in the Time Magazine 100 Most Influential People list (2009 edition).