A Language Apparatus

Abstract (in English): 

Through the creative projects Bodytext, Tower and Crosstalk, the author explores how language and communication function in a hybridized context where human and machine are responsible for both the articulation and interpretation of texts. The dynamics of such a hybrid apparatus offer insights into how the making of meaning and its reception can be considered as a socio-technical system, with implications for how people are situated and instantiated.

Bodytext, Tower and Crosstalk are language-based, digitally mediated performance installations. They each use progressive developments of generative and interpretative grammar systems. Bodytext (2010) was authored in Adobe Director and coded in Lingo and C++. Tower (2011) was developed with a bespoke large-scale immersive virtual reality simulator and was coded in Python. Crosstalk (2014) was developed and coded in Processing.

Bodytext is a performance work involving speech, movement and the body. A dancer's movement and speech are re-mediated within an augmented environment employing real-time motion tracking, voice recognition, interpretative language systems, projection and granular audio synthesis. The acquired speech, a description of an imagined dance, is re-written through projected digital display and sound synthesis, the performer causing texts to interact and recombine with one another through subsequent re-compositions. What is written is affected by the dance, whilst the emergent texts determine what is danced. The work interrogates the relations among kinesthetic experience, memory, agency and language.

Tower is an interactive work where the computer listens to and anticipates what is to be said by those interacting with it. It is a self-learning system: as the inter-actor speaks, the computer displays what they say alongside the words they might speak next. The speaker may or may not use a displayed word. New word conjunctions are added to the corpus employed for prediction. In its first version the initial corpus was a mash-up of Joyce's Ulysses and Homer's Odyssey. Words uttered by the inter-actor appear as a red spiral of text, at the top of which the inter-actor is located within the virtual reality environment. Wearing a head-mounted display, the inter-actor can look wherever they wish, although they cannot move. The predicted words appear as white flickering clouds of text in and around the spoken words. What emerges is an archaeology of speech in which what is spoken can be seen amongst what might have been said, challenging the notion of a unique speaker's voice.
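The prediction mechanism described above, in which candidate next words are drawn from a corpus that grows with each utterance, resembles a simple bigram model. The following is a minimal sketch of that idea in Python (the language Tower was coded in), not the installation's actual code; the class name and seed text are illustrative assumptions:

```python
from collections import defaultdict, Counter

class BigramPredictor:
    """Predicts likely next words from a growing corpus of word pairs."""

    def __init__(self, seed_text=""):
        # word -> Counter of words observed to follow it
        self.bigrams = defaultdict(Counter)
        self.learn(seed_text)

    def learn(self, text):
        """Add the word conjunctions in `text` to the prediction corpus."""
        words = text.lower().split()
        for a, b in zip(words, words[1:]):
            self.bigrams[a][b] += 1

    def predict(self, word, n=3):
        """Return up to `n` of the most frequent successors of `word`."""
        return [w for w, _ in self.bigrams[word.lower()].most_common(n)]

# Seed with a small stand-in corpus (the first version of the work
# used a mash-up of Ulysses and the Odyssey).
p = BigramPredictor("sing to me of the man muse the man of twists and turns")
print(p.predict("the"))  # -> ['man']

# As the inter-actor speaks, new conjunctions join the corpus,
# so subsequent predictions reflect what has been said:
p.learn("the sea was dark")
print(p.predict("the"))  # now includes 'sea' as well as 'man'
```

In the installation itself this prediction step would feed the white "clouds" of candidate text rendered around the spoken words, while each accepted utterance re-enters the corpus, which is the self-learning loop the description refers to.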

Crosstalk is a multi-performer installation where movement and speech are re-mediated within an augmented 3D environment employing real-time motion tracking, multi-source voice recognition, interpretative language systems, a bespoke physics engine, large-scale projection and surround-sound audio synthesis. The acquired speech of inter-actors is re-mediated through projected digital display and sound synthesis, the inter-actors' physical actions causing texts to interact and recombine with one another. Each element in the system affects how the others adapt, from state to state, as the various elements of the work – people, machines, language, image, movement and sound – interact with one another. Crosstalk explores social relations, as articulated in performative language acts, in relation to generative ontologies of self-hood and the capacity of a socio-technical space to "make people".

(Source: ELO 2015 Conference Catalog)

Record posted by: Hannah Ackermans