No, this is not about Facebook, and how we interact with text that sometimes seems like a bunch of posturing and talking at cross purposes!
This is about a performance I went to last Saturday at ASU, brought by the School of Arts, Media and Engineering.
The key performance – a semi-choreographed interaction between two women – was to demonstrate how conversations (and text) can ‘make’ people, and their reality. Meaning, how language doesn’t just represent us, but shapes who we are, even while we use it. Here’s how they describe it. The art form:
interrogates these questions (using) 3D infra-red motion tracking, voice acquisition, speech recognition, multi-screen video projection and multi-channel surround sound to create an immersive multimedia environment.
As the dancers move and speak, speech recognition software reveals sentences (and sentence fragments) on two screens at right angles to each other. Then these texts begin to intersect, and create some interesting visual ‘performances’ – dropping off, angling, growing, and interacting with the other person’s texts.
The event was the work of visual artist Simon Biggs and composer Garth Paine, both of whom dabble in the algorithms that work behind the scenes.
I found this fascinating because it relates, in an oblique way, to my work in Chat Republic, and to how our conversations determine our realities. We are, whether we like it or not, immersed in a digital landscape, and what we say to each other lives on out there in textual form.
One does not have to be steeped in social media to be part of this Web 2.0 world, where much of what we do is cross-referenced by algorithms – when we sign on to purchase something, do a Google search, or leave a comment – that build profiles, and identities, of us.
Just check what Facebook appears to be doing, sneakily boosting your ‘Likes’ when you message someone.