Thursday, 29 July 2010

mind’s eye




Intention and Aims

Explore the relationship between live performance and media, in particular looking at the digital double, the live body and the virtual body.

How will this be achieved?

A performance that incorporates digital media into live performance. Onstage acting combines with recorded and live camera projections through two main projection screens and two flat TV screens. The main focus is the use of the digital double.

Video and live camera projection:

1 Insert images from different perspectives and at different scales: POV shots and close-up shots.

2 The digital double works as a reflection, representing the character.

3 The digital double as alter ego, presenting another side of the character.

4 The digital double as a representation of thoughts, conveying emotion.

5 Use of video to work with time and space.

mind’s eye is my latest work, a performance incorporating video and live camera images, which closely connects to my research area. It is another experimental practice on which my MA show will be built.

And thanks to Jeanne for her excellent performance.

Wednesday, 28 July 2010

mind’s eye


Below are photos from the performance.








Hands on Eyes

A collaborative exhibition with Kappa, Marouso, Jeanne, Dora and Cherry at The Ray Factory, on 24 June 2010.
This event offered me a great opportunity to work in a team and present my own work; it also achieved another of my aims: reaching a different type of audience and presenting my work to them.


Photos outside our exhibition room and performing space.


Advertising for our MA show.
Sign to the exhibition room:


Cherry’s and my performing space:



Monday, 21 June 2010

mind’s eye



A performance at EYES ON HANDS.
Cherry’s performance will be incorporated into it.




Sunday, 20 June 2010

Open studio


A poster of VLP for the open studio

video installation for MA show

Last Thursday was the private view of the BA show. In order to promote our MA show in September, the MA students held an open studio presenting our works. I presented a two-minute video clip reviewing two works I have done, Connection and i Thinker, co-presented with Esteban, Kappa, Sandy, Dora, Marouso and Cherry.

And I found that video installation is a good way to show our work at the MA show. The idea is that each of us makes a video previewing or reviewing our works, to be shown during the MA show, which can give the audience an idea of what VLP is. What is more, since we are not putting on our performances for the whole week, for audiences who come to the MA show but miss our show times, the video installation can represent VLP.

physical/virtual interaction

I am currently reading an interesting article about Blue Bloodshot Flowers, ‘The Jeremiah Project’, written by Susan Broadhurst (2004), which is a performance based on a love affair.

I am interested in how practitioners incorporate technology that allows both humans and objects to be located and tracked seamlessly and in real time. In particular, how does the performer, an avatar named Jeremiah, perform and communicate in the project?

My understanding of how Jeremiah works technically:

The basic mechanism is that a camera sees things, and the computer generates different faces for Jeremiah depending on what the camera sees. This is an example of simple 'if' logic.

If the camera sees moving objects, the computer gives Jeremiah a happy face.

If the camera sees still objects, the computer gives Jeremiah an angry face.

This is the 'emotion engine': the computer programme that creates Jeremiah's visual changes. Note that these 'if' rules are decided by the director.

However, the computer also has some basic AI (artificial intelligence), which means that the basic 'if' rules develop and change a little over time, depending on what the computer 'experiences'. This is why Jeremiah's reactions sometimes seem random: the computer is changing the basic rules it was first given.
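A minimal sketch in Python of how such an emotion engine might work. This is my own illustration of the idea described above, not Broadhurst's account of the actual system: the class name, the motion threshold and the drift rule are all my assumptions.

```python
class EmotionEngine:
    """Toy emotion engine: fixed 'if' rules plus a small adaptive drift.

    Illustrative sketch only -- names, threshold and learning rule are
    assumptions, not the actual Jeremiah implementation.
    """

    def __init__(self):
        # The director's initial rule: motion above this threshold -> happy.
        self.motion_threshold = 0.5
        self.history = []

    def react(self, motion_level):
        """Map camera input (0.0 = still, 1.0 = fast movement) to a face."""
        # The basic 'if' rule set by the director.
        face = "happy" if motion_level > self.motion_threshold else "angry"

        # A simple 'AI' twist: the rule drifts with experience, so the same
        # input may eventually produce a different face -- the engine slowly
        # rewrites the rule it was first given.
        self.history.append(motion_level)
        average = sum(self.history) / len(self.history)
        self.motion_threshold += 0.1 * (average - self.motion_threshold)
        return face


engine = EmotionEngine()
print(engine.react(0.8))  # happy: above the initial threshold
print(engine.react(0.1))  # angry: below the (slightly shifted) threshold
```

After enough unusual input, the drifting threshold can flip the face produced for a given motion level, which is the point made below: the director sets the first rules, but the behaviour they produce later is no longer fully under the director's control.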

This is quite interesting, because it actually starts to break the director's control of the performance. If there were no AI, then Jeremiah would be controlled in the same way that the images you choose for your productions are controlled: by the director. But with the AI, Jeremiah begins to do things that the director has not controlled or planned. Admittedly, the director gives Jeremiah his original simple brain and reactions, but over time these change in a way that the director cannot predict.

I think: 'Jeremiah is unique in that he embodies intelligence that is no way prescriptive. Therefore, the performance is a direct and real time interaction between performer, audience, and technology.' (The point is that Jeremiah begins to escape the first rules he is given)

'As well as questioning conventions of authorship, ownership, and intertextuality, the digital technology that created Jeremiah subverts assumptions of reproduction and representation because in every performance Jeremiah is original, just as an improvising artist is original. Jeremiah is literally reproduced again and not represented’.

In conclusion, Jeremiah is a visual 'facial' way to show emotion in a projection, with the emotion determined by real body physical movement. But, because of the AI of the computer, the emotion that Jeremiah shows is not always predictable and controlled by the director, which raises interesting questions and possibilities.

The project analyses and explores the interface between physicality and AI technology in practice. The manifestation of this AI technology takes various forms, which will be explored and investigated over time, demonstrating both visual and aural physical/virtual interaction. Susan Broadhurst (2004) also discusses representing emotion on the screen, because the avatar, Jeremiah, shows emotion on its 'face'.

Here, the use of the digital body is certainly a way of showing emotion that I had not considered in my research before. (It actually gives me a potential idea for my final performance, in which I could return to my earlier interest in gestalt, colour theory and emotions.)

Reference

Broadhurst, S. (2004) ‘The Jeremiah Project’, Winter 2004, Vol. 48, No. 4 (T184), pp. 47-57. Posted online 13 March 2006. doi:10.1162/1054204042442044. © 2004 Massachusetts Institute of Technology.
