Lewis Sykes – Proposal


I’m particularly interested in sound – its nature, and how it combines and interplays with moving image within the audio-visual contract.

So the artistic outputs of my research – sound-responsive and interactive gallery installations, real-time visualisations of musical performances, short audiovisual films and live audiovisual performance – all attempt to show a deeper connection between what is heard and what is seen by making the audible visible. Since this process is relatively unconcerned with the literal depiction of objects from the real world, my creative outputs are essentially abstract rather than figurative. This is not surprising, since my main artistic influences are the creators of early abstract film and of somewhat later computer-aided abstract animation, such as Oskar Fischinger and John Whitney Sr.

By looking in the visual domain for qualities similar to the vibrations that generate sound, I’m trying to create an amalgam of the audio and the visual in which there is a more literal ‘harmony’ between what is seen and heard. In fact my research argues for an aesthetics of vibration and a harmonic complementarity between sound and moving image.

I’m interested in sensory multi-modality and in creating artistic experiences that engage more than one sense simultaneously – although my current emphasis is on audiovisualisation. So my work is predominantly concerned with the phenomenal – that which can be experienced through the senses – rather than with the noumenal – that which resides in the imagination and ‘inner visions’. I’m fascinated by how the interplay between sound and moving image might affect us perceptually. So I explore aspects of sensory integration – the ‘blurring’ of the senses where each impacts upon the other and creates a combined perceptual whole – but with an artistic intent: to try to find those conditions under which an audiovisual percept – a combined sonic and visual object of perception – is not just seen and heard but is instead ‘seenheard’. In this way my research is focused on the perceptual as opposed to the cognitive – on how we perceive the world around us rather than how we interpret, contextualise and make sense of it.

For Art Lab I’m planning to focus on the performative aspects of my project – exploring real-time audiovisual performance using some of the custom-made hardware and software I’ve developed as part of my research. If the components arrive in time this may include ‘fricken’ lasers… yay!


Nick Rothwell – Proposal

Rather than a project proposal, I want to propose a process: the creative application of modern, expressive high-level programming languages to the making of artworks.

I’ve spent several years making software-based media art (in sound, in visuals and physically), and in all cases one of the most interesting aspects has been the choice of programming language, and how the different software structures supported by different languages manifest themselves in the final work. There’s a big difference between (say) writing a game engine in a low-level language like C and writing in a higher-order functional language like Clojure, and the ability to live-code an artwork in Clojure or Python (using a platform like Field, Overtone, Quil and others) results in a different kind of work from one which is preprogrammed and built in a slower iterative process.

As the years go by I become surrounded by more and more affordable, software-addressable physical controllers: keyboards, grid devices, Arduino-based constructions, DMX hardware. I’m interested in a goal-less exploration of programming language techniques and structures as manifested on physical devices, and ways in which high-level coding patterns enable us to create multimedia experiences which would be difficult, or impossible, in more conventional ways.

Aside: this proposal reflects my role as mentor, since I’m happy to be involved in helping other attendees to realise their creative ideas in such ways as might involve coding, or even to introduce new language and programming techniques as a spark to new ways of thinking about creating work.

Nick Rothwell

Jaygo Boom – Proposal

My starting point is the idea of the ‘eternal culture’ (i.e. the theories of Vilém Flusser: design fictions of a future in which new networks of communication fundamentally alter human existence). I am interested in how this contrasts with the ancient idea of an implicit rhythm laid down by nature that entered the human cosmos on every level, reflected in the art, the poetry, the culture-building and the language of civilization. Throughout the week I will be playfully investigating these boundaries through digital and public engagement, exploring how to humanize digital encounters, make them more familiar and understandable, and regain this sense of lost natural rhythm.

This enquiry leads on from past projects including worldwidewegg, a web-enabled breakfast bar controlled by two farmyard chickens sixty miles away (http://bit.ly/144ClSj); Bombaze, an AR app (http://bit.ly/14PujiT) – a paradox in that it questions whether there is any relevance to incorporating the virtual within public space at all; and BLINK, a new project which humanizes the online browsing experience (http://owenmelbz.github.io/POJ/).

Ideas to develop via programming during the open studio will be BLINK, to make it more functional, and INTERNET SHAKE, an app you have to shake to get a brighter, bigger, better picture, making you work for your personal satisfaction.

My app development skills are limited, although I am skilled in ActionScript and Max/MSP, with some Arduino and some Quartz Composer (QTZ).

Dr Karen Wood – Proposal

Stream Project

Dr Karen Wood and Genevieve Say make up the Stream Project, where we have been working in collaboration with neuroscientist Tony Steffert, who is currently working on his PhD in EEG and sonification. We wanted our first collaboration together to be a solo dance performance showing the dancer’s physiological activity by means of sound and animation/lighting, with which she can then interact. The aim of this project is to make the internal external to the audience through the use of sound, animation and lighting.

Our main aim is for audiences to see the internal activity of the dancer and how she interacts with her own physiological state. A small portable device with electrodes and other sensors is placed on the dancer to transmit real-time physiological data that controls the animation and sound. Previously, the dancer has then used the sound to manipulate the choreography.
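As a minimal sketch of this kind of mapping (the value ranges and parameter names below are illustrative assumptions, not the project’s actual setup), a raw physiological reading can be normalised into a 0–1 control value for the animation and sound:

```python
def normalise(value, lo, hi):
    """Clamp a raw sensor reading into the 0-1 range used as a control value."""
    if hi <= lo:
        raise ValueError("hi must exceed lo")
    return min(1.0, max(0.0, (value - lo) / (hi - lo)))

def controls_from_sample(heart_rate_bpm, eeg_microvolts):
    """Hypothetical mapping: heart rate drives animation speed,
    EEG amplitude drives brightness."""
    return {
        "animation_speed": normalise(heart_rate_bpm, 60, 160),  # resting-to-active range (assumed)
        "brightness": normalise(eeg_microvolts, 0, 100),        # amplitude range (assumed)
    }
```

In the piece itself, values like these would be recomputed every frame and calibrated to the individual dancer.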

So far, we have presented our work at two events, where the physiological parameters of electroencephalography (EEG), heart rate variability (HRV) and respiration have been displayed as graphical images and sound via a projection screen.

Research and development: http://youtu.be/-5JloQPBr70

Project Development

We are particularly interested in brainwave data obtained from neurofeedback, a therapy which uses EEG data to reduce certain brainwave states in favour of others. Initially we wanted to work on converting EEG and HRV data into movement and sound, and to experiment with moving while recording EEG and HRV data in real time. We would now like to experiment with different ways of communicating with the audience, and with the challenges this raises. The project team would like to explore ways of working with EEG and HRV data to produce a live, projected piece of dance performance, which could result in an immersive and interactive environment and/or a stage production. We now envisage connecting the dancer’s EEG, HRV and respiratory data to a real-time feed, using lighting or projection as sources.

We will no longer use sound as we had done previously. We have a composer, Dr Gavin Wayte, who is interested in composing a piece of music for us using the previously recorded EEG, HRV and respiratory data as a stimulus for the composition. This means we would not have to focus so much on sound, and lighting or animation/visual imagery will become the main element.

This is an original way of incorporating real-time feedback of the body’s physiology into performance. EEG and HRV have not been employed before to create lighting that interacts with dance movement. There is originality in creatively looking at how EEG can be affected by, and/or have an effect on, an individual within performance. There is scope to convert the data into visual effects that will provide an innovative aesthetic to the performance. Ben will work with us to write code that converts the data into lighting effects.
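A hedged sketch of the kind of conversion such code might perform (the band-power inputs and the 8-bit DMX channel mapping are assumptions for illustration, not Ben’s actual design): EEG band power can be reduced to a ratio and scaled to a lighting channel value:

```python
def band_ratio(band_power, total_power):
    """Fraction of total EEG power in one band (e.g. alpha); 0 if there is no signal."""
    return band_power / total_power if total_power > 0 else 0.0

def dmx_value(ratio):
    """Scale a 0-1 ratio to an 8-bit DMX lighting channel value (0-255)."""
    clamped = min(1.0, max(0.0, ratio))
    return round(clamped * 255)
```

A real rig would then send that value over a DMX interface each frame, perhaps smoothed so the lights don’t flicker with every transient in the EEG.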

This project takes some of the cutting-edge neuroscience technology that will soon become ubiquitous in computer gaming. Building on the increasing public curiosity about neuroscience, this project will integrate it with a creative artform.

Future funding will be sought from ACE, and the long-term plan is to apply for a Wellcome Trust Small Arts Grant. We feel the project needs further development before applying for this.

We are looking at putting on a performance of this work in collaboration with Manchester Science Festival. We are also looking at gallery spaces that might be interested in putting on this work.


Ralph Mills – Proposal

The Mega-Museum Project

My over-ambitious, lofty and naïve goal for my mega-museum project (working title), is to create a digital artefact — a “virtual museum” — the crowd-sourced galleries of which would be accessible by everyone. My deadline is October 2013, timed to coincide with the grand opening of Manchester School of Art’s new building.

We all curate narratives that illuminate our everyday lives. Many of those narratives relate to objects. However, museums-within-walls are historically descended from the collections of elites, can rarely display more than a fraction of their accessioned collections, and are by definition fixed in one place (so have to be visited).

I visualise a simple-to-use “anti-museum,” a digital museum to which everyone might contribute both exhibits and, most importantly, their accompanying narratives, and through which everyone could browse and search. Using digital tools, the “visitor,” wherever they are in the world, would explore the museum for objects and their stories, and assemble and create personalised exhibitions that match their individual interests.

The {CODE Creatives} brief was to focus initially on a “technology gallery” that would share, explore and display technologies that we have valued, used and experienced together with our stories of our relationships with them. So my digital artefact, whatever that will eventually be, will use that as a central idea. However, because my PhD research is looking at other objects, I’m intending to create something that will be applicable to that field of research as well as many others.

I also see my (anti-)museum as offering a resource to museums that so often have to reject, politely but sometimes hurtfully, the donations of objects that, though not of interest to the curators for entirely justifiable reasons, are hugely important to the potential donors.

Because access to online digital resources isn’t universal (indeed, it could be argued that a significant proportion of those interested in sharing their narratives belong to generations where online activity is minimal, though I’m going to attempt to leap that hurdle), I also aim to create something that is accessible in other ways.

This extended museum has several advantages over the traditional museum-within-walls:

  • The objects in it are still “alive” because they still form part of our personal narratives.
  • It values “everyday” objects that have been/are important to us, rather than the behind-the-glass possessions of elites.
  • It enables the sharing of immense amounts of knowledge and experience.
  • The Mega-Museum will be “ordinary” enough, technologically simple and non-threatening, to welcome anyone who can, say, complete an online form and press a submit button.
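As a sketch of how simple the underlying data could be (the field names and search behaviour here are assumptions, not a specification of the Mega-Museum), each submission might become a small record of object plus narrative, searchable by keyword:

```python
from dataclasses import dataclass, field

@dataclass
class Exhibit:
    # One crowd-sourced object together with its accompanying narrative.
    title: str
    narrative: str
    tags: list = field(default_factory=list)

def search(exhibits, keyword):
    """Case-insensitive keyword search over titles, narratives and tags."""
    kw = keyword.lower()
    return [e for e in exhibits
            if kw in e.title.lower()
            or kw in e.narrative.lower()
            or any(kw in t.lower() for t in e.tags)]
```

A “personalised exhibition” is then just the list an individual visitor’s searches assemble; the records themselves could equally be filled in from a paper form for contributors who are not online.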

The project originates in my personal goal of enhancing the dissemination and sharing of my PhD research into everyday objects, and perhaps crowdsourcing elements of my research activities.

Daksha Patel – Proposal

Noise and Signal

This project evolved out of my experience of taking part in an EEG study, and the questions it raised about the relationship between ‘noise’ and ‘signal’ in medical studies. ‘Noise’ in scientific studies is understood as variation or interference in data, and is often caused by movement.

The project will involve experimenting with biosensors such as EEG, pulse and muscle sensors and designing systems to visualise and project live data from sensor signals upon a surface on which I will draw. A biofeedback loop will be designed into the programme, creating ‘noise’ in the system, and enabling me to visualise the noise as well as the signal.
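One simple way to separate the two components for visualisation (a sketch only; the actual system design is open, and the window size is an arbitrary assumption) is to treat a smoothed trace as the ‘signal’ and the residual around it as the ‘noise’:

```python
def moving_average(samples, window):
    """Smooth a list of sensor samples; the smoothed trace stands in for the 'signal'."""
    if window < 1 or window > len(samples):
        raise ValueError("window must be between 1 and len(samples)")
    return [sum(samples[i:i + window]) / window
            for i in range(len(samples) - window + 1)]

def split_signal_noise(samples, window):
    """Return (signal, noise): the smoothed trace and the residual around it."""
    signal = moving_average(samples, window)
    offset = window // 2  # align residuals with the centre of each averaging window
    noise = [samples[i + offset] - s for i, s in enumerate(signal)]
    return signal, noise
```

Projecting both traces at once would let the drawing respond to the ‘noise’ that a medical study would normally discard.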

I’m interested in the fact that science is a knowledge-producing process but also an image-producing process. Images are important resources in the public evaluation and popularisation of science. If ‘noise’ is filtered out of scientific imagery and the public ‘imaginary’ – or if biomedical images are understood as noise-free and fixed – is it essential that ‘other’ representations exist, ones which include the ‘noise’ alongside the ‘signal’? What role can drawing play in repositioning this ‘imaginary’?

Daksha Patel

Ben Lycett – Proposal

Stimulating the Visual Cortex: Investigating the use of non-traditional ‘visual’ data and neuroplasticity

Ben’s own experience tells him it is possible to be dyslexic in one language but not another – i.e. English but not C++.

Ben’s experimental technology project investigates the way the brain sorts sensory inputs and then processes that information, by attempting to use senses other than vision – specifically the tongue – as an interface to the visual centres of the brain. He plans to create an alternative form of ‘vision’ via a low-resolution, braille-like plate placed on the tongue. He wants to investigate whether the brain will sort and process this information in a different way. If it does, then in the longer term it may be possible to present textual information in such a way as to allow a dyslexic person to ‘read’ using a part of the brain that is not dysfunctional.
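A minimal sketch of the core conversion this implies (the grid size, brightness scale and threshold are illustrative assumptions, not Ben’s specification): a greyscale image is downsampled to a coarse on/off grid, one cell per tongue electrode:

```python
def to_electrode_grid(image, rows, cols, threshold=0.5):
    """Downsample a 2D greyscale image (values 0-1) to a coarse on/off grid.
    A cell is 'on' (1) if the mean brightness of the pixel block it covers
    exceeds the threshold."""
    h, w = len(image), len(image[0])
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            # Pixel block covered by this electrode.
            y0, y1 = r * h // rows, (r + 1) * h // rows
            x0, x1 = c * w // cols, (c + 1) * w // cols
            block = [image[y][x] for y in range(y0, y1) for x in range(x0, x1)]
            row.append(1 if sum(block) / len(block) > threshold else 0)
        grid.append(row)
    return grid
```

Increasing `rows` and `cols` is then the “increase the resolution” step the final iteration of the project describes, up to whatever electrode density the plate allows.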

Ben will integrate and adapt a range of existing technologies into his Arduino Tongue: Ultrasonic Sonar Glasses, Eyewriter and Microsoft’s Xbox Kinect.

The first output of the project will be an invisible maze that the participant traverses – a virtual landscape that the participant can move through in physical space – using only their tongue to sense the walls of the maze. After the user is calibrated to the system, they will be able to experience a set of audio visual sculptures that they will perceive in 3D space.

The final iteration of the project will be to increase the resolution of the system to be able to reproduce text.

Angela Davies – Proposal

I would like to develop sensory interaction within my sculptures through the exploration of Arduino technology. When the user approaches the work, it stimulates a response: a film projection of the shadow of the sculpture. The sculpture becomes animated through interaction.

Angela Davies

David Jackson – Proposal

Devising a suitable format for telling digital stories

As a creative developer focused on the ways in which digital affordances can transform the writing process, I’m interested in the issues surrounding the performance of digital written works. A conventional paper-based reading to an audience, though not perfectly replicating the experience of reading from the page, can successfully render a version of the work. However, the characteristics common to digital texts – multi-author, multimedia, and indeed multi-linear elements – create a divergent form not always suitable for linear delivery. Over the week I would like to investigate other ways of performing this writing, considering digital and performance-based ideas in an attempt to see whether digital texts can be better translated into performances. I want to explore how performers might interpret text, and how audience experience can be augmented by adding digital elements. The findings of this work will feed into a small work commissioned for the opening event of the Manchester Art School Art and Design Building.
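One way to frame the multi-linear problem (a sketch with invented passages and node names, not David’s actual material) is to model the digital text as a graph of passages, so that any single performance is one linear path through it:

```python
# Each node holds a passage and named choices leading to other nodes.
story = {
    "start":   ("You arrive at the new building.", {"enter": "foyer", "wait": "outside"}),
    "foyer":   ("Voices echo from several rooms at once.", {}),
    "outside": ("The reading continues without you.", {}),
}

def perform(story, start, choices):
    """Follow a sequence of named choices through the graph, collecting the
    passages that a linear performance of this one path would deliver."""
    node = start
    passages = [story[node][0]]
    for choice in choices:
        node = story[node][1][choice]  # follow the chosen branch
        passages.append(story[node][0])
    return passages
```

The open question the proposal raises is then who supplies `choices` in performance: the performer, the audience, or some digital process, and the same graph yields a different linear reading each night.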

Simon Woolham – Proposal

I wish to realise the technicalities of a new project which I proposed for the launch of the new art building, bringing the building itself to life through the hidden narratives associated with it. For the Art Lab week I wish to fully immerse myself in realising the full potential of my ideas by experimenting with different spaces around the building using a projector and image-mapping technologies. My films will bring to life a series of spaces that are digitally manipulated and made to move. The spaces will be modified to show some minor action in repetitive motion. These subtle movements will humorously draw attention to the minuscule, the enhanced sound of each action dramatising the spaces’ kinetic forces.

Simon Woolham