
ImproTech Paris-New York 2012


Image from ImproTech Paris-New York
Last week, I took a vacation day to attend one day of workshops at NYU as part of ImproTech 2012 Paris-New York. The event's website describes it as:

The ImproTech Paris – NYC 2012 workshop is dedicated to the exploration of the links between musical improvisation and digital technologies.
Gathering researchers and artists from both research & creation scenes, it favors the idea of using digital intelligence as a source of continuous and sophisticated creation, in a complex interaction with live musicians, as opposed to mere decorative digital effects.

My academic training is as a composer; my Master's degree was in Electronic/Computer Music, and I did some (now very primitive) work of my own in this area. Every few years I pick that work back up and restart it based on what I've learned as a developer since the last time, then get distracted by some other project and let it go into hibernation for a while again. I wasn't able to get to either of the concerts or the second day of workshops, but the one day I did attend was a day well spent.
Some highlights for me:
George Lewis gave a brief keynote for the event, touching on a number of themes that would recur throughout the day, including a bit of history. His 1993 piece/software Voyager is a classic in the field, and the rest of the day found other presenters citing his work.
The French philosopher Pierre Saint-Germier spent an hour discussing the question of whether it's even possible for computers to improvise. He started the discussion from the 'By Definition Objection': since all a computer can do is deterministically execute the instructions it was programmed with, even in the presence of randomness, you cannot say that it is improvising. Eventually, he worked his way around to what he referred to as a 'retreat position': maybe computers can't replicate improvisation, but perhaps they can simulate it, "and maybe that's good enough for you." I'm also having a difficult time reconciling these arguments with the writings of others, like neuroscientist Sam Harris, who argues that what we perceive as our own free will is an illusion, one that isn't compatible with the laws of physics.
After ruminating for a while, I've decided that the retreat position is good enough for me; I don't accept that it's much of a retreat, and the whole argument depends too heavily on playing with definitions and semantics. Perhaps the most important lesson I learned is that I wouldn't make a great philosopher. Also, setting the tone for the rest of the day: even though the only technology his presentation depended on was a PDF slideshow on a USB drive, it took about ten minutes to sort out issues with the projector.
Frédéric Bevilacqua and Norbert Schnell, both from IRCAM, showed work they've been doing with their "MO: Modular Musical Objects" project, providing a gestural interface to sound production.
[youtube=http://www.youtube.com/watch?feature=player_embedded&v=Uhps_U2E9OM]
[vimeo http://vimeo.com/22120867]
As part of their presentation, they trained the soccer ball shown in the "Urban Musical Game" Vimeo clip above with some new gesture/sound combinations and tossed it out into the audience. For me, it raised a number of questions about the role of skill and virtuosity in systems like this, though I'm not sure how important those questions are at this stage in these systems' development. I was just proud that when the ball made its way to me in the audience, I represented my 15-year-old soccer-playing son by using my head, not my hands, on the ball.
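The training interaction stuck with me. As I understand it, the idea is to record a short example of each gesture from the ball's motion sensors and then match live motion against those recorded examples to trigger sounds. Here's a toy sketch of that train-then-match idea in Python; this is purely my own illustration (the names and the naive nearest-neighbor matching are my simplifications), and the actual MO software uses IRCAM's far more sophisticated gesture-following machinery:

```python
import math

# Toy sketch of training gesture/sound combinations, in the spirit of
# the MO demo: record one example motion trace per sound, then play
# the sound whose example is nearest to the incoming motion.
# (Hypothetical simplification; the real system is far more capable.)

def trace_distance(trace_a, trace_b):
    """Sum of pointwise Euclidean distances between two equal-length traces."""
    return sum(math.dist(a, b) for a, b in zip(trace_a, trace_b))

class GestureSoundMap:
    def __init__(self):
        self.examples = {}  # sound name -> recorded (x, y, z) sample trace

    def train(self, sound_name, example_trace):
        """Record an example accelerometer trace for a sound."""
        self.examples[sound_name] = example_trace

    def recognize(self, incoming_trace):
        """Return the trained sound whose example best matches the input."""
        return min(self.examples,
                   key=lambda name: trace_distance(self.examples[name],
                                                   incoming_trace))

# Usage: train two gestures, then classify a new motion trace.
mapper = GestureSoundMap()
mapper.train("bounce", [(0, 0, 9.8), (0, 0, 25.0), (0, 0, 9.8)])
mapper.train("spin", [(5.0, 5.0, 9.8), (-5.0, 5.0, 9.8), (5.0, -5.0, 9.8)])
print(mapper.recognize([(0, 0, 10.0), (0, 0, 22.0), (0, 0, 9.5)]))  # "bounce"
```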
After lunch, the presentations were more music than talk.
Violinist Mari Kimura demonstrated some of her recent work using an augmented violin system that's been developed for her: sensors on her bow hand track its position and movement, and that data is used inside patches running in Max. She emphasized several times how important it is to her as a violinist not to have additional pieces of hardware or controllers onstage with her; the easy answer to many of these issues would be to put a foot pedal or two on stage, but she refuses to do that. The hand-tracking system is a very elegant (and essentially invisible) solution to the problem. You can get a taste of it here:

[youtube=http://www.youtube.com/watch?feature=player_embedded&v=Uf7IxTD19eg]

Do check out more of her videos; I especially like the one featuring a duet with the LEMUR GuitarBot, which I last saw on Pat Metheny's Orchestrion tour.

Saxophonist Steve Lehman performed his piece Manifold for Alto Sax and a trio of 'virtual instrumentalists' running in Max/MSP. One of the things I've always liked about his music is his use of meter and tempo to create a wonderfully elastic sense of time, and that was very much present in this piece (modulo some technical problems he spoke about after the performance; even virtual players can have an off day). Here's a different performance, with two additional human performers from the International Contemporary Ensemble.

[youtube=http://www.youtube.com/watch?v=ra3flQolnL8&feature=player_embedded]

Steve Coleman (also on alto sax; I've been a huge fan of his writing and playing going back to the late '80s) and Gilbert Nouno explained some details of "Musical Arrhythmias," a piece for sax and electronics that they had performed at the opening-night concert the evening before. I wish I had been able to hear the piece in its entirety, because the fragments they played were compelling.

Jaime Oliver showed a computer-vision-based instrument developed as an exploration of the space and intersections between instrument design and composition. In the brief talk preceding his demo, he outlined the idea of 'Open Scores,' where the capabilities and limitations of a system define a space within which a set of musical possibilities may be explored and realized by the human performer, who completes the piece by performing it. He showed some video of an earlier system called the Silent Drum:

[youtube=http://www.youtube.com/watch?v=2kLVqgUMGSU&feature=player_embedded#!]

…before doing a live demo of his more recent “MANO” instrument.

The MANO Controller consists of a black rectangular surface, which is sensed with a video camera. The computer algorithm analyzes the image obtained from the video camera, looking for hands and extracting from them the most relevant parameters, which are then used to control sound.

One of the most interesting features of the MANO Controller is that it recognizes which side a hand enters the rectangle from, can classify it as a finger or as a hand, and can interpolate between the two. Any gesture is then classified in terms of the side where it originates (left, right, or bottom) and in terms of its size (finger or hand).
Since a different mapping can be assigned to each combination of side and size (e.g., left-finger or bottom-hand), at any given moment there can be six different sound mappings. Furthermore, combinations of these (e.g., left-finger AND bottom-hand) might lead to a new mapping altogether. These classifications of side and size can be used combinatorially by the composer to control movement through the score, and they allow the performer to change and interpolate between mappings through continuous gestures, without the need for buttons.
For each hand or finger, multiple parameters are obtained. Most parameters are interdependent: if one parameter changes, it usually changes another as well. This is an important feature, since it allows for organic behavior of sounds.
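To make the side/size scheme concrete, here's a minimal sketch of how a (side, size) classification might select and blend sound mappings. This is my own illustration in Python: the parameter names and values are invented, and the real instrument of course runs on Oliver's own software, not this.

```python
# Hypothetical sketch of MANO-style mapping selection: each (side, size)
# pair selects one of six parameter sets, and a continuous "how hand-like
# is this shape?" value interpolates between the finger and hand mappings.
# (Names and parameter values are invented for illustration.)

MAPPINGS = {
    ("left", "finger"):   {"pitch": 60, "cutoff": 0.2},
    ("left", "hand"):     {"pitch": 48, "cutoff": 0.8},
    ("right", "finger"):  {"pitch": 72, "cutoff": 0.3},
    ("right", "hand"):    {"pitch": 36, "cutoff": 0.9},
    ("bottom", "finger"): {"pitch": 67, "cutoff": 0.1},
    ("bottom", "hand"):   {"pitch": 43, "cutoff": 0.7},
}

def blended_mapping(side, hand_likelihood):
    """Blend the finger and hand mappings for one side.

    hand_likelihood: 0.0 means clearly a finger, 1.0 clearly a hand;
    values in between morph continuously from one mapping to the other,
    which is how a continuous gesture can move between mappings without
    any buttons.
    """
    finger = MAPPINGS[(side, "finger")]
    hand = MAPPINGS[(side, "hand")]
    return {key: (1 - hand_likelihood) * finger[key] + hand_likelihood * hand[key]
            for key in finger}

# A gesture entering from the left, halfway between finger and hand:
print(blended_mapping("left", 0.5))  # {'pitch': 54.0, 'cutoff': 0.5}
```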

[vimeo http://vimeo.com/10907372]

Finally, Robert Rowe from NYU's Music and Audio Research Lab (MARL), which hosted the workshop, gave a presentation on the MARL program and its faculty, all of it extremely impressive. When the workshops were done, he offered to take anyone interested on a tour of the facility, an offer a few of us took him up on. I had no expectation that he'd remember, but 20 years ago or so I toured the MIT Media Lab while considering doing PhD work there, and he took me on a similar tour of that facility (including showing me their first NeXT machine, which had just arrived).

A much better day than I would have had at the office.
