Several times we’ve worked on projects in the pro audio/music instrument industries that have used a very useful cross-platform C++ application framework called JUCE. It was originally developed as part of the Tracktion digital audio workstation and later extracted as a standalone framework (in much the same way that Ruby on Rails was extracted from 37signals’ work on Basecamp). For open source projects, JUCE is licensed under the GPL; for commercial projects, it’s licensed under a very reasonable per-company license (not per-project or per-year). Applications written using the framework can be deployed on Windows, OS X, Linux, iOS, and Android.
For audio developers, it’s an incredibly useful framework, with solid, well-conceived classes for many of the lower-level tasks that any application processing audio has to handle: opening and configuring multichannel audio devices (which may mean dealing with several different driver stacks), finding and loading audio effects plug-ins (again, several different formats exist in the wild, including VST and Apple’s AudioUnits), accepting and generating MIDI data, implementing audio effects algorithms, reading and writing audio files, and so on.
I’ve worked on a few projects that used JUCE, but was never involved in any of the work that touched the audio layer. I’ve been considering porting the next version of a long-standing personal music software system to JUCE, and after spending some time poring over the documentation and sample code, I decided it would be best for my sanity to start with a smaller project as a learning experience, and to document what I learn as I go.
- An Introduction to Lock-Free Programming: “the lock in lock-free does not refer directly to mutexes, but rather to the possibility of “locking up” the entire application in some way, whether it’s deadlock, livelock — or even due to hypothetical thread scheduling decisions made by your worst enemy. That last point sounds funny, but it’s key. Shared mutexes are ruled out trivially, because as soon as one thread obtains the mutex, your worst enemy could simply never schedule that thread again.” As they keep adding more and more cores to processors, being able to handle concurrency becomes even more important, but the more I think about it, the less I think that we know how to deal with the problem. I suspect that Erlang’s approach using message passing of blocks of immutable data is the long-term answer, but I see the syntax of that language being too big a hurdle for most folks to leap over.
- Design Patterns: When Breaking the Rules is Okay: “We know what buttons should look like, how they should behave and how to design the Web forms that rely on those buttons. And yet, broken forms, buttons that look nothing like buttons, confusing navigation elements and more are rampant on the Web. It’s a boulevard of broken patterns out there.” I think that we’re on the cusp of a new Cambrian Explosion of new UI/HCI approaches, and I hope that we all can keep a solid grounding in the fundamentals as we invent this next new wave of computing.
- bpython: A replacement REPL for Python, adding things like syntax highlighting, autocompletion, and rewind/undo. Similar to IPython, but less heavyweight.
- music-21: a Toolkit for Computer-Aided Musicology: Very interesting Python library out of MIT for dealing with music at the symbolic level. I especially like the example below — one of the first bits of code that I ever wrote was a piece of BASIC to calculate 12-tone matrixes to save myself time in undergrad music theory. (Yes, of course I spent more time writing the code than I would have spent doing things by hand…)
For example, we can create an instance of the row from Alban Berg’s Violin Concerto, use the show() method to display its contents as text, and then create and print a TwelveToneMatrix object.
>>> from music21 import serial
>>> aRow = serial.RowBergViolinConcerto()
>>> aMatrix = aRow.matrix()
>>> print(aMatrix)
0 3 7 B 2 5 9 1 4 6 8 A
9 0 4 8 B 2 6 A 1 3 5 7
5 8 0 4 7 A 2 6 9 B 1 3
1 4 8 0 3 6 A 2 5 7 9 B
A 1 5 9 0 3 7 B 2 4 6 8
7 A 2 6 9 0 4 8 B 1 3 5
3 6 A 2 5 8 0 4 7 9 B 1
B 2 6 A 1 4 8 0 3 5 7 9
8 B 3 7 A 1 5 9 0 2 4 6
6 9 1 5 8 B 3 7 A 0 2 4
4 7 B 3 6 9 1 5 8 A 0 2
2 5 9 1 4 7 B 3 6 8 A 0
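music21 handles the bookkeeping here, but the underlying calculation is small. As a sketch of roughly what my old BASIC program was doing (my own illustration, not music21’s implementation): normalize the row to start on 0, and then entry (i, j) of the matrix is simply (q[j] − q[i]) mod 12.

```python
# Build a twelve-tone matrix from a row (a sketch, not music21's code).
# Normalize the row to begin on 0; each matrix row i is the prime form
# transposed so that it begins on the i-th pitch of the inversion.

def twelve_tone_matrix(row):
    q = [(pc - row[0]) % 12 for pc in row]  # prime form starting on 0
    return [[(q[j] - q[i]) % 12 for j in range(12)] for i in range(12)]

def fmt(pc):
    return "0123456789AB"[pc]  # print 10 and 11 as A and B

# The row from Berg's Violin Concerto, as pitch classes:
berg_row = [0, 3, 7, 11, 2, 5, 9, 1, 4, 6, 8, 10]
for matrix_row in twelve_tone_matrix(berg_row):
    print(" ".join(fmt(pc) for pc in matrix_row))
```

The output matches the matrix printed above, with the prime row along the first line and zeros down the diagonal.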
Last week, I took a vacation day to attend one day of workshops at NYU as part of ImproTech 2012 Paris-New York. The event’s website describes it as:
The ImproTech Paris – NYC 2012 workshop is dedicated to the exploration of the links between musical improvisation and digital technologies.
Gathering researchers and artists from both research & creation scenes, it favors the idea of using digital intelligence as a source of continuous and sophisticated creation, in a complex interaction with live musicians, as opposed to mere decorative digital effects.
My academic training is as a composer; my Master’s degree was in Electronic/Computer Music, and this was an area where I did some (now very primitive) work. Every few years I pick that project back up and restart it based on what I’ve learned as a developer since the last time I worked on it, then get distracted by some other project and let it go into hibernation for a while again. I wasn’t able to get to either of the concerts or the second day of workshops, but the one day I did attend was a day well spent.
Some highlights for me:
George Lewis gave a brief keynote for the event, touching on a number of themes that would recur throughout the day, including a bit of history. His 1993 piece/software Voyager is a classic in the field, and the rest of the day found other presenters citing his work.
The French philosopher Pierre Saint-Germier spent an hour discussing the question of whether it’s even possible for computers to improvise. He started the discussion from the ‘By Definition Objection’ — since all computers are able to do is deterministically execute the instructions they are programmed with, even in the presence of randomness, you cannot say that they are improvising. Eventually, he worked his way around to what he referred to as a ‘retreat position’ — maybe computers can’t replicate improvisation, but perhaps they can simulate it, “and maybe that’s good enough for you.” I’m also having a difficult time reconciling these arguments with the writings of others (like the neuroscientist Sam Harris, who argues that what we perceive as our own free will is an illusion incompatible with the laws of physics).
After ruminating for a while, I’ve decided that the retreat position is good enough for me; I don’t accept that it’s much of a retreat, and the whole argument depends too much on playing with definitions and semantics. Perhaps the most important lesson I learned is that I don’t think I’d make a great philosopher. Also, setting the tone for the rest of the day: even though the only technology his presentation depended on was a PDF slideshow on a USB drive, it took about 10 minutes to work out issues with the projector.
Frédéric Bevilacqua and Norbert Schnell, both from IRCAM, showed work they’ve been doing with their “MO: Modular Musical Objects” project, providing a gestural interface to sound production.
As part of their presentation, they trained the soccer ball shown in the above “Urban Musical Game” Vimeo clip with some new gesture/sound combinations, and tossed it out into the audience. For me, it raised a number of questions about the role of skill and virtuosity in systems like this. I’m not sure to what extent those questions are important, at least at this stage in the development of these systems. I was just proud that, as the soccer ball made its way to me in the audience, I represented my 15-year-old soccer-playing son by playing the ball with my head, not my hands.
After lunch, the presentations were more music than talk.
Violinist Mari Kimura demonstrated some of her recent work using an augmented violin system that’s been developed for her — sensors on her bow hand track its position and movement and that data is used inside patches running in Max. She emphasized several times how important it is for her as a violinist to not have additional pieces of hardware or controllers onstage with her — the easy answer to many of these issues would be to put a foot pedal or two on stage, but she refuses to do that. The hand-tracking system is a very elegant (and essentially invisible) solution to the problem. You can get a taste of it here:
Do check out more of her videos; I especially like one featuring a duet with the LEMUR guitarbot that I last saw on Pat Metheny’s Orchestrion tour.
Saxophonist Steve Lehman performed his piece Manifold for alto sax and a trio of ‘virtual instrumentalists’ running in Max/MSP. One of the things I’ve always liked about his music is his use of meter and tempo to create a wonderfully elastic sense of time, and that was very much present in this piece (modulo some technical problems he spoke about after the performance — even virtual players can have an off day). Here’s a different performance with two additional human performers from the International Contemporary Ensemble.
Steve Coleman (also on alto sax; I’ve been a huge fan of his writing and playing going back to the late 80s) and Gilbert Nouno explained some details of “Musical Arrhythmias,” a piece for sax and electronics that they had performed at the opening-night concert the previous evening. I wish I had been able to hear the piece in its entirety, because the fragments were compelling.
Jaime Oliver showed a computer-vision-based instrument developed as an exploration of the space and intersections between instrument design and composition. In the brief talk preceding his demo, he outlined the idea of ‘Open Scores’, where the capabilities and limitations of a system define a space within which a set of musical possibilities may be explored and realized by the human performer, who completes the piece by performing it. He showed some video of an earlier system called the Silent Drum:
…before doing a live demo of his more recent “MANO” instrument.
The MANO Controller consists of a black rectangular surface, which is sensed with a video camera. The computer algorithm analyzes the image obtained from the video camera, looking for hands and extracting from them the most relevant parameters, which are then used to control sound.
One of the most interesting features of the MANO Controller is that it recognizes from which side a hand is entering the rectangle and is able to classify it as a finger or as a hand and to interpolate between them. Any gesture then is classified in terms of the side where it originates (left, right, bottom) and in terms of its size (as a finger or a hand).
Since a different mapping can be assigned to each combination of side and size (e.g. left-finger or bottom-hand), at any given moment there can be 6 different sound mappings. Furthermore, combinations of these (e.g. left-finger AND bottom-hand) might lead to a new mapping altogether. These classifications of side and size allow the composer to use them combinatorially, controlling movement through the score, and allow the performer to change and interpolate between mappings through continuous gestures without the need for buttons.
For each hand or finger, multiple parameters are obtained. Most parameters are interdependent, that is, if one parameter changes it usually changes another parameter as well. This is an important feature since it allows for organic behavior of sounds.
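To make the combinatorics of that description concrete, here’s a hypothetical sketch of the side/size classification; the side names, the area threshold, and the mapping labels are my own assumptions for illustration, not Oliver’s actual code:

```python
# Hypothetical sketch of the MANO-style classification described above:
# 3 entry sides x 2 blob sizes give 6 base mappings, and simultaneous
# combinations could in principle select further mappings.

SIDES = ("left", "right", "bottom")
SIZES = ("finger", "hand")

def classify(gesture):
    """gesture: dict with 'entry_side' and a normalized blob 'area'."""
    side = gesture["entry_side"]  # assumed to come from the vision stage
    size = "hand" if gesture["area"] > 0.02 else "finger"  # illustrative threshold
    return (side, size)

# One mapping per (side, size) combination -- 6 in total.
mappings = {(s, z): f"{s}-{z}-mapping" for s in SIDES for z in SIZES}

g = {"entry_side": "left", "area": 0.005}
print(mappings[classify(g)])  # left-finger-mapping
```

The point of the scheme is visible even in this toy version: because the selection is driven entirely by where and how the hand enters the frame, the performer never has to reach for a button to change mappings.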
Finally, Robert Rowe from NYU’s Music and Audio Research Lab, whose facility hosted the workshop, gave a presentation on the MARL program and its faculty, all of whom were extremely impressive. When the workshops were done, he offered to take anyone interested on a tour of the facility, an offer a few of us took him up on. I had no expectation that he’d remember, but 20 years ago or so I toured the MIT Media Lab when considering doing PhD work there, and he took me on a similar tour of that facility (including showing me their first NeXT machine, which had just arrived).
A much better day than I would have had at the office.