I’ve been writing a series of posts here over the last few months discussing the JUCE C++ application framework and how useful it’s been in creating the ‘Scumbler’ looping audio performance application that’s my current nights & weekends project. One of the important requirements I had for the project was that it be able to use existing VST or AU audio effects plug-ins to process the audio during performance.
Over the years since I graduated with a degree in electronic and computer music, I’ve accumulated a fairly large shelf of books on digital signal processing theory and applications. Earlier this year, Focal Press released what is, to my mind, the most usable book yet on writing audio effects plug-ins. It’s very easy to find books explaining the concepts behind DSP for theorists, or for people who already have very heavy math backgrounds; it’s rarer to find resources targeted at interested and motivated practitioners. Anyone coming to this book with a relatively solid background in C++ programming and high-school math (trig and at least pre-calculus) should be able to work through it and come out the other end with an understanding of much of the DSP that’s needed out in the wild.
Several times we’ve worked on projects in the pro audio/music instrument industries that have used a very useful C++ cross-platform application framework called ‘JUCE’. It was originally developed as part of the ‘Tracktion’ digital audio workstation, and later extracted as a standalone framework (in much the same way that Ruby on Rails was extracted from 37 Signals’ work on Basecamp). For open source projects, JUCE is licensed under the GPL, and for commercial projects it’s licensed under a very reasonable per-company license (not per-project, or per-year). Applications written using the framework can be deployed on Windows, OS X, Linux, iOS, and Android.
For audio developers, it’s an incredibly useful framework, containing solid and well-conceived classes for many of the lower-level tasks that any application processing audio will need to handle: opening and configuring multichannel audio devices (which may involve dealing with numerous different driver stacks), finding and loading audio effects plug-ins (again, several different formats exist in the wild, including VST and Apple’s AudioUnits), accepting and generating MIDI data, implementing audio effects algorithms, reading and writing audio files, and so on.
I’ve worked on a few projects that used JUCE, but was never involved in any of the work that touched the audio layer. I’ve been considering porting the next version of a long-standing personal music software system over to JUCE, and after spending some time poring over the documentation and sample code, I decided that it would be best for my sanity to start with a smaller project as a learning experience, and document what I learn as I go.
As mentioned here and here on the A&L blog, Audiobus, an app for live app-to-app audio streaming on iOS, has launched today with about a dozen supported apps, with more to come. This app and SDK look to be a game changer for audio production on iOS devices. The developers have shown previews of multitrack recording, which will make it even better.
As a follow-up to Audiobus – Live app-to-app audio streaming for iOS, here’s a recently released demo video of some Audiobus-enabled apps in action. The Audiobus team is working with third-party developers to integrate Audiobus into their apps and coordinate on a launch date. The demos they’ve released so far look awesome.