"Wouldn’t it be cool if…"
We’ve all had that thought about something. How do you get from there to a product you can use and share with others? When I found myself playing with an idea that seemed exciting and new, I decided to capture some notes on the process, from the perspective of an Engineering Manager at a company that’s all about implementing people’s creative visions.
Last year, I posted here about an animation control framework called ‘Friz’ that works within the JUCE Application Framework.
As I said in that post:
After over a decade of work, the final specification documents for MIDI 2.0 have been released to the public!
There’s a fantastic article on the MIDI.org site that explains what the hubbub is all about; rather than rewriting it, I’ll just point you at it: Details about MIDI 2.0
When MIDI 1.0 was released in 1983, the complete document that detailed all you needed to know about it was eight pages long. Expect to need to read a bit more than that in 2020—the full spec for MIDI 2.0 is five separate documents, each looking at a single part of the system:
M2-100: Overview of the specifications
M2-101: Specification of MIDI-CI, the Capability Inquiry portion of MIDI 2 that lets devices query each other and negotiate how they can work together.
M2-102: Common Rules for MIDI-CI Profiles, which explains how to define and use MIDI 2 profiles: sets of controllers and other configuration data that let devices adapt automatically to the capabilities of the currently connected instruments.
M2-103: Rules for Property Exchange, the new provisions for querying the current settings and capabilities of connected devices.
M2-104: Definition of the new Universal MIDI Packet data structure and the high-resolution MIDI 2 message protocol (there’s a rough sketch of the packet layout just after this list).
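To make the Universal MIDI Packet a bit more concrete, here’s a rough Python sketch of packing a MIDI 2.0 Note On into the two 32-bit words described in M2-104. The field layout follows the published spec; the function name and defaults are purely illustrative, not anything from the documents themselves.

```python
def midi2_note_on(group, channel, note, velocity, attr_type=0, attr_data=0):
    """Pack a MIDI 2.0 Note On into a 64-bit Universal MIDI Packet.

    MIDI 2.0 channel voice messages use message type 0x4 and occupy
    two 32-bit words; velocity is 16 bits instead of MIDI 1.0's 7.
    """
    word0 = ((0x4 << 28)                 # message type: MIDI 2.0 channel voice
             | ((group & 0xF) << 24)     # UMP group (0-15)
             | (0x9 << 20)               # status: Note On
             | ((channel & 0xF) << 16)   # channel within the group
             | ((note & 0x7F) << 8)      # note number, still 7 bits
             | (attr_type & 0xFF))       # attribute type (0 = none)
    word1 = (((velocity & 0xFFFF) << 16) # 16-bit velocity
             | (attr_data & 0xFFFF))     # attribute data
    return word0, word1

# Middle C on group 0, channel 0, at full 16-bit velocity:
print([hex(w) for w in midi2_note_on(0, 0, 60, 0xFFFF)])
# ['0x40903c00', '0xffff0000']
```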
Before you can access these documents, you’ll need to create a (free!) account with The MIDI Association, which is an organization of MIDI users. If you’re not already a member, the link to access the docs will redirect you first to the login/account creation page.
Download everything here and then go make cool stuff with it.
[Image: spectrogram of a swelling trumpet sound]
Art+Logic’s Incubator project has made a lot of progress. In a previous post I mentioned that Dr. Scott Hawley’s technique for classifying audio involved converting the audio into an image and using a Convolutional Neural Network (CNN) to classify it based on that image. That image is a spectrogram. I’m going to go into some detail about how we create one and, to the best of my ability, why.
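As a taste of what that looks like, here’s a minimal sketch of the conversion using Python and librosa. The filename and the FFT parameters here are placeholder assumptions, not the Incubator project’s actual settings; the essential idea is a short-time Fourier transform whose magnitudes get mapped to decibels, producing the image a CNN can consume.

```python
import numpy as np
import librosa

# Load the audio (filename is hypothetical); sr=None keeps the native sample rate.
y, sr = librosa.load("trumpet_swell.wav", sr=None, mono=True)

# Short-time Fourier transform: slide a 1024-sample window across the signal,
# hopping 256 samples at a time, and keep the magnitude of each FFT frame.
S = np.abs(librosa.stft(y, n_fft=1024, hop_length=256))

# Convert amplitudes to decibels. Log scaling matches how we hear loudness
# and gives the CNN a better-conditioned image to learn from.
S_db = librosa.amplitude_to_db(S, ref=np.max)

# S_db is now a 2-D array (frequency bins x time frames): the spectrogram.
print(S_db.shape)
```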
Next week (18-20 November) I’ll be attending the annual Audio Developer Conference in London. On Tuesday November 19th at 16:00 I’ll be part of a team providing the first public details about the forthcoming MIDI 2.0 standard.
The ADC is usually live-streamed on YouTube as it happens, but an unfortunate series of events has endangered that this year. You can learn more about that and consider contributing to the fund that will pay for recording and livestreaming the conference sessions; I frequently return to the archived videos and point other developers to them for reference.
Check the JUCE YouTube channel for the streams during the event (and come back later for archived recordings, or watch sessions from earlier years).
The full schedule for the event is here.
If you’re attending the event, please do track me down and say ‘hey’.