After over a decade of work, the final specification documents for MIDI 2.0 have been released to the public!
There’s a fantastic article on the MIDI.org site that explains what the hubbub is all about; rather than rewrite it, I’ll just point you at it: Details about MIDI 2.0
When MIDI 1.0 was released in 1983, the complete document that detailed all you needed to know about it was eight pages long. Expect to need to read a bit more than that in 2020—the full spec for MIDI 2.0 is five separate documents, each looking at a single part of the system:
M2-100: Overview of the specifications
M2-101: Specification of MIDI-CI, the Capability Inquiry portion of MIDI 2.0, which lets devices query each other and negotiate how they can work together.
M2-102: Common Rules for MIDI-CI Profiles, which explains how to define and work with MIDI 2.0 profiles so that devices can automatically adapt to the controllers and other capabilities of the currently connected instruments.
M2-103: Rules for Property Exchange, the new provisions for querying the current settings and capabilities of connected devices.
M2-104: Definition of the new Universal MIDI Packet data structure and the high-resolution MIDI 2 message protocol.
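To give a flavor of what that last document covers: a Universal MIDI Packet is built from 32-bit words, and a MIDI 2.0 channel-voice message (such as Note On, with its new 16-bit velocity) occupies two of them. The sketch below packs one in Python; the field layout reflects my reading of M2-104, so treat the spec itself as the authority on bit positions and the attribute fields (left at zero here).

```python
def ump_note_on(group, channel, note, velocity16):
    """Pack a MIDI 2.0 channel-voice Note On into a 64-bit Universal MIDI Packet.

    Word 0: [message type 0x4][group][status 0x9 | channel][note][attribute type]
    Word 1: [16-bit velocity][16-bit attribute data]
    Attribute type and data are left at zero in this sketch.
    """
    word0 = ((0x4 << 28)        # message type 4: MIDI 2.0 channel voice
             | (group << 24)    # UMP group, 0-15
             | (0x9 << 20)      # Note On status nibble
             | (channel << 16)  # channel, 0-15
             | (note << 8))     # note number, 0-127
    word1 = velocity16 << 16    # 16-bit velocity, vs. 7 bits in MIDI 1.0
    return word0, word1

# Middle C at full velocity on group 0, channel 0
w0, w1 = ump_note_on(group=0, channel=0, note=60, velocity16=0xFFFF)
```

The two words above come out as 0x40903C00 and 0xFFFF0000; compare that to the three bytes (0x90 0x3C 0x7F) the same gesture would take in MIDI 1.0.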
Before you can access these documents, you’ll need to create a (free!) account with The MIDI Association, which is an organization of MIDI users. If you’re not already a member, the link to access the docs will redirect you first to the login/account creation page.
Download everything here and then go make cool stuff with it.
[Image: spectrogram of a swelling trumpet sound]
Art+Logic’s Incubator project has made a lot of progress. In a previous post I mentioned that Dr. Scott Hawley’s technique for classifying audio involves converting the audio to an image and using a Convolutional Neural Network (CNN) to classify the audio based on that image. That image is a spectrogram. I’m going to go into some detail about how we create one, and, to the best of my ability, why.
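The basic recipe is to slide a window along the signal, take the FFT of each windowed frame, and stack the magnitudes into a 2D time-frequency image. Here’s a minimal NumPy sketch of that idea (not the Incubator project’s actual code; the window size and hop length are illustrative):

```python
import numpy as np

def spectrogram(signal, window_size=512, hop=256):
    """Compute a log-magnitude spectrogram via a windowed STFT."""
    window = np.hanning(window_size)  # taper each frame to reduce spectral leakage
    frames = []
    for start in range(0, len(signal) - window_size + 1, hop):
        frame = signal[start:start + window_size] * window
        # Magnitudes of the positive-frequency FFT bins for this frame
        frames.append(np.abs(np.fft.rfft(frame)))
    # Log scaling compresses the dynamic range, roughly like human hearing
    return np.log1p(np.array(frames).T)  # shape: (freq_bins, time_frames)

# A one-second 440 Hz test tone sampled at 8 kHz
sr = 8000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
```

The resulting array can be saved as a grayscale image and fed to the CNN like any other picture; for the test tone above, the energy concentrates in the frequency bin nearest 440 Hz across all frames.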
For the past year or so, I’ve been working as one of a group of developers within the Protocol Working Group of the MIDI Manufacturers Association, creating prototype tools and applications that implement the upcoming MIDI 2.0 specification as it has worked its way through many drafts. It’s now ready for the MMA and AMEI, its Japanese counterpart, to vote on its adoption as an official standard.
I’m looking forward to presenting more information on what’s new for musicians and developers in the new standard, both here on the A+L blog and out in the real world.
“It’s going to be the coolest thing ever.”
You know enough by now to be doubtful when a client makes this statement, but you’re willing to entertain the idea that this may not, in fact, be a tragedy in the making.
“It’s going to be a music machine – like, full keyboard and everything – but each of the keys is going to be mapped to – wait for it – cat sounds! We’ll call it the ‘Meowsic Machine’! Oh, and we need it to be accessible to everyone via the Web. Which is easy, right?”
You are reminded that the universe can be a cruel place.
It’s now your job to make this happen. Over the course of a few posts, we’re going to look at the Web Audio API and build the Meowsic Machine together. In the process, we’ll also enjoy a dalliance with Vue.js and dip our toes into the deep end with Web Workers. Today, we take the first step in this historic journey: convincing the browser to actually let us play sounds.
Art+Logic has kicked off its first software Incubator project, and I was selected to handle the development effort. After meeting Dr. Scott Hawley, being briefed on the technique he uses to classify audio files using neural networks (NN), and determining current and future features, we were ready to begin the project. As we go through this process, I’ll be documenting it on this blog.