Okay — so far, we have an in-progress Scumbler application that can interface with audio hardware and route audio signals through itself (part 1) and also load third-party audio effects plugins into that audio stream (part 2). This time, we’ll add code that processes the audio to create the gradually fading loop that is the heart of the whole system. JUCE provides a few base classes that will once again simplify our work here greatly — all of the common behavior that we want to be able to support is abstracted away cleanly into the framework, and we just add the code that makes our app unique here.
AudioProcessor
http://www.rawmaterialsoftware.com/juce/api/classAudioProcessor.html
Classes derived from the AudioProcessor base class inherit a ton of useful functionality. The AudioProcessorGraph that we saw in part 1 of this series is designed to hold all of the audio processors used in your app and connect them together to implement your desired signal flow (and is itself an AudioProcessor). JUCE also includes classes that you can use to ‘wrap’ your signal processing code so that it can be distributed as a plugin and hosted in other audio applications. For our purposes, the interesting member function is processBlock(). It is called repeatedly by the high-priority thread that handles all audio I/O for the application, and it receives references to two objects: an AudioSampleBuffer containing new input samples and a MidiBuffer containing time-stamped MIDI messages. Before the function returns, it must replace the contents of those buffers with the output of your effect algorithm. Because this function runs as part of audio I/O, we must spend no longer processing each block than absolutely necessary, and because the app is now clearly multi-threaded, we need to protect any member variables that will be accessed from both the audio and user interface threads.
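To make the shape of that contract concrete, here is a bare-bones sketch (mine, not part of the Scumbler source) of a processor whose processBlock() simply applies a fixed gain to whatever arrives. The many other virtual members that AudioProcessor requires are omitted.
[sourcecode language="cpp"]
// Illustrative only: a trivial in-place processor. A real subclass must
// also implement AudioProcessor's other pure virtual functions (name,
// editor, program and state handling, etc.).
class GainProcessor : public AudioProcessor
{
public:
    void processBlock(AudioSampleBuffer& buffer, MidiBuffer& midiMessages)
    {
        // Work on the samples in place; whatever is left in `buffer` when
        // we return is what the graph passes on downstream.
        for (int channel = 0; channel < buffer.getNumChannels(); ++channel)
        {
            buffer.applyGain(channel, 0, buffer.getNumSamples(), 0.5f);
        }
    }
    // ...remaining AudioProcessor overrides omitted for brevity...
};
[/sourcecode]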
AudioSampleBuffer
http://www.rawmaterialsoftware.com/juce/api/classAudioSampleBuffer.html
Even by the high standards of the rest of JUCE, the AudioSampleBuffer class is incredibly useful and utilitarian. Rather than cobbling together containers to hold the lists of floating-point numbers that we use to represent audio samples, this class cleanly represents a block of audio. It understands that we probably need multiple synchronized channels of audio, and that we may want to efficiently change the gain applied to a block of samples (and to change that gain over time to implement fades in and out). It lets us copy blocks of samples into and out of sample buffers. It lets us quickly find the highest and lowest samples in a region of the buffer (useful for displaying a waveform, where a single pixel has to represent more than one sample’s worth of data) or the root mean square (RMS) level of a region of samples (useful for metering-style displays).
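A few illustrative calls (arbitrary sizes and sample rate, not taken from the Scumbler code) show the kind of work the class does for us:
[sourcecode language="cpp"]
// Create a 2-channel buffer big enough for 4 seconds at 44.1 kHz and silence it.
AudioSampleBuffer loop(2, 44100 * 4);
loop.clear();
// Fade channel 0 from full volume down to silence across the whole buffer.
loop.applyGainRamp(0, 0, loop.getNumSamples(), 1.0f, 0.0f);
// Copy 512 samples of channel 0 from some other buffer (here called `input`)
// into the start of the loop buffer:
// loop.copyFrom(0, 0, input, 0, 0, 512);
// Peak and RMS levels of the first second of channel 0, handy for
// waveform and meter displays.
float peak = loop.getMagnitude(0, 0, 44100);
float rms  = loop.getRMSLevel(0, 0, 44100);
[/sourcecode]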
CriticalSection & ScopedLock
http://www.rawmaterialsoftware.com/juce/api/classCriticalSection.html
Since we’re working with multiple threads, it’s important that we be able to guarantee that a chunk of code will run to completion without another thread interrupting it and misbehaving because our member variables were left in an inconsistent state. The CriticalSection class is a simple re-entrant mutex that we can use for this. It’s especially useful with the associated ScopedLock class, which uses C++’s RAII idiom: it claims the CriticalSection’s lock when it is created and guarantees that the lock is released when the ScopedLock is destroyed, whether that destruction happens because we leave the scope where it was declared or because an exception was thrown. Programming with threads can be obnoxious and tricky, but classes like these make it much less so. (Well, still tricky, but less obnoxious.)
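The pattern is short enough to show in full. This is an illustrative class of my own, not Scumbler code, but the looper’s processBlock() below uses the lock in exactly the same way, and any setter called from the UI thread would grab the same mutex before touching shared state.
[sourcecode language="cpp"]
// A UI-thread setter and an audio-thread getter share one CriticalSection.
// Whichever thread takes the lock first runs its block to completion;
// the other waits, so neither ever sees a half-finished update.
class SharedGain
{
public:
    SharedGain() : fGain(1.0f) {}

    void SetGain(float gain)
    {
        ScopedLock sl(fMutex);  // blocks until no other thread holds the lock
        fGain = gain;
    }                           // lock released here, even if an exception is thrown

    float GetGain() const
    {
        ScopedLock sl(fMutex);
        return fGain;
    }

private:
    CriticalSection fMutex;     // JUCE's CriticalSection can be locked from const members
    float fGain;
};
[/sourcecode]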
The LoopProcessor Class
With those three things, we’re ready to create the processor class that loops audio for us. Requirements for this class include the following (a rough sketch of the class declaration follows the list):
- We can either be playing or not. If we’re not playing, any samples that arrive at our processBlock() function are passed through without modification.
- The duration of the loop must be changeable by the user. We’ll default to a duration of 4 seconds.
- The loop feedback is adjustable between 0 and -96 dB. Each time through the loop, we apply that gain to the existing contents of the loop buffer.
- New samples being passed into the loop are mixed into the current contents of the loop, and stored into the loop buffer for the next pass. Those mixed samples are also the output values for this call to processBlock().
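Before diving into processBlock(), here is the rough shape of the class declaration. This is reconstructed from the member variables used in the code below rather than copied from the Scumbler headers, so treat the constructor signature, the setter names, and the Track type as guesses:
[sourcecode language="cpp"]
// Sketch only: inferred from the processBlock() listing that follows.
class LoopProcessor : public AudioProcessor
                    , public ChangeBroadcaster   // lets us sendChangeMessage() to observers
{
public:
    LoopProcessor(Track* track, int channelCount);   // hypothetical signature

    void SetFeedback(float gain);             // linear gain derived from the 0..-96dB setting
    void SetLoopDuration(int milliseconds);   // defaults to 4000 ms

    void processBlock(AudioSampleBuffer& buffer, MidiBuffer& midiMessages);
    // ...other AudioProcessor overrides omitted...

private:
    Track* fTrack;                     // knows whether we're currently playing
    CriticalSection fMutex;            // guards everything below
    ScopedPointer<AudioSampleBuffer> fLoopBuffer;   // one loop's worth of samples
    int fLoopPosition;                 // next read/write index into the loop buffer
    int fLoopCount;                    // how many times we've wrapped around
    float fFeedback;                   // feedback gain applied to old loop contents
    int fChannelCount;                 // number of channels we process
};
[/sourcecode]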
The only tricky part of the processBlock() member function is remembering that the size of our loop sample buffer is almost certainly not an integer multiple of the size of the buffers being passed to that function, so we need to handle the cases where we wrap around in time and have to assemble the output from some samples at the end of the loop plus some samples from its beginning. The processBlock() member function looks like this; with the comments, I hope it’s easy to puzzle out what’s going on.
[sourcecode language="cpp"]
void LoopProcessor::processBlock(AudioSampleBuffer& buffer, MidiBuffer& midiMessages)
{
    if (fTrack->IsPlaying())
    {
        // Lock down all of the protected code sections.
        ScopedLock sl(fMutex);
        int sampleCount = buffer.getNumSamples();
        int loopSampleCount = fLoopBuffer->getNumSamples();
        float feedbackGain = fFeedback;
        for (int channel = 0; channel < fChannelCount; ++channel)
        {
            // this is easy if we don't need to wrap around the loop
            // buffer when processing this block
            if (fLoopPosition + sampleCount < loopSampleCount)
            {
                // Add samples from 1 loop ago, multiplying them by
                // the feedback gain.
                buffer.addFrom(channel, 0, *fLoopBuffer, channel,
                    fLoopPosition, sampleCount, feedbackGain);
                // ... and copy the mixed samples back into the loop buffer
                // so we can play them back out in one loop's time.
                fLoopBuffer->copyFrom(channel, fLoopPosition, buffer,
                    channel, 0, sampleCount);
            }
            else
            {
                // first, process as many samples as we can fit in at the
                // end of the loop buffer.
                int roomAtEnd = loopSampleCount - fLoopPosition;
                // and we need to put this many samples back at the
                // beginning.
                int wrapped = sampleCount - roomAtEnd;
                // add samples from a loop ago, adjusting feedback gain.
                // part 1:
                buffer.addFrom(channel, 0, *fLoopBuffer, channel,
                    fLoopPosition, roomAtEnd, feedbackGain);
                // part 2:
                buffer.addFrom(channel, roomAtEnd, *fLoopBuffer, channel,
                    0, wrapped, feedbackGain);
                // and now copy the mixed samples back into the loop buffer:
                // part 1:
                fLoopBuffer->copyFrom(channel, fLoopPosition, buffer,
                    channel, 0, roomAtEnd);
                // part 2:
                fLoopBuffer->copyFrom(channel, 0, buffer, channel,
                    roomAtEnd, wrapped);
            }
        }
        // set the loop position for the next block of data.
        fLoopPosition = fLoopPosition + sampleCount;
        if (fLoopPosition >= loopSampleCount)
        {
            fLoopPosition -= loopSampleCount;
            ++fLoopCount;
        }
        // Notify anyone who's observing this processor that we've
        // gotten new sample data.
        this->sendChangeMessage();
    }
}
[/sourcecode]
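One loose end from the requirements list is the user-settable loop duration. Here is one way it might be handled; this is my own sketch (the real Scumbler code may differ), but it shows why such a setter has to take the same lock that processBlock() holds.
[sourcecode language="cpp"]
// Hypothetical setter: resize the loop buffer for a new duration. It touches
// state that the audio thread reads in processBlock(), so it grabs fMutex first.
void LoopProcessor::SetLoopDuration(int milliseconds)
{
    ScopedLock sl(fMutex);
    int sampleCount = static_cast<int>(this->getSampleRate() * milliseconds / 1000.0);
    // Throw away the old contents and clear the resized buffer to silence.
    fLoopBuffer->setSize(fChannelCount, sampleCount, false, true);
    fLoopPosition = 0;
    fLoopCount = 0;
}
[/sourcecode]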