(Audio reactive sphere by Andreas Köberle.)
For years, tools like NVIDIA’s CUDA and OpenCL have made it possible to write general-purpose code for the number-crunching graphics cards in almost all of our devices, yet outside high-performance computing that power is rarely applied to non-graphical tasks.
Yes. Hitting the upper limits of the CPU in audio work is hardly common these days, and according to the authors the GPU may deserve part of the credit: “…the GPU has been used to ease the load on the CPU, caused by the computational complexity of generating and processing many sounds simultaneously.” Mixing multiple buffers of audio on the GPU is a natural fit. But they have shown that’s not all it can do well.
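To see why mixing is such a natural fit, consider that every output sample is an independent weighted sum, so a GPU can assign one thread per sample with no coordination between them. Here is a minimal sketch of that operation in plain Python (the function name and gain handling are my own illustration, not from the paper); on a GPU, each iteration of the inner loop would simply be a separate thread.

```python
def mix_buffers(buffers, gains=None):
    """Sum several equal-length sample buffers into one output buffer.

    Each output sample out[i] depends only on the i-th sample of each
    input, which is what makes this embarrassingly parallel on a GPU.
    """
    n = len(buffers[0])
    # Default to averaging so the mix cannot clip when inputs are in range.
    gains = gains or [1.0 / len(buffers)] * len(buffers)
    out = [0.0] * n
    for g, buf in zip(gains, buffers):
        for i in range(n):
            out[i] += g * buf[i]
    return out
```

For example, averaging two buffers `[1.0, 1.0]` and `[3.0, 3.0]` yields `[2.0, 2.0]`.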
Their GPU ran higher-resolution simulations without dropouts than even the latest i7 chips available to them. The 2.7GHz Intel Core i7 maxed out at 65% of what the NVIDIA GeForce GT 650M could do before dropouts occurred. This performance is achieved despite the fact that “Realtime finite difference synthesis … is arguably not an efficient use of the GPU”.
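For readers unfamiliar with finite difference synthesis: it models a vibrating object (say, a string) as a grid of points, and advances the whole grid by one explicit update per audio sample. A common scheme for the 1D wave equation looks like the sketch below (a generic textbook scheme, not code from the paper); each grid point’s update depends only on its neighbors from the previous steps, so a GPU can dedicate one thread per point, but the grid must be stepped tens of thousands of times per second, which is why the authors call it arguably inefficient GPU work.

```python
def string_step(u_prev, u_curr, courant2):
    """One explicit finite-difference time step for an ideal string:
    u_next = 2*u_curr - u_prev + courant2 * laplacian(u_curr),
    where courant2 is the squared Courant number (must be <= 1 for stability).
    """
    n = len(u_curr)
    u_next = [0.0] * n  # fixed (Dirichlet) endpoints stay at zero
    for i in range(1, n - 1):
        lap = u_curr[i + 1] - 2.0 * u_curr[i] + u_curr[i - 1]
        u_next[i] = 2.0 * u_curr[i] - u_prev[i] + courant2 * lap
    return u_next
```

Starting from a single displaced point, one step spreads energy to the neighbors, e.g. `string_step([0,0,0,0,0], [0,0,1,0,0], 0.5)` gives `[0, 0.5, 1.0, 0.5, 0]`.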
What computations will you be offloading to the GPU?
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.