Not sure why Core Audio isn’t an Objective-C API
It’s for performance, right? That’s the only good reason I can think of, and it sounds sensible: ultra-low latency and all that, so you probably don’t want that Objective-C dispatch overhead.
I just did an experiment, however. I dislike working with C APIs, so I’m writing Cocoa wrappers for Core Audio, exposing those pesky Component properties that take five lines to set or get as simple methods. Suddenly I thought, “Wait, what if I try to use an Objective-C method as a render callback? Those need very low latency and are called often, so that’s where I should see some of that Objective-C overhead.”
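Roughly, the idea is a tiny C trampoline that does nothing but forward each render call to an Objective-C method. Something like this (a quick sketch, not my actual code; SineGenerator and the method name are made up for illustration):

```objc
// Sketch only: the class and method names are placeholders. A plain C
// callback forwards each render call to an Objective-C method.
#import <AudioUnit/AudioUnit.h>
#import <Foundation/Foundation.h>
#include <math.h>

@interface SineGenerator : NSObject
{
    double phase;       // current phase of the sine, in radians
    double frequency;   // tone frequency in Hz
    double sampleRate;  // output sample rate in Hz
}
- (OSStatus)renderIntoBufferList:(AudioBufferList *)ioData
                      frameCount:(UInt32)inNumberFrames;
@end

@implementation SineGenerator
- (OSStatus)renderIntoBufferList:(AudioBufferList *)ioData
                      frameCount:(UInt32)inNumberFrames
{
    // Assumes a single buffer of 32-bit floats.
    float *out = (float *)ioData->mBuffers[0].mData;
    double increment = 2.0 * M_PI * frequency / sampleRate;
    for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
        out[frame] = (float)sin(phase);
        phase += increment;
        if (phase > 2.0 * M_PI)
            phase -= 2.0 * M_PI;
    }
    return noErr;
}
@end

// The C callback registered with kAudioUnitProperty_SetRenderCallback;
// inRefCon is the SineGenerator instance handed over in the
// AURenderCallbackStruct's inputProcRefCon field.
static OSStatus RenderTrampoline(void                       *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp       *inTimeStamp,
                                 UInt32                      inBusNumber,
                                 UInt32                      inNumberFrames,
                                 AudioBufferList            *ioData)
{
    SineGenerator *generator = (SineGenerator *)inRefCon;
    return [generator renderIntoBufferList:ioData frameCount:inNumberFrames];
}
```

The only per-render cost this adds over a pure C callback is one message send, plus whatever dispatch happens inside the method.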
A very unscientific comparison of a simple sine renderer in C and Objective-C, on an MBP 1.83x2 (source available on request):
- CPU usage in app using C callback: 3.0%
- CPU usage in app using ObjC callback: 3.1%
This is by no means compelling evidence that Core Audio should be Objective-C; I’m just saying it seems more feasible than I originally thought. Also, actually thinking about the problem, I realize that the callback is only called 44100/512 ≈ 86 times a second, and each call has about 11.6 ms (512/44100 s) to complete, which is astronomically long in computer terms.
But NeXT did it that way, didn’t they? I seem to remember that NeXT had basically /everything/ in Objective-C, including drivers, audio, and such things. So why not Mac OS X? NeXT was hardly known for being a slow OS. Tell me what I’m missing in the comments.