Paul Davis (fellow human being and author of the
Ardour DAW) suggested on Twitter that we should have a conversation which would be recorded and available for download/stream via the internet on-demand (apparently this is a thing now?).
It seemed like a reasonable (or dare I say interesting, given our common experiences) suggestion, and generally my willingness to talk to people follows a sort of vaguely gaussian curve, where the X axis is how much they want to talk to me and the Y axis is how willing I am to talk to them: if they don't want to talk to me at all, or if they REALLY want to talk to me, then I'm pretty much not interested, but if they're somewhere in the middle, then sure, why not?
We had our interview (which was enjoyable, and it would have been even without a pasty on my desk, staring me down, waiting for it to be over) using NINJAM in voice chat mode, which mostly works amazingly well, until it doesn't (and then you have to reinitialize the channels -- I've been meaning to fix that for years, but meh). I haven't listened to it in its entirety (nor will I likely), but hopefully I didn't make too much of a fool of myself (thank you, Paul, for editing it). Paul is also being very modest about his interviewing skills; I thought he did an excellent job (for an emacs user), but we're always our own biggest critics (except in those instances where you get asshole critics, though to be fair they are usually just assholes and not so much critics).
Anyway,
here's the interview.
One final note, after our conversation I do find myself wondering how good of musicians we would both be by now, had we not chosen to write DAWs and instead had spent the last 36 collective years with our time devoted to playing rather than programming. It's a completely unknowable and unrealistic question, anyway: whether or not you consider programming an art, it shares with art the most difficult part which is maintaining an interest in a particular thing for extended periods of time. Would I be an amazing guitarist if I'd spent the last 15 years playing guitar for 8 hours a day? Probably. Could I possibly spend 15 years playing guitar for 8 hours a day? No chance in hell.
4 Comments
It's now time for me to bitch about, and document, my experiences dealing with Apple's documentation/APIs/etc. For some reason I never feel the need to do this on other platforms -- maybe it's that they tend to have better documentation or less fuss to deal with, I'm not sure -- but if you search for "macOS" on this blog you'll find previous installments. Anyway, let's begin.
A bit over a year ago, Apple started making computers with their own CPUs, starting with the M1. These have 8 or more cores, a mix of slower and faster ones, with the slower cores having lower power consumption (whether or not they are more efficient per unit of work done is unclear; it wouldn't surprise me if their efficiency was similar under full load, but now I'm just guessing).
The implications of these asymmetric cores for realtime audio are pretty complex, and they can produce all sorts of weird behavior. The biggest issue seems to be when your code ends up running on the efficiency cores even though you need the results ASAP, causing underruns. Counterintuitively, it seems that under very light load things work well, and under very heavy load things work well, but medium loads fail. Also counterintuitively, the newer M1 Pro and M1 Max CPUs, with more performance cores (6-8) and fewer efficiency cores (2), seem to have a larger "medium load" range where things don't work well.
The short summary:
- Ignore the thread QoS APIs; for realtime audio they're apparently not applicable (and do not address these issues). This was the biggest timesink for me -- I spent a ton of time going "why doesn't this QoS setting do anything?" Also, Xcode has a CPU meter which says "QoS unavailable" for each thread... so confusing.
- If a normal thread yields via usleep() or pthread_cond_timedwait() for more than a few hundred microseconds, it'll likely end up running on an efficiency core when it resumes (and it takes an eternity, in terms of audio blocks, to get bumped back to a performance core, by which point there's been an underrun and the thread will probably just go back to sleep anyway). Reducing all sleeps/waits to at most a few hundred microseconds is a way to avoid that fate (though Apple recommends against spinning, likely for good reason). It's not ideal, but you can effectively pin normal threads to performance cores using this method.
- Porting Your Audio Code to Apple Silicon was the most helpful guide (I wish I had seen the link at the bottom of one of the other less-helpful guides sooner! so much time wasted...), though it assumes some knowledge which doesn't seem to be linked in the document:
You want to get your realtime threads into the same thread workgroup as the CoreAudio device's (via kAudioDevicePropertyIOThreadOSWorkgroup), and to do that you first have to make your threads realtime threads using thread_policy_set(THREAD_TIME_CONSTRAINT_POLICY) (side note: we probably should have been doing this already, doh), ideally with parameters similar to what the CoreAudio thread uses, which seems to be a period and constraint of (1000000000.0 * blocksize * mach_timebase_info().denom) / (mach_timebase_info().numer * srate), and a computation of half that. If you don't set this policy, adding your thread to the workgroup will fail (EINVAL in that case means "thread is not realtime", not "workgroup is canceled" as the docs say). Once you do that, your threads are effectively locked to performance cores, and you can start breathing again.
Perhaps this was all obvious and documented and I just failed to read the right things, but I'm putting it here in case somebody like me finds it useful.
4 Comments

The sensation of paddling across the river on a warm October night,
lit by the lights of the city reflected back from the clouds, Jupiter
(or was it Saturn?) poking through, hypnotized by the smooth but
significant swells of the rising tide that (counterintuitively) tries
to send us to the harbor and out to the sea, the rhythms of the
neighboring boats paddling, rising and falling, things that probably
could be captured with technology but instead will be stored in wet
memory, dreamed about on the cold days to follow.
2 Comments
Hopefully in 2036 I'm not calling 2021-me an idiot, too. Here's an interesting old bug situation:
Dan Worrall posted a video on the 1175 JSFX, which was written by Scott Stillwell way back in 2006, and graciously provided for inclusion with REAPER. In this video, Dan finds that the ratio control is completely broken, and posits a fix (adding a division by 2.1).
Sure enough, the code was broken. I went looking through the code [1] to see why, and indeed, there's a line which includes a multiply by 2.08136898, which seems completely arbitrary and incorrect! OK, so we see the bug [2]. How did that constant get there?
When the 1175 JSFX was originally written, REAPER and JSFX were very young, and the JSFX log() (natural log, which we'll call ln() from now on) and log10() implementations were broken. On an 8087 FPU, there are instructions to calculate logarithms, but they work in base 2, so to calculate ln(x) you use log2(x)/log2(e) [3]. Prior to REAPER 1.29 it was mistakenly log2(x)*log2(e), due to the ordering of an fdiv being wrong [4] and insufficient (or rather, non-existent) testing. So when that got fixed, it actually fixed various JSFX to behave correctly, and that should've been the end of it. This is where the stupid part comes in.
I opted to preserve the existing behavior of existing JSFX for this change, modifying code that used log() functions to multiply by log2(e)*log2(e), which is... 2.08136898. Doh. I should've tested these plug-ins before deciding to make the change. Or at least understood them.
Anyway, that's my story for the day. I'm rolling my eyes at my past self. As a footnote, schwa found that googling that constant finds some interesting places where that code has been reused, bug and all.
[1] (beyond our SVN commits and all the way back to our VSS commits, all of which have thankfully been imported into git)
[2] (and it will be fixed in REAPER 6.26; we're calling the non-fixed mode "Broken Capacitor" because that felt about right)
[3] (or, if you're clever and care about speed, log2(x)*ln(2), because ln(X) = logX(X)/logX(e) and logX(X)=1, but now my head hurts from talking about logarithms)
[4] (I haven't fully tested, but I think it was working correctly on Jesusonic/linux, and when porting the EEL2 code to MSVC I didn't correctly convert it.)
3 Comments