The real fun is optimising maths: remove all divisions, build LUTs, approximations, CPU-specific tricks. Even though CPUs are orders of magnitude faster now, they are still slow for real-time processing.
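
As a minimal sketch of the no-divisions idea (the normalize helper below is hypothetical), you can hoist a division out of the hot loop and multiply by the reciprocal instead:

    /* Hypothetical sketch: do the divide once, outside the per-sample
       loop. On most CPUs a float divide is several times slower than
       a multiply, so the hot loop should only multiply. */
    void normalize(float *buf, int n, float peak)
    {
        float inv = 1.0f / peak;   /* one divide, outside the loop */
        for (int i = 0; i < n; i++)
            buf[i] *= inv;         /* multiply per sample, no divide */
    }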

Real time does not mean fast; it means deterministic.

Thus such micro-optimizations are seldom used. Quite the opposite: you try to avoid jitter, which can be caused by caches.

While real-time does not mean fast, micro optimisations are frequently used. No one likes slow DSP audio software.

> No one likes slow DSP audio software.

And then there's Diva at its highest output quality setting... :)

Yes, I did think twice about posting that precisely because of Diva.

Jitter does not matter if deadlines are met. It only matters if it can cause deadlines to be missed (sometimes).

If you have a buffer that's being clocked out and your goal is to keep data flowing, the jitter influences how small your buffer can be. Say you're producing 56 kHz audio; the best you can do is produce a [sample] exactly at that frequency. If you have 1 ms of jitter, you now need a 1 ms buffer, so you have delay. If the jitter is small enough, like 0.1 ns of jitter in some SIMD calculation, then for all intents and purposes it doesn't matter for an audio application...
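
A back-of-the-envelope sketch of that buffer-sizing arithmetic (the function name and units are illustrative):

    #include <math.h>

    /* Illustrative sketch: worst-case scheduling jitter sets how many
       samples of buffering you need to keep the output from underrunning. */
    int min_buffer_samples(double jitter_seconds, double sample_rate_hz)
    {
        return (int)ceil(jitter_seconds * sample_rate_hz);
    }
    /* 1 ms of jitter at 56 kHz -> 56 samples (1 ms of extra latency);
       0.1 ns of jitter -> rounds up to 1 sample, i.e. effectively nothing. */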

You've just restated my point. If the deadlines are met, jitter doesn't matter. Ergo, you can't meet deadlines if your jitter is too large. Otherwise, it doesn't matter.

Wouldn't the deadline be now+zero for real-time audio applications? If I'm building a guitar pedal (random example), ideally I want no delay from the input to the output. Any digital delay makes things strictly worse, so any jitter matters. That said, the difference between zero and very close to zero does become a moot point given small enough values for any practical purpose.

There are some digital audio systems that do sample-by-sample processing. Old school Digidesign, for example.

But very little digital audio gear works that way these days. The buffer sizes may be small (e.g. 8 or 16 samples), but most hardware uses block-structured (buffer-by-buffer) processing.

So there's always a delay, even if 1 sample.
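
A toy sketch of that block-structured model (BLOCK size and the gain stage are placeholders), where the callback only ever sees whole blocks, so latency is at least one block:

    /* Toy block-structured processing: the hardware clocks out one block
       while the callback fills the next, so latency is >= BLOCK samples. */
    #define BLOCK 16   /* small block, e.g. 8 or 16 samples */

    void process_block(const float *in, float *out)
    {
        for (int i = 0; i < BLOCK; i++)
            out[i] = in[i] * 0.5f;   /* placeholder DSP: fixed gain */
    }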

Basically "It doesn't matter when it doesn't matter".
> Create LUTs

This has been slower than raw computation for most things for well over a decade (probably more like two).

If there are complex equations involved, it absolutely is faster. You can also create intermediate LUTs, so the tables are small and fit in cache, and then interpolate on the fly.
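
A minimal sketch of the small-table-plus-interpolation idea (table size and contents are hypothetical), keeping the LUT small enough to stay cache-resident:

    /* Hypothetical: a 256-entry table (plus one guard entry) stays well
       within L1; linear interpolation recovers accuracy between entries. */
    #define LUT_SIZE 256
    static float lut[LUT_SIZE + 1];   /* filled offline with the expensive function */

    float lut_eval(float x)           /* x in [0, 1) */
    {
        float pos  = x * (float)LUT_SIZE;
        int   i    = (int)pos;        /* table index */
        float frac = pos - (float)i;  /* fractional position between entries */
        return lut[i] + frac * (lut[i + 1] - lut[i]);
    }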

Not at all; when you work with DSP, even nowadays using LUTs is very common and usually faster.

You are not saving a sine table, but the results of very complex differential equations.

Yeah, isn't hitting memory (especially if it can't fit in the L1/L2 caches) one of the biggest sources of latency? Especially since on modern CPUs it is almost impossible to max out the arithmetic units outside of microbenchmarks?
You don't really do these any more on a modern CPU. This is stuff I used to do 30 years ago, and you might still do it if you're on a microcontroller or some other tiny system. The CPUs aren't slow. The main problem is that if the OS doesn't schedule your process, it doesn't matter how fast the CPU is.

This is great fun! But it's much more prevalent in embedded DSP than desktop.
