- Is there evidence that minimizing finger movement is ergonomically desirable? It seems like "repetitive" is a key part of RSI, so making the exact same small motion over and over again may not be optimal.
I think about piano players, who obviously need to move their hands and arms a lot to hit the keys (and with more force). Definitely takes a lot more energy than typing on a computer keyboard, but is there evidence that it's any more or less likely to cause injury?
- Cool and surprising to see built-in support for the Snyderphonics Manta [1], which is a pretty niche controller. I wrote the `libmanta` library [2] that is vendored into sapf. Haven't touched the library in a few years (though I still use my Manta), so it feels good to see it pop up!
- Thanks for the extra info, I read through some of your entries on GPU optimization and it definitely seems like it's been a journey! Thanks for blazing the trail.
- I’m very curious about your experience doing audio on the GPU. What kind of worst-case latency are you able to get? Does it tend to be pretty deterministic or do you need to keep a lot of headroom for occasional latency spikes? Is the latency substantially different between integrated vs discrete GPUs?
- Does it work with separate browsers on the same machine? Not sure, but I'd guess this sort of filtering is more commonly done in the browser than at the OS level.
- I saw that phrase and thought it was pretty weird. Hunting wild animals for food is not some fringe thing that happens in "other places". I've eaten tons of fish, duck, deer, elk, etc. that were all "wild animals".
- I remember back in 2008-ish Johnny Lee at CMU built a cool hack that tracked the user's head using a Wiimote as an infrared camera, and used it for this kind of effect.
https://youtu.be/Jd3-eiid-Uw?t=147
Turns out that head-tracking parallax is surprisingly effective even without stereo vision. I'd guess the effect works best when your head motion is large relative to the distance between your eyes, and for objects far enough away that stereo vision isn't giving you much depth information anyway.
I don't know exactly where those thresholds are, but I wouldn't be surprised if a pinball machine is in a regime where it works well.
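Rough back-of-the-envelope numbers (the specific values below are just assumptions for illustration, not measurements): both the stereo cue and the motion-parallax cue scale roughly as baseline / distance, so moving your head around "wins" whenever the sweep is a few times larger than your eye separation.

```python
import math

# Illustrative values only.
eye_separation = 0.065  # meters, roughly a typical interpupillary distance
head_sweep = 0.20       # meters, a comfortable side-to-side head movement

for depth in (0.5, 1.0, 2.0):  # distance from the eyes to the object, in meters
    # Small-angle approximation: angular shift ~ baseline / distance.
    stereo_deg = math.degrees(eye_separation / depth)
    motion_deg = math.degrees(head_sweep / depth)
    print(f"{depth:.1f} m: stereo cue ~ {stereo_deg:.1f} deg, head-motion cue ~ {motion_deg:.1f} deg")
```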
- Say I walk into a machine, and then I walk out, and also an exact duplicate walks out of a nearby chamber. My assumption is that we’d both feel like “me”. One of us would have the experience of walking into the machine and walking out again, and the other would have the experience of walking into the machine and being teleported into the other chamber.
I'm probably lacking in imagination, or the relevant background, but I'm having trouble thinking of an alternative.
- The underlying Clang features support compile-time checks as well via the Performance Constraints system: https://conference.audio.dev/session/2024/llvms-real-time-sa...
- > Doing brute force evaluation on 1024² pixels, the bytecode interpreter takes 5.8 seconds, while the JIT backend takes 182 milliseconds – a 31× speedup!
> Note that the speedup is less dramatic with smarter algorithms; brute force doesn't take advantage of interval arithmetic or tape simplification! The optimized rendering implementation in Fidget draws this image in 6 ms using the bytecode interpreter, or 4.6 ms using the JIT backend, so the improvement is only about 25%.
I love how this is focused on how the JIT backend is less important with the algorithmic optimizations, and not on how the algorithmic optimizations give a 1000x improvement with bytecode and 40x with JIT.
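For concreteness, here are the ratios behind that claim, using the timings quoted above:

```python
# Timings quoted above, in seconds.
brute_bytecode, brute_jit = 5.8, 0.182
smart_bytecode, smart_jit = 0.006, 0.0046

print(brute_bytecode / brute_jit)       # ~32x:  JIT vs bytecode, brute force
print(smart_bytecode / smart_jit)       # ~1.3x: JIT vs bytecode, optimized renderer
print(brute_bytecode / smart_bytecode)  # ~970x: algorithmic speedup, bytecode
print(brute_jit / smart_jit)            # ~40x:  algorithmic speedup, JIT
```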
- it's an unfortunate terminology collision.
- array languages: rank is the dimensionality of an array, i.e. a vector is rank-1, a matrix is rank-2, a N-D array is rank-N
- linear algebra: rank is the number of linearly-independent columns (and also rows)
So for example, if you have a 5x5 matrix where 4 of the columns are linearly independent, it would be rank-4 in the linear algebra sense, and rank-2 in the array language sense.
I guess (though I've never really thought of it before) that you could say that the array-language definition is the rank (in the linear algebra sense) of the index space. Not sure if that's intentional.
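A quick NumPy illustration of the two meanings, using the 5x5 example above:

```python
import numpy as np

# Build a 5x5 matrix whose fifth column is a combination of the first two,
# so only 4 of its columns are linearly independent.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 4))
A = np.hstack([A, A[:, [0]] + A[:, [1]]])

print(A.ndim)                    # 2 -> "rank" in the array-language sense
print(np.linalg.matrix_rank(A))  # 4 -> rank in the linear-algebra sense
```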
- OSC (Open Sound Control) is just awesome. It's basically a lightweight protocol on top of UDP packets. It's not hard to roll your own implementation if there isn't one for your platform. It's lacking a lot of features you'd need for a scalable system, but when you just need a few systems to send realtime messages to each other, it's tough to beat.
I've used it a lot for its originally intended use case (sending parameter updates between controllers and music synths), but also for a bunch of other things (e.g. sending tracking information from a Python computer vision script to a Unity scene).
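To give a sense of how lightweight it is, here's a rough sketch of a hand-rolled OSC sender over raw UDP (the address pattern and port are made up for the example; in practice you'd probably just grab an existing library like python-osc):

```python
import socket
import struct

def _pad(b: bytes) -> bytes:
    # OSC strings are null-terminated and padded out to a multiple of 4 bytes.
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *args) -> bytes:
    """Encode a basic OSC message with int, float, and string arguments."""
    tags, payload = ",", b""
    for a in args:
        if isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)  # big-endian float32
        elif isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)  # big-endian int32
        elif isinstance(a, str):
            tags += "s"
            payload += _pad(a.encode())
        else:
            raise TypeError(f"unsupported OSC argument type: {type(a)}")
    return _pad(address.encode()) + _pad(tags.encode()) + payload

# Send a tracked position to whatever is listening on port 9000.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(osc_message("/tracker/position", 0.5, 0.25, 1.0), ("127.0.0.1", 9000))
```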
- I'm a little confused as to the fundamental problem statement. It seems like the idea is to create a protocol that can connect arbitrary applications to arbitrary resources, which seems underconstrained as a problem to solve.
This level of generality has been attempted before (e.g. RDF and the semantic web, REST, SOAP) and I'm not sure what's fundamentally different about how this problem is framed that makes it more tractable.
- zz^* is fine for scalar complex numbers, but z^*z is nice because it also works for vectors. You can think of the complex conjugate as a special case of an adjoint, and the Hermitian transpose is another special case.
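A quick NumPy sanity check of the vector case:

```python
import numpy as np

z = np.array([1 + 2j, 3 - 1j])

# z^H z (conjugate transpose times z) gives the squared norm as a real number,
# just like z^* z does for a single complex scalar.
print(np.conj(z) @ z)  # (15+0j) == |1+2j|^2 + |3-1j|^2
print(np.vdot(z, z))   # same thing; vdot conjugates its first argument
```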
- I did a bunch of contract work last year at a company that was all-in on Julia and it was a really pleasant experience.
IMO one of the issues with Julia is that it’s easy to get nerd-sniped trying to do clever things with the type system and to make as much of your code as possible statically inferable. Code and libraries that rely heavily on type dispatch end up throwing MethodErrors deep in the call stack, far away from your code, which makes them harder to debug.
More mature Julia developers tend to keep things simpler, and make better use of dynamic types instead of contorting to treat it like a statically-typed language.
- One of the issues with Julia for this kind of thing is that it’s tuned for throughput more than latency, especially the sort of worst-case latency you worry about for realtime systems. You have to be careful to make sure any methods you call are compiled ahead of time so they’re not JIT-compiled in the middle of your audio loop. It’s also hard to write zero-allocation code, which is what you need if you don’t want the GC pausing your program at an inopportune time.
- > an error of even 1/8 mm in the placement of the camera would result in a useless image.
That doesn’t make sense to me. Presumably part of the image stitching process is aligning the images to each other based on the areas they overlap, so why do they need that much precision in the camera placement? I’d think keeping the camera square to the painting would be important to minimize needing to skew the images, but that doesn’t seem to be what they’re talking about.
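For what it's worth, here's a rough sketch of how overlap-based alignment typically works in panorama stitching (generic OpenCV feature matching, not whatever pipeline they actually used; the tile filenames are placeholders). The homography estimated from matched features in the overlapping region absorbs small errors in where the camera physically was, which is why sub-millimeter placement shouldn't be make-or-break.

```python
import cv2
import numpy as np

# Two adjacent tiles of the scanned painting (placeholder filenames).
img_a = cv2.imread("tile_a.png", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("tile_b.png", cv2.IMREAD_GRAYSCALE)

# Detect and match features; the overlap region is where matches come from.
orb = cv2.ORB_create(5000)
kp_a, des_a = orb.detectAndCompute(img_a, None)
kp_b, des_b = orb.detectAndCompute(img_b, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_a, des_b)
matches = sorted(matches, key=lambda m: m.distance)[:500]

src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC tolerates outlier matches; the resulting homography soaks up small
# translation/rotation errors in the camera placement.
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
warped = cv2.warpPerspective(img_a, H, (img_b.shape[1], img_b.shape[0]))
```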
- The tube amp simulation market is already pretty…saturated. :)
People have used the same infrastructure to allow you to compile Julia code (with restrictions) into GPU kernels.