The thing about reverbs is that they require a lot of state, and nonlinearity is undesirable: a reverb is close to a linear, time-invariant system, so you can capture it as an impulse response and reproduce it with convolution.
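To make "capture" concrete, here's a minimal direct-convolution sketch (my illustration, not any particular product's method). The ir buffer would hold an impulse response measured from the real unit; production convolution reverbs use FFT-based partitioned convolution rather than this naive O(n*m) loop:

    #include <stddef.h>

    /* Direct convolution of a dry signal with a measured impulse
     * response. out must have length n + m - 1 and be zeroed by the
     * caller. */
    void convolve_ir(const float *in, size_t n,
                     const float *ir, size_t m,
                     float *out)
    {
        for (size_t i = 0; i < n; i++)
            for (size_t j = 0; j < m; j++)
                out[i + j] += in[i] * ir[j];
    }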
For NeuralDSP it's a bit different, because they use NNs to simulate a guitar amp circuit, which is a nonlinear system, so there's no simple way to "capture" the effect the way you can for reverb sims or speaker sims. And while you can make a very accurate model of the circuit in something like SPICE, that won't run in real time. Traditional amp modeling basically takes the SPICE version and optimizes and cheats as much as possible so it can run in real time, at the cost of accuracy.
So that's NeuralDSP's goal: a system that approximates the amplifier but can also be computed in real time, except built from a trained NN instead of a human-optimized variant of the SPICE circuit.
They have a couple of whitepapers on their website, though none of them go deep enough to really give away the secret sauce. But according to them, making a NN model of an amplifier at a fixed setting is fairly simple. Where they had to get novel is adjustable settings/parameters, e.g. turning the drive up or the treble down. Just capturing a few hundred or a few thousand snapshot models at different knob positions and cross-fading between them doesn't sound realistic, so they had to come up with a larger model architecture that can "learn" those parameter changes (sketched below the link).
https://www.research.ed.ac.uk/en/publications/neural-modelli...
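A standard trick in the academic literature for this kind of conditioning is to append the normalized knob values to the network's per-sample input, so a single set of weights has to learn the whole control space instead of cross-fading between snapshots. Here's a minimal C sketch of that idea, with hypothetical sizes and names, and explicitly not NeuralDSP's actual architecture:

    #include <math.h>
    #include <stddef.h>

    #define N_KNOBS 2             /* e.g. drive and treble, in [0,1] */
    #define N_IN    (1 + N_KNOBS) /* one audio sample + the knob values */
    #define N_HID   16

    /* Build the conditioned input for the current sample. */
    void make_input(float sample, const float knobs[N_KNOBS],
                    float in[N_IN])
    {
        in[0] = sample;
        for (size_t k = 0; k < N_KNOBS; k++)
            in[1 + k] = knobs[k];
    }

    /* One dense layer: out = tanh(W*in + b). A real amp model would
     * stack several layers, or use recurrent/convolutional layers so
     * the model has memory of past samples. */
    void dense_layer(const float W[N_HID][N_IN], const float b[N_HID],
                     const float in[N_IN], float out[N_HID])
    {
        for (size_t i = 0; i < N_HID; i++) {
            float acc = b[i];
            for (size_t j = 0; j < N_IN; j++)
                acc += W[i][j] * in[j];
            out[i] = tanhf(acc);
        }
    }

Because the knobs are just more inputs, the training data can sweep them across their range and the network learns the amp's response at every setting, not only at the captured positions.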
1. Don’t use NAM. Learn PyTorch.
2. Write SIMD intrinsics by hand. None of the libraries are as fast.
3. Don’t use sigmoid or tanh as your nonlinear activation. Instead, approximate them with the softsign function, x / (1 + |x|), which is much cheaper to evaluate (sketched after this list).
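As a hedged illustration of points 2 and 3 combined (assumptions: an x86 CPU with AVX, a buffer length that's a multiple of 8, and a function name of my own), a hand-written kernel can apply softsign to 8 samples per iteration:

    #include <immintrin.h>
    #include <stddef.h>

    /* softsign(x) = x / (1 + |x|) over a buffer, 8 floats at a time.
     * Assumes n is a multiple of 8; a real kernel would handle the
     * tail. Compile with -mavx. */
    void softsign_avx(float *buf, size_t n)
    {
        const __m256 one = _mm256_set1_ps(1.0f);
        /* Clearing the sign bit yields |x| with one bitwise AND. */
        const __m256 abs_mask =
            _mm256_castsi256_ps(_mm256_set1_epi32(0x7FFFFFFF));
        for (size_t i = 0; i < n; i += 8) {
            __m256 x = _mm256_loadu_ps(buf + i);
            __m256 d = _mm256_add_ps(one, _mm256_and_ps(x, abs_mask));
            _mm256_storeu_ps(buf + i, _mm256_div_ps(x, d));
        }
    }

The only expensive instruction left is the division; tanh, by contrast, needs exponentials or a longer polynomial approximation per sample.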
It depends on the exact architecture, but these optimizations have yielded 10-30x improvements in single-threaded, real-time CPU audio applications.
When GPU audio matures, all of this may be unnecessary.