Is it faster for large models, or are the optimizations more noticeable with small models? Seeing that the benchmark uses a 0.6B model made me wonder about that.
tough (OP)
I haven't tested that, but it's from a DeepSeek employee. I don't know whether it's used in prod there or not!
1. https://github.com/GeeeekExplorer/nano-vllm