Did you see nano-vllm [1] yesterday? It's from a DeepSeek employee, ~1200 lines of code, and faster than vanilla vLLM.

1. https://github.com/GeeeekExplorer/nano-vllm
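The API mirrors vLLM's. Going by the example in the repo README, usage looks roughly like this (the model path is a placeholder for a locally downloaded checkpoint):

    # Minimal nano-vllm usage, adapted from the repo README.
    # "/path/to/your/model" is a placeholder for a local checkpoint.
    from nanovllm import LLM, SamplingParams

    llm = LLM("/path/to/your/model", enforce_eager=True, tensor_parallel_size=1)
    sampling_params = SamplingParams(temperature=0.6, max_tokens=256)

    outputs = llm.generate(["Hello, Nano-vLLM."], sampling_params)
    print(outputs[0]["text"])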


Gracana
Is it faster for large models, or are the optimizations more noticeable with small models? Seeing that the benchmark uses a 0.6B model made me wonder about that.
tough (OP)
I have not tested it, but it's from a DeepSeek employee. I don't know if it's used in prod there or not!
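If anyone wants to check Gracana's question, a rough timing harness like the sketch below would do it. Run it once per engine (both want the whole GPU), and treat the token count as an upper-bound estimate; the model path and prompt set are placeholders, and the constructor arguments follow the README example above:

    # Rough throughput sketch: time one batch of prompts through one engine.
    # Swap the import for "from vllm import LLM, SamplingParams" to compare.
    import time

    from nanovllm import LLM, SamplingParams

    MODEL = "/path/to/model"  # try a 0.6B and a much larger checkpoint
    prompts = ["Summarize paged attention in one paragraph."] * 256
    params = SamplingParams(temperature=0.6, max_tokens=128)

    llm = LLM(MODEL, enforce_eager=True, tensor_parallel_size=1)

    start = time.perf_counter()
    llm.generate(prompts, params)
    elapsed = time.perf_counter() - start

    # Assumes every request runs to max_tokens, so this is an upper bound.
    approx_tokens = len(prompts) * 128
    print(f"{elapsed:.1f}s, ~{approx_tokens / elapsed:.0f} tok/s")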