
b0a04gl
I was skimming through this and was kind of surprised how tight the whole thing is. It does maybe 90% of what vLLM does, but the code is readable end to end. No extra infra, no orchestration layers yelling at you. I got it running locally in minutes, and throughput actually beat vLLM on my 4070. Wasn't expecting that.

If we can get this level of performance in 1.2k lines, what if we go the other way: split the model across devices or even machines, stream token by token, but keep the prefix cache consistent across hops? Can we design inference engines that think in terms of modular attention scopes instead of monolithic graphs? Is it even possible? A rough sketch of what I mean by cache consistency across hops is below.
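A minimal sketch (not from this repo, and not how vLLM actually does it) of one way to keep a prefix cache consistent across hops: key KV blocks by a chained hash of the token-id prefix, so any stage or machine can tell whether it already holds, or can fetch, the KV for a block without recomputing the prefix. Names like `PrefixCache`, `prefix_keys`, and `BLOCK_SIZE` are made up for illustration.

```python
# Sketch: content-addressed prefix-cache keys shared across pipeline hops.
# Assumptions: fixed block size, SHA-256 chained hashing, opaque KV handles.
from dataclasses import dataclass, field
from hashlib import sha256

BLOCK_SIZE = 16  # tokens per cached KV block (assumed)


def block_hash(prev_hash: bytes, tokens: list[int]) -> bytes:
    """Chain hash: identical prefixes yield identical keys on every hop."""
    h = sha256(prev_hash)
    h.update(str(tokens).encode("utf-8"))
    return h.digest()


@dataclass
class PrefixCache:
    """Maps block hash -> opaque KV handle (local tensor, remote ref, ...)."""
    blocks: dict = field(default_factory=dict)

    def lookup(self, key: bytes):
        return self.blocks.get(key)

    def insert(self, key: bytes, kv_handle):
        self.blocks.setdefault(key, kv_handle)


def prefix_keys(token_ids: list[int]) -> list[bytes]:
    """Keys for all full blocks of a prompt, in order."""
    keys, prev = [], b""
    full = len(token_ids) - len(token_ids) % BLOCK_SIZE
    for i in range(0, full, BLOCK_SIZE):
        prev = block_hash(prev, token_ids[i : i + BLOCK_SIZE])
        keys.append(prev)
    return keys


# Toy usage: two "hops" consulting one logical cache. A real system would
# replicate or shard the cache and ship KV handles between machines.
cache = PrefixCache()
prompt = list(range(40))  # 40 token ids -> 2 full blocks + remainder

for hop in ("stage-0", "stage-1"):
    hits = [k for k in prefix_keys(prompt) if cache.lookup(k) is not None]
    print(hop, "cache hits:", len(hits))
    for key in prefix_keys(prompt):
        cache.insert(key, kv_handle=f"kv@{hop}")  # placeholder for real KV
```

The point of the chained hash is that the key for block N depends on every token before it, so two machines that saw the same prefix independently will agree on the key and can reuse each other's KV instead of recomputing it.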

