
No, vLLM is a thing for serving language models: https://github.com/vllm-project/vllm

barrenko
Is it more like llama.cpp then? I don't have access to good hardware.
jasonjmcghee
llama.cpp is optimized to serve one request at a time.

vLLM is optimized to serve many requests at once.

If you fine-tuned a model and wanted to serve it to many users, you would use vLLM, not llama.cpp.

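A minimal sketch of what that looks like with vLLM's offline Python API, assuming a locally saved fine-tuned model (the model path and prompts here are just placeholders):

    from vllm import LLM, SamplingParams

    # Load the fine-tuned model (placeholder path).
    llm = LLM(model="./my-finetuned-model")

    params = SamplingParams(temperature=0.7, max_tokens=128)

    # vLLM batches and schedules these prompts together (continuous batching),
    # which is what makes it suited to handling many requests at once.
    prompts = ["Hello, who are you?", "Summarize the plot of Hamlet."]
    outputs = llm.generate(prompts, params)

    for out in outputs:
        print(out.outputs[0].text)

For an actual multi-user service you'd typically run vLLM's OpenAI-compatible HTTP server rather than the offline API, but the batching behavior is the same.
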
jasonjmcghee
Here's a super relevant comment from another post: https://www.hackerneue.com/item?id=44366418
