If you're using the API cost of the model to estimate its size, then you can't turn around and use that size estimate to estimate the inference cost; the reasoning is circular.
tok/s can't be used to estimate parameter count at all. It's a tradeoff made at inference time: you can adjust your batch size to serve one user at a huge tok/s or many users at a slow per-user tok/s, as the sketch below illustrates.
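
To make that concrete, here's a back-of-the-envelope roofline sketch in Python. All the hardware and model numbers are assumptions (roughly H100-class bandwidth and compute, a hypothetical 70B model in bf16), and KV-cache traffic is ignored, which would only widen the gap:

    # Toy roofline model of decode: each step reads all weights once
    # (memory bound at small batch) and does batch * 2 * PARAMS FLOPs
    # (compute bound at large batch). KV-cache reads are ignored.
    MEM_BW = 3.35e12    # assumed memory bandwidth, bytes/s
    COMPUTE = 9.9e14    # assumed FLOP/s
    PARAMS = 70e9       # assumed parameter count
    BYTES = 2           # bf16 weights

    def step_time(batch):
        mem = PARAMS * BYTES / MEM_BW          # time to stream the weights
        flops = batch * 2 * PARAMS / COMPUTE   # time for the matmuls
        return max(mem, flops)

    for batch in (1, 8, 64, 512):
        t = step_time(batch)
        print(f"batch {batch:4d}: {1/t:7.1f} tok/s per user, "
              f"{batch/t:9.0f} tok/s aggregate")

Same hardware, same model: per-user tok/s drops from ~24 to ~14 while aggregate throughput climbs ~300x, so an observed tok/s tells you about the serving configuration, not the parameter count.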