Whereas with Python, even in the latest version, you're already looking at at least 10x the startup latency in practice.
Note: This excludes the time spent on the actual network call, which can of course add quite a few milliseconds, depending on how far away on the planet your destination is.
Compare:
import requests
print(requests.get("http://localhost:3000").text)
to:
package main
import (
"fmt"
"io"
"net/http"
)
func main() {
resp, _ := http.Get("http://localhost:3000")
defer resp.Body.Close()
body, _ := io.ReadAll(resp.Body)
fmt.Println(string(body))
}
I get:
python3 0.08s user 0.02s system 91% cpu 0.113 total
go 0.00s user 0.01s system 72% cpu 0.015 total
(Different hardware, as I'm at home.) I wrote another program that counts the lines in a file and tested it against https://www.gutenberg.org/cache/epub/2600/pg2600.txt (a rough sketch of the counter is below).
I get:
python 0.03s user 0.01s system 83% cpu 0.059 total
go 0.00s user 0.00s system 80% cpu 0.010 total
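Roughly, the Python side of that line counter is just something like this (a minimal sketch, assuming the Gutenberg file was saved as pg2600.txt; the exact timed script may differ):

import sys

# stream the file and count lines without loading it all into memory
count = 0
with open(sys.argv[1], "rb") as f:
    for _ in f:
        count += 1
print(count)

Run as `python3 count_lines.py pg2600.txt`.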
These are toy programs, but IME these gaps persist as your programs get bigger.

I believe people have looked in the past at putting the standard library in a zip file instead of splatting it out into a bunch of files in a directory tree. In that case, I think Python would just do a few stats, find the zip file, load the whole thing into RAM, and then index into it.
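For what it's worth, Python's import machinery can already serve imports from a zip archive on sys.path (zipimport), which is the same idea on a smaller scale; a minimal sketch, where mypkg.py is a made-up placeholder module:

import sys
import zipfile

# bundle a module into a single archive (mypkg.py is just a placeholder)
with zipfile.ZipFile("bundle.zip", "w") as zf:
    zf.write("mypkg.py")

# put the archive on sys.path; the import below is resolved from inside the
# zip rather than from a directory tree of loose files
sys.path.insert(0, "bundle.zip")
import mypkg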
"If python was implemented totally different it might be fast" - sure, but it's not!
Pex files are tooling-agnostic and there are a couple of ways to generate them, but the easiest is to just use Pants.
Pants also does dependency traversal (that's the main reason we started using it, deploying a microservices monorepo) so it only packages the necessary modules.
I haven't profiled it yet for cold starts, maybe I'll test that real quick.
https://www.pantsbuild.org/dev/docs/python/overview/pex
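For context, the Pants side is roughly a BUILD file next to the sources plus a package goal; this is only a sketch from the docs, with target and file names made up to mirror the hello example:

# hello/BUILD -- illustrative only
python_sources(name="lib")

pex_binary(
    name="binary",
    entry_point="hello.py",
)

`pants package hello:binary` then drops the resulting .pex under dist/, with only the inferred dependencies included.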
Edit: I just ran it on a hello world with Python 3.14 on an M3 MacBook Pro: about 100 ±30 ms for `python -m hello` and 300-400 ms (with wild variance) for executing the pex with `./hello/binary.pex`.
I'm not sure if a pants expert could eke out more speed gains and I'm also not sure if this strategy would win out with a lot of dependencies. I'm guessing the time required to stat every imported file pales in comparison to the actual load time, and with pex, everything needs to be unzipped first.
Pex is honestly best when you want to build and distribute an application as a single file (there are flags to bundle the python interpreter too).
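On the stat-vs-load question, CPython's `-X importtime` flag (available since 3.7) is a quick way to check: running e.g. `python3 -X importtime -c "import requests"` prints self and cumulative microseconds for every module imported at startup, so you can see what actually dominates.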
The other option is mypyc, though again that seems to mostly speed up runtime rather than startup: https://github.com/mypyc/mypyc
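For reference, a minimal sketch of the mypyc workflow (roughly the upstream getting-started flow):

# fib.py -- a type-annotated module that mypyc can compile to a C extension
def fib(n: int) -> int:
    if n <= 1:
        return n
    return fib(n - 2) + fib(n - 1)

Compiling it with `mypyc fib.py` builds an extension module in place, and a later `import fib` picks up the compiled version; interpreter startup itself is untouched, which is why it mostly helps runtime.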
Now if I use `python -S` (which disables `import site` on initialization), that gets down to ~15 ms execution time for hello world. But that gain gets killed as soon as you start trying to import certain modules; there is a very limited set of modules you can use and still keep the speedup. So if your whole script is pure Python with no imports, you could probably get a ~20 ms cold start.
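A quick way to see what `-S` is skipping (minimal sketch; run it with and without -S):

# startup_check.py
import sys

# -S suppresses the automatic `import site`, which normally sets up
# site-packages paths and drags in extra modules at startup
print("site loaded:", "site" in sys.modules)
print("modules already loaded:", len(sys.modules))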