> Also, the servers were doing 200 Mbps, so I couldn't have kept up _much_ longer, no matter the limits.
For cost reasons or system overload?
If system overload ... What kind of storage? Are you monitoring disk I/O? What kind of CPU do you have in your system? I used to push almost 10 GBps with HTTPS on dual E5-2690s [2], but it was a larger file. 2690s were high end, but something more modern will have much better AES acceleration and should do better than 200 Mbps almost regardless of what it is.
[1] To be honest, I'm not sure I understand the intent of open_file_cache... opening files is usually not that expensive; maybe at hundreds of thousands of rps, or if you have a very complex filesystem. PS: don't put tens of thousands of files in a directory. Everything works better if you take your ten thousand files and put one hundred files into each of one hundred directories. You can experiment to see what works best with your load, but a tree with N layers of M directories where the last layer holds M files is a good plan, 64 <= M <= 256. The goal is keeping the directories compact so searching and editing stays efficient.
[2] https://www.intel.com/content/www/us/en/products/sku/64596/i...
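To make the directory tree in [1] concrete, here's a rough Python sketch of one way to do it, assuming two levels with a fan-out of 256 each; the hashing scheme, constants, and names are my own illustration, not from the post:

```python
import hashlib
import os

# Two levels of 256 directories each (N = 2, M = 256), chosen by hashing the
# file name so the tree stays balanced. Constants and names are illustrative.
LEVELS = 2

def sharded_path(root, name):
    digest = hashlib.sha1(name.encode()).hexdigest()
    # two hex characters per level -> 256 possible directories per level
    parts = [digest[2 * i:2 * i + 2] for i in range(LEVELS)]
    return os.path.join(root, *parts, name)

# sharded_path("/srv/files", "report.pdf") returns something like
# "/srv/files/ab/cd/report.pdf", so each directory stays small and quick to scan.
```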
I may have a hint here: remember that Nginx was created back when dialup was still a thing and a single Pentium 3 server was the norm (I believe I saw those wwwXXX machines myself in the Rambler DCs around that time).
So my somewhat educated guess is that saving every syscall was more or less the ultimate goal, and back then it really was more efficient, at least in terms of latency. Take a look at how Nginx parses HTTP methods (GET/POST) to save operations.
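For reference, here's a rough Python analogue of that trick; the real thing is C macros in ngx_http_parse.c (as far as I remember), which compare the first bytes of the request line as one machine word instead of byte by byte:

```python
import struct

# Recognize "GET " / "POST" with a single integer compare rather than a
# character-by-character string comparison. Purely an illustration of the idea.
GET_WORD = struct.unpack("<I", b"GET ")[0]
POST_WORD = struct.unpack("<I", b"POST")[0]

def parse_method(request: bytes):
    if len(request) < 4:
        return None
    word = struct.unpack_from("<I", request)[0]
    if word == GET_WORD:
        return "GET"
    if word == POST_WORD:  # nginx then checks that the next byte is a space
        return "POST"
    return None
```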
Personally, I don't remember seeing large benefits from open_file_cache, but I likely never did a proper perf test here. Making sure sendfile, buffers, and TLS termination were set up right had much more impact for me on more modern (10-15 year old) HW.
Quoting:
> Traffic
> All root servers have a dedicated 1 GBit uplink by default and with it unlimited traffic. Inclusive monthly traffic for servers with 10G uplink is 20TB. There is no bandwidth limitation. We will charge € 1 ($1.20)/TB for overusage.
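Putting that pricing next to the 200 Mbps figure above, a quick back-of-the-envelope (my numbers, assuming a sustained 200 Mbps and a 30-day month):

```python
# Approximate monthly traffic and overage cost at a sustained 200 Mbps.
mbps = 200
tb_per_month = mbps / 8 * 86_400 * 30 / 1e6   # MB/s * seconds/month -> TB (decimal)
included_tb = 20                              # included traffic on the 10G uplink
overage_eur = max(0.0, tb_per_month - included_tb) * 1.0  # EUR 1 per TB over

print(f"~{tb_per_month:.0f} TB/month, ~EUR {overage_eur:.0f} overage on a 10G uplink")
# On the default 1 GBit uplink traffic is unlimited, so 200 Mbps costs nothing extra.
```

That comes out to roughly 65 TB/month, i.e. about EUR 45 of overage on a 10G uplink and nothing at all on the default 1 GBit one.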
NVMe disks are incredibly fast and 1k rps is not a lot (IIRC my N100 seems to be capable of ~40k if not for the 1 Gbit NIC bottlenecking). I'd try benchmarking without the tuning options you've got. Like, do you actually get 40k concurrent connections from Cloudflare? If connections to your upstream are kept alive (so no constant slow starts), ideally you have numCores workers, each doing one thing at a time, and that's enough to max out your NIC. You only add concurrency if latency prevents you from maxing out bandwidth.
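To make that last point concrete, a small Little's-law sketch; the response size and round-trip time are assumptions I picked for illustration, not measurements from the thread:

```python
# How many requests need to be in flight to saturate a 1 Gbit NIC.
nic_bits_per_s = 1e9      # 1 Gbit uplink
avg_resp_bytes = 100e3    # assumed ~100 KB average response
round_trip_s = 0.05       # assumed time each request stays in flight

rps_to_saturate = nic_bits_per_s / 8 / avg_resp_bytes   # ~1250 rps
in_flight_needed = rps_to_saturate * round_trip_s       # ~63 concurrent requests

print(f"~{rps_to_saturate:.0f} rps fills the NIC with only "
      f"~{in_flight_needed:.0f} requests in flight")
```

With numbers like these, a few dozen in-flight requests already max out the NIC, nowhere near 40k concurrent connections.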
https://community.nginx.org/t/too-many-open-files-at-1000-re...
Also, the servers were doing 200 Mbps, so I couldn't have kept up _much_ longer, no matter the limits.