
dale_glass
I believe the low maximum lifetimes are becoming a thing because revocation failed.

CRLs become gigantic and impractical at the scale of the modern internet, and OCSP has privacy issues. And there's the issue of applications never checking for revocation at all.

So the obvious solution was just to make cert lifetimes really short. No gigantic CRLs, no reaching out to the CA for every connection. All the required data is right there in the cert.

And if you thought 47 days was unreasonable, Let's Encrypt is trying 6 days. Which IMO is, on the whole, a great idea. Yearly, or even monthly, intervals are long enough that you know a bunch of people will do it by hand, or have their renewal process break and go unnoticed for months. 6 days is short enough that automation is basically a must and has to work reliably.
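
To make the "has to work reliably" part concrete, the piece that matters is an expiry check that lives outside the renewal automation it's watching. A rough sketch in plain Python (the hostname and the 2-day alert threshold are just placeholders):

    import socket
    import ssl
    import sys
    import time

    def days_until_expiry(host, port=443):
        """Connect, fetch the served certificate, and return days until notAfter."""
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        not_after = ssl.cert_time_to_seconds(cert["notAfter"])
        return (not_after - time.time()) / 86400

    if __name__ == "__main__":
        host = sys.argv[1] if len(sys.argv) > 1 else "example.com"
        days = days_until_expiry(host)
        print(f"{host}: certificate expires in {days:.1f} days")
        # With 6-day certs, anything under ~2 days means a renewal has already
        # been missed, so fail loudly instead of waiting for the outage.
        sys.exit(0 if days > 2 else 1)

Run it from cron or a monitoring job per hostname; the point is that the check is independent of whatever does the renewing.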


Andoryuuta
Semi-related: Firefox 142 was released a few days ago and is now using CRLite[0], which apparently only needs ~300kB a day for the revocation lists in their new Clubcard data structure[1].

[0]: https://hacks.mozilla.org/2025/08/crlite-fast-private-and-co...

[1]: https://github.com/mozilla/clubcard
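
To give a feel for how exact revocation data gets that small: Clubcard itself is a newer partitioned structure, but the original CRLite trick is a cascade of filters built over the full set of certificates known from CT logs, which makes the answer exact rather than probabilistic. A toy Python sketch of that cascade idea (my own illustration, nothing like Mozilla's actual format):

    import hashlib

    class Bloom:
        """A tiny Bloom filter over byte-string items."""
        def __init__(self, items, bits, k=3, salt=b""):
            self.bits, self.k, self.salt = bits, k, salt
            self.array = bytearray((bits + 7) // 8)
            for item in items:
                for pos in self._positions(item):
                    self.array[pos // 8] |= 1 << (pos % 8)

        def _positions(self, item):
            for i in range(self.k):
                digest = hashlib.sha256(self.salt + bytes([i]) + item).digest()
                yield int.from_bytes(digest[:8], "big") % self.bits

        def __contains__(self, item):
            return all(self.array[p // 8] & (1 << (p % 8)) for p in self._positions(item))

    def build_cascade(revoked, not_revoked):
        """Each level encodes the previous level's false positives, until none remain."""
        levels, include, exclude, depth = [], set(revoked), set(not_revoked), 0
        while include:
            level = Bloom(include, bits=max(64, 10 * len(include)), salt=bytes([depth]))
            levels.append(level)
            include, exclude = {x for x in exclude if x in level}, include
            depth += 1
        return levels

    def is_revoked(levels, cert_id):
        """Exact answer, as long as cert_id was in the known universe at build time."""
        for depth, level in enumerate(levels):
            if cert_id not in level:
                return depth % 2 == 1
        return len(levels) % 2 == 1

Needing the full universe up front is the catch, and it's exactly what CT logs provide; it's also why the browser can ship the whole thing as a small daily download instead of asking anyone at connection time.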

FuriouslyAdrift
Because of all my internal systems that use certs to connect (switches, routers, IoT, etc.) but have manual-only interfaces (most are TFTP), I have had to go back to running my own CA infrastructure, using public CAs only for non-corporate or mixed-audience sites/services.

It's really annoying because I have to carve out exceptions for browsers and other software that refuse to connect to things with unverifiable certs, and adding my CA to some software or devices is either a pain or impossible.

It's created a hodgepodge of systems and policies and made our security posture full of holes. Back when we just did a fully delegated DigiCert wildcard (big expense) on a 3- or 5-year expiration, it was easy to manage. Now I've got execs in other depts asking about crazy long expirations because of the hassle.

Why is fronting these systems with a central HAProxy doing TLS termination, or something similar, not an option?
dvdkon
Because then you have plain HTTP running over your network. The issue here (I presume) is not how to secure access over the Internet, but within an internal network.

Plenty of people leave these devices without encrypted connections, because they are in a "secure network", but you should never rely on such a thing.

Nothing stops you from using a self-signed certificate with a ridiculous expiration period for HTTPS between the reverse proxy and the device in question.
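
For the proxy-to-device leg, a minimal sketch of minting such a long-lived self-signed pair with Python's cryptography package (hostname and filenames are placeholders; the proxy then pins exactly this cert for its backend connection):

    from datetime import datetime, timedelta, timezone

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.x509.oid import NameOID

    hostname = "switch01.internal"          # placeholder device name
    key = ec.generate_private_key(ec.SECP256R1())
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, hostname)])
    now = datetime.now(timezone.utc)

    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)                  # self-signed: issuer == subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + timedelta(days=3650))   # the "ridiculous" 10-year lifetime
        .add_extension(x509.SubjectAlternativeName([x509.DNSName(hostname)]), critical=False)
        .sign(key, hashes.SHA256())
    )

    with open("device-key.pem", "wb") as f:
        f.write(key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.PKCS8,
            serialization.NoEncryption(),
        ))
    with open("device-cert.pem", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))

The browser only ever sees the proxy's publicly trusted cert; the self-signed one stays between the proxy and the device.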
FuriouslyAdrift
Except browsers and other software that are becoming hard-coded to block access to such devices.

We used to use Firefox solely for internal problem devices, with IP and subnet exclusions, but even that is becoming difficult.

whatevaa
Fronting a switch management interface with HAProxy? Are you sure that's a good idea?

Yes. If we're talking about handling TLS termination and putting an IP behind a sensible hostname, I don't see what's wrong with using a reverse proxy. Note that this does not imply making it accessible on the internet.
FuriouslyAdrift
Yet more infra that must now be managed and a point of failure. No thank you.
layer8
CRLs don’t have to be large, since they only need to list revoked certificates that also haven’t expired yet. Using sub-CAs, you can limit the maximum size any single CRL could possibly have. I’m probably missing something, but for SSL certificates on the public internet I don’t really see the issue. Where is the list of such compromised non-expired certificates that is so gigantic?
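
For scale, the consumer-side check against a CRL is just a lookup by serial number; a rough sketch with Python's cryptography package (the URL and serial below are made up):

    import urllib.request

    from cryptography import x509

    # Hypothetical CRL distribution point for some intermediate CA.
    crl_der = urllib.request.urlopen("http://crl.example-ca.test/intermediate.crl").read()
    crl = x509.load_der_x509_crl(crl_der)

    print(f"{len(list(crl))} revoked-but-not-yet-expired certificates in this CRL")

    suspect_serial = 0x0123456789ABCDEF     # hypothetical serial from the cert being checked
    entry = crl.get_revoked_certificate_by_serial_number(suspect_serial)
    print("revoked" if entry is not None else "not listed on this CRL")

In practice a client would fetch the CRL from the distribution point named in the certificate being checked.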
compumike
Just thinking out loud here: an ACME DNS-01 challenge requires a specific DNS TXT record to be set on _acme-challenge.<YOUR_DOMAIN> as a way of verifying ownership. Currently this is a periodic check every 45 or 90 or 365 days or whatever, which is what everyone's talking about.

Why not encode that TXT record value into the CA-signed certificate metadata? And then at runtime, when a browser requests the page, the browser can verify the TXT record as well, and cache that result for an hour or whatever you like?

Or another set of TXT records for revocation, e.g. _acme-challenge-revoked.<YOUR_DOMAIN>?

It's not perfect; DNS is not at all secure and is relatively easy to spoof for a single client on your LAN, I know that. But realistically, if someone has control of your DNS, they can just issue themselves a legit certificate anyway.
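
A sketch of what that runtime check might look like on the client side, using dnspython (the pinned value is made up, since no certificate extension carries it today):

    import dns.resolver   # pip install dnspython

    def txt_values(name):
        """Return the set of TXT strings published at `name`, or empty if none."""
        try:
            answer = dns.resolver.resolve(name, "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return set()
        return {b"".join(rdata.strings).decode() for rdata in answer}

    domain = "example.com"
    pinned = "tok_abc123"   # value the CA would have embedded in the cert at issuance
    live = txt_values(f"_acme-challenge.{domain}")

    # Per the proposal, the browser would cache this result for an hour or so.
    print("still authorized" if pinned in live else "record gone or changed: treat as revoked")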

ameliaquining
I think the problem with this idea is not security (as you point out, the status quo isn't really better), but availability. It's not all that uncommon for poorly designed middleboxes to block TXT records, since they're not needed for day-to-day web browsing and such.

Also, I don't see how that last paragraph follows; is your argument just that client-side DNS poisoning is an attack not worth defending against?

Also, there's maybe not much value in solving this for DNS-01 if you don't also solve it for the other, more commonly used challenge types.

ashleyn
Certbot has this down to a science. I haven't once had to touch it after setting it up. 6 days doesn't seem like an onerous requirement in light of that.
