gdbsjjdn
I understand OP's frustration, but the alternate view is that mandating better practices is a forcing function for businesses that otherwise don't give a shit about users or their privacy or security.

For all the annoyance of SOC2 audits, it sure does make my manager actually spend time and money on following the rules. Without any kind of external pressure I (as a security-minded engineer) would struggle to convince senior leadership that anything matters beyond shipping features.


Jeslijar
Why is a month's expiration better than a year or two years?

Why wouldn't you go with a week or a day? Isn't that better than a whole month?

Why isn't it instead just a minute? Or a few seconds? Wouldn't that be better?

Why not have certificates dynamically generated constantly and have it so every single request is serviced by a new one and then destroyed after the session is over?

Maybe the problem isn't that certificates expire too soon, maybe the problem is that humans are lazy. Perhaps it's time to go with another method entirely.

allan_s
I think it's all about change management

A whole month puts you in the zone of "if you don't have the resources to automate it, it's still doable by a human; not enough to crush somebody, but enough to make fully automating it an option worth considering."

Hence it's better than a week or a day (too much pressure for small companies), and better than hours/minutes/seconds (that means going from one year straight to "it must be fully automated right now!").

A year or two years was not a good idea, because you lose knowledge and it creates pressure ("oh no... not the scary yearly certificate renewal; I remember we broke something last year, but I don't remember what...").

With a month, you either start to fully document it, or at least keep it fresh in your mind. A month also gives you time, each cycle, to think "OK, we have 30 certificates; can't we use a wildcard, or a certificate with several domains in it?"

> Perhaps it's time to go with another method entirely.

I think that's the way forward; it's just that it won't happen in one step, and going to one month is a first step.

Source: we have to manage a lot of certificates for a lot of different use cases (SSH, mutual TLS for authentication, classic HTTPS certificates, etc.), and we learned the hard way that no, two years is not better than one, and I agree that one month would be better.

also https://www.digicert.com/blog/tls-certificate-lifetimes-will...

ameliaquining
I think the less conservative stakeholders here would honestly rather do the six-day thing. They don't view the "still doable by a human" thing as a feature; they'd rather everyone think of certificate management as something that has to be fully automated, much like how humans don't manually respond to HTTP requests. Of course, the idea is not to make every tiny organization come up with a bespoke automation solution; rather, it's to make everyone who writes web server software designed to be exposed to the public internet think of certificate management as included within the scope of problems that are their responsibility to solve, through ACME integration or similar. There isn't any reason in principle why this wouldn't work, and I don't think there'd have been a lot of objections if it had worked this way from the beginning; resistance is coming primarily from stakeholders who don't ever want to change anything as they view it as a pure cost.
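
(As a concrete sketch of "the server owns its certificates": below is a minimal Python example of just the hot-reload half of that job, assuming some external ACME client writes renewed files to disk. The paths, port, and poll interval are made up for illustration.)

    # Minimal sketch: a TLS server that hot-reloads its certificate when an
    # external ACME client (certbot, acme.sh, ...) writes renewed files.
    import http.server
    import os
    import ssl
    import threading
    import time

    CERT = "/etc/myapp/fullchain.pem"  # hypothetical paths written by the ACME client
    KEY = "/etc/myapp/privkey.pem"

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(CERT, KEY)

    def reload_on_change(interval=60):
        # Poll the cert file; new TLS handshakes pick up the reloaded chain.
        last = os.stat(CERT).st_mtime
        while True:
            time.sleep(interval)
            mtime = os.stat(CERT).st_mtime
            if mtime != last:
                ctx.load_cert_chain(CERT, KEY)  # reload into the live context
                last = mtime

    threading.Thread(target=reload_on_change, daemon=True).start()

    httpd = http.server.HTTPServer(("", 8443), http.server.SimpleHTTPRequestHandler)
    httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
    httpd.serve_forever()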

(Why not less than six days? Because I think at that point you might start to face some availability tradeoffs even if everything is always fully automated.)

belval
> it creates pressure ("oh no... not the scary yearly certificate renewal; I remember we broke something last year, but I don't remember what...")

Ah yes, let's make a terrible workflow to externally force companies who can't be arsed to document their processes to do things properly, at the expense of everyone else.

hombre_fatal
But it's a decent trade-off and you're using sarcasm in place of fleshing out your claim.

Monthly expiration is a simple way to force you to automate something. Everyone benefits from automating it, too.

FuriouslyAdrift
I just recently had an executive-level manager ask if we could get a 100-year cert for our ERP, since the hassle of cert management and the massive cost of missing a renewal made it worth it to him.

He said six figures for the price would be fine. This is an instance where business needs and technology have gotten really out of alignment.

How on earth would that make more sense than properly setting up ACME and forgetting about the problem for the next hundred years?? If your bespoke ERP system is really so hostile toward cert changes, put it behind a proper reverse proxy with modern TLS features and self-sign a certificate for a hundred years, and be done with it.

It'll take about fifteen minutes of time, and executive level won't ever have to concern themselves with something as mundane as TLS certificates again.
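
(For what it's worth, the self-signed leg really is a few minutes of work. Here's a rough sketch with Python's `cryptography` package; the hostname, file names, and 100-year span are illustrative, and a cert like this only makes sense for the internal proxy-to-ERP hop or a client that explicitly trusts it, since browsers won't accept such lifetimes from public CAs. The usual openssl one-liner does the same thing.)

    # Rough sketch: generate a ~100-year self-signed cert for an internal hop.
    import datetime
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "erp.internal")])
    now = datetime.datetime.now(datetime.timezone.utc)

    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # self-signed: subject == issuer
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365 * 100))
        .add_extension(x509.SubjectAlternativeName([x509.DNSName("erp.internal")]), critical=False)
        .sign(key, hashes.SHA256())
    )

    with open("erp.key", "wb") as f:
        f.write(key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.TraditionalOpenSSL,
            serialization.NoEncryption(),
        ))
    with open("erp.crt", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))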

FuriouslyAdrift
Support contract states we cannot put it behind a proxy. We used to use HAProxy and multiple web server instances, but the support switched to India and they claimed they could no longer understand or support that configuration. Since it is a main system for the entire org and the support contract is part of our financial liability and data insurance, the load balancer had to go. This is corporate enterprise IT. Now you know why sysadmins are so grumpy.

slipperydippery
Most safety & security dysfunction stories: misaligned incentives, incompetence, and ignorance at the management tier overriding the expert advice of mere peons, leading to predictable catastrophes (not to mention, usually, extra costs in the meantime, just hidden ones).

Most solutions: make the peons watch a training video or attend a training session about how they should speak up more.

My condolences :)

darkwater
> How on earth would that make more sense than properly setting up ACME and forgetting about the problem for the next hundred years?? If your bespoke ERP system is really so hostile toward cert changes, put it behind a proper reverse proxy with modern TLS features and self-sign a certificate for a hundred years, and be done with it.

I completely agree with you, but you would be astonished by how many companies, even small/medium companies that use recent technologies and are otherwise pretty lean, still think that restarting/redeploying/renewing as little as possible is the best way to go, instead of fixing the root issue that makes restarting/redeploying/renewing a pain in the ass.

moduspol
I don't know about OP but I've also worked plenty of places where I seem to be the only person who understands TLS.

And not even at the "math" level. I mean, like, how to get them into a Java keystore. Or how to get Apache or nginx to use them. That you need to include the intermediate certificate. How to get multiple SANs instead of a wildcard certificate. How to use certbot (with HTTP requests or DNS verification). How to get your client to trust a custom CA. How to troubleshoot what's wrong from a client.
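
(The "troubleshoot from a client" item alone comes up constantly. A small stdlib-only Python sketch, with the hostname as a placeholder, that prints what the server actually presents, or why verification fails:)

    # Quick client-side TLS check: print the validated peer certificate, or the
    # verification error (missing intermediate, expired cert, name mismatch, ...).
    import socket
    import ssl
    import sys

    host = sys.argv[1] if len(sys.argv) > 1 else "example.com"  # placeholder

    ctx = ssl.create_default_context()  # uses the system trust store
    try:
        with socket.create_connection((host, 443), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
                print("TLS version:", tls.version())
                print("Subject:", cert["subject"])
                print("Issuer:", cert["issuer"])
                print("SANs:", cert.get("subjectAltName"))
                print("Expires:", cert["notAfter"])
    except ssl.SSLCertVerificationError as e:
        # "unable to get local issuer certificate" usually means the server
        # forgot to send the intermediate certificate.
        print("Certificate verification failed:", e.verify_message)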

I think the most rational takeaway is just that it's too difficult for a typical IT guy to understand, and most SMBs that aren't in tech don't have anyone more knowledgeable on staff.

FuriouslyAdrift
I have to schedule at least 30 days out on any change or restart for main systems and I may be overruled by ANY manager.

I actually watch for crashes (thank you, inventory control department shenanigans) so that I can sneak in changes during a reset.

> […] that restarting/redeploying/renewing as little as possible is the best way to go instead of fixing the root issue that makes restarting/redeploying/renewing a pain in the ass.

I mean… There's a tradeoff to be sure. I also have a list of things that could be solved properly, but I can't justify the time expense of doing so compared to repeating the shortcut every so often.

It's like that expensive espresso machine I've been drooling over for years—I can go out and grab a lot of great coffee at a barista shop before the machine would have saved me money.

But in this particular instance, sure; once you factor the operational risk in, proper automation often is a no-brainer.

zoeysmithe
Yep this. This is just "we have so much technical debt, our square pegs should fit into all round holes!"

Business culture devaluing security is the root of this, and I hope people see the above example as everything that's wrong with how some technology companies operate; "just throw money at the problem because security is an annoying cost center" is super bad leadership. I'm going to guess this guy also has an MFA exception on his account and a 7-character password because "it just works! It just makes sense, nerds!" I've worked with these kinds of execs all my career and they are absolutely the problem here.

FuriouslyAdrift
IT serves business needs... not the other way around. If anything, cloud services and mobile device access have made securing anything just about impossible.

op00to
Start your own business: an nginx proxy in front of the ERP where you handle the SSL for them, and put $$ in a trust to ensure there's enough money to pay for someone to update the cert.

johannes1234321
The exact lifetime probably has no single "best", but from past experience: I have seen so many places where multi-year certificates were used and people forgot about them, until some service suddenly stopped working and people had to figure out how to replace that cert.

A short cycle ensures either automation or keeping memory fresh.

Automation can of course also be forgotten and break, but at least it's written down somewhere in some form (code), rather than living in the personal memory of a long-gone employee who used to upload certs to some CA website for manual signing.
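
(And that "automation can also break" failure mode is cheap to guard against: a cron-able expiry check along these lines, where the hostnames and the 14-day threshold are just examples:)

    # Sketch of a cron job that fails loudly when a cert is close to expiry,
    # so broken renewal automation gets noticed before the outage.
    import datetime
    import socket
    import ssl
    import sys
    import time

    HOSTS = ["www.example.com", "api.example.com"]  # illustrative list
    THRESHOLD = datetime.timedelta(days=14)

    ctx = ssl.create_default_context()
    failing = []
    for host in HOSTS:
        with socket.create_connection((host, 443), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                not_after = ssl.cert_time_to_seconds(tls.getpeercert()["notAfter"])
        remaining = datetime.timedelta(seconds=not_after - time.time())
        if remaining < THRESHOLD:
            failing.append(f"{host}: {remaining.days} days left")

    if failing:
        print("\n".join(failing))
        sys.exit(1)  # non-zero exit -> cron/monitoring alert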

Thorrez
> Why isn't it instead just a minute? Or a few seconds? Wouldn't that be better?

Then if your CA went down for an hour, you would go down too. With 47 days, there's plenty of time for the CA to fix the outage and issue you a new cert before your current one expires.

8organicbits
Lots of ACME software supports configuring CA fallbacks, so even if a CA is down hard for an extended period you can issue certificates with the others.

Using LetsEncrypt and ZeroSSL together is a popular approach. If you need a stronger guarantee of uptime, reach for the paid options.

https://github.com/acmesh-official/acme.sh?tab=readme-ov-fil...
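
(The fallback pattern itself is simple. A rough Python sketch that shells out to certbot and tries the next ACME directory on failure; the directory URLs are the public Let's Encrypt and ZeroSSL endpoints, everything else is illustrative, and ZeroSSL additionally requires EAB credentials with certbot. ACME clients generally have this kind of fallback built in or configurable.)

    # Rough sketch of "try the next CA if the first one is down".
    import subprocess

    ACME_DIRECTORIES = [
        "https://acme-v02.api.letsencrypt.org/directory",  # Let's Encrypt
        "https://acme.zerossl.com/v2/DV90",  # ZeroSSL (also needs --eab-kid/--eab-hmac-key)
    ]

    def issue(domain, email):
        for directory in ACME_DIRECTORIES:
            result = subprocess.run(
                ["certbot", "certonly", "--standalone", "--non-interactive",
                 "--agree-tos", "-m", email, "-d", domain, "--server", directory],
                capture_output=True, text=True,
            )
            if result.returncode == 0:
                return True  # issued; stop trying other CAs
            print(f"issuance via {directory} failed:\n{result.stderr}")
        return False

    issue("www.example.com", "ops@example.com")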

Thorrez
If everyone uses that with 1 minute or 1 second expirations, I could certainly see a case where an outage in 1 CA causes traffic migration to another, causing performance issues on the fallback CA too.

>If you need a stronger guarantee of uptime, reach for the paid options.

We don't. If we had 1 minute or 1 second lifetimes, we would.

8organicbits
Oh, agreed. I was responding to the part about extended outages.

btown
Sure, there is an argument about slippery slopes here. But the thing about the adage of "if you slowly boil a frog..." (https://en.wikipedia.org/wiki/Boiling_frog) is that not only is the biological metaphor completely false, it also ignores the fact that there can be real thresholds that can change behavior.

Imagine you run an old-school media company who's come into possession of a beloved website with decades of user-generated and reporter-generated content. Content that puts the "this is someone's legacy" in "legacy content." You get some incremental ad revenue, and you're like "if all I have to do is have my outsourced IT team do this renewal thing once a year, it's free money I guess."

But now, you have to pay that team to do a human-in-the-loop task monthly for every site you operate, which now makes the cost no longer de minimis? Or, fully modernize your systems? But since that legacy site uses a different stack, they're saying it's an entirely separate project, which they'll happily quote you with far more zeroes than your ads are generating?

All of a sudden, something that was infrequent maintenance becomes a measurable job. Even a fully rational executive sees their incentives switch - and that doesn't count the ones who were waiting for an excuse to kill their predecessors' projects. We start seeing more and more sites go offline.

We should endeavor not to break the internet. That's not "don't break the internet, conditional on fully rational actors who magically don't have legacy systems." It's "don't break the internet."

tyzoid
Pretty much any legacy system can have a modern reverse proxy in front of it. If the legacy application can't handle certs sanely, use the reverse proxy for terminating TLS.

btown
"Just use Nginx" was not a viable option here, without additional Certbot etc. orchestration, until 14 days ago! And this is still in preview! https://blog.nginx.org/blog/native-support-for-acme-protocol

And, if you haven't been using a reverse proxy before, or for business/risk reasons don't want to use your main site's infrastructure to proxy the inherited site, and had been handling certificates in your host's cPanel with something like https://www.wpzoom.com/blog/add-ssl-to-wordpress/ - it is indeed a dedicated project to install a reverse proxy!

nisegami
Every year is too infrequent to force automation, leading to admins forgetting to renew their certs. Every minute/day may be too demanding on ACME providers and clutters transparency logs. Dynamic certs just move the problem around, because whatever signs those certs effectively becomes the SSL cert in practice, unless issuance happens over ACME, in which case see the point above.

yladiz
I'm not sure if you're arguing in good faith, but assuming you are, it should be pretty self-evident why you wouldn't generate the certificate dynamically for each request: it would take too much time, and so every request would be substantially slower, probably as slow as using Tor, since you would need to ask for the certificate from a central authority. In general it's all about balance: one month isn't necessarily better than one year, but the reduced timeframe means there's less complexity in keeping some revocation list and passing it to clients, and it's not so short as to require more resources on both the issuer and the requester of the certificate.

> Perhaps it's time to go with another method entirely.

What method would you suggest here?

zimpenfish
> since you would need to ask for the certificate from a central authority

Could it work that your long-term certificate (90 days, whatever) gives you the ability to sign ephemeral certificates (much like, e.g. LetsEncrypt signs your 90 day certificate)? That saves calling out to a central authority for each request.
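
(Something close to this exists as TLS delegated credentials, RFC 9345, where a site's certificate delegates to a short-lived key. A toy Python sketch of the plain-X.509 version of the idea, with placeholder file names; note browsers wouldn't accept this as-is, since the long-lived cert would need CA:TRUE, which public CAs don't grant to subscribers.)

    # Toy sketch: a long-lived "delegating" cert signs a 1-day ephemeral cert.
    import datetime
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec

    # Load the long-lived cert and its key (placeholder file names).
    issuer_cert = x509.load_pem_x509_certificate(open("site.pem", "rb").read())
    issuer_key = serialization.load_pem_private_key(open("site.key", "rb").read(), password=None)

    leaf_key = ec.generate_private_key(ec.SECP256R1())  # fresh key for the ephemeral cert

    now = datetime.datetime.now(datetime.timezone.utc)
    ephemeral = (
        x509.CertificateBuilder()
        .subject_name(issuer_cert.subject)  # same site identity
        .issuer_name(issuer_cert.subject)   # issued under the long-lived cert
        .public_key(leaf_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=1))  # ephemeral lifetime
        .sign(issuer_key, hashes.SHA256())
    )

    pem = ephemeral.public_bytes(serialization.Encoding.PEM)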

yladiz
Without knowing the technical details too much: Maybe, although I don’t think it would make much difference in my argument, since it would still add too much time to the request. Likely less, but still noticeable.

bananapub
dunno why you're being so obnoxious about it?

a month is better than a year because we never ever ever managed to make revocation work, and so the only thing we can do is reduce the length of certs so that stolen or fraudulently obtained certs can be used for less time.

naasking
On the vulnerability ladder since SSL was introduced, how common and how disastrous have stolen or fraudulent certs really been compared to other security problems, and by how much will these changes reduce such disasters?

FuriouslyAdrift
China currently has a large APT campaign using a compromised CA (Billbug).

https://www.darkreading.com/endpoint-security/china-based-bi...

naasking
I agree with the article that this is "potentially very dangerous". Potential is not actual, though, and I'm asking about what damage has actually materialized. Is there a cost estimate over the past 20 years vs., say, memory safety vulnerabilities?

capitol_
Is this some sort of troll comment?

I'm sure that you are perfectly able to do your own research, why are you trying to push that work onto some stranger on the internet?

naasking
Is this a troll article? The article asked basically the same question:

    I also wonder how many organizations have had certificates mis-issued due to BGP hijacking. Yes, this will improve the warm fuzzy security feeling we all want at night, but how much actual risk is this requirement mitigating?
Scope creep with diminishing returns happens everywhere.
There was an attempt to do it differently with CRLs, but it turns out certificate revocation is not feasible in practice at web scale.

Now they are going with the next plausible solution. It seems like 47 days is something they arrived at from Let's Encrypt's experience, estimating load from current renewals, but that last part is just my speculation.

fanf2
CRL distribution at web scale is now possible thanks to work by John Schanck at Mozilla https://hacks.mozilla.org/2025/08/crlite-fast-private-and-co...

But CRL sizes are also partly controlled by expiry time; shorter lifetimes produce smaller CRLs.

Oh wow, that's really fresh. I was still stuck on cascading Bloom filters.

yjftsjthsd-h
> Why wouldn't you go with a week or a day? Isn't that better than a whole month?

There is in fact work on making this an option: https://letsencrypt.org/2025/02/20/first-short-lived-cert-is...

> Why isn't it instead just a minute? Or a few seconds? Wouldn't that be better?

> Why not have certificates dynamically generated constantly and have it so every single request is serviced by a new one and then destroyed after the session is over?

Eventually the overhead actually does start to matter.

> Maybe the problem isn't that certificates expire too soon, maybe the problem is that humans are lazy. Perhaps it's time to go with another method entirely.

Like what?

supertrope
As the limit approaches zero you re-invent Kerberos.
