Why wouldn't you go with a week or a day? Isn't that better than a whole month?
Why isn't it instead just a minute? Or a few seconds? Wouldn't that be better?
Why not have certificates dynamically generated constantly and have it so every single request is serviced by a new one and then destroyed after the session is over?
Maybe the problem isn't that certificates expire too soon, maybe the problem is that humans are lazy. Perhaps it's time to go with another method entirely.
A whole month puts you in the zone of "if you don't have the resources to automate it, it's still doable by a human; not enough to crush somebody, but enough to make 'let's fully automate this' an option worth considering."
Hence why it's better than a week or a day (too much pressure for small companies), and better than hours/minutes/seconds (that would mean jumping from one year straight to "it must be fully automated right now!").
A year or two years was not a good idea, because you lose knowledge and it creates pressure ("oh no... not the scary yearly certificate renewal; I remember we broke something last year, but I don't remember what...").
With a month, you either start to fully document it, or at least keep it fresh in your mind. A month also gives you time, every cycle, to think "OK, we have 30 certificates, can't we use a wildcard, or a certificate with several domains in it?"
> Perhaps it's time to go with another method entirely.
I think that's the way forward, it's just that it will not happen in one step, and going to one month is a first step.
Source: we have to manage a lot of certificates for a lot of different use cases (SSH, mutual SSL for authentication, classic HTTPS certificates, etc.), and we learned the hard way that no, 2 years is not better than 1, and I agree that one month would be better.
also https://www.digicert.com/blog/tls-certificate-lifetimes-will...
(Why not less than six days? Because I think at that point you might start to face some availability tradeoffs even if everything is always fully automated.)
Ah yes, let's make a terrible workflow to externally force companies who can't be arsed to document their processes to do things properly, at the expense of everyone else.
He said six figures for the price would be fine. This is an instance where business needs and technology have gotten really out of alignment.
It'll take about fifteen minutes of time, and executive level won't ever have to concern themselves with something as mundane as TLS certificates again.
Most solutions: make the peons watch a training video or attend a training session about how they should speak up more.
I completely agree with you, but you would be astonished by how many companies, even small/medium companies that use recent technologies and are otherwise pretty lean, still think that restarting/redeploying/renewing as little as possible is the best way to go, instead of fixing the root issue that makes restarting/redeploying/renewing a pain in the ass.
And not even at the "math" level. I mean, like, how to get them into a Java keystore. Or how to get Apache or nginx to use them. That you need to include the intermediate certificate. How to get multiple SANs instead of a wildcard certificate. How to use certbot (with HTTP requests or DNS verification). How to get your client to trust a custom CA. How to troubleshoot what's wrong from a client.
I think the most rational takeaway is just that it's too difficult for a typical IT guy to understand, and most SMBs that aren't in tech don't have anyone more knowledgeable on staff.
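For what it's worth, that last item, troubleshooting from the client side, doesn't need much. Here's a minimal sketch using only Python's standard library (the host name is a placeholder); it prints the issuer, the SANs, and the expiry of whatever certificate the server actually presents:

```python
import socket
import ssl

# Placeholder host; swap in the server you are debugging.
host = "example.com"

context = ssl.create_default_context()  # verifies against the system trust store
with socket.create_connection((host, 443), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()
        print("negotiated:", tls.version())

# If the handshake failed above, the raised ssl.SSLError usually names the
# problem (expired cert, missing intermediate, hostname mismatch, unknown CA).
print("issuer:", dict(item[0] for item in cert["issuer"]))
print("subjectAltName:", cert.get("subjectAltName"))
print("notAfter:", cert["notAfter"])
```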
I actually watched for crashes (thank you, inventory control department shenanigans) so that I could sneak in changes during a reset.
I mean… There's a tradeoff to be sure. I also have a list of things that could be solved properly, but I can't justify the time expense of doing so compared to repeating the shortcut every so often.
It's like that expensive espresso machine I've been drooling over for years—I can go out and grab a lot of great coffee at a barista shop before the machine would have saved me money.
But in this particular instance, sure; once you factor the operational risk in, proper automation often is a no-brainer.
Business culture devaluing security is the root of this, and I hope people see the above example as everything that's wrong with how some technology companies operate. "Just throw money at the problem because security is an annoying cost center" is super bad leadership. I'm going to guess this guy also has an MFA exception on his account and a 7-character password because "it just works! It just makes sense, nerds!" I've worked with these kinds of execs all my career and they are absolutely the problem here.
A short cycle ensures either automation or keeping memory fresh.
Automation can of course also be forgotten and break, but at least it's written down somewhere in some form (code), rather than living in the personal memory of a long-gone employee who used to manually upload certs to some CA website for signing, etc.
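As a minimal sketch of what "written down in code" can look like, assuming certbot is the ACME client in use and this runs from cron or a systemd timer (the alerting is obviously simplified):

```python
"""Renewal driver: the whole procedure lives here, not in someone's memory."""
import subprocess
import sys

# `certbot renew` only renews certificates that are close to expiry,
# so it is safe to run this as often as you like (daily is typical).
result = subprocess.run(
    ["certbot", "renew", "--quiet"],
    capture_output=True,
    text=True,
)

if result.returncode != 0:
    # Surface the failure somewhere a human will actually see it
    # (stderr here; a mail/chat alert in a real setup).
    print("certificate renewal failed:", result.stderr, file=sys.stderr)
    sys.exit(result.returncode)
```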
Then if your CA went down for an hour, you would go down too. With 47 days, there's plenty of time for the CA to fix the outage and issue you a new cert before your current one expires.
Using LetsEncrypt and ZeroSSL together is a popular approach. If you need a stronger guarantee of uptime, reach for the paid options.
https://github.com/acmesh-official/acme.sh?tab=readme-ov-fil...
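The fallback idea itself is only a few lines. A rough sketch, where `issue_from` is a hypothetical stand-in for whatever your ACME client actually does (not a real acme.sh or certbot API); the two directory URLs are Let's Encrypt's and ZeroSSL's production endpoints:

```python
from typing import Callable

# ACME directory URLs for two independent CAs.
CAS = [
    "https://acme-v02.api.letsencrypt.org/directory",  # Let's Encrypt
    "https://acme.zerossl.com/v2/DV90",                # ZeroSSL
]

def issue_with_fallback(domain: str, issue_from: Callable[[str, str], bytes]) -> bytes:
    """Try each CA in turn; `issue_from(directory_url, domain)` is a hypothetical
    wrapper around whatever ACME client you actually use (acme.sh, certbot, ...)."""
    last_error = None
    for ca in CAS:
        try:
            return issue_from(ca, domain)   # first CA that succeeds wins
        except Exception as err:            # CA outage, rate limit, etc.
            last_error = err
    raise RuntimeError(f"all configured CAs failed for {domain}") from last_error
```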
>If you need a stronger guarantee of uptime, reach for the paid options.
We don't. If we had 1 minute or 1 second lifetimes, we would.
Imagine you run an old-school media company who's come into possession of a beloved website with decades of user-generated and reporter-generated content. Content that puts the "this is someone's legacy" in "legacy content." You get some incremental ad revenue, and you're like "if all I have to do is have my outsourced IT team do this renewal thing once a year, it's free money I guess."
But now, you have to pay that team to do a human-in-the-loop task monthly for every site you operate, which now makes the cost no longer de minimis? Or, fully modernize your systems? But since that legacy site uses a different stack, they're saying it's an entirely separate project, which they'll happily quote you with far more zeroes than your ads are generating?
All of a sudden, something that was infrequent maintenance becomes a measurable job. Even a fully rational executive sees their incentives switch - and that doesn't count the ones who were waiting for an excuse to kill their predecessors' projects. We start seeing more and more sites go offline.
We should endeavor not to break the internet. That's not "don't break the internet, conditional on fully rational actors who magically don't have legacy systems." It's "don't break the internet."
And, if you haven't been using a reverse proxy before, or for business/risk reasons don't want to use your main site's infrastructure to proxy the inherited site, and had been handling certificates in your host's cPanel with something like https://www.wpzoom.com/blog/add-ssl-to-wordpress/ - it is indeed a dedicated project to install a reverse proxy!
> Perhaps it's time to go with another method entirely.
What method would you suggest here?
Could it work that your long-term certificate (90 days, whatever) gives you the ability to sign ephemeral certificates (much like, e.g. LetsEncrypt signs your 90 day certificate)? That saves calling out to a central authority for each request.
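Mechanically, signing a short-lived certificate with a longer-lived key is straightforward. Here's a rough sketch with the Python `cryptography` package (the names and lifetimes are made up); the real obstacle is policy rather than code, since publicly trusted leaf certificates aren't allowed to sign other certificates today, so clients would have to be taught to accept this kind of delegation:

```python
from datetime import datetime, timedelta, timezone

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

# Assume `parent_key` is the private key behind your long-lived (90-day) cert.
parent_key = ec.generate_private_key(ec.SECP256R1())
parent_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.com")])

# Fresh key pair for the ephemeral certificate.
leaf_key = ec.generate_private_key(ec.SECP256R1())

now = datetime.now(timezone.utc)
ephemeral_cert = (
    x509.CertificateBuilder()
    .subject_name(parent_name)
    .issuer_name(parent_name)                   # signed by the long-lived cert's key
    .public_key(leaf_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + timedelta(hours=1))  # very short lifetime
    .add_extension(
        x509.SubjectAlternativeName([x509.DNSName("example.com")]),
        critical=False,
    )
    .sign(parent_key, hashes.SHA256())
)
```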
a month is better than a year because we never ever ever managed to make revocation work, and so the only thing we can do is reduce the length of certs so that stolen or fraudulently obtained certs can be used for less time.
https://www.darkreading.com/endpoint-security/china-based-bi...
I'm sure that you are perfectly able to do your own research, why are you trying to push that work onto some stranger on the internet?
I also wonder how many organizations have had certificates mis-issued due to BGP hijacking. Yes, this will improve the warm fuzzy security feeling we all want at night, but how much actual risk is this requirement mitigating?
Scope creep with diminishing returns happens everywhere. Now they are doing the next plausible solution. It seems like 47 days is something they arrived at from Let's Encrypt's experience, estimating load from current renewals, but that last part is just me speculating.
But CRL sizes are also partly controlled by expiry time: a revoked certificate only needs to stay on the CRL until it would have expired anyway, so shorter lifetimes produce smaller CRLs.
There is in fact work on making this an option: https://letsencrypt.org/2025/02/20/first-short-lived-cert-is...
> Why isn't it instead just a minute? or a few seconds? Wouldn't that be better?
> Why not have certificates dynamically generated constantly and have it so every single request is serviced by a new one and then destroyed after the session is over?
Eventually the overhead actually does start to matter
> Maybe the problem isn't that certificates expire too soon, maybe the problem is that humans are lazy. Perhaps it's time to go with another method entirely.
Like what?
For all the annoyance of SOC2 audits, it sure does make my manager actually spend time and money on following the rules. Without any kind of external pressure I (as a security-minded engineer) would struggle to convince senior leadership that anything matters beyond shipping features.