What's the end game here? I agree with the dissent. Why not make it 30 seconds?
Once we cross the threshold of "I absolutely have to automate everything or it's not viable to use TLS anymore", why do we care about providing anything beyond ~48 hours? I am willing to bet money this threshold will never be crossed.
This feels like much more of an ideological mission than a practical one, unless I've missed some monetary/power advantage to forcing everyone to play musical chairs with their entire infra once a month...
mcpherrinm
I'm on the team at Let's Encrypt that runs our CA, and would say I've spent a lot of time thinking about the tradeoffs here.
Let's Encrypt has always self-imposed a 90 day limit, though of course with this ballot passing we will now have to reduce that under 47 days in the future.
Shorter lifetimes have several advantages:
1. Reduced pressure on the revocation system. For example, if a domain changes hands, then any previous certificates spend less time in the revoked state. That makes CRLs smaller, a win for everyone involved.
2. Reduced risk for certificates which aren't revoked but should have been, perhaps because a domain holder didn't know that a previous holder of that domain had it, or an attack of any sort that led to a certificate being issued that wasn't desired.
3. For fully short-lived certs (under 7 days), many user-agents don't do revocation checks at all, because that's a similar timeline to our existing revocation technology taking effect. This is a performance win for websites/user-agents. While we advocate for full certificate automation, I recognize there are cases where that's not so easy, and doing a monthly renewal may be much more tractable.
Going to shorter than a few days is a reliability and scale risk. One of the biggest issues with scale today is that Certificate Transparency logs, while providing great visibility into what certs exist (see points 1 and 2), will have to scale up significantly as lifetimes are cut.
Why is this happening now, though? I can't speak for everyone, and this is only my own opinion on what I'm observing, but: One big industry problem that's been going on for the last year or two is that CAs have found themselves in situations where they need to revoke certificates because of issues with those certificates, but customers aren't able to respond on an appropriate timeline. So the big motivation for a lot of the parties here is to get these timelines down and really drive a push towards automation.
noveltyaccount
When I first set up Let's Encrypt I thought I'd manually update the cert once per year. The 90 day limit was a surprise. This blog post helped me understand (it repeats many of your points) https://letsencrypt.org/2015/11/09/why-90-days/
0xbadcafebee
So it's being pushed because it'll be easier for a few big players in industry. Everybody else suffers.
da_chicken
It's a decision by Certificate Authorities, the ones that sell TLS certificate services, and web browser vendors. One benefits from increased demand on their product, while the other benefits by increasing the overhead on the management of their software, which increases the minimum threshold to be competitive.
There are security benefits, yes. But as someone that works in infrastructure management, including on 25 or 30 year old systems in some cases, it's very difficult to not find this frustrating. I need tools I will have in 10 years to still be able to manage systems that were implemented 15 years ago. That's reality.
Doubtless people here have connected to their router's web interface using the gateway IP address and been annoyed that the web browser complains so much about either insecure HTTP or an unverified TLS certificate. The Internet is an important part of computer security, but it's not the only part of computer security.
I wish technical groups would invest some time in real solutions for long-term, limited access systems which operate for decades at a time without 24/7 access to the Internet. Part of the reason infrastructure feels like running Java v1.3 on Windows 98 is because it's so widely ignored.
bigstrat2003
It is continuously frustrating to me to see the arrogant dismissiveness which people in charge of such technical groups display towards the real world usage of their systems. It's some classic ivory tower "we know better than you" stuff, and it needs to stop. In the real world, things are messy and don't conform to the tidy ideas that the Chrome team at Google has. But there's nothing forcing them to wake up and face reality, so they keep making things harder and harder for the rest of us in their pursuit of dogmatic goals.
It astounds me that there's no non-invasive local solution to go to my router or whatever other appliance's web page without my browser throwing warnings and calling it evil. Truly a fuck-up (purposeful or not) by all involved in creating the standards. We need local TLS without the hoops.
I wonder if a separate CA would be useful for non-public-internet TLS certificates. Imagine a certificate that won't expire for 25 years issued by it.
Such a certificate should not be trusted for domain verification purposes, even though it should match the domain. Instead it should be trusted for encryption / stream integrity purposes. It should be accepted on IPs outside of publicly routable space, like 192.168.0.0/16, or link-local IPv6 addresses. It should be possible to issue it for TLDs like .local. It should result in a usual invalid certificate warning if served off a public internet address.
In other words, it should be handled a bit like a self-signed certificate, only without the hassle of adding your handcrafted CA to every browser / OS.
Of course it would only make sense if a major browser would trust this special CA in its browser by default. That is, Google is in a position to introduce it. I wonder if they may have any incentive though. (To say nothing of Apple.)
That's the reality, and that's an issue unrelated to TLS.
Running unmanaged compute at home (or elsewhere...) is the issue here.
ryao
If the web browsers would adopt DANE, we could bypass CAs and still have TLS.
xorcist
A domain validated secure key exchange would indeed be a massive step up in security, compared to the mess that is the web PKI. But it wouldn't help with the issue at hand here: home router bootstrap. It's hard to give these devices a valid domain name out of the box. Most obvious ways have problems either with security or user friendliness.
benlivengood
Frankly, unless 25 and 30 year old systems are being continually updated to adhere to newer TLS standards, they are not getting many benefits from TLS.
Practically, the solution is virtual machines with the compatible software you'll need to manage those older devices 10 years in the future, or run a secure proxy for them.
Internet routers are definitely one of the worst offenders because originating a root of trust between disparate devices is actually a hard problem, especially over a public channel like wifi. Generally, I'd say the correct answer to this is that wifi router manufacturers need to maintain secure infrastructure for enrolling their devices. If manufacturers can't bother to maintain this kind of infrastructure then they almost certainly won't be providing security updates in firmware either, so they're a poor choice for an Internet router.
tptacek
It is reasonable for the WebPKI of 2025 to assume that the Internet encompasses the entire scope of its problem.
tptacek
Or, equivalently, it's being pushed because customers of "big players", of which there are a great many, are exposed to security risk by the status quo that the change mitigates.
mcpherrinm
It makes the system more reliable and more secure for everyone.
I think that's a big win.
The root reason is that revocation is broken, and we need to do better to get the security properties we demand of the Web PKI.
zmmmmm
> It makes the system more reliable
It might in theory but I suspect it's going to make things very very unreliable for quite a while before it (hopefully) gets better. I think probably already a double digit fraction of our infrastructure outages are due to expired certificates.
And because of that it may well tip a whole class of uses back to completely insecure connections because TLS is just "too hard". So I am not sure if it will achieve the "more secure" bit either.
fiddlerwoaroof
It makes systems more reliable and secure for system runners who can leverage automation, for whatever reason. By the same token, it adds a lot of barriers for things like embedded devices, learners, etc. who might not be able to automate TLS.
And rather than fix the issues with revocation, it's being shuffled off to the users.
Good example of enshittification
yellowapple
"Suffer" is a strong word for those of us who've been using things like Let's Encrypt for years now without issue.
ignoramous
Unless I misunderstood, GP mentions that the problem stems from WebPKI's central role in server identity management. Think of these cert lifetimes as forcefully being signed out after 47 days of being signed in.
> easier for a few big players in industry
Not necessarily. OP mentions, more certs would mean bigger CT logs. More frequent renewals mean more load. Like with everything else, this seems like a trade-off. Unfortunately, for you & me, as customers of cert authorities, 47 days is now where the agreed cut-off is (not 42).
klaas-
I think a very short lived cert (like 7 days) could be a problem on renewal errors/failures that don't self correct but need manual intervention.
What will Let's Encrypt be like with 7-day certs? Will it renew them every day (6-day reaction time), or every 3 days (4-day reaction time)? Not every org is set up with 24/7 staffing, some people go on holidays, some public holidays extend to long weekends etc :). I would argue that it would be a good idea to give people a full week to react to renewal problems. That seems impossible for short-lived certs.
iqandjoke
Like the Apple case. Apple already asks their developers to re-sign the app every 7 days. It should not be a problem.
kassner
That’s only a thing if you are not publishing on the App Store, no?
dcow
Correct. Or if you’re not using an enterprise distribution cert.
grey-area
Since you’ve thought about it a lot, in an ideal world, should CAs exist at all?
mcpherrinm
There's no such thing as an ideal world, just the one we have.
Let's Encrypt was founded with a goal of rapidly (within a few years) helping get the web to as close to 100% encrypted as we could. And we've succeeded.
I don't think we could have achieved that goal any way other than being a CA.
grey-area
Sorry was not trying to be snarky, was interested in your answer as to what a better system would look like. The current one seems pretty broken but hard to fix.
Ajedi32
In an ideal world where we rebuilt the whole stack from scratch, the DNS system would securely distribute key material alongside IP addresses and CAs wouldn't be needed. Most modern DNS alternatives (Handshake, Namecoin, etc) do exactly this, but it's very unlikely any of them will be usurping DNS anytime soon, and DNS's attempts to implement similar features have been thus far unsuccessful.
tptacek
People who idealize this kind of solution should remember that by overloading core Internet infrastructure (which is what name resolution is) with a PKI, they're dooming any realistic mechanism that could revoke trust in the infrastructure operators. You can't "distrust" .com. But the browsers could distrust Verisign, because Verisign had competitors, and customers could switch transparently. Browser root programs also used this leverage to establish transparency logs (though: some hypothetical blockchain name thingy could give you that automatically, I guess; forget about it with the real DNS though).
I tried filing an issue against the chromium project for DANE support a year or two ago and they closed the issue with a nonsense reason.
talideon
Much as I like the idea of DANE, it solves nothing by itself: you need to protect the zone from tampering by signing it. Right now, the dominant way to do that is DNSSEC, though DNSCurve is a possible alternative, even if it doesn't solve the exact same problem. For DANE to be useful, you'd first need to get that set up on the domain in question, and the effort to get that working is far, far from trivial, and even then, the process is so error prone and brittle that you can easily end up making a whole zone unusable.
Further, all you've done is replace one authority (the CA authority) with another one (the zone authority, and thus your domain registrar and the domain registry).
JackSlateur
The zone authority already supersedes the CA authority in all ways
When I manage a DNS zone, I'm free to generate all certificates I want
throwaway2037
This is a great question. If we don't have CAs, how do we know if it is OK to trust a cert?
Are there any reasonable alternatives to CAs in a modern world? I have never heard any good proposals.
kbolino
There are some alternatives.
Certificate pinning is probably the most widely known way to get a certificate out there without relying on live PKI. However, certificate pinning just shifts the burden of trust from runtime to install time, and puts an expiration date on every build of the program. It also doesn't work for any software that is meant to access more than a small handful of pre-determined sites.
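A minimal sketch of the pinning idea in Go, assuming you have pre-computed the base64 SHA-256 hash of the server's SubjectPublicKeyInfo (the constant below is just a placeholder, not a real value):

    // Minimal sketch: require the leaf's SPKI hash to match a baked-in pin,
    // in addition to the normal CA chain verification.
    package pinning

    import (
        "crypto/sha256"
        "crypto/tls"
        "crypto/x509"
        "encoding/base64"
        "errors"
        "net/http"
    )

    const pinnedSPKIHash = "base64(sha256(SubjectPublicKeyInfo)) goes here" // placeholder

    func PinnedClient() *http.Client {
        return &http.Client{
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{
                    // Runs after the usual chain verification succeeds.
                    VerifyPeerCertificate: func(rawCerts [][]byte, _ [][]*x509.Certificate) error {
                        leaf, err := x509.ParseCertificate(rawCerts[0])
                        if err != nil {
                            return err
                        }
                        sum := sha256.Sum256(leaf.RawSubjectPublicKeyInfo)
                        if base64.StdEncoding.EncodeToString(sum[:]) != pinnedSPKIHash {
                            return errors.New("presented certificate does not match the pinned key")
                        }
                        return nil
                    },
                },
            },
        }
    }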
Web-of-trust is a theoretical possibility, and is used for PGP-signed e-mail, but it's also a total mess that doesn't scale. Heck, the best way to check the PGP keys for a lot of signed mail is to go to an HTTPS website and thus rely on the CAs.
DNSSEC could be the basis for a CA-free world, but it hasn't achieved wide use. Also, if used in this way, it would just shift the burden of trust from CAs to DNS operators, and I'm not sure people really like those much better.
In an ideal world we could just trust people not to be malicious, and there wouldn't be any need to encrypt traffic at all.
WJW
How relevant is that since we don't live in such a world? Unless you have a way to get to such a world, of course, but even then CAs would need to keep existing until you've managed to bring the ideal world about. It would be a mistake to abolish them first and only then start on idealizing the world.
klysm
CAs exist at the intersection of reality (far from ideal) and cryptography.
Stefan-H
What alternatives come to mind when asking that question? Not being in the PKI world directly, web of trust is what comes to mind, but I'm curious what your question hints at.
grey-area
I honestly don’t know enough about it to have an opinion. I have vague thoughts that DNS is the weak point for identity anyway, so couldn't certs just live there instead? But I'm sure there are reasons (historical and practical).
efortis
4. Encrypted traffic hoarders would have to break more certs.
I love the push that LE puts on industry to get better.
I work in a very large organisation and I just don't see them being able to go to automated TLS certificates for their self managed subdomains, inspection certificates, or anything else for that matter. It will be interesting to see how the short lived certs are adopted in the future.
ocdtrekkie
Are you aware of a single real world not theoretical security breach caused by an unrevoked certificate that lived too long?
woodruffw
A real-world example of this would be Heartbleed, where users rotated without revoking their previously compromised certificates[1].
Could you explain why Let's Encrypt is dropping OCSP stapling support entirely, instead of keeping it only for must-staple certificates and letting those of us who want must-staple deal with the headaches? I believe that resolving the privacy concerns raised about OCSP did not require eliminating must-staple.
mcpherrinm
Must-staple has almost zero adoption. The engineering cost of supporting it for a feature that is nearly unused just isn’t there.
We did consider it.
As CAs prepare for post-quantum in the next few years, it will become even less practical as there is going to be pressure to cut down the number of signatures in a handshake.
ryao
> Must-staple has almost zero adoption. The engineering cost of supporting it for a feature that is nearly unused just isn’t there.
> We did consider it.
That is unfortunate. I just deployed a web server the other day and was thrilled to deploy must-staple from Let's Encrypt, only to read that it was going away.
> As CAs prepare for post-quantum in the next few years, it will become even less practical as there is going to be pressure to cut down the number of signatures in a handshake.
Please delay the adoption of PQAs for certificate signatures at Let's Encrypt as long as possible. I understand the concern that a hypothetical quantum machine with tens of millions of qubits capable of running Shor's algorithm to break RSA and ECC keys might be constructed. However, "post-quantum" algorithms are inferior to classical cryptographic algorithms in just about every metric as long as such machines do not exist. That is why they were not even considered when the existing RSA and ECDSA algorithms were selected before Shor's algorithm was a concern. There is also a real risk that they contain undiscovered catastrophic flaws that will be found only after adoption, since we do not understand their hardness assumptions as well as we understand integer factorization and the discrete logarithm problem. This has already happened with SIKE and it is possible that similarly catastrophic flaws will eventually be found in others.
Perfect forward secrecy and short certificate expiry allow CAs to delay the adoption of PQAs for key signing until the creation of a quantum computer capable of running Shor's algorithm on ECC/RSA key sizes is much closer. As long as certificates expire before such a machine exists, PFS ensures no risk to users, assuming key agreement algorithms are secured. Hybrid schemes are already being adopted to do that. There is no quantum Moore's law that makes it a foregone conclusion that a quantum machine that can use Shor's algorithm to break modern ECC and RSA will be created. If such a machine is never made (due to the sheer difficulty of constructing one), early adoption in key signature algorithms would make everyone suffer from the use of objectively inferior algorithms for no actual benefit.
If the size of key signatures with post quantum key signing had been a motivation for the decision to drop support for OCSP must-staple and my suggestion that adoption of post quantum key signing be delayed as long as possible is in any way persuasive, perhaps that could be revisited?
Finally, thank you for everything you guys do at Let's Encrypt. It is much appreciated.
hsbauauvhabzb
How viable are TLS attacks? Assuming a cert's private key is compromised, you still need network position or other means to affect routing, no?
So for a bank, a private key compromise is bad; for a regular low-traffic website, probably not so much?
cm2187
All of that in case the previous owner of the domain would attempt a mitm attack against a client of the new owner, which is such a remote scenario. In fact has it happened even once?
delfinom
Realistically, how often are domains traded and suddenly put in legitimate use (that isn't some domain parking scam) that (1) and (2) are actual arguments? Lol
zamadatix
Domain trading (regardless if the previous use was legitimate or not) is only one example, not the sole driving argument for why the revocation system is in place or isn't perfectly handled.
Lammy
> but customers aren't able to respond on an appropriate timeline
Sounds like your concept of the customer/provider relationship is inverted here.
crote
No. The customer is violating their contract.
The whole "customer is king" doesn't apply to something as critical as PKI infrastructure, because it would compromise the safety of the entire internet. Any CA not properly applying the rules will be removed from the trust stores, so there can be no exceptions for companies who believe they are too important to adhere to the contract they signed.
luckylion
How would a CA not being able to contact some tiny customer (surely the big ones all can and do respond in less than 90 days?) compromise the safety of the entire internet?
And if the safety of the entire internet is at risk, why is 47 days an acceptable duration for this extreme risk, but 90 days is not?
The "end game" is mentioned explicitly in the article:
> Shorter lifetimes mitigate the effects of using potentially revoked certificates. In 2023, CA/B Forum took this philosophy to another level by approving short-lived certificates, which expire within 7 days, and which do not require CRL or OCSP support.
Shorter-lived certificates make OCSP and other revocation mechanisms less of a load-bearing component within the Web PKI. This is a good thing, since neither CAs nor browsers have managed to make timely revocation methods scale well.
(I don't think there's any monetary or power advantage to doing this. The reason to do it is because shorter lifetimes make it harder for server operators to normalize deviant certificate operation practices. The reasoning there is the same as with backups or any other periodic operational task: critical processes must be continually tested and evaluated for correctness.)
sitkack
Don't lower cert lifetimes also get people to trust certs that were created just for their session, to MITM them?
That is the next step in nation state tapping of the internet.
woodruffw
I don't see why it would; the same basic requirements around CT apply regardless of certificate longevity. Any CA caught enabling this kind of MITM would be subject to expedient removal from browser root programs, but with the added benefit that their malfeasance would be self-healing over a much shorter period than was traditionally allowed.
ezfe
lol no? Shorter-lived certs still chain to the root certificates that are already trusted. It is not a noticeable thing when browsing the web as a user.
A MITM cert would need to be manually trusted, which is a completely different thing.
Lammy
I think their point is that a hypothetical connection-specific cert would make it difficult/impossible to compare your cert with anybody else to be able to find out that it happened. A CA could be backdoored but only “tapped” for some high-value target to diminish the chance of burning the access.
the power dynamic here is that the CAs have a "too big to fail" inertia, where they can do bad things without consequence because revoking their trust causes too much inconvenience for too many people. shortening expiry timeframes to the point where all their certificates are always going to expire soon anyways reduces the harm that any one CA can do by offering bad certs.
it might be inconvenient for you to switch your systems to accommodate shorter expiries, but it's better to confront that inconvenience up front than for it to be in response to a security incident.
michaelt
> Once we cross the threshold of "I absolutely have to automate everything or it's not viable to use TLS anymore", why do we care about providing anything beyond ~48 hours?
Well you see, they also want to be able to break your automation.
For example, maybe your automation generates a 1024 bit RSA certificate, and they've decided that 2048 bit certificates are the new minimum. That means your automation stops working until you fix it.
Doing this with 2-day expiry would be unpopular as the weekend is 2 days long and a lot of people in tech only work 5 days a week.
timewizard
If the service becomes unavailable for 48 straight hours then every certificate expires and nothing works. You probably want a little more room for catastrophic infrastructure problems.
fs111
Load on the underlying infrastructure is a concern. The signing keys are all in HSMs and don't scale infinitely.
bob1029
How does cycling out certificates more frequently reduce the load on HSMs?
timmytokyo
It's all relative. A 47-day cycle increases the load, but a 48-hour cycle would increase it substantially more.
woodruffw
Much of the HSM load within a CA is OCSP signing, not subscriber cert issuance.
karlgkk
> Why not make it 30 seconds?
This is a ridiculous straw man.
> 48 hours. I am willing to bet money this threshold will never be crossed.
That's because it won't be crossed and nobody serious thinks it should.
Short certs are better, but there are trade-offs. For example, if cert infra goes down over the weekend, it would really suck. TBH, from a security perspective, something in the range of a couple of minutes would be ideal, but that runs up against practical reasons:
- cert transparency logs and other logging would need to be substantially scaled up
- for the sake of everyone on-call, you really don't want anything shorter than a reasonable amount of time for a human to respond
- this would cause issues with some HTTP3 performance enhancing features
- thousands of servers hitting a CA creates load that outweighs the benefit of ultra short certs (which have diminishing returns once you're under a few days, anyways)
> This feels like much more of an ideological mission than a practical one
There are numerous practical reasons, as mentioned here by many other people.
Resisting this without good cause, like you have, is more ideological at this point.
pixl97
Heh, working with a number of large companies I've seen most of them moving to internally signed certs on everything because of ever shortening expiration times. They'll have public certs on edge devices/load balancers but internal services will have internal-CA-signed certs with long expiry times because of the number of crappy apps that make using certs a pain in the ass.
plorkyeran
This is a desired outcome. The WebPKI ecosystem would really like it if everyone stopped depending on them for internal things because it's actually a pretty different set of requirements. Long-lived certs with an internal CA makes a lot of sense and is often more secure than using a public CA.
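For anyone who hasn't done it, the core of a long-lived internal CA is only a few openssl commands; a rough sketch (names, curve, and lifetimes are illustrative, and a real server cert also needs a subjectAltName and proper extensions):

    # Create a long-lived internal root CA (illustrative lifetime).
    openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:P-256 -nodes \
      -keyout internal-ca.key -out internal-ca.crt -days 3650 \
      -subj "/CN=Example Corp Internal CA"

    # Issue a cert for an internal host, signed by that CA.
    openssl req -newkey ec -pkeyopt ec_paramgen_curve:P-256 -nodes \
      -keyout svc.key -out svc.csr -subj "/CN=svc.corp.internal"
    openssl x509 -req -in svc.csr -CA internal-ca.crt -CAkey internal-ca.key \
      -CAcreateserial -days 825 -out svc.crt

    # Then distribute internal-ca.crt to the devices that should trust it.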
tetha
Our internally provided certs of various CAs have a TTL of 72 hours and should be renewed every 48 hours.
It's been a huge pain as we have encountered a ton of bugs and missing features in libraries and applications to reload certs like this. And we have some really ugly workarounds in place, because some applications place a "reload a consul client" on the same level as "reload all config, including opening new sockets, adjusting socket parameters, doing TCP connection handover" - all to rebuild a stateless client throwing a few parameters at a standard http client. But oh well.
But I refuse to back down. Reload your certs and your secrets. If we encounter a situation in which we have to mass-revoke and mass-reissue internal certs, it'll be easy for those who do. I don't have time for everyone else.
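For what it's worth, the reload problem is much easier in stacks that let you resolve the certificate per handshake instead of at startup; a minimal Go sketch with illustrative file paths (a production version would cache the keypair and reload on change rather than reading the files every handshake):

    // Minimal sketch: pick up renewed cert files without restarting the server
    // by loading the keypair lazily per handshake. Paths are illustrative.
    package main

    import (
        "crypto/tls"
        "log"
        "net/http"
    )

    func main() {
        cfg := &tls.Config{
            GetCertificate: func(*tls.ClientHelloInfo) (*tls.Certificate, error) {
                cert, err := tls.LoadX509KeyPair("/etc/certs/svc.crt", "/etc/certs/svc.key")
                if err != nil {
                    return nil, err
                }
                return &cert, nil
            },
        }
        srv := &http.Server{Addr: ":8443", TLSConfig: cfg}
        // Empty file args: the certificate comes from GetCertificate above.
        log.Fatal(srv.ListenAndServeTLS("", ""))
    }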
donnachangstein
> Our internally provided certs of various CAs have a TTL of 72 hours and should be renewed every 48 hours.
Do you promise to come back and tell us the story about when someone went on vacation and the certs issued on a Thursday didn't renew over the weekend and come Monday everything broke and no one could authenticate or get into the building?
kam
At least that sounds like it would be a more interesting story than the one where the person who quit a year ago didn't document all the places they manually installed the 2-year certificate.
tetha
I will. We've been betting Postgres connectivity for a few hundred applications on this over the past three years. If this fucks up, it'll be known without me.
This has been our issue too. We've had mandates for rotating OAuth secrets (client ID & client secret).
Except there are no APIs to rotate those. The infrastructure doesn't exist yet.
And refreshing those automatically does not validate ownership, unlike certificates where you can do a DNS check or an HTTP check.
Microsoft has some technology where, alongside these tokens, there is also a per-machine certificate used to sign requests, and those certificates can't leave the machine.
parliament32
We've also felt the pain for OAuth secrets. Current mandates for us are 6 months.
Because we run on Azure / AKS, switching to federated credentials ("workload identities") with the app registrations made most of the pain go away because MS manages all the rotations (3 months) etc. If you're on managed AKS the OIDC issuer side is also automagic. And it's free. I think GCP offers something similar.
Browsers aren't designed for internal use though. They insist on HTTPS for various things that are intranet only, such as some browser APIs, PWAs, etc.
akerl_
As is already described by the comment thread we're replying in, "internal use" and "HTTPS" are very compatible. Corporations can run an internal CA, sign whatever internal certs they want, and trust that CA on their devices.
franga2000
You use the terms "internal use" and "corporations" like they're interchangeable, but that's definitely not the case. Lots of small businesses, other organizations or even individuals want to have some internal services and having to "set up" a CA and add the certs to all client devices just to access some app on the local network is absurd!
Getting my parents to add a CA to their android, iphone, windows laptop and macbook just so they can use my self hosted nextcloud sounds like an absolute nightmare.
The nightmare only intensifies for small businesses that allow their users to bring their own devices (yes, yes, sacrilege but that is how small businesses operate).
Not everything is a massive enterprise with an army of IT support personnel.
Do I add the root CA of my router manufacturer so I can visit its web interface on my internal network without having half the page functionality broken because of overbearing browser manufacturers who operate the "web PKI" as a cartel? This nowadays includes things such as basic file downloads.
ClumsyPilot
> Corporations can run an internal CA
Having just implemented an internal CA, I can assure you, most corporations can’t just run an internal CA. Some struggle to update containers and tie their shoe laces.
lxgr
Yeah, but essentially every home user can only do so after jumping through extremely onerous hoops (many of which also decrease their security when browsing the public web).
I’ve done it in the past, and it was so painful, I just bit the bullet and started accessing everything under public hostnames so that I can get auto-issued Letsencrypt certificates.
rlpb
Indeed they are compatible. However HTTPS is often unnecessary, particularly in a smaller organisation, but browsers mandate significant unnecessary complexity there. In that sense, browsers are not suited to this use in those scenarios.
There’s nothing stopping Apple and Google from issuing themselves certificates every 10 minutes. I get no value for doing this. Building out or expanding my own PKI for my company or setting up the infrastructure to integrate with Digicert or whomever gets me zero security and business value, just cost and toil.
Revocation is most often an issue when CAs fuck up. So now we collectively need to pay to cover their rears.
crote
CAs fucking up every once in a while is inevitable. It is impossible to write guaranteed bug-free software or train guaranteed flawless humans.
The big question is what happens when (not "if") that happens. Companies have repeatedly shown that they are unable to rotate certs in time, to the point of even suing CAs to avoid revocation. They've been asked nicely to get their shit together, and it hasn't happened. Shortening cert lifetime to force automation is the inevitable next step.
Spooky23
Silly me, I’m just a customer, incapable of making my own risk assessments or prioritizing my business processes.
You’re portraying people suing CAs to get injunctions to avoid outages as clueless or irresponsible. The fact is Digicert’s actions, dictated by this CA/Browser forum were draconian and over the top responses to a minor risk. This industry trade group is out of control.
End of the day, we’re just pushing risk around. Running a quality internal PKI is difficult.
christina97
What do you mean "WebPKI … would like"? The browser vendors want one thing (secure, ubiquitous, etc), the CAs want a very different thing (expensive, confusing, etc)…
ozim
Problem is browsers will most likely follow the enforcement of short certificates so internal sites will be affected as well.
Non browser things usually don’t care even if cert is expired or trusted.
So I expect people still to use WebPKI for internal sites.
akerl_
The browser policies are set by the same entities doing the CAB voting, and basically every prior change around WebPKI has only been enforced by browsers for CAs in the browser root trust stores. Which is exactly what's defined in this CAB vote as well.
Why would browsers "most likely" enforce this change for internal CAs as well?
ryao
Why would they? The old certificates will expire and the new ones will have short lifespans. Web browsers do not need to do anything.
That said, it would be really nice if they supported DANE so that websites do not need CAs.
nickf
'Most likely' - with the exception of Apple enforcing 825-day maximum for private/internal CAs, this change isn't going to affect those internal certificates.
jiggawatts
I just got a flashback to trying to automate the certificate issuance process for some ESRI ArcGIS product that used an RPC configuration API over HTTPS to change the certificate.
So yes, you had to first ignore the invalid self-signed certificate while using HTTPS with a client tool that really, really didn't want to ignore the validity issue, then upload a valid certificate, restart the service... which would terminate the HTTPS connection with an error breaking your script in a different not-fun way, and then reconnect... at some unspecified time later to continue the configuration.
Fun times...
rsstack
> I've seen most of them moving to internally signed certs
Isn't this a good default? No network access, no need for a public certificate, no need for a certificate that might be mistakenly trusted by a public (non-malicious) device, no need for a public log for the issued certificate.
pavon
Yes, but it is a lot more work to run an internal CA and distribute that CA cert to all the corporate clients. In the past getting a public wildcard cert was the path of least resistance for internal sites - no network access needed, and you aren't leaking much info into the public log. That is changing now, and like you said it is probably a change for the better.
pkaye
What about something like step-ca? I got the free version working easily on my home network.
Not everything that's easy to do on a home network is easy to do on a corporate network. The biggest problem with corporate CAs is how to issue new certificates for a new device in a secure way, a problem which simply doesn't exist on a home network where you have one or at most a handful of people needing new certs to be issued.
bravetraveler
> A lot more work
'ipa-client-install' for those so motivated. Certificates are literally one among the many things that are part of your domain services.
If you're at the scale past what IPA/your domain can manage, well, c'est la vie.
Spivak
I think you're being generous if you think the average "cloud native" company is joining their servers to a domain at all. They've certainly fallen out of fashion in favor of the servers being dumb and user access being mediated by an outside system.
I’ve unfortunately seen the opposite - internal apps are now back to being deployed over VPN and HTTP
tomjen3
I would love to do that for my homelab, but not all docker containers trust root certs from the system so getting it right would have been a bigger challenge than dns hacking to get a valid certificate for something that can’t be accessed from outside the network.
I am not willing to give credentials to alter my dns to a program. A security issue there would be too much risk.
xienze
> but internal services will have internal-CA-signed certs with long expiry times because of the number of crappy apps that make using certs a pain in the ass.
Introduce them to the idea of having something like Caddy sit in front of apps solely for the purpose of doing TLS termination... Caddy et al can update the certs automatically.
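A minimal Caddyfile sketch of that pattern (hostname and backend port are illustrative); Caddy obtains and renews the certificate itself and forwards plain HTTP to the app:

    app.example.com {
        # Caddy handles ACME issuance/renewal for this hostname automatically,
        # terminates TLS, and proxies plain HTTP to the backend.
        reverse_proxy localhost:8080
    }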
pixl97
Unless they are web/tech companies they aren't doing that. Banks, finance, large manufacturing are all terminating at F5's and AVI's. I'm pretty sure those update certs just fine, but it's not really what I do these days so I don't have a direct answer.
xienze
Sure. The point is, don't bother letting the apps themselves do TLS termination. Too much work that's better handled by something else.
hedora
Also, moving termination off the endpoint server makes it much easier for three letter agencies to intercept + log.
qmarchi
Most responsible orgs do TLS termination on the public side of a connection, but will still make a backend connection protected by TLS, just with a internal CA.
tikkabhuna
F5s don't support ACME, which has been a pain for us.
cpach
It might be possible to run an ACME client on another host in your environment. (IMHO, the DNS-01 challenge is very useful for this.) Then you can (probably) transfer the cert+key to BIG IP, and activate it, via the REST API.
I haven’t used BIG IP in a long while, so take this with a grain of salt, but it seems to me that it might not be impossible to get something going – despite the fact that BIG IP itself doesn’t have native support for ACME.
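A rough sketch of that pattern, using the certbot Cloudflare DNS plugin purely as an example and leaving the BIG-IP upload step as a placeholder (the exact iControl REST calls depend on your version, so this is an assumption to verify against F5's docs):

    # On a renewal host with outbound access: DNS-01 issuance.
    certbot certonly --dns-cloudflare \
      --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini \
      -d lb.example.com

    # Then push the cert/key to the load balancer; the mechanism (iControl REST,
    # scp + tmsh, config management) is deployment-specific.
    scp /etc/letsencrypt/live/lb.example.com/fullchain.pem \
        /etc/letsencrypt/live/lb.example.com/privkey.pem \
        admin@bigip.example.net:/var/tmp/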
F5 sells expensive boxes intended for larger installations where you can afford not to do ACME in the external facing systems.
Giving the TLS endpoint itself the authority to manage certificates kind of weakens the usefulness of rotating certificates in the first place. You probably don't let your external facing authoritative DNS servers near zone key material, so there's no reason to let the external load balancers rotate certificates.
Where I have used F5 there was never any problem letting the backend configuration system do the rotation and upload of certificates together with every other piece of configuration that is needed for day to day operations.
EvanAnderson
Exactly. According to posters here you should just throw them away and buy hardware from a vendor who does. >sigh<
Don't expect firmware / software updates to enable ACME-type functionality for tons of gear. At best it'll be treated as an excuse by vendors to make Customers forklift and replace otherwise working gear.
Corporate hardware lifecycles are longer than the proposed timeline for these changes. This feels like an ill thought-out initiative by bureaucrats working in companies who build their own infrastructure (in their white towers). Meanwhile, we plebs who work in less-than-Fortune 500 companies stuck with off-the-shelf solutions will be forced to suffer.
JackSlateur
F5 is the pain.
cryptonym
You now have to build and self-host a complete CA/PKI.
Or request a certificate over the public internet, for an internal service. Your hostname must be exposed to the web and will be publicly visible in transparency reports.
You could always ask for a wildcard for an internal subdomain and use that instead, so you leak your internal domain but not individual hosts.
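For illustration, a wildcard requires the DNS-01 challenge; manual mode is shown here (domain is illustrative), though real automation would use a DNS plugin or an auth hook:

    certbot certonly --manual --preferred-challenges dns \
      -d "*.internal.example.com"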
pixl97
I'm pretty sure every bank will auto fail wildcard certs these days, at least the ones I've worked with.
Key loss on one of those is like a takeover of an entire chunk of hostnames. Really opens you up.
JoshTriplett
> Or request a certificate over the public internet, for an internal service. Your hostname must be exposed to the web and will be publicly visible in transparency reports.
That doesn't seem like the end of the world. It means you shouldn't have `secret-plans-for-world-takeover.example.com`, but it's already the case that secret projects should use opaque codenames. Most internal domain names would not actually leak any information of value.
lokar
I’ve always felt a major benefit of an internal CA is making it easy to have very short TTLs
SoftTalker
Or very long ones. I often generate 10 year certs because then I don't have to worry about renewing them for the lifetime of the hardware.
lokar
In a production environment with customer data?
SoftTalker
No for internal stuff.
formerly_proven
I'm surprised there is no authorization-certificate-based challenge type for ACME yet. That would make ACME practical to use in microsegmented networks.
Does your hosted service know the private keys or are they all on the client?
benburkert
No, they stay on the client, our service only has access to the CSR. From our docs:
> The CSR relayed through Anchor does not contain secret information. Anchor never sees the private key material for your certificates.
bigp3t3
I'd set that up the second it becomes available if it were a standard protocol.
Just went through setting up internal certs on my switches -- it was a chore to say the least!
With a Cert Template on our internal CA (windows), at least we can automate things well enough!
formerly_proven
Yeah it's almost weird it doesn't seem to exist, at least publicly. My megacorp created their own protocol for this purpose (though it might actually predate ACME, I'm not sure), and a bunch of in-house people and suppliers created the necessary middlewares to integrate it into stuff like cert-manager and such (basically everything that needs a TLS certificate and is deployed more than thrice). I imagine many larger companies have very similar things, with the only material difference being different organizational OIDs for the proprietary extension fields (I found it quite cute when I learned that the corp created a very neat subtree beneath its organization OID).
Pxtl
At this point I wish we could just get all our clients to say "self-signed is fine if you're connecting to a .LOCAL domain name". https is intrinsically useful over raw http, but the overhead of setting up centralized certs for non-public domains is just dumb.
Give us a big global *.local cert we can all cheat with, so I don't have to blast my credentials in the clear when I log into my router's admin page.
shlant
this is exactly what I do because mongo and TLS is enough of a headache. I am not dealing with rotating certificates regularly on top of that for endpoints not exposed to the internet.
SoftTalker
Yep letsencrypt is great for public-facing web servers but for stuff that isn't a web server or doesn't allow outside queries none of that "easy" automation works.
procaryote
Acme dns challenge works for things that aren't webservers.
For the other case perhaps renew the cert at a host allowed to do outside queries for the dns challenge and find some acceptable automated way to propagate an updated cert to the host that isn't allowed outside queries.
Yeroc
Last time I checked there's no standardized API/protocol to deal with populating the required TXT records on the DNS side. This is all fine if you've out-sourced your DNS services to one of the big players with a supported API but if you're running your own DNS services then doing automation against that is likely not going to be so easy!
And may the devil help you if you do something wrong and accidentally trip LetsEncrypt's rate limiting.
You can do nothing except twiddle your thumbs while it times out and that may take a couple of days.
JackSlateur
Haa, yes! We have that, too! Accepted warnings in browsers! curl -k! verify=False! A glorious future for the hacking industry!
greatgib
As I said in another thread, basically this will kill any possibility of running your own CA for your own subdomain. Only the big ones embedded in browsers will be able to have their own CA certificates with whatever period they want...
And in term of security, I think that it is a double edged sword:
- everyone will be so used to certificates changing all the time, and with no certificate pinning anymore, that the day China, a company, or whoever serves you a fake certificate, you will be less able to notice it
- Instead of closed, read-only systems that only have to connect outside once a year or so to update their certificates, you will now have machines all around the world that must allow quasi-permanent connections to certificate servers to keep updating all the time. If a Digicert or Let's Encrypt server, or the "cert updating client", is ever rooted or has a security issue, most servers around the world could be compromised in a very, very short time.
As a side note, I'm totally laughing at the following explanation in the article:
47 days might seem like an arbitrary number, but it’s a simple cascade:
- 47 days = 1 maximal month (31 days) + 1/2 30-day month (15 days) + 1 day wiggle room
So, 47 is not arbitrary, but 1 month + 1/2 month + 1 day are not arbitrary values...
lolinder
> everyone will be so used to certificates changing all the time, and with no certificate pinning anymore, that the day China, a company, or whoever serves you a fake certificate, you will be less able to notice it
I'm a computing professional in the tiny slice of internet users that actually understands what a cert is, and I never look at a cert by hand unless it's one of my own that I'm troubleshooting. I'm sure there are some out there who do (you?), but they're a minority within a minority—the rest of us just rely on the automated systems to do a better job at security than we ever could.
At a certain point it is correct for systems engineers to design around keeping the average-case user more secure even if it means removing a tiny slice of security from the already-very-secure power users.
gruez
>As I said in another thread, basically this will kill any possibility of running your own CA for your own subdomain.
like, private CA? All of these restrictions are only applied for certificates issued under the webtrust program. Your private CA can still issue 100 year certificates.
greatgib
Let's suppose that I'm a competitor of Google and Amazon, and I want to have my Public root CA for mydomain.com to offer my clients subdomains like s3.customer1.mydomain.com, s3.customer2.mydomain.com,...
tptacek
If you want to be a public root CA, so that every browser in the world needs to trust your keys, you can do all the lifting that the browsers are asking from public CAs.
gruez
Why do you want this when there are wildcard certificates? That's how the hyperscalers do it as well. Amazon doesn't have a separate certificate for each s3 bucket, it's all under a wildcard certificate.
vlovich123
Amazon did this the absolute worst way - all customers share the same flat namespace for S3 buckets which limits the names available and also makes the bucket names discoverable. Did it a bit more sanely and securely at Cloudflare where it was namespaced to the customer account, but that required registering a wildcard certificate per customer if I recall correctly.
zamadatix
The only consideration I can think is public wildcard certificates don't allow wildcard nesting so e.g. a cert for *.example.com doesn't offer a way for the operator of example.com to host a.b.example.com. I'm not sure how big of a problem that's really supposed to be though.
anacrolix
No. Chrome flat out rejects certificates that expire more than 13 months away, last time I tried.
nickf
Certificate pinning to public roots or CAs is bad. Do not do it. You have no control over the CA or roots, and in many cases neither does the CA - they may have to change based on what trust-store operators say.
Pinning to public CAs or roots or leaf certs, pseudo-pinning (not pinning to a key or cert specifically, but expecting some part of a certificate DN or extension to remain constant), and trust-store limiting are all bad, terrible, no-good practices that cause havoc whenever they are implemented.
szszrk
Ok, but what's the alternative?
Support for cert and CA pinning is in a state that is much better than I thought it would be, at least for mobile apps. I'm impressed by Apple's ATS.
Yet, for instance, you can't pin a CA for any domain, you always have to provide it up front to audit, otherwise your app may not get accepted.
Doesn't this mean that it's not (realistically) possible to create cert pinning for small solutions? Like homelabs or app vendors that are used by onprem clients?
We'll keep abusing PKI for those use cases.
nickf
I think if you're going to pin, pin to something you control. If it's an API endpoint, you can use a private CA and have the app trust your root, and pin to that. Same end result, but you're not going to be stuck if a third-party you have nothing to do with decides that some part of the hierarchy needs to change.
szszrk
That's the exact opposite of what I'm referring to.
There is a client that has a self hosted web service. Or a SaaS but under his own domain.
There is a vendor that provides nice apps to interact with that service. Vendor distributes them on his own to stores, upgrades etc.
Clients have no interest in doing that, nor the competencies.
Currently there is no solution here: the vendor needs to distribute an app that has the client's CAs or certs built in (into his app release), to be able to pin it.
I've seen that scenario many times in mid/small-sized banks, insurance and surrounding services. Some of these institutions rely purely on external vendors and just integrate them. Same goes for tech savvy selfhosters - they often rely on third party mobile apps but host backends themselves.
lucb1e
> 47 [is?] arbitrary, but 1 month + 1/2 month + 1 day are not arbitrary values...
Not related to certificates specifically, and the specific number of days is in no way a security risk, but it reminded me of NUMS generators. If you find this annoyingly arbitrary, you may also enjoy: <https://github.com/veorq/numsgen>. It implements this concept:
> [let's say] one every billion values allows for a backdoor. Then, I may define my constant to be H(x) for some deterministic PRNG H and a seed value x. Then I proceed to enumerate "plausible" seed values x until I find one which implies a backdoorable constant. I can begin by trying out all Bible verses, excerpts of Shakespeare works, historical dates, names of people and places... because for all of them I can build a story which will make the seed value look innocuous
> As I said in another thread, basically this will kill any possibility of running your own CA for your own subdomain
Only if browsers enforce the TLS requirements for private CAs. Usually, browsers exempt user or domain controlled CAs from all kinds of requirements, like certificate transparency log requirements. I doubt things will be different this time.
If they do decide to apply those limits, you can run an ACME server for your private CA and point certbot or whatever ACME client you prefer at it to renew your internal certificates. Caddy can do this for you with a couple of lines of config: https://caddyserver.com/docs/caddyfile/directives/acme_serve...
Funnily enough, Caddy defaults to issuing 12 hour certificates for its local CA deployment.
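Roughly what that looks like (hostname illustrative; check the linked docs for the exact ACME directory URL your clients should point at, since that depends on Caddy's defaults):

    # Internal ACME server backed by Caddy's built-in local CA.
    ca.internal.example {
        tls internal
        acme_server
    }

ACME clients inside the network then point at that host's directory endpoint instead of Let's Encrypt.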
> no certificate pinning anymore
Why bother with public certificate authorities if you're hardcoding the certificate data in the client?
> Instead of having closed systems, readonly, having to connect outside and update only once per year or more to update the certificates, you will have now all machines around the world that will have to allow quasi permanent connections to random certificate servers for the updating the system all the time.
Those hosts needed a bastion host or proxy of sorts to connect to the outside yearly, so they can still do that today. But I don't see the advantage of using the public CA infrastructure in a closed system, might as well use the Microsoft domain controller settings you probably already use in your network to generate a corporate CA and issue your 10 year certificates if you're in control of the network.
precommunicator
> everyone will be so used to certificates changing all the time, and with no certificate pinning anymore
Browser certificate pinning is deprecated since 2018. No current browsers support HPKP.
There are alternatives to pinning: DNS CAA records, monitoring CT logs.
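For example, a CAA record restricting who may issue for a domain looks like this (domain and CA are illustrative):

    example.com.  IN  CAA  0 issue "letsencrypt.org"
    example.com.  IN  CAA  0 iodef "mailto:security@example.com"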
blincoln
Cert pinning is a very common practice for mobile apps. I'm not a fan of it, but it's how things are today. Seems likely that that will have to change with shorter cert lifetimes.
yjftsjthsd-h
If you're in a position to pin certs, aren't you in a position to ignore normal CAs and just keep doing that?
lo0dot0
42
ghusto
I really wish encryption and identity weren't so tightly coupled in certificates. If I've issued a certificate, I _always_ care about encryption, but sometimes do not care about identity.
For those times when I only care about encryption, I'm forced to take on the extra burden that caring about identity brings.
Pet peeve.
tptacek
There's minimal security in an unauthenticated encrypted connection, because an attacker can just MITM it.
SoftTalker
Trust On First Use is the normal thing for these situations.
asmor
TOFU equates to "might as well never ask" for most users. Just like Windows UAC prompts.
superkuh
You're right most of the time. But there are two webs. And it's only in the latter (far more common) case that things like that matter.
There is the web as it always has been on http/1.1 that is a hyperlinked set of html documents hosted on a mishmash of random commercial and personal servers. Then there is modern http/2 http/3 CA TLS only web hosted as a service on some other website or cloud; mostly to do serious business and make money. The modern web's CA TLS-only ID scheme is required due to the complexity and risk of automatic javascript execution in browsers.
I wish we could have browsers that could support both use cases. But we can't because there's too much money and private information bouncing around now. Can't be whimsical, can't 'vibe code' the web ID system (ie, self signed not feasible in HTTP/3). It's all gotta be super serious. For everyone. And that means bringing in a lot of (well hidden by acme2 clients) complexity and overhead and centralization (everyone uses benevolent US based Lets Encrypt). This progressive lowering of the cert lifetimes is making the HTTP-only web even more fragile and hard to create lasting sites on. And that's sad.
TOFU works for the old web just great. It's completely incompatible with the modern web because major browsers will only ever compile their HTTP/* libs with flags that prevent TOFU and self-signed. You could host a http/1.1 self-signed and TOFU but everyone (except geeks) would be scared away or incapable of loading it.
So, TOFU works if you just want to do something like "gemini" protocol but instead of a new protocol just stick to original http and have a demographic of retro-enthusiasts and poor people. It's just about as accessible as gemini for most people (ie, not very) except for two differences. 1. Bots still love http/1.1 and don't care if it's plain text. 2. There's still a giant web of http/1.1 websites out there.
TheJoeMan
Not to mention the usage of web browsers for configuring non-internet devices! I mean such as managing a router from the LAN side built-in webserver, how many warnings you have to click through in Firefox nowadays. Hooking an iPhone to an IoT device, the iPhone hates that there's no "internet" and constantly tries to drop the WiFi.
steventhedev
There is a security model where MITM is not viable - and separating that specific threat from that of passive eavesdropping is incredibly useful.
tptacek
MITM scenarios are more common on the 2025 Internet than passive attacks are.
steventhedev
MITM attacks are common, but noisy - BGP hijacks are literally public to the internet by their nature. I believe that insisting on coupling confidentiality to authenticity is counterproductive and prevents the development of more sophisticated security models and network design.
What does their commonality have to do with the use cases where they aren't viable?
jchw
I mean, we do TOFU for SSH server keys* and nobody really seems to bat an eye at that. Today if you want "insecure but encrypted" on the web the main way to go is self-signed which is both more annoying and less secure than TOFU for the same kind of use case. Admittedly, this is a little less concerning of an issue thanks to ACME providers. (But still annoying, especially for local development and intranet.)
*I mistakenly wrote "certificate" here initially. Sorry.
tptacek
SSH TOFU is also deeply problematic, which is why cattle fleet operators tend to use certificates and not piecewise SSH keys.
jchw
I've made some critical mistakes in my argument here. I am definitely not referring to using SSH TOFU in a fleet. I'm talking about using SSH TOFU with long-lived machines, like your own personal computers, or individual long-running servers.
Undoubtedly it is not best practice to lean on TOFU for good reason, but there are simply some lower stakes situations where engaging the CA system is a bit overkill. These are systems with few nodes (maybe just one) that have few users (maybe just one.) I have some services that I deploy that really only warrant a single node as HA is not a concern and they can easily run off a single box (modern cheap VPSes really don't sweat handling ~10-100 RPS of traffic.) For those, I pre-generate SSH server keys before deployment. I can easily verify the fingerprint in the excessively rare occasion it isn't already trusted. I am not a security expert, but I think this is sufficient at small scales.
To be clear, there are a lot of obvious security problems with this:
- It relies on me actually checking the fingerprint.
- SSH keys are valid and trusted indefinitely, so they have to be rotated manually.
- The bootstrap process inevitably involves the key being transmitted over the wire, which isn't as good as never having the key go over the wire, like you could do with CSRs.
This is clearly not good enough for a service that needs high assurance against attackers, but I honestly think it's largely fine for a small to medium web server that serves some small community. Spinning up a CA setup for that feels like overkill.
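For what it's worth, the fingerprint check itself is cheap to script. Here's a minimal sketch, assuming a standard OpenSSH host key at a placeholder path, that reproduces the SHA256 fingerprint that `ssh-keygen -lf` would print, so you can compare it out of band:

    import base64
    import hashlib

    # Placeholder path; point this at the host key you actually want to verify.
    PUBKEY_PATH = "/etc/ssh/ssh_host_ed25519_key.pub"

    with open(PUBKEY_PATH) as f:
        # A public key line looks like: "ssh-ed25519 AAAA... comment"
        key_type, key_b64 = f.read().split()[:2]

    blob = base64.b64decode(key_b64)
    digest = hashlib.sha256(blob).digest()
    # OpenSSH displays the SHA-256 fingerprint as unpadded base64.
    fingerprint = base64.b64encode(digest).decode().rstrip("=")
    print(f"{key_type} SHA256:{fingerprint}")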
As for what I personally would do instead for a fleet of servers, personally I think I wouldn't use SSH at all. In professional environments it's been a long time since I've administered something that wasn't "cloud" and in most of those cloud environments SSH was simply not enabled or used, or if it was we were using an external authorization system that handled ephemeral keys itself.
That said, here I'm just suggesting that I think there is a gap between insecure HTTP and secure HTTPS that is currently filled by self-signed certificates. I'm not suggesting we should replace HTTPS usage today with TOFU, but I am suggesting I see the value in a middle road between HTTP and HTTPS where you get encryption without a strong proof of what you're connecting to. In practice this is sometimes the best you can really get anyway: consider the somewhat common use case of a home router configuration page. I personally see the value in still encrypting this connection even if there is no way to actually ensure it is secure. Same for some other small scale local networking and intranet use cases.
SSH server certificates should not be TOFU; the point of SSH certs is that you can trust the signing key.
TOFU on ssh server keys... it's still bad, but fewer people are interested in intercepting ssh vs tls.
tptacek
Intercepting and exploiting first-contact SSH sessions is a security conference sport. People definitely do it.
jchw
I just typed the wrong thing, fullstop. I meant to say server keys; fixed now.
Also, I agree that TOFU on its own is certainly worse than having robust verification via the CA system. OTOH, SSH-style TOFU has some advantages over the CA system, too, at least without additional measures like HSTS and certificate pinning. If you are administering machines that you yourself set up, there is little reason to bother with anything more than TOFU because you'll cache the key shortly after the machine is set up and then get warned if a MITM is attempted. That, IMO, is the exact sort of argument in favor of having an "insecure but encrypted" sort of option for the web; small scale cases where you can just verify the key manually if you need to.
pabs3
You don't have to TOFU SSH server keys, there is a DNSSEC option, or you can transfer the keys via a secure path, or you can sign the keys with a CA.
gruez
>I mean, we do TOFU for SSH server certificates and nobody really seems to bat an eye at that.
Mostly because ssh isn't something most people (eg. your aunt) uses, and unlike with https certificates, you're not connecting to a bunch of random servers on a regular basis.
jchw
I'm not arguing for replacing existing uses of HTTPS here, just cases where you would today use self-signed certificates or plaintext.
hedora
TOFU is not less secure than using a certificate authority.
Both defend against attackers the other cannot. In particular, the number of machines, companies and government agencies you have to trust in order to use a CA is much higher.
How is an attacker going to MITM an encrypted connection they don't have the keys for, without having rogue DNS or something similar, i.e. faking the actual target?
Ajedi32
It's an unauthenticated encrypted connection, so there's no way for you to know whose keys you're using. The attacker can just tell you "Hi, I'm the server you're looking for. Here's my key." and your client will establish a nice secure, encrypted connection to the malicious attacker's computer. ;)
notTooFarGone
There are enough examples where this is just a bogus scenario. There are a lot of IoT cases that fall apart anyway once the attacker is able to do a MITM attack.
For example, if the MITM requires physical access to the machine, you'd have to cover physical security first. As long as that's not the case, who cares about some connection hijack.
And if the data you're actually communicating isn't worth the encryption in the first place, but has to be encrypted because of regulation, you're just doing the dance without it being worth it.
oconnor663
They MITM the key exchange step at the beginning, and now they do have the keys. The thing that prevents this in TLS is the chain of signatures asserting identity.
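To make that failure mode concrete, here's a toy sketch (deliberately tiny, fake parameters, not real crypto) of classic unauthenticated Diffie-Hellman being MITM'd: the attacker simply runs one exchange with each side, and neither side can tell.

    import random

    # Toy finite-field DH; these parameters are illustrative only.
    P = 0xFFFFFFFB  # a small prime, nothing like real-world sizes
    G = 5

    def keypair():
        priv = random.randrange(2, P - 2)
        return priv, pow(G, priv, P)

    a_priv, a_pub = keypair()  # Alice
    b_priv, b_pub = keypair()  # Bob
    m_priv, m_pub = keypair()  # Mallory, sitting on the path between them

    # Mallory swaps in her own public value in both directions:
    # Alice believes m_pub is Bob's, Bob believes m_pub is Alice's.
    alice_key = pow(m_pub, a_priv, P)
    bob_key = pow(m_pub, b_priv, P)

    # Mallory derives both "shared" keys and can decrypt/re-encrypt everything.
    assert alice_key == pow(a_pub, m_priv, P)
    assert bob_key == pow(b_pub, m_priv, P)
    print("MITM succeeded; only an identity check (signatures/certs) would catch it")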
2mlWQbCK
You can have TLS with TOFU, like in the Gemini protocol. At least then, in theory, the MITM has to happen the first time you connect to a site. There is also the possibility of out-of-band confirmation of a certificate's fingerprint if you want to be really sure that some Gemini server is the one you hope it is.
panki27
You can not MITM a key that is being exchanged through Diffie-Hellman, or have I missed something big?
Connections never start as encrypted, they always start as plain text. There are multiple ways of impersonating an IP even if you don't control DNS, especially if you are in the same local network.
Gigachad
Double especially if it's the ISP or government involved. They can just automatically MITM and reencrypt every connection if there is no identity checks.
gruez
>Connections never start as encrypted, they always start as plain text
Not "never", because of HSTS preload, and browsers slowly adding scary warnings to plaintext connections.
I disagree. Think about every time you use a service (website, email, etc.) you've used before via a network you don't trust (e.g. free WiFi).
On the other hand providing the option may give a false sense of security. I think the main reason SSH isn't MitM'd all over the place is it's a pretty niche service and very often you do have a separate authentication method by sending your public key over HTTPS.
saurik
When I use a service over TLS on a network I don't trust, the premise is that I only will trust the connection if it has a certificate from a handful of companies trusted by the people who wrote the software I'm using (my browser/client and/or my operating system) to only issue said certificates to people who are supposed to have them (which these days is increasingly defined to be "who are in control of the DNS for the domain name at a global level", for better or worse, not that everyone wants to admit that).
But like, no: the free Wi-Fi I'm using can't, in fact, MITM the encryption used by my connection... it CAN do a bunch of other shitty things to me that undermine not only my privacy but even undermine many of the things people expect to be covered by privacy (using traffic analysis on the size, timing, or destination of the packets that I'm sending), but the encryption itself isn't subject to the failure mode of SSH.
ongy
The encryption itself may not be.
Establishing the initial exchange of crypto key material can be.
That's where certificates are important because they add identity and prevent spoofing.
With TOFU, if the first use is on an insecure network, this exchange is jeopardized. And in this case, the encryption is not with the intended partner and thus does not need to be attacked.
woodruffw
> I disagree. Think about every time you use a service (website, email, etc.) you've used before via a network you don't trust (e.g. free WiFi).
Hm? The reason I do use those services over a network I don't trust is because they're wrapped in authenticated, encrypted channels. The authenticated encryption happens at a layer above the network because I don't trust the network.
tikkabhuna
But isn't that exactly the previous poster's point? On free WiFi someone can just MITM your connection, you would never know, and you'd think it's encrypted. It's the worst possible outcome. At least when there's no encryption, browsers can tell the user to be careful.
IshKebab
They could still tell the user to be careful without authentication.
He wasn't proposing that encryption without authentication gets the full padlock and green text treatment.
Ajedi32
In what situation would you want to encrypt something but not care about the identity of the entity with the key to decrypt it? That seems like a very niche use case to me.
xyzzy123
Because TLS doesn't promise you very much about the entity which holds the key. All you really know is that they control some DNS records.
You might be visiting myfavouriteshoes.com (a boutique shoe site you have been visiting for years), but you won't necessarily know if the regular owner is away or even if the business has been sold.
Ajedi32
It tells you the entity which holds the key is the actual owner of myfavouriteshoes.com, and not just a random guy operating the free Wi-Fi hotspot at the coffee shop you're visiting. If you don't care about that then why even bother with encryption in the first place?
xyzzy123
True.
OK I will fess up. The truth is that I don't spend a lot of time in coffee shops but I do have a ton of crap on my LAN that demands high amounts of fiddle faddle so that the other regular people in my house can access stuff without dire certificate warnings, the severity of which seems to escalate every year.
Like, yes, I eat vegetables and brush my teeth and I understand why browsers do the things they do. It's just that neither I nor my users care in this particular case, our threat model does not really include the mossad doing mossad things to our movie server.
at least it's not evil-government-proxy.com that decided to mitm you and look at your favorite shoes.
xyzzy123
Indeed and the system is practically foolproof because the government cannot take over DNS records, influence CAs, compromise cloud infrastructure / hosting, or rubber hose the counter-party to your communications.
Yes I am being snarky - network level MITM resistance is wonderful infrastructure and CT is great too.
ghusto
It happens often enough that it would be useful to me. Mostly it's when developing something. I'm the one creating the cert, I'm the one putting it in place, I'm in control of the DNS and the place I'm connecting to, and it's all local. There is an insignificant chance that someone could do something nasty to that connection.
pizzafeelsright
Seems logical.
If we encrypt everything we don't need AuthN/Z.
Encrypt locally to the target PK. Post a link to the data.
lucb1e
What? I work in this field and I have no idea what you mean. (I get the abbreviations like authz and pk, but not how "encrypting everything" and "posting links" is supposed to remove the need for authentication)
mannyv
All our door locks suck, but everyone has a door lock.
The goal isn't to make everything impossible to break. The goal is to provide Just Enough security to make things more difficult. Legally speaking, sniffing and decrypting encrypted data is a crime, but sniffing and stealing unencrypted data is not.
That's an important practical distinction that's overlooked by security bozos.
charcircuit
Having them always coupled disincentivizes bad ISPs from MITMing the connection.
silverwind
I agree, there needs to be a TLS without certificates. Pre-shared secrets would be much more convenient in many scenarios.
ryao
How about TLS without CAs? See DANE. If only web browsers would support it.
pornel
DANE is a TLS with too-big-to-fail CAs that are tied to the top-level domains they own, and can't be replaced.
Separation between CAs and domains allows browsers to get rid of incompetent and malicious CAs with minimal user impact.
ryao
DANE lets the domain owner manage the certificates issued for the domain.
pornel
This delegation doesn't play the same role as CAs in WebPKI.
Without DNSSEC's guarantees, the DANE TLSA records would be as insecure as self-signed certificates in WebPKI are.
It's not enough to have some certificate from some CA involved. It has to be a part of an unbroken chain of trust anchored to something that the client can verify. So you're dependent on the DNSSEC infrastructure and its authorities for security, and you can't ignore or replace that part in the DANE model.
panki27
Isn't this exactly the reason why Let's Encrypt was brought to life?
Vegenoid
Isn't identity the entire point of certificates? Why use certificates if you only care about encryption?
I want a middle ground. Identity verification is useful for TLS, but I really wish there was no reliance on ultimately trusted third parties for that. Maybe put some sort of identity proof into DNS instead, since the whole thing relies on DNS anyway.
immibis
Makes it trivial for your DNS provider to MITM you, and you can't even use certificate transparency to detect it.
grishka
You can use multiple DNS providers at once to catch that situation. You can have some sort of signing scheme where each authoritative server would sign something in turn to establish a chain of trust up to the root servers. You can use encrypted DNS, even if it is relying on traditional TLS certificates, but it can also use something different for identity verification, like having you use a config file with the public key embedded in it, or a QR code, instead of just an address.
ryao
If web browsers supported DANE, we would not need CAs for encryption.
Avamander
DNSSEC is just a shittier PKI with CAs that are too big to ever fail.
immibis
It is, but since we rely on DNS anyway, no matter what, and your DNS provider can get a certificate from Let's Encrypt for your site, without asking you, there's merit to combining them. It doesn't add any security to have PKI separate from DNS.
However, we could use some form of Certificate Transparency that would somehow work with DANE.
Also it still protects you from everyone who isn't your DNS provider, so it's valuable if you only need a medium level of security.
Avamander
> It is, but since we rely on DNS anyway, no matter what, and your DNS provider can get a certificate from Let's Encrypt for your site, without asking you, there's merit to combining them.
They can, but they'll also get caught thanks to CT. No such audit infrastructure exists for DANE/DNSSEC.
> It doesn't add any security to have PKI separate from DNS.
One can also get a certificate for an IP address.
ryao
There is no need for a certificate from Let's Encrypt. DANE lets you put your own self-signed certificate into DNS, and it should be trusted because DNS is authoritative, although DNSSEC should be required to make it secure.
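For the curious, publishing a DANE binding is mostly just hashing the key. A minimal sketch with the `cryptography` package, assuming a local cert at a placeholder path; it prints a TLSA "3 1 1" record (DANE-EE, SubjectPublicKeyInfo, SHA-256), which is only meaningful if the zone is DNSSEC-signed:

    import hashlib
    from cryptography import x509
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    # Placeholder file name and domain; use your own cert and zone.
    with open("server.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    # Usage 3 (DANE-EE), selector 1 (SPKI), matching type 1 (SHA-256).
    spki = cert.public_key().public_bytes(Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
    print(f"_443._tcp.example.com. IN TLSA 3 1 1 {hashlib.sha256(spki).hexdigest()}")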
tptacek
And yet no browser trusts it, and a single-digit percentage of popular zones (from the Tranco list) have signatures; this despite decades of deployment effort. Meanwhile, over 60% of all sites on the Internet have ISRG certificates.
captn3m0
This is great news. This would blow a hole in two interesting places where leaf-level certificate pinning is relied upon:
1. mobile apps.
2. enterprise APIs. I dealt with lots of companies that would pin the certs without informing us, and then complain when we'd rotate the cert. A 47-day window would force them to rotate their pins automatically, making it even more of a security theater. Or hopefully, they rightly switch to CAA.
bearjaws
Giving me PTSD from working in healthcare.
Health systems love pinning certs, and we use an ALB with 90-day certs, so they were always furious.
Every time I was like "we can't change it" and "you do trust the CA, right?". Absolute security theatre.
DiggyJohnson
Do you (or anyone) recommend any text based resources laying out the state of enterprise TLS management in 2025?
It’s become a big part of my work and I’ve always just had a surface knowledge to get me by. Assume I work in a very large finance or defense firm.
grishka
Isn't it usually the server's public key that's pinned? The key pair isn't regenerated when you renew the certificate.
toast0
Typical guidance is to pin the CA or intermediate, because in case of a key compromise, you're going to need to generate a new key.
You should really generate a new key for each certificate, in case the old key is compromised and you don't know about it.
What would really be nice, but is unlikely to happen, would be if you could get a constrained CA certificate issued for your domain and pin that, then issue your own short-term certificates from there. But if those were widespread, they'd need to be short-dated too, so you'd need to either pin the real CA or the public key, and we're back to where we were.
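If you do go the pin-the-public-key route, the pin is just a hash of the leaf's SubjectPublicKeyInfo, so it survives renewals as long as you keep reusing the key. A minimal sketch, assuming the `cryptography` package and a placeholder cert path:

    import base64
    import hashlib
    from cryptography import x509
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    # Placeholder path; load_pem_x509_certificate reads the first (leaf) cert in the file.
    with open("fullchain.pem", "rb") as f:
        leaf = x509.load_pem_x509_certificate(f.read())

    spki = leaf.public_key().public_bytes(Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
    pin = base64.b64encode(hashlib.sha256(spki).digest()).decode()
    print(f'pin-sha256="{pin}"')  # same value after renewal if the key wasn't regenerated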
nickf
I've said it up-thread, but never ever never never pin to anything public. Don't do it. It's bad. You, and even the CA have no control over the certificates and cannot rely on them remaining in any way constant.
Don't do it. If you must pin, pin to private CAs you control. Otherwise, don't do it. Seriously. Don't.
toast0
There's not really a better option if you need your urls to work with public browsers and also an app you control. You can't use a private CA for those urls, because the public browsers won't accept it; you need to include a public CA in your app so you don't have to rely on the user's device having a reasonable trust store. Including all the CAs you're never going to use is silly, so picking a few makes sense.
richardwhiuk
You don't need both of those things. Give your app a different url.
ori_b
Why should I trust a CA that has no control over the certificate chains?
nickf
Because they operate in a regulated, security industry where changes happen - sometimes beyond their control?
einsteinx2
Repeating it doesn’t make it any more true. Cert providers publish their root certs, you pin those root certs, zero problems.
Dealing with enterprise is going to be fun, we work with a lot of car companies around the world. A good chunk of them love to whitelist by thumbprint. That is going to be fun for them.
philsnow
> As a certificate authority, one of the most common questions we hear from customers is whether they’ll be charged more to replace certificates more frequently. The answer is no. Cost is based on an annual subscription […]
(emphasis added)
Pump the brakes there, digicert. Price is based on an annual subscription. CA costs will actually go up an infinitesimal amount, but they’re already nearly zero to begin with. Running a CA has got to be one of the easiest rackets in the world.
jwnin
Costs to buy certs will not materially change. Costs to manage certs will increase.
bityard
I see that there is a timeline for progressive shortening, so if anyone has any "inside baseball" on this, I'm very curious to know:
Given that the overarching rationale here is security, what made them stop at 47 days? If the concern is _actually_ security, allowing a compromised cert to exist for a month and a half is I guess better than 398 days, but why is 47 days "enough"?
When will we see proposals for max cert lifetimes of 1 week? Or 1 day? Or 1 hour? What is the lower limit of the actual lifespan of a cert and why aren't we at that already? What will it take to get there?
Why are we investing time and money in hatching schemes to continually ratchet the lifespan of certs back one more step instead of addressing the root problems, whatever those are?
dadrian
The root problem is certificate lifetimes are too long relative to the speed at which domains change, and the speed at which the PKI needs to change.
peanut-walrus
So the assumption here is that somehow your private key is easier to compromise than whatever secret/mechanism you use to provision certs?
Yeah not sure about that one...
throwaway96751
Off-topic: What is a good learning resource about TLS?
I've read the basics on Cloudflare's blog and MDN. But at my job, I encountered a need to upload a Let's Encrypt public cert to the client's trusted store. Then I had to choose between Let's Encrypt's root and intermediate certs, and between the RSA and ECDSA key types. I made it work, but it would be good to have an idea of what I'm doing. For example, why did the RSA root cert work even though my server uses an ECDSA cert? Before I added the root cert to the trusted store, clients used to add fullchain.pem from the server and that worked too. Why?
ivanr
I have a bunch of useful resources, most of which are free:
Thx. For one API in my company, only the root and intermediate certificates are present in the JKS file; the leaf certificate is not. Would encryption work without a leaf certificate?
In another instance, to connect to a server, only the root certificate is present in the trust store. Does that mean encryption can be performed with just the root certificate?
throwaway96751
> SSL is one of those weird niche subjects that no one learns until they run into a problem
Yep, that's me.
Thanks for the blog post!
physicles
Use ECDSA if you can, since it reduces the size of the handshake on the wire (keys are smaller). Don’t bake in intermediate certs unless you have a very good reason.
No idea why the RSA root worked even though the server used ECDSA; maybe look into the recent cross-signing shenanigans that Let's Encrypt had to pull to extend support for very old Android versions.
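If it helps, generating an ECDSA (P-256) key and a CSR is only a few lines with the `cryptography` package; this sketch uses placeholder names and is just one way to do it:

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.x509.oid import NameOID

    # P-256 keys and signatures are much smaller than 2048/4096-bit RSA.
    key = ec.generate_private_key(ec.SECP256R1())

    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.com")]))
        .add_extension(x509.SubjectAlternativeName([x509.DNSName("example.com")]), critical=False)
        .sign(key, hashes.SHA256())
    )

    with open("example.key", "wb") as f:
        f.write(key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.PKCS8,
            serialization.NoEncryption(),
        ))
    with open("example.csr", "wb") as f:
        f.write(csr.public_bytes(serialization.Encoding.PEM))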
throwaway96751
I've been reading a little since then, and I think it worked with RSA root cert because this cert was a trust anchor of the Chain of Trust of my server's ECDSA certificate.
pizzafeelsright
Curious why you wouldn't have a Q and A with AI?
If the information is relatively unchanged and the details well documented why not ask questions to fill in the gaps?
The Socratic method has been the best learning tool for me and I'm doubling my understanding with the LLMs.
throwaway96751
I think this method works best when you can verify the answer. So it has to be either a specific type of question (a request to generate code, which you can then run and test), or you have to know enough about the subject to be able to spot mistakes.
ori_b
Can someone point me to specific exploits that this key rotation schedule would have stopped?
It seems to me like compromised keys are rare. It also seems like 47 days is low enough to be inconvenient, but not low enough to prevent significant numbers of people from being compromised if there is a compromised key.
Avamander
> Can someone point me to specific exploits that this key rotation schedule would have stopped?
It's not only key mismanagement that is being mitigated. You also have to prove more frequently that you have control of the domain or IP in the certificate.
In essence it brings a working method of revocation to WebPKI.
> but not low enough to prevent significant numbers of people from being compromised if there is a compromised key.
Compared to a year?
ori_b
> You also have to prove more frequently that you have control of the domain or IP in the certificate.
That doesn't particularly matter; if someone takes over the domain but doesn't have a leaked key, they can't sign requests for the domain with my cert. It takes a leaked key for this to turn into a vulnerability.
On the other hand, anyone that owns the domain can get a perfectly valid cert any time, no need to exploit anything. And given that nobody actually looks at the details of the cert owner in practice, that means that if you lose the domain, the new owner is treated as legit. No compromises needed.
The only way to prevent that is to pin the cert, which this short rotation schedule makes harder, or pin the public key and be very careful to not regenerate your keys when you submit a new CSR.
In short: Don't lose your domain.
> Compared to a year?
Typically these kinds of things have an exponential dropoff, so most of the exploited folks would be soon after the compromise. I don't think that shortening to this long a period, rather than (say) 24h would make a material difference.
But, again, I'm also not sure how many people were compromised via anything that this kind of rotation would prevent. It seems like most exploits depend on someone either losing control over the domain (again, don't do that; the current issuance model doesn't handle that), or just being phished via a valid cert on an unrelated domain.
Do you have concrete examples of anyone being exploited via key mismanagement (or not proving often enough that they have control over a domain)?
kbolino
I just downloaded one of DigiCert's CRLs and it was half a megabyte. There are probably thousands of revoked certificates in there. If you're not checking CRLs, and a lot of non-browser clients (think programming languages, software libraries, command-line tools, etc.) aren't, then you would trust one of those certificates if it was presented to you. With certificate lifetimes of 47 days instead of a year, 87% of those revoked certificates become unusable regardless of CRL checking.
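If you want to eyeball this yourself, parsing a CRL takes a couple of lines with the `cryptography` package; the URL below is a placeholder, and in practice you'd pull it from a certificate's CRL distribution points extension:

    import urllib.request
    from cryptography import x509

    # Placeholder URL; real CRL URLs are listed in a cert's CRL distribution points.
    CRL_URL = "http://crl.example-ca.invalid/intermediate.crl"

    data = urllib.request.urlopen(CRL_URL).read()
    crl = x509.load_der_x509_crl(data)
    print(f"{len(data) / 1024:.0f} KiB on the wire, {len(crl)} revoked serial numbers")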
ori_b
Why is leaving almost 15% of bad certificates in play ok? If it was shortened to 48 hours, then it would make 99.5% of them unusable, and I suspect the real world impact would still be approximately zero.
kbolino
It does not have to be perfect to be better. It's not great that 13% of revoked certificates would still be there (and get trusted by CRL-ignoring clients) but significantly smaller CRL files may get us closer to more widespread CRL checking. The shorter lifetime also reduces the window of time that a revoked certificate can be exploited by that same 87%. While I'd wager most certificates that get revoked are revoked for minor administrative mistakes and so are unlikely to be used in attacks, some revocations are still exploitable, and it's nearly impossible to measure the actual occurrence of such things at Internet scale without concerted effort.
This reminds me a bit of trying to get TLS 1.2 support in browsers before the revelation that the older versions (especially SSL3) were in fact being exploited all the time directly and via downgrading. Since practically nobody complained (out of ignorance) and, at the time, browsers didn't collect metrics and phone home with them (it was a simpler time), there was no evidence of a problem. Until there was massive evidence of a problem because some people bothered to look into and report it. Journalism-driven development shouldn't be the primary way to handle computer security.
Avamander
> That doesn't particularly matter; if someone takes over the domain but doesn't have a leaked key, they can't sign requests for the domain with my cert. It takes a leaked key for this to turn into a vulnerability.
It does, if someone gets temporary access, issues a certificate and then keeps using it to impersonate something. Now the malicious actor has to do it much more often, significantly increasing chances of detection.
crote
The 47 days are (mostly) irrelevant when it comes to compromised keys. The certificate will be revoked by the CA at most 24 hours after compromise becomes known, so a shorter cert isn't really "more secure" than a longer one.
At least, that's what the rules say. In practice CAs have a really hard time saying no to a multi-week extension because a too-big-to-fail company running "critical infrastructure" isn't capable of rotating their certs.
Short cert duration forces companies to automate cert renewal, and with automation it becomes trivial to rotate certs in an acceptable time frame.
_bin_
Is there an actual issue with widespread cert theft? That seems like the primary valid reason to do this, not forcing automation.
cryptonym
Let's Encrypt dropped support for OCSP. CRLs don't scale well. Short-lived certificates are probably a way to avoid certificate revocation quirks.
Ajedi32
It's a real shame. OCSP with Must-Staple seemed like the perfect solution to this, it just never got widespread support.
I suppose technically you can get approximately the same thing with 24-hour certificate expiry times. Maybe that's where this is ultimately heading. But there are issues with that design too. For example, it seems a little at odds with the idea of Certificate Transparency logs having a 24-hour merge delay.
NoahZuniga
Also certificate transparency is moving to a new standard (sunlight CT) that has immediate merges. Google requires maximum merge delay to be 1 minute or less, but they've said on google groups that they expect merges to be way faster.
lokar
The log is not really for real time use. It’s to catch CA non-compliance.
dboreham
I think it's more about revocation not working in practice. So the only solution is a short TTL.
trothamel
I suspect it's to limit how long a malicious or compromised CA can impact security.
hedora
Equivalently, it also maximizes the number of sites impacted when a CA is compromised.
It also lowers the amount of time it’d take for a top-down change to compromise all outstanding certificates. (Which would seem paranoid if this wasn’t 2025.)
lokar
Mostly this. Today, if a big CA is caught breaking the rules, actually enforcing repairs (e.g. prompt revocation) is a hard pill to swallow.
rat9988
I think OP is asking whether there have been many real-world cases in practice that pushed for this change.
chromanoid
I guess the main reason behind this move is platform capitalism. It's an easy way to cut off grassroots internet.
gjsman-1000
If that were true, we would not have Let's Encrypt and tools which can give us certificates in 30 seconds flat once we prove ownership.
The real reason was Snowden. The jump in HTTPS adoption after the Snowden leaks was a virtual explosion; and set HTTPS as the standard for all new services. From there, it was just the rollout. (https://www.eff.org/deeplinks/2023/05/10-years-after-snowden...)
(Edit because I'm posting too fast, for the reply):
> How do you enjoy being dependent on a 3rd party (even a well intentioned one) for being on the internet?
Everyone is reliant on a 3rd party for the internet. It's called your ISP. They also take complaints and will shut you down if they don't like what you're doing. If you are using an online VPS, you have a second 3rd party, which also takes complaints, can see everything you do, and will also shut you down if they don't like what you're doing; and they have to, because they have an ISP to keep happy themselves. Networks integrating with 3rd party networks is literally the definition of the internet.
nottorp
How do you enjoy being dependent on a 3rd party (even a well intentioned one) for being on the internet?
Let's Encrypt... Cloudflare... useful services right? Or just another barrier to entry because you need to set up and maintain them?
icedchai
You are always dependent on a 3rd party to some extent: DNS registration, upstream ISP(s), cloud / hosting providers, etc.
I dunno. Self-hosting w/o automation was feasible. Now you have to automate. It will lead to a huge amount of link rot or at least something very similar. There will be solutions but setting up a page e2e gets more and more complicated. In the end you want a service provider who takes care of it. Maybe not the worst thing, but what kind of security issues are we talking about? There is still certificate revocation...
icedchai
Have you tried caddy? Each TLS protected site winds up being literally a couple lines in a config file. Renewals are automatic. Unless you have a network / DNS problem, it is set and forget. It is far simpler than dealing with manual cert renewals, downloading the certificates, restarting your web server (or forgetting to...)
It makes end to end responsibility more cumbersome. There were days people just stored MS Frontpage output on their home server.
icedchai
Many folks switched to Lets Encrypt ages ago. Certificates are way easier to acquire now than they were in "Frontpage' days. I remember paying 100's of dollars and sending a fax for "verification."
I've done the work to set up, by hand, a self-hosted Linux server that uses an auto-renewing Let's Encrypt cert and it was totally fine. Just read some documentation.
jack0813
There are very convenient tools to do https easily these days, e.g. Caddy. You can use it to reverse proxy any http server and it will do the cert stuff for you automatically.
chromanoid
Ofc, but you have to be quite tech-savvy to know this and to set it up. It's also cumbersome in many low-tech situations. There is certificate revocation; I would really like to see the threat model here. I am not even sure if automation helps or just shifts the threat vector to certificate issuing.
jsheard
This change will have a steady roll-out, but if you want to get ahead of the curve then Let's Encrypt will be offering 6 day certs as an option soon.
Don't forget the lede buried here - you'll need to re-validate control over your DNS names more frequently too.
Many enterprises are used to doing this once-per-year today, but by the time 47-day certs roll around, you'll be re-validating all of your domain control every 10 days (more likely every week).
umvi
So does this mean all of our Chromecasts are going to stop working again once this takes effect since (judging by Google's response during the week long Chromecast outage earlier this year) Chromecast is run by a skeleton crew and won't have the resources to automate certificate renewal?
throw0101b
Justification:
> The ballot argues that shorter lifetimes are necessary for many reasons, the most prominent being this: The information in certificates is becoming steadily less trustworthy over time, a problem that can only be mitigated by frequently revalidating the information.
> The ballot also argues that the revocation system using CRLs and OCSP is unreliable. Indeed, browsers often ignore these features. The ballot has a long section on the failings of the certificate revocation system. Shorter lifetimes mitigate the effects of using potentially revoked certificates. In 2023, CA/B Forum took this philosophy to another level by approving short-lived certificates, which expire within 7 days, and which do not require CRL or OCSP support.
Personally I don't really buy this argument. I don't think the web sites that most people visit (especially highly-sensitive ones like for e-mail, financial stuff, a good portion of shopping) change or become "less trustworthy" that quickly.
gruez
The "less trustworthy" refers to key compromise, not the e-shop going rogue and start scamming customers or whatever.
throw0101a
Okay, the key is compromised: that means they can MITM the trust relationship. But with modern algorithms you have forward security, so even if you've sniffed/captured the traffic it doesn't help.
And I would argue that MITMing communications is a lot harder for (non-nation-state) attackers than compromising a host, so trust compromise is a questionable worry.
gruez
>And I would argue that MITMing communications is a lot hard for (non-nation state) attackers than compromising a host, so trust compromise is a questionable worry.
By that logic, we don't really need certificates, just TOFU.
throw0101d
> By that logic, we don't really need certificates, just TOFU.
It works fairly well for SSH, but that tends to be a more technical audience. And "Always trust" or "Always accept" are valid options in many cases (often for internal apps).
Nobody forces you to change your key for renewals.
avodonosov
First impression: with automation and short-lived certificates, the Certificate Authorities become similar to Identity Providers / OpenID Providers in openid / openid-connect. The certificates are tokens.
And a significant part of the security is concentrated in the way Certificate Authorities validate domain ownership (the so-called challenges).
Next, maybe clients could run those challenges directly, instead of relying on certificates? For example, when connecting to a server, the client sends two unique values, and the server must create a DNS record <unique-val-1>.server.com with the record value <unique-val-2>. The client checks that such a record was created, and thus the server has proven it controls the domain name.
Auth through DNS, that's what it is. We would just need to speed up the DNS system.
NicolaiS
This will not work as any attacker that can MITM the client (likely scenario for end-users), can also MITM this "certificate issuing" setup and issue their own cert.
The reason an attacker can't MITM Let's Encrypt (or similar ACME issuers) is because they request the challenge-response from multiple locations, making sure a simple MITM against them doesn't work.
A fully DNS-based "certificate setup" already exists: DANE, but that requires DNSSEC, which isn't widely used.
avodonosov
You are right that the scheme I described is vulnerable. Even without MITM: fakeserver.com, upon receiving a request from the client, sends an equal request to server.com, which creates the needed DNS record, and thus the real client is "convinced" that fakeserver.com controls the DNS.
That does not work because DNS is insecure. DNSSEC is not there and may never be.
ryandv
But this is already basically how Let's Encrypt challenges certificate applicants over ACME DNS01 [0].
I would be more concerned about the number of certificates that would need to be issued and maintained over their lifecycle - which now scales with the number of unique clients challenging your server (or maybe I misunderstand, and maybe there aren't even certificates any more in this scheme).
Not to mention the difficulties of assuring reasonable DNS response times and fresh, up-to-date results when querying a global eventually consistent database with multiple levels of caching...
In the scheme I described, where the client directly runs the challenges, certificates are not issued at all.
I am not saying this scheme is really practical currently.
That's just an imaginary situation coming to mind, illustrating the increased importance of the domain ownership validation procedures used by Certificate Authorities. Essentially, the security now comes down to domain ownership validation.
Also, a correction: the server does not simply put <unique-val-2>, it puts sha256(<unique-val-2> || '.' || <fingerprint of the public key of the account>).
Yes, the ACME protocol uses account keys. The private key signs requests for new certs, and the public key fingerprint during domain ownership validation confirms that the challenge response was intended for that specific account.
I am not suggesting ACME can be trivially broken.
I just realized that risks of TLS certs breaking is not just risk of public key crypto being broken, but also includes the risks of domain ownership validation protocols.
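For reference, the hashing described above is essentially what ACME's DNS-01 challenge already does (RFC 8555): the TXT record value is the base64url-encoded SHA-256 of token + "." + account key thumbprint. A minimal sketch with placeholder values:

    import base64
    import hashlib

    def b64url(data: bytes) -> str:
        # ACME uses unpadded base64url throughout.
        return base64.urlsafe_b64encode(data).decode().rstrip("=")

    # Placeholders: the CA-issued token and the account key's JWK thumbprint.
    token = "evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA"
    thumbprint = b64url(hashlib.sha256(b"fake-jwk-for-illustration").digest())

    key_authorization = f"{token}.{thumbprint}"
    txt_value = b64url(hashlib.sha256(key_authorization.encode()).digest())
    print(f'_acme-challenge.example.com. 300 IN TXT "{txt_value}"')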
detaro
And would be replacing the CA PKI with an even more centralized PKI.
xyst
I don’t see any issue here. I already automate with ACME so rotating certificates on an earlier basis is okay. This should be like breathing for app and service developers and infrastructure teams.
Side note: I wonder how much pressure this puts on providers such as LetsEncrypt, especially with the move to validate IPs. And more specifically IPv6…
ShakataGaNai
Because there are lots of companies, large and small, which haven't gotten that far. Lots of legacy sites/services/applications.
I don't disagree with you that it should be super common. But it's surprisingly not in many businesses. Heck, Okta (nominally a large security company) still sends out notifications every time they change certificates and publishes a copy of their current correct certs in github: https://github.com/okta/okta-pki - How they do the actual rotation? No idea, but... I'd guess it's not automatic with that level of manual notification/involvement. (Happy to be proven wrong though).
mystraline
I'm sure this will be buried, but SSL is supposed to provide encryption. That's it.
Self-signed custom certs also do that. But those are demonized.
SSL also tries to define an IP/DNS certification of ownership, kind of.
There's also a distinct difference between 'this cert expired last week', 'this cert doesn't exist', and a MITM attack. Expired? Just give a warning, not a scare screen. MITM? Sure, give a big scary OHNOPE screen.
But, yeah, 47 days is going to wreak havoc on networks and weird devices.
kbolino
If there was no IP/DNS ownership verification, how would you even know you had been MITMed? You think the attacker is going to set a flag to let you know the certificate is fake?
The only real alternative to checking who signed a certificate is checking the certificate's fingerprint hash instead. With self-signed certificates, this is the only option. However, nobody does this. When presented with an unknown certificate, people will just blindly trust it. So self-signed certificates at scale are very susceptible to MITM. And again, you're not going to know it happened.
Encryption without authentication prevents passive snooping but not active and/or targeted attacks. And the target may not be you; it may be the other side. You specifically might not be worth someone's time, but your bank, and all of its other customers too, probably are.
OCSP failed. CRLs are not always being checked. Shorter expiry largely makes up for the lack of proper revocation. But expiration must consequently be treated as no less severe than revocation.
mystraline
True, I get where you're coming from, but I think there's more problems even in those implied answers.
Homoglyph attacks are a thing. And I can pay $10 for a homoglyph name. No issues. I can get a webserver on a VM and point DNS at it. From there I can get a Let's Encrypt cert. Use Nginx to point towards the real domain. Install a mailserver and send out spoofed mails. You can even set up SPF/DKIM/DMARC and have a complete attested chain.
And its all based on a fake DNS, using nothing more than things like a Cyrillic 'o'.
And, the self-signed issue is also what we see with SSH. And it mostly just works too.
kbolino
The security situation with SSH is actually kind of dismal. You're right that standard SSH server configurations are generally equivalent to self-signed certificates but the trust model often used there is known as TOFU ("trust on first use") and is regarded by people who practice computer security as fundamentally broken. It persists only because the problem is hard to solve and it is still better than nothing and SSH gets targeted for MITM a lot less than HTTPS (SSH is targeted much more by drive-by attacks looking for weak passwords).
TLS with Web PKI is a significantly more secure system when dealing with real people, and centralized PKI systems in general are far more scalable (but hardly perfect!) compared to decentralized trust systems, with common SSH practices near the extreme end of decentralized. Honestly, the general state of SSH security can only be described as "working" due to a lack of effort from attackers more than the hygienic practices of sysadmins.
Homoglyph attacks are a real problem for HTTPS. Unfortunately, the solutions to that class of problem have not succeeded. Extended Validation certificates ended up a debacle; SRP and PAKE schemes haven't taken off (yet?); and it's still murky whether passkeys are here to stay. And a lot of those solutions still boil down to TOFU since, essentially, they require you to have visited the site at least once before. Plus, there remain fallback options that are easier to phish against. Maybe these problems would have been more solvable if DNSSEC succeeded, but that didn't happen either.
tptacek
It's hard to think of a real-world problem that PAKEs solve for HTTPS.
I know this thread is over, but it's important to understand that it's the browsers who have all the power in CABF and they were the drivers behind this change. Apple proposed it and Google voted yes within minutes of the voting period opening. It was unanimous among CAs too (with 5 abstentions), and nobody really disagrees with it, but it was the browsers who started the initiative now.
The article links to the vote thread: https://groups.google.com/a/groups.cabforum.org/g/servercert...
And here's the CABF discussion before the vote: https://github.com/cabforum/servercert/pull/553/commits/69ce...
nickf
It wasn't just the browsers. Some CAs supported this for a long time, and even directly endorsed the ballot.
You're not wrong about the browsers having 'the power', but then again - they are the representatives of billions of relying parties, so it's expected.
trothamel
Question: Does anyone have a good solution for renewing Let's Encrypt certificates for websites hosted on multiple servers? Right now, I have one master server that the others forward the well-known requests to, and then I copy the certificate over when I'm done, but I'm wondering if there's a better way.
nullwarp
I use DNS verification for this then the server doesn't even need to be exposed to the internet.
magicalhippo
And if changing the DNS entry is problematic, for example the DNS provider used doesn't have an API, you can redirect the challenge to another (sub)domain which can be hosted by a provider that has an API.
I've done this and it works very well. I had a Digital Ocean droplet so used their DNS service for the challenge domain.
I use dns01 in my homelab with step-ca. works like a charm, and it's my private certificate authority
hangonhn
We just use certbot on each server. Are you worried about the rate limit? LE rate limits based on the list of domains. So we send the request for the shared domain and the domain for each server instance. That makes each renew request unique per server for the purpose of the rate limit.
noinsight
Orchestrate the renewal with Ansible - renew on the "master" server remotely but pull the new key material to your orchestrator and then push them to your server fleet. That's what I do. It's not "clean" or "ideal" to my tastes, but it works.
It also occurred to me that there's nothing(?) preventing you from concurrently having n valid certificates for a particular hostname, so you could just enroll distinct certificates for each host. Provided the validation could be handled somehow.
The other option would maybe be doing DNS-based validation from a single orchestrator and then pushing that result onto the entire fleet.
pornel
I copy the same certbot account settings and private key to all servers and they obtain the certs themselves.
It is a bit funny that LetsEncrypt has non-expiring private keys for their accounts.
bayindirh
There's a tool called "lsyncd" which watches for a file and syncs the changed file to other servers "within seconds".
I use this to sync users between small, experimental cluster nodes.
Have you tried certbot? Or if you want a turnkey solution, you may try Caddy or Traefik that have their own automated certificate generation utility.
throw0101b
getssl was written with a bit of a focus on this:
> Get certificates for remote servers - The tokens used to provide validation of domain ownership, and the certificates themselves can be automatically copied to remote servers (via ssh, sftp or ftp for tokens). The script doesn't need to run on the server itself. This can be useful if you don't have access to run such scripts on the server itself, e.g. if it's a shared server.
Your 90-day snapshot backups will soon become 47-day backups. Take care!
gruez
???
Do people really backup their https certificates? Can't you generate a new one after restoring from backup?
belter
This is going to be one of the obvious traps.
DiggyJohnson
To care about stale certs on snapshots or the opposite?
belter
Both. One breaks your restore, the other breaks your trust chain.
compumike
(Shameless self-promotion) We set up our https://heiioncall.com/ monitoring to give our on-call rotation a non-critical “it can wait until Monday” alert when there are 14 days or less left on our SSL certificates, and a critical alert “do-not-disturb be damned” when 48 hours left until expiry. Because cert-manager got into some weird state once a few years ago, and I’d rather find out well in advance next time.
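If anyone wants to roll their own version of that check, here's a minimal sketch (hostname and thresholds are placeholders) that measures how many days are left on a live endpoint's certificate:

    import socket
    import ssl
    import time

    def days_until_expiry(host: str, port: int = 443) -> float:
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        expires = ssl.cert_time_to_seconds(cert["notAfter"])
        return (expires - time.time()) / 86400

    for host in ("example.com",):  # replace with your own hostnames
        days = days_until_expiry(host)
        level = "CRITICAL" if days <= 2 else "warn" if days <= 14 else "ok"
        print(f"{host}: {days:.1f} days left [{level}]")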
If this is causing you pain, certbot with Acme DNS challenge is pretty easy to set up to get you certs for your internal services. There are tools for many different dns providers like route53 or cloudflare.
I tend to have secondary scripts that check if the cert in certbot's dir is newer than whatever is installed for a service, and if so install it. Some services prefer the cert in certain formats, some services need to be reloaded to pick up a new cert, etc., so I put that glue in my own script and run it from cron or a systemd timer.
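A minimal sketch of that sort of glue, with the paths and service name as placeholders; it compares file contents rather than mtimes and reloads the service only when the installed cert actually changed:

    import filecmp
    import shutil
    import subprocess
    from pathlib import Path

    # Placeholder paths and service; adjust per service and preferred cert format.
    LIVE = Path("/etc/letsencrypt/live/example.com/fullchain.pem")
    INSTALLED = Path("/etc/myservice/tls/fullchain.pem")
    SERVICE = "myservice"

    def install_if_changed() -> None:
        if INSTALLED.exists() and filecmp.cmp(LIVE, INSTALLED, shallow=False):
            return  # nothing new to install
        shutil.copy2(LIVE, INSTALLED)
        # Reload rather than restart so existing connections aren't dropped.
        subprocess.run(["systemctl", "reload", SERVICE], check=True)

    if __name__ == "__main__":
        install_if_changed()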
AtNightWeCode
Yeah, but the problem as I see it is not renewing the certs. Some systems become unstable or need to reboot during installation of new certificates. I worked on systems where it takes hours to install and use new certificates.
merb
The problem is more or less devices that do not support dns challenges or only support letsencrypt and not the acme protocol (to chain acme servers, etc)
cpach
What kind of devices are you thinking of? Like switches and other network gear?
JackSlateur
I've deployed LE on IPMI (Dell, Supermicro), so that's not a good excuse! As long as you have a way to "script" something on your devices (via ssh, API or whatever), you are good to go.
merb
If you need to script things it will be a pain for a lot of people.
merb
FortiGate 50g (higher version than 7.0 probably fixes that, but no idea when that will be released), some synology nas and there are tons of other boxes like that.
raggi
It sure would be nice if we could actually fix dns.
kevincox
To me I don't really care about the certificate lifetime, what I care about is the time between being allowed to renew and time until expiry.
Right now Let's Encrypt recommends renewing your 90d certificates every 60 days, which means that there is a 30 day window between recommended to renew and expiry. This feels relatively comfortable to me. A long vacation may be longer than 30 days but it is rare and there is probably other maintenance that you should be doing in this time (although likely routine like security updates rather than exceptional like figuring out why your certificate isn't renewing).
So if 47 days ends up meaning renew every 17 days and still have that 30-day buffer, I would be quite happy. But what I fear is that they will recommend (and set rate limits based on) renewing every 30 days with a 17-day buffer, which is getting a little short for comfort IMHO. While big organizations will have a 24h on-call and medium organizations will have many business hours to figure it out, it sucks for individuals who want to go away for a few weeks without worrying about debugging their certificate renewal until they get home.
dsr_
Daniel K Moran figured out the endgame:
"Therefore, the Lunar Bureau of the United Nations Peace Keeping Force DataWatch has created the LINK, the Lunar Information Network Key. There are currently nine thousand, four hundred and two Boards on Luna; new Boards must be licensed before they can rent lasercable access. Every transaction--every single transaction--which takes place in the Lunar InfoNet is keyed and tracked on an item-by-item basis. The basis of this unprecedented degree of InfoNet security is the Lunar Information Network Key. The Key is an unbreakable encryption device which the DataWatch employs to validate and track every user in the Lunar InfoNet. Webdancers attempting unauthorized access, to logic, to data, to communications facilities, will be punished to the full extent of the law."
from The Long Run (1989)
Your browser won't access a site without TLS; this is for your own protection. TLS certificates are valid for one TCP session. All certs are issued by an organization reporting directly to a national information security office; if your website isn't in compliance with all mandates, you stop getting certs.
rhodey
After EFF Lets Encrypt made the change to disable reminder emails I decided I would be moving my personal blog from my VPS to AWS specifically. I just today made the time to make the move and 10 minutes after I find this.
I could have probably done more with Lets Encrypt automation to stay with my old VPS but given that all my professional work is with AWS its really less mental work to drop my old VPS.
Times they are a changing
mystified5016
Why not just automate your LetsEncrypt keys like literally everyone else does? It's free and you have to go out of your way to do it manually.
Or just pay Amazon, I guess. Easier than thinking.
jonathantf2
A welcome change if it gives some vendors a kick up the behind to implement ACME.
ShakataGaNai
There is no more choice. No one is going to buy from (example) GoDaddy if they have to login every 30 days to manually get a new certificate. Not when they can go to (example) digicert and it's all automatic.
It goes from a "rather nice to have" to "effectively mandatory".
jonathantf2
I think GoDaddy supports ACME - but if you're using ACME you might as well use Let's Encrypt and stop paying for it.
schlauerfox
Depends on the kind of certificate you need.
junaru
> For this reason, and because even the 2027 changes to 100-day certificates will make manual procedures untenable, we expect rapid adoption of automation long before the 2029 changes.
Oh yes, vendors will update their legacy NAS/IPMI/whatever to include certbot. This change will have the exact opposite effect - expired self signed certificates everywhere on the most critical infrastructure.
xnyanta
I have automated IPMI certificate rotation set-up through Let's Encrypt and ACME via the Redfish API. And this is on 15 year old gear running HP iLO4. There's no excuse for not automating things.
panki27
People will just roll out almost forever-lasting certificates through their internal CA for all systems that are not publicly reachable.
throw0101d
> through their internal CA
Nope. People will create self-signed certs and tell people to just click "accept".
Avamander
They're doing it right now and they'll continue doing so. There are always scapegoats for not automating.
zephius
Old SysAdmin and InfoSec Admin perspective:
Dev guys think everything is solvable via code, but hardware guys know this isn't true. Hardware is stuck in fixed lifecycles, and firmware is not updated by vendors unless it has to be, and in many cases it's updated poorly. No hardware I've ever come across that supports SSL/TLS (and most do nowadays) offers any automation capability for updating certs. In most cases, certs are manually, and painfully, updated with esoteric CLI cantrips that require dancing while chanting to some ancient I.T. god for mercy, because the process is poorly (if at all) documented and often broken. No API call or middleware is going to solve that problem unless the manufacturer puts it in. In particular, load balancers are some of the worst at cert management, and remember that not everyone uses F5; there are tons of other cheaper and popular alternatives, most of which are atrocious at security configuration management. It's already painful enough to manage certs in an enterprise, and this 47-day lifecycle is going to break things. Hardware vendors are simply incompetent and slow to adapt to security changes. And not everyone is 100% in the cloud; most enterprises are only partially in that pool.
tptacek
I think everybody involved knows about the likelihood that things are going to break at enterprise shops with super-expensive commercial middleboxes. They just don't care anymore. We ran a PKI that cared deeply about the concerns of admins for a decade and a half, and it was a fiasco. The coders have taken over, and things are better.
zephius
That's great for shops with Dev teams and in house developed platforms. Those shops are rare outside Silicon Valley and fortune 500s and not likely to increase beyond that. For the rest of us, we are at the mercy of off the shelf products and 3rd party platforms.
tptacek
I suggest you buy products from vendors who care about the modern WebPKI. I don't think the browser root programs are going to back down on this stuff.
nickf
This. Also, re-evaluate how many places you actually need public trust that the webPKI offers. So many times it isn't needed, and you make problems for yourself by assuming it does.
I have horror stories I can't fully disclose, but if you have closed networks of millions of devices where you control both the server side and the client side, relying on the same certificate I might use on my blog is not a sane idea.
whs
Agree. My company was cloud-first, and when we built the new HQ, buying Cisco gear and VMware (as they're the only stack several implementers are offering) felt like sending the company 15 years backwards.
zephius
I agree, and we try, however that is not a currently widely supported feature in the boring industry specific business software/hardware space. Maybe now it will be, so time will tell.
“Hardware vendors are simply incompetent and slow to adapt to security changes.”
Perhaps the new requirements will give them additional incentives.
zephius
Yeah, just like TLS 1.2 support. Don't even get me started on how that fiasco is still going.
yjftsjthsd-h
Sounds like everything is solvable via code, and the hardware vendors just suck at it.
zephius
In a nutshell, yes. From a security perspective, look at Fortinet as an egregious example of just how bad. Palo Alto also has some serious internal issues.
dijit
Not really; a lot of those middleware boxes are doing some form of ASIC offloading for TLS, and the PROM that loads the cert(s) is not rated for heavy writes… thus writing is slow, blocking, and will wear your hardware out.
The larger issue is actually our desire to deprecate cipher suites so rapidly, though: those 2-3 year old ASICs that are functioning well become e-waste pretty quickly when even my blog gets a Qualys "D" rating after having an "A+" rating barely a year ago.
How much time are we spending on this? The NSA is literally already in the walls.
Havoc
At the same time I don’t think it’s reasonable to make global cert decisions like this based on what some crappy manufacturer failed to implement in their firmware. The issue there is clearly the crap hardware (though the sysadmins that have to deal with it have my condolences)
borgster
The ultimate internet "kill switch": revoke all certificates. Modern browsers refuse to render the page.
webprofusion
Pretty sure this only refers to publicly trusted certs. What percentage of public certs are still being manually managed?
I've been in the cert automation industry for 8 years (https://certifytheweb.com) and I do still hear of manual work going on, but the majority of stuff can be automated.
For stuff that genuinely cannot be automated (are you sure you're sure) these become monthly maintenance tasks, something cert management tools are also now starting to help with.
We're planning to add tracking tasks for manual deployments to Certify Management Hub shortly (https://docs.certifytheweb.com/docs/hub/), for those few remaining items that need manual intervention.
arisudesu
Having short-lived certificates is good; replacing them too often is not. This is trivial for single-host deployments which just run certbot or another ACME client for each subdomain. But for sophisticated setups where a certificate for a domain (or multiple domains, or a wildcard) must be shared across a fleet of hosts, it is a burden.
There are no ready-made tools available to automate such deployments, especially if the certificate must be identical on every host, fingerprint included. Having a single, authoritative certificate for a domain and its wildcard subdomains deployed everywhere is much simpler to monitor, and it does not expose internal subdomains in certificate transparency logs.
Unfortunately, the organizations (and persons) involved in these decisions do not provide such tools in advance.
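Absent a ready-made tool, the usual do-it-yourself answer is a deploy hook on the renewal host that fans the new files out and reloads services. A bare-bones sketch, assuming SSH key access and hypothetical host names (not a recommendation of any particular tooling):

    #!/usr/bin/env python3
    # Sketch of a deploy hook: copy one freshly issued certificate to a fleet
    # of hosts and reload the web server on each, so every host serves the
    # same cert (fingerprint included). Hosts, paths, and the reload command
    # are assumptions for illustration only.
    import subprocess

    HOSTS = ["web01.internal", "web02.internal", "lb01.internal"]   # hypothetical
    LIVE = "/etc/letsencrypt/live/example.com"                      # certbot's default layout
    REMOTE_DIR = "/etc/ssl/example.com"

    def run(*cmd: str) -> None:
        subprocess.run(cmd, check=True)

    for host in HOSTS:
        for name in ("fullchain.pem", "privkey.pem"):
            run("scp", f"{LIVE}/{name}", f"root@{host}:{REMOTE_DIR}/{name}")
        # Reload rather than restart so existing connections survive.
        run("ssh", f"root@{host}", "systemctl reload nginx")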
lo0dot0
I agree. There should be a process in place for checking whether changes are ready to be rolled out, and one of the checks should be a working, open-source prototype implementation showing that systems can still be managed.
elric
This sounds an awful lot like security theatre.
> The information in certificates is becoming steadily less trustworthy over time, a problem that can only be mitigated by frequently revalidating the information.
This is patently nonsensical. There is hardly any information in a certificate that matters in practice, except for the subject, the issuer, and the expiration date.
> Shorter lifetimes mitigate the effects of using potentially revoked certificates.
Sure, and if you're worried about your certificates being stolen and not being correctly revoked, then by all means, use a shorter lifetime.
But forcing shorter lifetimes on everyone won't end up being beneficial, and IMO will create a lot of pointless busywork at greater expense. Many issuers still don't support ACME.
readthenotes1
I wonder how many forums run by the barely able are going to disappear or start charging.
I fairly regularly run into expired-cert problems because the admin is treating it as yak shaving for a secondary hobby.
ezfe
Why would they start charging? Auto-renewing certificates with Let's Encrypt are easy to do.
dijit
As long as you only have a single server, or a DNS server that has an API.
Even certbot got deprecated, so my IRC network has to use some janky shell scripts to rotate TLS… I'm considering going back to traditional certs because I geo-balance the DNS, which doesn't work for Let's Encrypt.
The issue is actually that I have multiple domains handled multiple ways and they all need to be letsencrypt capable for it to work and generate a combined cert with SAN’s attached.
iJohnDoe
Getting a bit ridiculous.
dboreham
Looks like a case where there are tradeoffs to be made, but the people with authority over the decision have no incentive to consider one side of the trade.
bayindirh
Why?
nottorp
The logical endgame is 30 second certificates...
krunck
Or maybe the endgame could be: creation of a centralized service that all web servers are required to be registered with and connected to at all times in order to receive their (frequently rotated) encryption keys. Controllers of said service then have kill switch control of any web service by simply withholding keys.
nottorp
Exactly. And all in the name of security! Think of the children!
bayindirh
For extremely sensitive systems, I think a more logical endgame is 30 minutes or so. 30 seconds is practically continuous generation.
A semi-distributed (intercity) Kubernetes cluster can reasonably change its certificate chain every week, but it needs an HSM if it's done internally.
Otherwise, for a website, once or twice a year makes sense if you don't store anything snatch-worthy.
nottorp
> once or twice a year makes sense
You don't say. Why are the defaults already 90 days or less then?
Why do you think so? Keep in mind that revoked certs are not included in CRLs once expired (because they are not valid any more).
saltcured
I was thinking about this with my morning coffee.. the asymptotic end game would be that every TLS connection requires an online handshake with Connection Authorities to validate the server identity synchronously, right?
But on a more serious note, can someone more familiar with these standards and groups explain the scope of TLS certificate they mean for these lifetime limits?
I assume this is only server certs and not trust root and intermediate signing certs that would get such short lifetimes? It would be a mind boggling nightmare if they start requiring trust roots to be distributed and swapped out every few weeks to keep software functioning.
To my gen X internet pioneer eyes, all of these ideas seem like easily perverted steps towards some dystopian "everything is a subscription" access model...
woodruffw
> the asymptotic end game would be that every TLS connection requires an online handshake with Connection Authorities to validate the server identity synchronously, right?
The article notes this explicitly: the goal here is to reduce the number of online CA connections needed. Reducing certificate lifetimes is done explicitly with the goal of reducing the Web PKI's dependence on OCSP for revocation, which currently has the online behavior you're worried about here.
(There's no asymptotic benefit to extremely short-lived certificates: they'd be much harder to audit, and would be much harder to write scalable transparency schemes for. Something around a week is probably the sweet spot.)
"When they voiced objection, Captain Black replied that people who cared about security would not mind performing all the security theatre they had to. To anyone who questioned the effectiveness of the security theatre, he replied that people who really did owe allegiance to their employer would be proud to take performative actions as often as he forced them to. The more security theatre a person performed, the more secure he was; to Captain Black it was as simple as that."
CommanderData
Why bother with such a long staggered approach?
There should be one change, from 365 to 47 days. This industry doesn't need constant changes, which will force everyone to automate renewals anyway.
datadrivenangel
Enterprises are like lobsters: You gotta crank the water temperature up slowly.
jezek2
Thanks for the heads up. I will adjust my cron jobs to run every week instead of every month.
I need it more frequently to get more time in case there is an error, as I tend to ignore the error e-mails for multiple weeks due to my fatigue from handling various kinds of certificates.
Personally I also have an HTTP mirror for my more important projects when availability is more important than security of the connection.
rfmoz
This could be addressed in a more progressive way.
For example, EV certs had the green bar, which was a soft way to promote their presence/use over normal ones. That bar started out as strong evidence in the URL box and lost that prominence over time.
Something like that lets the owner decide and, maybe, lets users push for their use because it feels more secure, rather than the CA mandating it directly.
AlfeG
We'll see how Azure FD handles this. We've opened more tickets than expected with support about certs not updating automatically...
zelon88
This naively (or maliciously perhaps) maintains that the "purpose" of the certificate is to identify an entity. While identity and safeguarding against MITM is important, identity is not the primary purpose certificates serve in the real world. At least that is not how they are used or why they are purchased.
They are purchased to provide encryption. Nobody checks the details of a cert and even if they did they wouldn't know what to look for in a counterfeit anyway.
This is just another gatekeeping measure to make standing up, administering, and operating private infrastructure difficult. "Just use Google / AWS / Azure instead."
pornel
Browsers check the identity of the certificates every time. The host name is the identity.
There are lots of issues with trust and social and business identities in general, but for the purpose of encryption, the problem can be simplified to checking of the host name (it's effectively an out of band async check that the destination you're talking to is the same destination that independent checks saw, so you know your connection hasn't been intercepted).
You can't have effective TLS encryption without verifying some identity, because you're encrypting data with a key that you negotiate with the recipient on the other end of the connection. If someone inserts themselves into the connection during key exchange, they will get the decryption key (key exchange is cleverly done that a passive eavesdropper can't get the key, but it can't protect against an active eavesdropper — other than by verifying the active participant is "trusted" in a cryptographic sense, not in a social sense).
chowells
I think it's absolutely critical when I'm sending a password to a site that it's actually the site it claims to be. That's identity. It matters a lot.
zelon88
Not to users. The user who types Wal-Mart into their address bar expects to communicate with Wal-Mart. They aren't going to check if the certificate matches. Only that the icon is green.
This is where the disconnect comes in. Me and you know that the green icon doesn't prove identity. It proves certificate validity. But that's not what this is "sold as" by the browser or the security community as a whole. I can buy the domain Wаl-Mart right now and put a certificate on it that says Wаl-Mаrt and create the conditions for that little green icon to appear. Notice that I used U+0430 instead of the letter "a" that you're used to.
And guess what... The identity would match and pass every single test you throw at it. I would get a little green icon in the browser and my certificate would be good. This attack fools even the brightest security professionals.
So you see, identity isn't the value that people expect from a certificate. It's the encryption.
Users will allow a fake cert with a green checkmark all day. But a valid certificate with a yellow warning is going to make people stop and think.
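For what it's worth, the lookalike is trivial to detect in code even when the eye can't: the Cyrillic letter shows up as soon as you inspect code points or the punycode form that actually goes on the wire. A toy illustration:

    # Toy check: a name using Cyrillic U+0430 looks like "walmart" but is a
    # different string, and its punycode form is visibly not plain ASCII.
    ascii_name = "walmart"
    spoofed = "w\u0430lmart"   # U+0430 CYRILLIC SMALL LETTER A

    print(ascii_name == spoofed)              # False
    print([hex(ord(c)) for c in spoofed])     # the 0x430 code point stands out
    print(spoofed.encode("punycode"))         # e.g. b'wlmart-...' rather than b'walmart'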
chowells
Well, no. That's just not true.
I care that when I type walmart.com, I'm actually talking to walmart.com. I don't look at the browser bar or symbols on it. I care what my bookmarks do, what URLs I grab from history do, what my open tabs do, and what happens when I type things in.
Preventing local DNS servers from fucking with users is critical, as local DNS is the weakest link in a typical setup. They're often run by parties that must be treated as hostile - basically whenever you're on public wifi. Or hell, when I'm using my own ISP's default configuration. I don't trust Comcast not to MitM my connection, given the opportunity. I trust technical controls to make their desire to do so irrelevant.
Without the identity component, any DNS server provided by DHCP could be setting up a MitM attack against absolutely everything. With the identity component, they're restricted to DoS. That's a lot easier to detect, and gets a lot of very loud complaints.
BrandoElFollito
You use words that are alien to everyone. Well, there is a small uncertainty in "everyone", and it is there that the people who actually understand DHCP, DoS, etc. live. This is a very, very small place.
So no, nobody will ever look at a certificate.
When I look at them, as a security professional, I usually need to rediscover where the fuck they moved the certs details again in the browser.
Right, so misrepresenting your identity with similar-looking URLs is a real problem with PKI. That doesn't change the fact that certificates are ultimately about asserting your identity; it's just a flaw in the system.
aseipp
Web browsers have had defenses against homograph attacks for years now, my man, dating back to 2017. I'm somewhat doubtful you're on top of this subject as much as you seem to be suggesting.
racingmars
> This naively (or maliciously perhaps) maintains that the "purpose" of the certificate is to identify an entity. [...] identity is not the primary purpose certificates serve in the real world.
Identity is the only purpose that certificates serve. SSL/TLS wouldn't have needed certificates at all if the goal was purely encryption: key exchange algorithms work just fine without either side needing keys (e.g. the key related to the certificate) ahead of time.
But encryption without authentication is a Very Bad Idea, so SSL was wisely implemented from the start to require authentication of the server, hence why it was designed around using X.509 certificates. The certificates are only there to provide server authentication.
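To make that concrete, here's a tiny sketch using the pyca/cryptography package: two parties derive a shared secret from nothing but freshly generated keys. Nothing in it says who the peer is, which is why an active man-in-the-middle can run the same exchange with each side unless one side's public key is authenticated - the job the certificate actually does.

    # Unauthenticated key agreement: a shared secret with no pre-existing keys.
    # Encryption works fine; proving *who* you agreed with is the missing part.
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    alice = X25519PrivateKey.generate()
    bob = X25519PrivateKey.generate()

    # Each side only ever sends its public key over the wire.
    alice_secret = alice.exchange(bob.public_key())
    bob_secret = bob.exchange(alice.public_key())

    assert alice_secret == bob_secret   # same key material, zero authentication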
gruez
>This naively (or maliciously perhaps) maintains that the "purpose" of the certificate is to identify an entity. While identity and safeguarding against MITM is important, identity is not the primary purpose certificates serve in the real world. At least that is not how they are used or why they are purchased.
"example.com" is an identity just like "Stripe, Inc"[1]. Just because it doesn't have a drivers license or article of incorporation, doesn't mean it's not an identity.
>This is just another gatekeeping measure to make standing up, administering, and operating private infrastructure difficult. "Just use Google / AWS / Azure instead."
Certbot is trivial to set up yourself, and deploying it in production isn't so hard that you need to be "Google / AWS / Azure" to do it. There's plenty of IaaS/PaaS services that have letsencrypt, that are orders of magnitude smaller than those hyperscalers.
Havoc
Continually surprised by how emotional people get about cert lifetimes.
I get that there are some fringe cases where it’s not possible but for the rest - automate and forget.
lucb1e
If emotional reactions to a (likely intentional) annoyance factor surprises you, remember that people start wars for differing sets of beliefs!
All I care about as a certbot user is what do I need to do.
Do I need to update certbot in all my servers? Or would they continue to work without the need to update?
iqandjoke
The poor vendor folks need to come on site more often to fix cert issues.
aaomidi
Good.
If you can't make this happen, don't use WebPKI and use internal PKI.
PixelPaul
Petition to remove voting rights from CA/Browser Forum members who have a commercial interest:
https://chng.it/WcR6t2WQd2
wnevets
Has anyone managed to calculate the increase in power usage across the entire internet this change will cause? Well, I suppose the environment can take one more for the team.
margalabargala
The single use of AI to generate that video of Trump licking Elon Musk's feet, used significantly more power than this change will cause to be used over the next decade.
It's great to be environmentally conscious, but if reducing carbon emissions is your goal, complaining about this is a lot like saying that people shouldn't run marathons, because physical activity causes humans to exhale more CO2.
wnevets
> The single use of AI to generate that video of Trump licking Elon Musk's feet, used significantly more power than this change will cause to be used over the next decade.
We are effectively talking about the entire world wide web generating multiple highly secure cryptographic key pairs every 47 days. That is a lot of CPU cycles.
Also you not picking up on the Futurama quote is disappointing.
margalabargala
> We are effectively talking about the entire world wide web generating multiple highly secure cryptograph key pairs every 47 days. That is a lot of CPU cycles.
We aren't cracking highly secure key pairs. We're making them.
On my computer, to create a new 4096-bit key takes about a second, in a single thread. For something I now have to do fewer than 8 times per year. On a 16-core CPU with a TDP of 65 watts, we can estimate that this took 0.0011 watt-hours.
Yes, there are a lot of websites, close to a billion of them. No, this still is not some onerous use of electricity. For the whole world, this is an additional usage of a bit over 9000 kWh annually. Toss up a few solar panels and you've offset the whole planet.
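The arithmetic behind those figures, for anyone who wants to poke at it (same assumptions as above: roughly one CPU-second per key on one core of a 16-core, 65 W part, about a billion sites, and at most 8 renewals a year):

    # Back-of-envelope check of the energy estimate above.
    tdp_watts = 65
    cores = 16
    seconds_per_key = 1.0
    renewals_per_year = 8            # fewer than 8 under a 47-day lifetime
    sites = 1_000_000_000            # "close to a billion" websites

    wh_per_key = tdp_watts / cores * seconds_per_key / 3600
    kwh_per_year = wh_per_key * renewals_per_year * sites / 1000
    print(f"{wh_per_key:.4f} Wh per key")           # ~0.0011 Wh
    print(f"{kwh_per_year:,.0f} kWh per year")      # a bit over 9,000 kWh worldwide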
wnevets
> On my computer, to create a new 4096-bit key takes about a second, in a single thread. For something I now have to do fewer than 8 times per year. On a 16-core CPU with a TDP of 65 watts, we can estimate that this took 0.0011 watt-hours.
But you think it would take a decade for the entire internet to use as much power as a single AI video?
margalabargala
No, doing out the math I see I was being hyperbolic.
That one AI video used about 100kWh, so about four days worth of HTTPS for the whole internet.
detaro
renewing a certificate does not involve making a new keypair either... It's merely a pair of signatures, one for the CSR and one by the CA.
detaro
When you generate a new cert you do not generate a new keypair every time.
riffic
ah this is gonna piss off a few coworkers today but it's a good move either way.
thyristan
Semi-related question: where is the Let's Encrypt workalike for S/MIME?
Lammy
This sucks. I'm actually so sick of mandatory TLS. All we did was get Google Analytics and all the other spyware running “““securely””” while making it that much harder for any regular person to host anything online. This will push people even further into the arms of the walled gardens as they decide they don't want to deal with the churn and give up.
ShakataGaNai
First.... 99% of people have zero interest in hosting things themselves anyways. Like on their own server themselves. Geocities era was possibly the first and last time that "having your own page" was cool, and that was basically killed by social media.
As for certs... maybe at the start it was hard, but it's really quite easy to host things online, with a valid certificate. There are many CDN services like Cloudflare which will handle it for you. There are also application proxies like Traefik and Caddy which will get certs for you.
Most people who want their own site today will use Kinsta, Squarespace, GitHub Pages, or any one of thousands of page/site hosting services, all of which have a system for certificates that is so easy to use, most people don't even realize it is happening.
Lammy
lol at recommending Cloudflare and Microsoft (Github) in response to a comment decrying spyware
Every single thing you mentioned is plugged in to the tier-1 surveillance brokers. I am talking plain files on a single server shoved in a closet, or a cheap VPS. I don't often say this but I really don't think you “get” it.
jonathantf2
A "regular person" won't/can't deal with a server, VPS, anything like that. They'll go to GoDaddy because they saw an advert on the TV for "websites".
Lammy
They absolutely can deal with the one-time setup of one single thing that's easy to set up auto-pay for. It's so many additional concepts when you add in the ACME challenge/response because now you have to learn sysadmin-type skills to care for a periodic process, users/groups for who-runs-what-process and who-owns-what-cert-files, software updates to chase LE/ACME changes or else all your stuff breaks, etc.
Your attitude is so dismissive to the general public. We should be encouraging people to learn the little bits they want to learn to achieve something small, and instead we are building this ivory tower all-or-nothing stack. For what, job security? Bad mindset.
ezfe
Lol this is literally not true. I've set up self-hosted websites with no knowledge (just reading tutorials) and TLS is by far not the hardest step.
Lammy
A brand-new setup is not relevant to what I'm talking about. Try ignoring your entire infrastructure for a few years and see if you still think that lol
0xbadcafebee
I hate this, but I'm also glad it's happening, because it will speed up the demise of Web PKI.
CAs and web PKI are a bad joke. There's too many ways to compromise security, there's too many ways to break otherwise-valid web sites/apps/connections, there's too many organizations that can be tampered with, the whole process is too complex and bug-prone.
What Web PKI actually does, in a nutshell, is prove cryptographically that at some point in the past, there was somebody who had control of either A) an e-mail address or B) a DNS record or C) some IP space or D) some other thing, and they generated a certificate through any of these methods with one of hundreds of organizations. OR it proves that they stole the keys of such a person.
It doesn't prove that who you're communicating with right now is who they say they are. It only proves that it's someone who, at some point, got privileged access to something relating to a domain.
That's not what we actually want. What we actually want is to be assured this remote host we're talking to now is genuine, and to keep our communication secret and safe. There are other ways to do that, that aren't as convoluted and vulnerable as the above. We don't have to twist ourselves into all these knots.
I'm hopeful changes like these will result in a gradual catastrophe which will push the industry to actually adopt simpler, saner, more secure solutions. I proposed one years ago, but nobody cares because I'm just some guy on the internet and not a company with a big name. Nothing will change until the people with all the money and power make it happen, and they don't give a shit.
ocdtrekkie
It's hard to express how absolutely catastrophic this is for the Internet, and how incompetent a group of people have to be to vote 25/0 for increasing a problem that breaks the Internet for many organizations yearly by a factor of ten for zero appreciable security benefit.
Everyone in the CA/B should be fired from their respective employers, and we honestly need to wholesale plan to dump PKI by 2029 if we can't get a resolution to this.
dextercd
CAs and certificate consumers (browsers) voted in favour of this change. They didn't do this because they're incompetent but because they think it'll improve security.
It's really not that hard to automate renewals and monitor a system's certificate status from a different system, just in case the automation breaks and for things that require manual renewal steps.
I get that it's harder in large organisations and that not everything can be automated yet, but you still have a year before the certificate lifetime goes down to 200 days, which IMO is pretty conservative.
With a known timeline like this, customers/employees have ammunition to push their vendors/employers to invest into automation and monitoring.
ocdtrekkie
It's actually far worse for smaller sites and organizations than large ones. Entire pricey platforms exist around managing certificates and renewals, and large companies can afford those or develop their own automated solutions.
None of the platforms which I deal with will likely magically support automated renewal in the next year. I will likely spend most of the next year reducing our exposure to PKI.
Smaller organizations dependent on off the shelf software will be killed by this. They'll probably be forced to move things to the waiting arms of the Big Tech cloud providers that voted for this. (Shocker.) And it probably won't help stop the bleeding.
And again, there's no real world security benefit. Nobody in the CA/B has ever discussed real world examples of threats this solves. Just increasingly niche theoretical ones. In a zero cost situation, improving theoretical security is good, but in a situation like this where the cost is real fragility to the Internet ecosystem, decisions like this need to be justified.
Unfortunately the CA/B is essentially unchecked power, no individual corporate member is going to fire their representatives for this, much less is there a way to remove everyone that made this incredibly harmful decision.
This is a group of people who have hammers and think everything is a nail, and unfortunately, that includes a lot of ceramic and glass.
dextercd
I think most orgs can get away with free ACME clients and free/cheap monitoring options.
This will be painful for people in the short term, but in the long term I believe it will make things more automated, more secure, and less fragile.
Browsers are the ones pushing for this change. They wouldn't do it if they thought it would cause people to see more expired certificate warnings.
> Unfortunately the CA/B is essentially unchecked power, no individual corporate member is going to fire their representatives for this, much less is there a way to remove everyone that made this incredibly harmful decision.
Representatives are not voting against the wishes/instructions of their employer.
ocdtrekkie
I mean to give you an example of how far we are from this: IIS does not have built-in ACME support, and in the enterprise world it is basically "most web servers". Sure, you can add some third party thing off the Internet to do it, but... how many banks will trust that?
Unfortunately the problem is likely too removed from understanding for employers to care. Google and Microsoft do not realize how damaging the CA/B is, and probably take the word of their CA/B representatives that the choices that they are making are necessary and good.
I doubt Satya Nadella even knows what the CA/B is, much less that he pays an employee full-time to directly #### over his entire customer base and that this employee has nearly god-level control over the Internet. I have yet to see an announcement from the CA/B that represented a competent decision that reflected the reality of the security industry and business needs, and yet... nobody can get in trouble for it!
The whole industry has been moving in this direction for the last decade,
so there is not much to say.
Except that if you waited until the last moment, well, you will have to be in a hurry. (Non)actions have consequences :)
I'm glad about this decision because it'll hammer down a bit on those still resisting, those who have a human perform the yearly renewal. Let's see how stupid it can get.
xyzzy123
Can you point to a specific security problem this change is actually solving? For example, can we attribute any major security compromises in the last 5 years to TLS certificate lifetime?
Are the security benefits really worth making anything with a valid TLS certificate stop working if it is air-gapped or offline for 48 days?
> CAs and certificate consumers (browsers) voted in favour of this change. They didn't do this because they're incompetent but because they think it'll improve security.
They're not incompetent and they're not "evil", and this change does improve some things. But the companies behind the top level CA ecosystem have their own interests which might not always align with those of end users.
dextercd
If a CA or subscriber improves their security but had an undetected incident in the past, a hacker today has a 397 day cert and can reuse the domain control validation in the next 397 days, meaning they can MITM traffic for effectively 794 days.
CAs have now implemented MPIC. This may have thwarted some attacks, but those attackers still have valid certificates today and can request a new certificate without any domain control validation being performed in over a year.
New security standards should come into effect much faster. For fixes against attacks we know about today and new ones that are discovered and mitigated in the future.
xyzzy123
People who care deeply about this can use 30 day certs right now if they want to.
"The goal is to minimize risks from outdated certificate data, deprecated cryptographic algorithms, and prolonged exposure to compromised credentials. It also encourages companies and developers to utilize automation to renew and rotate TLS certificates, making it less likely that sites will be running on expired certificates."
I'm not even sure what "outdated certificate data" could be. The browser by default won't negotiate a connection with an expired certificate.
xyzzy123
> I'm not even sure what "outdated certificate data" could be...
Agree.
> According to the article:
Thanks, I did read that, it's not quite what I meant though. Suppose a security engineer at your company proposes that users should change their passwords every 49 days to "minimise prolonged exposure from compromised credentials" and encourage the uptake of password managers and passkeys.
How to respond to that? It seems a noble endeavour. To prioritise, you would want to know (at least):
a) What are the benefits - not mom & apple pie and the virtues of purity but as brass tacks - e.g: how many account compromises do you believe would be prevented by this change and what is the annual cost of those? How is that trending?
b) What are the cons? What's going to be the impact of this change on our customers? How will this affect our support costs? User retention?
I think I would have a harder time trying to justify the cert lifetime proposal than the "ridiculously frequent password changes" proposal. Sure, it's more hygienic, but I can't easily point to any major compromises in the past 5 years that would have been prevented by shorter certificate lifetimes. Whereas I could at least handwave in the direction of users who got "password stuffed" to justify ridiculously frequent password changes.
The analogy breaks down in a bad way when it comes to evaluating the cons. The groups proposing to decrease cert lifetimes bear nearly none of the costs of the proposal, for them it is externalised. They also have little to no interest in use cases that don't involve "big cloud" because those don't make them any money.
> They didn't do this because they're incompetent but because they think it'll improve security.
No, they did it because it reduces their legal exposure. Nothing more, nothing less.
The goal is to reduce the rotation time low enough that certificates will rotate before legal procedures to stop the rotation can kick in.
This does very little to improve security.
dextercd
Apple introduced this proposal. Why would they care about a CA's legal exposure?
Lowering the lifetime of certs does mean that orgs will be better prepared to replace bad certs when they occur. That's a good thing.
More organisations will now take the time to configure ACME clients instead of trying to convince CA's that they're too special to have their certs revoked, or even start embarrassing court cases, which has only happened once as far as I know.
Theories that involve CAs, Google, Microsoft, Apple, and Mozilla having ulterior motives and not considering potential downsides of this change are silly.
nickf
That isn’t at all true.
rcxdude
A large part of why it breaks things is because it only happens yearly. If you rotate certs on a regular pace, you actually get good at it and it stops breaking, ever. (basically everything I've set up with letsencrypt has needed zero maintenance, for example)
ocdtrekkie
So at a 47 day cadence, it's true we'll have to regularly maintain it: We'll need to hire another staff member to constantly do nothing but. (Most of the software we use does not support automated rotation yet. I assume some will due to this change, but certainly not 100%.)
And also, it probably won't avoid problems. Because yes, the goal is automation, and a couple weeks ago I was trying to access a site from an extremely large infrastructure security company which rotates their certificates every 24 hours. Their site was broken, and the subreddit about their company was full of complaints about it. Turns out automated daily rotation just means 365 more opportunities for breakage a year.
Even regular processes break, and now we're multiplying the breaking points... and again, at no real security benefit. There’s like... never ever been a case where a certificate leak caused a breach.
Avamander
> So at a 47 day cadence, it's true we'll have to regularly maintain it: We'll need to hire another staff member to constantly do nothing but. (Most of the software we use does not support automated rotation yet. I assume some will due to this change, but certainly not 100%.)
This is fundamentally a skill issue. If a human can replace the certificate, so can a machine. Write a script.
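In that spirit, a bare-bones sketch of what most such scripts boil down to: check how long the deployed cert has left, renew when it drops below a threshold, and push the result. The paths, the cert name, and push_to_device() are placeholders for whatever vendor-specific step applies; certbot is used purely as an example ACME client.

    # Sketch: "if a human can replace the certificate, so can a machine."
    import datetime
    import pathlib
    import subprocess

    from cryptography import x509

    LIVE = pathlib.Path("/etc/letsencrypt/live/device.example.internal")   # hypothetical
    CERT_FILE = LIVE / "fullchain.pem"
    KEY_FILE = LIVE / "privkey.pem"
    RENEW_BEFORE = datetime.timedelta(days=14)

    def push_to_device(cert: pathlib.Path, key: pathlib.Path) -> None:
        """Placeholder for the vendor-specific upload (web API, SCP, Redfish, ...)."""
        raise NotImplementedError

    def main() -> None:
        cert = x509.load_pem_x509_certificate(CERT_FILE.read_bytes())
        remaining = cert.not_valid_after - datetime.datetime.utcnow()
        if remaining > RENEW_BEFORE:
            return   # still plenty of runway
        subprocess.run(["certbot", "renew", "--cert-name", "device.example.internal"],
                       check=True)
        push_to_device(CERT_FILE, KEY_FILE)

    if __name__ == "__main__":
        main()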
arp242
You can disagree with all of this, but calling for everyone involved to be fired is just ridiculous and mean-spirited.
rglover
Is it? This is the crux of the problem with a lot of institutions. There's little to no professional accountability for bad moves anymore. It used to be that doing a good job and taking pride in one's work was all you needed to do to keep your job.
Now? It's a spaghetti of politics and emotional warfare. Grown adults who can't handle being told that they might not be up to the task and it's time to part ways. If that's the honest truth, it's not "mean," just not what that person would like to hear.
msie
Eff this shit. I'm getting out of sysadmin.
aristofun
Oh no, a bunch of stupid bureaucrats came up with another dumb idea. What a surprise!
belter
Are the 47 days to please the current US Administration?
eesmith
Based on the linked-to page, no:
47 days might seem like an arbitrary number, but it’s a simple cascade:
* 200 days = 6 maximal months (184 days) + 1/2 of a 30-day month (15 days) + 1 day wiggle room
* 100 days = 3 maximal months (92 days) + ~1/4 of a 30-day month (7 days) + 1 day wiggle room
* 47 days = 1 maximal month (31 days) + 1/2 of a 30-day month (15 days) + 1 day wiggle room
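The sums in that cascade do check out:

    # 6 maximal consecutive months (Jul-Dec) span 184 days, 3 span 92, 1 spans 31.
    print(184 + 15 + 1)   # 200
    print(92 + 7 + 1)     # 100
    print(31 + 15 + 1)    # 47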
vasilzhigilei
I'm on the SSL/TLS team @ Cloudflare. We have great managed certificate products that folks should consider using as certificate validity periods continue to shorten.
bambax
Simply having a domain managed by Cloudflare makes it magically https; yes, the traffic between the origin server and Cloudflare isn't encrypted, so it's not completely "secure", but for most uses it's good enough. It's also zero-maintenance and free.
Keep up the good work! ;-)
vasilzhigilei
Thanks! You can also set up free origin certs to make Cloudflare edge to origin connections encrypted as well.
Lammy
SSL added and removed here ;-)
tinix
yeah I'm convinced this is the real reason for these changes...
perverse incentives indeed.
lucb1e
Is this a joke (as in, that you don't actually work there) to make CF look bad for posting product advertisements in comment threads, or is this legit?
vasilzhigilei
It's one of my first times posting on HN; I thought this could be relevant, helpful info for someone. Thanks for pointing out that it sounds salesy - rereading my comment I see it too now.
bambax
People who are proud of the work they do are rare enough that we shouldn't punish them for it.
I wish technical groups would invest some time in real solutions for long-term, limited access systems which operate for decades at a time without 24/7 access to the Internet. Part of the reason infrastructure feels like running Java v1.3 on Windows 98 is because it's so widely ignored.
Such a certificate should not be trusted for domain verification purposes, even though it should match the domain. Instead it should be trusted for encryption / stream integrity purposes. It should be accepted on IPs outside of publicly routable space, like 192.168.0.0/16, or link-local IPv6 addresses. It should be possible to issue it for TLDs like .local. It should result in the usual invalid-certificate warning if served off a public internet address.
In other words, it should be handled a bit like a self-signed certificate, only without the hassle of adding your handcrafted CA to every browser / OS.
Of course it would only make sense if a major browser would trust this special CA in its browser by default. That is, Google is in a position to introduce it. I wonder if they may have any incentive though. (To say nothing of Apple.)
Old cruft dying there for decades
That's the reality and that's an issue unrelated to TLS
Running unmanaged compute at home (or elsewhere ..) is the issue here.
Practically, the solution is virtual machines with the compatible software you'll need to manage those older devices 10 years in the future, or run a secure proxy for them.
Internet routers are definitely one of the worst offenders because originating a root of trust between disparate devices is actually a hard problem, especially over a public channel like wifi. Generally, I'd say the correct answer to this is that wifi router manufacturers need to maintain secure infrastructure for enrolling their devices. If manufacturers can't bother to maintain this kind of infrastructure then they almost certainly won't be providing security updates in firmware either, so they're a poor choice for an Internet router.
I think that's a big win.
The root reason is that revocation is broken, and we need to do better to get the security properties we demand of the Web PKI.
It might in theory but I suspect it's going to make things very very unreliable for quite a while before it (hopefully) gets better. I think probably already a double digit fraction of our infrastructure outages are due to expired certificates.
And because of that it may well tip a whole class of uses back to completely insecure connections because TLS is just "too hard". So I am not sure if it will achieve the "more secure" bit either.
Good example of enshittification
> easier for a few big players in industry
Not necessarily. OP mentions that more certs would mean bigger CT logs, and more frequent renewals mean more load. Like with everything else, this seems like a trade-off. Unfortunately for you & me, as customers of cert authorities, 47 days is where the agreed cut-off now is (not 42).
What will Let's Encrypt be like with 7-day certs? Will it renew them every day (6 days of reaction time), or every 3 days (4 days of reaction time)? Not every org is suited to 24/7 staffing, some people go on holidays, some public holidays extend into long weekends, etc. :). I would argue that it would be a good idea to give people a full week to react to renewal problems. That seems impossible for short-lived certs.
Let's Encrypt was founded with a goal of rapidly (within a few years) helping get the web to as close to 100% encrypted as we could. And we've succeeded.
I don't think we could have achieved that goal any way other than being a CA.
DANE is the way (https://en.wikipedia.org/wiki/DNS-based_Authentication_of_Na...)
But no browser has support for it, so... :/
Further, all you've done is replace one authority (the CA authority) with another one (the zone authority, and thus your domain registrar and the domain registry).
When I manage a DNS zone, I'm free to generate all certificates I want
Are there any reasonable alternatives to CAs in a modern world? I have never heard any good proposals.
Certificate pinning is probably the most widely known way to get a certificate out there without relying on live PKI. However, certificate pinning just shifts the burden of trust from runtime to install time, and puts an expiration date on every build of the program. It also doesn't work for any software that is meant to access more than a small handful of pre-determined sites.
Web-of-trust is a theoretical possibility, and is used for PGP-signed e-mail, but it's also a total mess that doesn't scale. Heck, the best way to check the PGP keys for a lot of signed mail is to go to an HTTPS website and thus rely on the CAs.
DNSSEC could be the basis for a CA-free world, but it hasn't achieved wide use. Also, if used in this way, it would just shift the burden of trust from CAs to DNS operators, and I'm not sure people really like those much better.
https://en.wikipedia.org/wiki/Utah_Data_Center
I work in a very large organisation and I just don't see them being able to move to automated TLS certificates for their self-managed subdomains, inspection certificates, or anything else for that matter. It will be interesting to see how short-lived certs are adopted in the future.
[1]: https://en.wikipedia.org/wiki/Heartbleed#Certificate_renewal...
We did consider it.
As CAs prepare for post-quantum in the next few years, it will become even less practical as there is going to be pressure to cut down the number of signatures in a handshake.
That is unfortunate. I just deployed a web server the other day and was thrilled to deploy must-staple from Let's Encrypt, only to read that it was going away.
> As CAs prepare for post-quantum in the next few years, it will become even less practical as there is going to be pressure to cut down the number of signatures in a handshake.
Please delay the adoption of PQAs for certificate signatures at Let's Encrypt as long as possible. I understand the concern that a hypothetical quantum machine with tens of millions of qubits capable of running Shor's algorithm to break RSA and ECC keys might be constructed. However, "post-quantum" algorithms are inferior to classical cryptographic algorithms in just about every metric as long as such machines do not exist. That is why they were not even considered when the existing RSA and ECDSA algorithms were selected before Shor's algorithm was a concern. There is also a real risk that they contain undiscovered catastrophic flaws that will be found only after adoption, since we do not understand their hardness assumptions as well as we understand integer factorization and the discrete logarithm problem. This has already happened with SIKE and it is possible that similarly catastrophic flaws will eventually be found in others.
Perfect forward secrecy and short certificate expiry allow CAs to delay the adoption of PQAs for key signing until the creation of a quantum computer capable of running Shor's algorithm on ECC/RSA key sizes is much closer. As long as certificates expire before such a machine exists, PFS ensures no risk to users, assuming key agreement algorithms are secured. Hybrid schemes are already being adopted to do that. There is no quantum Moore's law that makes it a foregone conclusion that a quantum machine able to use Shor's algorithm to break modern ECC and RSA will be created. If such a machine is never made (due to the sheer difficulty of constructing one), early adoption in key signature algorithms would make everyone suffer from the use of objectively inferior algorithms for no actual benefit.
If the size of key signatures with post quantum key signing had been a motivation for the decision to drop support for OCSP must-staple and my suggestion that adoption of post quantum key signing be delayed as long as possible is in any way persuasive, perhaps that could be revisited?
Finally, thank you for everything you guys do at Let's Encrypt. It is much appreciated.
So for a bank, a private cert compromise is bad, for a regular low traffic website, probably not so much?
Sounds like your concept of the customer/provider relationship is inverted here.
The whole "customer is king" doesn't apply to something as critical as PKI infrastructure, because it would compromise the safety of the entire internet. Any CA not properly applying the rules will be removed from the trust stores, so there can be no exceptions for companies who believe they are too important to adhere to the contract they signed.
And if the safety of the entire internet is at risk, why is 47 days days an acceptable duration for this extreme risk, but 90 days is not?
> Shorter lifetimes mitigate the effects of using potentially revoked certificates. In 2023, CA/B Forum took this philosophy to another level by approving short-lived certificates, which expire within 7 days, and which do not require CRL or OCSP support.
Shorter-lived certificates make OCSP and other revocation mechanisms less of a load-bearing component within the Web PKI. This is a good thing, since neither CAs nor browsers have managed to make timely revocation methods scale well.
(I don't think there's any monetary or power advantage to doing this. The reason to do it is because shorter lifetimes make it harder for server operators to normalize deviant certificate operation practices. The reasoning there is the same as with backups or any other period operational task: critical processes must be continually tested and evaluated for correctness.)
That is the next step in nation state tapping of the internet.
A MITM cert would need to be manually trusted, which is a completely different thing.
the power dynamic here is that the CAs have a "too big to fail" inertia, where they can do bad things without consequence because revoking their trust causes too much inconvenience for too many people. shortening expiry timeframes to the point where all their certificates are always going to expire soon anyways reduces the harm that any one CA can do by offering bad certs.
it might be inconvenient for you to switch your systems to accommodate shorter expiries, but it's better to confront that inconvenience up front than for it to be in response to a security incident.
Well you see, they also want to be able to break your automation.
For example, maybe your automation generates a 1024 bit RSA certificate, and they've decided that 2048 bit certificates are the new minimum. That means your automation stops working until you fix it.
Doing this with 2-day expiry would be unpopular as the weekend is 2 days long and a lot of people in tech only work 5 days a week.
This is a ridiculous straw man.
> 48 hours. I am willing to bet money this threshold will never be crossed.
That's because it won't be crossed and nobody serious thinks it should.
Short certs are better, but there are trade-offs. For example, if cert infra goes down over the weekend, it would really suck. TBH, from a security perspective, something in the range of a couple of minutes would be ideal, but that runs up against practical reasons
- cert transparency logs and other logging would need to be substantially scaled up
- for the sake of everyone on-call, you really don't want anything shorter than a reasonable amount of time for a human to respond
- this would cause issues with some HTTP/3 performance-enhancing features
- thousands of servers hitting a CA creates load that outweighs the benefit of ultra short certs (which have diminishing returns once you're under a few days, anyways)
> This feels like much more of an ideological mission than a practical one
There are numerous practical reasons, as mentioned here by many other people.
Resisting this without good cause, like you have, is more ideological at this point.
It's been a huge pain as we have encountered a ton of bugs and missing features in libraries and applications to reload certs like this. And we have some really ugly workarounds in place, because some applications place a "reload a consul client" on the same level of "reload all config, including opening new sockets, adjusting socket parameters, doing TCP connection handover" - all to rebuild a stateless client throwing a few parameters at a standard http client. But oh well.
But I refuse to back down. Reload your certs and your secrets. If we encounter a situation in which we have to mass-revoke and mass-reissue internal certs, it'll be easy for those who do. I don't have time for everyone else.
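For anyone fighting the same battle in their own services, the cheap version of "reload your certs" in a Python server is roughly this: keep one shared SSLContext and call load_cert_chain() on it again when the files change, so new handshakes pick up the new certificate without touching the listening socket. A minimal sketch (paths and the SIGHUP trigger are assumptions):

    # Sketch: hot-reload a certificate in a long-running service. Re-invoking
    # load_cert_chain() on the shared context makes subsequent handshakes use
    # the fresh cert; already-established connections are left alone.
    import signal
    import socket
    import ssl

    CERT = "/etc/ssl/app/fullchain.pem"   # hypothetical paths
    KEY = "/etc/ssl/app/privkey.pem"

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(CERT, KEY)

    def reload_cert(signum, frame):
        ctx.load_cert_chain(CERT, KEY)    # e.g. triggered by a deploy hook: kill -HUP <pid>

    signal.signal(signal.SIGHUP, reload_cert)

    with socket.create_server(("0.0.0.0", 8443)) as listener:
        while True:
            conn, _ = listener.accept()
            with ctx.wrap_socket(conn, server_side=True) as tls:
                tls.sendall(b"HTTP/1.1 204 No Content\r\n\r\n")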
Do you promise to come back and tell us the story about when someone went on vacation and the certs issued on a Thursday didn't renew over the weekend and come Monday everything broke and no one could authenticate or get into the building?
https://www.hackerneue.com/item?id=25380301
Except there are no APIs to rotate those. The infrastructure doesn't exist yet.
And refreshing those automatically does not validate ownership, unlike certificates where you can do a DNS check or an HTTP check.
Microsoft has some technology where, next to these tokens, there is also a per-machine certificate that is used to sign requests, and those certificates can't leave the machine.
Because we run on Azure / AKS, switching to federated credentials ("workload identities") with the app registrations made most of the pain go away because MS manages all the rotations (3 months) etc. If you're on managed AKS the OIDC issuer side is also automagic. And it's free. I think GCP offers something similar.
https://learn.microsoft.com/en-us/entra/workload-id/workload...
The nightmare only intensifies for small businesses that allow their users to bring their own devices (yes, yes, sacrilege but that is how small businesses operate).
Not everything is a massive enterprise with an army of IT support personnel.
Having just implemented an internal CA, I can assure you, most corporations can’t just run an internal CA. Some struggle to update containers and tie their shoe laces.
I’ve done it in the past, and it was so painful, I just bit the bullet and started accessing everything under public hostnames so that I can get auto-issued Letsencrypt certificates.
There’s nothing stopping Apple and Google from issuing themselves certificates every 10 minutes. I get no value for doing this. Building out or expanding my own PKI for my company or setting up the infrastructure to integrate with Digicert or whomever gets me zero security and business value, just cost and toil.
Revocation is most often an issue when CAs fuck up. So now we collectively need to pay to cover their rears.
The big question is what happens when (not "if") that happens. Companies have repeatedly shown that they are unable to rotate certs in time, to the point of even suing CAs to avoid revocation. They've been asked nicely to get their shit together, and it hasn't happened. Shortening cert lifetime to force automation is the inevitable next step.
You're portraying people suing CAs to get injunctions to avoid outages as clueless or irresponsible. The fact is, DigiCert's actions, dictated by this CA/Browser Forum, were draconian and over-the-top responses to a minor risk. This industry trade group is out of control.
End of the day, we’re just pushing risk around. Running a quality internal PKI is difficult.
Non-browser things usually don't care even if a cert is expired or untrusted.
So I expect people still to use WebPKI for internal sites.
Why would browsers "most likely" enforce this change for internal CAs as well?
That said, it would be really nice if they supported DANE so that websites do not need CAs.
So yes, you had to first ignore the invalid self-signed certificate while using HTTPS with a client tool that really, really didn't want to ignore the validity issue, then upload a valid certificate, restart the service... which would terminate the HTTPS connection with an error breaking your script in a different not-fun way, and then reconnect... at some unspecified time later to continue the configuration.
Fun times...
Isn't this a good default? No network access, no need for a public certificate, no need for a certificate that might be mistakenly trusted by a public (non-malicious) device, no need for a public log for the issued certificate.
https://smallstep.com/docs/step-ca/
'ipa-client-install' for those so motivated. Certificates are literally one among many things part of your domain services.
If you're at the scale past what IPA/your domain can manage, well, c'est la vie.
I am not willing to give credentials to alter my dns to a program. A security issue there would be too much risk.
Introduce them to the idea of having something like Caddy sit in front of apps solely for the purpose of doing TLS termination... Caddy et al can update the certs automatically.
I haven’t used BIG IP in a long while, so take this with a grain of salt, but it seems to me that it might not be impossible to get something going – despite the fact that BIG IP itself doesn’t have native support for ACME.
Two pointers that might be of interest:
https://community.f5.com/discussions/technicalforum/upload-l...
https://clouddocs.f5.com/api/icontrol-rest/APIRef_tm_sys_cry...
Giving the TLS endpoint itself the authority to manage certificates kind of weakens the usefulness of rotating certificates in the first place. You probably don't let your external facing authoritative DNS servers near zone key material, so there's no reason to let the external load balancers rotate certificates.
Where I have used F5 there was never any problem letting the backend configuration system do the rotation and upload of certificates together with every other piece of configuration that is needed for day to day operations.
Don't expect firmware / software updates to enable ACME-type functionality for tons of gear. At best it'll be treated as an excuse by vendors to make Customers forklift and replace otherwise working gear.
Corporate hardware lifecycles are longer than the proposed timeline for these changes. This feels like an ill thought-out initiative by bureaucrats working in companies who build their own infrastructure (in their white towers). Meanwhile, we plebs who work in less-than-Fortune 500 companies stuck with off-the-shelf solutions will be forced to suffer.
Or request a certificate over the public internet, for an internal service. Your hostname must be exposed to the web and will be publicly visible in transparency reports.
Key loss on one of those is like a takeover of an entire chunk of hostnames. Really opens you up.
That doesn't seem like the end of the world. It means you shouldn't have `secret-plans-for-world-takeover.example.com`, but it's already the case that secret projects should use opaque codenames. Most internal domain names would not actually leak any information of value.
The closest thing is maybe described (but not shown) in these posts: https://blog.daknob.net/workload-mtls-with-acme/ https://blog.daknob.net/acme-end-user-client-certificates/
(disclaimer: I'm a founder at anchor.dev)
> The CSR relayed through Anchor does not contain secret information. Anchor never sees the private key material for your certificates.
Give us a big global *.local cert we can all cheat with, so I don't have to blast my credentials in the clear when I log into my router's admin page.
For the other case perhaps renew the cert at a host allowed to do outside queries for the dns challenge and find some acceptable automated way to propagate an updated cert to the host that isn't allowed outside queries.
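As a rough illustration of what that propagation step could look like, here's a minimal sketch, assuming the isolated host at least accepts SSH from the renewing host; the address, paths, and service name are all hypothetical. Something like this could be wired up as a certbot --deploy-hook.

```python
import subprocess

# Hypothetical: runs on the host that can reach the outside world and just
# completed a dns-01 renewal; push the new files to the isolated host and
# reload its web server over SSH.
REMOTE = "admin@10.0.0.5"
SRC = "/etc/letsencrypt/live/internal.example.com"
DST = "/etc/ssl/private"

for name in ("fullchain.pem", "privkey.pem"):
    subprocess.run(["scp", f"{SRC}/{name}", f"{REMOTE}:{DST}/{name}"], check=True)

# Reload rather than restart so existing connections survive.
subprocess.run(["ssh", REMOTE, "systemctl reload nginx"], check=True)
```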
You can do nothing except twiddle your thumbs while it times out and that may take a couple of days.
And in terms of security, I think it is a double-edged sword:
- everyone will be so used to certificates changing all the time, with no certificate pinning anymore, that the day China, a company, or whoever serves you a fake certificate, you will be less able to notice it
- instead of closed, read-only systems that only need to connect outside once a year or less to update their certificates, all machines around the world will now have to allow quasi-permanent connections to certificate servers to keep updating all the time. If a DigiCert or Let's Encrypt server, or the "cert updating client", is ever rooted or has a security issue, most servers around the world could be compromised in a very, very short time.
As a side note, I'm totally laughing at the following explanation in the article:
So, 47 is not arbitrary, but 1 month + 1/2 month + 1 day are somehow not arbitrary values?

I'm a computing professional in the tiny slice of internet users that actually understands what a cert is, and I never look at a cert by hand unless it's one of my own that I'm troubleshooting. I'm sure there are some out there who do (you?), but they're a minority within a minority; the rest of us just rely on the automated systems to do a better job at security than we ever could.
At a certain point it is correct for systems engineers to design around keeping the average-case user more secure even if it means removing a tiny slice of security from the already-very-secure power users.
like, private CA? All of these restrictions are only applied for certificates issued under the webtrust program. Your private CA can still issue 100 year certificates.
Support for cert and CA pinning is in a state that is much better than I thought it will be, at least for mobile apps. I'm impressed by Apple's ATS.
Yet, for instance, you can't pin a CA for just any domain; you always have to declare it up front for review, otherwise your app may not get accepted.
Doesn't this mean that it's not (realistically) possible to create cert pinning for small solutions? Like homelabs or app vendors that are used by onprem clients?
We'll keep abusing PKI for those use cases.
There is a client that has a self hosted web service. Or a SaaS but under his own domain.
There is a vendor that provides nice apps to interact with that service. Vendor distributes them on his own to stores, upgrades etc.
Clients has no interest in doing that, nor any competencies.
Currently there is no solution here: Vendor needs to distribute an app that has Client's CAs or certs built in (into his app realese), to be able to pin it.
I've seen that scenario many times in mid/small-sized banks, insurance and surrounding services. Some of these institutions rely purely on external vendors and just integrate them. Same goes for tech savvy selfhosters - they often rely on third party mobile apps but host backends themselves.
Not related to certificates specifically, and the specific number of days is in no way a security risk, but it reminded me of NUMS generators. If you find this annoyingly arbitrary, you may also enjoy: <https://github.com/veorq/numsgen>. It implements this concept:
> [let's say] one every billion values allows for a backdoor. Then, I may define my constant to be H(x) for some deterministic PRNG H and a seed value x. Then I proceed to enumerate "plausible" seed values x until I find one which implies a backdoorable constant. I can begin by trying out all Bible verses, excerpts of Shakespeare works, historical dates, names of people and places... because for all of them I can build a story which will make the seed value look innocuous
From http://crypto.stackexchange.com/questions/16364/why-do-nothi...
Only if browsers enforce the TLS requirements for private CAs. Usually, browsers exempt user- or domain-controlled CAs from all kinds of requirements, like certificate transparency log requirements. I doubt things will be different this time.
If they do decide to apply those limits, you can run an ACME server for your private CA and point certbot or whatever ACME client you prefer at it to renew your internal certificates. Caddy can do this for you with a couple of lines of config: https://caddyserver.com/docs/caddyfile/directives/acme_serve...
Funnily enough, Caddy defaults to issuing 12-hour certificates for its local CA deployment.
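To sketch what that looks like from the client side: any standard ACME client can simply be pointed at the private directory instead of Let's Encrypt. The directory URL and hostname below are hypothetical, and this assumes the internal CA's root is already in the system trust store.

```python
import subprocess

# Hypothetical internal ACME directory; the exact URL depends on how your
# internal CA (Caddy acme_server, step-ca, etc.) is configured.
INTERNAL_DIRECTORY = "https://ca.corp.internal/acme/local/directory"

subprocess.run(
    [
        "certbot", "certonly",
        "--standalone",                        # answer the http-01 challenge on port 80
        "--server", INTERNAL_DIRECTORY,        # internal CA instead of Let's Encrypt
        "--register-unsafely-without-email",   # internal CA, no expiry e-mails needed
        "-d", "app.corp.internal",             # hypothetical internal hostname
    ],
    check=True,
)
```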
> no certificate pinning anymore
Why bother with public certificate authorities if you're hardcoding the certificate data in the client?
> Instead of having closed systems, readonly, having to connect outside and update only once per year or more to update the certificates, you will have now all machines around the world that will have to allow quasi permanent connections to random certificate servers for the updating the system all the time.
Those hosts needed a bastion host or proxy of sorts to connect to the outside yearly, so they can still do that today. But I don't see the advantage of using the public CA infrastructure in a closed system, might as well use the Microsoft domain controller settings you probably already use in your network to generate a corporate CA and issue your 10 year certificates if you're in control of the network.
Browser certificate pinning is deprecated since 2018. No current browsers support HPKP.
There are alternatives to pinning, DNS CAA records, monitoring CT logs.
For those times when I only care about encryption, I'm forced to take on the extra burden that caring about identity brings.
Pet peeve.
There is the web as it always has been on http/1.1 that is a hyperlinked set of html documents hosted on a mishmash of random commercial and personal servers. Then there is modern http/2 http/3 CA TLS only web hosted as a service on some other website or cloud; mostly to do serious business and make money. The modern web's CA TLS-only ID scheme is required due to the complexity and risk of automatic javascript execution in browsers.
I wish we could have browsers that could support both use cases. But we can't because there's too much money and private information bouncing around now. Can't be whimsical, can't 'vibe code' the web ID system (ie, self signed not feasible in HTTP/3). It's all gotta be super serious. For everyone. And that means bringing in a lot of (well hidden by acme2 clients) complexity and overhead and centralization (everyone uses benevolent US based Lets Encrypt). This progressive lowering of the cert lifetimes is making the HTTP-only web even more fragile and hard to create lasting sites on. And that's sad.
TOFU works for the old web just great. It's completely incompatible with the modern web because major browsers will only ever compile their HTTP/* libs with flags that prevent TOFU and self-signed. You could host a http/1.1 self-signed and TOFU but everyone (except geeks) would be scared away or incapable of loading it.
So, TOFU works if you just want to do something like the "gemini" protocol, but instead of a new protocol you just stick to original HTTP and have a demographic of retro-enthusiasts and poor people. It's just about as accessible as gemini for most people (ie, not very), except for two differences. 1. Bots still love http/1.1 and don't care if it's plain text. 2. There's still a giant web of http/1.1 websites out there.
*I mistakenly wrote "certificate" here initially. Sorry.
Undoubtedly it is not best practice to lean on TOFU for good reason, but there are simply some lower stakes situations where engaging the CA system is a bit overkill. These are systems with few nodes (maybe just one) that have few users (maybe just one.) I have some services that I deploy that really only warrant a single node as HA is not a concern and they can easily run off a single box (modern cheap VPSes really don't sweat handling ~10-100 RPS of traffic.) For those, I pre-generate SSH server keys before deployment. I can easily verify the fingerprint in the excessively rare occasion it isn't already trusted. I am not a security expert, but I think this is sufficient at small scales.
To be clear, there are a lot of obvious security problems with this:
- It relies on me actually checking the fingerprint.
- SSH keys are valid and trusted indefinitely, so it has to be rotated manually.
- The bootstrap process inevitably involves the key being transmitted over the wire, which isn't as good as never having the key go over the wire, like you could do with CSRs.
This is clearly not good enough for a service that needs high assurance against attackers, but I honestly think it's largely fine for a small to medium web server that serves some small community. Spinning up a CA setup for that feels like overkill.
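For what it's worth, the manual fingerprint check in that workflow is easy to script. A small sketch (the key path is hypothetical) that reproduces the "SHA256:..." fingerprint `ssh-keygen -lf` prints, so the value can be recorded at deploy time and compared out of band on first connect:

```python
import base64
import hashlib
from pathlib import Path

# Read the host's public key file and compute the OpenSSH-style fingerprint:
# SHA-256 over the raw key blob, base64-encoded without padding.
pub = Path("/etc/ssh/ssh_host_ed25519_key.pub").read_text().split()
key_blob = base64.b64decode(pub[1])
digest = hashlib.sha256(key_blob).digest()
print("SHA256:" + base64.b64encode(digest).decode().rstrip("="))
```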
As for what I personally would do instead for a fleet of servers, personally I think I wouldn't use SSH at all. In professional environments it's been a long time since I've administered something that wasn't "cloud" and in most of those cloud environments SSH was simply not enabled or used, or if it was we were using an external authorization system that handled ephemeral keys itself.
That said, here I'm just suggesting that I think there is a gap between insecure HTTP and secure HTTPS that is currently filled by self-signed certificates. I'm not suggesting we should replace HTTPS usage today with TOFU, but I am suggesting I see the value in a middle road between HTTP and HTTPS where you get encryption without a strong proof of what you're connecting to. In practice this is sometimes the best you can really get anyway: consider the somewhat common use case of a home router configuration page. I personally see the value in still encrypting this connection even if there is no way to actually ensure it is secure. Same for some other small scale local networking and intranet use cases.
TOFU on ssh server keys... it's still bad, but less people are interested in intercepting ssh vs tls.
Also, I agree that TOFU in its own is certainly worse than having robust verification via the CA system. OTOH, SSH-style TOFU has some advantages over the CA system, too, at least without additional measures like HSTS and certificate pinning. If you are administering machines that you yourself set up, there is little reason to bother with anything more than TOFU because you'll cache the key shortly after the machine is set up and then get warned if a MITM is attempted. That, IMO, is the exact sort of argument in favor of having an "insecure but encrypted" sort of option for the web; small scale cases where you can just verify the key manually if you need to.
Mostly because ssh isn't something most people (eg. your aunt) uses, and unlike with https certificates, you're not connecting to a bunch of random servers on a regular basis.
Both defend against attackers the other cannot. In particular, the number of machines, companies and government agencies you have to trust in order to use a CA is much higher.
For example, if the MITM requires physical access to the machine, you'd also have to cover physical security first. As long as that is not the case, who cares about some connection hijack? And if the data you're actually communicating isn't worth encrypting anyway, but has to be encrypted because of regulation, you're just doing the dance without it being worth it.
Not "never", because of HSTS preload, and browsers slowly adding scary warnings to plaintext connections.
https://preview.redd.it/1l4h9e72vp981.jpg?width=640&crop=sma...
Safari did some half measures starting in Safari 15 (don't know the year) and now fully defaults to https first.
Firefox 136 (2025) now does https first as well.
On the other hand providing the option may give a false sense of security. I think the main reason SSH isn't MitM'd all over the place is it's a pretty niche service and very often you do have a separate authentication method by sending your public key over HTTPS.
But like, no: the free Wi-Fi I'm using can't, in fact, MITM the encryption used by my connection... it CAN do a bunch of other shitty things to me that undermine not only my privacy but even undermine many of the things people expect to be covered by privacy (using traffic analysis on the size, timing, or destination of the packets that I'm sending), but the encryption itself isn't subject to the failure mode of SSH.
Establishing the initial exchange of crypto key material can be.
That's where certificates are important because they add identity and prevent spoofing.
With TOFU, if the first use is on an insecure network, this exchange is jeopardized. And in this case, the encryption is not with the intended partner and thus does not need to be attacked.
Hm? The reason I do use those services over a network I don't trust is because they're wrapped in authenticated, encrypted channels. The authenticated encryption happens at a layer above the network because I don't trust the network.
He wasn't proposing that encryption without authentication gets the full padlock and green text treatment.
You might be visiting myfavouriteshoes.com (a boutique shoe site you have been visiting for years), but you won't necessarily know if the regular owner is away or even if the business has been sold.
OK I will fess up. The truth is that I don't spend a lot of time in coffee shops but I do have a ton of crap on my LAN that demands high amounts of fiddle faddle so that the other regular people in my house can access stuff without dire certificate warnings, the severity of which seems to escalate every year.
Like, yes, I eat vegetables and brush my teeth and I understand why browsers do the things they do. It's just that neither I nor my users care in this particular case, our threat model does not really include the mossad doing mossad things to our movie server.
Yes I am being snarky - network level MITM resistance is wonderful infrastructure and CT is great too.
If we encrypt everything we don't need AuthN/Z.
Encrypt locally to the target PK. Post a link to the data.
The goal isn't to make everything impossible to break. The goal is to provide Just Enough security to make things more difficult. Legally speaking, sniffing and decrypting encrypted data is a crime, but sniffing and stealing unencrypted data is not.
That's an important practical distinction that's overlooked by security bozos.
Separation between CAs and domains allows browsers to get rid of incompetent and malicious CAs with minimal user impact.
Without DNSSEC's guarantees, the DANE TLSA records would be as insecure as self-signed certificates in WebPKI are.
It's not enough to have some certificate from some CA involved. It has to be a part of an unbroken chain of trust anchored to something that the client can verify. So you're dependent on the DNSSEC infrastructure and its authorities for security, and you can't ignore or replace that part in the DANE model.
However, we could use some form of Certificate Transparency that would somehow work with DANE.
Also it still protects you from everyone who isn't your DNS provider, so it's valuable if you only need a medium level of security.
They can, but they'll also get caught thanks to CT. No such audit infrastructure exists for DANE/DNSSEC.
> It doesn't add any security to have PKI separate from DNS.
One can also get a certificate for an IP address.
1. mobile apps.
2. enterprise APIs. I dealt with lots of companies that would pin the certs without informing us, and then complain when we'd rotate the cert. A 47-day window would force them to rotate their pins automatically, making it even worse of a security theater. Or hopefully, they switch rightly to CAA.
Health Systems love pinning certs, and we use an ALB with 90 day certs, they were always furious.
Every time I was like "we can't change it", and "you do trust the CA right?", absolute security theatre.
It’s become a big part of my work and I’ve always just had a surface knowledge to get me by. Assume I work in a very large finance or defense firm.
You should really generate a new key for each certificate, in case the old key was compromised and you don't know about it.
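Most ACME clients already do this by default, but if you're rolling your own, here's a minimal sketch with pyca/cryptography (the hostname is a placeholder): generate a fresh key for every renewal and build the CSR from it; only the CSR ever leaves the machine.

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

# Fresh key pair for this renewal; never reuse the previous one.
key = ec.generate_private_key(ec.SECP256R1())

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com")]))
    .add_extension(
        x509.SubjectAlternativeName([x509.DNSName("www.example.com")]),
        critical=False,
    )
    .sign(key, hashes.SHA256())
)

# The CSR (and never the private key) is what goes to the CA.
print(csr.public_bytes(serialization.Encoding.PEM).decode())
```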
What would really be nice, but is unlikely to happen would be if you could get a constrained CA certificate issued for your domain and pin that, then issue your own short term certificates from there. But if those are wide spread, they'd need to be short dated too, so you'd need to either pin the real CA or the public key and we're back to where we were.
(emphasis added)
Pump the brakes there, digicert. Price is based on an annual subscription. CA costs will actually go up an infinitesimal amount, but they’re already nearly zero to begin with. Running a CA has got to be one of the easiest rackets in the world.
Given that the overarching rationale here is security, what made them stop at 47 days? If the concern is _actually_ security, allowing a compromised cert to exist for a month and a half is I guess better than 398 days, but why is 47 days "enough"?
When will we see proposals for max cert lifetimes of 1 week? Or 1 day? Or 1 hour? What is the lower limit of the actual lifespan of a cert and why aren't we at that already? What will it take to get there?
Why are we investing time and money in hatching schemes to continually ratchet the lifespan of certs back one more step instead of addressing the root problems, whatever those are?
Yeah not sure about that one...
I've read the basics on Cloudflare's blog and MDN. But at my job, I encountered a need to upload a Let's encrypt public cert to the client's trusted store. Then I had to choose between Let's encrypt's root and intermediate certs, between key types RSA and ECDSA. I made it work, but it would be good to have an idea of what I'm doing. For example why root RSA key worked even though my server uses ECDSA cert. Before I added the root cert to a trusted store, clients used to add fullchain.pem from the server and it worked too — why?
- If you're looking for a concise (yet complete) guide: https://www.feistyduck.com/library/bulletproof-tls-guide/
- OpenSSL Cookbook is a free ebook: https://www.feistyduck.com/library/openssl-cookbook/
- SSL/TLS and PKI history: https://www.feistyduck.com/ssl-tls-and-pki-history/
- Newsletter: https://www.feistyduck.com/newsletter/
- If you're looking for something comprehensive and longer, try my book Bulletproof TLS and PKI: https://www.feistyduck.com/books/bulletproof-tls-and-pki/
In another instance, to connect to a server, only the root certificate is present in the trust store. Does that mean encryption can be performed with just the root certificate?
Yep, that me.
Thanks for the blog post!
No idea why the RSA root worked even though the server used an ECDSA cert; maybe check into the recent cross-signing shenanigans that Let's Encrypt had to pull to extend support for very old Android versions.
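On the "why does the root alone work" part: the client only needs a trusted root, because the server sends its leaf plus the intermediate during the handshake and the client walks that chain upward until it hits something it already trusts. The key types don't have to match either; an RSA root can sign an ECDSA-keyed intermediate or leaf. A small sketch with Python's ssl module (the root file path and hostname are placeholders, and it only succeeds if that host really chains to that root):

```python
import socket
import ssl

# Trust *only* one root certificate file. The server is expected to send its
# leaf and intermediate in the handshake; the client just verifies the chain
# up to this root, which is why shipping only the root to clients is enough.
ctx = ssl.create_default_context(cafile="isrgrootx1.pem")

with socket.create_connection(("www.example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="www.example.com") as tls:
        cert = tls.getpeercert()
        print("issuer:", cert["issuer"])
        print("expires:", cert["notAfter"])
```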
If the information is relatively unchanged and the details well documented why not ask questions to fill in the gaps?
The Socratic method has been the best learning tool for me and I'm doubling my understanding with the LLMs.
It seems to me like compromised keys are rare. It also seems like 47 days is low enough to be inconvenient, but not low enough to prevent significant numbers of people from being compromised if there is a compromised key.
It's not only key mismanagement that is being mitigated. You also have to prove more frequently that you have control of the domain or IP in the certificate.
In essence it brings a working method of revocation to WebPKI.
> but not low enough to prevent significant numbers of people from being compromised if there is a compromised key.
Compared to a year?
That doesn't particularly matter; if someone takes over the domain but doesn't have a leaked key, they can't sign requests for the domain with my cert. It takes a leaked key for this to turn into a vulnerability.
On the other hand, anyone that owns the domain can get a perfectly valid cert any time, no need to exploit anything. And given that nobody actually looks at the details of the cert owner in practice, that means that if you lose the domain, the new owner is treated as legit. No compromises needed.
The only way to prevent that is to pin the cert, which this short rotation schedule makes harder, or pin the public key and be very careful to not regenerate your keys when you submit a new CSR.
In short: Don't lose your domain.
> Compared to a year?
Typically these kinds of things have an exponential dropoff, so most of the exploited folks would be soon after the compromise. I don't think that shortening to this long a period, rather than (say) 24h would make a material difference.
But, again, I'm also not sure how many people were compromised via anything that this kind of rotation would prevent. It seems like most exploits depend on someone either losing control over the domain (again, don't do that; the current issuance model doesn't handle that), or just being phished via a valid cert on an unrelated domain.
Do you have concrete examples of anyone being exploited via key mismanagement (or not proving often enough that they have control over a domain)?
This reminds me a bit of trying to get TLS 1.2 support in browsers before the revelation that the older versions (especially SSL3) were in fact being exploited all the time directly and via downgrading. Since practically nobody complained (out of ignorance) and, at the time, browsers didn't collect metrics and phone home with them (it was a simpler time), there was no evidence of a problem. Until there was massive evidence of a problem because some people bothered to look into and report it. Journalism-driven development shouldn't be the primary way to handle computer security.
It does, if someone gets temporary access, issues a certificate and then keeps using it to impersonate something. Now the malicious actor has to do it much more often, significantly increasing chances of detection.
At least, that's what the rules say. In practice CAs have a really hard time saying no to a multi-week extension because a too-big-to-fail company running "critical infrastructure" isn't capable of rotating their certs.
Short cert duration forces companies to automate cert renewal, and with automation it becomes trivial to rotate certs in an acceptable time frame.
I suppose technically you can get approximately the same thing with 24-hour certificate expiry times. Maybe that's where this is ultimately heading. But there are issues with that design too. For example, it seems a little at odds with the idea of Certificate Transparency logs having a 24-hour merge delay.
It also lowers the amount of time it'd take for a top-down change to compromise all outstanding certificates. (Which would seem paranoid if this wasn't 2025.)
The real reason was Snowden. The jump in HTTPS adoption after the Snowden leaks was a virtual explosion; and set HTTPS as the standard for all new services. From there, it was just the rollout. (https://www.eff.org/deeplinks/2023/05/10-years-after-snowden...)
(Edit because I'm posting too fast, for the reply):
> How do you enjoy being dependent on a 3rd party (even a well intentioned one) for being on the internet?
Everyone is reliant on a 3rd party for the internet. It's called your ISP. They also take complaints and will shut you down if they don't like what you're doing. If you are using an online VPS, you have a second 3rd party, which also takes complaints, can see everything you do, and will also shut you down if they don't like what you're doing; and they have to, because they have an ISP to keep happy themselves. Networks integrating with 3rd party networks is literally the definition of the internet.
Let's Encrypt... Cloudflare... useful services right? Or just another barrier to entry because you need to set up and maintain them?
https://letsencrypt.org/2025/02/20/first-short-lived-cert-is...
> The ballot argues that shorter lifetimes are necessary for many reasons, the most prominent being this: The information in certificates is becoming steadily less trustworthy over time, a problem that can only be mitigated by frequently revalidating the information.
> The ballot also argues that the revocation system using CRLs and OCSP is unreliable. Indeed, browsers often ignore these features. The ballot has a long section on the failings of the certificate revocation system. Shorter lifetimes mitigate the effects of using potentially revoked certificates. In 2023, CA/B Forum took this philosophy to another level by approving short-lived certificates, which expire within 7 days, and which do not require CRL or OCSP support.
Personally I don't really buy this argument. I don't think the web sites that most people visit (especially highly-sensitive ones like for e-mail, financial stuff, a good portion of shopping) change or become "less trustworthy" that quickly.
And I would argue that MITMing communications is a lot harder for (non-nation-state) attackers than compromising a host, so trust compromise is a questionable worry.
By that logic, we don't really need certificates, just TOFU.
It works fairly well for SSH, but that tends to be a more technical audience. But doing a "Always trust" or "Always accept" are valid options in many cases (often for internal apps).
And a significant part of the security is concentrated in the way Certificate Authorities validate domain ownership (the so-called challenges).
Next, maybe clients can run those challenges directly, instead of relying on certificates? For example, when connecting to a server, the client sends two unique values, and the server must create a DNS record <unique-val-1>.server.com whose value is <unique-val-2>. The client checks that such a record was created, and thus the server has proven it controls the domain name.
Auth through DNS, that's what it is. We will just need to speed up the DNS system.
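Purely as an illustration of that idea (this is not a real protocol), the client side could look something like the sketch below, using the third-party dnspython package; how the two values get to the server, and all the caching/propagation problems mentioned elsewhere in the thread, are hand-waved away.

```python
import secrets

import dns.exception
import dns.resolver  # third-party: dnspython


def server_proved_domain_control(domain: str) -> bool:
    # The client picks two random values and hands them to the server (that
    # exchange is elided); the server must then publish
    # <val1>.<domain> TXT = <val2> to prove it controls the domain.
    val1 = secrets.token_hex(16)
    val2 = secrets.token_hex(16)

    # ... send (val1, val2) to the server and wait for it to publish ...

    try:
        answers = dns.resolver.resolve(f"{val1}.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer, dns.exception.Timeout):
        return False
    published = {b"".join(rdata.strings).decode() for rdata in answers}
    return val2 in published
```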
The reason an attacker can't MITM Let's Encrypt (or similar ACME issuers) is because they request the challenge-response from multiple locations, making sure a simple MITM against them doesn't work.
A fully DNS-based "certificate setup" already exists: DANE, but that requires DNSSEC, which isn't widely used.
But that's just a nuance that could be fixed. I elaborate a little more on what I mean in https://www.hackerneue.com/item?id=43712754
Thx for pointing to DANE.
I would be more concerned about the number of certificates that would need to be issued and maintained over their lifecycle - which now scales with the number of unique clients challenging your server (or maybe I misunderstand, and maybe there aren't even certificates any more in this scheme).
Not to mention the difficulties of assuring reasonable DNS response times and fresh, up-to-date results when querying a global eventually consistent database with multiple levels of caching...
[0] https://letsencrypt.org/docs/challenge-types/#dns-01-challen...
I am not saying this scheme is really practical currently.
That's just an imaginary situation coming to mind, illustrating the increased importance of domain ownership validation procedures used by Certifying Authorities. Essentially the security now comes down to the domain ownership validation.
Also a correction. The server not simply puts <unique-val-2>, it puts sha256(<unique-val-2> || '.' || <fingerprint of the public key of the account>).
Yes, the ACME protocol uses account keys. The private key signs requests for new certs, and the public key fingerprint used during domain ownership validation confirms that the challenge response was intended for that specific account.
I am not suggesting ACME can be trivially broken.
I just realized that risks of TLS certs breaking is not just risk of public key crypto being broken, but also includes the risks of domain ownership validation protocols.
Side note: I wonder how much pressure this puts on providers such as LetsEncrypt, especially with the move to validate IPs. And more specifically IPv6…
I don't disagree with you that it should be super common. But it's surprisingly not in many businesses. Heck, Okta (nominally a large security company) still sends out notifications every time they change certificates and publishes a copy of their current correct certs in github: https://github.com/okta/okta-pki - How they do the actual rotation? No idea, but... I'd guess it's not automatic with that level of manual notification/involvement. (Happy to be proven wrong though).
Self-signed custom certs also do that. But those are demonized.
Also, SSL kind of tries to define an IP-to-DNS certification of ownership as well.
There's also a distinct difference between 'this cert expired last week', 'this cert doesn't exist', and a MITM attack. Expired? Just give a warning, not a scare screen. MITM? Sure, give a big scary OHNOPE screen.
But, yeah, 47 days is going to wreak havoc on network gear and weird devices.
The only real alternative to checking who signed a certificate is checking the certificate's fingerprint hash instead. With self-signed certificates, this is the only option. However, nobody does this. When presented with an unknown certificate, people will just blindly trust it. So self-signed certificates at scale are very susceptible to MITM. And again, you're not going to know it happened.
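If you do want to check a fingerprint instead of a signer, it's a few lines to compute. A sketch (the host is hypothetical) that grabs the presented certificate without validating it and prints the same SHA-256 fingerprint that `openssl x509 -noout -fingerprint -sha256` would show:

```python
import hashlib
import ssl

# Fetch the presented certificate without validating it (appropriate when the
# peer is self-signed) and print a SHA-256 fingerprint for out-of-band comparison.
pem = ssl.get_server_certificate(("router.lan", 443))
der = ssl.PEM_cert_to_DER_cert(pem)
fp = hashlib.sha256(der).hexdigest()
print(":".join(fp[i:i + 2] for i in range(0, len(fp), 2)).upper())
```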
Encryption without authentication prevents passive snooping but not active and/or targeted attacks. And the target may not be you, it may be the other side. You specifically might not be worth someone's time, but your bank and all of its other customers too, probably is.
OCSP failed. CRLs are not always being checked. Shorter expiry largely makes up for the lack of proper revocation. But expiration must consequently be treated as no less severe than revocation.
Homoglyph attacks are a thing. And I can pay $10 for a homoglyph name. No issues. I can get a webserver on a VM and point DNS at it. From there I can get a Let's Encrypt cert. Use Nginx to proxy to the real domain. Install a mailserver and send out spoofed mails. You can even set up SPF/DKIM/DMARC and have a complete attested chain.
And it's all based on a fake DNS name, using nothing more than things like a Cyrillic 'o'.
And, the self-signed issue is also what we see with SSH. And it mostly just works too.
TLS with Web PKI is a significantly more secure system when dealing with real people, and centralized PKI systems in general are far more scalable (but not hardly perfect!) compared to decentralized trust systems, with common SSH practices near the extreme end of decentralized. Honestly, the general state of SSH security can only be described as "working" due to a lack of effort from attackers more than the hygienic practices of sysadmins.
Homoglyph attacks are a real problem for HTTPS. Unfortunately, the solutions to that class of problem have not succeeded. Extended Validation certificates ended up a debacle; SRP and PAKE schemes haven't taken off (yet?); and it's still murky whether passkeys are here to stay. And a lot of those solutions still boil down to TOFU since, essentially, they require you to have visited the site at least once before. Plus, there remain fallback options that are easier to phish against. Maybe these problems would have been more solvable if DNSSEC succeeded, but that didn't happen either.
I've done this and it works very well. I had a Digital Ocean droplet so used their DNS service for the challenge domain.
https://letsencrypt.org/docs/challenge-types/#dns-01-challen...
It also occurred to me that there's nothing(?) preventing you from concurrently having n valid certificates for a particular hostname, so you could just enroll distinct certificates for each host. Provided the validation could be handled somehow.
The other option would maybe be doing DNS-based validation from a single orchestrator and then pushing that result onto the entire fleet.
It is a bit funny that LetsEncrypt has non-expiring private keys for their accounts.
I use this to sync users between small, experimental cluster nodes.
Some notes I have taken: https://notes.bayindirh.io/notes/System+Administration/Synci...
> Get certificates for remote servers - The tokens used to provide validation of domain ownership, and the certificates themselves can be automatically copied to remote servers (via ssh, sftp or ftp for tokens). The script doesn't need to run on the server itself. This can be useful if you don't have access to run such scripts on the server itself, e.g. if it's a shared server.
* https://github.com/srvrco/getssl
Do people really backup their https certificates? Can't you generate a new one after restoring from backup?
Edit: it’s configured under Trigger -> Outbound Probe -> “SSL Certificate Minimum Expiration Duration”
I tend to have secondary scripts that checks if the cert in certbots dir is newer than whatever is installed for a service, and if so install it. Some services prefer the cert in certain formats, some services want to be reloaded to pick up a new cert etc, so I put that glue in my own script and run it from cron or a systemd timer.
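For anyone curious, that glue can be tiny. A sketch of the "install if newer, then reload" step (paths and service name are hypothetical; certbot keeps the freshest files under /etc/letsencrypt/live/<name>/):

```python
import shutil
import subprocess
from pathlib import Path

# Hypothetical source/destination paths and service name.
live = Path("/etc/letsencrypt/live/www.example.com")
dest = Path("/etc/nginx/certs")


def newer(src: Path, dst: Path) -> bool:
    return not dst.exists() or src.stat().st_mtime > dst.stat().st_mtime


changed = False
for name in ("fullchain.pem", "privkey.pem"):
    src, dst = live / name, dest / name
    if newer(src, dst):
        shutil.copy2(src, dst)
        changed = True

if changed:
    # Reload rather than restart so existing connections survive.
    subprocess.run(["systemctl", "reload", "nginx"], check=True)
```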
Right now Let's Encrypt recommends renewing your 90d certificates every 60 days, which means that there is a 30 day window between recommended to renew and expiry. This feels relatively comfortable to me. A long vacation may be longer than 30 days but it is rare and there is probably other maintenance that you should be doing in this time (although likely routine like security updates rather than exceptional like figuring out why your certificate isn't renewing).
So if 47 days ends up meaning renew every 17 days and still have that 30-day buffer, I would be quite happy. But what I fear is that they will recommend (and set rate limits based on) renewing every 30 days with a 17-day buffer, which is getting a little short for comfort IMHO. While big organizations will have a 24h on-call and medium organizations will have many business hours to figure it out, it sucks for individuals who want to go away for a few weeks without worrying about debugging their certificate renewal until they get home.
"Therefore, the Lunar Bureau of the United Nations Peace Keeping Force DataWatch has created the LINK, the Lunar Information Network Key. There are currently nine thousand, four hundred and two Boards on Luna; new Boards must be licensed before they can rent lasercable access. Every transaction--every single transaction--which takes place in the Lunar InfoNet is keyed and tracked on an item-by-item basis. The basis of this unprecedented degree of InfoNet security is the Lunar Information Network Key. The Key is an unbreakable encryption device which the DataWatch employs to validate and track every user in the Lunar InfoNet. Webdancers attempting unauthorized access, to logic, to data, to communications facilities, will be punished to the full extent of the law."
from The Long Run (1989)
Your browser won't access a site without TLS; this is for your own protection. TLS certificates are valid for one TCP session. All certs are issued by an organization reporting directly to a national information security office; if your website isn't in compliance with all mandates, you stop getting certs.
I could have probably done more with Let's Encrypt automation to stay with my old VPS, but given that all my professional work is with AWS, it's really less mental work to drop my old VPS.
Times they are a changing
Or just pay Amazon, I guess. Easier than thinking.
It goes from a "rather nice to have" to "effectively mandatory".
Oh yes, vendors will update their legacy NAS/IPMI/whatever to include certbot. This change will have the exact opposite effect - expired self signed certificates everywhere on the most critical infrastructure.
Nope. People will create self-signed certs and tell people to just click "accept".
Dev guys think everything is solvable via code, but hardware guys know this isn't true. Hardware is stuck in fixed lifecycles and firmware is not updated by the vendors unless it has to be. And in many cases updated poorly. No hardware I've ever come across that supports SSL/TLS (and most do nowadays) offers any automation capability in updating certs. In most cases, certs are manually - and painfully - updated with esoteric CLI cantrips that require dancing while chanting to some ancient I.T. God for mercy because the process is poorly (if at all) documented and often broken. No API call or middleware is going to solve that problem unless the manufacturer puts it in. In particular, load balancers are some of the worst at cert management, and remember that not everyone uses F5 - there are tons of other cheaper and popular alternatives, most of which are atrocious at security configuration management. It's already painful enough to manage certs in an enterprise, and this 47-day lifecycle is going to break things. Hardware vendors are simply incompetent and slow to adapt to security changes. And not everyone is 100% in the cloud - most enterprises are only partially in that pool.
Perhaps the new requirements will give them additional incentives.
The larger issue is actually our desire to deprecate cipher suites so rapidly though, those 2-3 year old ASICs that are functioning well become e-waste pretty quickly when even my blog gets a Qualys “D” rating after having an “A+” rating barely a year ago.
How much time are we spending on this? The NSA is literally already in the walls.
I've been in the cert automation industry for 8 years (https://certifytheweb.com) and I do still hear of manual work going on, but the majority of stuff can be automated.
For stuff that genuinely cannot be automated (are you sure you're sure) these become monthly maintenance tasks, something cert management tools are also now starting to help with.
We're planning to add tracking tasks for manual deployments to Certify Management Hub shortly (https://docs.certifytheweb.com/docs/hub/), for those few remaining items that need manual intervention.
There are no ready-made tools available to automate such deployments. Especially if a certificate must be the same for each of the hosts, fingerprint included. Having a single, authoritative certificate for a domain and its wildcard subdomains deployed everywhere is much simpler to monitor. It does not expose internal subdomains in certificate transparency logs.
Unfortunately, the organizations (and people) involved in these decisions do not provide such tools in advance.
> The information in certificates is becoming steadily less trustworthy over time, a problem that can only be mitigated by frequently revalidating the information.
This is patently nonsensical. There is hardly any information in a certificate that matters in practice, except for the subject, the issuer, and the expiration date.
> Shorter lifetimes mitigate the effects of using potentially revoked certificates.
Sure, and if you're worried about your certificates being stolen and not being correctly revoked, then by all means, use a shorter lifetime.
But forcing shorter lifetimes on everyone won't end up being beneficial, and IMO will create a lot of pointless busywork at greater expense. Many issuers still don't support ACME.
I fairly regularly get cert expired problems because the admin is doing it as the yak shaving for a secondary hobby
Even certbot got deprecated, so my IRC network has to use some janky shell scripts to rotate TLS… I’m considering going back to traditional certs because I geo-balance the DNS which doesn’t work for letsencrypt.
The issue is actually that I have multiple domains handled multiple ways and they all need to be letsencrypt capable for it to work and generate a combined cert with SAN’s attached.
A semi-distributed (intercity) Kubernetes cluster can reasonably change its certificate chain every week, but it needs an HSM if it's done internally.
Otherwise, for a website, once or twice a year makes sense if you don't store anything snatch-worthy.
You don't say. Why are the defaults already 90 days or less then?
But on a more serious note, can someone more familiar with these standards and groups explain the scope of TLS certificate they mean for these lifetime limits?
I assume this is only server certs and not trust root and intermediate signing certs that would get such short lifetimes? It would be a mind boggling nightmare if they start requiring trust roots to be distributed and swapped out every few weeks to keep software functioning.
To my gen X internet pioneer eyes, all of these ideas seem like easily perverted steps towards some dystopian "everything is a subscription" access model...
The article notes this explicitly: the goal here is to reduce the number of online CA connections needed. Reducing certificate lifetimes is done explicitly with the goal of reducing the Web PKI's dependence on OCSP for revocation, which currently has the online behavior you're worried about here.
(There's no asymptotic benefit to extremely short-lived certificates: they'd be much harder to audit, and would be much harder to write scalable transparency schemes for. Something around a week is probably the sweet spot.)
"When they voiced objection, Captain Black replied that people who cared about security would not mind performing all the security theatre they had to. To anyone who questioned the effectiveness of the security theatre, he replied that people who really did owe allegiance to their employer would be proud to take performative actions as often as he forced them to. The more security theatre a person performed, the more secure he was; to Captain Black it was as simple as that."
There should be a single change, from 365 to 47 days. This industry doesn't need constant incremental changes; the cut alone will force everyone to automate renewals anyway.
I need to renew more frequently so there's more slack when something goes wrong, as I tend to ignore the error e-mails for multiple weeks due to fatigue from handling various kinds of certificates.
Personally I also have an HTTP mirror for my more important projects when availability is more important than security of the connection.
For example, EV certs had the green bar, which was a soft way to promote their presence/use over normal ones. That bar started out as prominent evidence in the URL box and lost that look over time.
Something like that lets the owner decide and, maybe, users push for its use because it feels more secure, rather than the CA dictating it.
They are purchased to provide encryption. Nobody checks the details of a cert and even if they did they wouldn't know what to look for in a counterfeit anyway.
This is just another gatekeeping measure to make standing up, administering, and operating private infrastructure difficult. "Just use Google / AWS / Azure instead."
There are lots of issues with trust and social and business identities in general, but for the purpose of encryption, the problem can be simplified to checking of the host name (it's effectively an out of band async check that the destination you're talking to is the same destination that independent checks saw, so you know your connection hasn't been intercepted).
You can't have effective TLS encryption without verifying some identity, because you're encrypting data with a key that you negotiate with the recipient on the other end of the connection. If someone inserts themselves into the connection during key exchange, they will get the decryption key (key exchange is cleverly done that a passive eavesdropper can't get the key, but it can't protect against an active eavesdropper — other than by verifying the active participant is "trusted" in a cryptographic sense, not in a social sense).
This is where the disconnect comes in. Me and you know that the green icon doesn't prove identity. It proves certificate validity. But that's not what this is "sold as" by the browser or the security community as a whole. I can buy the domain Wаl-Mart right now and put a certificate on it that says Wаl-Mаrt and create the conditions for that little green icon to appear. Notice that I used U+0430 instead of the letter "a" that you're used to.
And guess what... The identity would match and pass every single test you throw at it. I would get a little green icon in the browser and my certificate would be good. This attack fools even the brightest security professionals.
So you see, Identity isn't the value that people expect from a certificate. It's the encryption.
Users will allow a fake cert with a green checkmark all day. But a valid certificate with a yellow warning is going to make people stop and think.
I care that when I type walmart.com, I'm actually talking to walmart.com. I don't look at the browser bar or symbols on it. I care what my bookmarks do, what URLs I grab from history do, what my open tabs do, and what happens when I type things in.
Preventing local DNS servers from fucking with users is critical, as local DNS is the weakest link in a typical setup. They're often run by parties that must be treated as hostile - basically whenever you're on public wifi. Or hell, when I'm using my own ISP's default configuration. I don't trust Comcast to not MitM my connection, given the opportunity. I trust technical controls to make their desire to do so irrelevant.
Without the identity component, any DNS server provided by DHCP could be setting up a MitM attack against absolutely everything. With the identity component, they're restricted to DoS. That's a lot easier to detect, and gets a lot of very loud complaints.
So no, nobody will ever look at a certificate.
When I look at them, as a security professional, I usually need to rediscover where the fuck they moved the certs details again in the browser.
Identity is the only purpose that certificates serve. SSL/TLS wouldn't have needed certificates at all if the goal was purely encryption: key exchange algorithms work just fine without either side needing keys (e.g. the key related to the certificate) ahead of time.
But encryption without authentication is a Very Bad Idea, so SSL was wisely implemented from the start to require authentication of the server, hence why it was designed around using X.509 certificates. The certificates are only there to provide server authentication.
"example.com" is an identity just like "Stripe, Inc"[1]. Just because it doesn't have a drivers license or article of incorporation, doesn't mean it's not an identity.
[1] https://web.archive.org/web/20171222000208/https://stripe.ia...
>This is just another gatekeeping measure to make standing up, administering, and operating private infrastructure difficult. "Just use Google / AWS / Azure instead."
Certbot is trivial to set up yourself, and deploying it in production isn't so hard that you need to be "Google / AWS / Azure" to do it. There's plenty of IaaS/PaaS services that have letsencrypt, that are orders of magnitude smaller than those hyperscalers.
I get that there are some fringe cases where it’s not possible but for the rest - automate and forget.
But there's also security implications: https://www.hackerneue.com/item?id=43708319
Do I need to update certbot in all my servers? Or would they continue to work without the need to update?
If you can't make this happen, don't use WebPKI and use internal PKI.
It's great to be environmentally conscious, but if reducing carbon emissions is your goal, complaining about this is a lot like saying that people shouldn't run marathons, because physical activity causes humans to exhale more CO2.
We are effectively talking about the entire world wide web generating multiple highly secure cryptographic key pairs every 47 days. That is a lot of CPU cycles.
Also you not picking up on the Futurama quote is disappointing.
We aren't cracking highly secure key pairs. We're making them.
On my computer, to create a new 4096-bit key takes about a second, in a single thread. For something I now have to do fewer than 8 times per year. On a 16-core CPU with a TDP of 65 watts, we can estimate that this took 0.0011 watt-hours.
Yes, there are a lot of websites, close to a billion of them. No, this still is not some onerous use of electricity. For the whole world, this is an additional usage of a bit over 9000 kWh annually. Toss up a few solar panels and you've offset the whole planet.
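If anyone wants to sanity-check the "about a second" figure, here's a single-run timing sketch with pyca/cryptography. Note that RSA keygen time varies a lot run to run because of the prime search, and the ECDSA keys many sites actually use are generated in well under a millisecond, which makes the energy estimate even smaller.

```python
import time

from cryptography.hazmat.primitives.asymmetric import rsa

# Rough, single-run measurement on one thread; expect noticeable variance.
t0 = time.perf_counter()
rsa.generate_private_key(public_exponent=65537, key_size=4096)
print(f"RSA-4096 keygen took {time.perf_counter() - t0:.2f}s")
```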
but you think it would take a decade for the entire internet to use as much power as a single AI video?
That one AI video used about 100kWh, so about four days worth of HTTPS for the whole internet.
As for certs... maybe at the start it was hard, but it's really quite easy to host things online, with a valid certificate. There are many CDN services like Cloudflare which will handle it for you. There are also application proxies like Traefik and Caddy which will get certs for you.
Most people who want their own site today, will use Kinsta or SquareSpace or GitHub pages any one of thousands of page/site hosting services. All of whom have a system for certificates that is so easy to use, most people don't even realize it is happening.
Every single thing you mentioned is plugged in to the tier-1 surveillance brokers. I am talking plain files on single server shoved in a closet, or cheap VPS. I don't often say this but I really don't think you “get” it.
Your attitude is so dismissive to the general public. We should be encouraging people to learn the little bits they want to learn to achieve something small, and instead we are building this ivory tower all-or-nothing stack. For what, job security? Bad mindset.
CAs and web PKI are a bad joke. There's too many ways to compromise security, there's too many ways to break otherwise-valid web sites/apps/connections, there's too many organizations that can be tampered with, the whole process is too complex and bug-prone.
What Web PKI actually does, in a nutshell, is prove cryptographically that at some point in the past, there was somebody who had control of either A) an e-mail address or B) a DNS record or C) some IP space or D) some other thing, and they generated a certificate through any of these methods with one of hundreds of organizations. OR it proves that they stole the keys of such a person.
It doesn't prove that who you're communicating with right now is who they say they are. It only proves that it's someone who, at some point, got privileged access to something relating to a domain.
That's not what we actually want. What we actually want is to be assured this remote host we're talking to now is genuine, and to keep our communication secret and safe. There are other ways to do that, that aren't as convoluted and vulnerable as the above. We don't have to twist ourselves into all these knots.
I'm hopeful changes like these will result in a gradual catastrophy which will push industry to actually adopt simpler, saner, more secure solutions. I've proposed one years ago but nobody cares because I'm just some guy on the internet and not a company with a big name. Nothing will change until the people with all the money and power make it happen, and they don't give a shit.
Everyone in the CA/B should be fired from their respective employers, and we honestly need to wholesale plan to dump PKI by 2029 if we can't get a resolution to this.
It's really not that hard to automate renewals and monitor a system's certificate status from a different system, just in case the automation breaks and for things that require manual renewal steps.
I get that it's harder in large organisations and that not everything can be automated yet, but you still have a year before the certificate lifetime goes down to 200 days, which IMO is pretty conservative.
With a known timeline like this, customers/employees have ammunition to push their vendors/employers to invest into automation and monitoring.
None of the platforms which I deal with will likely magically support automated renewal in the next year. I will likely spend most of the next year reducing our exposure to PKI.
Smaller organizations dependent on off the shelf software will be killed by this. They'll probably be forced to move things to the waiting arms of the Big Tech cloud providers that voted for this. (Shocker.) And it probably won't help stop the bleeding.
And again, there's no real world security benefit. Nobody in the CA/B has ever discussed real world examples of threats this solves. Just increasingly niche theoretical ones. In a zero cost situation, improving theoretical security is good, but in a situation like this where the cost is real fragility to the Internet ecosystem, decisions like this need to be justified.
Unfortunately the CA/B is essentially unchecked power, no individual corporate member is going to fire their representatives for this, much less is there a way to remove everyone that made this incredibly harmful decision.
This is a group of people who have hammers and think everything is a nail, and unfortunately, that includes a lot of ceramic and glass.
This will be painful for people in the short term, but in the long term I believe it will make things more automated, more secure, and less fragile.
Browsers are the ones pushing for this change. They wouldn't do it if they thought it would cause people to see more expired certificate warnings.
> Unfortunately the CA/B is essentially unchecked power, no individual corporate member is going to fire their representatives for this, much less is there a way to remove everyone that made this incredibly harmful decision.
Representatives are not voting against the wishes/instructions of their employer.
Unfortunately the problem is likely too removed from understanding for employers to care. Google and Microsoft do not realize how damaging the CA/B is, and probably take the word of their CA/B representatives that the choices that they are making are necessary and good.
I doubt Satya Nadella even knows what the CA/B is, much less that he pays an employee full-time to directly #### over his entire customer base and that this employee has nearly god-level control over the Internet. I have yet to see an announcement from the CA/B that represented a competent decision that reflected the reality of the security industry and business needs, and yet... nobody can get in trouble for it!
The ballot is nothing but expected.
The whole industry has been moving in this direction for the last decade,
so there is not much to say,
except that if you waited until the last moment, well, you will have to be in a hurry. (Non-)actions have consequences :)
I'm glad about this decision because it'll hammer down a bit on those resisting, those who still have a human perform the yearly renewal. Let's see how stupid it can get.
Are the security benefits really worth making anything with a valid TLS certificate stop working if it is air-gapped or offline for 48 days?
> CAs and certificate consumers (browsers) voted in favour of this change. They didn't do this because they're incompetent but because they think it'll improve security.
They're not incompetent and they're not "evil", and this change does improve some things. But the companies behind the top level CA ecosystem have their own interests which might not always align with those of end users.
CAs have now implemented MPIC. This may have thwarted some attacks, but those attackers still hold valid certificates today and can request a new certificate without any domain control validation having been performed in over a year.
BGP hijackings have been uncovered in the last 5 years, and MPIC does make them more difficult. https://en.wikipedia.org/wiki/BGP_hijacking
New security standards should come into effect much faster, both for fixes against attacks we know about today and for new ones that are discovered and mitigated in the future.
"The goal is to minimize risks from outdated certificate data, deprecated cryptographic algorithms, and prolonged exposure to compromised credentials. It also encourages companies and developers to utilize automation to renew and rotate TLS certificates, making it less likely that sites will be running on expired certificates."
I'm not even sure what "outdated certificate data" could be. The browser by default won't negotiate a connection with an expired certificate.
Agree.
> According to the article:
Thanks, I did read that; it's not quite what I meant, though. Suppose a security engineer at your company proposes that users should change their passwords every 49 days to "minimise prolonged exposure from compromised credentials" and to encourage the uptake of password managers and passkeys.
How to respond to that? It seems a noble endeavour. To prioritise, you would want to know (at least):
a) What are the benefits - not mom & apple pie and the virtues of purity, but brass tacks - e.g. how many account compromises do you believe would be prevented by this change, and what is the annual cost of those? How is that trending?
b) What are the cons? What's going to be the impact of this change on our customers? How will this affect our support costs? User retention?
I think I would have a harder time trying to justify the cert lifetime proposal than the "ridiculously frequent password changes" proposal. Sure, it's more hygienic, but I can't easily point to any major compromises in the past 5 years that would have been prevented by shorter certificate lifetimes. Whereas I could at least handwave in the direction of users who got "password stuffed" to justify ridiculously frequent password changes.
The analogy breaks down in a bad way when it comes to evaluating the cons. The groups proposing to decrease cert lifetimes bear nearly none of the costs of the proposal, for them it is externalised. They also have little to no interest in use cases that don't involve "big cloud" because those don't make them any money.
No, they did it because it reduces their legal exposure. Nothing more, nothing less.
The goal is to get the rotation time low enough that certificates expire and rotate before legal procedures to stop the CA from revoking them can kick in.
This does very little to improve security.
Lowering the lifetime of certs does mean that orgs will be better prepared to replace bad certs when they occur. That's a good thing.
More organisations will now take the time to configure ACME clients instead of trying to convince CAs that they're too special to have their certs revoked, or even starting embarrassing court cases, which has only happened once as far as I know.
Theories that involve CAs, Google, Microsoft, Apple, and Mozilla having ulterior motives and not considering potential downsides of this change are silly.
And also, it probably won't avoid problems. Yes, the goal is automation, but a couple of weeks ago I was trying to access a site from an extremely large infrastructure security company which rotates their certificates every 24 hours. Their site was broken, and the subreddit about the company was full of complaints about it. Turns out automated daily rotation just means 365 more opportunities for breakage a year.
Even regular processes break, and now we're multiplying the breaking points... and again, at no real security benefit. There’s like... never ever been a case where a certificate leak caused a breach.
This is fundamentally a skill issue. If a human can replace the certificate, so can a machine. Write a script.
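As a rough illustration, a minimal sketch of such a script, assuming Python 3 with the third-party cryptography package; the cert path, renewal command, and reload command below are hypothetical placeholders for whatever your platform actually uses:

    import datetime
    import subprocess
    from pathlib import Path

    from cryptography import x509  # third-party: pip install cryptography

    CERT_PATH = Path("/etc/ssl/certs/example.pem")  # hypothetical cert location
    RENEW_CMD = ["/usr/local/bin/renew-cert"]       # placeholder for your renewal step
    RELOAD_CMD = ["systemctl", "reload", "nginx"]   # placeholder for your service reload
    THRESHOLD_DAYS = 14

    def days_left(cert_path: Path) -> float:
        """Return the days remaining before the certificate on disk expires."""
        cert = x509.load_pem_x509_certificate(cert_path.read_bytes())
        remaining = cert.not_valid_after - datetime.datetime.utcnow()  # both naive UTC
        return remaining.total_seconds() / 86400

    if __name__ == "__main__":
        if days_left(CERT_PATH) < THRESHOLD_DAYS:
            subprocess.run(RENEW_CMD, check=True)   # however your org obtains a new cert
            subprocess.run(RELOAD_CMD, check=True)  # reload the service to pick it up

Hang that off cron or a systemd timer and the "human replaces the certificate" step disappears.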
Now? It's a spaghetti of politics and emotional warfare. Grown adults who can't handle being told that they might not be up to the task and it's time to part ways. If that's the honest truth, it's not "mean," just not what that person would like to hear.
Keep up the good work! ;-)
perverse incentives indeed.