Let’s Encrypt will be reducing the validity period of the certificates we issue. We currently issue certificates valid for 90 days, which will be cut in half to 45 days by 2028.
This change is being made along with the rest of the industry, as required by the CA/Browser Forum Baseline Requirements, which set the technical requirements that we must follow. All publicly-trusted Certificate Authorities like Let’s Encrypt will be making similar changes. Reducing how long certificates are valid for helps improve the security of the internet, by limiting the scope of compromise, and making certificate revocation technologies more efficient.
I still think the web would have been better off if certificates were signed and part of a web of trust like in GPG/PGP. It wouldn’t stop sites from using trusted CAs to increase their trust levels with browsers, but it would mean that tiny websites wouldn’t need to go through layers of mandatory bullshit and inconvenience. Also means that key signers could have meaningful business relationships rather than being some random CA that nobody has a clue about.
I’m sorry but if you aren’t using automated renewals then you are not using let’s encrypt the way it’s intended to be used. You should take this as an opportunity to get that set up.
Ours is automated, but we incur downtime on the renewal because our org forbids plain http so we have to do TLS-ALPN-01. It is a short downtime. I wish let’s encrypt would just allow http challenges over https while skipping the cert validation. It’s nuts that we have to meaningfully reply over 80…
Though I also think it’s nuts that we aren’t allowed to even send a redirect over 80…
Can’t use DNS?
The same screwed up IT that doesn’t let us do HTTP-01 challenges also doesn’t let us do DNS except through some bs webform, and TXT records are not even vaguely in their world.
It sucks when you are stuck under a dumb broader IT organization…
Yikes. I feel for you man.
I’ve got it set up and automated on all my external domains, but trying to automate it on my internal-only domain is rather tedious, since not only do I NOT want to open a port for it to confirm, but I also have 2 other devices/services on the network, not behind my primary reverse proxy, that share the same cert.
What I need to do is set up my own custom cron job that hits the hosting provider to update the DNS TXT entries. Then I need to have it write out the new cert and restart the services that use it. I’ve tried to automate this once before and it did not go so smoothly, so I’ve been hesitant about wasting time to try it again… But maybe it’s time to.
What would be ideal is if I could allow it to be automated just by getting a one time dns approval and storing a local private/public key to prove to them that I’m the owner of the domain or something. Not aware of this being possible though.
Depends on which DNS service you are using, a plugin might already exist that would do it for you. e.g. I use cloudflare for DNS and certbot is able to automatically set the txt record.
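Roughly like this, a minimal sketch assuming the dns-cloudflare certbot plugin is installed (`python3-certbot-dns-cloudflare` on Debian/Ubuntu) and you have a Cloudflare API token scoped to DNS edits; the token and domain are placeholders:

```shell
# Minimal sketch: certbot's Cloudflare DNS plugin answers the DNS-01
# challenge by creating the TXT record for you. Token and domain are
# placeholders.
mkdir -p ~/.secrets
cat > ~/.secrets/cloudflare.ini <<'EOF'
dns_cloudflare_api_token = YOUR_TOKEN_HERE
EOF
chmod 600 ~/.secrets/cloudflare.ini   # keep credentials private

# DNS-01 challenge: no inbound port needed, and wildcards work too.
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials ~/.secrets/cloudflare.ini \
  -d example.com -d '*.example.com' \
  || echo "issuance skipped (certbot/plugin not available here)"
```

After the first issuance, certbot's standard timer renews it with the same plugin settings, no further interaction needed.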
Technically my renewals aren’t automated. I have a nightly cronjob that should renew certificates and restart services, but when the certificates need renewal, it always fails because it wants to open a port I’m already using in order to answer the challenge.
I hear there’s an apache module / configuration I can use, but I never got around to setting it up. So, when the cron job fails, I get an email and go run a script that stops apache, renews certs, and restarts services (including apache). It will be a bit annoying to have to do that more often, but maybe it’ll help motivate me to configure apache (or whatever) correctly.
Debian Stable
You could try using the DNS challenge instead; I find it a lot more convenient as not all my services are exposed.
I’m using automated renewals.
But, that just means there’s a new cert file on disk. Now I have to convince a half a dozen different apps to properly reload that changed cert. That means fighting with Systemd. So Systemd has won the first few skirmishes, and I haven’t had the time or energy to counterattack. Now instead of having to manually poke at it 4x per year, it’s going to be closer to once a month. Ugh.
Half a dozen sounds like a lot, kinda curious what you are running? If they all are web services maybe use a reverse proxy or something?
You could try a path unit watching the cert directory (there are caveats around watching the symlinks directly) or most acme implementations have post renewal hooks you can use which would be more reliable.
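With certbot, for example, anything executable dropped into its deploy-hooks directory runs after each successful renewal, which is the natural place to poke systemd. A sketch (the service names are examples, and the real hook directory is /etc/letsencrypt/renewal-hooks/deploy/; a stand-in path is used here so the snippet runs without root):

```shell
# Sketch of a certbot deploy hook. Real location:
# /etc/letsencrypt/renewal-hooks/deploy/ (stand-in dir used here so
# this runs without root). Service names are examples.
HOOK_DIR="${HOOK_DIR:-/tmp/renewal-hooks/deploy}"
mkdir -p "$HOOK_DIR"

cat > "$HOOK_DIR/reload-services.sh" <<'EOF'
#!/bin/sh
# certbot runs this after every successful renewal.
# Prefer reload over restart so there's no downtime.
systemctl reload nginx
systemctl try-restart prosody   # for daemons that can't reload certs in place
EOF
chmod +x "$HOOK_DIR/reload-services.sh"
```

The nice part is the hook only fires when a renewal actually happened, so there's no pointless service churn on the nightly no-op runs.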
Don’t worry, they’ll sell you new software for another $50.00/m/certificate to help with the new certificate fiddling you now have to do monthly. It didn’t make sense for them to release it until they pushed through the 45 day window change through backchannels.
While I agree for my personal use, it’s not so easy in an enterprise environment. I’m currently working to get services migrated OFF my servers that utilize public certificates to avoid the headache of manual intervention every 45 days.
While this is possible for servers and services I manage, it’s not so easy for other software stacks we have in our environment. Thankfully I don’t manage them, but I’m sure I’ll be pulled into them at some point or another to help figure out the best path forward.
The easy path is obviously a load balanced front-end to load the certificate, but many of these services are specialized and have very elaborate ways to bind certificates to services outside of IIS or Apache, which would need to trust the newly issued load balancer CA certificate every 47 days.
Yeah, this has become an issue for us at work as well.
Currently we are doing a POC for an in-house developed solution where an Azure Function app handles the renewal of certificates for any domain we have, both wildcard and named, and places the certificates in a key vault where services that need them can get access.
Looks to be working, so the main issue now is finding a non-US certificate provider that supports ACME. The EU has some, but more locally there aren’t many options.
minor panic, oh, “2028”.
Might as well adjust the setting now. I had that same feeling for something they changed several years ago and never got around to changing it til all my stuff went down lol.
yeah good advice
Just skip to the point and make it 1 day
the whole point is to not break the internet. slow is fine
I’m trying to think of the last time I heard news about something to do with the internet getting better instead of worse, and I’m genuinely coming up blank.
Automated certificates are relatively new and pretty neat. Killing off the certificate cartels is an added bonus.
Wait, how’s this worse? This makes the Internet safer by reducing the window a leaked key can do harm.
Let’s all just start self signing in protest
Reducing the validity timespan will not solve the problem, it only reduces the risk. And how big is that risk really? I’m an amateur and would love to see some real malicious case descriptions that would have been avoided had the certificate been revoked earlier…
Anybody have some pointers?
Terminology: revoked means the issuer of the certificate has decided that the certificate should not be trusted anymore even though it is still valid.
If an attacker gets access to a certificate’s key, they can impersonate the server until the cert’s validity period runs out or it is revoked by the CA. However… revocation doesn’t work. The revocation lists aren’t checked by most clients, so a stolen cert will potentially be accepted for a very long time.
The second argument for shorter certs is adoption of new technology, so certs using bad cryptographic algorithms are phased out quicker.
And the third argument is: if the validity is so short that you don’t want to change the certs manually and automate the process instead, you can never forget and let your certs expire.
We will probably get to a point of single-day certs or even one cert per connection eventually, and every step will be safer than before (until we get to single-use certs, which will probably fuck over privacy)
No, but I have a link showing how ISPs and CAs colluded to do a MITM https://notes.valdikss.org.ru/jabber.ru-mitm/
Shorter cert lifespan would not prevent this.
It really just helps in cases where you get hacked, but the hacker doesn’t have continued access. Say someone physically penetrates into your building, grabs the key through an unlocked station, and leaves.
That being said, like you mentioned, if someone is going through this effort, 45 days vs 90 days likely won’t matter. They’ll probably have the data they need after a week anyways.
Encryption key theft really requires a secondary attack afterwards to get the encrypted data by getting into the middle and either decrypting or redirecting traffic. It’s very much a state level/high-corporate attack, not some random group trying to make a few bucks.
I’ve been dreading this switch for months (I still am, but I have been, too!) considering this year and next year will each double the amount of cert work my team has to do. But, I’m hopeful that the automation work I’m doing will pay off in the long run.
Are you not using LE certbot to handle renewals? I can’t even imagine doing this manually.
Personally, yes. Everything is behind NPM and SSL cert management is handled by certbot.
Professionally? LOL NO. Shit is manual and usually relegated to overnight staff. Been working on getting to the point where it is automated though, but too many bespoke apps for anyone to have cared enough to automate the process before me.
I’m in the same boat here. I keep sounding the alarm and am making moves so that MY systems won’t be impacted, but it’s not holding water with the other people I work with and the systems they manage. I’m torn between manual intervention to get it started or just letting them deal with it themselves once we hit 45 day renewal periods.
Can you not just set up an nginx reverse proxy at the network edge to handle the SSL for the domain(s), and not have to worry about the app itself being set up for it? That’s how I’ve always managed all software, personal or professional.
Unfortunately some apps require the certificate be bound to the internal application, and need to be done so through cli or other methods not easily automated. We could front load over reverse proxy but we would still need to take the proxy cert and bind to the internal service for communication to work properly. Thankfully that’s for my other team to figure out as I already have a migration plan for systems I manage.
Why can’t you just have a long lived internally signed cert on your archaic apps and LE at the edge on a modern proxy? It’s easy enough to have the proxy trust the internal cert and connect to your backend service that shouldn’t know the difference if there’s a proxy or not.
Or is your problem client side?
That’s actually a really good idea. I’m not the person you replied to, but I’m taking notes.
One reason for the short certs is to push faster adoption of new technology. Yes that’s about new cryptography in the certs but if you still change all your certs by hand maybe you need to be forced …
Luckily I am using only traefik, and everything that needs certs goes through it.
Can’t imagine how annoying it would be to interface with every equipment so there are no https errors…
So what’s the floor here realistically, are they going to lower it to 30 days, then 14, then 2, then 1? Will we need to log in every morning and expect to refresh every damn site cert we connect to soon?
It is ignoring the elephant in the room – the central root CA system. What if that is ever compromised?
Certificate pinning was a good idea IMO, giving end-users control over trust without these top-down mandated cert update schedules. Don’t get me wrong, LetsEncrypt has done and is doing a great service within the current infrastructure we have, but …
I kind of wish we could just partition the entire internet into the current “commercial public internet” and a new (old, redux) “hobbyist private internet” where we didn’t have to assume every single god-damned connection was a hostile entity. I miss the camaraderie, the shared vibe, the trust. Yeah I’m old.
Is this the same trust that would infect a box in under a minute if not behind a router?
The same trust of needing to scan anything you downloaded for script kiddie grade backdoors?
Zero click ActiveX / js exploits?
Man I’m probably the same age and those are some intense rose colored glasses 😅
Oh, definitely rose-coloured, but I am thinking even before those days… like when access to Usenet was restricted to colleges and universities, dial-up BBSes … and I didn’t use Windows or MacOS at all back then. ActiveX and js didn’t even exist back then. Boot-sector floppy viruses did, but those were easy to guard against.
Ah yeah, those were interesting times. (Although there were some historically interesting viruses back in the day for those floppies too)
Fond memories though. Learning basic on a cartridge… Using literal cassettes for storage. That horrifying sound of a 5" floppy drive struggling to read that file you really needed. Good times.
Generally speaking that was probably what most of us would identify as pre internet times - but usenet / BBS / and early internet and prior definitely was more bright eyed and optimistic. Probably because it was more about learning and tech and less about monetizing every square inch of your existence 😂
You can already get 6-day certificates if you want to https://letsencrypt.org/2025/01/16/6-day-and-ip-certs
Will we need to log in every morning and expect to refresh every damn site cert we connect to soon?
Automate your certificate renewals. You should be automating updates for security anyway.
That’s a lot easier said than done for hobbyists that need a certificate for their home server. I will give you a real world example. I run Ubuntu Linux (but without snaps) on my main desktop machine; however, like the person you replied to, I am old and I don’t have a good memory, so when I do use Linux I try to take the easiest approach possible. But I also have a server running on a Raspberry Pi, and another family member (that has a Mac) that I exchange XMPP-based instant messages with. The server runs Prosody, and on my Ubuntu box I run Gajim (the one from the apt repository, which is version 1.8.4; I have no idea why they won’t put a newer version in the repo). The other family member uses some MacOS-based XMPP client. The problem is that if there is not a valid certificate on the server, Gajim refuses to send or receive anything other than plain-text messages. It won’t send or receive files or pictures, etc. unless the certificate is valid.
However the Raspberry Pi does other things as well (it would be silly to dedicate a Pi to just running Prosody) and one of those other things puts a pseudo-web server of sorts on port 80, which is only accessible from the local network. So I can’t use Certbot because it insists on being able to connect to a web server. Even if I had a general web server on the Pi, which I don’t have and don’t want, it would be restricted for local access only. Also, I’m not paying for a DNS address for my own home server. What I found I could do is get a DuckDNS address (they are free) and use that to get a LE certificate. But the procedure is very manual and kind of convoluted, you have to ssh into the server using two separate sessions and enter some information in each one, because of the absolutely asinine way LE’s renewal process works if you don’t have a web server. I hate doing it every 90 days and if I have to do it every 45 days I’ll probably just give up on sending and receiving files.
I should also mention that it took me hours to figure out the procedure I am using now, and it seems so stupid because I have that server locked down with two firewalls (one on the router and then iptables on the server). I don’t even want a certificate, but the designers of Gajim in their infinite wisdom(?) decided not to give users the option to in effect say “I trust this server, just ignore an expired or missing certificate.” And the designers of LE never seemed to consider that some people who are not running a web server (and don’t want to run one) might need a certificate, and provide some automatic mechanism for renewing in that situation. And just because someone uses Linux does not mean we are all programmers or expert script writers. I can follow “cookbook” type instructions (that is the ONLY way I got Prosody set up) but I can’t write a script or program to automate this process (again, I’m OLD).
I know somebody’s going to be tempted to say I should use some other software (other than Prosody or Gajim). I tried other IM clients and Gajim is the only one that works the way I expect it to. As for Prosody, I have from time to time tried setting up other XMPP servers that people have suggested and could never get any of them to work. As I said, had I not found “cookbook” type instructions for setting up Prosody I would probably not be running that either; it was a PITA to get working, but now that it IS working I don’t want to go through that again. And Prosody isn’t the problem, it works perfectly fine without a valid certificate, but pretty much every Linux IM client I have tried either loses functionality or won’t work at all if the server doesn’t have a valid certificate. And no I don’t run or use Docker, nor do I have any desire to (especially on a Raspberry Pi).
EDIT: After giving this some thought I decided look further into this, and discovered that while Certbot can’t handle this, it’s possible that a script called acme.sh can. See https://github.com/acmesh-official/acme.sh?tab=readme-ov-file (also https://github.com/acmesh-official/acme.sh?tab=readme-ov-file#8-automatic-dns-api-integration - may need to scroll up just a bit, the pertinent item is “8. Automatic DNS API integration”). I haven’t tried it yet (just manually renewed yesterday) but it looks promising if I can figure it out. Thought I’d post the links for anyone else that might be in the same situation.
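Haven’t run this myself, but going by acme.sh’s documented dns_duckdns provider, the whole thing should reduce to something like this (untested sketch; the token, hostname, and install paths are all placeholders):

```shell
# Untested sketch: acme.sh DNS-01 via its documented dns_duckdns
# provider. Token, hostname, and install paths are placeholders.
command -v acme.sh >/dev/null || { echo "acme.sh not installed; sketch only"; exit 0; }

export DuckDNS_Token="your-duckdns-token"

# Publishes the TXT record through DuckDNS's API, answers the
# challenge, then cleans up. No web server or open port involved.
acme.sh --issue --dns dns_duckdns -d myhome.duckdns.org

# Drop the cert where Prosody expects it and reload on every renewal:
acme.sh --install-cert -d myhome.duckdns.org \
  --key-file       /etc/prosody/certs/myhome.duckdns.org.key \
  --fullchain-file /etc/prosody/certs/myhome.duckdns.org.crt \
  --reloadcmd      "systemctl reload prosody"
```

acme.sh also installs a cron entry for itself on setup, so once this works it should keep renewing unattended, no more dual ssh sessions.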
This is one of the reasons they’re reducing the validity - to try and convince people to automate the renewal process.
That and there’s issues with the current revocation process (for incorrectly issued certificates, or certificates where the private key was leaked or stored insecurely), and the most effective way to reduce the risk is to reduce how long any one certificate can be valid for.
A leaked key is far less useful if it’s only valid for 47 days from issuance, compared to three years. (note that the max duration was reduced from 3 years to 398 days earlier this year).
From https://www.digicert.com/blog/tls-certificate-lifetimes-will-officially-reduce-to-47-days:
In the ballot, Apple makes many arguments in favor of the moves, one of which is most worth calling out. They state that the CA/B Forum has been telling the world for years, by steadily shortening maximum lifetimes, that automation is essentially mandatory for effective certificate lifecycle management.
The ballot argues that shorter lifetimes are necessary for many reasons, the most prominent being this: The information in certificates is becoming steadily less trustworthy over time, a problem that can only be mitigated by frequently revalidating the information.
The ballot also argues that the revocation system using CRLs and OCSP is unreliable. Indeed, browsers often ignore these features. The ballot has a long section on the failings of the certificate revocation system. Shorter lifetimes mitigate the effects of using potentially revoked certificates. In 2023, CA/B Forum took this philosophy to another level by approving short-lived certificates, which expire within 7 days, and which do not require CRL or OCSP support.
note that the max duration was reduced from 3 years to 398 days earlier this year)
Oh… Oops. Hahaha
But can you imagine the load on their servers should it come to this? And god forbid it goes down for a few hours and every person in the world is facing SSL errors because Let’s Encrypt can’t create new ones.
This continued shortening of lifespans on these certs is untenable at best. Personally I have never run into a situation where a cert was stolen or compromised, but obviously that doesn’t mean it doesn’t happen. I also feel like this is meant to force automation of all cert production, which is nice if you can do it. Right now, at my job, all cert creation requires manually generating a CSR, submitting it to a website, waiting for manager approval, and then waiting for creation. Then go download the cert and install it manually.
If I have to do this everyday for all my certs I’m not going to be happy. Yes this should be automated and central IT is supposed to be working on it but I’m not holding my breath.
The current automation guidelines and defaults renew certs 30 days from expiry. So even today certs aren’t around for more than 60 days, it’s just that they’re valid for 90.
Additionally you can fairly easily monitor certs to get an alert if you drop below the 30 day threshold and automatic cert renewal hasn’t taken place.
I use Grafana self hosted for this with their synthetic monitoring free tier but it would be relatively trivial to roll your own Prometheus-exporter to do the same.
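Even without a monitoring stack, a cron’d openssl check covers the basics. A self-contained sketch (GNU date assumed; it inspects a throwaway cert so it can run anywhere, but in practice you’d point it at your real cert file or at an `openssl s_client -connect host:443` connection):

```shell
# Self-contained sketch: create a throwaway 20-day cert to inspect,
# then compute the days remaining. In practice, point this at your
# real cert file or an `openssl s_client` connection instead.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example.test" \
  -days 20 -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null

end_date=$(openssl x509 -enddate -noout -in /tmp/demo.crt | cut -d= -f2)
days_left=$(( ($(date -d "$end_date" +%s) - $(date +%s)) / 86400 ))
echo "days left: $days_left"

# Alert below the usual 30-day renewal threshold (wire this to mail,
# ntfy, etc. from cron):
[ "$days_left" -lt 30 ] && echo "ALERT: renewal overdue or failing"
```

The point is the same as the Grafana setup: you don’t alert on renewal success, you alert on remaining validity dropping below the threshold, which catches every failure mode at once.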
I doubt they will drop below 1-2 weeks. Any service outage would turn into a ddos when service was restored.
This is typically called a thundering herd
That was a rabbit hole. I never heard the term prior to this. Pretty interesting.
Yep. Kinda what I was thinking.
The entire renewal process is fairly cheap, resource wise. 7 day certificates are already a thing.
In terms of bandwidth you could easily renew a billion certificates a day over a gigabit connection, and in terms of performance I reckon even without specialized hardware a single system could keep up with that, though that also depends on the signature algorithms employed in the future, of course. The dependence on these servers is the far bigger problem, I’d say.
This shortening of lifetimes is a slow change, so I hope there will be solutions before it becomes an issue. Like keeping multiple copies of certificates alive with different providers, so the one in use can silently fail over when a provider stops working. Currently there are too few providers for my taste; that would have to improve for such a system to be viable. Maybe one day you’ll select a bundle of 5 certificate services with similar policies for creating your certificate, the way you currently select a single one in certbot or acme.sh
Partition the internet… Like during the Morris worm of '88, where they had to pull off regional networks to prevent the machines from being reinfected?
The good old days were, maybe, not that good. :)
Well that could be considered the point where we lost our innocence, yeah. :(
Imagine trying to do this (excluding China, Russia and some Middle Eastern or African countries) in the western world.
I would assume total anarchy (especially in the stock trade lol)
So what’s the floor here realistically, are they going to lower it to 30 days, then 14, then 2, then 1?
LE is beta-testing a 7-day validity, IIRC.
Will we need to log in every morning and expect to refresh every damn site cert we connect to soon?
No, those are expected or even required to be automated.
7-day validity is great because they’re exempt from OCSP and CRL. Let’s Encrypt is actually trying 6-day validity, not 7: https://letsencrypt.org/2025/01/16/6-day-and-ip-certs
Another feature Let’s Encrypt is adding along with this is IP certificates, where you can add an IP address as an alternate name for a certificate.
Ah, well. I only remembered something about a week.
Will we need to log in every morning and expect to refresh every damn site cert we connect to soon?
Certbot by default checks twice a day if it’s old enough to be due for a renewal… So a change from 90 to 1 day will in practice make no difference already…
Good point. On that note I am very happy having moved my home server from Apache to Caddy. The auto cert config is very nice.
The current plan is for the floor to be 47 days. https://www.digicert.com/blog/tls-certificate-lifetimes-will-officially-reduce-to-47-days, and this is not until 2029 in order to give people sufficient time to adjust. Of course, individual certificate authorities can choose to have lower validity periods than 47 days if they want to.
Essentially, the goal is for everyone to automatically renew the certificates once per month, but include some buffer time in case of issues.
where we didn’t have to assume every single god-damned connection was a hostile entity
But you always did, it was always being abused, regularly. That’s WHY we now use secure connections.
I think I’m just not picking up whether you’re actually trying to pitch a technical solution, or just wishing for a perfect world without crime.
More the latter :) … if only we could all just get along and be nicer to each other. Sigh.
Fair enough lol, can’t argue with that.
The best approach for securing our CA system is the “certificate transparency log”. All issued certificates must be stored in a separate, public location. Browsers do not accept certificates that are not there.
This makes it impossible for malicious actors to silently create certificates. They would leave traces.
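You can see this yourself: crt.sh fronts the public CT logs with a simple query interface, so checking everything ever issued for a domain is one request. A sketch (needs network access, hence the graceful bail-out; the domain is an example):

```shell
# Sketch: query the CT logs for everything issued for a name via
# crt.sh's JSON endpoint. Needs network, so bail out gracefully offline.
curl -fsS --max-time 15 'https://crt.sh/?q=example.com&output=json' \
  -o /tmp/ct.json \
  || { echo "no network; skipping CT lookup"; exit 0; }

# Each entry records the issuer, names, and validity window. A cert
# silently misissued for your domain would show up right here.
head -c 400 /tmp/ct.json; echo
```

Running that periodically against your own domains is a cheap way to spot a misissuance before anyone else tells you about it.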
Isn’t this just CRL in reverse? And CRL sucks or we wouldn’t be having this discussion. Part of the point of cryptographically signing a cert is so you don’t have to do this if you trust the issuer.
Cryptography already makes it infeasible for a malicious actor to create a fake cert. The much more common attack vector is having a legitimate cert’s private key compromised.
No, these are completely separate issues.
- CRL: protect against certificates that have their private key compromised
- CT: protect against incompetent or malicious Certificate Authorities.
This is just one example of why we have certificate transparency. Revocation wouldn’t be useful if it weren’t even known which certificates need revocation.
The National Informatics Centre (NIC) of India, a subordinate CA of the Indian Controller of Certifying Authorities (India CCA), issues rogue certificates for Google and Yahoo domains. NIC claims that their issuance process was compromised and that only four certificates were misissued. However, Google is aware of misissued certificates not reported by NIC, so it can only be assumed that the scope of the breach is unknown.
Or, more likely, a rogue certificate authority giving out certs it shouldn’t.
This seems like a good idea.
The only disadvantage I see is that all my personal subdomains (e.g. immich.name.com and jellyfin) are forever stored in a public location. I wouldn’t call it a privacy nightmare, yet it isn’t optimal.
There are two workarounds:
- do not use public certificates
- use wildcard certificates only
But how do you automate wildcard certificate generation? That requires changing a TXT record, and Namecheap for instance has no mechanism for that to happen automatically on certbot action
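For what it’s worth, acme.sh lists a Namecheap provider in its DNS API integrations, so the TXT juggling may already be automatable. An untested sketch based on that provider’s documented variables (Namecheap’s API has to be enabled and your source IP whitelisted first; every value is a placeholder):

```shell
# Untested sketch: wildcard issuance with acme.sh's dns_namecheap
# provider. Namecheap's API must be enabled and your source IP
# whitelisted; every value below is a placeholder.
command -v acme.sh >/dev/null || { echo "acme.sh not installed; sketch only"; exit 0; }

export NAMECHEAP_USERNAME="your-username"
export NAMECHEAP_API_KEY="your-api-key"
export NAMECHEAP_SOURCEIP="203.0.113.10"   # the IP you whitelisted

# One wildcard cert covers immich.example.com, jellyfin.example.com,
# etc. without each subdomain landing in the public CT logs.
acme.sh --issue --dns dns_namecheap -d example.com -d '*.example.com'
```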
Doesn’t caddy support that (Namecheap TXT modification) via a plugin?
I haven’t tried it yet, but the plugin made it sound possible. I’m planning to automate on next expiration… When I get to it ;)
I did already compile caddy with the plugin, just haven’t generated my Namecheap token and tested.
“When I get to it” is my time frame as well; till then it’s a recurring calendar notification with instructions, because past me who set this all up was a genius compared to sleep-deprived current me
There are some nameserver providers that have an API.
When you register a domain, you can choose which nameserver you like. There are nameservers that work with certbot, choose one that does.
Namecheap supports this according to docs. I just haven’t tested yet.
Not exactly what you mean because there are also bad actors but take a look at i2p, in some ways it feels like an retro internet.
Seeing as most root CAs are stored offline, compromising a server that’s turned off is not really possible.
I’m more annoyed that I have 10 year old gear that doesn’t have automation for this.
Oh, I’m really just pining for the days before the ‘Eternal September’, I suppose. We can’t go back, I know. :/
Signing (intermediate) certs have been compromised before. That means a bad actor can issue fake certs that validate all the way up to your root CA certs
While you can invalidate that signing cert, without useful and ubiquitous revocation lists, there’s nothing you can do to propagate that.
A compromised signing cert effectively means invalidating the CA cert to limit the damage
It’s the “change your password often odyssey” 2.0. If it is safe, it is safe, it doesn’t become unsafe after an arbitrary period of time (if the admin takes care and revokes compromised certs). If it is unsafe by design, the design flaw should be fixed, no?
Or am I missing the point?
The point is, if the certificate gets stolen, there’s no GOOD mechanism for marking it bad.
If your password gets stolen, only two entities need to be told it’s invalid. You and the website the password is for.
If an SSL certificate is stolen, everyone who would potentially use the website need to know, and they need to know before they try to contact the website. SSL certificate revocation is a very difficult communication problem, and it’s mostly ignored by browsers because of the major performance issues it brings having to double check SSL certs with a third party.
The point is, if the certificate gets stolen, there’s no GOOD mechanism for marking it bad.
That’s what OCSP is for. Only Google isn’t playing along as per that wiki entry.
I mean, are you intending to retroactively add OCSP to every tool that has implemented SSL in the past few decades?…
Browsers aren’t the only things that speak SSL.
Then there’s the older way of checking CRLs which any tool of the past few decades should support.
That’s what Carla are for.
Looks like autoincorrect did a s/CRLs/Carla/ for you.
And that somehow Lemmy didn’t federate my deletion!

How did you reply to a deleted comment?
Probably the comment has federated to lemmy.world, but the deletion of the comment hasn’t yet.
But browsers have a marker for dangerous sites - surely Cloudflare, Amazon or Google should have a report system and deliver warnings at the base
Browsers are only a (large) fraction of SSL traffic.
So is there an example of SSL certs being stolen and used nefariously? The only thing that sticks out to me is certificate authorities being bad.
Yep. https://fedia.io/m/selfhosted@lemmy.world/t/3090624/Decreasing-Certificate-Lifetimes-to-45-Days/comment/13237364#entry-comment-13237364
Short lifespans are also great when domains change their owner. With a 3 year lifespan, the old owner could possibly still read traffic for a few more years.
When the lifespan is just 30-90 days, that risk is significantly reduced.
Only matters for LE certs.
You can still buy 1 year certs, but only for about 3 more months; from April 2026 you can’t buy them anymore
oh? Damn
They are going down to 200-day expiration in March 2026. You can still buy 5-year certificates today, but you still need to reissue them on a 365-day cadence.
As some selfhosting novice who uses NPM with auto renewal - I feel that I shouldn’t be concerned.
Check that your autorenewal failure alerts go somewhere you’ll react to.
Just saying:
There are alternatives to LE, not for all things, but for a lot. Afaik not all of them will follow suit.
Reducing the valid time will not solve the underlying problems they are trying to fix.
We’re just gonna see more and more mass outages over time, especially if this reduces to an uncomfortably short duration. Imagine what might happen if a mass Cloudflare/Microsoft/Amazon/Google outage goes on for a week or two? What if the CAs we use go down for longer than the expiration period?
Sure, the current goal is to move everybody over to ACME but now that’s yet another piece of software that has to be monitored, may have flaws or exploits, may not always run as expected… and has dozens of variations with dependencies and libraries that will have various levels of security of their own and potentially more vulnerabilities.
I don’t have the solution, I just don’t see this as fixing anything. What’s the replacement?
clearly the most secure option is to have certificates that are only valid for 30 seconds at a time
Well it should be as short as possible while still being practical. LE doesn’t have infinite server compute, renewal also takes some amount of time, plus if they make the validity too short people might stop using them (pretty evident judging from sentiment here) and move to other CAs and make what they do pointless.
45 days is still plenty of time, yet people are already complaining. Does make me worry.
Let’s be extra safe. New cert per every request
Ephemeral Diffie-Hellman is close to that: a fresh key exchange per connection (the session keys, not the cert). It’s been available since the early TLS versions, and TLS 1.3 made it mandatory.
And you still can’t self certify.
It’s cute that the big players are so concerned with the security of my little home server.
Or is there a bigger plan behind all this? Like pay more often, lock in to government controlled certs (already done I guess because they control DNS and you must have a “real” website name to get a free cert)?
I feel it’s 50% security 50% bullshit.
Edit: thank you all I will dive down the CA certification rabbit hole now! Have worked in C++ & X509 on the client side so maybe I’ll be able to figure it out.
You can absolutely run your own CA and even get your friends to trust it.
Yes you can, but the practicality of doing so is very limited. Hell, I ran my own CA for my own internal use and even I found it annoying.
The entire CA ecosystem is terrible and only exists to ensure connections are encrypted at this point. There’s no validation or any sort of authority to say one site is better than another.
not all phones support manually adding certs
Which phones? Android and iOS can.
I don’t know about iOS, but Android had support for this in the past. Now the support is partial. It’s no longer possible to install system-level certificates. Or at least they made it extremely inconvenient.
That’s a complaint about those phones not PKI in general then. Though it’s surprising their enterprise support won’t let you since that is (or was) a fairly common thing for businesses to do.
That’s a fair point. However, on the practical side, it’s sad that I would have to root my gf’s phone to let her access the services we host.
I ended up using a DynDNS and Caddy for managing my cert.
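For reference, the Caddy side of that setup is tiny. A hypothetical Caddyfile (the DynDNS hostname and upstream port are made up); given a site address, Caddy obtains and renews the certificate on its own:

```
home.duckdns.org {
    reverse_proxy localhost:8080
}
```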
But you have to manually accept this dangerous cert in the browser right?
Very interesting, actually. Do you have any experience with it, or other pointers? I might just set one up myself for my tenfingers sharing protocol…
No, because it’s no longer dangerous if it’s trusted.
You give your friends your public root and if applicable, intermediary certs. They install them and they now trust any certs issued by your CA.
Source: I regularly build and deploy CAs in corps
Thank you!
Is there some simple software that lets you make those certs, like with a root cert and then “derived” certs? On Linux :-) ?
I guess people have to re-trust every now and then because certs get old, or do they trust the (public part of the) root cert while the daughter certs derived from the root are churned out regularly for the sites?
Openssl can do everything.
That’s right, but instead of the word “derived” we use “issued”.
Correct, certs get old by design; they can also be revoked. As another commenter mentioned, the biggest pain is actually in the redistribution of these end certificates. In enterprise this is usually all managed with the same software they use for deployment, or with auto-enrollment configured.
You should find tons of guides just take it slow to understand it all. Understanding certificates in depth is a rare and good skill to have. Most sysadmins I come across are scared to death of certificates.
It’s pretty simple to set up. Generate CA, keep key and other private stuff stored securely, distribute public part of CA to whoever you want and sign all the things you wish with your very own CA. There’s loads of howtos and tools around to accomplish that. The tricky part is that manual work is needed to add that CA to every device you want to trust your certificates.
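Those steps can be sketched with plain openssl (all file names and the `nas.home.lan` hostname are made-up examples):

```shell
# 1) Root CA: private key + self-signed root cert (~10 years).
#    ca.key is the part to keep offline; ca.crt is what friends install.
openssl req -x509 -newkey rsa:4096 -nodes -keyout ca.key -out ca.crt \
  -days 3650 -subj "/CN=My Home CA"

# 2) Server key + certificate signing request.
openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
  -subj "/CN=nas.home.lan"

# 3) Issue the server cert from the CA, with a SAN
#    (browsers require SANs and ignore the CN alone).
printf "subjectAltName=DNS:nas.home.lan\n" > san.cnf
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 90 -out server.crt -extfile san.cnf

# 4) Check the chain; prints "server.crt: OK" on success.
openssl verify -CAfile ca.crt server.crt
```

Tools like `step-ca` or `easy-rsa` wrap the same flow with less ceremony, but the raw commands make the root/issued relationship explicit.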
Thank you! This is actually precisely what I need, you IT guys are the best!
No that’s the point. If you import the CA certificate on your browser, any website that uses a cert that was signed by that CA will be trusted and accessible without warning.
Technically something like DANE can allow you to present DNSSEC-backed self-signed certs and even allow multi-domain matching that removes the need for SNI and Encrypted Client Hello… but until the browsers say it is supported, it’s not
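For the curious, the certificate-side half of DANE is just a hash you publish in DNS. A sketch of computing a “3 1 1” TLSA payload, i.e. the SHA-256 of the cert’s public key (the demo cert and name are made up):

```shell
# Self-contained stand-in cert; in real use, hash your actual server cert.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/dane-key.pem \
  -out /tmp/dane-cert.pem -days 30 -subj "/CN=dane.example"

# Extract the public key, DER-encode it, and take its SHA-256 digest.
openssl x509 -in /tmp/dane-cert.pem -noout -pubkey \
  | openssl pkey -pubin -outform DER \
  | openssl dgst -sha256
# Publish the hex digest as:
#   _443._tcp.dane.example. IN TLSA 3 1 1 <digest>
```

Usage 3 (DANE-EE) with selector 1 (public key) is the self-signed-friendly combination, since the DNSSEC-signed record itself vouches for the key.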
At some point there was a browser extension to support DANE (and Perspectives and similar approaches against centralization) but since then, browser vendors fixed that security flaw.
And you still ~~can’t~~ can self certify.
Skill issue, you’ve always been able to self-certify. You just have to know where to drop the self-signed cert or the parent/root cert you use to sign stuff.
If you’re running windows, it’s trivial to make a self signed cert trusted. There’s an entire certificate store you can access that makes it easy enough you can double click it and install it and be on your way. Haven’t had a reason to figure it out on Linux, but I expect it won’t be super difficult.
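On Linux it isn’t hard either; a Debian/Ubuntu sketch (the stand-in CA cert and file names are assumptions; Fedora/Arch use `update-ca-trust` with different paths, and it needs root):

```shell
# Stand-in root cert so the snippet is self-contained;
# in real use, copy in the ca.crt you actually generated.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/home-ca.key \
  -out /tmp/home-ca.crt -days 3650 -subj "/CN=My Home CA"

# Drop the cert where the distro looks, then rebuild the system bundle
# so curl, wget, etc. trust it. Guarded so it degrades gracefully.
cp /tmp/home-ca.crt /usr/local/share/ca-certificates/my-home-ca.crt \
  && update-ca-certificates \
  || echo "need root (and the ca-certificates package) for system-wide install"
```

Note that Firefox and some apps keep their own trust stores, so the system bundle doesn’t always cover everything.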
I already did but my browser choked on it.
So yes I should probably set up the whole CA thing.
Just let me know so I can change my crontabs.
Coming soon. Daily certs. Just $19.99 a month.