With many jurisdictions introducing age verification laws for various things on the internet, a lot of questions have come up about implementation and privacy. I haven’t seen anyone come up with a real working example of how to implement it technically/cryptographically that doesn’t have any major flaws.
Setting aside the ethics of age verification and whether or not it’s a good idea - is it technically possible to accurately verify someone’s age while respecting their privacy and if so how?
For an implementation to work, it should:
- Let the service know that the user is an adult by providing a verifiable proof of adulthood (e.g. a proof that’s signed by a trusted authority/government)
- Not let the service know any other information about the user besides what they already learn through HTTP or TCP/IP
- Not let a government or age verification authority know whenever a user is accessing 18+ content
- Make it difficult or impossible for a child to fake a proof of adulthood, e.g. by downloading an already-verified anonymous signing key shared by an adult, etc.
- Be simple enough to implement that non-technical people can use it without difficulty and without purchasing bespoke hardware
- Ideally, not require any long-term storage of personal information by a government or verification authority that could be compromised in a data breach
I think the first two points are fairly simple (lots of possible implementations with zero-knowledge proofs and anonymous signing keys, credentials with partial disclosure, authenticating with a trusted age verification system, etc. etc.)
The rest of the points are the difficult ones. Some children will circumvent any system (eg. By getting an adult to log in for them) but a working system should deter most children and require more than a quick download or a web search for instructions on how to circumvent.
The last point might already be a lost cause depending on your government, so unfortunately it’s probably not as important.
You know how there are stores that sell restricted substances and verify your age by checking a provided ID? Have those same stores sell a cheap, sealed card with a confirmation code on it. You can enter that code online to verify with any service. The code expires a set period of time after its first use to prevent sharing and misuse.
This system would be as secure as the restrictions on the restricted substance itself, such as alcohol, so it should be fine for “protecting the children”.
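The sealed-card scheme above is simple enough to sketch. Here’s a minimal illustration of the issuer-side logic, assuming a 24-hour validity window after first use; all names and the window length are invented for this example:

```python
import secrets
import time

# Hypothetical sealed-card code issuance and redemption.
# VALID_FOR is an assumption; a real scheme would pick its own window.
VALID_FOR = 24 * 3600  # seconds the code stays usable after first use

codes = {}  # code -> timestamp of first use (None until first redeemed)

def issue_code() -> str:
    """Generated at manufacture time and printed inside the sealed card."""
    code = secrets.token_urlsafe(12)
    codes[code] = None
    return code

def redeem(code: str, now: float = None) -> bool:
    """A service calls this (via the issuer's API) to verify a user."""
    now = time.time() if now is None else now
    if code not in codes:
        return False
    if codes[code] is None:
        codes[code] = now  # first use starts the expiry clock
        return True
    return now - codes[code] < VALID_FOR

c = issue_code()
assert redeem(c, now=0)                  # first use starts the 24h window
assert redeem(c, now=3600)               # still inside the window
assert not redeem(c, now=2 * VALID_FOR)  # expired
```

Note the issuer never learns which service a code was redeemed with, only that it was redeemed, which matches the scheme’s privacy goal.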
Interesting idea. Could also give it out free with packs of beer like a golden ticket from Charlie And The Chocolate Factory.
And all across the whole world, 18 year old men will jump for joy when picking up birthday booze - “I can finally look at boobs on the internet!”
I’m pretty sure there is already a cryptographic protocol that can do this, but that’s not the point. We do NOT need age verification in software, it makes no sense. We need parents to take care of their own children, because why should open-source software do the job of failed parenting? It’s a social issue, not something that can be solved with technology. Otherwise we would have put shock collars on every kid who doesn’t behave.
Great idea, let’s get parents to raise their kids.
Now, how do we suddenly make them actually do that? Last I checked this idea has been around about as long as people have been around but it’s still not happening.
Parenting matters, but it’s not the only layer of protection. We don’t rely solely on parents to keep kids from walking into bars or buying cigarettes, we have laws and systems to back them up. Why should the internet be different?
Wrote a comment recently. Age verification? Unnecessary. OS-level parental controls? Possibly worthwhile.
https://programming.dev/comment/22589550
I am still against where all this age verification crap is coming from, and I’m against what specifically “age verification” entails; but here’s the thing: We keep saying, “It should be the parent’s responsibility to secure their kids”—and while that’s true, you can do all the talking and educating you want, but the fact is that the internet is now nigh-fully integrated with our lives, and unless you are surveilling your kid at every moment they are on the internet (don’t recommend), not every parent has the time, resources, or know-how to keep their children safe on the internet without help.
There are some states pushing for “OS-level age verification,” and I’m not convinced the proponents for this idea know what this combination of words means—but the idea isn’t all bad. An interface for apps to query the device for a simple “can access adult content” value would be helpful for parents to better manage what their kids can access without having to hover 24/7. There is zero need for any sort of identification at any point in the process. The fact that legislation is promoting cumbersome identification collection and not the already existing idea of parental controls is evidence enough that this is designed to surveil.
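The “can access adult content” query described above could be extremely small in practice. Here’s a sketch of what such an interface might look like; the settings path and JSON shape are invented for illustration, and a real OS would expose this as a system API rather than a file read:

```python
import json
import pathlib
import tempfile

def can_access_adult_content(settings_path: pathlib.Path) -> bool:
    """The single boolean an app would ever see. A parent sets it once
    in device settings; the app never learns who the user is."""
    try:
        settings = json.loads(settings_path.read_text())
        return bool(settings.get("adult_content_allowed", True))
    except (FileNotFoundError, json.JSONDecodeError):
        return True  # no parental controls configured: default allow

# Example: a parent's settings file on a child's device.
with tempfile.TemporaryDirectory() as tmp:
    cfg = pathlib.Path(tmp) / "parental-controls.json"
    cfg.write_text(json.dumps({"adult_content_allowed": False}))
    assert can_access_adult_content(cfg) is False
```

The point of the design is what’s absent: no ID, no network call, no record of which app asked.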
This may address the privacy concern, but the issue still remains of a centralized power deciding what is and what isn’t “safe for kids.”
I don’t think we’re gonna get around the child internet safety conversation, and for good reason; but the conversation should be around how we can do it without jeopardizing individuals’ safety and privacy, including children.
That’s what the router setting to block adult websites is for… you don’t have to monitor 24/7, have some idea that bad sites are blocked, and you can just be doing regular checkups on your child then.
There is and was never a need to involve IDs, other than more control over us as a whole and being able to extract more data.
I think maybe the barrier could be a little higher than just disconnecting from your home’s network.
If we were to accept the premise that there is currently an issue with child internet safety, then clearly this is still an issue despite the existence of router controls. But now there’s the question of whether this premise is valid. What do you look at to determine whether “internet safety for children” is adequate? I don’t really know, so I guess I have more reading to do.
I was gonna say something about PSAs, but no time.
I mean at the same time if we accept the premise that parents are unable or unwilling to use the mountains of currently available tools, why would we assume that “just one more tool, bro” is actually a solution to the “problem.”
You could accept that premise, and I’d argue that in that case if child internet safety is an issue, then the only solutions are either to force safety protocols or to leave kids in jeopardy.
I think if you as a parent have router controls and block adult content on their mobile plan if they have one (which I have seen as an option), then you are already doing a lot.
Most routers from ISPs come with “adult” content filtering enabled by default I think, at least the ones I’ve had have had this on.
VPNs already work and I can’t see them not working, so that’s always an option I guess, but they are also still an option with ID laws (ie connect to a region where they have no such laws).
Children’s safety online can’t involve limiting access and tracking everyone who ever goes online with their national ID attached to every request (basically).
I think it’d be better if we explored the option that involves a parent blocking websites either on your network or on a device they give to you.
> Most routers from ISPs come with “adult” content filtering enabled by default I think, at least the ones I’ve had have had this on.
Not that I’m doubting you, but is this a common thing? I’ve never even seen it as an option here in Canada, both on ISP-supplied devices and on separate routers. Is it just because I’m using cheaper devices, or because of my region?
Hm, I know it’s common for me. If it isn’t in Canada or elsewhere, then that’s just laziness and a lack of care by your ISP/router manufacturer (which makes sense cuz there’s a monopoly on internet over there, right? (Rip)).
Anywho, adult filter blocks on routers are a really easy thing to implement. If it isn’t then they simply don’t care to help explore the simplest of options for parents restricting access to bad sites.
I’ve heard of apps you can get for child devices that do a similar thing and let parents track their kids, which might be better anyway, assuming they aren’t a privacy nightmare (if parents don’t prefer buying a smarter router).
I’m not sure if this is part of the “setting aside” stuff, but I’d ask why age needs to be verified and not simply stated.
I’m the admin on this device, I say I’m 50, why does the website need to check some ID to prove I’m 50? They trust what I reported, and if I lied to them that’s on me. It shouldn’t be the websites’ job to validate.
Exactly, it should be a parent’s job to limit a child’s access not a website.
I agree, but also parents need better tools to be able to effectively limit their child’s access. App and device level parental controls are not sufficient as they currently work.
They work fine for people who use them correctly
> For an implementation to work, it should:
> - Let the service know that the user is an adult by providing a verifiable proof of adulthood (e.g. a proof that’s signed by a trusted authority/government)
> - Not let the service know any other information about the user besides what they already learn through HTTP or TCP/IP
Seems like that’s exactly what https://yivi.app/en/ can do.
How do they deal with the other requirements though? What’s stopping someone from setting up a service that uses their yivi account to sign “I’m over 18” for anyone who wants to be over 18?
What’s to stop people from providing that service to buy people alcohol?
The difference is that one is physical and requires interaction with a human: “Hey uncle Bob, buy me beer?” vs. the other one is technical and just requires them to do a Google search and click a button without interacting with anyone.
The first one has a higher barrier for entry and at least involves some form of adult supervision. The second one makes it not much different to the classic “what is your birthday?” thing.
Here’s one good answer: https://crypto.stackexchange.com/a/96283
It has the downside of requiring a physical device like a passport or some specific trusted long-running locally-kept identity store held by the user. But it’s otherwise very good.
Another option does not require anything extra be kept by the user, but does slightly compromise privacy. The Government will not be able to track each time the user tries to access age-gated content, or even know what sources of age-gated content are being accessed, but they will know how many different sites the user has requested access to. It works like this:
- The user creates or logs in to an account on the age-gated site.
- The site creates a token `T` that can uniquely identify that user.
- That token is then blinded to `B(T)`. Nobody who receives `B(T)` can learn anything about the user.
- The user takes the blinded token to the government age verification service (AVS).
- The user presents the AVS with `B(T)` and whatever evidence is needed to verify age.
- The AVS checks if the person should be verified. If not, we can end the flow here. If so, move on.
- The AVS signs the blinded token using a trusted AVS certificate, producing `S(B(T))`, and returns it to the user.
- The user returns the token to the site.
- The site unblinds the token and obtains `S(T)`. This allows them to see that it is the same token `T` representing the user, and to know that it was signed by the AVS, indicating that the user is of age.
- The site marks in their database that the user has been age verified. On future visits to that site, the user can just log in as normal, no need to re-verify.
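The flow above is essentially a blind signature. Here’s a sketch using textbook RSA blind signatures; the tiny key (p=61, q=53) is for illustration only and in no way secure, and a real AVS would use a large key with proper padding:

```python
import hashlib
from secrets import randbelow

# Toy AVS keypair (textbook RSA, NOT secure).
p, q = 61, 53
n = p * q                          # AVS public modulus
e = 17                             # AVS public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # AVS private exponent

def make_blinded_token(token: bytes):
    """Site/user side: hash the token to m, then blind it as B(T)."""
    m = int.from_bytes(hashlib.sha256(token).digest(), "big") % n
    while True:
        r = randbelow(n - 2) + 2
        try:
            r_inv = pow(r, -1, n)  # blinding factor must be invertible mod n
        except ValueError:
            continue
        return m, (m * pow(r, e, n)) % n, r_inv

def avs_sign(blinded: int) -> int:
    """AVS side: sign B(T) after checking age evidence (omitted here).
    The AVS learns nothing about m from the blinded value."""
    return pow(blinded, d, n)

def unblind(blind_sig: int, r_inv: int) -> int:
    """User side: turn S(B(T)) into S(T)."""
    return (blind_sig * r_inv) % n

def site_verify(m: int, sig: int) -> bool:
    """Site side: S(T) must verify against the AVS public key."""
    return pow(sig, e, n) == m

m, b_t, r_inv = make_blinded_token(b"user-1234")
s_t = unblind(avs_sign(b_t), r_inv)
assert site_verify(m, s_t)
```

The unblinding works because `(m * r^e)^d = m^d * r (mod n)`, so multiplying by `r^-1` leaves a plain signature on `m` that the AVS never saw.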
All of the moving around of the token can be automated by the browser/app, if it’s designed to be able to do that. Unfortunately a typical OAuth-style redirect system probably would not work (someone with more knowledge please correct me), because it would expose to the AVS what site the token is being generated for. So the behaviour would need to be created bespoke. Or a user could have a file downloaded and be asked to share it manually.
There’s also a potential exposure of information due to timing. If site X has a user begin the age verification flow at 8:01, and the AVS receives a request at 8:02, and the site receives a return response with a signed token at 8:05, then the government can, with a subpoena (or the consent of site X) work out that the user who started it at 8:01 and return at 8:05 is probably the same person who started verifying themselves at 8:02. Or at least narrow it down considerably. Making the redirect process manual would give the user the option to delay that, if they wanted even more privacy.
The site would probably want to store the unblinded, signed token, as long-term proof that they have indeed verified the user’s age with the AVS. A subsequent subpoena would not give the Government any information they could not have obtained from a subpoena in an un-age-verified system, assuming the token does not include a timestamp.
It would also reveal to the government that the user was accessing 18+ content (though not what that content is if the token is blinded).
It also doesn’t stop the easy circumvent of someone who is an adult providing a service for children or others who don’t want to auth with the government.
- The 18+ site provides child C with a token `T`, which is blinded to `B(T)`
- The child sends `B(T)` to a malicious service run by a real adult (Mal)
- Mal presents `B(T)` to the AVS to obtain `S(B(T))`
- Mal provides `S(B(T))` to the child, who gives it to the 18+ site, where it unblinds to a legit `S(T)`
> It would also reveal to the government that the user was accessing 18+ content
Yes, I did mention that. Although ironically, Australia’s social media minimum age law, and other similar laws being considered around the world, would actually increase privacy in this respect. The government could have separate keys for each age of legal significance (16 and 18, in Australia) and sign with the appropriate one (either the highest the user meets, or all the user meets; the latter would give the site less information about the user’s age).
I don’t believe it is technically possible to get around the example you shared there. Even in the real world, it’s not dissimilar to a child asking an adult to buy alcohol for them.
The difference with the asking an adult to buy alcohol is mostly that, because the whole thing is online, they wouldn’t need to ever really interact with an adult.
If the circumvention is as easy as looking up “free age verification” in a search engine, typing a url and clicking a button then it might not be very effective.
If it at least required them to steal dad’s id card or get uncle Bob to help or something that’s a different story.
Actually something just occurred to me. Because my system, unlike the one from the Stack Exchange link or the one described elsewhere in the thread using an ID card, relies on a per-site untraceable request to the government, the government would be able to detect if one user is making a suspicious number of requests. It’s reasonable for one person to make tens of requests, maybe even low hundreds over the course of a lifetime. It’s not reasonable to be making hundreds or more in a day. They wouldn’t know which sites are being accessed with it, or even what accounts on those sites. But they could set rate limits to prevent one person creating too many accounts for others, and potentially threaten legal action against them for doing so.
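The detection described above is just rate limiting keyed on the verified identity. A minimal sliding-window sketch, where the window and limit values are assumptions for illustration:

```python
from collections import deque

# Hypothetical AVS-side rate limiter: flags a verified identity that
# requests signatures far more often than one person plausibly would.
WINDOW = 24 * 3600  # one day (assumption)
LIMIT = 20          # max blinded-token signings per window (assumption)

class RateLimiter:
    def __init__(self):
        self.requests = {}  # identity -> deque of request timestamps

    def allow(self, identity: str, now: float) -> bool:
        q = self.requests.setdefault(identity, deque())
        while q and now - q[0] >= WINDOW:
            q.popleft()      # drop requests that fell out of the window
        if len(q) >= LIMIT:
            return False     # suspicious volume: refuse and/or flag
        q.append(now)
        return True

rl = RateLimiter()
assert all(rl.allow("alice", t) for t in range(LIMIT))
assert not rl.allow("alice", LIMIT)       # one-too-many in a single day
assert rl.allow("alice", LIMIT + WINDOW)  # fine again once the window rolls
```

Crucially, the key here is the verified person, not the site or account being verified, so it adds no tracking beyond what the AVS already sees.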
That threat of legal action is part of the same thing that prevents children from being able to go up to a random adult, handing them a $50 note, and asking for $20 worth of alcohol in exchange. You’re not going to be able to prevent it on a smaller scale, but you can definitely prevent a small handful of people being able to age verify on behalf of thousands of children.
An additional protection could be added depending on how the age verification works. If the verification is “upload a scan of your photo ID”, then yeah, mass verification becomes possible. But if each verification requires you to hold up your photo ID next to your face, speak a specific phrase aloud (with automated lip reading attempting a rough lip flap match), nod your head, write a specific phrase on a piece of paper, and more, all in randomised orders, it becomes a much bigger burden for someone to provide for others.
I’m certainly not advocating this. The level of burden for legitimate users would be too high to consider it reasonable. But it would be possible. Something like this has been used in the past for things like EV code signing certificates, where a larger burden is relatively more reasonable.