The platform will soon require users to verify their age before accessing explicit channels or unblurring sensitive media. That verification may involve submitting a video selfie for facial analysis or uploading a government-issued ID. For a company still recovering from a recent breach involving identity documents, the timing has not gone unnoticed.
In a blog post, Discord confirmed that “a phased global rollout” will begin in “early March,” shifting all users by default into “teen-appropriate” experiences. Adults who want access to restricted areas will likely need to complete an age estimation check. Most users will only do this once, the company said, though some “may be asked to use multiple methods, if more information is needed to assign an age group.”
The mechanics hinge on artificial intelligence. Discord says facial age estimation runs directly on a user’s device, analysing facial structure in real time. If users choose to upload an ID, that document is checked off-device, but the company insists selfie data never leaves the phone. Both forms of data are deleted once age is determined.
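The data flow Discord describes, with raw selfie analysis kept local and only a verdict transmitted, can be sketched in a few lines. This is purely illustrative: the function names, the result payload, and the fixed fake estimate are assumptions for the sketch, not Discord's or its vendors' actual implementation.

```python
from dataclasses import dataclass

@dataclass
class AgeCheckResult:
    """The only payload that would leave the device: a pass/fail, no biometrics."""
    over_threshold: bool

def estimate_age_on_device(frames: list) -> float:
    """Stand-in for the on-device model; a real system would run a local
    face-analysis network over the video frames. Here we return a fake value."""
    return 27.4  # hypothetical estimate

def run_age_check(frames: list, threshold: int = 18) -> AgeCheckResult:
    estimate = estimate_age_on_device(frames)  # raw estimate never transmitted
    del frames                                 # selfie data discarded after use
    return AgeCheckResult(over_threshold=estimate >= threshold)

result = run_age_check(["frame0", "frame1"])
# Only `result.over_threshold` would be sent to the platform.
```

The privacy argument rests entirely on that last line: the platform sees a boolean, never the face.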
That promise lands differently in light of October’s breach, when hackers stole government IDs belonging to 70,000 users. Those documents had been submitted through a third-party age verification provider used in the United Kingdom and Australia. At the time, Discord warned that attackers aimed to “extort a financial ransom from Discord.”
Security experts urged caution. Ars Technica’s senior security editor Dan Goodin wrote that “the best advice for people who have submitted IDs to Discord or any other service is to assume they have been or soon will be stolen by hackers and put up for sale or used in extortion scams.”
Now, as Discord expands age checks globally, many users question whether the company can safeguard even more sensitive data.
On Reddit, backlash was swift. One gamer wrote, “Hell, Discord has already had one ID breach, why the fuck would anyone verify on it after that?” Another declared, “This is how Discord dies,” adding, “Seriously, uploading any kind of government ID to a 3rd party company is just asking for identity theft on a global scale.”
Scepticism extends beyond ID uploads. Some users say they will never submit a selfie scan, fearing that breaches are inevitable and suspecting the company of understating privacy risks while expanding data collection.
To reinforce confidence, Discord has partnered with k-ID, an age-assurance provider also used by platforms including Meta and Snap. The company says its system keeps biometric processing local to the device and only sends a pass/fail result to Discord.
Yet even privacy policies have fuelled unease. Reddit users dissected k-ID’s disclosures, questioning references to “trusted 3rd parties” involved in verification. One user concluded that “everywhere along the chain it reads like ‘we don’t collect your data, we forward it to someone else… .’”
Ars Technica reviewed the policies and noted that k-ID relies on facial age estimation technology from a Swiss company called Privately. K-ID’s policy states, “We don’t actually see any faces that are processed via this solution.” It later clarifies that “neither k-ID nor its service providers collect any biometric information from users when they interact with the solution. k-ID only receives and stores the outcome of the age check process.”
Pressed for clarity, a k-ID spokesperson said: “the Facial Age Estimation technology runs entirely on the user’s device in real time when they are performing the verification. That means there is no video or image transmitted, and the estimation happens locally. The only data to leave the device is a pass/fail of the age threshold which is what Discord receives (and some performance metrics that contain no personal data).”
The spokesperson added: “k-ID, does not receive personal data from Discord when performing age-assurance,” describing the approach as grounded in data minimisation. “There is no storage of personal data by k-ID or any third parties, regardless of the age assurance method used.”
Privately echoes that stance. The company says its tools comply with the European Union’s General Data Protection Regulation by keeping AI models on-device. Its website states: “No user biometric or personal data is captured or transmitted,” and adds that “our secret sauce is our ability to run very performant models on the user device or user browser to implement a privacy-centric solution.”
Its privacy policy reinforces that position: “Our technology is built using on-device edge-AI that facilitates data minimization so as to maximise user privacy and data protection,” adding that processing occurs locally “thereby avoiding the need for us or for our partners to export user’s personal data onto any form of cloud services.”
Discord maintains that neither it nor its vendors permanently store identity documents or selfie videos. A company spokesperson said, “Discord and our age assurance vendor partners do not permanently store personal identity documents or users’ video selfies. Identity documents, including selfies, are deleted once a user’s age group is confirmed, and the selfie video used for facial age estimation never leaves their device.”
The spokesperson added, “We’re also exploring other vendors and will be transparent with users if the data practices for vendors differ. We’ll continue to put user privacy first as we consider introducing any additional methods in the future. We also frequently audit our third-party systems to ensure they meet our security and privacy standards.”
Beyond verification uploads, Discord is deploying an age inference model. Savannah Badalich, the company’s global head of product policy, told The Verge that the system analyses metadata, including gaming habits, activity patterns and behavioural signals such as working hours. “If we have a high confidence that they are an adult, they will not have to go through the other age verification flows,” she said.
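Badalich’s description implies a confidence-gated classifier: behavioural signals feed a model, and only accounts below a confidence threshold are routed to the explicit selfie or ID flow. A minimal sketch of that gating logic, with invented feature names and a toy scoring rule standing in for Discord’s undisclosed model:

```python
def infer_adult_probability(signals: dict) -> float:
    """Toy stand-in for Discord's undisclosed inference model.
    Scores a few hypothetical metadata signals between 0 and 1."""
    score = 0.0
    if signals.get("account_age_years", 0) >= 5:
        score += 0.4
    if signals.get("active_during_work_hours", False):
        score += 0.3
    if signals.get("adult_rated_games", 0) >= 3:
        score += 0.3
    return min(score, 1.0)

def needs_explicit_verification(signals: dict, confidence: float = 0.9) -> bool:
    # High-confidence adults skip the selfie/ID flow entirely.
    return infer_adult_probability(signals) < confidence

profile = {
    "account_age_years": 8,
    "active_during_work_hours": True,
    "adult_rated_games": 5,
}
print(needs_explicit_verification(profile))  # prints False: this profile skips the check
```

The design trade-off is visible even in the sketch: the lower the confidence bar, the fewer users face a verification prompt, but the more minors slip through on inferred signals alone.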
Badalich acknowledged that some users may leave over the changes but suggested that “we’ll find other ways to bring users back.”
Accuracy remains a live issue. In Australia, where the system first launched, some teenagers claimed the tool failed to estimate their age at all. Others reported bypassing checks with AI-generated videos or cosmetic tweaks. One 13-year-old boy reportedly convinced the system he was over 30 by scrunching his face to appear older.
Privately states that its technology is “proven to be accurate to within 1.3 years, for 18-20-year-old faces, regardless of a customer’s gender or ethnicity.” Even so, experts have warned that age-verification systems often struggle to distinguish between a 17- and 18-year-old—an edge case that carries regulatory weight.
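The 17-versus-18 edge case is easy to see numerically. A toy simulation, assuming (and this is an assumption, since Privately does not specify how its 1.3-year figure is defined) that estimation error is roughly normally distributed with that spread:

```python
import random

random.seed(0)
ERROR_SD = 1.3  # treat the claimed 1.3-year accuracy as a standard deviation (assumption)

def estimated_over_18(true_age: float) -> bool:
    """Simulate one noisy age estimate and compare it to the 18 threshold."""
    return random.gauss(true_age, ERROR_SD) >= 18

trials = 100_000
passed = sum(estimated_over_18(17.0) for _ in range(trials))
print(f"17-year-olds estimated as adults: {passed / trials:.0%}")
```

Under those assumptions a meaningful fraction of 17-year-olds would clear the adult threshold by estimation noise alone, which is exactly the edge case regulators care about.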
Appeals may prove the weakest link. Discord’s previous breach involved IDs submitted during an appeal process to correct inaccurate age estimates. Badalich confirmed that a third-party vendor will review appeals under the new system and that IDs shared in that context “are deleted quickly—in most cases, immediately after age confirmation.”
Users remain unconvinced. One Redditor argued that “corporations like Facebook and Discord, will implement easily passable, cheapest possible, bare minimum under the law verification, to cover their ass from a lawsuit,” while expecting users to trust the security of their data.
Another joked she would feel more confident if Discord were “willing to pay millions to every user” whose “scan does leave a device.”
Discord faces a familiar digital dilemma: protect minors, satisfy regulators and preserve privacy—all without eroding trust. The company insists its safeguards work. Users ask a simpler question: if the last cache of IDs was breached, what assurance prevents the next one?
Author: George Nathan Dulnuan
