Young people increasingly turn to social media to make sense of their mental health. Researchers now warn that much of what they find may mislead rather than inform.
Experts from the University of East Anglia (UEA) in Norwich and Norfolk and Suffolk NHS Foundation Trust examined the quality of mental health content across major platforms. They reviewed 27 studies covering 5,057 posts on YouTube, TikTok, Facebook, Instagram and X. Their conclusion is stark: unreliable information is widespread, and it shapes how young users interpret their own behaviour.
Dr Alice Carter, from UEA, said 52% of ADHD-related videos and 41% of autism videos on TikTok were inaccurate, something TikTok disputed.
The scale varies across topics. Researchers reported misinformation rates ranging from 0% for anxiety and depression videos on YouTube Kids to 56.9% for claustrophobia content on YouTube. They noted that misinformation prevalence was “consistently higher on TikTok than other platforms”.
YouTube Kids stood apart. The platform showed no misinformation on some topics, “likely due to the implementation of stricter content moderation and prioritisation of child-friendly content”, the authors said.
The findings highlight a pattern: content about ADHD and autism attracts more misinformation than other mental health topics. That matters because these conditions often involve nuanced diagnosis. A short-form video rarely captures that complexity.
Dr Eleanor Chatburn, from UEA’s Norwich Medical School, said many young people were turning to social media to understand their symptoms.
“While this questioning can be a helpful starting point, it’s important these questions lead to proper clinical assessment with a professional,” she said.
“As well as leading to misunderstanding of serious conditions and pathologising ordinary behaviour, misinformation can also lead to delayed diagnosis for people that actually do need help.”
The implications extend beyond individual confusion. When users begin to label everyday behaviours as clinical symptoms, they risk blurring the line between normal variation and a diagnosable condition.
Researchers also pointed to platform dynamics. TikTok’s algorithms can amplify misinformation, the authors said, and they called for “strengthened content moderation”.
TikTok rejected the criticism. A spokesperson described the research as a “flawed study” that relied on “outdated research about multiple platforms”.
“The facts are that we remove harmful health misinformation and provide access to reliable information from the World Health Organization, so that our community can express themselves about what matters to them and find support,” they added.
Judith Brown, head of evidence and research at the National Autistic Society, said the study showed “how rapidly” misinformation can spread on social media.
“Social media companies should think about how to improve their platforms to prevent the spread of misinformation,” she added.
The question remains: if social platforms now serve as the first point of contact for mental health concerns, who takes responsibility when the information fails?
Author: Pishon Yip
