Published: February 2026
Category: Health Tech / Artificial Intelligence
Artificial intelligence is rapidly making its way into operating theatres, promising greater precision and better outcomes. But as medical device makers rush to embed AI into surgical tools, regulators are seeing a growing number of reports alleging patient injuries, raising serious questions about safety, oversight, and accountability.
One of the most striking examples involves the TruDi Navigation System, a device used in sinus surgery. In 2021, Acclarent, then a subsidiary of Johnson & Johnson, announced that it had upgraded the system with AI-driven software designed to assist ear, nose, and throat surgeons by improving navigation during procedures.
Before the AI upgrade, the US Food and Drug Administration had received only a handful of unconfirmed reports of device malfunctions and a single injury. After AI was introduced, however, regulators recorded more than 100 reports of malfunctions and adverse events, according to FDA data.
Between late 2021 and November 2025, at least 10 patients were reportedly injured while the system was in use. Many of the cases involved surgeons allegedly receiving incorrect information about the position of surgical instruments inside patients’ heads. Reported outcomes included cerebrospinal fluid leaks, accidental skull punctures, and, in two cases, strokes caused by damage to major arteries.
While FDA reports are not designed to establish causation, the spike in incidents has drawn attention from both regulators and the legal system. Two patients who suffered strokes have filed lawsuits in Texas, alleging that the AI-enhanced software contributed to their injuries. One lawsuit argues that the device may have been safer before artificial intelligence was integrated.
Johnson & Johnson has referred questions about the system to Integra LifeSciences, which acquired Acclarent in 2024. Integra maintains that the reports do not demonstrate a causal link between the AI software and patient harm, stating that the device was merely present during surgeries where adverse events occurred.
The case highlights a broader tension in modern medicine: while AI has the potential to transform healthcare, its rapid deployment into high-risk environments like surgery is outpacing the systems designed to evaluate long-term safety. As AI becomes more deeply embedded in clinical practice, regulators and healthcare providers face increasing pressure to ensure innovation does not come at the expense of patient well-being.
Author: Adedoye Adigun
