Grammarly’s new “Expert Review” feature promises something ambitious: editorial guidance inspired by the world’s most respected thinkers, writers and journalists. On paper, it sounds like a powerful upgrade for a tool already used by millions to polish emails, reports and articles.
The reality raises uncomfortable questions about how artificial intelligence borrows authority — and whose names it uses to do so.
A Writing Assistant That Channels Famous Voices
Grammarly introduced Expert Review in August 2025 as part of its expanding suite of AI tools. The feature sits inside the assistant’s sidebar and offers suggestions framed through the perspectives of recognised experts.
Paste a draft into the editor and the system may suggest improvements attributed to well-known figures — journalists, academics or public intellectuals whose work appears online. The feedback might recommend adding ethical context, strengthening an argument or sharpening narrative structure, presented as though it reflects how those individuals would critique the piece.
For writers seeking a quick second opinion, the concept feels familiar. Professionals often imagine how an experienced editor might react to a draft. Would a newsroom editor demand stronger evidence? Would a columnist push for a clearer point of view? Grammarly essentially attempts to simulate that mental exercise.
The difference lies in how the software presents those voices.
Real Names, No Real Participation
Investigations revealed that the tool frequently references real journalists and scholars, including staff members from major publications. None of them appear to have been involved in developing the feature or to have granted permission for their names to be used.
In some tests, the software produced suggestions tied to identifiable reporters from technology outlets and national newspapers. The feedback appeared in a way that could easily resemble commentary from an actual editor reviewing the document.
For readers accustomed to collaborative editing platforms like Google Docs, that format matters. A comment in the margin usually signals input from a colleague. When an AI system attaches a recognisable name to a suggestion, the line between simulation and genuine expertise blurs.
Grammarly’s parent company, Superhuman, maintains that the tool draws only on publicly available work. Alex Gay, the company’s vice president of product and corporate marketing, explained that these figures appear in the system because their writing is widely cited and accessible online.
The company also clarifies in its documentation that references to experts are informational and do not indicate endorsement or collaboration.
Still, the approach raises a straightforward question: when software frames advice as coming from an identifiable individual, does the distinction remain clear to users?
When “Expert Review” Has No Experts
Critics argue the terminology itself misleads.
Historian C.E. Aubin summed up the concern bluntly: “These are not expert reviews, because there are no ‘experts’ involved in producing them.”
The comment highlights a broader tension shaping the AI industry. Language models can analyse vast collections of text and replicate stylistic patterns with remarkable speed. They cannot replicate lived experience, editorial judgement developed over decades, or the specific reasoning behind a writer’s decisions.
Imagine a junior analyst submitting a report to a senior partner. The partner’s feedback reflects years of deals, negotiations and failures. An algorithm trained on public writing can imitate the tone of that feedback, but it cannot reproduce the decision-making process behind it.
The distinction matters most in fields where credibility forms the foundation of influence — journalism, academia and policy analysis.
Accuracy Problems Complicate the Picture
Technical issues further undermine the feature’s authority.
During testing, investigators found that some citations linked to unrelated or low-quality webpages rather than the original material attributed to an expert. In other cases, the descriptions of those experts appeared outdated or incorrect.
Several users also reported instability in the feature, including crashes while generating suggestions.
For professionals relying on AI to speed up editing, those inconsistencies can quickly erode trust. A consultant preparing a client presentation expects reliable sources. A journalist verifying information cannot risk following a link that leads to irrelevant or questionable content.
AI tools succeed only when they reduce friction. Faulty references do the opposite.
Consent and the Economics of Reputation
At the centre of the controversy lies a deeper issue: ownership of intellectual identity.
AI systems increasingly train on publicly available content — articles, books, academic papers and blog posts. Companies argue that because the material is public, it can inform algorithms that generate new outputs.
Yet turning someone’s published work into a branded “persona” inside a commercial product changes the equation.
The Verge’s reporting found examples where the feature referenced journalists whose editorial style bore little resemblance to the suggestions the AI produced.
That mismatch raises a practical concern. If a tool attaches your name to advice you never gave, who bears responsibility for the guidance?
For professionals whose reputations shape their careers, the stakes are obvious.
The Bigger Question for AI Productivity Tools
Grammarly’s experiment reflects a broader shift in the software industry. Writing assistants are evolving into multi-purpose AI platforms designed to operate across documents, emails and workplace apps.
Companies want these systems to act like colleagues — brainstorming ideas, revising text and suggesting strategy. The promise resembles having an experienced editor or analyst on call at all times.
But credibility cannot be automated simply by borrowing familiar names.
When a feature suggests that a renowned thinker would critique your work in a certain way, users naturally assume authenticity. Without it, the product risks feeling less like expert insight and more like a cleverly packaged imitation.
That distinction may determine how far AI assistants can expand into professional decision-making.
The technology can analyse language. It can mimic style. It can even anticipate the structure of an argument.
What it cannot yet provide is the thing many writers seek when they ask for a second opinion: the judgement of someone who actually read the draft.
Author: George Nathan Dulnuan
