Alzheimer's Drugs Called Out, AI Stumbles: Innovation's Reckoning
Anti-amyloid drugs show "trivial" effect, AI hallucinates in court, and UK kids stay vulnerable online. Three stories, one uncomfortable pattern.
Editorial digest · April 16, 2026
Last updated: 08:21
Seventeen clinical trials. Billions in research money. Decades of patient hope. And the verdict? "Trivial." That word, deployed in a major new review of Alzheimer's anti-amyloid drugs, lands like a diagnosis of its own — on the pharmaceutical industry's most celebrated class of breakthrough therapies.
Three stories this week share a single thread: the uncomfortable distance between what innovation promises and what it actually delivers.
Alzheimer's Drugs: The "Gamechanger" That Wasn't
For years, anti-amyloid drugs — lecanemab, donanemab and their kind — were the great hope of dementia research. Regulators approved them. Patients enrolled. The narrative was set: science had finally vindicated the amyloid hypothesis, and Alzheimer's would never be the same.
According to a review reported by The Guardian, which analysed data from 17 clinical trials of these drugs, the effects on cognitive decline and dementia severity over 18 months were "trivial", and improvements in functional ability were "small at best". In short: no meaningful effect.
This isn't a minor academic quibble. The NHS is under pressure to decide whether to fund these therapies. Each treatment cycle runs to tens of thousands of pounds. The review's findings — if they hold — reframe the entire cost-benefit calculation for the health service. Worse, they suggest that the amyloid hypothesis, the bedrock of Alzheimer's drug development for three decades, may be fundamentally flawed.
The pharmaceutical industry will push back, hard. Some researchers will dispute the methodology. But the burden of proof has just shifted. When 17 trials yield "trivial" results, the uncomfortable question becomes: who knew what, and when did they know it?
AI in Court: The Hallucination Arrives as Evidence
Australia's Federal Court this week issued formal guidance to the legal profession on the use of generative AI — warning that lawyers who present AI-generated errors to the court face financial penalties and potentially worse. The trigger: a surge in court filings globally containing false citations fabricated by AI systems, according to The Guardian.
It's a different problem from AI's role in medicine or enterprise computing, but it points to the same structural failure. These systems generate text that sounds authoritative because authority is what they've been trained to mimic. In court, that's not a stylistic quirk — it's perjury by proxy.
British courts haven't yet issued equivalent guidance. They should. The UK legal profession is no stranger to AI adoption, and the pressure to use these tools — for drafting, research, submissions — is growing fast. The Australian warning is a preview, not a curiosity from the other side of the world.
The pattern here is familiar: AI moves fast, institutions regulate slowly, and somewhere in between, someone ends up in trouble.
The Roblox Moment: $12 Million to Learn What Was Already Known
Roblox will pay more than $12 million to the state of Nevada and implement new child protections under a landmark settlement, according to The Guardian. The platform — used by tens of millions of children worldwide — will now require age verification, restrict late-night notifications for minors, and limit chat functions.
The same week, the UK government summoned senior executives from Meta, YouTube and other social media firms to Downing Street to account for their record on children's safety, according to the BBC.
What's striking about both stories is the sequencing. Platforms don't move until regulators move. They don't implement age verification because it's the right thing to do — they do it because a state attorney general has extracted twelve million dollars from them. The voluntary model of child safety online has, empirically, failed.
The Online Safety Act is the UK's bet that legal compulsion will work where moral suasion didn't. The Roblox settlement suggests that financial pain, not principled governance, is what actually changes behaviour. Downing Street's summit will generate headlines. Whether it generates anything beyond that remains to be seen.
The One Story That Actually Worked
Not everything this week was a cautionary tale. In Barnsley, a former Wilko store has become a functioning NHS outpatients centre — eye tests, mole checks, consultant appointments, all inside a shopping centre. According to The Guardian, the experiment is improving healthcare access while boosting footfall at the Alhambra. Two problems, one building.
It's quiet, undramatic, and genuinely useful. A reminder that innovation doesn't always arrive with a press release or a nine-figure valuation. Sometimes it arrives in a disused retail unit in South Yorkshire, and it just works.
The week's lesson, if there is one: the loudest breakthroughs — Alzheimer's "gamechangers", AI's legal revolution, platforms self-regulating for children — are underdelivering. The quieter ones, built without fanfare and measured in footfall, are occasionally the real thing.