Innovation Goes Ideological: NHS, xAI and Beijing's Bet

Innovation has turned political: an anti-woke tech boss wins NHS contracts, the DOJ defends xAI against Colorado, and China bets big on driverless.


Editorial digest April 25, 2026
Last updated: 08:22

Innovation used to be a dull word about productivity. This week made it political. A tech boss holding NHS and defence contracts dropped a 22-point manifesto on the future of the West. The US Justice Department waded into a state-level AI law on behalf of Elon Musk. China filled a Beijing exhibition hall with vehicles that don't need a driver. Three countries, one question: who sets the terms?

Why is an anti-woke manifesto suddenly NHS infrastructure?

According to BBC News, the chief executive of a tech company with UK government contracts spanning the NHS and defence has published a 22-point plan on the future of Western civilisation. The plan went viral. The contracts did not.

That is the awkward part. Companies have politics. Founders have manifestos. Fine. But when the manifesto becomes the marketing — and the buyer is the British state — due diligence stops being abstract. What happens to patient data running on infrastructure built by a firm whose boss is louder about ideology than about engineering? The BBC piece does not answer that. Whitehall has not either.

There is a deeper procurement question Britain keeps dodging. At what point does a CEO's worldview become a contract risk? The country has spent two decades outsourcing critical tech to whoever bid lowest, then acted surprised when the geopolitics shifted underneath. This is the same script with a 22-point footnote.

Will Washington override states on AI law?

Across the Atlantic, the Trump administration's Justice Department filed on Friday in support of xAI in its lawsuit against Colorado, the Guardian reports. Colorado passed a law requiring AI firms to guard against unintended discriminatory effects. The DOJ argues the law violates the 14th Amendment's equal protection clause: the state mandates guard-rails against unintended bias while still permitting affirmative measures aimed at promoting diversity.

Translate the legal language: the federal government is using equal protection to dismantle state AI regulation, on behalf of a company controlled by Elon Musk. Whether the constitutional theory survives a court fight is one question. The political signal is the louder one. Washington wants a single federal AI framework, and it wants that framework friendly to the largest operators. States that try to move first will find the DOJ on their doorstep.

British readers should not file this under American eccentricity. The UK is currently betting on light-touch AI regulation while the EU goes maximalist. The American case previews what happens when sub-national regulators try to set their own rules anyway. There is now a federal preemption playbook. It will travel.

What does Beijing's driverless dream actually say?

According to the Guardian, more than a thousand vehicles filled the Beijing motor show on Friday, and barely any of them needed someone behind the wheel. Domestic Chinese EV sales are slowing. Manufacturers are pivoting hard to autonomy and to exports.

The strategic read is straightforward. China has already won the EV battery and assembly race. The next race is the software stack — perception, planning, safety arbitration — that lets a car drive itself reliably enough to ship. If Chinese firms reach scale on Level 3 and 4 autonomy while Western regulators are still litigating liability every time a Tesla swerves, the cost of catching up climbs every quarter. Britain has a real autonomy sector around Oxford and Cambridge. It will not survive at small scale. The Beijing show was a reminder that mobility is now a question of which legal system can absorb the risk fastest, not which engineer is cleverest.

What does Altman's apology really admit?

OpenAI chief executive Sam Altman wrote a brief letter on Thursday to residents of Tumbler Ridge, Canada, the BBC reports, apologising for failing to alert police to a mass shooting suspect's account ahead of a January attack. The apology runs to a paragraph. The implication runs further.

What duties does a frontier-AI company owe to law enforcement when its product is used by someone preparing violence? Today, none that are clearly written down. Altman's letter quietly concedes the answer probably should not stay "none". Expect Westminster to notice. The UK already has a regulator-by-press-release approach to AI safety; one comparable incident involving a Commonwealth ally would harden that overnight.

What to keep in mind

Innovation is no longer a neutral noun. The British state is buying it from firms whose CEOs publish political programmes. The American state is defending those firms against its own regulators. The Chinese state is flooding showrooms with autonomous vehicles its rivals cannot yet ship. Governments that pretend tech procurement is just procurement, and AI regulation is just regulation, will keep being out-manoeuvred by governments that treat both as instruments of power.