Innovation: Linux 7.0, AI Glasses and the Machines That Grade Your Kids

Linux 7.0 arrives with AI-powered bug hunting, Meta's smart glasses raise privacy alarms, and China wants algorithms to mark homework.


Editorial digest, April 13, 2026
Last updated: 08:20

Monday morning, and three stories tell the same tale: artificial intelligence is no longer knocking at the door — it has moved in, rearranged the furniture, and started marking your children's homework. From Linus Torvalds pondering whether AI could reshape how the world's most important open-source project ships code, to China deploying algorithms in classrooms, to Meta strapping cameras to your face and calling it "personal super intelligence" — the question is no longer whether AI will permeate everything, but who gets to set the rules when it does.

What does Linux 7.0 mean for the future of software?

Linus Torvalds released Linux kernel 7.0 this week — a milestone that, according to The Register, also makes Rust support officially part of the project. That alone would be worth noting: Rust's inclusion in the kernel has been a years-long saga of technical debate and community friction. But the more striking element is Torvalds openly musing about AI's potential to find bugs and what that could mean for the kernel's release cycle.

Think about that for a moment. Linux runs everything from Android phones to most of the world's servers, cloud infrastructure and supercomputers. If AI tools can genuinely accelerate bug detection in a codebase this critical, the ripple effects reach every device you touch. Torvalds, not exactly known for indulging hype, appears to be taking the prospect seriously enough to consider its impact on how releases are managed. That is not a man swept up in Silicon Valley enthusiasm — it is an engineer asking a practical question about his workflow.

The version number itself is largely cosmetic — Torvalds has long said major numbers are arbitrary. But the substance underneath is not. Official Rust support opens the door to memory-safe code in the kernel's most security-sensitive components. Combined with AI-assisted auditing, it sketches a future where the software underpinning civilisation becomes measurably harder to break.
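For readers wondering what "memory-safe" means in practice, here is a minimal sketch in Rust (illustrative only, not actual kernel code): where a stray read past a buffer in C is silent undefined behaviour, safe Rust turns it into a checked result the compiler forces you to handle.

```rust
// Illustrative sketch: out-of-bounds access in safe Rust is a
// recoverable error, not silent memory corruption.
fn checked_read(buf: &[u8], idx: usize) -> Option<u8> {
    // slice::get returns None instead of reading past the end of
    // the buffer; there is no way to dereference invalid memory here.
    buf.get(idx).copied()
}

fn main() {
    let buf = [0xDE, 0xAD, 0xBE, 0xEF];
    // A valid index yields the byte.
    assert_eq!(checked_read(&buf, 2), Some(0xBE));
    // An out-of-range index yields None. The C equivalent, buf[10],
    // would be undefined behaviour; a direct buf[10] in Rust would
    // panic deterministically rather than corrupt memory.
    assert_eq!(checked_read(&buf, 10), None);
}
```

That compiler-enforced discipline is the property security engineers want in the kernel's most exposed components, such as drivers parsing untrusted input.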

Are Meta's AI glasses a breakthrough or a surveillance device?

Meta's Ray-Ban smart glasses have been on the market for months, but a Guardian podcast this week offered something rare: an honest, extended account of what it is actually like to wear them daily. Journalist Elle Hunt spent a month with the device and came back with a split verdict.

On one hand, the potential for people with vision impairments or hearing loss is genuinely promising — real-time assistance that does not require pulling out a phone. On the other, Hunt's report surfaces the privacy dimension that Meta would rather you not dwell on. These are glasses with cameras and microphones, connected to AI, worn in public spaces. The people around you did not consent to being recorded or analysed.

Mark Zuckerberg's framing — "personal super intelligence that lets you stay present in the moment" — is a masterclass in euphemism. You are not more present; you are more surveilled, and so is everyone near you. Britain has no comprehensive law governing wearable recording devices in public. The conversation about regulating this technology is moving far slower than the hardware.

Why is China putting AI in charge of the classroom?

China's National Data Administration published an action plan last week for deploying AI across the country's education system, as reported by The Register. The ambition is sweeping: AI should prepare lessons, mark homework and upskill citizens to work alongside the technology.

The efficiency argument writes itself. Teachers are overworked everywhere, and automating marking frees time for actual teaching. But China's version raises questions that a British audience should watch closely, because the pressure to adopt similar tools here is already building.

Who designs the curriculum an AI teaches from? What biases live inside the model grading a teenager's essay? When an algorithm decides a student's answer is wrong, what is the appeals process? These are not abstract concerns. They are the governance questions that every education ministry will face within five years, and China is running the experiment first — inside a system where transparency is not the priority.

Britain's Department for Education has been cautiously exploring AI tools for administrative tasks. The gap between "AI helps with scheduling" and "AI marks your GCSEs" may feel vast, but the direction of travel is clear.

The thread that connects it all

Strip away the specific technologies and a pattern emerges. Linux 7.0 shows AI entering the infrastructure layer — the code beneath the code. Meta's glasses put AI on your face. China's plan puts it in front of your children. Each step is individually defensible. Collectively, they represent a pace of adoption that is outrunning the institutions meant to govern it.

The UK has positioned itself as a light-touch AI regulator, betting that flexibility will attract investment. That bet looks increasingly like a gamble. When the machines are grading homework, scanning public spaces and reshaping how critical software ships — all in the same week — "wait and see" stops being pragmatism and starts looking like negligence.