UK government commits over £500M to quantum tech – “Experts say quantum tech could change things like how we scan the brain or keep data safe, just like AI changed the world”.
Industrial strategy: Takeaways for UK tech innovations | Computer Weekly – “Labour wants to put the UK at the forefront of tech innovation. Its industrial strategy offers a funding boost for tech and lighter-touch regulation”.
Plans to transform Ravenscraig into one of the UK’s largest green AI data centres - Daily Record – “Developer Apatura is working closely with North Lanarkshire Council and site owners Ravenscraig Ltd to advance the proposal”.
Matt Clifford to step back as Prime Minister's AI advisor - UKTN – “The entrepreneur has been instrumental in UK AI policy”.
MHRA joins HealthAI Global Regulatory Network | UKAuthority – “MHRA said it will work with regulators around the world to share early warnings on safety, monitor how AI tools perform in practice, and shape international standards”.
The Great Language Flattening - The Atlantic (gift link, may only work for some) – “Chatbots learned from human writing. Now the influence may run in the other direction”.
5 books about AI to get you caught up in 2025 - Fast Company – “Among the authors is a veteran Pulitzer Prize-winning journalist who shadows the top thinkers in the field of AI”.
Inside Microsoft’s 2025 Responsible AI Transparency Report | AI Magazine – “Microsoft has published its second annual Responsible AI Transparency Report, showcasing the measures it is taking to ensure its development of AI is both ethical and open”.
A PDF or text file is all you need for Copilot to generate a PowerPoint for you in seconds - Neowin – “Microsoft has outlined a streamlined approach to creating PowerPoint presentations within seconds using a PDF or text file, and a natural language prompt as input”.
The $14 Billion AI Google Killer – “Why Meta and Apple want Perplexity AI, even if it's just a glorified chatbot”.
BBC Threatens to Sue Perplexity, Alleging 'Verbatim' Reproduction of Its Content - CNET – “This isn't the first time the AI company has been accused of infringing on content”.
AI search finds publishers starved of referral traffic • The Register – “Turn out the lights, the internet is over”.
LLMs factor in unrelated information when recommending medical treatments | MIT News | Massachusetts Institute of Technology – “Researchers find nonclinical information in patient messages — like typos, extra white space, and colorful language — reduces the accuracy of an AI model”.
Checking In on AI and the Big Five – Stratechery by Ben Thompson – “I thought it would be useful to revisit that 2023 analysis and re-evaluate the state of AI’s biggest players, primarily through the lens of the Big Five: Apple, Google, Meta, Microsoft, and Amazon”.
The struggle to get inside how AI models really work – “Anthropic, Google and OpenAI deploy ‘chains-of-thought’ to better understand the operations of AI systems”.
Multiple Studies Now Suggest That AI Will Make Us Morons – “Are we on the road to Idiocracy?”.
New data highlights the race to build more empathetic language models | TechCrunch – “while the major benchmarks still focus on left-brain logic skills, there’s been a quiet push within AI companies to make models more emotionally intelligent”.
New GitHub Copilot limits push AI users to pricier tiers • The Register – “Welcome to bill shock, AI style”.
Pope Leo XIV Urges Tech Executives to Come Up With an Ethical AI Framework - CNET – “The Pope is sending a message to tech executives about necessary AI guardrails”.
The résumé is dying, and AI is holding the smoking gun - Ars Technica – “As thousands of applications flood job posts, 'hiring slop' is kicking off an AI arms race”.
MCP servers used by developers and 'vibe coders' are riddled with vulnerabilities – here’s what you need to know | IT Pro – “New research shows misconfigured MCP servers are putting devs at risk”.
Major AI chatbots parrot CCP propaganda – “According to the American Security Project (ASP), the CCP’s extensive censorship and disinformation efforts have contaminated the global AI data market”.
DeepSeek accused of powering China’s military and mining US user data – Computerworld – “Explosive allegations reveal the AI firm’s role in battlefield simulations, chip smuggling, and exploiting systemic export control loopholes”.
DeepSeek’s Democratic Deficit – “But global access to an admittedly powerful — and, so far, free — AI model does not necessarily mean democratization of information”.
Using AI at work? Don't fall into these 7 AI security traps | Mashable – “AI can be a powerful tool for productivity, but risks come with its rewards”.
Unpacking the bias of large language models | MIT News | Massachusetts Institute of Technology – “In a new study, researchers discover the root cause of a type of bias in LLMs, paving the way for more accurate and reliable AI systems”.
Exclusive: Scale AI's Spam, Security Woes Plagued the Company While Serving Google – “How the startup that just scored a $14 billion investment from Meta struggled to contain ‘spammy behavior’ from unqualified contributors as it trained Gemini”.
Anthropic destroyed millions of print books to build its AI models - Ars Technica – “While destructive scanning is a common practice among smaller-scale operations, Anthropic's approach was somewhat unusual due to its massive scale”.
And finally…5 things AI does in movies and TV I'm still waiting on | TechRadar – “Impatiently awaiting the future”.