NHS closes hundreds of GitHub repos over AI security fears

The UK's National Health Service is pulling hundreds of open source projects behind closed doors. The healthcare giant cited concerns about advanced AI models, particularly Anthropic's Mythos, which its maker says can analyze code at scale.
NHS technology leaders have until May 11 to switch their GitHub repositories from public to private. The move marks a significant shift for an organization that has long championed open source development.
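In practice, flipping a repository from public to private can be scripted with GitHub's official `gh` CLI. The sketch below is a dry run only: it prints the `gh repo edit` commands it would issue rather than executing them, and the org name (`nhs-example`) and repo names are placeholders, not real NHS accounts or repositories.

```shell
#!/bin/sh
# Hypothetical dry-run sketch of a bulk visibility change.
# "nhs-example" is a placeholder organization name.
ORG="nhs-example"

make_private_cmds() {
  # In a live run, the repo list would come from something like:
  #   gh repo list "$ORG" --visibility public --json name --jq '.[].name'
  # Here we take a space-separated list of names as an argument instead.
  repos="$1"
  for repo in $repos; do
    # Print, rather than run, the command that would flip visibility.
    echo "gh repo edit ${ORG}/${repo} --visibility private"
  done
}

# Dry run over two placeholder repository names.
make_private_cmds "clinic-scheduler docs-site"
```

Printing the commands first lets a team review the full list before anything changes; removing the `echo` (and supplying real names and authentication) would perform the actual switch.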
What's the news?
Internal NHS guidance obtained by The Register warns that public repositories "materially increase the risk of unintended disclosure of source code, architectural decisions, configuration detail, and contextual information that may be exploited."
The guidance specifically mentions "rapid advancements in AI models capable of large-scale code ingestion, inference, and reasoning" including Mythos. The NHS Engineering Board approved the decision, which affects hundreds of repositories.
An NHS England spokesperson called this a "temporary measure enacted while the organization shores up its cybersecurity posture." They said the NHS will "continue to publish source code where there is a clear need."
NHS sources say most of the affected repositories contain non-sensitive content:
- Documentation and architecture diagrams
- Internal tools and web applications
- Code for managing clinic scheduling systems
Why does it matter?
This decision represents a major policy reversal for the NHS, which has historically favored open source development. The organization's service manual states that "all new source code should be made open source" because "public services are built with public money."
The move highlights growing concerns across major organizations about AI models' ability to find vulnerabilities in publicly available code. If other large institutions follow suit, it could signal a broader retreat from open source practices in critical infrastructure sectors.
However, security experts question whether closing repositories now provides meaningful protection. Former NHSX open technology head Terence Eden argues that code "was all ingested for training purposes years ago" and remains accessible through various archives and backups.
The context
Anthropic's Mythos model has generated significant debate in security circles. The company claims it can rapidly find vulnerabilities that skilled human teams would miss. UK authorities, including the AI Safety Institute and National Cyber Security Centre, have validated some of these claims.
Critics remain skeptical about Mythos's actual capabilities:
- Anthropic hasn't disclosed the model's false positive rate
- Tests show the gap between Mythos and open source models is smaller than claimed
- The model is currently restricted to select organizations through Project Glasswing
Forrester analysts warn that once powerful AI models become publicly available to attackers, open source software will face genuine threats. However, many security experts argue that organizations face bigger risks from phishing attacks, poor password practices, and supply chain vulnerabilities than from AI-powered code analysis.
The NHS previously sparked concern when it deleted web pages about its open source approach, though officials said this was routine cleanup during organizational restructuring.

