US FDA starts using agentic AI in-house

The FDA is not tiptoeing into artificial intelligence anymore. It is leaning in. This week, the agency quietly crossed an important line by rolling out agentic AI capabilities to all FDA employees. Optional. Guardrailed. And ambitious.
Agentic AI is a step beyond the chatbots most people know. It is designed to think in steps, juggle goals, and act with intent. In short, it does more than answer questions. It gets work done.
As FDA Commissioner Marty Makary put it, "There has never been a better moment in agency history to modernize with tools that can radically improve our ability to accelerate more cures and meaningful treatments." That is not marketing talk. It is a signal.
How does it work?
Agentic AI systems are built to plan, reason, and execute tasks across multiple stages. Think of it as a junior analyst who never gets tired and always follows the rulebook.
At the FDA, that means AI can now assist with:
- Pre-market reviews and review validation
- Meeting management and administrative workflows
- Post-market surveillance
- Inspections and compliance work
These systems can pull from different AI models, sequence actions, and keep humans in the loop at every step. Oversight is baked in. Usage is voluntary. No one is forced to rely on it.
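The loop described above — plan a multi-step task, execute each step, and keep a human approver in the gate at every stage — can be sketched in a few lines of Python. To be clear, everything here (the `Agent` class, the hard-coded plan, the `approve` callback) is illustrative and assumed for the example; it is not the FDA's actual system, which the agency has not published.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class AgentStep:
    action: str
    result: Optional[str] = None
    approved: bool = False

class Agent:
    """Toy agentic loop: plan steps, then gate each one on human approval."""

    def __init__(self, approve: Callable[[str], bool]):
        self.approve = approve          # human-in-the-loop callback
        self.log: List[AgentStep] = []  # audit trail of every step

    def plan(self, goal: str) -> List[str]:
        # A real system would ask a model to decompose the goal;
        # here the plan is hard-coded for illustration.
        return [
            f"gather documents for: {goal}",
            f"summarize findings for: {goal}",
            f"draft report for: {goal}",
        ]

    def run(self, goal: str) -> List[AgentStep]:
        for action in self.plan(goal):
            step = AgentStep(action=action)
            step.approved = self.approve(action)  # human gate on every step
            if step.approved:
                # Stand-in for actually calling a tool or model.
                step.result = f"done: {action}"
            self.log.append(step)                 # nothing happens off the record
        return self.log

# Usage: the reviewer approves everything except the drafting step.
agent = Agent(approve=lambda a: not a.startswith("draft"))
log = agent.run("pre-market review X-123")
```

The point of the sketch is the shape, not the details: every action passes through a human decision, and the full log survives for oversight — the "guardrailed and voluntary" posture the FDA describes.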
This builds on the FDA's earlier deployment of Elsa, a large language model tool now used by more than 70 percent of staff. Teams have already been shaping it to fit real workflows, not demos. Agentic AI takes that same idea and stretches it further.
Why does it matter?
Because time matters at the FDA. Every review cycle. Every inspection. Every delay.
Agentic AI promises to reduce friction across complex processes that normally eat up hours or days. It does not replace scientific judgment. It clears the underbrush so experts can focus on decisions that actually require expertise.
Chief AI Officer Jeremy Walsh summed it up plainly. "Agentic AI will give them a powerful tool to streamline their work and help them ensure the safety and efficacy of regulated products."
There is also a cultural shift underway. The agency is launching a two-month Agentic AI Challenge, inviting staff to build and demonstrate real solutions. Not theory. Not slides. Working systems, showcased at the FDA Scientific Computing Day in January 2026.
That is how institutional change sticks. You let the people closest to the work shape the tools.
The context
This move did not come out of nowhere. Regulators around the world are under pressure to do more with less, even as the complexity of science continues to rise. AI is no longer a nice-to-have. It is infrastructure.
The FDA is also being careful. These tools run inside a high-security GovCloud environment. The models do not train on FDA input data. They do not learn from industry submissions. Sensitive research stays locked down.
That balance matters. Innovation without trust goes nowhere. By embedding AI directly into workflows while keeping strong safeguards, the FDA is signaling that modernization and responsibility can coexist.
In plain terms, the agency is treating AI less like a shiny object and more like electricity. Invisible when it works. Indispensable all the same.
💡Did you know?
You can take your DHArab experience to the next level with our Premium Membership.
👉 Click here to learn more
🛠️Featured tool
Easy-Peasy
An all-in-one AI tool offering the ability to build no-code AI Bots, create articles & social media posts, convert text into natural speech in 40+ languages, create and edit images, generate videos, and more.
👉 Click here to learn more

