Blog | Healthcare
6th April, 2026
Managing Director of Healthcare and Life Sciences at Brillio
Praveen Soti is a physician, thought leader, and business executive with over 25 years of global experience. He oversees strategic growth across healthcare and life sciences at Brillio. Praveen holds an MD and MBA, is an alum of the Stanford Graduate School of Business and the Nashville Health Care Council, and has served as Board Chair of the Microsoft Health Users Group. As an operator and innovator with private equity and public company experience, he specializes in enabling meaningful industry solutions and capabilities through the use of data and AI.
The next chapter of AI in healthcare is being written not in research labs or keynote presentations, but inside hospitals, imaging suites, and clinical workflows where real decisions are made every day. At the Nashville Health Care Council Fellows Alumni Session on March 31, 2026, a fireside chat brought together senior leaders for a grounded, forward-looking conversation on how AI is already creating meaningful value and what it will take to scale that value across health systems.
The discussion was practical, optimistic, and deeply rooted in the realities of care delivery. Here are the ideas that matter most as healthcare moves from AI exploration to AI execution.
The most powerful framing to emerge from the session was simple: AI works best when it amplifies what clinicians already do well. Evidence from radiology workflows consistently shows that the strongest outcomes come when AI and clinicians work in tandem.
This is an empowering idea. It means AI is not about replacing judgment; it is about giving clinicians better information, faster, so they can make more confident decisions. The goal is a future where every clinician is supported by tools that make their expertise go further, reaching more patients, more quickly, with greater precision.
For AI to deliver value, it has to live where clinicians already work. The most successful AI applications are those that reduce burden, save time, and fit naturally into existing clinical routines.
Ambient listening is perhaps the clearest example of AI done right. By capturing clinical conversations and handling documentation automatically, it gives time back to care teams and allows them to focus on what matters most: the patient in front of them. The principle guiding the best implementations, embedding AI within a seamless, unified experience across EMRs, imaging platforms, and command centers, is one that health systems are increasingly designing toward.
Imaging is one of the most mature and promising areas of AI in healthcare, and the current moment is one of real acceleration. AI is enabling better image quality, lower radiation exposure, faster scan times, and significant gains in daily throughput, translating directly into more patients served and faster diagnoses delivered.
Beyond individual scan improvements, AI-enabled operational infrastructure is helping health systems extend the reach of specialist expertise across geographies. Digital pathology and radiology command models are making it possible to connect patients with the right expertise regardless of where they are located.
The session was emphatic on this point: the most effective AI implementations are built with clinicians, not simply delivered to them. When care teams are involved in designing and refining the tools they use, adoption follows naturally because the tools reflect how care is actually delivered, not how it was imagined from the outside.
This approach also cultivates something invaluable: internal champions. Clinicians who help shape a solution become its most credible advocates, building trust among peers in ways that no external message can replicate. Co-creation is not just good design practice; it is the foundation of sustainable adoption.
Responsible AI governance is increasingly recognized not as a constraint on innovation, but as what makes innovation trustworthy and scalable. The most effective governance models create what might be called “freedom within a framework” — clear guardrails that protect patients and organizations while giving teams the space to move quickly and learn.
This includes building in feedback loops, monitoring performance across diverse patient populations, and ensuring that AI tools are continually evaluated against real-world outcomes. A thoughtful governance posture gives health systems the confidence to scale what works and the clarity to adjust course when context changes.
In clinical settings, trust is everything, and it must be earned. Clinicians need to understand not just what an AI system recommends, but why. What data informed the recommendation? How does it connect to the patient’s broader history? Is the reasoning traceable and clear?
The session highlighted the importance of explainability at the point of care, and the risk of alert fatigue when AI outputs are opaque or disconnected from clinical context. The systems earning the deepest trust are those that make their reasoning visible, connect recommendations to longitudinal patient data, and support clinicians in exercising their own judgment rather than short-circuiting it.
Perhaps the most forward-looking thread of the session was this: as the technology matures, organizational leadership becomes the primary differentiator. The health systems that will lead in AI are those whose boards and executives treat it as a strategic priority with written action plans, clear accountability, and a commitment to embedding AI across operations, not just in isolated pilots.
This also means investing in the people who make change happen at the front lines — the internal builders and champions who help teams see what is possible and make new ways of working feel achievable. Leadership that empowers these individuals, and creates the conditions for them to succeed, will define the next era of healthcare AI.
AI in healthcare has moved well past the question of whether it works. The conversation now is about how to scale thoughtfully, equitably, and with human expertise always at the center.
This is the moment where intent turns into execution.
Healthcare does not need more pilots with vague promise. It needs fewer, better bets scaled with conviction.
Because in the end, the value of AI will not be measured by the sophistication of the model. It will be measured by better decisions, better experiences, better operations, and better outcomes for patients.
That is the real opportunity ahead.