Across the country, healthcare leaders are moving beyond the hype to make tangible headway with AI. Whether addressing workforce shortages, improving access, or reducing administrative burdens, organizations are recognizing that AI is a necessary tool for meeting today’s health system challenges.
At Noteworthy 2025, healthcare leaders from top institutions gathered to discuss one of the most pressing questions in the industry today: how to get started with AI effectively. Moderated by Dr. Aaron Neinstein, Chief Medical Officer at Notable, the panel featured innovative perspectives from Dr. Paul Lukac (UCLA Health), Diane Maas (Beacon Health System), Dr. Spencer Dorn (UNC Health), and Dr. Karandeep Singh (UC San Diego Health).
Throughout the conversation, a shared theme emerged: leveraging AI requires not just technological readiness, but also deep collaboration with clinical and operational teams, a thoughtful governance framework, and a vision for empowering the healthcare workforce of the future.
Creating the building blocks of an AI strategy
Establishing a guiding AI strategy is vital before embarking on any new AI project. It will set the goals, guidelines, and vision for what you’re trying to accomplish.
Dr. Paul Lukac shared that there are two main parts to a successful AI strategy: people and process. First, it’s important to co-design and co-implement AI with clinicians and staff, instead of deploying it “on” them once it’s already developed. “You can have the most advanced AI system in the world, but if people don’t trust it, or they don’t understand it, or they don’t feel like they were part of the journey, you will never get adoption for that technology.”
When physicians, nurses, and other frontline staff help shape workflows, they gain a sense of ownership and confidence in using the technology. Involving them also has the added benefit of promoting AI literacy.
The second part, Dr. Lukac stressed, is process: AI has to fit into clinical and operational workflows, not the other way around. “Once you have the people part down and they’re engaged, and you have the processes in place, the technology becomes more of an enabler, in a sense, rather than a roadblock for whoever is using it.”
Beacon Health System has a robust strategic planning process that focuses on what they call “the now, the near, and the far.” Diane Maas encourages her teams to reflect on what they’re doing today and how it will contribute to their vision for the future.
This practice shifts the focus from process to outcomes, answering questions like: What’s the yield on the revenue cycle? How much has workforce turnover decreased? How satisfied are staff and patients with a new initiative?
Maas added that this process is not just top-down: “It’s a very iterative up and down process, including the board, the physicians, management, and team members, to make sure that everybody has a chance to provide input.”
Navigating AI governance
When it comes to AI governance, Dr. Karandeep Singh started the conversation by stating that there’s still a lot to be figured out – and, at least in this current stage, health systems have to take the responsibility of doing so.
He clarified that AI governance is not actually about AI. “It’s about the system,” said Dr. Singh. UCSD Health uses an evaluation framework that considers appropriateness, accuracy, and alignment to govern its AI work. It engages the right parties and accounts for dependencies in each use case. For example:
- What’s the appropriate behavior for an AI Voice Agent calling a patient? What is not appropriate?
- What’s considered accurate vs. inaccurate information, especially when it comes to what the AI Voice Agent might say?
- What do we want the AI Voice Agent to do, and what is out of scope? For example, it shouldn’t share dinner recipes.
- Does the AI Voice Agent speak too quickly?
- How will we account for diverse patient populations once the English-speaking AI Agent is established?
Engaging the right parties helps to clarify these answers. Dr. Singh engaged UCSD Health’s Office of Patient Experience to help determine the right speaking cadence of an AI Voice Agent, for example.
Dr. Lukac reinforced this practice, adding that AI councils need multidisciplinary leadership. “We have some researchers, and those are the AI experts usually. But you need to have Legal. You need to have Compliance. You need to have Risk. You need to have Privacy.” These experts will help anticipate the downstream effects of AI and mitigate the unintended consequences that may arise from using the technology in each use case.
For example, when using a denials appeal drafting tool, he had experts on his AI council who could identify early on what might result in a penalty, or what might not work with a certain payer. The expert oversight also helped frontline teams feel more confident in using the technology in their day-to-day work.
Best practices for AI change management
When it comes to preparing your organization for AI, Dr. Spencer Dorn warned that this is a new era, and no one has all the answers: “This is an evolution, this is not a one-time event. So I think approaching things with humility is very important.”
He returned to Dr. Lukac’s initial statement that frontline workers must be involved in the process for smooth change management. Dr. Dorn used the example of scheduling a patient for a colonoscopy, which is a nuanced, specific process that requires the knowledge of someone who is close to the problem.
He added that it’s important to frame AI not as job displacement, but as job empowerment. Focus on the possibilities these tools enable for the people using them.
There will be new jobs created because of AI, and there will be existing jobs that will change because of AI, “but there is not a single job that will not somehow be affected by AI in the future,” added Dr. Lukac. It’s incumbent on leadership to shepherd people through the process.
“We want our healthcare workers across the system to feel like they’re supported,” he explained. “They still have a place in the future of healthcare, even more so, I think, because AI is just so different. And I think that will create a lot more new jobs rather than eliminate existing jobs.”
Dr. Singh added that AI is essential to the future of the industry: “We’ve got doctor shortages. We’ve got nursing shortages. An aging population. I think the math doesn’t work to actually get people high-quality care without AI-supported workflows.”
Actively engaging Labor Relations and Human Resources departments in AI change management can help ensure you’re proactive in creating new pathways to upskill staff. This helps staff feel confident that their jobs are valuable and builds trust around what jobs of the future might look like.
Keeping an eye out for what’s next on the AI frontier
Many healthcare executives have been getting started with AI by focusing on AI scribes. But what are these leaders seeing as the next big trend in AI adoption?
Dr. Dorn shared that he’s particularly excited about summarization. “No matter who you are in healthcare – whether you’re a gastroenterologist, an ICU physician, a critical care nurse, or a call center agent – fundamentally, healthcare is about processing information. And we’re just overloaded with too much information.” He sees opportunity in tools that summarize information and help clinicians manage the enormous amount of data now available.
He’s also looking at patient-facing AI as a major opportunity: “Even the best-run large system is still very intimidating to navigate. How do we help our patients better understand what they need to get care for? Where do they go? How do they do it?”
AI is imperative for the future of healthcare
As healthcare organizations navigate challenges from workforce shortages to rising operational complexity, AI is quickly moving from concept to imperative.
The future of AI in healthcare will depend on leadership that is thoughtful, inclusive, and focused on creating systems that improve outcomes for both patients and staff.
Read more insights from Noteworthy 2025 here.