May 1, 2025
3 min read

AI data security: What healthcare leaders need to know

How can we ensure AI data security when using Intelligent Agents? Hear from Notable’s data intelligence experts to learn how our AI Agents work, and how we implement AI safeguards to protect patient data.

By Kevin Huang

The healthcare industry is in a period of transformation, driven by technological advancements that promise greater efficiency, streamlined operations, and a better patient experience. AI in particular stands at the forefront of this transformation, with AI use among physicians up 78% from 2023.

As healthcare organizations continue to embrace technological innovation, privacy concerns about the use of AI in healthcare naturally arise. Handling sensitive Protected Health Information (PHI) demands the highest standards of care. Healthcare leaders frequently ask us: “How do we protect patient data when using AI in healthcare?” This question is central to responsible innovation in modern healthcare. 

Throughout this article, we'll provide clear answers on how AI interacts with patient information, the essential safeguards that must be implemented, and our comprehensive approach to protecting sensitive data. Whether you're evaluating potential AI solutions or strengthening your existing security framework, you'll find actionable guidance on a question many are asking: how can we harness the benefits of AI Agents while ensuring the security and confidentiality of patient data?

At Notable, this isn't just a consideration; it's a foundational principle.  

The foundations of AI data security in healthcare 

Traditionally, healthcare technology systems rely on a client-server architecture, in which user devices (clients) request information or services from a central computer (server) that stores data and manages resources.

When AI Agents—intelligent systems that automate healthcare tasks—are implemented, the core workflow structure may persist, but the operational dynamics shift significantly. These Agents actively participate in the workflow, moving beyond solely human-led processes. They enhance efficiency by completing tasks independently in the background or by interacting directly with users, gathering their requests and performing the necessary actions.

Notable integrates its platform, including AI Agents, with existing systems like Electronic Health Records (EHRs) using secure methods such as FHIR APIs, HL7 interfaces, or Robotic Process Automation (RPA). The EHR typically remains the ultimate source of truth for patient data. Notable's AI then unlocks and processes structured and unstructured data to perform tasks like data entry, document uploads, and patient outreach, essentially acting like highly efficient digital staff members.  
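To make the integration pattern concrete, here is a minimal sketch of how a system might construct a read request for a single patient record over a standard FHIR API. The base URL, patient ID, and token are hypothetical placeholders for illustration; this is not Notable's actual integration code.

```python
# Illustrative sketch only: building a scoped FHIR read request.
# FHIR_BASE, the patient ID, and the token are hypothetical.

FHIR_BASE = "https://ehr.example.org/fhir"  # hypothetical EHR endpoint

def build_patient_request(patient_id: str, access_token: str) -> dict:
    """Describe a read request for exactly one Patient resource.

    The request targets a single resource, mirroring the idea that the
    integration pulls only the record needed for the task at hand.
    """
    return {
        "method": "GET",
        "url": f"{FHIR_BASE}/Patient/{patient_id}",
        "headers": {
            "Authorization": f"Bearer {access_token}",
            "Accept": "application/fhir+json",  # standard FHIR media type
        },
    }

req = build_patient_request("12345", "short-lived-token")
print(req["url"])
```

Because the request addresses one resource by ID rather than querying the whole database, the scope of each data pull is explicit and auditable.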

While AI integration unlocks powerful automation capabilities, it also introduces specific risks that must be carefully managed:  

  • Data privacy and security breaches: Preventing unauthorized access and protecting PHI from external attacks or internal vulnerabilities.
  • Bias and fairness: Ensuring AI does not perpetuate existing biases found in historical data.
  • Transparency and explainability: Avoiding “black box” risks where users cannot understand why an AI Agent made a specific recommendation or took a certain action.  
  • Security vulnerabilities: Protecting AI systems from novel attacks like prompt injection and jailbreaking.

Addressing these risks requires proactive, layered defenses anchored by robust governance and cutting-edge security best practices.

Understanding AI access to patient data

One common myth about using AI Agents in healthcare is that they have unrestricted, direct access to the entire patient database. This is not the case with Notable's AI Agents.  

Our Intelligent Agents are specifically designed to operate without direct access to underlying database records or the full EHR. Here’s how we ensure data is accessed responsibly:  

  1. Templated configuration: When setting up an automation workflow (e.g., for clinical chart review or document processing), developers use templated placeholders within the AI Agent's instructions, not actual patient data. This prevents unnecessary exposure of PHI during the configuration phase.
  2. Runtime data ingestion: Patient information is only inserted into these placeholders at the moment the automation runs for a specific task.  
  3. Scoped access: Each AI workflow is connected to a specific event or task, such as a patient visit, an insurance claim, or a lab order. When the AI needs to pull information, it only accesses the data directly related to that event, and nothing else. The system is carefully designed to prevent the AI from accidentally viewing or retrieving other patient records that aren't necessary for the task at hand.
  4. Authenticated API calls (conversational AI): For user-facing AI Agents, like Notable’s Assistant, information is only pulled after a user logs in securely using multi-factor authentication. Once logged in, the AI Agent works on the user’s behalf, accessing only the specific information needed for that conversation. It uses a temporary access token and a secure connection that expires after completing the task.

To summarize, the AI Agent accesses only the minimum necessary information required to complete its specific, assigned task for a particular patient at that precise moment. It can’t browse the database freely or access data outside the scope of its immediate job.  
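The templated-configuration and runtime-ingestion steps above can be sketched as a simple placeholder substitution. The template text, field names, and event data below are hypothetical examples, not Notable's actual configuration format:

```python
# Illustrative sketch only: placeholders in the agent instructions are
# filled with patient data only when the automation runs, and only with
# fields tied to one specific event. All names here are hypothetical.

TEMPLATE = (
    "Review the visit note for {patient_name} (DOB {dob}) "
    "and extract any documented allergies."
)

def render_at_runtime(template: str, event_data: dict) -> str:
    """Fill placeholders at the moment the automation runs.

    `event_data` holds only the fields for one specific event (e.g. one
    patient visit), so no data outside that scope is ever exposed.
    """
    return template.format(**event_data)

# During configuration, developers only ever see TEMPLATE (no PHI).
# At runtime, data for one specific visit is substituted:
prompt = render_at_runtime(
    TEMPLATE, {"patient_name": "Jane Doe", "dob": "1980-01-01"}
)
```

The key property is the separation of phases: configuration happens entirely against placeholders, and real values exist only for the lifetime of a single task.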

Comprehensive AI safeguards for PHI

Ensuring the security and privacy of patient data isn't an afterthought for Notable; it's central to our platform and governed by a comprehensive framework aligned with industry standards. 

Here are the key pillars of our approach:  

  1. HIPAA compliance and beyond: We strictly follow HIPAA regulations, enforce Business Associate Agreements (BAAs) with all partners—including cloud providers (like GCP, Azure, AWS) and AI/LLM providers (like OpenAI)—and ensure all employees complete HIPAA training. We also conduct regular internal audits and comply with other applicable industry, state, and federal regulations. 
  2. Zero-retention policy: Our agreements with LLM providers enforce immediate deletion of any data processed, ensuring nothing persists after task completion.
  3. Encryption: We use strong encryption methods for data in transit and at rest within HIPAA-compliant cloud environments.
  4. Strict access controls: Users and systems only have access to the data and functions essential for their role. This is enforced through Role-Based Access Control (RBAC) and requires multi-factor authentication for sensitive systems, with detailed audit trails and logs.
  5. Data minimization: Only the minimum necessary information, often filtered to include only relevant document types or data elements, is used for each task. 
  6. Bias mitigation: We proactively remove input data that could contribute to bias before processing, engineer models to ground outputs in evidence, and rigorously test across diverse patient samples to monitor for bias.
  7. Explainability and hallucination prevention: Our AI systems provide quoted, traceable evidence for their findings; human review is often required to validate outputs and prevent inaccuracies in critical workflow steps.
  8. Safety guardrails: We use AI guardrails to assess outputs for faithfulness, relevance, and harmful content, triggering regeneration or human review for flagged issues to ensure output quality.
  9. Secure development and operations: We follow secure coding standards like OWASP, conduct regular vulnerability assessments and penetration tests, and halt model processing if performance anomalies are detected.
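The safety-guardrail step can be sketched as a simple routing decision: assess the output, then release it, regenerate it, or escalate to human review. The check names and thresholds below are hypothetical illustrations, not Notable's actual guardrail system:

```python
# Illustrative sketch only: routing an AI output based on guardrail
# checks. Scores and thresholds are hypothetical; assume upstream
# evaluator models score faithfulness and relevance on a 0.0-1.0 scale.

def guardrail_decision(faithfulness: float, relevance: float,
                       contains_harmful_content: bool) -> str:
    """Decide what happens to a generated output."""
    if contains_harmful_content:
        return "human_review"   # never auto-release flagged content
    if faithfulness < 0.9 or relevance < 0.8:
        return "regenerate"     # quality below threshold: try again
    return "release"            # passes all checks

print(guardrail_decision(0.95, 0.9, False))
```

The design choice worth noting is the asymmetry: quality failures can be retried automatically, but safety flags always route to a human rather than to regeneration.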

Security that enables better care

This rigorous approach goes beyond regulatory compliance—it drives real-world impact. Healthcare organizations using Notable’s AI platform can expect:

  • Reduced risk exposure: By preventing direct database access and minimizing data flow, we significantly reduce the attack surface and the potential impact of any single point of failure.
  • Enhanced trust: Transparency measures, like providing evidence for AI suggestions and incorporating human review, build clinician trust and ensure AI serves as a co-pilot, not an unchecked black box. Qualified professionals always have the final say in clinical decisions. 
  • Focus on relevant care: Bias mitigation strategies and evidence-based outputs ensure the clinical validity and fairness of AI recommendations, freeing up staff time to take on complex, patient-facing tasks that only humans can perform.

The Notable Platform is built from the ground up to address the rigorous clinical, ethical, and regulatory requirements inherent in healthcare settings. Our safeguards ensure that intelligent automation can deliver true efficiency without compromising patient and provider trust.

Embracing AI responsibly

AI presents an extraordinary opportunity to transform healthcare operations for the better, with Intelligent Agents alleviating administrative burdens, streamlining workflows, enhancing patient engagement, and ultimately supporting clinicians in delivering higher-quality care.

The full potential of AI Agents, however, can only be realized through an unwavering commitment to data privacy, security, and transparency. Innovation and responsibility must go hand-in-hand. By embedding ethical protections into every layer of our platform, we enable healthcare organizations to embrace innovation confidently.
