Organizations have regulatory obligations and quality, safety, and compliance goals that historically depend on post-encounter chart reviews—traditional peer reviews by clinicians and targeted audits by specialized teams. Human time and effort remain the primary rate limiter, leading to sample-based approaches that lengthen time to findings, constrain insights, and force careful resource allocation.
At MIT Health, each provider was peer reviewed on roughly a dozen encounters per year. With providers seeing approximately 2,500 visits annually, less than half a percent of encounters were reviewed. At that sampling rate, meaningful oversight of quality, safety, and compliance is statistically out of reach, a gap only magnified inside large, multi-hospital systems caring for millions of patients.
But what if, instead of reviewing 0.5 percent of encounters, you could review all of them?
MIT Health's experience offers an early, practical glimpse of that future and a roadmap for large systems moving from audit-based scarcity to AI-enabled abundance.
Changing the denominator at MIT Health
MIT Health worked with Notable, an AI platform purpose-built for healthcare, to leverage the company’s workflow automation and agentic AI tools, as well as existing integrations with MIT Health's EHR. These tools provide a self-service, low-code platform that allows the MIT Health team to build whatever it can envision. All workflows and AI prompts were developed by MIT Health team members without formal computer science backgrounds, an important consideration for low-resource environments.
Combining automation, FHIR APIs, and large language models makes it possible to analyze full encounter notes, not just discrete fields, in near real time. Once the pipeline was established, the marginal effort to review additional notes was negligible, allowing MIT Health to move chart review from sampling to universality.
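The review pattern described here can be sketched in a few lines: pull the note text, ask a language model to judge it against one explicit rule, and route anything non-compliant (or unparseable) to a human. This is a minimal illustration, assuming a hypothetical prompt, rule, and JSON response schema; it is not MIT Health's or Notable's actual implementation.

```python
import json

# Illustrative rule and prompt wording (assumptions, not actual prompts).
REVIEW_PROMPT = (
    "You are a clinical quality reviewer. Read the encounter note below and\n"
    'reply in JSON: {"compliant": true/false, "rationale": "..."}.\n'
    "Rule: a positive behavioral health screen must have a documented\n"
    "follow-up plan.\n"
)

def build_review_request(note_text: str) -> str:
    """Assemble the prompt sent to an LLM for one encounter note."""
    return REVIEW_PROMPT + "\nNote:\n" + note_text

def triage_verdict(llm_response: str) -> dict:
    """Parse the model's JSON verdict. Anything non-compliant or
    unparseable is routed to a human reviewer, never silently dropped."""
    try:
        verdict = json.loads(llm_response)
        compliant = bool(verdict["compliant"])
        return {
            "compliant": compliant,
            "rationale": str(verdict["rationale"]),
            "needs_human_review": not compliant,
        }
    except (json.JSONDecodeError, KeyError, TypeError):
        return {
            "compliant": False,
            "rationale": "unparseable model output",
            "needs_human_review": True,
        }
```

The key design choice, mirrored from the article, is that the model only screens: a negative or malformed verdict creates a human task with the rationale attached, rather than an automated final decision.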
This forced different questions: what's in a note that we care about, and what do we want to interrogate at scale? The system now reviews behavioral health and fall risk screening follow-up, procedure consenting, sensitive examination assistance documentation, and timely note completion—all fully automated. Identified instances of potential non-compliance are tasked to humans for review, with encounter content and AI reasoning presented within the workflow to maximize efficiency.
Since launching in Q4 2025, MIT Health has reviewed approximately 50,000 encounters, or 100% of licensed independent practitioner volume. Because every encounter is reviewed in real time, the approach generates high volumes of insights, enabling rapid trend identification and visibility into low-frequency edge cases that sampling would miss. Adding new quality measures takes days and can be done entirely in-house. The quality team now focuses on interpreting patterns, identifying gaps, and driving targeted improvement rather than manually collecting data.
From point audits to continuous intelligence
Two core archetypes emerged that map directly to how health systems think about quality and safety.
Prospective, trigger-based monitoring. Events now automatically trigger real-time workflows—ambulatory encounter completion, hospital and ED discharges, new referrals, and selected orders and results. Instead of staff analyzing whether a patient should get follow-up, the system assumes yes and automates toward the desired outcome. The same pattern applies to sepsis or heart failure discharges requiring timely follow-up, high-risk medication starts warranting early check-ins, or abnormal imaging results requiring documented action.
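The trigger-based pattern above amounts to a simple dispatch: a clinical event arrives, and the system assumes follow-up is warranted and launches the corresponding workflow. A minimal sketch, with event names and workflow identifiers that are illustrative assumptions rather than Notable's actual API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Event:
    kind: str        # e.g. "ed_discharge", "abnormal_imaging_result"
    patient_id: str

# Each trigger assumes "yes, follow up" and drives toward that outcome;
# humans intervene only when the automated path surfaces a problem.
TRIGGERED_WORKFLOWS = {
    "ed_discharge": "schedule_followup_call",
    "sepsis_discharge": "book_seven_day_visit",
    "high_risk_med_start": "early_checkin_outreach",
    "abnormal_imaging_result": "verify_documented_action",
}

def route(event: Event) -> Optional[str]:
    """Return the workflow to launch for this event, or None if the
    event type is not under monitoring."""
    return TRIGGERED_WORKFLOWS.get(event.kind)
```

The inversion is in the default: untracked events fall through, but every tracked event starts a workflow automatically instead of waiting for a staff member to decide whether follow-up is needed.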
Retrospective, population-wide reviews. When a pediatrician wanted to understand fluoride application practices, the team built a flow to scan every pediatric well-child visit for fluoride discussion, application, and billing documentation. Build time was thirty minutes. MIT Health also performed a three-year retrospective review of all advanced imaging studies, surfacing patients whose abnormal results hadn't generated appropriate follow-up.
Redefining quality, compliance, and malpractice prevention
Most quality and compliance teams operate in scarcity, relying on auditors' instincts and availability, spending time developing sampling methodologies and abstracting data rather than acting on it. In MIT Health's model, every encounter is screened against defined rules and measures, with potential issues surfaced with context, rationale, and relevant note excerpts already assembled. Quality staff review only cases that warrant attention and focus on pattern recognition, prioritization, and intervention design.
This shift has implications for malpractice and risk management. Many inpatient events trace back to outpatient misses—delayed diagnoses or inadequate documentation of shared decision-making. MIT Health's continuous review supports malpractice prevention by automatically monitoring policy compliance and identifying encounters where patients choose alternatives to recommended care, ensuring discussion and rationale are documented clearly. For larger systems, aligning automated checks with system-wide policies could provide a near real-time view of policy adherence across every location and service line.
From scarcity to abundance in operations
Historically, teams spent significant time touching something to decide if they had to touch it—skimming charts to see if review was necessary, triaging work queues, or messaging about whether to call a patient after discharge. With AI handling the first pass, clinicians receive pre-analyzed tasks rather than raw queues. Time once spent on "chart biopsy" shifts to patient communication and decision-making.
When human labor is the constraint, systems batch, throttle, and triage. When AI can screen every encounter, legacy workflows start to look like a choice rather than a necessity.
New competencies for system-level quality leadership
This capability amplifies rather than replaces human judgment, but it changes what quality, safety, and compliance leaders need to excel at. Leaders must translate policy language and quality measures into computable logic that AI can apply at the encounter level. This means capturing implicit institutional knowledge—unwritten rules, clinical judgment heuristics, and workflow expectations—and codifying it into explicit procedures that AI can consistently follow.
Leaders must also design and iterate review workflows with cross-functional teams, interpret high-volume continuous signals rather than small static samples, and embed findings into training, workflow redesign, and accountability structures so insights translate into change.
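What "translating policy into computable logic" looks like in practice can be made concrete: an unwritten institutional rule becomes an explicit predicate applied to every encounter. The sketch below assumes a hypothetical chaperone-documentation policy with made-up field names and exam types, purely for illustration.

```python
# Illustrative encoding of an implicit rule ("sensitive examinations
# require documented chaperone assistance") as an explicit, testable
# check. Field names and the exam-type list are assumptions.
SENSITIVE_EXAMS = {"pelvic", "breast", "genital", "rectal"}

def check_chaperone_policy(encounter: dict) -> dict:
    """Evaluate one encounter against the chaperone-documentation measure."""
    applies = encounter.get("exam_type") in SENSITIVE_EXAMS
    documented = bool(encounter.get("chaperone_documented"))
    return {
        "measure": "chaperone_documentation",
        "applies": applies,
        # Encounters the policy doesn't cover are compliant by definition.
        "compliant": (not applies) or documented,
    }
```

Writing the rule down this way forces the institutional conversation the article describes: which exams count as sensitive, what counts as documentation, and what happens when the check fails.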
How large health systems can start
Start with measures you already track—especially where manual review is expensive or insufficient—and choose one concrete use case narrow enough to show results quickly. Build a cross-functional team that favors people who understand workflows deeply, even if they aren't programmers. Keep humans in the loop for flagged cases; AI surfaces and organizes work, but humans decide what to do.
From audits to everything
MIT Health's experience shows that comprehensive, AI-powered quality monitoring is already operational in a real clinical environment. As financial and regulatory pressures intensify, sampling-based assurance will feel less acceptable to boards, regulators, and patients. When it's technically and operationally possible to review everything, deliberate blindness becomes harder to justify.
For quality and compliance professionals, this isn't about losing relevance. It's an invitation to step into a larger role: from auditors to designers of learning systems, from data collectors to strategic leaders, from scarcity managers to stewards of abundance.
Once you can review everything, the constraint is no longer what you can see. The constraint is what you choose to improve.
Connect with Notable to see how MIT Health achieved 100% encounter review and transformed their quality program from audit-based scarcity to AI-enabled excellence.


