Medicare has spent decades paying for healthcare in a way that mostly assumes care happens in discrete, billable moments: a visit, a procedure, a test, a hospitalization. But the reality of chronic illness, recovery, and social instability is messier. Patients don’t just need clinicians; they need continuity—someone to notice when symptoms worsen, to nudge adherence before it becomes a crisis, to coordinate transportation or housing support, and to make sure follow-up actually happens after the appointment ends.
That gap between what patients need and what Medicare has historically funded is now getting a new bridge, one that, according to a growing chorus of health policy observers, was effectively designed with AI-enabled “between-visit” care in mind, even if most of the tech world hasn’t caught up to what that means.
The core idea is straightforward: Medicare’s newer payment approach creates a pathway for the government to pay for services that occur outside traditional clinical encounters. In other words, it’s not only about reimbursing an app or a device. It’s about funding an ongoing care function—monitoring, outreach, coordination, and follow-through—that can be delivered by humans, software, or, increasingly, AI agents. The difference is that the payment model is structured so that these activities can be recognized as part of care delivery rather than treated as overhead or charity.
For companies building AI in healthcare, this matters because the hardest part of “AI for health” has never been the model itself. It’s been the reimbursement logic: who pays, for what exactly, under which rules, and how outcomes are measured. Many AI pilots have lived in a limbo where they can demonstrate promise but struggle to scale because the system doesn’t have a clear mechanism to pay for the work between visits.
Now, the argument is that ACCESS—an initiative highlighted in recent coverage—has helped create that mechanism for the first time, at least in a way that aligns with real-world workflows. The claim isn’t that AI magically replaces clinicians. It’s that the system is finally making room for a new kind of care layer: an agent that can monitor a patient’s status, check in proactively, coordinate referrals (including non-medical supports like housing), and help ensure medication follow-through.
To understand why this is such a big deal, it helps to look at what “between-visit” care actually includes. It’s not a single task. It’s a chain of small interventions that, when stitched together, prevent deterioration.
A patient leaves a clinic with instructions, but the next 48 hours are often where things go off track. Side effects appear. Confusion sets in. A pharmacy delay turns into missed doses. A housing situation makes it hard to store medications properly or keep appointments. A caregiver can’t be reached. Symptoms that seem minor at first become urgent later.
In a traditional model, these problems are addressed when the patient calls, shows up, or gets readmitted. That’s expensive, reactive, and often avoidable. Between-visit support aims to shift the system from reactive to proactive. It’s the difference between waiting for a crisis and catching the early warning signs.
Historically, however, Medicare has struggled to pay for that proactive layer. Some services exist in pockets—care management programs, certain telehealth structures, home health in specific circumstances—but the overall architecture has been fragmented. Many of the activities that would make between-visit care effective—frequent check-ins, coordination across systems, reminders that aren’t just “patient education,” and follow-through on referrals—don’t map neatly onto existing billing codes or program requirements.
So when people say Medicare’s new payment model is built for AI, they’re not claiming that Medicare is “pro-AI” in a marketing sense. They’re pointing to something more technical and more consequential: the payment design is compatible with the operational reality of AI-enabled care coordination.
AI agents are particularly suited to between-visit work because the tasks are repetitive, time-sensitive, and multi-step. An agent can triage incoming signals, interpret symptom reports, detect patterns that suggest risk, and then trigger the right next action—whether that’s a message to the patient, an escalation to a nurse, or a referral to a housing resource. It can also maintain context across days, which is crucial when care is not happening in a single appointment.
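To make that concrete, the triage-and-escalate loop described above can be sketched as a small routing function. This is a minimal illustration, not any real program's logic: the `Signal` fields, thresholds, and action names are all assumptions introduced for the example.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    """Hypothetical between-visit signal; fields are illustrative assumptions."""
    symptom_severity: int   # 0 (none) to 5 (severe), patient-reported
    missed_doses: int       # doses missed since the last check-in
    days_since_contact: int # days since the agent last reached the patient

def next_action(signal: Signal) -> str:
    """Pick the next between-visit step for one patient signal.

    The ordering encodes the escalation policy: clinical risk first,
    then adherence problems, then routine proactive outreach.
    """
    if signal.symptom_severity >= 4:
        return "escalate_to_nurse"    # high-acuity reports go to a human
    if signal.missed_doses >= 2:
        return "adherence_outreach"   # clarify instructions, probe for barriers
    if signal.days_since_contact >= 7:
        return "proactive_check_in"   # routine message to maintain continuity
    return "continue_monitoring"
```

The point of the sketch is the shape, not the thresholds: the agent's value comes from running this loop every day for every patient, with an explicit, auditable rule for when a human takes over.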
But compatibility with AI isn’t automatic. A payment model can be “between-visit” in theory and still be unusable in practice if it requires documentation that doesn’t match how care is delivered, or if it doesn’t define measurable outcomes. What’s being argued here is that the new approach creates a clearer government funding pathway for this kind of continuous support, making it easier for organizations to build and sustain services rather than run short-term pilots.
ACCESS, in this framing, is described as creating that mechanism for the first time. The emphasis on “for the first time” is important. It suggests that while there have been experiments and partial programs, there hasn’t been a consistent, scalable funding route for the full between-visit bundle—monitoring plus outreach plus coordination plus medication follow-through—especially when non-medical needs are involved.
That last point—housing and other social supports—is where the story becomes especially revealing. Healthcare outcomes are deeply influenced by factors that don’t live inside clinics. If a patient can’t secure stable housing, medication adherence becomes harder. If a patient lacks reliable transportation, follow-up appointments slip. If a patient is isolated, symptoms may go unreported until they become severe.
Traditional care coordination often depends on staff capacity: case managers, social workers, community health workers. Those roles are valuable, but they’re limited by workforce shortages and administrative burdens. AI-enabled coordination doesn’t eliminate the need for human expertise, but it can reduce the friction: it can help identify needs earlier, route requests faster, and keep track of whether referrals were completed.
In practical terms, an AI agent that coordinates a housing referral isn’t just sending a link. It’s gathering information, confirming eligibility, scheduling next steps, and checking whether the patient actually connected with the resource. It can also remind the patient about documents needed for intake, and it can escalate when the patient is stuck.
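That referral lifecycle is essentially a small state machine with a stuck-referral check. The sketch below is a hypothetical illustration: the stage names and the five-day escalation window are assumptions, not rules from any actual program.

```python
from enum import Enum, auto

class ReferralStage(Enum):
    """Illustrative housing-referral lifecycle; stages are assumptions."""
    INFO_GATHERED = auto()          # patient details and needs collected
    ELIGIBILITY_CONFIRMED = auto()  # resource's criteria checked
    INTAKE_SCHEDULED = auto()       # next step booked with the resource
    CONNECTED = auto()              # patient confirmed as connected

STAGES = list(ReferralStage)  # Enum preserves definition order

def advance(stage: ReferralStage) -> ReferralStage:
    """Move a referral to its next stage; CONNECTED is terminal."""
    i = STAGES.index(stage)
    return STAGES[min(i + 1, len(STAGES) - 1)]

def needs_escalation(stage: ReferralStage, days_in_stage: int) -> bool:
    """Flag a referral stuck in one stage so a human can intervene."""
    return stage is not ReferralStage.CONNECTED and days_in_stage > 5
```

Tracking `days_in_stage` is what keeps referrals out of the black hole: a referral that stalls anywhere short of "connected" surfaces for human follow-up instead of silently expiring.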
This is the kind of work that is difficult to fund under older models because it doesn’t fit neatly into a single encounter. It’s ongoing. It’s relational. It’s operational. And it’s exactly the kind of work that AI can help deliver at scale—if the system will pay for it.
The more provocative claim in this discussion is that the tech world may be looking at AI in healthcare through the wrong lens. Many observers focus on the novelty of AI capabilities: better triage, improved imaging, more accurate predictions. Those are real, but they’re not the bottleneck for adoption. The bottleneck is whether the healthcare system can afford and operationalize the service.
When payment models evolve to recognize between-visit care as a reimbursable function, the conversation shifts. Suddenly, AI isn’t just a tool; it becomes part of the care delivery infrastructure. That changes incentives. It encourages vendors to build for compliance and measurement, not just for demos. It pushes organizations to define what “success” looks like: fewer avoidable hospitalizations, improved adherence, better follow-up completion, reduced emergency department use, and improved patient-reported outcomes.
It also forces clarity about what the AI agent does and what it doesn’t do. A credible between-visit model must include guardrails: when to escalate to a clinician, how to handle uncertainty, how to protect privacy, and how to ensure that the agent’s recommendations are safe. Payment models that support these services implicitly require accountability. If Medicare is paying for between-visit support, then the system will want evidence that the support improves outcomes and doesn’t create new risks.
That’s where the “built for AI” claim becomes more than a slogan. AI agents thrive when they can operate within defined workflows and measurable goals. They need structured inputs, clear escalation pathways, and outcome metrics that can be tracked over time. A payment model that funds between-visit care can provide that structure, turning a vague concept (“care coordination”) into a defined service with performance expectations.
There’s also a cultural shift embedded in this. For years, many healthcare organizations treated between-visit support as optional or philanthropic. It was something you did if you had extra resources. But if Medicare’s payment architecture now supports it, then between-visit care becomes part of standard care delivery. That’s a profound change in how healthcare organizations plan staffing, technology, and operations.
And it’s not just about technology companies. It affects hospitals, primary care practices, accountable care organizations, and community-based providers. If the system can reimburse for between-visit monitoring and coordination, then organizations will compete on execution: how quickly they respond, how effectively they coordinate referrals, and how reliably they ensure medication follow-through.
Medication follow-through is a particularly telling example. It’s easy to talk about adherence, but it’s hard to operationalize. Patients miss doses for many reasons: side effects, confusion about instructions, cost barriers, pharmacy delays, lack of understanding, or simply forgetting. An AI agent can help by checking in, clarifying instructions, flagging side effects, and coordinating with pharmacies or clinicians when issues arise. But again, the key is that the system must pay for the work of doing those things consistently.
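The adherence work described above is largely routing: match the reason a dose was missed to the right follow-through step. A minimal sketch, assuming a hypothetical set of barrier labels and action names (none of these come from a real system):

```python
# Map a reported missed-dose barrier to the agent's coordination step.
# Both the barrier labels and the actions are illustrative assumptions.
BARRIER_ACTIONS = {
    "side_effects": "flag_to_clinician",
    "confusing_instructions": "send_clarified_instructions",
    "cost": "check_assistance_programs",
    "pharmacy_delay": "contact_pharmacy",
    "forgot": "set_reminder_schedule",
}

def adherence_step(barrier: str) -> str:
    """Route a missed-dose reason to a follow-through action.

    Unrecognized barriers default to human outreach rather than a guess.
    """
    return BARRIER_ACTIONS.get(barrier, "schedule_human_outreach")
```

The design choice worth noticing is the default: when the agent can't classify the barrier, it hands off to a person instead of improvising, which is the kind of guardrail a payer funding this work would expect.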
Without reimbursement, adherence support tends to be sporadic. With reimbursement, it becomes a service line. That’s how you get from “we tried an AI chatbot” to “we run an ongoing between-visit program.”
The housing referral piece adds another layer of complexity. Coordinating housing is not a simple medical referral. It involves eligibility criteria, documentation, local resource availability, and sometimes legal or administrative processes. An AI agent can’t conjure housing, but it can reduce the administrative burden on patients and providers by guiding the process, tracking progress, and ensuring that referrals don’t fall into a black hole.
If Medicare’s payment model recognizes the value of addressing social determinants of health through coordinated support, then that coordination stops being a side project and becomes a fundable part of care delivery, with AI agents among the most practical ways to deliver it consistently and at scale.