ChatGPT and the Future of Rehabilitation Therapy: Inside the AI‑Driven Revolution in Patient Recovery

Published: December 1, 2025


A new era in rehab: from long waiting lists to AI‑assisted recovery

Rehabilitation therapy is quietly becoming one of the most important testbeds for artificial intelligence — and especially for large language models (LLMs) like ChatGPT.

In the UK, the National Health Service is rolling out what’s been described as its first AI‑run physiotherapy clinic, Flok Health. Patients with back, neck or knee pain can self‑refer via an app, complete an automated assessment and receive same‑day video appointments with a “digital physiotherapist”. Early NHS pilots involving more than 1,000 staff found that almost all participants received same‑day triage, over 80% reported symptom improvement, and many rated the experience as comparable to traditional physiotherapy. Regulators have approved the platform as both a healthcare provider and a medical device, but professional bodies stress that AI cannot fully replace human clinical judgment.  [1]

At the same time, the UK government has announced “My Companion”, a ChatGPT‑style feature inside the NHS App that will guide patients through symptoms, medications and care decisions — a kind of “AI GP” designed to reduce pressure on services and shorten long waiting lists.  [2]

Together, these initiatives illustrate a broader shift: health systems are beginning to treat conversational AI not just as a novelty, but as infrastructure. Rehabilitation therapy — with its chronic conditions, behaviour change and home‑based exercise — is a natural place to experiment.


What ChatGPT can already do in rehabilitation today

Although most hospitals are still in pilot mode, ChatGPT and similar tools are already being used — formally and informally — across physical therapy, occupational therapy and speech‑language pathology.

A 2025 article from the Florida Occupational Therapy Association describes how therapists and patients are using ChatGPT at home and in community health settings. According to the authors, ChatGPT can:  [3]

  • Explain diagnoses, symptoms and treatment plans in everyday language
  • Reinforce exercise instructions between visits
  • Send motivational messages and reminders to complete home exercise programmes (see the sketch after this list)
  • Ask about pain levels and range of motion to help patients track progress
  • Help therapists brainstorm interventions, draft documentation and write goals
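
To make the reminder idea concrete, here is a minimal sketch in Python using the OpenAI SDK of how a clinic might prototype a daily exercise check‑in. The model name, prompt wording and exercise list are illustrative assumptions rather than any published protocol, and a real message would be reviewed by the treating therapist before it ever reached a patient.

```python
# Hypothetical sketch: drafting a home-exercise reminder with an LLM.
# Assumptions: OPENAI_API_KEY is set in the environment; the model name and
# prompt are placeholders; output is reviewed by a clinician before sending.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

exercises = [
    "Sit-to-stand from a firm chair, 3 sets of 8, once daily",
    "Heel raises at the kitchen counter, 2 sets of 10, twice daily",
]

prompt = (
    "Write a short, encouraging reminder (under 80 words, plain language) for "
    "a patient to complete today's home exercises. Ask them to note their pain "
    "on a 0-10 scale afterwards and to contact their therapist if it rises "
    "above 5.\n\nToday's exercises:\n"
    + "\n".join(f"- {e}" for e in exercises)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat-capable model would do
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

The same pattern could log the patient's pain rating back to a clinic dashboard, which is where the human therapist re‑enters the loop.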

The same piece — along with guidance from the American Occupational Therapy Association — notes that ChatGPT is increasingly used in OT education for case simulations, clinical reasoning practice and exam preparation.  [4]

In parallel, digital health and software companies are integrating ChatGPT‑like tools into:

  • Musculoskeletal (MSK) and virtual PT platforms, where AI helps triage patients, generate initial home programmes and answer FAQs about back or joint pain  [5]
  • Electronic health records (EHRs), where AI assists with clinical documentation, billing codes and outcome tracking so therapists can spend more time in direct care  [6]
  • Tele‑rehab and coaching apps, where a conversational layer improves engagement compared with static text or video content  [7]

In other words, ChatGPT is already acting as:

  • patient educator,
  • virtual coach, and
  • clinical co‑pilot for therapists.

The open question is how reliable — and how safe — those roles really are.


The evidence so far: promising, but uneven

Until recently, most claims about ChatGPT in rehab were anecdotal. That’s changing fast.

In late 2025, a narrative review in Frontiers in Digital Health analysed dozens of studies on ChatGPT in rehabilitation medicine. The authors concluded that ChatGPT performs well on structured, guideline‑based tasks such as explaining low back pain red‑flag symptoms or describing standard rehabilitation options, but shows “critical shortcomings” in complex, individualized cases that demand nuanced clinical reasoning.  [8]

Key themes from that and related reviews:

  • ChatGPT 4.0 often provides accurate, generally safe answers for routine musculoskeletal questions, with some studies reporting accuracy figures around 70–80% for basic advice.  [9]
  • However, performance drops in areas where evidence is sparse or conflicting, and in cases that require prioritising between multiple comorbidities or psychosocial factors.  [10]
  • ChatGPT’s references are frequently unreliable; one review found that fewer than one in five citations generated by the model could be verified, a serious limitation for evidence‑based practice.  [11]

A 2023 case study from Japan used ChatGPT‑4 to generate a full rehabilitation prescription and ICF coding for a textbook stroke case. Clinicians judged the plan to be broad, reasonable and quickly produced, but also more generic than human‑written versions and not fully accurate in its classification details.  [12]

And a 2025 mini‑review of personalized rehabilitation reported that ChatGPT‑based systems often achieve 55–80% agreement with clinicians when designing treatment plans, but only around 70% performance when more precise, real‑time adaptation is required.  [13]

In simple terms: ChatGPT is becoming a capable assistant in rehab planning and education, but it’s not ready to be the primary decision‑maker.


Virtual coaches and AI assistants: closing the adherence gap

Rehabilitation lives or dies on adherence. For many conditions, what patients do between appointments matters as much as what happens in the clinic. That is where AI — particularly conversational agents — is already making a measurable difference.

A 2025 systematic review in Artificial Intelligence Review looked at eight studies using AI‑driven virtual assistants (AIVAs) to promote physical activity. Chatbot‑based interventions produced small to moderate improvements in step counts and moderate‑to‑vigorous physical activity, especially when they offered dynamic feedback, personalised goals and an empathetic tone. However, results were inconsistent, and many trials had a high risk of bias.  [14]

Another 2025 review of AI virtual personal assistants and disability‑focused healthcare found that AI‑driven systems can improve home‑based rehabilitation and reduce in‑person visits when combined with wearable sensors and remote monitoring — but highlighted concerns about privacy, digital literacy and long‑term engagement.  [15]

More recently:

  • A 2025 study testing a physical activity app with a ChatGPT‑based chatbot reported that participants were generally satisfied and saw the bot as motivating, though they asked for more personalisation and smoother conversation.  [16]
  • Early work with a ChatGPT‑delivered exercise programme for children with autism spectrum disorder suggested that AI‑guided activity sessions are feasible and may increase participation, but required careful parental oversight and small group sizes.  [17]

Alongside these, earlier RCTs of AI coaching — for cancer survivors and people with chronic musculoskeletal pain — have shown that automated or semi‑automated coaching can boost activity in some groups, although not consistently outperforming standard care in specialist settings.  [18]

The pattern is emerging: AI coaches work best as a complement to human support, offering daily nudges, education and planning while clinicians handle the complexity.


Stroke, knee osteoarthritis and aphasia: where ChatGPT is being tested

Some of the clearest data on ChatGPT in rehabilitation comes from condition‑specific studies.

Stroke rehabilitation and caregiver education

A 2024 study from a Singapore rehabilitation clinic collected 246 real questions from stroke survivors and caregivers, then asked both ChatGPT and Google Bard to answer representative versions. Neurologists graded the responses for accuracy, safety, relevance and readability.  [19]

For ChatGPT:

  • Around two‑thirds of answers were rated satisfactory overall
  • Accuracy scores were high
  • Safety was more concerning: fewer than half of responses met the “fully satisfactory” safety threshold

Both chatbots occasionally hallucinated local resources (inventing a non‑existent stroke support group and financial scheme) and often missed opportunities to flag potential self‑harm risk when addressing emotionally loaded questions.

The authors concluded that AI chatbots show promise in stroke‑rehab education but cannot yet be trusted to manage patient and caregiver questions on their own.

Broader reviews of LLMs in stroke care, including one published in 2025, paint a similar picture: models like ChatGPT perform impressively in simulated scenarios across stroke prevention, diagnosis, treatment and rehabilitation, but most evidence comes from bench‑top testing rather than real‑world patient pathways.  [20]

Knee osteoarthritis and personalised rehab plans

In 2025, an observational study in Journal of Medical Systems asked ChatGPT‑4o and Gemini Advanced to design physiotherapy programmes for 40 patients with knee osteoarthritis. For each patient, three experienced physiotherapists created a consensus plan; researchers then checked whether each AI‑generated plan included 50 key components.  [21]

  • ChatGPT‑4o matched the expert consensus 74% of the time, slightly outperforming Gemini (70%).
  • Both models captured the general structure of good programmes — strengthening, stretching, balance work and patient education.
  • However, they often lacked the specifics of dosage and progression (sets, reps, frequency) that matter for outcomes and safety.

The authors concluded that LLMs are promising decision‑support tools for physiotherapists but should not generate unsupervised exercise prescriptions.
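
The study's headline figures come from a simple checklist comparison, which is easy to illustrate. The Python sketch below scores how many components of an expert consensus plan also appear in an AI‑generated plan; the component names are invented for illustration, and the study's actual 50‑item checklist and matching rules are not reproduced here.

```python
# Illustrative checklist scoring: what fraction of expert-consensus components
# does an AI-generated plan cover? Component names are made up for this sketch.
expert_components = {
    "quadriceps strengthening", "hamstring stretching", "balance training",
    "patient education", "progression criteria", "dosage: sets and reps",
}

ai_plan_components = {
    "quadriceps strengthening", "hamstring stretching", "balance training",
    "patient education",
}

matched = expert_components & ai_plan_components
coverage = 100 * len(matched) / len(expert_components)

print(f"Matched {len(matched)}/{len(expert_components)} components "
      f"({coverage:.0f}% agreement)")
print("Missing:", ", ".join(sorted(expert_components - ai_plan_components)))
```

On this toy example the AI plan covers four of six components (67%); the missing items, dosage and progression, are exactly the kind of detail the study found lacking.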

Aphasia and written communication

Separately, a 2025 case report in Frontiers in Rehabilitation Sciences describes a programme where an adult with fluent aphasia used ChatGPT to help draft and revise written texts. Over the course of the intervention, the patient’s writing showed more sentences, greater complexity and fewer errors, and they reported feeling more confident and independent. Crucially, a speech‑language pathologist still designed the tasks and monitored safety and emotional impact.  [22]

This hints at a powerful role for ChatGPT as an assistive writing partner in cognitive and language rehabilitation — again, with human therapists firmly in the loop.


How therapists are using ChatGPT behind the scenes

While patient‑facing chatbots draw headlines, many of the most immediate gains are happening in the background.

A 2025 industry report on AI in physical therapy notes rapid adoption of tools that:  [23]

  • Auto‑generate or summarise visit notes using ambient audio
  • Suggest billing codes and flag payer‑specific documentation requirements
  • Help with scheduling, predicting no‑shows and optimising therapist caseloads
  • Turn therapist bullet points into polished patient handouts and exercise instructions (see the sketch after this list)
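
The last item on that list, turning therapist shorthand into patient‑facing materials, is simple to prototype. The following Python sketch is a hypothetical illustration, not a description of any particular vendor's product: the model name, prompt and notes are assumptions, and the draft would always be reviewed and edited by the therapist before use.

```python
# Hypothetical sketch: expanding clinician bullet points into a plain-language
# patient handout. The draft is returned to the therapist for review; nothing
# goes to the patient unedited.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

therapist_notes = """\
- total knee replacement, post-op week 3
- continue heel slides and quad sets daily
- start stationary bike, 10 min, low resistance
- ice after exercise, 15 min
- call clinic if calf pain, redness or fever
"""

system_msg = (
    "You turn physiotherapist shorthand into a patient handout. Use plain "
    "language, keep every clinical instruction, do not add new exercises or "
    "medical advice, and end with: 'If anything is unclear, ask your "
    "therapist before continuing.'"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model choice
    messages=[
        {"role": "system", "content": system_msg},
        {"role": "user", "content": therapist_notes},
    ],
)

draft_handout = response.choices[0].message.content
print(draft_handout)  # therapist reviews and edits before handing out
```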

A recent feature on AI in medical practice similarly found that almost half of physicians now use some form of AI, primarily to reduce administrative burden, even if they remain cautious about relying on it for complex clinical decisions.  [24]

In occupational therapy, early research and practice reports suggest that ChatGPT can streamline documentation, help students practise clinical reasoning and act as a “thinking partner” for therapists working in isolation. At the same time, OT scholars warn about over‑reliance, data protection and the risk that generic AI suggestions may overlook client‑centered goals.  [25]

The overall trend: AI is rapidly becoming part of the rehab workflow, even when patients never see it.


Can ChatGPT think like a therapist?

Several recent studies have asked a blunt question: if you give ChatGPT the same case as a rehabilitation professional, how close do its recommendations come?

  • In occupational therapy, a 2025 Cureus study had ChatGPT generate OT programmes for five stroke cases with psychological symptoms. Five occupational therapists compared them to their own plans. The GPT‑generated programmes were judged low in specificity and individualisation and scored 2/4 or below in every case, despite being generally reasonable.  [26]
  • A 2025 article in the British Journal of Occupational Therapy compared ChatGPT’s reasoning with that of occupational therapists using clinical case scenarios. In some cases, decisions were similar; in others, therapists integrated contextual, emotional and environmental factors that the model missed, underscoring the difference between algorithmic pattern‑matching and holistic clinical reasoning.  [27]
  • The knee osteoarthritis study mentioned earlier showed that ChatGPT could align with physiotherapist plans about three quarters of the time on key elements, but struggled with exercise precision and progression, areas where experience and real‑time patient feedback are critical.  [28]
  • A 2025 comparison of GPT‑4 and GPT‑3.5 in sports surgery and physiotherapy found that the newer model produced more accurate and useful treatment plans, yet researchers still recommended its use strictly as a decision‑support tool, not a replacement for expert judgment.  [29]

Taken together, the data suggest that ChatGPT is very good at assembling the ingredients of a rehabilitation plan — and not yet good enough at tailoring the recipe for individual patients.


Triage, access and global rehabilitation needs

One of the most pressing challenges in rehab is simply deciding who gets what level of service.

A 2025 study in Scientific Reports compared a new digital rehabilitation patient (DRP) triage tool, ChatGPT and outpatient doctors. Interestingly, ChatGPT’s triage decisions were more consistent with the standardized DRP tool than with doctors’, though still not identical. Researchers argued that standardized tools — whether rule‑based or AI‑driven — could help rationalise rehab referrals in overstretched systems, but concluded that a purpose‑built DRP tool was currently better aligned to practice than general‑purpose ChatGPT.  [30]

The World Health Organization estimates that more than 2.4 billion people could benefit from rehabilitation each year. Studies like this highlight why many experts see AI‑assisted triage and navigation as one of the most urgent use cases: getting the right patients to the right level of care sooner, especially in low‑ and middle‑income countries with limited specialist capacity.  [31]

But those same authors also warned that ChatGPT’s data latency, potential inaccuracies and privacy concerns mean it should not be used for autonomous triage decisions — at least not yet.  [32]


Risks, ethics and the need for guardrails

Enthusiasm for AI in rehabilitation is tempered by growing awareness of its risks.

  1. Safety and hallucinations
    • The stroke‑rehabilitation chatbot study documented fabricated local resources and potentially unsafe advice around symptom management and mood.  [33]
    • Narrative reviews of ChatGPT in rehab emphasise inconsistent performance in complex cases, unreliable citations and difficulty handling psychosocial and mental‑health nuances.  [34]
  2. Data privacy and regulation
    • Professional bodies such as the Chartered Society of Physiotherapy stress that open, consumer‑facing tools like ChatGPT are not HIPAA/GDPR‑grade environments and should not be fed identifiable patient data.  [35]
    • National guidance increasingly frames AI as being within physiotherapy practice but insists on explicit governance, outcome monitoring and transparency about AI involvement.  [36]
  3. Misuse and systemic risks
    • OpenAI itself has warned that future models could be misused to support biological threats if safeguards fail, prompting biosecurity collaborations and stricter safety evaluations.  [37]
    • In mental health, ongoing litigation over a teen suicide and forthcoming parental controls for ChatGPT highlight the danger of unsupervised emotional support, particularly for adolescents and people in crisis.  [38]
  4. Equity and the digital divide
    • Reviews of AI‑driven virtual assistants note that older adults and people with low digital literacy can be left behind, or overwhelmed by frequent prompts and screen time.  [39]

For rehabilitation leaders, the message is clear: AI must be deployed with explicit safeguards, including human oversight, clear disclaimers, robust data protection and ongoing evaluation of outcomes and bias.


What the next 3–5 years could look like

If current trends continue, rehabilitation therapy by 2030 is likely to look very different — but still very human.

Based on recent pilots, reviews and policy moves, experts expect:

  • AI‑first front doors
    – Health systems using ChatGPT‑like triage and information tools (such as the NHS “My Companion”) to route patients into appropriate rehab and self‑management pathways before they see a clinician.  [40]
  • Hybrid AI‑plus‑therapist care models
    – Clinics where an AI physiotherapist like Flok performs initial structured assessments and generates draft plans, which human PTs review, adapt and deliver — much like radiologists working with AI image‑analysis tools today.  [41]
  • Personalised, sensor‑driven home rehab
    – Wearables and computer‑vision systems feeding movement data into LLM‑powered virtual coaches that adjust difficulty and provide feedback in real time, while therapists monitor dashboards and step in when needed.  [42]
  • AI‑enhanced documentation and education at scale
    – Routine use of ambient note‑taking, AI‑drafted reports and tailored patient handouts, freeing clinicians to focus on hands‑on treatment and complex decision‑making.  [43]
  • Specialised rehab agents
    – Multi‑agent systems built around ChatGPT‑class models, already tested in traumatic brain injury and neurorehabilitation research, that can retrieve guidelines, justify recommendations and generate more interpretable care plans — at the cost of slower response times and higher compute requirements.  [44]

Crucially, most experts predict that therapists who learn to work with AI will gain an advantage, not be replaced. As one occupational therapy commentary put it, AI is unlikely to replace professionals, but professionals who ignore AI may find themselves left behind by those who use it well.  [45]


How rehab teams can experiment responsibly right now

For clinics, hospitals and rehab networks, the near‑term opportunity is to treat 2025–2030 as a structured pilot phase rather than a rushed transformation. Common recommendations across recent reviews and professional guidance include:  [46]

  1. Start with low‑risk use cases
    • Documentation drafts, patient education summaries and translation of existing materials into plainer language are safer than autonomous triage or exercise prescriptions.
  2. Keep humans firmly in the loop
    • Make it explicit — to staff and patients — that ChatGPT outputs are proposals, not orders. Clinicians remain ultimately responsible for care.
  3. Protect data aggressively
    • Avoid entering identifiable patient information into public AI interfaces. Use enterprise‑grade, healthcare‑compliant deployments where possible (see the sketch after this list).
  4. Test against guidelines and outcomes
    • Periodically compare AI‑generated advice with clinical practice guidelines and monitor patient outcomes and complaints to catch drift or bias.
  5. Invest in digital literacy for both staff and patients
    • Training therapists to prompt, fact‑check and explain AI tools is as important as the technology itself. Patients need clear advice on what AI can and can’t do.
  6. Prioritise vulnerable populations
    • Extra caution is warranted for people with cognitive impairment, psychiatric comorbidities or low health literacy, where generic AI advice may be confusing or harmful.
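
On the data‑protection point, even a crude redaction step before any free text leaves the building is better than nothing, although it is no substitute for proper de‑identification tooling and compliant infrastructure. The Python sketch below is a minimal illustration under those assumptions; regex rules like these will miss names and other context‑dependent identifiers.

```python
# Minimal, illustrative redaction of obvious identifiers before text is sent to
# an external AI service. Regexes miss names and context-dependent identifiers;
# real deployments need dedicated de-identification tools and compliant hosting.
import re

REDACTIONS = [
    (re.compile(r"\b\d{1,2}[/-]\d{1,2}[/-]\d{2,4}\b"), "[DATE]"),
    (re.compile(r"\b\d[\d\s-]{7,}\d\b"), "[PHONE]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE), "[MRN]"),
]

def scrub(text: str) -> str:
    """Replace obvious identifiers with placeholder tokens."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

note = ("Seen 12/03/2025, MRN 448812. Reports knee pain 4/10 after stairs. "
        "Follow-up on 07/04/2025; contact 0161 496 0000.")

print(scrub(note))
# -> Seen [DATE], [MRN]. Reports knee pain 4/10 after stairs.
#    Follow-up on [DATE]; contact [PHONE].
```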

The bottom line

ChatGPT and similar AI models are moving rapidly from curiosity to everyday infrastructure in rehabilitation therapy.

They are already:

  • supporting stroke survivors and caregivers with information,
  • helping design knee osteoarthritis programmes that look a lot like expert plans,
  • motivating people to move more between sessions, and
  • quietly drafting thousands of clinic notes and patient handouts.

At the same time, the evidence is unequivocal: today’s models are not ready to run rehab on their own. They still hallucinate, struggle with complex cases, and cannot reproduce the empathy, touch and contextual judgment that define excellent rehabilitation.

For now, the future of rehab looks less like “robot physiotherapists” and more like therapists with powerful AI co‑pilots: a partnership that, if handled wisely, could expand access, personalise care and give patients more support in the hours, days and months between appointments.

References

1. www.theguardian.com, 2. www.thetimes.co.uk, 3. www.flota.org, 4. www.aota.org, 5. www.hingehealth.com, 6. www.nethealth.com, 7. link.springer.com, 8. www.frontiersin.org, 9. www.frontiersin.org, 10. www.frontiersin.org, 11. www.frontiersin.org, 12. pubmed.ncbi.nlm.nih.gov, 13. www.frontiersin.org, 14. link.springer.com, 15. pmc.ncbi.nlm.nih.gov, 16. pubmed.ncbi.nlm.nih.gov, 17. humanfactors.jmir.org, 18. www.researchgate.net, 19. www.researchgate.net, 20. www.nature.com, 21. link.springer.com, 22. www.frontiersin.org, 23. www.nethealth.com, 24. www.businessinsider.com, 25. www.researchgate.net, 26. pubmed.ncbi.nlm.nih.gov, 27. www.researchgate.net, 28. link.springer.com, 29. bmcmedinformdecismak.biomedcentral.com, 30. www.nature.com, 31. www.sciencedirect.com, 32. www.nature.com, 33. www.researchgate.net, 34. www.frontiersin.org, 35. www.csp.org.uk, 36. www.csp.org.uk, 37. www.thesun.co.uk, 38. www.washingtonpost.com, 39. link.springer.com, 40. www.thetimes.co.uk, 41. www.theguardian.com, 42. pmc.ncbi.nlm.nih.gov, 43. www.nethealth.com, 44. www.frontiersin.org, 45. www.flota.org, 46. www.frontiersin.org
