The FDA is redefining what counts as “medical AI”
- Johan Steyn

As regulators rethink their oversight of wearables and decision-support tools, the line between wellness and medicine gets blurrier.

Audio summary: https://youtu.be/fEpDufdKL1g
Follow me on LinkedIn: https://www.linkedin.com/in/johanosteyn/
A quiet but important change is happening in how health technology is regulated. The US Food and Drug Administration is signalling a more nuanced approach to digital health tools, including AI-enabled features in wearables and clinical decision-support software. On paper, this looks like a technical policy adjustment. In reality, it reshapes what companies can ship, how they market it, and what consumers may come to trust. The practical effect is that the boundary between “general wellness” and “medical-grade” technology is getting harder to see. That matters because the moment a product starts to look like it diagnoses, treats, or reliably measures disease-related outcomes, it moves into a very different regulatory world.
CONTEXT AND BACKGROUND
For years, regulators have tried to balance two competing pressures. On the one hand, digital health innovation is moving quickly and can deliver real benefits: earlier detection, better monitoring, and more personalised support. On the other hand, health claims can cause harm when they are wrong, overconfident, or misunderstood. A misleading wellness metric may not just be annoying. It can change behaviour, delay care, or create panic.
Wearables sit right in the middle of this tension. They started as fitness tools: steps, exercise minutes, sleep tracking. But they have steadily moved into more sensitive territory: heart rhythm notifications, blood oxygen estimates, blood pressure “insights”, stress scores, and now AI-driven interpretation of patterns over time. As soon as a product crosses from “helpful lifestyle information” into “clinical implication”, the expectation of evidence, validation, and accountability increases sharply.
Clinical decision-support tools create a similar challenge, just on the professional side. They can help clinicians prioritise, summarise, and interpret, but the moment the software function starts to drive a decision in a way a clinician cannot independently evaluate, it can become a regulated medical device.
INSIGHT AND ANALYSIS
The most important shift is not simply “more regulation” or “less regulation”. It is segmentation. Regulators are trying to define categories more clearly: what is genuinely low-risk wellness, what is decision support that remains a non-device function, and what is effectively a medical device because of its claims, intended use, or risk profile.
That sounds straightforward until you consider how products actually work. Modern AI features are often presented as “insights”, “trends”, or “coaching”. Marketing language can be carefully chosen to sound non-medical while still influencing medical behaviour. A wearable may avoid saying “diagnosis”, but if it provides a score that implies hypertension risk, many users will treat it as medical guidance regardless.
This is where the line gets blurrier. The same sensor can support a wellness feature or a medical feature, depending on context, intended use, and how results are communicated. And AI amplifies the ambiguity because it can infer meaning from patterns that look “clinical” even if the device is not positioned as clinical. The output is not just a number. It is an interpretation.
There is also a second-order effect: regulatory clarity can increase innovation, but it can also increase confusion for consumers. If some products are exempt as “wellness” while others require medical validation, ordinary users may not understand the difference. They may assume that because it looks scientific, it must be accurate. In other words, a lighter-touch pathway can unintentionally create a trust gap if labelling, disclosure, and user education do not keep up.
Finally, this matters beyond the United States. Global digital health markets tend to follow large regulatory signals. Even if you are building in South Africa, your partners, investors, and product roadmap may be influenced by where the biggest markets and most visible regulatory expectations are heading.
IMPLICATIONS
For device makers and software vendors, the message is to treat intended use and claims as strategic design choices, not marketing afterthoughts. If you want to remain in “wellness”, your product experience, labels, and outputs must reinforce that boundary. If you want to move into “medical”, you need an evidence plan: validation, quality management, cybersecurity discipline, monitoring, and post-market accountability.
For healthcare providers and health systems, the practical risk is workflow contamination. Tools that appear “helpful” can slip into clinical decisions without formal evaluation. Organisations should create clear procurement and governance rules for digital health AI, including what may be used in clinical pathways, what requires validation, and what must remain advisory only.
For consumers, the key is confident scepticism. A wearable can be useful, but it is not automatically a medical device. The right question is simple: is this feature validated for clinical use, or is it general wellness guidance? If you cannot tell, assume it is wellness and treat it as directional, not diagnostic.
CLOSING TAKEAWAY
The FDA’s approach reflects a broader truth about AI in health: categories that made sense a decade ago are struggling to keep up with products that blend sensing, inference, and personalised recommendations. As the wellness and medical worlds overlap, the real challenge becomes trust. Innovation is valuable, but in healthcare, confidence without clarity is dangerous. The next phase of digital health will belong to companies and regulators who make the boundary understandable: clear claims, clear evidence expectations, and clear user protections in a world where “insights” can quickly start to feel like medicine.
Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net