According to OpenAI, more than 230 million people now turn to ChatGPT for health advice. While AI chatbots promise easier healthcare navigation and self-advocacy, entrusting them with sensitive medical details is a gamble: tech companies operate under different rules than medical providers do, and data protection is far from guaranteed. The rush to integrate AI into healthcare raises serious questions about user privacy and the reliability of automated health advice.
The Rise of AI in Healthcare
Two major players, OpenAI and Anthropic, have recently launched dedicated healthcare AI products: OpenAI’s ChatGPT Health and Anthropic’s Claude for Healthcare, both aimed at streamlining health-related queries. The level of protection varies, however. OpenAI sells a more tightly safeguarded enterprise product, ChatGPT for Healthcare, alongside the consumer version, and the near-identical names lead users to assume they get the same protections. Google’s Gemini is largely absent from this push, though Google does offer its MedGemma model to developers.
OpenAI actively encourages users to share sensitive health data – medical records, test results, app data from services like Apple Health and Peloton – and promises to keep it confidential. Yet terms of service can change, and the legal protections behind those promises are thin. Without a comprehensive federal privacy law, users are left relying on company assurances rather than enforceable standards.
The Illusion of Security
Even with encryption and stated commitments to privacy, trusting AI with health data is risky. OpenAI’s assurances are muddied by the existence of ChatGPT for Healthcare, a business-focused product with stronger safeguards. The similar names and launch dates make it easy to mistake the consumer version for the more secure one, a mistake many users have already made.
Moreover, companies can alter their data-use policies at any time. As digital health law researcher Hannah van Kolfschooten points out, “You will have to trust that ChatGPT does not [change its privacy practices].” Even a claim of HIPAA compliance doesn’t guarantee enforcement: voluntarily adhering to a standard is not the same as being legally bound by it.
The Dangers of Misinformation
Beyond privacy, AI chatbots can provide inaccurate or dangerous health advice. Examples include a chatbot recommending sodium bromide as a salt substitute and another wrongly advising cancer patients to avoid fats. OpenAI and Anthropic disclaim responsibility for diagnosis and treatment, classifying their tools as something other than medical devices to stay clear of stricter regulation.
That classification is questionable, given that people are already relying on these tools to make medical decisions. OpenAI highlights health as a major use case, even showcasing a cancer patient who used ChatGPT to understand her diagnosis. The company’s own benchmarks suggest the AI performs well in medical scenarios, which only sharpens the questions about regulatory oversight.
A Question of Trust
The core issue is trust. Medicine is heavily regulated for a reason: errors can be fatal. AI companies, however, operate in a faster-moving, less regulated environment. While AI could improve access to healthcare, the industry hasn’t yet earned the same level of trust as traditional medical providers.
Ultimately, sharing private health data with AI chatbots is a trade-off between convenience and risk. Until stronger regulations and enforceable privacy standards are in place, users must proceed with caution. The current landscape prioritizes innovation over safety, leaving individuals to navigate a complex and uncertain future.