Disrupting Notions of “Protected” Health Information

September 2023

In the ever-evolving landscape of healthcare, technology is now an indispensable tool for medical providers. Artificial intelligence (AI), in particular, is proving invaluable for recording and organizing medical data, streamlining processes, and improving patient care. One prominent example is ChatGPT, an AI application that not only saves time but can also help providers meet the documentation requirements of Indonesia's health laws. However, beneath the veneer of efficiency and expediency lies a complex web of concerns: confidentiality, privacy, and costs. One critical issue that often goes unnoticed is that AI providers effectively become third parties in the provider-patient relationship the moment sensitive identifying information is entered into their systems.

Third Parties in Healthcare

Imagine you are shopping for a new laptop. You don’t have a specific brand in mind, but you know it must have particular features within a specific price range. After hours of online research, you watch YouTube videos for further insights. Beneath one of these videos, you notice an advertisement for a laptop that fits your criteria perfectly. How did this happen? The answer lies in the trail of data you left behind during your online quest.

Similarly, when patients and providers interact with AI-driven healthcare tools, they may not fully grasp the implications of sharing sensitive health information. AI providers, like many online services, have the potential to monetize user data. Users often unknowingly consent to the sale of their search and health-related information, either by not reading the fine print or by underestimating the potential loss of privacy. The result is that AI providers become third parties in the healthcare relationship, raising questions about data ownership and the sanctity of patient information.

Ownership of Personal Information Once Divulged to Third Parties

An illustrative parallel can be found in the world of DNA testing services. Companies like 23andMe, which are generally known for their commitment to data privacy, share genetic information only with explicit consent and under strict conditions of anonymity. However, public online services like GEDmatch do not adhere to the same stringent standards.

Moreover, even when companies offer customers the option to share data anonymously, they often retain ownership of the physical DNA samples provided. In some cases, companies include clauses in their terms of service that allow them to use customer data in ways that may not align with individual expectations. This creates a precarious situation in which customers unknowingly relinquish control over their genetic information.

Regulatory Frameworks and Solutions

To address these issues, governments and regulatory bodies must take action. In Indonesia, Minister of Health Regulation No. 24 of 2022 aims to ensure the security, confidentiality, integrity, and availability of medical record data. The regulation emphasizes that systems used to store electronic medical records must guarantee data security and confidentiality.

However, it is crucial to ask whether AI providers, such as OpenAI, can truly meet these stringent requirements. While AI tools are adept at collecting and presenting information, the companies behind them are not inherently obliged to prioritize data security and confidentiality to the degree that healthcare providers are.

To protect patient data, several steps can be taken:

  • Informed Consent: Patients should receive clear, concise explanations of how their data will be used within AI applications, so that they fully understand the implications and can give informed consent.
  • Public Education: Raising public awareness about the potential loss of privacy when using AI in healthcare is essential. An informed public can make more conscious decisions about sharing their data.
  • Legal Updates: Governments should consider updating medical and privacy laws or introducing new regulations that outline stringent standards for medical providers when using AI applications.
  • Data Protection Laws: Increasing penalties for divulging identifying information on public platforms would incentivize AI companies and users to take data protection more seriously.
  • Compliance Requirements: AI companies should be required to comply with protected health information laws, ensuring they adhere to strict standards of data security and privacy.
  • Alternative Solutions: Governments could subsidize software or recording tools that offer the same benefits as AI applications but do not store data on external servers.

While AI undoubtedly has its merits in healthcare, especially in improving efficiency and patient care, it also raises significant concerns about data security and privacy. To protect patients and their sensitive health information, we must address these concerns through informed consent, public education, legal updates, and rigorous compliance requirements. Only then can we fully harness the potential of AI while safeguarding the privacy and confidentiality of patient health information.

From Ms. Tequila Bester, Program Coordinator for Social Issues at the Foundation for International Human Rights Reporting Standards (FIHRRST), and Dr. Meryl Kallman, founder of the Dr's Clinic in Jakarta.