AI & Privacy Issues in Mental Healthcare

9 min read · Posted on August 17, 2023

With OpenAI’s ChatGPT estimated to have reached 100 million monthly users in 2023, it’s no surprise that artificial intelligence (AI) is at the forefront of data and privacy discussions across industries.

As you explore the ways AI can benefit your mental healthcare practice, it’s important to understand the patient privacy and ethical issues of AI in the healthcare industry specifically. Becoming more informed about privacy principles, and about how they affect the way you incorporate new AI technology into your clinical workflow, will sharpen your decision-making as a clinician and help you maintain your ethical integrity as a mental healthcare professional working with AI technologies.

In this blog post, we’ll explore the issues of privacy and security in AI for mental healthcare and how you can take strides to manage private data protection in your practice. 

AI data privacy best practices

As generative AI technologies evolve, the ways in which personal information can be used grow with them, and so do the ways AI technologies can be misused. Data breaches, surveillance cameras, facial recognition technology, biometric surveillance, algorithmic bias in machine learning models: the privacy implications of many AI technologies raise broader concerns about human rights and personal data protection.

That means clinicians must stay on top of this evolving landscape of AI technologies to minimize any potential privacy risks to their practice when using AI tools. 

What are some of the privacy laws clinicians should be aware of? 

The speed at which AI technology has grown has left regulators and privacy professionals scrambling to keep up. Unlike the European Union, which has enacted comprehensive privacy legislation (the GDPR) and is advancing the Artificial Intelligence Act, the United States has no comprehensive federal data privacy law protecting consumer data in AI applications. That regulatory gap exacerbates privacy concerns.

In the absence of a single federal data protection law, U.S. clinicians using generative AI tools should be aware of:

  • Domain-specific laws like the Health Insurance Portability and Accountability Act (HIPAA), which (among other things) requires the protection of communications between “covered entities,” like mental health professionals, and their patients.
  • Legal issues and regulatory requirements applicable to the use of AI in healthcare at the local and state levels. 

Many states have privacy requirements that differ from HIPAA, and some state laws are considerably more comprehensive. For example, California’s Consumer Privacy Act supports a “global opt-out,” which lets users opt out of data sharing at the device or browser level rather than having to request an opt-out from each individual site.

Other states have passed, or are considering, privacy legislation that may include:

  • Rules governing how AI systems collect and process personal data
  • A “right to know” what data generative AI applications collect about consumers (similar to the EU’s GDPR)
  • A right to know how that personal data is used (for example, whether an AI company is quietly using consumer data for advertising)
  • A “right to access and delete” that data, barring any healthcare record-keeping requirements (also similar to the GDPR)

These requirements may differ depending on which state you’re licensed to practice in, so it’s imperative that you stay up to date on the privacy legislation and privacy practices in your area to minimize the risks of privacy violations in your practice.

How might clinicians approach privacy risks? 

With no comprehensive federal privacy legislation in the U.S., let alone legislation that specifically addresses generative AI tools, much of the onus of navigating these tools’ privacy challenges falls on individuals.

Here are some possible ways to go about informing and protecting yourself when it comes to generative AI and privacy considerations:

Use more controlled AI models

When deciding between AI tools, clinicians should look for generative AI providers that offer more controlled models and stronger data privacy policies and terms. Reading those privacy policies is a chore, but it’s worth the effort: you should know exactly what an AI tool is asking of you.

For example, generative AI tools designed for the mental healthcare market may offer privacy frameworks that are significantly more robust than the terms offered by large, public generative AI chatbots like ChatGPT.

OpenAI’s privacy policy is too broad to be safe for healthcare use cases: any conversations you have with the chatbot, and any personal information you share with OpenAI, could be used as training data to improve OpenAI’s models and chatbot services. Unless you negotiate otherwise, similar terms are standard across most AI companies.

Be intentional with what private data you share 

If you do decide to use a public AI tool like that, knowing that whatever you input is likely to be used as training data, you can tackle your privacy concerns a different way: be judicious with the information you share. That means not sharing identifying information about patients.

With public generative AI tools like ChatGPT, you should act as if any personal data you share could be leaked. Think of it as The New York Times test: before you enter personal data into a generative AI tool, ask yourself whether you (or your clients) would be prepared for that private data to be published in The New York Times.

No? Then it may be wise to reconsider how (or if) you use generative AI tools in conjunction with your practice. 
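If you do continue using a public tool for appropriately de-identified work, one lightweight safeguard is to scrub obvious identifiers from text before it ever leaves your machine. Below is a minimal Python sketch of that idea; the patterns and the redact helper are illustrative assumptions, not a vetted de-identification pipeline. (HIPAA’s Safe Harbor method lists 18 categories of identifiers, and names in free text generally require dedicated de-identification tools rather than simple patterns.)

```python
import re

# Illustrative patterns only -- a real de-identification pipeline must cover
# far more identifier types, and names in free text won't match simple regexes.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "DATE":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholder tags before the text
    is pasted into (or sent to) a public generative AI tool."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

if __name__ == "__main__":
    note = ("Client reached me at jane.doe@example.com and 555-867-5309; "
            "reported improved sleep since 3/14/2023.")
    print(redact(note))
    # Client reached me at [EMAIL] and [PHONE]; reported improved sleep since [DATE].
```

Even with a scrubber like this in place, the safest default is still the one above: keep patient-identifying details out of public AI tools entirely.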

Explore your privacy controls

Go to the settings in each of your AI tools, find the privacy controls, and choose how you want your data to be retained. A tool may not offer every privacy setting you’d ideally want, but every control helps. For example, OpenAI gives you the option to delete records of your ChatGPT conversations.

Set your own privacy policies

In your own private practice, it’s beneficial to adopt internal policies around data privacy. Those policies should ensure that meaningful human oversight remains central to your use of AI. Setting your own privacy standards and security measures, and holding yourself accountable to them, can go a long way toward building trust with clients about new AI technology.

AI data security best practices 

Cybersecurity in artificial intelligence systems is a major issue in its own right, and it is tightly intertwined with privacy.

Since artificial intelligence systems are a prime target for hackers, any healthcare practice or system using generative AI tools should insist on strong authentication and security protocols. Regularly monitoring AI systems for vulnerabilities, including periodic cybersecurity assessments, is key, especially when those systems process large amounts of sensitive data.

AI data security considerations

It helps to have a list of security considerations on hand when evaluating AI tools. Reach out to representatives of the AI companies you’re interested in to learn more about their approaches to privacy and security.

Questions to ask AI privacy professionals at generative AI companies:

  • Are your AI tools HIPAA compliant?
  • What are your data collection policies?
  • What security measures do you have in place to ensure that any personal data entered into the AI will not become public? 
  • What if your AI tools suffer data privacy breaches? 
  • When was the last security audit of your artificial intelligence systems?
  • How long have your generative AI tools been in use? 
  • What types of training data have your AI tools been trained on, and how do you account for potential algorithmic bias in those vast amounts of training data?
  • How difficult would it be to transition from your AI tools to different AI tools? 
  • What’s the current financial state of the AI company, and how may that affect the ongoing development of its AI tools?

If those artificial intelligence companies value transparency and give you clear answers about their practices, you’ll be better equipped to choose the most secure AI tools for your practice. Having those answers will also help you educate your clients when obtaining informed consent, as part of your ethical duties as a clinician.

Learn more: Exploring the Ethical Issues of AI in Mental Healthcare

Data privacy at Orchid

Transparency and human oversight are priorities in our AI and EHR practices at Orchid.

We take several approaches to security measures and mitigating privacy risks, including: 

  • Implementing best practices in data collection, data security, and data protection
  • Regularly performing security audits
  • Personally verifying every clinician who joins our platform (in other words, no one can sign up for Orchid without being verified as a clinician)

If you ever have questions about how we handle data on Orchid’s platform — including any details about our AI-powered notes assistant tool — feel welcome to email us at info@orchid.health or DM us on social media anytime. 

The future of AI & privacy issues

It's undeniable that specialized AI tools like Orchid’s offer transformative potential. But with the rapid ascent of artificial intelligence and its widespread adoption, it is paramount for healthcare professionals to remain informed about AI privacy issues and security challenges that any generative AI model may present. Much like no motor vehicle is perfectly safe, no AI model is free from vulnerabilities. 

The healthcare sector is bound by stringent regulatory standards, so it doesn’t have as much leeway as other sectors, especially when patient data is at stake. But together we can take reasonable steps to mitigate the data security and privacy risks we know about. Staying informed keeps you ahead of the pack in understanding AI’s impact on client care and helps you make better privacy decisions for your mental healthcare business.

Want to keep learning more about AI in mental healthcare? 

Learn more: How AI is Transforming Mental Healthcare EHR

This blog post is a summary for general information and discussion only and does not, and is not intended to, constitute legal advice. It is not a full analysis of the matters presented, may not be relied upon as legal advice, and does not purport to represent the views of Orchid or anyone affiliated with Orchid. Readers of this blog post should contact their attorney to obtain advice with respect to any particular legal matter.


Joseph Pomianowski

Joseph Pomianowski is the CEO and co-founder of Orchid. Prior to starting an AI-powered EHR, he earned his JD from Yale Law School, was a Fulbright Scholar, and worked for Palantir Technologies. Drop him a message anytime to discuss Orchid, to debate tech & healthcare policy matters, or to recommend your favorite book.
