Exploring the Ethical Issues of AI in Mental Healthcare

16 min read · Posted on August 21, 2023

When exploring the ethical issues of AI in healthcare, that iconic line from the 2002 Spider-Man movie comes to mind: 

“With great power comes great responsibility.” 

As a licensed mental healthcare professional, when using artificial intelligence (AI) in conjunction with your treatment decisions, you are afforded a tremendous amount of power. A disregard for that power can result in ethical dilemmas at best and more severe consequences, such as liability, at worst. 

This is not the first time advances in technology have required ethical consideration from the mental health profession. You may recall that the digitization of medical records raised confidentiality concerns: digital records are more easily distributed and, in the early days of adoption, were more vulnerable to security risks such as unauthorized access.

As with the advent of electronic health records (EHRs), generative AI has many applications that can help you become more efficient with your clinical practice’s administrative work and resource allocation. And yet, also as with EHRs, there are ever-present ethical challenges to be aware of when using AI systems for patient care.

This article is not an exhaustive set of ethical principles or ethical guidelines, but rather a starting point for conversations around the medical ethics of using generative AI in healthcare. Let’s explore together a few common ethical problems, including the three C’s of AI in healthcare systems: competence, confidentiality, and consent. 

AI Ethical Issue #1: Competence

When it comes to AI technologies for healthcare professionals, your clinical practice should be aware of the ethical concerns of AI with regard to your professional duty to competence.

In general, competence is defined as the ethical obligation to understand a technology before incorporating it into your clinical workflow. Keep in mind that even once you've gained a certain degree of competence in AI systems for healthcare, you'll need to maintain a consistent level of human oversight in how your chosen AI tools interact with your clinical practice's health systems.

Let’s look at how two organizations, namely the American Psychiatric Association and the American Psychological Association, each define competence. 

  • According to the American Psychiatric Association’s Commentary on Ethics in Private Practice: “Psychiatrists are responsible for obtaining sufficient knowledge about the technologies they employ to respect patient confidentiality and deliver competent care.” 
  • And according to the American Psychological Association’s Ethical Principles of Psychologists and Code of Conduct: “Psychologists planning to provide services [...] involving [...] technologies new to them undertake relevant education, training, supervised experience, consultation, or study” and “in those emerging areas in which generally recognized standards for preparatory training do not yet exist, psychologists nevertheless take reasonable steps to ensure the competence of their work and to protect clients/patients, students, supervisees, research participants, organizational clients, and others from harm.”

From each of these ethical guidelines, it is clear that professionals who intend to use AI in their practice should also be knowledgeable of AI, proficient in its use, and aware of any unintended consequences of misusing AI in healthcare, such as algorithmic bias or health data breaches. This competence will help safeguard both the integrity of patient care and one’s own professional reputation. 

Understanding algorithmic bias in AI

As you educate yourself about the ethical use cases of artificial intelligence in healthcare, it’s important that you understand how AI applications can perpetuate potential biases that may affect their ethical use in your clinical practice. 

What is algorithmic bias in AI? 

Because much of generative AI is built on data sets pulled from the open internet, any human biases about certain demographics present in that content can carry over into the model’s output.

Think of it as a child mirroring their parent’s bad behavior: If an algorithm is trained on biased data sets — or its results are applied in biased ways — it will reflect those biases and may even perpetuate them. For example, the AI model may associate the word “programmer” more often with men than with women. 
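
To make this concrete, here is a toy Python sketch using made-up three-dimensional embedding vectors (invented purely for illustration; real models learn far larger vectors from internet-scale text). It shows how a biased word association surfaces as geometric proximity in a model's learned representation:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 3-dimensional embeddings, invented for this illustration.
emb = {
    "programmer": np.array([0.9, 0.2, 0.1]),
    "he":         np.array([0.8, 0.3, 0.0]),
    "she":        np.array([0.3, 0.8, 0.1]),
}

# If the training text linked "programmer" with men more often than women,
# the learned geometry reflects it: the word sits closer to "he" than "she".
print(cosine(emb["programmer"], emb["he"]))   # higher similarity
print(cosine(emb["programmer"], emb["she"]))  # lower similarity
```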

The broader consequences of such biases should not be overlooked. Beyond medical inaccuracies, they can solidify harmful stereotypes, exacerbate systemic discrimination, and widen health disparities among marginalized communities.

In the healthcare sector in particular, AI bias is multi-faceted. Historical bias from past medical practices, measurement bias in data collection, and algorithmic bias in data processing can all affect the outputs of AI models. These biases can distort clinical decision-making, which can result in jeopardizing patient outcomes and equity in healthcare. 

For instance, if training data under-represents certain demographics or overly emphasizes certain treatments, AI outputs could inadvertently perpetuate these biases. This might manifest as a skewed diagnosis or treatment suggestion, which in turn might harm patients or delay crucial care.

Ways to address potential bias in healthcare

Addressing bias requires a multi-pronged approach. One way to handle bias as a clinician is simply to be on the lookout for it: with careful human oversight, you should be able to spot and correct many issues. By actively monitoring and providing feedback to AI tools, you can help train models to be more inclusive and accurate.

Transparency and interpretability in AI tools are crucial, as understanding how these tools reach conclusions can illuminate inherent biases. It’s imperative to perform regular audits and updates of AI tools, keeping them aligned with evolving societal norms and clinical knowledge. 
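
As one illustration of what a simple, regular audit might look like, here is a minimal Python sketch that compares how often a hypothetical AI tool flags records across demographic groups. The log structure and field names are invented for illustration, not drawn from any real tool:

```python
from collections import defaultdict

# Hypothetical audit log: each entry records a demographic group and whether
# the AI tool flagged the record for a follow-up recommendation.
audit_log = [
    {"group": "A", "flagged": True},
    {"group": "A", "flagged": False},
    {"group": "B", "flagged": True},
    {"group": "B", "flagged": True},
]

def flag_rate_by_group(log):
    """Rate at which the tool flags records, broken out per group."""
    totals, flags = defaultdict(int), defaultdict(int)
    for entry in log:
        totals[entry["group"]] += 1
        flags[entry["group"]] += entry["flagged"]
    return {g: flags[g] / totals[g] for g in totals}

# Large gaps between groups are a signal to investigate the tool for bias.
print(flag_rate_by_group(audit_log))  # e.g. {'A': 0.5, 'B': 1.0}
```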

Because of the broader societal consequences of algorithmic bias in generative AIs, healthcare industry groups and even some governments have proposed their own safeguards or implemented guidance aimed at identifying and managing bias in AI. For example, the United States government published the Blueprint for an AI Bill of Rights in 2022, which outlines a framework that could be leveraged in connection with developing AI. 

Learning about AI in healthcare

To gain a reasonable understanding of how generative AI works before using it in a clinical setting, you should find ways to learn about the benefits and challenges of using AI in healthcare and how those relate to the medical ethics of your field.

Ideas for how to learn more about AI in healthcare

CE units or seminars

If seminars are available, or better yet continuing education (CE) units on generative AI, then you should consider attending them. The American Psychiatric Association, for example, has recently offered medical education seminars on AI in psychiatry.

Those sessions include learning about: 

  • The costs and benefits of AI tools
  • How to think about patient privacy with the use of an AI application
  • Ways to implement AI strategies into your clinical practice, such as the ethical use of AI with patient data in your EHR

AI-powered EHR platforms

If you are considering an EHR platform that provides generative AI as a tool to help with your administrative workflow, then ask the platform’s vendor as many questions about their AI tool as possible.

For example, you could ask your AI-powered EHR platform about: 

  • The tool’s HIPAA compliance and data-sharing policies
  • How their generative AI model was trained and on what sorts of data sets, with particular attention to its potential for algorithmic bias
  • The known limitations and any potential unintended consequences of using their AI model
  • Its scope of ethical use
  • The quality of the information it produces and whether any cases of algorithmic bias or discrimination have been documented to date

Use a “test client” to learn in action

To test the quality of an AI model for your clinical setting, you should consider making up a fake client whose persona you can use as a “test client” in your AI prompts. 

Working with your test client’s fake data gives you hands-on insight into how the AI tool works. Your test client will also help you identify gaps in the AI’s specialized training or potential bias in its machine learning that could have a significant impact on your work with real clients.

If you struggle to create a test client, consider using your textbooks or sample case studies from your training. Additionally, there are plenty of open education resources discoverable on the internet. 
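
Here is a minimal Python sketch of what a test-client workflow might look like. The persona, field names, and prompt wording are all invented for illustration and do not reflect any particular AI tool:

```python
# A minimal "test client" persona for probing an AI tool.
# Every detail below is fictional; nothing here comes from a real patient.
test_client = {
    "age": 34,
    "presenting_concern": "generalized anxiety, trouble sleeping",
    "session_summary": "Discussed work stressors and practiced a "
                       "breathing exercise; client reports mild improvement.",
}

def build_progress_note_prompt(client: dict) -> str:
    """Assemble a prompt asking the tool to draft a progress note."""
    return (
        f"Draft a SOAP-style progress note for a {client['age']}-year-old "
        f"client presenting with {client['presenting_concern']}. "
        f"Session summary: {client['session_summary']}"
    )

# Paste the result into the AI tool, then evaluate the output for accuracy,
# missing context, and signs of bias before trusting it with real clients.
print(build_progress_note_prompt(test_client))
```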

Keep your AI usage documented

Regardless of which AI tool you choose to use, all of your training, education, and communications within that tool should be well documented offline. Consider this documentation as insurance for any accidents: In the event there are legal or regulatory challenges to any of your AI usage, your thorough documentation will help show that you acted with due care toward your patients.
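
As a hedged illustration, here is a minimal Python sketch of one way to keep such an offline log. The record fields are suggestions, not a prescribed standard; adapt them to your practice’s own policies:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIUsageRecord:
    """One offline record of an interaction with an AI tool.
    Fields are illustrative; adjust to your practice's policies."""
    tool: str          # which AI tool was used
    purpose: str       # why it was used
    reviewed_by: str   # the clinician who reviewed the output
    timestamp: str

record = AIUsageRecord(
    tool="example-ehr-assistant",  # hypothetical tool name
    purpose="Drafted progress-note template; no patient data entered.",
    reviewed_by="Clinician initials",
    timestamp=datetime.now(timezone.utc).isoformat(),
)

# Append to a local log file kept outside the AI tool itself.
with open("ai_usage_log.jsonl", "a") as f:
    f.write(json.dumps(asdict(record)) + "\n")
```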

AI Ethical Issue #2: Confidentiality

Taking reasonable precautions to protect client confidentiality is a primary obligation of all mental healthcare professionals. Confidentiality is essential to effective treatment, so it’s crucial to ensure patient data remains secure. Confidentiality in patient care sits at the core of medical ethics and is a non-negotiable part of being a mental healthcare professional.

But because generative AI is inherently meant to learn from the information that users provide, AI technology in healthcare adds complexity to the issue of confidentiality. When clinicians use generative AI to help answer a specific healthcare question or draft a clinical note for an appointment, there is a risk of sharing confidential information with third parties. Always make sure you’ve informed yourself about the data security specifics of an AI tool before using it in a clinical context.

Example of patient privacy issues in ChatGPT

Let’s take a look at ChatGPT. OpenAI’s Terms of Use give OpenAI the right to use whatever you input to develop and improve their services. In other words, whatever you input can be accessed by OpenAI employees, contractors, and subcontractors. To be fair, OpenAI recently implemented a way to opt out of this default rule, but it’s still unclear whether data you’ve already inputted is retained.

Why does this matter? Patient privacy, for one. If there’s unauthorized access, this confidential information can be disclosed, and you could be in breach of your duty of confidentiality to your patients.

As a result, healthcare organizations and clinicians should never put sensitive, confidential, or patient health information into a public model, such as ChatGPT.
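
As a supplementary guardrail, here is a naive Python sketch of a pre-flight screen that checks a draft prompt for obvious identifiers before anything is sent. The patterns are illustrative, catch only simple cases, and are no substitute for the rule of simply never entering patient data:

```python
import re

# Naive pre-flight screen for obvious identifiers before text leaves
# your machine. Patterns are illustrative only.
PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the identifier types detected in a draft prompt."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

draft = "Summarize anxiety treatment options; contact jane@example.com"
hits = screen_prompt(draft)
if hits:
    print(f"Blocked: prompt appears to contain {hits}")  # do not send
else:
    print("No obvious identifiers found; still review manually.")
```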

How to address ethical concerns about AI confidentiality

When deciding whether to use AI technology or any other kind of technology that stores or processes confidential health data, here are some things to consider: 

  • Your ability to assess the level of security the AI technology affords
  • Whether reasonable precautions may be taken during technology use in order to increase the level of data security
  • Limitations on who can access or monitor the technology's use
  • The legal ramifications of third parties intercepting, accessing, or exceeding authorized use of another's electronic information
  • The possible impact on the client of an inadvertent disclosure of confidential information through data sharing
  • Clear client consent to either use or not use the technology in health care delivery

Unless you can be reasonably sure that a generative AI platform handles the information entered into it in a HIPAA-compliant manner, you should avoid putting any sensitive information, such as healthcare data, into the platform at all.

If you run a group practice, you should have an internal policy making your employees or contracted care workers aware of best practices and the potential consequences of using AI tools in their clinical workflow. Even if you are using a HIPAA-compliant AI model, you should be clear about what information can be included in your AI prompts. Not upholding accountability standards and human oversight in this area could lead to unintended consequences.

Because the AI landscape is constantly changing, it is best to stay up-to-date on any policies relating to confidentiality, medical ethics, and the use of AI models in healthcare. 

As you develop your AI competency and awareness of confidentiality issues with AI, it’s also important for you to prioritize educating your clients about how you intend to use generative AI, and get their informed consent before using AI in any way related to their care. 

AI Ethical Issue #3: Consent

What is informed consent?

Informed consent is an ongoing process that involves the duty to communicate with a patient about their assessment and treatment. That communication should occur before starting a new clinical process, with the goal of obtaining their informed consent to begin it. 

Since clients have the right to revoke consent at any time, an integral part of informed consent is making sure they know they can change their minds, and that you will honor their consent or refusal either way.

Informed consent ultimately helps to protect the psychological safety of clients and their therapeutic relationship with you. It's an approach to providing client-centric mental health care that prioritizes transparency and intentionality in line with medical ethics.

How does informed consent apply to using AI in healthcare? 

Informed consent becomes a more complex process when AI technology is introduced to assist with non-delegable duties such as creating a treatment plan, determining a procedure, or even performing that procedure. It becomes complex because of the additional information that the provider must convey to the patient and that the patient must weigh as part of their clinical decision-making process.

When using new technologies like generative AI while providing care, a clinician should strive to discuss any or all of the following items in this non-exhaustive list:

  • A general explanation of how the AI tool works
  • The benefits and risks of using AI technology with the client if it involves any of the client’s private data
  • Any relevant alternatives to that technology, including their risks and benefits
  • The clinician's experience using the AI tool and how you will continue to maintain human oversight with it
  • The roles and responsibilities in diagnosis, treatment, and procedures between the AI tool and the clinician, and any ways that may impact the client’s treatment
  • Any safeguards that have been put in place, such as final clinical decision-making authority and mitigation of algorithmic bias
  • The reasons the clinician does not intend to use AI technology for a particular task, if that technology is otherwise available for ethical use
  • The confidentiality and data security risks, if any, to the patient’s information

With these discussion points in hand, a clinician should strive to obtain the patient's informed consent to the ethical use of AI technology while under their care. Spending time to provide patients with additional details during the informed consent process and to answer any of their questions can help ensure that the patient has enough information to make an informed decision about their treatment. 

Following the informed consent process, healthcare providers should document these discussions in patients’ electronic health records and include copies of any related consent forms. You should also stay vigilant for ongoing local, state, and federal developments related to providers’ legal, ethical, and professional responsibilities for disclosing information about AI during the informed consent discussions.

Example of requesting informed consent

Perhaps you would like to use Orchid’s AI assistant to summarize your telehealth calls and help to more efficiently and effectively create progress notes for each client. It’s a process that can enhance the treatment plans you develop for clients, which can ultimately have a positive effect on their health outcomes — but each client has the right and autonomy to decide that for themself. 

In that case, you’d want to educate your clients on Orchid’s AI-powered tool, including its functionality, benefits, and potential risks, and answer any questions they have. Clinicians should have sufficient knowledge to explain to patients how an AI application works before even asking for consent. If anything comes up that you don’t yet have an answer for, make sure you educate both yourself and your client before taking any action with AI in their treatment.

Only when that client feels sufficiently informed about Orchid’s AI assistant is it appropriate to confirm their consent to use that tool in your clinical workflow. If the client declines to consent to the use of AI while under your care, the best practice is to honor their request. 

With Orchid, we provide you a template consent form that includes the use of our AI technology. Send us an email at info@orchid.health to learn more anytime. 

What about concerns around perceived pressure on clients? 

It is always ethically sound to be cognizant of the inherent power dynamics between a clinician and their client. In that vein, some providers have felt uncomfortable asking for client consent to use an AI tool because they’re concerned about wielding undue influence over their client.

To mitigate that concern, you should design the informed consent process in such a way that minimizes the patient’s perceived pressure to accept your use of any AI tool. This is especially important to do with attention to populations who may be particularly vulnerable to coercion or undue influence. 

For example, emphasize during the consent process that a patient’s decision will have no impact on the care they receive. In other words: their consent is voluntary, and the quality of care they receive from you will not change if they choose not to consent to the use of AI systems in their health care.

Obtaining consent should be encouraged; after all, both you and the patient are after the same result: better mental health outcomes. While attending to all best practices in obtaining informed consent, you might be pleasantly surprised by your patient’s appreciation of your transparency and by their active participation in providing informed consent to use an AI tool as part of their therapy journey.

The informed consent process reflects the deeply collaborative nature of psychotherapy. Professionals should foster an environment where clients are not only informed but also feel that there is sufficient human oversight over AI in the clinical practice, and that they retain autonomy in their therapeutic journey with or without the use of AI technology.

Learn more: AI & Privacy Issues in Mental Healthcare

The Future of Ethical AI

Possessing a tool as powerful as artificial intelligence demands immense clinical responsibility and human oversight. The healthcare industry is in the early stages of creating a set of ethical guidelines for AI in healthcare, and our discussion of the trifecta of competence, confidentiality, and consent offers only a glimpse into the intricacies and nuances professionals must consider. Issues of transparency, accountability, data security, autonomy, fairness, discrimination, and trust are all part of the ethical AI discussion as well.

As technology continues to evolve, so too will the discussion around ethical issues of AI in healthcare. It will be incumbent upon mental health professionals to stay informed about new AI strategies while balancing their duty to patients and preserving the fundamental trust in therapeutic relationships.

The Orchid team is at the vanguard of AI and EHR in the healthcare industry: We’re here to serve as a resource and as a guide as you explore the use of artificial intelligence in your clinical toolkit, making sure that you can render the most effective mental health services while keeping patient safety at the forefront of healthcare delivery.

This blog post is a summary for general information and discussion only and does not, and is not intended to, constitute legal advice. It is not a full analysis of the matters presented, may not be relied upon as legal advice, and does not purport to represent the views of Orchid or anyone affiliated with Orchid. Readers of this blog post should contact their attorney to obtain advice with respect to any particular legal matter.


Joseph Pomianowski

Joseph Pomianowski is the CEO and co-founder of Orchid. Prior to starting an AI-powered EHR, he earned his JD from Yale Law School, was a Fulbright Scholar, and worked for Palantir Technologies. Drop him a message anytime to discuss Orchid, to debate tech & healthcare policy matters, or to recommend your favorite book.
