
Using AI Ethically in Therapy Practice: A Clinician's Guide

Galenie Team · 10 min read

As AI tools reshape therapy documentation, clinicians face new ethical questions. This guide covers regulations, consent, bias, and building an AI ethics policy.

A 2025 APA survey found that 41% of licensed therapists had used an AI-assisted tool in the past 12 months, up from 12% in 2023. The technology is here, and the ethical frameworks are catching up fast.

Most AI ethics guidance targets researchers or hospital systems. Solo and small-group clinicians need practical answers: What must my consent form say? Which AI tasks cross a clinical boundary? How do I vet a vendor’s data practices? This guide covers the 2026 regulatory landscape, consent requirements, bias risks, and a ready-to-use policy template.

The Ethical Landscape of AI in Therapy: 2026 Regulations

Therapists now navigate overlapping federal, state, and international AI rules. Ignorance does not shield you from enforcement.

APA Advisory on AI in Psychological Practice

The APA’s 2024 Advisory established three core principles:

  1. Human oversight is non-negotiable. AI outputs must be reviewed by a licensed professional before entering the clinical record.
  2. Transparency with clients is mandatory. Clients must be informed when AI tools are used in any aspect of their care.
  3. The clinician bears ultimate responsibility. AI-generated errors are the signing clinician’s liability, not the vendor’s.

State licensing boards increasingly reference this advisory in disciplinary proceedings.

State-Level Regulations

Illinois leads state-level AI regulation. The Illinois AI Video Interview Act (amended 2024) now extends to any AI system analysing human behaviour in professional services. Therapists using ambient listening tools with sentiment analysis must obtain explicit written consent. Violations carry penalties up to $1,000 per affected individual.

California’s AB 302 requires impact assessments before deploying AI tools that process sensitive personal information. Therapists must document what data the AI accesses, what decisions it influences, and what safeguards exist. Clients must be informed of their right to opt out.

New York, Colorado, and Washington have passed AI transparency requirements applicable to healthcare, with varying disclosure and enforcement specifics.

HIPAA and GDPR Implications

Under HIPAA, any AI tool processing PHI must be covered by a Business Associate Agreement (BAA) with administrative, physical, and technical safeguards per the Security Rule. The 2025 OCR guidance clarified that AI-generated clinical summaries are PHI from the moment of creation – not just after a therapist signs them.

Under GDPR, therapy records qualify as “special category data” (Article 9), requiring explicit consent. The data minimisation principle means AI tools should process only the minimum data necessary. Article 22 gives clients the right to understand how automated systems process their data – a requirement most AI therapy vendors do not adequately support.

Clinicians treating clients across jurisdictions must comply with both HIPAA and GDPR simultaneously; where the two diverge, meet the stricter standard.

What AI Should and Should Not Do in Clinical Settings

AI should handle administrative and structural tasks where errors are catchable during review, not clinical judgment tasks where errors require independent reasoning to detect.

  • Session documentation – AI should draft structured notes (SOAP, DAP, BIRP) from clinician input or approved transcription; it should not write clinical impressions or diagnostic formulations without clinician dictation.
  • Diagnosis – AI should suggest ICD-10/DSM-5 codes based on documented symptoms for clinician confirmation; it should not assign or recommend diagnoses autonomously.
  • Treatment planning – AI should populate treatment plan templates with goals from prior sessions; it should not generate treatment recommendations or modify interventions.
  • Risk assessment – AI should flag documented risk language (e.g., suicidal ideation mentions) for clinician review; it should not score or categorise risk levels independently.
  • Pre-session briefs – AI should summarise prior session notes and outstanding treatment goals; it should not predict client behaviour or session trajectory.
  • Billing – AI should suggest CPT codes matching documented session content; it should not auto-submit claims without clinician verification.
  • Client communication – AI should draft appointment reminders and scheduling confirmations; it should not generate clinical advice or therapeutic responses.

The core principle: AI assists, clinicians decide. Any tool that auto-finalises notes, independently assigns risk scores, or generates therapeutic recommendations crosses a line that no current regulatory framework permits. For more on which tools fit these boundaries, see our guide on AI in therapy practice management.

Informed Consent for AI Tools

Standard informed consent language does not cover AI tool usage. Therapists must add specific disclosures about what the AI does, what data it accesses, and what client rights apply.

Your AI consent addendum should cover:

  1. Plain-language description of the AI tool. “We use an AI tool to help draft session notes from my observations” is sufficient. Avoid vendor marketing language.
  2. What data the AI processes. Be specific: session audio, therapist-dictated summaries, prior notes, intake forms.
  3. Where data is processed and stored. Cloud or on-device? Which country? This matters for GDPR.
  4. Human review guarantee. A licensed clinician reviews all AI-generated content before it enters the record.
  5. Right to opt out. Clients can decline AI processing without it affecting their care. Required under GDPR and APA ethics standards.
  6. Data retention and deletion. How long does the vendor retain data? Can it be deleted on request?
  7. Vendor identity. Name the specific tool. “Third-party AI software” is inadequate for informed consent.

For template language, see our informed consent templates guide.

A standalone AI consent addendum is recommended over embedding disclosures in your general intake form. It can be updated independently when you change vendors, creates a clear record of AI-specific consent, and satisfies California’s AB 302 “conspicuous” disclosure requirement. Consent must be obtained before the AI tool processes any client data.
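The seven disclosure items above can be treated as a completeness gate before any consent is recorded. The sketch below is illustrative only: the field names and draft text are hypothetical, not a standard or a vendor API, and a real addendum still needs legal review.

```python
# Hypothetical sketch: the seven required disclosures modelled as a
# completeness check run before an AI consent addendum is finalised.
# Field names are illustrative, not drawn from any regulation or vendor API.
REQUIRED_DISCLOSURES = [
    "tool_description",     # plain-language description of the AI tool
    "data_processed",       # session audio, dictated summaries, prior notes
    "processing_location",  # cloud or on-device, and which country (GDPR)
    "human_review",         # licensed clinician reviews all AI output
    "opt_out_right",        # client may decline without affecting care
    "retention_policy",     # vendor retention and deletion terms
    "vendor_identity",      # the specific tool, named
]

def addendum_is_complete(addendum: dict) -> list:
    """Return the disclosure items still missing from a draft addendum."""
    return [item for item in REQUIRED_DISCLOSURES
            if not addendum.get(item, "").strip()]

# A partially drafted addendum: the check lists what remains to be written.
draft = {
    "tool_description": "We use an AI tool to help draft session notes.",
    "data_processed": "Therapist-dictated summaries and prior notes.",
}
print(addendum_is_complete(draft))
```

Running the check on the partial draft above returns the five undrafted items, so consent cannot be recorded until every disclosure is filled in.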

Bias, Accuracy, and Clinical Oversight

AI models carry biases from their training data. In therapy documentation, these biases produce clinically dangerous distortions.

Known Bias Patterns in AI Clinical Tools

Cultural and linguistic bias. AI transcription tools perform measurably worse on non-standard English dialects. A 2024 JAMA Network Open study found error rates 23% higher for Black English speakers and 31% higher for non-native English speakers – a confidentiality and documentation integrity problem for diverse practices.

Diagnostic language inflation. AI note tools tend to escalate clinical language. A therapist’s “seemed flat today” becomes “presented with constricted affect consistent with depressive symptomatology” – introducing clinical claims the therapist did not make.

Gender and identity bias. The AI Now Institute has documented persistent gender bias in clinical NLP systems, including misgendering, pathologising gender non-conformity, and applying different descriptors to identical symptoms based on documented gender.

Clinical Oversight Requirements

Every AI-generated document must pass through structured review before entering the clinical record:

  • Read the full output. Do not skim – AI errors are often embedded in otherwise accurate text.
  • Verify clinical claims. Confirm any diagnosis, symptom severity, or risk factor matches your observation.
  • Check for hallucinated content. AI models can fabricate plausible clinical details that did not occur.
  • Remove unsupported language. Delete clinical terminology you did not specifically intend.
  • Compare against source material. Spot-check key passages against the original transcription.

The standard: “Would I sign this as my own clinical documentation?” If no, edit before signing.
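The "remove unsupported language" step can be partially automated: compare the AI draft against the clinician's dictation and flag clinical terms the AI introduced on its own. This is a minimal sketch under stated assumptions; the watch-term list is illustrative, not a validated clinical lexicon, and it supplements rather than replaces the full read-through.

```python
# Hypothetical sketch: flag clinical terms present in an AI-generated draft
# but absent from the clinician's own dictation. The term list below is
# illustrative only; a real tool would use a much larger, reviewed lexicon.
WATCH_TERMS = [
    "constricted affect",
    "suicidal ideation",
    "depressive symptomatology",
    "psychomotor retardation",
]

def unsupported_terms(dictation: str, draft: str) -> list:
    """Return watch-list terms the AI draft contains but the dictation does not."""
    dictation_lower = dictation.lower()
    draft_lower = draft.lower()
    return [term for term in WATCH_TERMS
            if term in draft_lower and term not in dictation_lower]

# The escalation example from the article: "seemed flat" becomes formal
# diagnostic language the therapist never used.
dictation = "Client seemed flat today and spoke slowly."
draft = ("Client presented with constricted affect consistent with "
         "depressive symptomatology.")
print(unsupported_terms(dictation, draft))
```

Here the check surfaces "constricted affect" and "depressive symptomatology" as clinician-unverified additions, which is exactly the diagnostic-language inflation pattern described above.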

Protecting Client Data When Using AI Tools

AI tools introduce data risks that standard HIPAA checklists do not fully address.

Five Questions to Ask Every AI Vendor

  1. Does client data enter model training? Demand a contractual commitment that client data is never used for training. Without explicit consent, using therapy data for model improvement violates both HIPAA and GDPR.

  2. Where is data processed geographically? GDPR requires EU/UK data stay within adequate jurisdictions or be covered by Standard Contractual Clauses.

  3. Is there a signed BAA? Under HIPAA, no BAA means no compliance. The BAA must specifically cover AI processing, not just storage. See our HIPAA compliance checklist.

  4. What is the data retention policy? Push for the shortest retention period. Confirm data is deleted – not just deactivated – upon contract termination.

  5. What happens during a breach? Demand documented incident response: 60 days under HIPAA, 72 hours under GDPR. Your BAA should specify the shorter timeline.
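The five questions work best as a pass/fail gate: any single failure blocks adoption. The sketch below assumes a hypothetical questionnaire format; the keys, the jurisdiction allow-list, and the 72-hour threshold (the stricter GDPR timeline) are illustrative, and real vetting still means reading the BAA and DPA themselves.

```python
# Hypothetical sketch: the five vendor-vetting questions as a pass/fail gate.
# Questionnaire keys and the jurisdiction allow-list are illustrative.
def vet_vendor(answers: dict) -> tuple:
    """Return (approved, failed_checks) for a vendor questionnaire."""
    checks = {
        # Q1: contractual commitment that client data never enters training
        "no_model_training": answers.get("trains_on_client_data") is False,
        # Q2: processing stays in an adequate jurisdiction (illustrative list)
        "adequate_jurisdiction": answers.get("processing_region") in {"US", "EU", "UK"},
        # Q3: a signed BAA that specifically covers AI processing
        "signed_baa_covers_ai": answers.get("baa_covers_ai_processing") is True,
        # Q4: deletion (not deactivation) on contract termination
        "deletes_on_termination": answers.get("deletes_data_on_exit") is True,
        # Q5: breach notice within the stricter (GDPR, 72-hour) window
        "breach_notice_in_time": answers.get("breach_notice_hours", 9999) <= 72,
    }
    failed = [name for name, passed in checks.items() if not passed]
    return (len(failed) == 0, failed)

vendor = {
    "trains_on_client_data": False,
    "processing_region": "EU",
    "baa_covers_ai_processing": True,
    "deletes_data_on_exit": True,
    "breach_notice_hours": 24,
}
print(vet_vendor(vendor))
```

A vendor that fails any check, for example one that uses client data for training or cannot commit to a notice window, is rejected with the failing checks named, which gives you a documented basis for the decision.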

Minimum Technical Safeguards

At minimum, any AI tool processing therapy data must provide end-to-end encryption in transit and at rest, role-based access controls, audit logging, SOC 2 Type II certification (or equivalent), and data isolation ensuring your clients’ data cannot be accessed by other customers.

For a step-by-step guide to automating therapy notes with AI while maintaining these safeguards, see our implementation walkthrough.

Building an AI Ethics Policy for Your Practice

An internal AI ethics policy converts principles into operational rules. It protects you in licensing board reviews and malpractice proceedings.

AI Ethics Policy Template

Section 1: Approved tools and use cases – List every AI tool by name and vendor. Specify authorised and prohibited uses. Record the date of your last vendor security review.

Section 2: Client consent – Link to your AI consent addendum. Document how opt-out is handled and how consent is recorded. Note jurisdiction-specific requirements.

Section 3: Clinical oversight – Define who reviews AI output, the review standard, and the process for correcting errors.

Section 4: Data handling – Record where each tool processes data. Reference each vendor’s BAA. Specify retention periods and deletion procedures.

Section 5: Bias monitoring and incidents – Define how you monitor for AI bias. Establish incident response for AI errors in clinical records. Set an annual review schedule.

Quick-Reference Checklist

Before introducing any new AI tool:

  • Vendor provides a signed BAA (HIPAA) or Data Processing Agreement (GDPR)
  • Client data is not used for model training
  • Data is encrypted in transit and at rest
  • The tool has SOC 2 Type II or equivalent certification
  • You have updated your informed consent to disclose AI usage
  • Clients can opt out without affecting care quality
  • A licensed clinician reviews all AI output before it enters the record
  • You have documented approved use cases and prohibited uses
  • The tool’s data retention policy aligns with your practice’s records policy
  • You have a plan for migrating away from the tool if the vendor changes terms

Policy Review Schedule

  • Quarterly: Check for new state or federal AI regulations
  • Annually: Audit all vendor BAAs, data practices, and certifications
  • On vendor change: Complete the full checklist before any new tool touches client data
  • After any incident: Update the policy to address the root cause

Frequently Asked Questions

Do I need separate consent for AI-assisted documentation?
Yes. Standard consent forms do not cover AI processing. Under GDPR, explicit consent for automated processing of special category data is legally required. Multiple US states now mandate similar disclosure.

Can I use AI to write clinical notes without telling clients?
No. The APA advisory, GDPR Article 13, and state transparency laws require disclosure. Undisclosed AI use undermines the trust that client confidentiality protections exist to preserve.

What if a client opts out of AI processing?
Write their notes manually. Opting out cannot result in reduced care quality. Document the opt-out in their record.

Am I liable for errors in AI-generated notes?
Yes. The signing clinician is responsible for everything in the clinical record, regardless of how it was drafted.

Next Steps

Start with your consent forms, audit your vendors against the checklist above, and draft an internal policy before your next licensing renewal. For related guidance, see our posts on GDPR compliance for therapists and therapy session transcription ethics.
