
Your Staff Is Pasting Patient Data Into ChatGPT. Here's Why That's a HIPAA Problem.

By Lauren Berkley, Founder, EllenRx | Master of Bioethics, University of Pennsylvania

It's 2:15 on a Tuesday afternoon. A medical assistant at a mid-sized dermatology practice is staring at yet another prior authorization denial — this time for a biologic that her patient has been waiting three weeks to start. The denial letter is vague. The clinical criteria it cites don't match what she sees in the chart.

She opens a new browser tab, navigates to ChatGPT, and types:

“Write a letter of medical necessity for the following patient: Jane Smith, DOB 04/12/1987, diagnosis atopic dermatitis (L20.9), policy number UHC-4471882, requesting Dupixent 300mg. She has failed topical corticosteroids, tacrolimus, and phototherapy. Her EASI score is 28.”

Three minutes later, she has a polished, well-structured appeal letter. It's good — better than what she could have drafted manually in 45 minutes. She copies it into the EHR, queues it for the physician's signature, and moves on to the next one.

She does this 15 times a day. So does the prior authorization coordinator down the hall. So does virtually every other practice in America that has discovered how effective consumer AI tools are at generating clinical documentation.

There's just one problem: every single one of those interactions is a HIPAA violation.

The Scale of the Problem

To understand why staff are turning to consumer AI chatbots, you have to understand the administrative nightmare they're living in.

According to the American Medical Association's 2024 Prior Authorization Physician Survey, physicians and their staff spend an average of 34 hours per week — nearly a full-time employee's workload — on prior authorization tasks alone. That's not a rounding error. That's a structural failure in how American healthcare delivers care.

The cruelest part? The denials are often wrong. Data from health systems and appeals tracking organizations consistently show that approximately 82% of prior authorization denials are overturned when providers submit a properly documented appeal. The system isn't filtering out inappropriate care. It's filtering out providers who don't have time to fight.

So staff found a shortcut. Consumer AI chatbots like ChatGPT, Claude, and Gemini can generate a compelling, clinically referenced appeal letter in minutes. The output is often remarkably good — comprehensive, well-organized, and persuasive.

The problem isn't the quality of the letters. The problem is what happens to the patient data that goes in.

Why This Is a HIPAA Violation — Specifically

Let's be precise about what's happening, because “it's just a chatbot” is not a legal defense.

No Business Associate Agreement (BAA) exists for consumer AI products. OpenAI does not sign BAAs for the consumer version of ChatGPT. Anthropic does not sign BAAs for consumer Claude. Google does not sign BAAs for consumer Gemini. These companies offer enterprise tiers with BAA options, but the free or consumer-subscription versions that your staff is using right now? No BAA. No exception.

Under HIPAA, any entity that creates, receives, maintains, or transmits protected health information (PHI) on behalf of a covered entity must have a signed BAA in place. The moment a medical assistant types a patient's name, date of birth, diagnosis code, and insurance policy number into ChatGPT, that platform becomes a de facto business associate — without the legal agreement that makes that relationship compliant.

PHI is PHI, regardless of the medium. HIPAA doesn't distinguish between PHI stored in an EHR and PHI typed into a chatbot prompt. If the information identifies a specific patient and relates to their health condition, treatment, or payment for care, it's protected.

Data retention creates ongoing exposure. OpenAI's privacy policy states that user prompts may be retained for up to 30 days for safety and abuse monitoring purposes. That means patient PHI — names, diagnoses, policy numbers, medication histories — is sitting on servers operated by companies with no BAA, no HIPAA compliance obligation, and no contractual duty to protect that information.

If your practice is submitting 15 prior authorization letters per day through consumer ChatGPT, that's potentially 450 patients' PHI sitting on non-compliant servers every month.

The Enforcement Reality

“But nobody's been fined for this yet.”

That's what healthcare organizations said about ransomware in 2015. About unencrypted laptops in 2010. HIPAA enforcement follows adoption curves, and AI adoption in healthcare is accelerating faster than any previous technology.

The Office for Civil Rights (OCR) enforces HIPAA through a four-tier penalty structure:

  • Tier 1 (Lack of knowledge): $100–$50,000 per violation
  • Tier 2 (Reasonable cause): $1,000–$50,000 per violation
  • Tier 3 (Willful neglect, corrected): $10,000–$50,000 per violation
  • Tier 4 (Willful neglect, not corrected): $50,000 per violation

The annual cap is $1.5 million per violation category. If your staff has been pasting patient data into ChatGPT for months — across hundreds of patients — each instance is a separate violation. The math gets catastrophic quickly.
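To see how quickly, here is a rough back-of-the-envelope sketch in Python. It is illustrative only: it assumes every prompt containing PHI counts as a separate violation, that penalties land at the Tier 2 minimum of $1,000, and that the $1.5 million single-category annual cap applies. OCR sets actual penalties case by case; the volume numbers simply reuse the figures above (15 letters a day, 450 a month).

```python
# Back-of-the-envelope exposure estimate, reusing the figures cited above.
# Illustrative assumptions: every prompt containing PHI is a separate
# violation, penalties land at the Tier 2 minimum, and the $1.5 million
# annual cap applies to this single violation category.

LETTERS_PER_DAY = 15             # from the scenario above
DAYS_PER_MONTH = 30              # matches the article's 15 x 30 = 450 figure
TIER_2_MIN_PER_VIOLATION = 1_000
ANNUAL_CAP_PER_CATEGORY = 1_500_000

def estimated_exposure(months: int) -> tuple[int, int]:
    """Return (violation count, capped penalty estimate) after `months` months."""
    violations = LETTERS_PER_DAY * DAYS_PER_MONTH * months
    uncapped = violations * TIER_2_MIN_PER_VIOLATION
    return violations, min(uncapped, ANNUAL_CAP_PER_CATEGORY)

for months in (1, 3, 12):
    count, penalty = estimated_exposure(months)
    print(f"{months:>2} month(s): {count:>5} violations -> ${penalty:,} estimated exposure")
```

Even at the Tier 2 floor, a single month of this workflow produces roughly $450,000 in theoretical exposure, and by month three you are brushing against the annual cap.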

What Your Practice Should Do Right Now

The good news: this is fixable. The bad news: you need to fix it today, not next quarter.

  1. Audit your staff's AI tool usage immediately. Don't assume you know what tools your team is using. Ask them directly: “Are you using ChatGPT, Claude, Gemini, or any other AI tool to draft clinical letters?” The answer will almost certainly be yes. (A minimal log-review sketch appears after this list.)
  2. Update your HIPAA policies to address AI tools. Add explicit guidance on which AI tools are approved for use with patient data and which are prohibited.
  3. Train your workforce — and document the training. HIPAA requires workforce training on policies and procedures related to PHI. AI tool use needs to be part of that training now.
  4. If you're using AI for prior authorization, ensure your vendor has a signed BAA. There are AI-powered prior authorization tools built specifically for healthcare that operate under BAAs, with encryption and audit trails.
  5. Look for purpose-built solutions. The difference between consumer AI and healthcare-specific AI isn't just compliance — it's quality.
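For step 1, a firewall, web-proxy, or browser-history export can tell you quickly whether consumer AI sites are being visited at all. Below is a minimal sketch, assuming a hypothetical CSV export with "user" and "url" columns (the filename and column names are placeholders, not a real product's format). It only flags visits to consumer AI domains; it cannot tell you whether PHI was entered.

```python
# Minimal sketch: flag visits to consumer AI tools in an exported access log.
# Assumes a hypothetical CSV export (web proxy, firewall, or browser-history
# tool) with "user" and "url" columns -- adjust to match your own export.
import csv
from collections import Counter
from urllib.parse import urlparse

CONSUMER_AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com",   # consumer ChatGPT
    "claude.ai",                        # consumer Claude
    "gemini.google.com",                # consumer Gemini
}

def flag_consumer_ai_visits(csv_path: str) -> Counter:
    """Count visits to consumer AI domains, grouped by user."""
    hits: Counter = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            host = urlparse(row["url"]).netloc.lower()
            if any(host == d or host.endswith("." + d) for d in CONSUMER_AI_DOMAINS):
                hits[row["user"]] += 1
    return hits

if __name__ == "__main__":
    # "proxy_log_export.csv" is a placeholder filename for your own export.
    for user, count in flag_consumer_ai_visits("proxy_log_export.csv").most_common():
        print(f"{user}: {count} visit(s) to consumer AI tools")
```

A report like this tells you who to talk to, not what was typed. The direct conversation in step 1 is still the real audit.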

The Bottom Line

The prior authorization crisis is real. It's burning out your staff, delaying your patients' care, and consuming resources that should be spent on medicine, not paperwork. Providers should absolutely use AI to fight back against a system that routinely denies care it later approves on appeal.

But they need to do it with tools built for healthcare — not tools built for writing birthday poems and debugging Python code.

The prior authorization system is broken. Don't let it break your compliance, too.

Ellen is a HIPAA-compliant patient advocacy platform with AI-powered prior authorization tools for providers. Built on AWS with BAA, encryption at rest and in transit, and audit trails. Learn more →
