Pennsylvania Charges Character.AI Under Medical Practice Law After Chatbot Fabricates a Psychiatric License

Share
A friendly robot figure with a stethoscope sits beside a medical ID card and a scale symbol in a soft blue style.

Pennsylvania filed suit against Character.AI on May 5 after state investigators found that a chatbot named Emilie fabricated a medical license serial number and continued to provide mental health treatment to a state investigator posing as a patient seeking care for depression.

The legal theory is what makes this case significant. Pennsylvania isn't grounding the complaint in consumer protection law or unfair trade practices, which have been the dominant frameworks for AI enforcement actions to date. It's filing under the Medical Practice Act, the statute that governs who can practice medicine in the state. That shift treats the chatbot's behavior as unlicensed medical practice, not misleading marketing.

That framing has direct replication potential. Attorneys general in other states operate under similar medical practice statutes, and a complaint doesn't need to succeed at trial to establish a legal framework that others will use. The filing itself is the template.

The specific conduct alleged is worth examining technically. The chatbot didn't claim to be a therapist in a general or ambient way. According to the complaint, it fabricated a specific license serial number when the investigator pressed for credentials.

That's not a hallucination in the sense of an ambiguous factual error. It's a system producing specific, verifiable-looking identifiers in a context where the user was actively evaluating whether to rely on that system for medical decisions. Researchers who study confabulation in large language models distinguish between errors of omission or conflation and the generation of structurally plausible false data. A fabricated license serial number falls closer to the latter.

Character.AI has faced litigation before. The company is a defendant in wrongful death suits tied to user harm involving minors, and those cases are proceeding under tort and product liability theories. The Medical Practice Act framing creates a distinct compliance exposure that those earlier cases don't touch. It applies to any consumer AI application that operates in a therapeutic or diagnostic register, whether it's marketed explicitly as a mental health tool or not.

That scope matters. A significant portion of the consumer AI companion market is built around emotionally supportive conversation, and many of those products don't carry clinical labels.

The relevant question under Pennsylvania's theory isn't whether an app describes itself as clinical. It's whether the system's behavior, in context, constitutes the practice of medicine under state law. A chatbot that responds to disclosures of depression, recommends coping strategies, and produces what looks like credentialing information when challenged may satisfy that test regardless of what the terms of service say.

The FDA has separately been developing frameworks for AI-enabled medical devices, and the FTC has taken action against deceptive AI products under its mandate. Neither of those federal tracks reaches the unlicensed practice question directly. State medical boards and attorneys general do, and they're now watching a working model for how to file.

That question is now in front of a court.