New York legislators are taking aim at artificial intelligence (“AI”) chatbots that may blur the line between automated assistance and professional advice. A recently advanced bill would make it unlawful for AI chatbots to pose as doctors, therapists, lawyers, or other licensed professionals. The Senate Internet & Technology Committee approved Senate Bill S7263 (“S7263”) last month, and the bill now awaits consideration on the Senate floor calendar. S7263 seeks to establish guardrails around the growing number of conversational AI tools, commonly known as chatbots, that can deliver sophisticated, and often persuasive, advice.
What the Bill Would Do
S7263 targets “proprietors” of AI chatbots, defined broadly as entities that own, operate, or deploy AI chatbots, excluding third-party developers that license their technology to proprietors. The bill defines “chatbots” to mean any AI system that simulates human-like conversation and interaction to provide information and other services to users. In practice, this would capture companies that deploy conversational AI tools offering guidance in areas traditionally associated with licensed professions, e.g., chatbots that provide medical or mental health guidance, financial or investment recommendations, or legal information in response to user questions. The definition could also encompass providers of standalone, web-based conversational AI services.
The bill creates a private right of action for actual harm resulting from a user’s reliance on any “substantive response, information, or advice” offered by an AI chatbot which, if provided by a natural person, would amount to the practice of a licensed profession under New York’s Education Law or the practice of law under the Judiciary Law. A plaintiff may additionally recover attorneys’ fees for willful violations, for example, where a proprietor knowingly deploys or maintains a chatbot that provides professional-style advice without adequate safeguards or disclaimers, despite awareness of the risk of user reliance.
The bill additionally requires proprietors to provide clear and conspicuous notice that users are interacting with an AI system and not a human. Compliance could take the form of prominent banners, pop-up disclosures before a chat session begins, and persistent labels within the chat interface. However, a disclaimer alone will not insulate a proprietor from liability if the chatbot’s conduct crosses the line into the unlicensed practice of a profession.
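As a purely illustrative sketch (not legal guidance, and not language drawn from the bill), a product team might centralize the AI disclosure so it is shown both before a session begins and persistently during the chat. The function and message text below are hypothetical:

```typescript
// Hypothetical disclosure helper for a chat UI. The goal is to surface the
// "you are interacting with an AI" notice at every stage of the session:
// as a pop-up before chat begins and as a persistent label while chatting.

type SessionStage = "pre-chat" | "in-chat";

const AI_DISCLOSURE =
  "You are interacting with an artificial intelligence system, not a human.";

function disclosuresFor(stage: SessionStage): string[] {
  if (stage === "pre-chat") {
    // Pop-up disclosure shown before the chat session begins.
    return [
      AI_DISCLOSURE,
      "This chatbot does not provide professional advice.",
    ];
  }
  // Persistent label kept visible within the chat interface.
  return [AI_DISCLOSURE];
}
```

Centralizing the notice text in one place makes it easier to audit, and to update if the bill’s disclosure language changes before enactment.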
Implications and Practical Takeaways for Businesses
Greater Litigation Exposure

By giving users a direct litigation path, the bill shifts enforcement from state regulators to the civil courts, increasing practical risk for companies embedding AI chatbots into products or services. If S7263 becomes law, proprietors that embed AI chatbots into consumer-facing products could face claims not only for faulty or inaccurate output under existing laws, but also for the chatbot’s conduct as an unlicensed “professional.” Plaintiffs’ firms may test the statute aggressively, given the promise of attorneys’ fees and the ability to plead economic and non-economic damages.
A Template for Other Jurisdictions

States are experimenting with transparency rules, governance frameworks, and mental-health safeguards in their approaches to AI regulation. New York’s licensure-based approach could be exported to any field where professional credentials are core to public safety: healthcare, legal services, financial advice, and beyond. Multistate companies should anticipate divergent standards and consider adopting the strictest set of controls as a baseline, while recognizing that a future federal framework could alter or preempt aspects of state regulation.
Persistent Ambiguities
- Definition of “substantive” advice. If S7263 becomes law, litigants will likely contest what constitutes “substantive” professional advice. Courts will need to distinguish among general information, interactive conversation, and personalized advice. These determinations could shape how companies design and structure AI chatbots: how responses are framed, when tools decline certain questions, and how users are directed to licensed professionals.
- Liability allocation. The bill exempts upstream developers that merely license technology, but modern AI supply chains blur those roles. Expect indemnity battles and novel third-party claims.
- Innovation chill. Liability concerns may discourage helpful AI use cases, such as legal-information chatbots that avoid personalized counsel, unless regulators or courts clarify the scope of permissible uses. At least one New York federal court recently addressed the distinction between providing legal information and legal advice. Although the case involved humans, not AI, dispensing free legal advice, the court found that the nonprofit’s proposed program to train non-lawyers to provide individualized, case-specific guidance would constitute the unauthorized practice of law, and dismissed the plaintiffs’ First Amendment challenge.[1]
Whether or not S7263 ultimately becomes law, prudent preparation now may pay off later. Companies that deploy AI chatbots should consider updating insurance and incident-response plans to treat AI advice as a distinct risk category. Companies should also display clear disclaimers, inventory existing use cases against professional-licensing rules, and monitor legislative activity, including early lawsuits, to stay ahead of the curve. Sheppard’s attorneys will be tracking the bill and will provide timely updates as the landscape develops.
