Senate Bill 7263 cleared the New York Senate Judiciary Committee 7-0 in February, and it would make providing personalized AI legal advice a Class E felony, punishable by up to four years in prison. It is the most aggressive AI-professional-services bill in the country, and it is not the only one. Here is what corporate legal departments need to know before the regulatory wave hits.
On February 25, 2026, the New York State Senate Judiciary Committee voted 7-0 to advance Senate Bill 7263, introduced by Senator Jessica Gonzalez. The bill would make it a Class E felony for an AI system to provide personalized legal advice to a consumer. Not a regulatory violation. Not a fine. A felony carrying one to four years in prison.
That sentence is worth reading twice, because it represents a category shift in how legislatures are thinking about AI in professional services. We have spent the last three years debating whether AI tools constitute the unauthorized practice of law. New York has decided to skip the debate and go straight to the criminal code.
S7263 does not stop at lawyers. It covers fourteen licensed professions — including doctors, accountants, engineers, architects, nurses, pharmacists, psychologists, social workers, veterinarians, dental hygienists, physical therapists, chiropractors, and dietitians. The bill draws a line between AI providing general information and AI providing personalized advice tailored to a specific individual’s situation. The first is permitted. The second is a felony.
The New York State Bar Association supports the bill. And if you are a General Counsel or CLO running a legal department that has deployed AI tools — or plans to — the implications extend well beyond New York.
The core of S7263 rests on a distinction that every legal department needs to internalize now: the difference between general legal information and personalized legal advice. An AI tool that explains what a non-compete clause typically contains is providing information. An AI tool that reviews your specific employment agreement and tells you whether your non-compete is enforceable in your jurisdiction is providing advice.
The line between legal information and legal advice has always been blurry. New York just drew it with a Sharpie and attached a prison sentence.
This distinction is not new. It has existed in unauthorized practice of law jurisprudence for decades. But it has never been applied to AI systems, and the practical challenge is significant. Modern large language models do not reliably stay on one side of that line. A user asks a general question, the model asks a follow-up, the user provides specifics, and within three exchanges the AI is doing something that looks a lot like tailored legal advice. The technology does not have a built-in governor for this. The bill says it needs one.
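To make that missing governor concrete, here is a minimal sketch of what a pre-response guardrail might look like in a consumer-facing tool. Everything in it is hypothetical: the `Exchange` record, the `looks_personalized` heuristic, and the escalation message are illustrative stand-ins I am inventing for this article, not anything S7263 prescribes, and a naive keyword check like this would not survive real conversations. A production classifier would need to evaluate the whole dialogue, not a single message.

```python
# A minimal sketch of a pre-response "governor" for a consumer-facing legal
# chatbot. All names here are hypothetical illustrations, not S7263 language.

from dataclasses import dataclass

@dataclass
class Exchange:
    """One turn of a conversation: the user's message and the model's draft."""
    user_message: str
    draft_response: str

def looks_personalized(exchange: Exchange) -> bool:
    """Naive placeholder for the information-vs-advice classifier.

    A real classifier would have to evaluate the whole conversation, because
    the line is usually crossed over several turns, not in a single message.
    """
    markers = ("my contract", "my agreement", "in my case", "my situation", "should i")
    return any(m in exchange.user_message.lower() for m in markers)

def release_or_escalate(exchange: Exchange) -> str:
    """Gate the model's draft before it reaches the user."""
    if looks_personalized(exchange):
        # Under a regime like S7263, this branch is the one that matters: the
        # tailored draft is held for attorney review instead of being sent.
        return ("That question is about your specific situation, which this "
                "tool cannot advise on. We can route it to a licensed attorney.")
    return exchange.draft_response
```

The design point is not the keyword list. It is that the release decision becomes an explicit, auditable governance control rather than a property of the model.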
For corporate legal departments, the question is not whether your team is using AI to give legal advice to consumers. It is whether any AI-powered tool in your operation — contract review platforms, compliance chatbots, employee self-service legal portals — could be construed as crossing that line. If your company deploys a customer-facing AI that answers insurance coverage questions or explains policy terms based on a specific policyholder’s circumstances, you are in the zone.
Key S7263 definitions: "Personalized legal advice" means guidance tailored to a specific individual’s legal situation, as distinguished from general legal information. The bill covers 14 licensed professions. Violation is a Class E felony (1-4 years imprisonment). The NY State Bar Association has endorsed the bill.
S7263 gets the headlines because of the felony classification. But the regulatory trend is multi-state and accelerating. California, Texas, Florida, and Illinois are all moving — each with a different theory of the case, but all pointed in the same direction.
California’s SB 574, introduced by Senator Tom Umberg, takes a disclosure-and-oversight approach rather than outright criminalization. It would require AI legal platforms to display prominent disclaimers that they are not providing legal advice, prohibit AI from representing clients in court proceedings, and mandate human attorney oversight for any AI tool marketed as providing legal services. It is a lighter touch than New York, but it creates compliance obligations that will affect every legal tech vendor operating in the state — which is most of them.
Texas is pursuing a liability framework through the RAIGA Act — Regulate AI-Generated Advice — which focuses less on criminalizing AI output and more on establishing clear liability rules for providers. The theory is that if you deploy an AI system that gives bad legal advice, you own the consequences. Florida’s AI Task Force is exploring similar legislative options, and Illinois has proposed AI professional licensing requirements that would create a new regulatory category specifically for AI systems operating in licensed professional domains.
At the federal level, the GUARD Act — Government Unified AI Regulatory Direction Act — would create a national framework for AI in professional services, potentially preempting or supplementing the state patchwork. The FTC is separately investigating AI platforms that make claims about providing legal guidance, focusing on whether those claims constitute deceptive trade practices.
Five states are writing five different rules for AI in legal services. If you operate nationally, you need to comply with all of them.
The pattern is clear. Within eighteen months, any AI tool that touches legal advice, legal information, or legal services will operate in a regulatory environment that does not exist today. The specifics will vary by state. The direction will not. We mapped nuclear verdict exposure by state on our interactive heat map at litigationsentinel.com/nuclear-verdicts, and the geography matters here too: the states with the highest litigation exposure are also the ones moving fastest on AI regulation.
The American Bar Association has been telegraphing this regulatory wave for two years. In 2024, Formal Opinion 512 confirmed that AI tools used by licensed attorneys under proper supervision do not constitute the unauthorized practice of law. That opinion was a green light for attorney-supervised AI — and a red light for everything else.
In 2025, ABA Resolution 604 went further: it explicitly recommended that state legislatures update their unauthorized practice of law statutes to address AI systems. The resolution did not dictate how. It simply said the existing rules were not built for this technology and needed to be modernized. S7263 is one answer to that call. California SB 574 is another. More are coming.
The ABA’s AI Task Force report called for a "technology-neutral" approach to professional regulation — the idea being that the rules should focus on what the tool does, not what the tool is. If a system provides personalized legal advice, it should be regulated the same way regardless of whether a human or a machine is providing it. That principle sounds reasonable in the abstract. In practice, it means AI tools that cross the information-to-advice threshold will face the same regulatory framework as a human practicing law without a license.
For corporate legal departments, the ABA’s position creates a clear safe harbor and a clear danger zone. The safe harbor: AI tools used by licensed attorneys, with appropriate supervision and governance, to support their practice. The danger zone: AI tools deployed directly to consumers, employees, or business partners that provide tailored guidance on specific legal situations without attorney oversight. If you are in the safe harbor, you are fine. If you are in the danger zone, the regulatory ground beneath you is shifting fast.
While legislatures draft bills, courts have been establishing their own precedent — and the picture is not encouraging for organizations that have been treating AI as a free-form legal research tool without governance.
In Mata v. Avianca (2023), a New York federal judge sanctioned attorney Steven Schwartz for submitting a brief that cited six cases fabricated by ChatGPT. The court found that Schwartz had failed to verify the AI output against actual legal databases, a basic competency obligation that the technology made dangerously easy to skip. The sanctions order in the same matter also reached Schwartz's co-counsel and their firm. The case established that AI hallucinations are the attorney's problem, not a defense.
More recently — and more relevant to the regulatory trend — Judge Rakoff’s ruling in the Heppner case earlier this year found that AI-generated legal documents are not protected by attorney-client privilege when created using consumer AI tools without attorney direction. We covered that ruling in depth in our analysis of the AI privilege split — if you have not read it, it is the most consequential AI-privilege ruling to date and directly compounds the risks I am describing here.
The regulatory and judicial trend lines are converging: Mata v. Avianca (2023) established attorney liability for unverified AI output. Heppner (2026) stripped privilege from consumer AI interactions. S7263 would criminalize personalized AI legal advice. ABA Resolution 604 recommended states update UPL statutes for AI. Five states now have active AI-legal-services legislation. Each development makes the next one more likely — and more aggressive.
The courts are telling you that AI output is your responsibility. The legislatures are telling you that unsupervised AI in professional services is heading toward criminal liability. The ABA is telling you the rules are changing. The only question is whether your department’s governance framework reflects any of this.
Let me be direct about what I am not saying. I am not saying AI is dangerous. I am not saying legal departments should stop using AI tools. I use AI in my practice. The companies I work with use AI across their legal operations. The technology is genuinely transformative when deployed correctly.
What I am saying is that the regulatory environment is catching up to the technology, and most corporate legal departments have not updated their governance frameworks to reflect that. Many deployed AI tools eighteen months ago under the assumption that the regulatory environment would remain permissive. That assumption is no longer safe.
Here is where the risk concentrates. First, customer-facing AI tools. If your company deploys a chatbot, virtual assistant, or automated guidance system that answers legal or quasi-legal questions for customers — coverage questions, claims guidance, policy interpretation, compliance advice — you need to evaluate whether that tool could be construed as providing personalized legal advice under S7263 or its equivalents. The exemption for general information is narrow, and these tools are designed to get specific.
Second, employee self-service legal tools. Many large organizations have deployed internal AI tools that help employees navigate HR policies, compliance requirements, or benefits questions. Some of these tools are sophisticated enough to provide guidance that could cross the information-to-advice line. Under S7263’s framework, the fact that the tool is internal does not necessarily exempt it — the bill targets the act of providing personalized professional advice, not the commercial context.
Third, legal operations AI without attorney oversight. Contract review platforms, litigation hold automation, discovery tools — if these are operating without a licensed attorney in the supervision chain, the regulatory trajectory I have described will eventually reach them. ABA Opinion 512 protects attorney-supervised AI. It does not protect AI operating autonomously in a legal function.
The question is not whether you use AI. It is whether you can prove — to a regulator, a judge, or a jury — that a licensed attorney supervised every output that touched legal advice.
The legislative timelines on these bills range from months to a year. S7263 still needs a full Senate vote and Assembly passage. California SB 574 is earlier in its committee process. But the direction is set, and the compliance lift is not trivial. Starting now gives you time to do this right instead of doing it under pressure.
One: Inventory every AI tool in your legal operation and classify it. Map each tool against the information-versus-advice distinction. Any tool that takes user-specific inputs and generates tailored legal guidance, even indirectly, needs to be flagged for governance review; a sketch of what that inventory record might capture follows this list. Do not limit this to tools the legal department procured. Check what customer service, HR, and compliance deployed independently.
Two: Establish attorney oversight protocols for every tool that crosses the line. ABA Opinion 512 provides the framework: a licensed attorney must supervise the AI’s output when it touches legal advice. Document who that attorney is, what the supervision process looks like, and how it is enforced. If you cannot name the attorney, the tool is not supervised.
Three: Update your AI acceptable use policy to reflect the multi-state regulatory landscape. Your policy was probably written before S7263, SB 574, and the RAIGA Act existed. It needs to account for the possibility that personalized AI legal advice will be criminalized in your operating jurisdictions. If your company operates in New York, California, or Texas, this is not hypothetical.
Four: Coordinate with your technology and product teams. If your company sells or deploys AI tools that interact with customers on legal, financial, or health-related topics, the product team needs to understand S7263’s fourteen-profession scope. This is not just a legal department problem. It is a product liability problem, a compliance problem, and potentially a criminal exposure problem.
Five: Build a monitoring framework for the legislative trend. The bills I have described are the first wave. By the end of 2026, I expect at least ten states to have active legislation addressing AI in professional services. Your regulatory tracking should include these bills the same way it tracks data privacy legislation or litigation funding reforms.
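For steps one and two, even a spreadsheet works, but the fields matter. Here is a minimal sketch of what an inventory record might capture, assuming a simple in-house register; the field names, the `AdviceRisk` enum, and the review rule are hypothetical illustrations, not requirements drawn from S7263, SB 574, or ABA Opinion 512.

```python
# A minimal sketch of an AI-tool inventory record, assuming a simple in-house
# register. Every field and rule here is a hypothetical illustration.

from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Optional

class AdviceRisk(Enum):
    GENERAL_INFORMATION = auto()  # explains what a non-compete typically contains
    TAILORED_GUIDANCE = auto()    # evaluates a specific user's facts

@dataclass
class AIToolRecord:
    name: str
    owning_department: str                 # legal, HR, customer service, product
    user_facing: bool                      # reaches consumers or employees directly
    takes_user_specific_inputs: bool
    risk: AdviceRisk
    supervising_attorney: Optional[str] = None  # step two: name the attorney
    operating_states: list[str] = field(default_factory=list)

def needs_governance_review(tool: AIToolRecord) -> bool:
    """Flag tools that cross the advice line without a named supervising attorney."""
    crosses_line = (tool.takes_user_specific_inputs
                    and tool.risk is AdviceRisk.TAILORED_GUIDANCE)
    return crosses_line and tool.supervising_attorney is None
```

The test in `needs_governance_review` encodes the rule from step two: if you cannot name the supervising attorney for a tool that crosses the advice line, that tool goes to the top of the review queue.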
Action checklist for GCs: (1) Inventory all AI tools touching legal advice — including tools deployed by non-legal departments. (2) Classify each tool on the information-vs-advice spectrum. (3) Assign attorney oversight to every tool that crosses the advice threshold. (4) Update AI acceptable use policies for multi-state compliance. (5) Brief product and technology leadership on S7263’s fourteen-profession scope.
Here is the part nobody is talking about yet. S7263 and its equivalents are addressing the supply side — restricting what AI tools can do. But the demand side is not going away. Consumers want cheaper, faster access to legal guidance. Businesses want to automate legal operations. The economic pressure to deploy AI in legal services is enormous and growing.
The organizations that navigate this well will be the ones that build governance frameworks robust enough to satisfy regulators while capturing the efficiency gains that AI makes possible. That is not a contradiction. It is a design problem. And the legal departments that solve it first will have a meaningful competitive advantage — both in operational efficiency and in regulatory risk management.
The organizations that navigate it poorly will be the ones that either freeze — stopping all AI deployment out of regulatory fear — or ignore the trend and discover the compliance gap when a regulator, a court, or opposing counsel surfaces it for them. Neither outcome is acceptable.
I started covering the AI-and-litigation intersection with our analysis of the Heppner privilege ruling last month. The regulatory story I have outlined here is the next chapter. The privilege question, the unauthorized practice question, and the governance question are all converging into a single reality: AI in legal services is no longer an unregulated frontier. The rules are being written now. Your department’s governance framework should be written at least that fast.
We are tracking every bill, ruling, and ABA development in this space through Litigation Sentinel. If you are not subscribed, this is the coverage that will keep you ahead of the regulatory curve. And if you want to see where your legal operations stand against the governance benchmarks that are emerging, our Executive Briefing at litigationsentinel.com/briefing is a two-minute diagnostic that maps your blind spots — including the AI governance gaps that S7263 just made urgent.
Take the Executive Briefing →

Join 1,847 litigation leaders who get weekly intelligence on strategy, technology, and the data that matters.