Lawyers Warn AI Chats Can Be Used in Court After New York Privilege Ruling

  • More than a dozen major U.S. law firms have warned clients that AI chatbot conversations may be discoverable in court.
  • The warnings followed a New York ruling that a fraud defendant’s chats with Anthropic’s Claude were not protected by attorney-client privilege or work-product doctrine.

U.S. law firms are moving quickly to warn clients that conversations with AI chatbots may not stay private once a case reaches court.

The urgency follows a February ruling by Judge Jed Rakoff in New York, who held that Bradley Heppner, the former chair of bankrupt financial services company GWG Holdings, had to turn over 31 documents generated through Anthropic’s Claude to federal prosecutors pursuing securities and wire fraud charges.

Rakoff found that no attorney-client relationship existed between a user and Claude, and that any confidentiality was waived by sharing information with the platform.

Law firms are starting to write the warning into client contracts

Reuters reported that more than a dozen major U.S. firms have since issued advisories telling clients to be careful with legal discussions involving chatbots such as Claude and ChatGPT. Some firms have gone further and embedded those warnings directly into engagement agreements.

New York firm Sher Tremonte, for example, said in a recent client contract that disclosing privileged communications to a third-party AI platform may waive attorney-client privilege.

That is a meaningful shift. What was, a few months ago, mostly internal caution from lawyers is now being formalized in client paperwork.

One ruling, but a broader legal signal

The Rakoff decision is not the only court view on the issue. On the same day, a magistrate judge in Michigan held that a pro se plaintiff’s ChatGPT conversations could be treated as personal work product and did not have to be produced. Still, legal advisers appear to be treating the New York case as the more important warning sign for now.

The deeper issue is not really AI itself. It is confidentiality. As Reuters noted, both Anthropic and OpenAI state in their terms that user data may be shared with third parties, including government authorities in some circumstances. For lawyers, that makes the old rule feel newly relevant: do not discuss your case with anyone except your lawyer, and "anyone" now includes the chatbot.
