AI conversation history raises alarm for lawyers in the U.S.

April 15 (Reuters) – As people increasingly turn to artificial intelligence for advice, some U.S. lawyers are telling their clients not to treat AI chatbots as confidants when their freedom or legal responsibility is at stake.

These warnings became more urgent after a New York federal judge ruled this year that the former chief executive of a failed financial services company could not protect his AI chats from prosecutors accusing him of fraud.

In the wake of the ruling, lawyers have warned that conversations with chatbots like Anthropic’s Claude and OpenAI’s ChatGPT may be demanded by prosecutors in criminal cases or by opposing parties in civil lawsuits.


“We’re telling our clients: You should tread carefully here,” said Alexandria Gutiérrez Swette, an attorney at the New York-based law firm Kobre & Kim.

People’s conversations with their lawyers are almost always considered confidential under U.S. law. But AI chatbots are not lawyers, and attorneys are instructing clients to take steps that can keep their communications with AI tools more private.

In emails to clients and notices posted on websites, more than a dozen major U.S. law firms are outlining advice for people and businesses to lessen the chances of AI chats ending up in court.

Similar warnings are also appearing in some firms’ engagement contracts with their clients. For example, New York-based firm Sher Tremonte stated in a recent client contract that sharing a lawyer’s advice or communications with a chatbot could eliminate attorney-client privilege, the legal protection that typically keeps communications between lawyers and their clients confidential.

JUDICIAL DECISION

The case that helped raise the alarm involved Bradley Heppner, former chairman of failed financial services firm GWG Holdings and founder of alternative asset firm Beneficent. In November of last year, Heppner was charged by federal prosecutors with securities and wire fraud and pleaded not guilty.


Heppner had used the Claude chatbot to prepare reports about his case to share with his lawyers, who subsequently argued that his AI exchanges should be withheld because they contained details of his lawyers’ work on his defense.

Prosecutors argued that they had the right to demand the material Heppner created with Claude because his defense lawyers were not directly involved and because attorney-client privilege does not apply to chatbots.

U.S. District Judge Jed Rakoff of Manhattan ruled in February that Heppner must turn over 31 documents generated by Claude related to the case.


There “is, and could be, no attorney-client relationship between an AI user and a platform like Claude,” Rakoff wrote.

Heppner’s attorneys did not immediately respond to requests for comment. A spokesman for the U.S. attorney’s office in Manhattan declined to comment.

Courts are already grappling with the increasing use of artificial intelligence by lawyers and by people representing themselves in legal proceedings, which, among other things, has led to court filings citing cases invented by AI.


The Rakoff decision was an important early test, in the AI chatbot era, of the fundamental legal protections governing lawyer-client communications and materials prepared for litigation.

On the same day as Rakoff’s ruling, U.S. Judge Anthony Patti in Michigan said that a woman representing herself in a lawsuit she filed against her former company did not need to turn over her conversations with ChatGPT about the employment claims made in the case.

Patti treated the woman’s AI conversations as part of her personal “work product” for the case, rather than as communications with a person that her former employer could seek to use in its defense.


ChatGPT and other generative AI programs “are tools, not people,” Patti wrote.

OpenAI’s and Anthropic’s privacy policies and terms of use state that the companies can share user data with third parties. Both also say users should consult a qualified professional before relying on their chatbots for legal advice.

Rakoff, at a February hearing in Heppner’s case, noted that Claude’s terms “expressly established that users have no expectation of privacy in their information.”

Representatives from OpenAI and Anthropic did not respond to requests for comment.

ADVICE FOR PROTECTION

Lawyers’ advice ranges from telling clients to carefully select their AI platforms to suggesting specific language to use.

Los Angeles-based O’Melveny & Myers and other firms say in their notices to clients that “closed” AI systems designed for corporate use could offer stronger protection for legal communications, although even that approach has not yet been tested in court.

Some firms assert that AI legal research is more likely to be protected by attorney-client privilege when it is conducted at a lawyer’s direction. If a lawyer advises using AI, the person should say so in the chatbot prompt, New York-based law firm Debevoise & Plimpton said in a notice on its website, suggesting language such as: “I am doing this research under the guidance of my lawyer for the [X] litigation.”

Sher Tremonte, which frequently represents white-collar defendants, wrote in a new client contract in March: “Disclosure of privileged communications to a third-party AI platform may constitute a waiver of attorney-client privilege.”

Justin Ellis of the New York-based law firm MoloLamken and other lawyers said they hope more rulings will eventually clarify when AI chats can be used as evidence.

Until then, lawyers say, an old maxim still applies: Don’t talk to anyone about your case except your lawyer.
