Technology
U.S. legal experts are warning that conversations with AI tools may be used as evidence against users in court.

As more people turn to AI for guidance, U.S. lawyers are cautioning clients not to treat chatbots as confidential advisors, especially where legal risk is involved. Concern intensified after a federal judge in New York ruled that a former executive of a bankrupt financial services firm could not block prosecutors from accessing his AI chatbot conversations in a securities fraud case. Lawyers say the decision underscores that messages exchanged with tools like ChatGPT or Claude may be discoverable in court, unlike communications with licensed attorneys, which are typically protected by privilege.
Following the ruling, legal experts have begun advising clients that AI chats can be subpoenaed in both criminal and civil cases. Attorneys stress that, unlike lawyer–client conversations, interactions with AI systems carry no legal privilege, and that sharing sensitive legal advice with a chatbot may undermine confidentiality protections.
Several major law firms have issued guidance urging caution; some have even updated client agreements to warn that using AI tools could waive attorney–client privilege if legal advice is exposed to third-party platforms. The case that triggered these concerns involved a former financial-firm executive who used an AI chatbot to help prepare case-related material for his lawyers. Prosecutors argued the AI-generated materials were not protected because they were not created directly through an attorney, and the judge agreed, ordering disclosure of many of the documents. The judge also noted that AI platforms have no legal relationship with their users and therefore cannot offer privileged communication.
A court in Michigan, however, ruled differently in a separate case, allowing a self-represented plaintiff to keep her AI chat records private as work product prepared for her case, a sign that courts remain split on how such data should be treated. AI companies such as OpenAI and Anthropic note in their terms of service that user data may be shared in certain circumstances, and both recommend against relying on chatbots for legal advice.
Law firms are increasingly setting internal rules, recommending that any use of AI in legal research happen under lawyer supervision and be clearly documented. Until clearer legal standards emerge, attorneys continue to urge a cautious approach: sensitive case discussions should stay strictly between a client and their human lawyer, not an AI system.
