Public AI Tools and Discovery Exposure
Generative AI tools have quickly become part of daily life for businesses and individuals across Illinois and Missouri. Platforms such as ChatGPT and other large language models often feel conversational and private, which can foster a false sense of security. Recent litigation, however, demonstrates that what users type into these tools may be stored, logged, and later produced in court. For companies operating under Illinois and Missouri discovery rules, both of which allow broad discovery of electronically stored information, this risk is particularly acute.
One of the clearest examples comes from copyright litigation brought by major publishers. The New York Times and other media organizations have sued AI companies alleging that copyrighted articles were impermissibly used to train AI models without authorization.[1] These cases are not limited to abstract copyright principles; they have generated aggressive discovery disputes focused on how AI systems collect, retain, and use data.
During that litigation, U.S. Magistrate Judge Ona Wang ordered defendants to produce 20 million anonymized ChatGPT conversation logs.[2] The order required disclosure of complete conversations, not just isolated prompts. For Illinois and Missouri litigants, this should sound familiar: courts in both states routinely compel production of broad categories of electronic records when they are relevant and proportional. Once AI logs exist, courts may treat them like emails, chat messages, or internal databases—fully discoverable.
Under both Illinois and Missouri law, once litigation is reasonably anticipated, parties must preserve potentially relevant information. Entering sensitive information into a public AI tool can create data that parties must preserve long after a user thought it was gone.
Emerging Liability and Privilege Risks
Other lawsuits illustrate different forms of exposure. In Raine v. OpenAI, the family of a teenager who died by suicide alleges that ChatGPT encouraged self-harm and failed to intervene as conversations escalated.[3] The complaint asserts negligence and product-liability theories, arguing that the AI’s design created foreseeable harm.[4] While this case is pending in California, the legal theories closely mirror those routinely litigated in Illinois and Missouri courts, where courts may hold companies responsible for products or services that create unreasonable risks.
In February 2026, Judge Jed S. Rakoff of the Southern District of New York held that documents a criminal defendant generated using Anthropic's public AI tool, Claude, were protected by neither the attorney-client privilege nor the work-product doctrine. See United States v. Heppner. The defendant, Bradley Heppner, had used Claude to draft strategy-oriented legal analyses after receiving a subpoena and later shared the resulting documents with counsel. The court ruled that privilege did not apply because: (1) no attorney was involved in creating the documents; (2) the AI tool expressly disclaimed providing legal advice; and (3) communications with a public AI platform are not confidential. In treating the Claude inputs as "communications" whose confidentiality had been surrendered, the court effectively regarded Claude as a third party rather than a research tool. The ruling underscores that unsupervised client use of consumer AI tools can defeat privilege, reinforcing the need for strict confidentiality controls and attorney-directed workflows when leveraging AI.
Practical Implications for Illinois and Missouri Businesses
Overall, these cases matter to local businesses because AI logs and user inputs can become evidence in someone else's lawsuit. Detailed prompts may contain internal strategies, employment issues, financial data, or early assessments of legal risk. In Illinois and Missouri litigation, such information may be discoverable even if the company never shared it outside the organization except by entering it into an AI platform.
There are also direct Illinois and Missouri connections. Illinois-based publishers, including the Chicago Tribune, have participated in litigation challenging AI training practices, anchoring these disputes squarely within Illinois interests.[5] In Missouri, while no major AI-privacy lawsuit has yet made headlines, the state legislature is considering changes to state law impacting AI usage.[6]
For Illinois and Missouri clients, the practical takeaway is straightforward: treat public AI tools as if everything you enter could one day appear in discovery. This includes draft contracts, internal communications, HR discussions, compliance issues, litigation strategies, and sensitive personal information. Courts in both states expect parties to know where their data resides and to produce it when required. AI can assist with efficiency, but it cannot preserve privilege, assess litigation risk, or comply with discovery obligations on its own. As Illinois and Missouri courts continue to apply existing discovery, negligence, and consumer-protection principles to new technology, one thing is clear: AI use does not reduce legal responsibility. Careful, informed use of AI today is the best way to avoid unintended privacy exposure and costly discovery disputes tomorrow.
[1] Audrey Pope, NYT v. OpenAI: The Times’s About-Face, Harv. L. Rev. (April 10, 2024), https://harvardlawreview.org/blog/2024/04/nyt-v-openai-the-timess-about-face/
[2] Blake Brittain, OpenAI loses fight to keep ChatGPT logs secret in copyright case, Reuters (December 3, 2025), https://www.reuters.com/legal/government/openai-loses-fight-keep-chatgpt-logs-secret-copyright-case-2025-12-03/
[3] Shweta Watwe, OpenAI Hit With Suit From Family of Teen Who Died by Suicide, Bloomberg Law (August 26, 2025), https://news.bloomberglaw.com/litigation/openai-hit-with-suit-from-family-of-teen-who-died-by-suicide
[4] Id.
[5] Robert Channick, Chicago Tribune sues Perplexity AI for copyright infringement, Chicago Tribune (December 4, 2025), https://www.chicagotribune.com/2025/12/04/chicago-tribune-perplexity-ai-copyright-infringement/
[6] Kurt Erickson, Missouri Lawmakers Move Toward Regulating AI, GovTech (December 3, 2025), https://www.govtech.com/artificial-intelligence/missouri-lawmakers-move-toward-regulating-ai