Executive summary
Musk’s deposition shows how courtroom rhetoric is being used to shape public and regulatory narratives about AI safety — even as documented Grok incidents undercut those claims.
In sworn testimony tied to his suit against OpenAI, recorded in September 2024 and filed in October 2024, Elon Musk positioned his AI model Grok as harmless compared with ChatGPT, asserting “nobody committed suicide because of Grok.” That assertion serves dual purposes: it frames Grok as a safety exemplar and paints OpenAI as overstating AI risks for competitive or regulatory gain. Yet the timing of the deposition—preceding an April 2026 jury selection—and the backdrop of real-world Grok controversies, including a deepfake nudity episode under investigation by California and the EU, reveal a strategic clash between courtroom messaging and incident-driven scrutiny of AI practices.
Across corporate legal planning, regulatory oversight, and partner evaluation, this episode illustrates how AI companies’ public testimony can reshape reputational landscapes, inform regulatory agendas, and create evidentiary records that outlive the soundbites they generate. At the same time, documented Grok safety incidents provide focal points for regulators and litigants to challenge claims made under oath.
Key takeaways
- Musk’s central claim, “Nobody has committed suicide because of Grok,” was delivered under oath in September 2024 and filed publicly in October 2024, positioning Grok’s safety record against ongoing lawsuits that allege ChatGPT interactions contributed to mental health crises.
- The $134.5 billion lawsuit alleges OpenAI abandoned its 2015 nonprofit charter after accepting Microsoft’s investment; jury selection is set for April 2026, making pretrial filings a battleground for competing safety narratives.
- Documented Grok incidents—including widespread non-consensual nude deepfakes on X in September 2024—triggered an October 3, 2024 California AG investigation, EU scrutiny, and a class action, directly challenging the deposition’s safety assertions.
- Under oath, Musk revised his claimed OpenAI donations to roughly $44.8 million, down from the $100 million previously reported, and reaffirmed his signing of the March 2023 “Pause Giant AI Experiments” letter, blending safety advocacy with competitive positioning.
- Courtroom rhetoric now serves as a public relations tool that creates a documentary record, influencing juror perceptions, regulator inquiries, and downstream contractual risk assessments.
- Regulatory bodies may use the discrepancy between sworn safety claims and on-platform incidents to expand oversight, while litigants can cite these incidents in discovery and trial to undermine credibility arguments.
Breaking down the deposition and its strategic use
When Musk declared that “nobody committed suicide because of Grok,” he tied a high-profile public safety debate to a legal confrontation over corporate governance. The deposition—captured in September 2024 and unsealed in October—functions as more than a discovery statement. It is a calibrated effort to shape the public record, set the narrative for potential jurors, and create talking points for rival platforms and regulators.
The timing deepens its strategic value. With jury selection scheduled for April 2026 in the Northern District of California, each pretrial filing gains amplified significance. Musk’s emphasis on Grok’s purported safety aligns with his broader claim that OpenAI breached its original nonprofit mission by turning for-profit under Microsoft’s influence. By contrasting Grok’s record with suits filed against OpenAI between June and August 2024, in which plaintiffs alleged that ChatGPT conversations contributed to suicidal ideation, he reframes the dispute around safety and ethics rather than corporate control alone.

OpenAI’s response has blended legal and public relations tactics. The company sought to dismiss some cases under Section 230 of the Communications Decency Act and announced safety investments on September 4, 2024, including new moderation layers and advisory boards. Those moves are now weighed against both Musk’s deposition and the unfolding Grok controversies, creating a layered evidentiary and public opinion landscape.
Why the “no suicides” claim collides with Grok incidents
The deposition’s core assertion becomes contested in light of concrete incidents. In September 2024, Grok-powered image generation tools on X produced non-consensual nude deepfakes at scale. This event prompted an investigation by California’s Attorney General on October 3, 2024, drew scrutiny from EU digital safety regulators, and spurred a class action lawsuit alleging negligence, privacy violations, defamation, and unfair practices. These incidents contradict a blanket safety claim and supply regulators and plaintiffs with documented harms to cite in discovery, briefs, and trial testimony.
The juxtaposition of sworn testimony and platform incidents crystallizes a basic tension: public courtroom declarations leave a documentary trail that adversaries can revisit. Regulators and opposing counsel alike can set unsealed deposition transcripts beside incident reports, internal logs, and third-party complaints, highlighting gaps between public statements and operational realities. The depth of incident documentation, including user reports, takedown logs, and communications with law enforcement, becomes central to any credibility challenge.
Legal and competitive stakes
The deposition blurs the line between litigation strategy and competitive marketing. By using a legal forum to assert safety credentials, Musk effectively converts courtroom theatrics into a public relations campaign. That approach has multiple downstream effects. First, it conditions public perception of Grok as the safer alternative, potentially influencing user adoption and partner negotiations. Second, it draws regulators’ attention to discrepancies that might justify heightened oversight or enforcement actions. Third, it sets a precedent: other tech litigants may adopt similar tactics, turning depositions into de facto marketing vehicles for product differentiation.

For OpenAI, Musk’s statements raise questions about narrative control. The company’s Section 230 defense and safety announcements aim to counterbalance the allegations, but those filings now coexist with a public record of Grok incidents. At trial, attorneys will likely juxtapose Musk’s safety assertions with deepfake logs and Attorney General correspondence to challenge both the substance and the sincerity of his claims. The strategic interplay will extend across media coverage, congressional inquiries, and international digital policy debates.
Stakeholder implications
- Legal teams may find that depositions serve dual roles—uncovering facts and broadcasting narratives. The unsealed Musk transcript creates a public record that can be deployed in media and regulatory forums, increasing demands for rapid internal investigations and evidence preservation.
- Trust and safety teams are facing heightened scrutiny as regulators cite the gap between sworn assurances and on-platform harm. Documented Grok incidents now underscore the need for deeper audit trails and external reviews, shaping future compliance priorities.
- Procurement and vendor-evaluation groups are encountering a new precedent: public deposition records as risk signals. Grok’s controversy may factor into contractual due diligence, influencing clauses on indemnification, audit rights, and incident-reporting requirements.
- Corporate communications functions confront a shift in the battleground for reputation. With depositions visible to journalists and public stakeholders, messaging must anticipate legal exposure, ensuring that public statements align with documented practices to avoid credibility gaps.
- Regulatory bodies in California, the EU, and beyond are now armed with a concrete case study of conflicting narratives versus documented harms. Those jurisdictions can reference the Musk transcript and Grok incident reports in drafting digital safety guidelines or pursuing enforcement actions.
Anchoring forward scenarios to evidence
While some observers forecast an escalation of courtroom soundbites, that outcome hinges on the rhythm of pretrial filings and scheduled hearings. With jury selection set for April 2026, key milestones include dispositive motion deadlines, evidentiary hearings on admissibility, and potential summary judgment motions. Each event may generate fresh transcripts and public snippets that feed into media cycles and regulatory comments.
Simultaneously, the California AG’s investigation—initiated October 3, 2024—and parallel EU inquiries provide fixed reference points. Advances in those probes, such as civil investigative demands or formal enforcement referrals, can be expected in late 2025 or early 2026 as regulators conclude fact-gathering phases. The class action against xAI tied to the deepfake episode may yield discovery documents by mid-2025, offering additional material to challenge deposition claims.
These developments suggest a plausible scenario: as depositions and investigations progress, the contrast between sworn safety claims and incident evidence will sharpen. That dynamic is likely to influence settlements or judgments and shape broader policy debates on AI accountability, transparency, and platform governance.
What to watch next
- Follow-up depositions in Musk’s suit, including rebuttal testimony from OpenAI executives or technical staff, and the timing of rescheduled sessions ahead of April 2026.
- Public filings by the California Attorney General’s office and the European Digital Services regulators detailing findings or proposed corrective measures in response to the Grok deepfake incident.
- Discovery disclosures in the xAI class action, especially internal communications, incident response logs, and moderation policy iterations that could confirm or contradict Musk’s safety assertions.
- Upcoming summary judgment and merits-phase motion deadlines, where evidentiary debates over credibility and the scope of incidents will play out in legal briefs and eventual court opinions.