For Meta, The Mask Is Off: They Knowingly Harmed Children
Two guilty verdicts are a landmark in the long road to social media accountability
By Yaël Eisenstat
In dual first-of-their-kind verdicts, Meta was found guilty of harming young users in two separate jury trials last week, in Los Angeles and New Mexico. YouTube was also found guilty, to a lesser extent, in the L.A. trial.
These are the first two of a swath of cases making their way through U.S. courts that expose how Meta’s design features harm young users — including by enabling child sexual exploitation, addicting children, and contributing to mental health issues — and the lies the company has fed the public for years about its products’ safety.
Efforts to hold social media companies accountable for the harms their platforms cause have gained momentum over the years, but this moment is different for several reasons:
Evidence was finally laid out in front of a jury of real, everyday people.
For decades, these companies have been shielded from accountability due to a 1996 law that pre-emptively immunizes online platforms that carry third-party speech: Section 230 of the Communications Decency Act. The law was originally meant to help a burgeoning industry of Internet companies by enabling them to moderate user-generated content without becoming liable for any harmful content that remained.
When Section 230 was passed, these companies were mostly seen as mere “pipes” through which people’s speech flows. But over the 30 years since, they have evolved into corporate behemoths that have invented and unleashed tools to keep people on their platforms at all costs, while abdicating any responsibility for keeping us safe. Victims have been denied their day in court for so long, told that the companies were immune without ever having to prove that they played no role in the harms the victims suffered. But the “get out of jail free” shield is finally cracking, and cases are moving to trial.
In the New Mexico case, the jury was shown evidence of how children were recommended sexualized and other harmful content, how unconnected adults were able to target children, and how predators were able to groom children for human trafficking. In addition to internal documents and victims’ testimonies, the Attorney General’s office ran an undercover investigation that proved these things were happening on the platform, using testing methods modeled on the investigation our research team ran with the Wall Street Journal in 2024.
This is the type of evidence social media companies have fought tooth and nail to prevent from ever entering a courtroom, while simultaneously lying to the public by touting their safety efforts. And as these two verdicts proved: once laid out in plain view, the companies’ own products and business decisions are enough for a jury to recognize their negligence.
They proved, beyond a doubt, that Meta knowingly contributed to harms against children.
Beyond the headline fact of Meta’s guilt in harming young users, a major takeaway of these trials is that Meta knew it was causing these harms and continued anyway. The company was found liable in the New Mexico case for misleading consumers about the safety of its platforms, in addition to endangering children.
The juries in both cases were shown example after example of internal communications with senior leaders warning of designs and choices that led to harming young users. Multiple documents proved that Zuckerberg disregarded his own teams’ warnings and continued to prioritize ruthlessly targeting children — as young as 11 years old — for growth.
Our center’s co-director Damon McCoy and fellow Arturo Bejar — a former senior leader at Meta who had worked directly on these issues — were both witnesses in the New Mexico case, during which they detailed how company leadership chose growth and profit over their own safety teams’ warnings and advice, contrary to Meta’s public commitments to protect teens. Our research detailing how Instagram has broken its promises to protect children online, including through faulty tools it misleadingly advertised as teen safety products, was also referenced in the trial.
As New Mexico Attorney General Raúl Torrez said after the verdict: “Meta executives knew their products harmed children, disregarded warnings from their own employees, and lied to the public about what they knew.”
Meta and YouTube were finally treated as products and held liable for their own design and business decisions.
The companies have spent years (and vast sums of money) on lobbying and public relations campaigns to convince the public, regulators and the courts that they are merely conduits for other people’s speech, and therefore everything they create — every platform, profit-seeking scheme, growth strategy, and tactic they use to elicit behavior — should be viewed only in terms of “free speech.” After years of researchers, whistleblowers, and litigators working to counter this narrative, these cases finally treat social media apps as products whose particular design features and profitability strategies are not all merely speech-related.
The platform features at the heart of some of these cases — things like infinite scroll, autoplay, the timing and batching of push notifications, and other tactics borrowed from the gambling industry — have nothing to do with content moderation, contrary to the companies’ efforts to claim this is just about speech. These features are designed to elicit a behavior on the part of the user that furthers the company’s own business goals, and it is that behavior change that contributes to the claims in these cases, as we argued in an amicus brief co-authored with the Electronic Privacy Information Center (EPIC). In both New Mexico and Los Angeles, the juries agreed. They found that the companies’ own design features intentionally addicted young users, among other harms laid out in the cases.
What’s next in the accountability battle?
Many are equating this to the “big tobacco” moment. After years of efforts from researchers, civil society, and former tech employees to seek accountability, the tide is finally turning.
There will be appeals, First Amendment challenges, and possibly mixed results in other cases. But these verdicts reflect a key change: the public — including parents, teachers, and juries of everyday citizens — has lost confidence in the companies’ claims that they are doing their best to protect us all.
In the next phase of the New Mexico case, the Attorney General’s office will seek additional financial penalties and court-mandated changes to Meta’s platforms that offer stronger protections for children. This is where the rubber meets the road: social media companies will have to start designing their platforms with safety in mind.
With thousands of cases in the wings, the fines could add up to billions, if not trillions, of dollars. And even investors are taking notice. In 2019, when the FTC fined Facebook $5 billion over privacy violations, its stock actually rose. Investors did not view that as a threat to social media companies’ growth trajectory; it was a one-time cost of business they could weather. But Meta’s stock fell by 8% on March 26, after the two verdicts.
Unless and until Congress catches up and passes legislation to build in the proper guardrails and accountability for the industry, financial incentives are a key lever to push these companies to change. After years of dishonesty — now proven in court — we cannot take them at their word; there must also be verification and requirements around how products are marketed.
Some will argue that if these verdicts stand, it will destroy the internet as we know it. And while that hypothetical is far from an evidence-based inevitability, perhaps it’s a bargain these two juries would be willing to accept. The status quo, it is clear, is no longer acceptable. The “free speech” mask is off, and now we can hopefully move to the next phase of building safer, more accountable online spaces.
Yael Eisenstat is Director of Policy and Impact at Cybersecurity for Democracy, based out of New York University. She was one of the early Facebook whistleblowers, after serving as its global head of elections integrity for political ads in 2018. She previously served as a diplomat, intelligence officer, and White House adviser.