On today’s internet, the boundaries of acceptable speech are set by a few massive platforms, including Facebook, Twitter, Instagram, YouTube, and a handful of others. If those companies find something unacceptable, it can’t travel far — a restriction that’s had a massive impact on everyone from copyright violators to sex workers. At the same time, vile content that doesn’t violate platform rules can find shockingly broad audiences, leading to a chilling rise in white nationalism and violent misogyny online. After years of outcry, platforms have grown more willing to ban the worst actors online, but each ban comes with a new political fight, and companies are slow to respond in the best of circumstances. As gleeful disinformation figures like Alex Jones gain power — and the sheer scale of these platforms begins to overwhelm moderation efforts — the problems have only gotten uglier and harder to ignore. And the hard questions of moderation are only getting harder to answer.
Okay, maybe not so measured, but worth reading. Law blogger and Verge favorite Eric Goldman on the recent moderation ruling against TikTok:
Unless the 3rd Circuit en banc quickly and decisively rejects this opinion, it will be celebrated by other judges eager to blow up Section 230 (of which there are many). As a result, I expect this opinion provides another hard shove towards the impending and seemingly inevitable end of Section 230–and the Internet as we know it.
[Technology & Marketing Law Blog]
Following a similar order against the attorney general of Texas, Judge Amit Mehta has blocked an investigation into Media Matters for America by Missouri AG Andrew Bailey, who alleged MMFA broke the law with critical reports about Elon Musk’s X. X’s similarly speech-chilling lawsuit against MMFA remains ongoing.
The company doesn’t know whether the issue is an App Store bug or whether Apple is “secretly implementing a censorship order”; the problem is apparently affecting “multiple other VPNs” on iOS in Brazil.
On Saturday, X shut down its operations in Brazil over claims the government gave it secret “censorship orders.”
In a new filing, the DOJ says it’s “not trying to litigate in secret,” but that the court should be able to review the classified information that led Congress to determine the divest-or-ban bill was necessary. In its own filing, TikTok says the government’s arguments for the bill are riddled with errors and omissions.
CNN has a rare inside look at the Supreme Court deliberations that led to the (bad!) Texas and Florida social media laws being put on hold, with the cases sent back to the lower courts to figure out how the laws would affect other kinds of websites and services. It almost went the other way, until Samuel Alito went too far in his first draft and Amy Coney Barrett flipped, eventually joining the 6-3 majority opinion.
[Alito] questioned whether any of the platforms’ content moderation could be considered “expressive” activity under the First Amendment.
Barrett, a crucial vote as the case played out, believed some choices regarding content indeed reflected editorial judgments protected by the First Amendment. She was ultimately persuaded by Kagan, but she also wanted to draw lines between the varying types of algorithms platforms use.
The ruling is already having an impact on other moderation cases.
Sens. Chris Coons (D-DE) and Marsha Blackburn (R-TN) updated their discussion draft that seeks to prevent debacles like the one between Scarlett Johansson and OpenAI. It’s gained the support of SAG-AFTRA and the Recording Industry Association of America, but the Electronic Frontier Foundation, which counts tech companies among its donors, previously raised concerns that the draft bill was overly broad.
The vice president and likely Democratic presidential nominee applauded the Senate’s vote to pass the Kids Online Safety Act and urged full passage through Congress.
New York Governor Kathy Hochul, a Democrat, recently signed the state’s own laws to protect kids online, exemplifying how states have been the first to move on this kind of legislation. Hochul said in a statement that when she signed those bills, “we were sending a message to the nation. Now, I’m excited to see the Senate take steps to help safeguard more young people nationwide.”
The bill they’re contained in cleared the 60-vote threshold to close debate, but the Senate must still vote on final passage. Schumer indicated that could happen early next week. Should it pass, it goes to the House, but that could take a while, considering members are leaving early for summer recess.
“Once the Senate clears today’s procedural vote, KOSA and COPPA will be on a glide path to final passage early next week,” Senate Majority Leader Chuck Schumer (D-NY) said ahead of the cloture vote, which closes debate and sets up the bills for a full vote.
That bill is being used as the vehicle for KOSA and COPPA 2.0: the two are basically tucked in as an amendment to an otherwise unrelated measure that deals with duplicative reporting requirements for federal agencies.
The Tennessee Republican, another of the bill’s lead sponsors, began her remarks by listing what KOSA doesn’t do. It doesn’t cover nonprofits, it doesn’t include rulemaking, it doesn’t cover news outlets, and it doesn’t give the government new authority, she said.
“There’s no censorship in this bill. None. Zero,” the Connecticut Democrat who’s the bill’s lead sponsor said on the Senate floor. “It is about product design. Much as it would be about a car that is unsafe and is required to have seatbelts and airbags.”
The Kentucky Republican said the bill “promises to be a Pandora’s box of unintended consequences.” He added that “there’s enough to hate this bill from the right and left,” describing, for example, how discussion of sexuality, climate change, and abortion could cause anxiety that the bill’s duty of care would require platforms to try to mitigate.
The board previously said the policy “disproportionately restricts free expression” because while the term is “sometimes used by extremists to praise or glorify people who have died while committing violent terrorist acts,” there are also alternate meanings.
In a test, Meta said, removing the term when “paired with otherwise violating content” captured “the most potentially harmful content without disproportionately impacting voice.”
Correction: Meta said it’s implementing the Board’s recommendations, not seeking further guidance.
[transparency.meta.com]
Supreme Court protects the future of content moderation
The NetChoice decision means curating, compiling, and moderating a feed is a First Amendment-protected activity.
The New York Times followed the harrowing journey of John Mark Dougan from his time as a deputy sheriff in Palm Beach County, Florida, to his new residence in Moscow. From there, he reportedly runs a vast network of largely AI-generated websites that spout disinformation; he has built more than 160 fake sites, according to the Times.
[The New York Times]
In a WhatsApp group chat, prominent businessmen discussed how to use their “leverage” to persuade Columbia University’s president to call in the NYPD.
Some members, including hedge fund manager Daniel Loeb, attended a Zoom meeting with New York City Mayor Eric Adams on April 26th. Some offered to pay for private investigators to help crack down on protesters, while others promised to donate to Adams’s campaign.