Tech Regulation in 2023: Thorny Issues Remain as New Frontiers Emerge

Written by Eliza Thompson

The US Supreme Court last month handed Twitter a victory, protecting the platform from liability concerning terror-related content. The Court also left the hotly debated Section 230 untouched (perhaps signaling its inclination to leave the issue with Congress?). In February, we outlined what to expect in 2023 on tech regulation, including the potential impact of these Supreme Court decisions on the US regulatory landscape. Now, four months later, we’re providing key updates on recent developments, as well as a breakdown of what to expect for the remainder of the year.

Supreme Court Rules in Favor of Tech Platforms on Liability 

  • In May, the Supreme Court ruled on Twitter v. Taamneh, which dealt with the highly politicized issue of content liability for tech platforms. The Court held that Twitter was not liable for failing to remove terrorist content from its platform, reasoning that the failure did not amount to an “affirmative act” that would have rendered Twitter a meaningful participant in the terrorist activity. In the unanimous decision, Justice Clarence Thomas said social media platforms are little different from other digital and communication technologies. It is notable, however, that Justice Ketanji Brown Jackson stressed the narrowness of the decision, saying other cases presenting different allegations and records may lead to different conclusions.

  • In a three-page opinion, the Court meanwhile dismissed Gonzalez v. Google, the other key case on social media content moderation. The decision, which involved the contentious Section 230 of the Communications Decency Act, left in place a lower court ruling that protected social media platforms from a broad range of content moderation lawsuits.

  • The Court is still deciding whether to hear a number of cases on the constitutionality of state laws in Texas and Florida that restrict online platforms’ ability to moderate content.

AI Takes Center Stage

  • OpenAI’s CEO Sam Altman testified before the Senate Judiciary Committee in May, discussing with policymakers the need to regulate AI technology across the tech industry. Marking a difference in tone from previous tech hearings, such as those featuring Facebook and Google, both Altman and policymakers aligned on the need to develop a more concrete AI regulatory environment in light of the ongoing AI “arms race.”

  • Also in May, companies across the AI space, such as OpenAI and Microsoft, called for the creation of a new government regulator that could oversee licensing for AI development, as well as implement testing and safety standards. Other key industry players, however, have instead argued that AI regulation should be embedded into existing federal agencies, as risks will vary by sector and existing agencies have the necessary expertise to best regulate their sectors. Moreover, there are concerns that a new AI-specific regulator would face resource constraints that could create inefficiencies and more red tape for newcomers to navigate, thus limiting competition across the industry.

  • Senate Majority Leader Chuck Schumer recently called for preemptive legislation to establish regulatory guardrails on AI products and services, such as requirements around user transparency, government reporting, and value-based systems.

  • The National Telecommunications and Information Administration announced a request for comment (RFC) in April to study the potential risks to individuals and society posed by AI that may not yet have manifested. The RFC will inform the Biden Administration’s ongoing work to build comprehensive approaches to AI-related risks. 

  • As outlined by the Harvard Business Review, there have been notable recent efforts at the state level to regulate AI, signaling the importance of state involvement at this stage of AI regulation. At least 17 states have now introduced legislation. California policymakers introduced a bill to address algorithmic discrimination in the use of AI tools that make consequential decisions, such as those around insurance eligibility or housing advertising. Meanwhile, Pennsylvania policymakers introduced a bill to create a state AI registry. Legislation in other jurisdictions, including California, Connecticut, DC, New York, and Washington, seeks to require impact assessments.

Antitrust Developments to Watch

  • In April, nine additional states joined the Justice Department’s highly influential antitrust lawsuit against Google over the company’s digital advertising technology products. The lawsuit, filed in January, accused the tech giant of monopolizing the ad tech market through a combination of ad tech tools. A federal judge has set a faster-than-anticipated schedule for the case, which will likely put additional pressure on both sides.

  • In May, a US jury awarded nearly $268 million in damages to electronic components distributor Avnet Inc. in its lawsuit accusing a leading technology manufacturer, Nippon Chemi-Con Corp, of artificially inflating prices as part of a global price-fixing scheme. This decision may well reflect growing public support for stricter antitrust crackdowns across the tech sector. It could also motivate other companies to follow suit.

  • In late May, Representative David N. Cicilline (D-R.I.) stepped down from Congress, which has the potential to sideline antitrust legislation targeting tech companies, as Cicilline was a key proponent of such legislation.

Social Media Safety and Bans 

  • Children’s safety on social media is an important topic that has been surfacing in both state and federal legislatures in recent months. Utah governor Spencer Cox signed sweeping social media legislation requiring explicit parental permission for anyone under 18 to use platforms such as TikTok, Instagram, and Facebook, marking the first such law in the US. There is opposition from civil liberties groups over aspects of the law, however, including the ability for parents to access all of their children’s content. There are concerns this could put at risk marginalized youth and children facing unsafe home environments. There is also skepticism over how exactly the law will be enforced logistically.

  • In May, a bipartisan group of senators introduced a similar bill that would prohibit children under the age of 13 from using social media platforms altogether and require parental consent for teenagers. It would also prohibit companies from recommending content through algorithms to users under the age of 18. The same month, other senators reintroduced a bill seeking to amend the Children’s Online Privacy Protection Act to more adequately address youth mental health concerns.

  • A nearly six-hour House hearing in March probed TikTok’s CEO Shou Zi Chew over data security concerns and harmful content featured on the platform. The hearing renewed attention at the federal level to various bills aimed at TikTok, such as a bipartisan bill that would give the Commerce Department the ability to restrict foreign threats to technology platforms.

  • At the state level, Montana became the first state to enact a total ban on TikTok, marking the most extreme measure taken against the app in the country thus far. TikTok sued in response, claiming that the ban violates the company’s constitutionally protected right to disseminate and promote third-party speech. This is a key case to watch moving forward, as it squarely raises the question of whether tech platforms hold such rights.
