2023: Tech Regulation Takes Center Stage

Written by Eliza Thompson

As the Supreme Court prepares to hear arguments later this month on pivotal cases involving big tech regulation, the question of broader US tech policy strategy is taking center stage. Last month, President Biden outlined his agenda for tech regulation and his hope for bipartisan agreement. 2023 is poised to be a significant year for the tech industry, with topics such as platform liability and antitrust enforcement emerging as key agenda items. Below we outline key policy and legal developments to watch at the federal and state levels as both the public and private sectors grapple with emerging challenges and longstanding questions on tech regulation.

Section 230 Finally Comes to a Head 

The Supreme Court will hear arguments in Gonzalez v. Google this month, a case centering on the much-debated Section 230 of the Communications Decency Act. The case will largely determine the outstanding question of whether platforms can be held liable for the content they feature. Historically, Section 230 has provided extensive protections to companies such as Google and Twitter over the content they feature and promote. Removing or severely limiting this protection would dramatically alter the way tech companies provide content, such as targeted recommendations and ranked content.

On the surface, reforming Section 230 has bipartisan support, but Democrats and Republicans have differing motivations. Republicans generally want to strip Section 230 protections as they feel social media companies overreach on content moderation, whereas Democrats generally want to further limit the spread of harmful content. Big tech companies are stuck in an uncomfortable position, fighting with both sides of the aisle as they try to maintain the status quo (as outlined in a recently published Google blog post). 

The Supreme Court will additionally hear arguments in Twitter v. Taamneh, which also deals with the issue of platform liability but adopts a different legal approach, alleging violations of the Anti-Terrorism Act (ATA), as amended by the Justice Against Sponsors of Terrorism Act (JASTA). The Court will consider whether Twitter aided and abetted terrorism by failing to identify and remove ISIS content.

Antitrust Reform Gains Muscle 

It is shaping up to be a big year for antitrust legislation largely aimed at the “big five” tech companies – Alphabet, Amazon, Apple, Meta, and Microsoft. In January, the Justice Department and a group of eight states sued Google alleging antitrust violations over the company’s ad tech business, marking the first such lawsuit against Google under President Biden and the fifth filed by US officials since 2020. The Biden administration has signaled antitrust as a key priority area, including by increasing funding for the Federal Trade Commission (FTC) and Department of Justice (DOJ) in 2023. With more resources, as well as an FTC chair who made her name in antitrust law, the FTC will likely have a more active year in the antitrust space.

Other key areas to watch out for around big tech regulation include data privacy, content moderation, and online safety.

  • Last year Congress made progress on the American Data Privacy and Protection Act (ADPPA) after years of stalemate, reaching bipartisan agreement on issues such as preempting state law. However, the bill has stalled again with a divided Congress. States will therefore likely remain in the driver’s seat on privacy protection laws – with new privacy laws taking effect this year in California, Virginia, Colorado, Connecticut, and Utah.

  • Tech industry groups are pushing for courts to strike down Texas and Florida laws dealing with content moderation and restrictions (the Supreme Court has asked the Biden Administration to weigh in). The laws seek to bar platforms from restricting content based on users’ viewpoints, fitting within the larger, highly politicized debate over censorship.

  • Child safety and mental health issues are also a major focal point in 2023. In January, Seattle’s public school district filed a first-of-its-kind lawsuit against TikTok, Meta, YouTube, and Snapchat, claiming that they contribute to the mental health crisis among youth today. And in December, trade group NetChoice sued to block a California online safety law that would require digital platforms to vet whether new products pose harm to children before rolling them out.


Building a Long-Term Framework for AI 

While regulatory questions around big tech have been in play for many years, emerging technologies present new challenges for policymakers seeking to mitigate negative social impacts before they take root. ChatGPT has renewed attention on AI-driven innovation, with experts discussing “the good, the bad, and the ugly.” A recent paper, AI as Systemic Risk in a Polycrisis, outlines the challenges we will likely face as we embrace the use of AI in areas such as medicine and security.

The US has largely lacked a strategy on AI regulation, but the White House Office of Science and Technology Policy took a first step toward articulating one with its blueprint for an AI Bill of Rights, released after a year-long consultation. The blueprint identifies five guiding principles for the design and use of AI, including safe automated systems, protection of data privacy, and consideration of human impact.

Entering 2023, it will be interesting to see what approach the US takes as it continues to develop its long-term framework, especially with increased public focus on the issue. A federal framework around transparency and preventing discrimination is at an early stage, with much progress to be made. There are also various proposed bills and a patchwork of state and local laws around AI. Some key areas to pay attention to include:

  • The Algorithmic Accountability Act was introduced in Congress last year and would require technology companies to perform a bias impact assessment of any automated decision-making system being deployed in key sectors, including employment, healthcare, housing, and legal services. The Act is currently in the Subcommittee on Consumer Protection and Commerce.

  • The Federal Trade Commission is in the process of creating new rules around commercial surveillance and data security that will govern any company that develops and deploys AI systems. This aligns with the Biden Administration’s focus on ensuring designers, developers, and deployers of automated systems respect data privacy and existing safeguards.

  • States have begun implementing laws to address emerging AI risks. For example, New York, Illinois, and Maryland all passed similar laws regulating the use of AI in employment decision-making due to concerns over the potential for AI to perpetuate biases in hiring. The US Equal Employment Opportunity Commission (EEOC) is also launching an initiative to ensure such systems comply with federal law. 


The Year of Web3 

2022 was quite the year for Web3, both good and bad. While the collapse of FTX and the subsequent fraud charges highlighted the volatility of the crypto industry specifically, Web3 innovation is still expanding into all areas of the economy and has the potential to foster considerable positive societal impact if approached correctly. As the industry continues to grow, so will the policy debate on how best to regulate it. In January, the Biden Administration published a statement urging Congress to step up its efforts on crypto regulation, arguing regulators’ powers should be expanded to prevent misuse and to mitigate conflicts of interest.

States have also begun to address questions emerging across the Web3 space. In May 2022, California became the first state to begin creating a framework for responsible Web3 technology development, launching a process to craft a regulatory approach that spurs responsible innovation while protecting California consumers. The effort highlights how states can bridge the gap between innovation and regulation, encouraging the continued use of blockchain technology in research and industry while ensuring it is done responsibly.

Regulation across the Web3 space will likely become an increasingly hot topic as new challenges around stability and security emerge, such as data safety. As the volume of information stored on blockchains increases, criminal behavior such as ransomware will pose massive headaches for the sector. A key question the US will need to address first is which regulatory body, if any, has jurisdiction over Web3.

