Insights

From the Courtroom to Congress: Bridging the Gap Between Law, Technology, and Policy

Earlier this month, the Congressional Internet Caucus Academy (CICA) hosted a panel discussion titled “Tech Platforms and the 1st Amendment: Impact of Supreme Court Rulings.”
Separate from the Congressional Member Organization known as the Congressional Internet Caucus, CICA is a project launched by the Internet Education Foundation (IEF), a 501(c)(3) nonprofit organization dedicated to promoting informed policy making and digital literacy for all Internet stakeholders. The panel was moderated by Nadine Farid Johnson (Knight First Amendment Institute) and hosted panelists Steve DelBianco (NetChoice), Yaël Eisenstat (Cybersecurity for Democracy), Olivier Sylvain (Fordham University School of Law), and Vera Eidelman (ACLU).

In an effort to prevent social media platforms from deplatforming political officials, Florida and Texas enacted laws in 2021 that attempted to regulate how social media platforms applied their content moderation guidelines and policies. NetChoice – an association of businesses and online consumers – challenged these laws as violations of both the First Amendment and Section 230 of the Communications Decency Act. After hearing oral arguments in February of this year, the Supreme Court ruled on both NetChoice, LLC v. Paxton and Moody v. NetChoice, LLC at the beginning of this month. There was no clear-cut decision; in fact, the Supreme Court vacated the rulings and remanded both cases to the lower courts, reasoning that the lower courts had not properly analyzed the laws under the First Amendment.

Panelist Vera Eidelman set the scene with an extensive description of the First Amendment, then pointed out that the majority opinion made clear that the First Amendment applies to activities both online and offline, with social media platforms exercising these protected rights at their own discretion. As online platforms exercise these rights, however, we encounter blurred lines, a facet of internet governance that panelist Steve DelBianco touched upon: how do we ensure we get content moderation “right” while pleasing both the user community and the advertisers?

Alongside Steve’s and Vera’s statements, panelist Yaël Eisenstat offered an alternative perspective by highlighting some main points from the Supreme Court justices’ opinions: a need for greater transparency in how the tech industry explains its algorithms, and the possibility that not every action taken by a social media company will qualify as expression protected by the First Amendment. What such legislation would look like, however, remains unspecified – the Supreme Court has left the door open.

As the conversation continued, a common theme arose: the gray area that is internet governance – more specifically, content moderation – and a need for both Congress and the Supreme Court to modernize. It is at the intersection of technology and policy that the concrete wires and machines powering our networks become much more abstract and opaque, even after the Internet’s nearly 60 years of existence. Amid this lack of clarity, panelist Olivier Sylvain emphasized that cases such as these open up space for Congress and regulators to think creatively about navigating the territory and the additional considerations that accompany conversations about content moderation in online spaces, namely consumer protections and the meaning of “free expression.”

As Congress thinks through such considerations for legislation, TechCongress can step in to fill the gaps. Since 2016, we have worked tirelessly to bridge the divide between technology and policy, placing 109 fellows in Congressional offices and committees as subject matter experts on existing and emerging technology policy issues. As policymakers craft content moderation and transparency legislation that reflects how the Internet operates today, it is imperative that they are backed by technologists eager to pitch in with their expertise.

If you believe you have the wealth of knowledge necessary to guide and support such conversations, please consider applying for our Congressional Innovation Fellowship for early- to mid-career technologists! Applications close on August 5th, 2024.

Viewing the TikTok Ban Through the Lens of the First Amendment

Last week, members of the TechCongress team attended a panel discussion regarding the recent “Protecting Americans from Foreign Adversary Controlled Applications Act” (H.R.7521). The panel was hosted by Harvard University’s Institute for Rebooting Social Media (RSM), a three-year “pop-up” research initiative at the Berkman Klein Center that aims to address social media’s most urgent problems. The panel was moderated by RSM visiting scholar Anupam Chander and hosted speakers Jennifer Huddleston (Cato Institute), Ramya Krishnan (Knight First Amendment Institute), Jenna Leventoff (ACLU), and Alan Z. Rozenshtein (University of Minnesota).

Jack Cable: Money Over Morals: A Business Analysis of Conti Ransomware

Jack Cable authored the first in-depth, peer-reviewed research into the Conti leaks, mapping over $80 million in new payments to Conti.

This paper was published in December as part of the APWG Symposium on Electronic Crime Research, for which we received the best paper award.

In February 2022, over 168,000 internal chat messages of the Conti ransomware group were leaked. Conti is one of the most prominent ransomware groups of all time. We sought to build a picture of Conti's (quite profitable) business based on on-chain analysis of Bitcoin payments.

To do so, we manually annotated all 666 Bitcoin addresses present in the leaks based on message context (our team included a native Russian speaker), tagging each address as either a salary, reimbursement, or ransom payment address.
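The paper’s annotation was a manual, human-in-the-loop process, but a keyword-based first pass can illustrate the general technique of tagging addresses from chat context. The following Python sketch is purely illustrative: the `ChatMessage` schema, the `classify_address_context` helper, and the English keyword lists are all hypothetical stand-ins, not the paper’s actual code or data.

```python
import re
from dataclasses import dataclass

# Hypothetical message record reconstructed from the leaked chats;
# field names are illustrative, not the paper's actual schema.
@dataclass
class ChatMessage:
    sender: str
    text: str

# A loose pattern covering legacy, P2SH, and bech32 Bitcoin address formats.
BTC_ADDRESS_RE = re.compile(
    r"\b(?:[13][a-km-zA-HJ-NP-Z1-9]{25,34}|bc1[a-z0-9]{25,62})\b"
)

# Illustrative keyword lists (English stand-ins for the Russian chat terms).
SALARY_HINTS = ("salary", "payday", "wage")
REIMBURSEMENT_HINTS = ("reimburse", "expense", "vps", "server")
RANSOM_HINTS = ("ransom", "victim", "payment received")

def classify_address_context(message: ChatMessage) -> list[tuple[str, str]]:
    """Extract Bitcoin addresses from a message and guess a label
    (salary / reimbursement / ransom) from surrounding keywords."""
    labels = []
    text = message.text.lower()
    for address in BTC_ADDRESS_RE.findall(message.text):
        if any(hint in text for hint in SALARY_HINTS):
            label = "salary"
        elif any(hint in text for hint in REIMBURSEMENT_HINTS):
            label = "reimbursement"
        elif any(hint in text for hint in RANSOM_HINTS):
            label = "ransom"
        else:
            label = "unknown"  # left for manual review
        labels.append((address, label))
    return labels

# Example usage with a fabricated message:
msg = ChatMessage(
    sender="manager",
    text="salary for June: 1BoatSLRHtKNngkdXEeobR76b53LETtpyT",
)
print(classify_address_context(msg))  # [('1Boat...tpyT', 'salary')]
```

In practice, a heuristic pass like this would only triage candidates; the actual labels still require the kind of careful, message-by-message review the authors describe.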

StateScoop: Why 2023 could be a year for civic-tech optimism

Founder and Executive Director Travis Moore co-authored “Why 2023 could be a year for civic-tech optimism.”

This year has the potential to be a positive, transformational year for government at all levels.

You’d be forgiven for scoffing at that sentence. With a divided Congress, many are ready to call 2023 a wash and set their sights on 2024. But from our vantage point in the world of public interest technology, that would be a mistake. We’ve never been as poised to drive meaningful, lasting change in government.

It’s taking place at every level of government — federal, state and local — as a result of three key factors: Increased capacity for tech talent in government jobs, digital delivery being written directly into policy, and government systems changing right before our eyes. The potential impact is enormous and will be felt in policies large and small — remaking the social safety net, transforming how we file taxes, modernizing infrastructure and beyond.

FCW: How smaller agencies are working to close their technology talent gaps

CFPB chief technologist Erie Meyer said she "frantically" recruits from fellowships like the TechCongress and Senior Congressional Innovation programs, which were launched in 2016 to place computer scientists, engineers and technologists on congressional teams as technology policy advisors for members of Congress. 

Web Summit: Cyberwarfare in 2022 Podcast

Alumnus Geoff Cain discusses cyberwarfare and threats on the Next Stage podcast.

Modern warfare, said Josh, extends beyond physical boundaries to the digital. Nowhere is this more obvious than in the war in Ukraine, where cyber-attacks have been part of Russia’s arsenal from the beginning.

While with the US Army, Josh found in 2018 research that the average iPhone was about one thousand times more secure than the Lockheed Martin F-35 Lightning II, a trillion-dollar, fifth-generation fighter jet.

Meanwhile, newer tech like Elon Musk’s Starlink satellites – which have been used by the Ukrainian military – rely on older technology protocols that can be tracked using a shop-bought kit costing only 25 dollars. In fact, there are Twitter accounts that do this publicly, added Josh.

“The next conflict could end without a shot being fired because no aircraft takes off from the tarmac,” remarked the Shift5 founder.

Josh Lospinoso, co-founder and CEO, Shift5, was in conversation with Geoffrey Cain, author and freelance writer, Wired, on the FULLSTK stage at Collision 2022.

Wall Street Journal: ‘The Titanium Economy’ Review: Making It in America

Alumnus Geoff Cain authored a book review for the WSJ discussing supply-chain issues in America.

After many grueling nights designing and building a car in “makeshift tents,” Elon Musk emerged with a prescient lesson for Tesla. “The issue is not about coming up with a car design—it’s absolutely about the production system,” Mr. Musk said in 2019, during the unveiling of the car maker’s SUV, the Model Y. “You want to have a good product to build, but that’s basically the easy part. The factory is the hard part.”

Mr. Musk wanted to take vertical integration—or control over the supply chain—to what he’s since called “absurd” heights. His business philosophy was decisive. In February 2022, the federal government announced that supply-chain issues meant that American manufacturers had five days’ worth of chips in their inventories—an emergency shortage compared to their 40-day supplies three years earlier.

The Epoch Times: The TikTok Trojan Horse and China’s Long Arm of Artificial Intelligence

Alumnus Geoff Cain, in an interview for the Epoch Times, discusses his book and privacy issues in America.

In this episode of American Thought Leaders, I sit down with Geoffrey Cain, an award-winning journalist, technologist, and author of “The Perfect Police State: An Undercover Odyssey into China’s Terrifying Surveillance Dystopia of the Future.”

“Everybody was constantly being watched by an artificial intelligence system, which was called the IJOP,” says Cain, referring to a pre-crime surveillance platform that the Chinese Communist Party launched in Xinjiang to predictively police the population.

Cain recently testified before the U.S. Senate about TikTok and why he believes the social media app’s troubled emergence in America, its shadowy corporate structure, and its connection to China’s security and data laws make it a unique national security threat.

“It is a disaster waiting to happen because TikTok, though the company denies it, is fundamentally obligated to follow … the laws that were created by the Chinese Communist Party,” Cain says.

C-SPAN: Senate Hearing on Social Media and National Security

Alumnus Geoff Cain testifies before the Senate on TikTok and social media’s impact on national security.

Chairman Peters, Ranking Member Portman, and Members of the Committee: It is an honor to be invited to testify here on social media’s impact on national security. Today, I will talk about one of the greatest technological threats facing our homeland security and democracy: TikTok, the social media app owned by the Chinese parent company ByteDance.

TikTok is the fastest-growing social media app ever and is expected to hit 1.8 billion users by the end of this year. Known for its fun and digestible video snippets, the app is enormously popular among celebrities and Generation Z users. It goes to great lengths to appeal to the sensibilities of the American market by loudly proclaiming progressive, democratic, egalitarian values. It posts messages on social media supporting inclusivity, diversity, LGBTQ+ rights, and pro-life causes.

All this is a distraction from the reality behind TikTok’s parent company in China, called ByteDance. As an investigative journalist in China and East Asia for thirteen years, I have been detained, harassed, and threatened for my reporting on Chinese technology companies. ByteDance and its subsidiary TikTok have sought to distract us from well-documented ties to the Chinese Communist Party.

EqualAI: NIST will cultivate trust in AI by developing a framework for AI risk management

Alumna Ellie Sakhaee writes for EqualAI about NIST’s steps to establish a framework for managing risks associated with AI systems.

Despite their astonishing capabilities, today’s AI systems come with various societal risks, such as discriminatory outputs and privacy violations. Minimizing such risks can therefore lead to AI systems that are better aligned with societal values and, hence, more trustworthy. Directed by Congress, NIST has taken important steps to establish a framework for managing the risks associated with AI systems by creating a process to identify, measure, and minimize risk.

More than 167 guidelines and sets of principles have been developed for trustworthy, responsible AI, but they generally lay out high-level principles. The NIST framework, however, stands apart from many others because it aims to translate principles “into technical requirements that can be used by designers, developers, and evaluators to test the systems for trustworthy AI,” Elham Tabassi, Chief of Staff at NIST’s Information Technology Laboratory (ITL), said on the In AI we Trust? podcast with EqualAI and the World Economic Forum.

Government Needs Diverse Public Interest Technologists to Improve Services

A former fellow, Victoria Houed, mentioned TechCongress in her blog post for the Stanford Social Innovation Review:

This feeling of empowerment as a newly minted public interest technologist prompted me to apply to TechCongress, a year-long fellowship that places technologists into a congressional office or committee. I had no idea when I applied that I would end up working for House Speaker Nancy Pelosi in 2020 or that two months after my arrival in Washington, DC, a deadly pandemic would begin.

Tech Transparency Project: Apple’s Trademark ‘Bullying’ Targets Small Businesses, Nonprofits

Alumna Celeste Chamberlain was interviewed for an article about Apple’s trademark targeting of small businesses.

Take 3.14 Academy, a Maryland-based nonprofit that provides educational initiatives and training to children with autism, their families, and their communities. In July 2019, founder Celeste Chamberlain, an autism specialist and the mother of two autistic children herself, filed what she thought would be a routine trademark application for her academy’s logo, featuring the Greek letter pi inside an apple.

But Apple’s lawyers intervened. In a 257-page filing opposing Chamberlain’s application, Apple argued that it is deeply involved in education because, among other things, it has donated iPads and Mac computers to schools, offers educational apps in its App Store, and makes GarageBand available to music teachers. Therefore, the filing argued, 3.14 Academy’s logo was “likely to cause confusion, mistake, or deception in the minds of consumers.”

In an interview with TTP, Chamberlain said that she and her lawyer were initially baffled by Apple’s opposition to her trademark and thought the company and its legal team would quickly realize it was all a big mistake.

The Federalist: Section 230 Needs To Be Fixed So Internet Companies Can’t Feature Child Pornography

Alumnus Mike Wacker authored an article for The Federalist about CSAM laws.

In the lead-up to the Communications Decency Act of 1996, America was concerned about the Internet exposing children to pornography. The July 1995 cover of Time magazine, titled “Cyberporn,” depicted a child staring at a computer with this caption: “Exclusive: A new study shows how pervasive and wild it really is. Can we protect our kids — and free speech?”

Today, our biggest problem is not children who are exposed to pornography. It’s children who are involved in pornography or child sexual abuse material – CSAM, as it’s known. When victims of CSAM seek justice in the courts, however, Section 230 of the Communications Decency Act – a law that protects digital platforms from liability for third-party content – often blocks their lawsuits.