I’ve been reading a lot of science fiction lately. And -- forgive me for stating the obvious here -- I’m still struck by the fact that, while science fiction is very much about technology, it’s usually even more so about people.
Sure, the people are set against a backdrop of spaceflight and androids and extraterrestrials and brain chips and the metaverse, but so often the story is really a study in how the characters and the institutions around them react to the new possibilities and situations presented to them by technology.
In the same way, I keep returning to the idea that questions about tech policy are often questions about people: our desires and behaviors, our relationships to the institutions that provide for us in exchange for power and capital. Undeniably, tech brings new angles and new urgency to these questions, but these are debates we’ve been having since way, way before the first message was sent on ARPANET.
Many of the current headline issues in tech policy sit at this intersection of technology and values -- our beliefs about our rights and responsibilities, about trust or justice or power.
For example: questions about privacy require us to understand what is collected and inferred about us -- and then to reason through which institutions we feel comfortable entrusting this information to, and what we are willing to exchange for our privacy. When thinking about misinformation and filter bubbles, the new dynamics of recommendation algorithms definitely matter -- but also at issue are the benefits and harms of the democratization of speech and the psychology of our reactions to counterattitudinal information. The use of ML algorithms in decision-making absolutely requires careful consideration of different fairness metrics -- but it also requires confronting the fact that data drawn from our society already encodes patterns of inequality, patterns we risk propagating if we don’t watch for them.
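To make that last point concrete, here’s a minimal sketch (in Python, with data and labels I’ve made up purely for illustration) of how two common fairness metrics -- demographic parity and equal opportunity -- can pull in different directions on the very same predictions. Which metric should govern is exactly the kind of values question that can’t be settled by the math alone:

```python
# Toy illustration (made-up data): two common fairness metrics can
# disagree about the very same set of predictions.

# Each record: (group, true_label, predicted_label)
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 1), ("A", 0, 0),
    ("B", 1, 1), ("B", 0, 0), ("B", 0, 0), ("B", 0, 1),
]

def positive_rate(group):
    """Demographic parity compares this across groups: P(prediction = 1 | group)."""
    preds = [p for g, _, p in records if g == group]
    return sum(preds) / len(preds)

def true_positive_rate(group):
    """Equal opportunity compares this: P(prediction = 1 | group, true label = 1)."""
    preds = [p for g, y, p in records if g == group and y == 1]
    return sum(preds) / len(preds)

for group in ("A", "B"):
    print(group, positive_rate(group), true_positive_rate(group))

# Both groups have a true positive rate of 1.0 (equal opportunity holds),
# yet group A receives positive predictions at a rate of 0.75 versus 0.5
# for group B (demographic parity fails). Deciding which gap matters is a
# values call, not a purely technical one.
```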
As I see it, the role of the technologist in policy isn’t to answer these age-old questions on everyone’s behalf. Rather, it’s to make sure that the people and their representatives confront these questions armed with all the facts -- with an understanding of the real, technically grounded tradeoffs, unencumbered by the confusion and mysticism and false dichotomies that motivated actors use to prevent real discussions about the design of technology.
Take a contested question like “should we adopt end-to-end encryption, if doing so means law enforcement can’t obtain communications under any circumstances (not even with a warrant)?”
Before we even get into values, there are a host of technical questions to work through: why exactly does end-to-end encryption (e2ee) mean that law enforcement can’t obtain communications? What are the security benefits of e2ee versus other systems? Are there any middle-ground options? Are there ways to design an access system that can’t be repurposed for censorship and surveillance?
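For a taste of what sits under the first of those questions: in an e2ee design, key pairs are generated on users’ devices and private keys never leave them, so the provider only ever relays ciphertext. Here’s a minimal sketch using the PyNaCl library (the participants and message are hypothetical, purely for illustration):

```python
# End-to-end encryption in miniature, using PyNaCl (pip install pynacl).
from nacl.public import PrivateKey, Box

# Each device generates its own keypair; the private key never leaves it.
alice_device = PrivateKey.generate()
bob_device = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
ciphertext = Box(alice_device, bob_device.public_key).encrypt(b"meet at noon")

# This is all the provider ever stores or relays -- and therefore all it
# could hand over, even under a warrant: opaque bytes.
what_the_server_sees = bytes(ciphertext)

# Only Bob's device, which holds his private key, can decrypt.
plaintext = Box(bob_device, alice_device.public_key).decrypt(ciphertext)
print(plaintext)  # b'meet at noon'
```

The debate over exceptional access, then, is really a debate about whether and how to change this key-management picture -- which is exactly what the middle-ground questions above are probing.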
Once we understand the technical facts, we have the basis for a discussion of tradeoffs -- of questions about our values and our system of governance. How do we weigh the value of security and enforcement against the value of privacy? Who is harmed by a lack of security, and who by a lack of privacy? How would our answers change under different governments and legal regimes?
I love tech policy because it requires us to wed deep technical understanding to deep thoughtfulness about our collective values and goals. And I’m madly excited about Tech Congress because it has turned that idea -- that technical expertise is an essential half of this proposition -- into action.
I can’t wait to learn from and contribute to this shared endeavor: building the technology and the institutions to secure a sci-fi future that we’re excited to inhabit.