Meet the Scholars 2020: Genevieve Fried

The Michigan Integrated Data Automated System (MiDAS), implemented in 2013 to the tune of $47 million, caused an immediate spike in fraud adjudications against unemployment insurance claimants. By 2015, the system had falsely accused tens of thousands of residents of fraud, disqualifying them from their benefits and saddling them with huge fines and penalties, collected through garnished wages and seized tax refunds, without due process. An internal state review of more than 20,000 fraud claims found a 93% error rate. Numerous individuals caught up in the MiDAS web were bankrupted; at least two individuals committed suicide because of the financial strain.

It is a somewhat recent phenomenon that the general public no longer sees algorithms as neutral arbiters of truth, fairness, and access to resources (critiques of technology and its lauded benefits have always existed, but typically in the siloed halls of academia). I first joined an AI lab during my first year of undergrad at McGill University, at a time when AI hype and fanfare were steeply rising. Many in the technical community, themselves growing rich off the successful commercialization of machine learning, or wooed by the prospect of future riches, believed that there was nothing AI could not do, no domain, social or otherwise, that it could not improve. It was a self-important and self-serving narrative, and yet it took the world by storm, finding backing and amplification in academia, media, and the private sphere.

But the past couple of years have shown how deeply misguided this utopian vision of an algorithmically mediated world is. We are facing issues that technical solutions cannot sufficiently address: from algorithmic governance, which is often neither fair, transparent, nor just; to profound levels of alienation and polarization at the hands of networked technologies like social media, which are contributing to the breakdown of our individual mental health and our social institutions; to new forms of corporate power and consolidation that historical notions of antitrust may not be equipped to tackle. These problems require careful, interdisciplinary research and robust policy at all levels in order to safeguard the rights and civil liberties of individuals, protect our civic institutions and our democracy, and hold power to account.

These lived experiences, and tragedies, compel me to pursue policy. Over the course of my career, starting as an artificial intelligence researcher and transitioning to a researcher of the social implications of algorithmic technology, I've studied what distinguishes a constructive implementation of a system from a harmful one. Typically, it is a matter of systems failing to meet the needs of the individuals and communities to whom they are applied, sometimes by design, and often through a failure to contend with the social and political environment in which these systems are embedded. Policy can shape the incentives that determine what technology is built and how, and ensure that public review and contestation, oversight, monitoring, and accountability are in place so that algorithms strengthen our democracy rather than impede it.

Genevieve is currently serving with Senator Chris Coons (D-DE), supporting his work on issues of algorithmic accountability and justice.