Exploit Machina

Last Friday, February 21, the LawTech Center hosted Professor Andrea Matwyshyn of Penn State Law for a presentation in the Purcell Reading Room. Matwyshyn spoke about her forthcoming article, "Exploit Machina."

Professor Matwyshyn holds appointments at both Penn State Law and the Penn State School of Engineering. She is also a Founding Faculty Director of the Penn State Policy Innovation Lab of Tomorrow (PILOT) and the Penn State Manglona Lab for Gender and Economic Equity. Matwyshyn has studied the intersection of law, policy, and technology for over two decades and has published many law review articles on the subject.

The presentation centered on what Matwyshyn calls "exploit machina problems," a phrase that nods both to the 2014 sci-fi thriller Ex Machina and to a piece of code that takes advantage of a vulnerability (an "exploit"). The term aptly foreshadows the premise of the article: that broken technologies, combined with broken governance, can cause irreparable harm, sometimes at scale.

“For centuries now, we’ve been engaged in a conversation between science and what Hayek called scientism, and sometimes the line between these two can be a little blurry,” Matwyshyn opened. Friedrich Hayek, the Austrian-British economist and Nobel laureate, cautioned against scientism: complete trust in the scientific method as the only way to learn truths about the world and solve social problems.[1]

Matwyshyn’s “exploit machina” situations challenge the scientific community to consider nuance and to question the trust we place in newly developed technologies, such as forms of artificial intelligence (“AI”) that produce predictive and prescriptive analytics. “We have industries building technologies in some cases that are transformational and helpful to humans and, in some cases, purely technologies of flawed judgments about humans that are being resold and leveraged to the detriment of those humans.”

Among the countless historical and modern examples that Matwyshyn presented, one stood out to me in particular: plans to have students wear brain-sensing headbands in the classroom. Such headbands are already deployed in some Chinese schools, and an American company, BrainCo, has developed a similar product. The BrainCo headbands use electroencephalography (“EEG”) sensors to track certain responses in the brain, detecting and quantifying students’ attention levels during class.[2] The implication is that teachers would use the results to keep students’ wandering minds in line.

Matwyshyn argues that this technology is dangerous because of its unreliability and “flawed judgments.” EEG headbands have significant functional limitations: there is evidence that they do not work correctly on greasy hair, or when a wearer has had caffeine, looks around frequently, or has low blood sugar. Furthermore, these headbands unnecessarily quantify student engagement and market those metrics as stand-ins for real learning. Matwyshyn urges that “engagement does not equal learning, attention does not equal thinking, focus does not equal creativity or innovation.” This “broken” technology presents an “exploit machina problem”: if implemented by school districts, it could intentionally (or unintentionally) harm students at scale, particularly those who do not fall within the “proper” or “efficient” standards that the designers have arbitrarily set.

“These issues aren’t new and the thinking behind them has echoes in history that we should connect with,” Matwyshyn explained. Science has historically been weaponized by the government against those deemed “different” or “not normal.” She pointed to the 1927 case Buck v. Bell, in which the U.S. Supreme Court upheld a Virginia statute permitting the forced sterilization of people with intellectual disabilities.[3]

Matwyshyn also draws upon Hannah Arendt’s underexplored work at the intersection of technology and politics. In 1964, Arendt participated in a conference where she expressed concern about the trend toward what she called “cybernation,” which Matwyshyn described as the “impact of technology on humans in context for their labor, their daily lives, their dignity and engagement with other humans.” Arendt also contrasted computer forms of memory with human memory, or what she called remembrance. “Remembrance is contextual, it is developmental, it is not replicable by a machine,” Matwyshyn echoed.

These ideas challenge the current paradigm in AI scholarship that the human brain can eventually be replaced by an AI “machine-brain.” Arendt predicted that the attempt to replace the human brain would result in a hyper-mathematization of human functions and in an effort by those doing the mathematizing to isolate themselves from the rest of society. Matwyshyn contends that we see both consequences today, in technologies like the classroom EEG headband and in the growing number of billionaires investing in doomsday bunkers.

To respond to “exploit machina” issues, Matwyshyn offers a two-part solution. First, she argues for threat metamodeling techniques that focus on content and control, harm, and intent (“CHI”). This type of modeling would explicitly consider public safety and potential harms. Second, she advocates for a new federal agency, the Bureau of Technology Safety, which could act as a technology regulator of last resort.

Look out for Professor Matwyshyn’s article, “Exploit Machina,” coming soon to a law review near you!


[1] https://mises.org/mises-wire/hayek-difference-between-science-and-scientism

[2] https://www.oregonlive.com/news/g66l-2019/01/7fa2c5265a1556/headband-that-detects-brain-activity-gets-tryout-in-schools-goal-is-to-improve-student-engagement.html

[3] Buck v. Bell, 274 U.S. 200 (1927).

Alicia Kaufmann ’27

Online Editor — hcr9bm@virginia.edu
