Law School Hosts LawTech Events
LawTech was the topic of discussion for the day on Friday, February 22, at the Law School. In the morning, the Virginia Journal of Social Policy & the Law (VJSPL) hosted a symposium on Artificial Intelligence featuring professors from both the Law School and the Undergraduate University.
Marie Ceske ’25 kicked off the symposium with a brief introduction to both the event as a whole and its first speaker, Professor Danielle Citron.
Professor Citron framed her remarks as a discussion of the “costs to democracy” enabled by the use of AI in perpetuating online harassment and abuse. A particularly pernicious use of AI, the creation of deepfake sex videos, fits into a familiar and consistent pattern of online harassment used as a tool to silence journalists and researchers (particularly women) and quiet dissent. Professor Citron first described this pattern: publication of an article or research critical of powerful interests; a call and response between those powerful interests and “cyber mobs”; followed by a barrage of deepfake sex videos, doxxing, and online rape and death threats against the journalist or researcher who published the original content. She also cited several recent high-profile incidents in which this playbook was followed without deviation, including against Indian journalist Rana Ayyub, who uncovered human rights abuses implicating Prime Minister Narendra Modi[1]; Nina Jankowicz, a researcher and former executive director of the Disinformation Governance Board within the Department of Homeland Security[2]; and, most recently, Wall Street Journal reporter Katherine Long, who was targeted after authoring an article naming a twenty-five-year-old Department of Government Efficiency employee who had previously posted racist comments on his personal social media accounts.[3]
Professor Citron explained how these incidents evidence what she calls the “authoritarian two-step”: the adoption, by officials within our own government, of tools that authoritarian governments use to silence dissent and chill the speech of members of civil society. The most recent episode concerning Department of Government Efficiency employees worryingly demonstrates a new “digital authoritarian tool,” which Professor Citron identified as the manipulation of the term “online abuse” by government officials and their allies to accuse journalists of the very kinds of online abuse those officials are themselves engaging in. At the same time that Long was targeted for her reporting on one Department of Government Efficiency employee, Elon Musk accused a Wired magazine reporter of illegal doxxing for naming six of his employees in the magazine’s reporting. Although there is no federal anti-doxxing statute, Professor Citron noted that, in response to these recent accusations against journalists, the Department of Justice impliedly threatened investigations under a federal cyberstalking statute, a statute seldom invoked on behalf of everyday victims of cyber harassment.
With the threat to democracy identified, Professor Citron turned to the legal and market responses available to combat this pattern of online abuse and harassment. Her outlook, however, was decidedly grim. “We need law,” Professor Citron prescribed. Yet law enforcement is ineffective against the “cyber mobs” who wage these mass online attacks. Faced with thousands of perpetrators, officers often do not know where to begin an investigation, and it may be difficult, if not impossible, to identify many of the posters, who are spread across numerous jurisdictions and generally post anonymously. Instead, Professor Citron noted, the typical law enforcement response to reports of this kind of online abuse is inaction and the unconstructive advice to simply ignore it. Do civil actions provide adequate legal remedies? No, remarked Professor Citron, because “there is no deep pocket here, thanks to Section 230 of the Communications Decency Act,” the much-criticized federal statute providing a liability shield to online content platforms.
Market responses are equally inadequate, commented Professor Citron. Despite the rise of trust and safety initiatives among large tech platforms through much of the 2010s, those commitments have proven remarkably hollow and reached their “high-water mark [in] 2020.” Since then, companies like Google, X, and Meta have fired their AI ethics teams and disbanded their trust and safety councils. The advertiser revolt that catalyzed the birth of the trust and safety industry in the 2010s has not exerted the same market pressure in the 2020s, enabling the movement’s subsequent demise.
Following Professor Citron’s discussion, David Evans, professor of computer science at the University of Virginia, and Thomas Nachbar, professor of law at the Law School, provided complementary analyses of the “explainability” problem of AI: a control on AI models, frequently demanded by public commentators, that requires the grounds for a model’s output or “decision” to be capable of explanation.
Professor Evans discussed the “explainability” problem from a technical perspective. He first provided a brief technical background on how AI models are trained and developed, the upshot of which is that an AI model is essentially a set of a few billion numbers representing the model’s parameters. Once the model receives an input, calculations over those numbers generate an output. That technical description itself provides one way to conceptualize the explainability of a model: a model can be “explained” by the “calculation of numbers in the model and how it generates an output.”
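To make that conception concrete, here is a minimal sketch, not drawn from Professor Evans’s talk and using entirely invented numbers, of a “model” reduced to its parameters and the arithmetic that turns an input into an output.

```python
# A minimal, invented sketch: a "model" is just stored numbers (parameters),
# and producing an output is just arithmetic over those numbers and the input.

def tiny_model(features, weights, bias):
    """Weighted sum of the input features plus a bias term -- the same basic
    arithmetic that, repeated across billions of parameters, underlies large AI models."""
    return bias + sum(w * x for w, x in zip(weights, features))

# Illustrative values only; a real model would have billions of parameters.
weights = [0.8, -0.3, 0.5]
features = [1.0, 2.0, 0.5]  # numeric encodings of some input
print(tiny_model(features, weights, bias=0.1))  # "explaining" this output means tracing the arithmetic
```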
This is not a particularly useful method of explanation, however, since it is not easily comprehensible to humans. Instead, Professor Evans described two alternative approaches that may provide a more useful way of explaining AI model outputs.
The first method Professor Evans discussed was “counterfactuals.” By changing some property of an input, you can observe the differences in the output generated by the model. Professor Evans presented a hypothetical example of an AI model, trained on name, LSAT score, and writing sample score data, that produces an admission decision for law school applicants. By varying the LSAT score across several iterations of an input, for example, you can observe the relative weight of that particular data point in the model. And if changing an applicant’s name from a female name to a male name affects the output, that may prompt closer interrogation of the model. This is not a perfect solution, Professor Evans noted, because some degree of randomness is built into AI models, so identical inputs may not always yield the same output to begin with.
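A rough sketch of that counterfactual approach appears below. The admissions “model” here is an invented stand-in (a simple scoring rule), since the point is the probing procedure itself: hold every input fixed, change one property, and compare the outputs.

```python
# Counterfactual probing of a hypothetical admissions model.
# The "model" below is an invented stand-in; in practice you would query the
# actual model the same way: change one property of the input, hold everything
# else fixed, and compare the outputs.

def admissions_model(name: str, lsat: int, writing: float) -> str:
    """Invented scoring rule standing in for an opaque AI model."""
    score = 0.6 * (lsat - 120) / 60 + 0.4 * writing
    # A well-behaved model should ignore the applicant's name entirely.
    return "admit" if score > 0.55 else "deny"

baseline = {"name": "Jane Doe", "lsat": 168, "writing": 0.3}

# Counterfactual 1: vary only the LSAT score to gauge its relative weight.
for lsat in (155, 160, 165, 170, 175):
    print(f"LSAT {lsat}: {admissions_model(baseline['name'], lsat, baseline['writing'])}")

# Counterfactual 2: change only the name. If the decision flips, the model
# warrants closer interrogation.
for name in ("Jane Doe", "John Doe"):
    print(f"Name {name!r}: {admissions_model(name, baseline['lsat'], baseline['writing'])}")
```

Professor Evans’s caveat about randomness applies here as well: because many generative models sample from a distribution, even identical baseline inputs may not return identical outputs, so counterfactual comparisons may require repeating each query.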
The second method Professor Evans presented was “self-interrogation.” Many text-based generative AI models, especially newer reasoning models like OpenAI’s o3-mini and DeepSeek’s DeepSeek-R1, when prompted to explain themselves, will generate text that resembles the “thinking” it used to derive an output. These models do a reasonably good job of showing their work, in a sense, before arriving at the final output displayed to users.
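As a minimal illustration of that self-interrogation idea, the sketch below assumes the OpenAI Python SDK, an API key in the environment, and access to a reasoning model such as o3-mini; the prompt is invented for illustration, and any other provider or model could be substituted.

```python
# Self-interrogation sketch: ask a reasoning model to walk through how it
# reached an answer. Assumes the OpenAI Python SDK and an API key in the
# OPENAI_API_KEY environment variable; the model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o3-mini",  # a reasoning model; substitute whatever model is available
    messages=[
        {
            "role": "user",
            "content": (
                "Should an applicant with a 168 LSAT and a strong writing sample "
                "be admitted under a policy favoring both? "
                "Explain, step by step, how you reached your answer."
            ),
        }
    ],
)

# The reply includes prose resembling the "thinking" the model used to reach
# its answer -- the kind of self-explanation Professor Evans described.
print(response.choices[0].message.content)
```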
Professor Nachbar complemented Professor Evans’ discussion with a legal perspective on explainability. Professor Nachbar noted that explainability has come to mean several different things in public discourse, which sometimes conflates demands for AI models whose outputs are themselves explainable with demands for an explanation of the functional role played by AI models. Most commonly, though, explainability has been used as a synonym for transparency.
However, Professor Nachbar argued that public discourse should move away from this emphasis on transparency and toward a more robust understanding of explainability in legal settings. He offered a taxonomy to frame conceptions of AI explainability, distinguishing between “clarifying explanations” and “causal explanations” of AI outputs. According to Professor Nachbar, borrowing from Google Dictionary, a clarifying explanation is “a statement or account that makes something clear.” In contrast, a causal explanation is “a reason or justification for an action or belief.”
Causal explanations are familiar in legal contexts, Professor Nachbar noted. For traditional software, he explained, the two kinds of explanation run in parallel: the answers to such inquiries can be found directly in the code and from the developer who wrote it. But the two diverge in the context of AI. For a model-based AI, looking at the model generally does not provide a comprehensible causal explanation. The causal explanation for any given output, Professor Nachbar explained, will always be some version of “because it matches the pattern” of the data the model was trained on, which does little to meaningfully explain the output.
Instead, Professor Nachbar identified four factors characteristic of useful explanations that can help inform causal understandings of explainability in the AI context. First, he posited, explanations are situationally contingent. Different situations present different explanatory demands, and those demands determine which features of an output will be relevant at any given time: “We ask for explanations for things that don’t fit” in the composite output. Second, explanations are comparative. “We want to know why this and not that,” for any given feature of the output. Third, they are selective, meaning only particular features will be salient in a given context. Finally, they are interactive, by which he meant that “explanations often arise by virtue of conversation” with a counterpart. “We express concern over specific things that emerge in dialogue.”
[1] ICFJ, ICFJ Stands With Indian Journalist Rana Ayyub Amid Prolific #OnlineViolence, https://tinyurl.com/4p4effkd.
[2] Reuters Institute, Facing hate and abuse as a woman online: Nina Jankowicz on her latest book, https://tinyurl.com/eja544cj; NPR, She joined DHS to fight disinformation. She says she was halted by... disinformation, https://tinyurl.com/ymn2c7xt.
[3] NY Times, How Elon Musk and the Right Are Trying to Recast Reporting as ‘Doxxing’, https://tinyurl.com/2xpnwhca.