UPDATE: As a commenter helpfully pointed out, the person whose tweet I'm responding to was a political science professor, not a historian. This kind of messes with the framing of this post, but rather than stealth re-write it I'll leave it as is and let you interpret my Freudian slip as you like.
When I was in grad school, my Department's grad student organization made shirts that read, "Political Science: Four sub-fields, no discipline." Behind this joke is a common observation about political science: that it is defined by its focus rather than a formal set of methods or theories. Not everyone agrees with this characterization, and there have been some efforts to craft political science-specific tools. But generally political science is a field that draws on insights and tools from other areas to study politics. This is most pronounced in international relations. IR looks not just to other fields but also to other sub-fields of political science to study the world.
Many present IR's lack of discipline as a critique. They view IR scholars as a group of raiders, pillaging ideas and methods from other disciplines then returning to our barren homeland. Two recent Twitter kerfuffles, however, demonstrate that this aspect of IR is actually our greatest strength.
I don't spend much time on Twitter anymore, but I still seem to discover the latest controversy. Two recent academic ones were related in their attacks on political science.
First, a historian responded angrily to a new article in the American Political Science Review by Anna Grzymala-Busse on European state formation. The historian suggested she used overly simplistic methods to make a point that "real" experts on early modern Europe already knew (I've anonymized the tweet as I don't like engaging in Twitter attacks).
This is a common complaint I've heard from historians studying international issues: IR and comparative politics either take history's insights and repackage them as our own, or don't realize historians have already said this. Several academic institutions I've been a part of have included fierce and rather petty attacks by historians on political scientists.
As some respondents to this tweet noted, however, this historian isn't really being fair. The role of religion in state formation is hardly settled ground; I took an entire class in grad school on debates over the role of religion in nationalism. Also, isn't it a good thing to test and confirm arguments using a different set of data and methods? And when pressed, he couldn't point to any historical works the author had overlooked.
Critiques of IR and political science take the place of addressing real issues within other areas of study
In a follow-up tweet, the historian also makes an ironic call for interdisciplinarity. Ironic because this is an interdisciplinary work! Grzymala-Busse combined insights from comparative politics and history to generate new knowledge; this is in line with her other work, which involves careful attention to historical detail. Those calling for interdisciplinary engagement should cheer this, unless "interdisciplinary" just means listening to historians…
The second Twitter incident involved a data scientist. A data science grad student tweeted a broadside against the replication crisis in psychology, followed by attacks on political science and the social sciences in general. Another data scientist responded by pointing out that social scientists don't conduct our own statistical analysis and instead get "real statisticians" to do it.
Again, people took issue with this. Some noted the data science grad student hadn't really characterized the replication crisis accurately. Others asked for specific examples (which weren't forthcoming). I'd also point out that while data science does have a real impact on our lives, it's hardly a positive one; one data science course I took focused on things like getting around CAPTCHA tests and tricking spam filters. And in practice, "interdisciplinary" for data science often means using Python to study political or social issues without engaging actual subject matter experts.
These are very different controversies, and I'm sure these two people wouldn't agree on much if forced to have a conversation. But both involve the perennial attack on political science (and IR by extension): we don't come up with our own insights or methods; we just steal the former and implement the latter badly.
I thought of this debate recently while visiting the excellent Jorvik Viking Centre in York, England. The popular view of the Norse raiders known as vikings is of pillaging hordes, and that was certainly the case initially. But as often happens, they settled down. And in the case of Jorvik, they created a thriving cosmopolitan society enabled through their wide-ranging travels.
Maybe I'm pushing this metaphor a bit here, but I think of IR as Jorvik.
Yes, we got our start by combining economic models and humanistic insights. Yes, our research tends to include references from disparate traditions. Yes, our data is messier than that of other fields, or even of other sub-fields within political science.
These issues all became strengths, however.
Because of IR's broad roots we have to be conversant in different disciplines. When engaging with people from other disciplines, I often get the sense they've never really read anything from my field; their critiques are often caricatures. By contrast, many IR scholars are well-read in other fields.
Additionally, we recognize the difficulty of drawing on and testing different disciplines. That's why you can find IR and political science discussions about combining methodologies or triangulating among competing schools of historiography.
Finally, the challenge of dealing with incredibly messy data has created problems for IR but also led to fertile debates. For example, the problem of selection effects in conflict onset has led to a useful back-and-forth.
Interdisciplinary means each side listens to and learns from the other, not that one asserts superiority and territoriality
Beyond that, these critiques of political science and IR take the place of addressing real issues within other areas of study. In C.P. Cavafy's poem "Waiting for the Barbarians" (which I referenced above), the inhabitants of a classical city sit and wait for the barbarians to arrive instead of dealing with the problems in their civilization. We can sense this in some of the attacks I'm discussing.
Historians are rightly frustrated at the lack of support for and interest in the humanities from universities and the general public. Too often, however, they see political science as the problem, bristling when someone like Grzymala-Busse makes a splash by engaging in historical debates. I am also reminded of a seminar my grad school put on to help students turn their dissertations into books; as we went around the room discussing our topics, one history student made a crack about how "relevant" mine would be in DC. Rather than finding ways to demonstrate the value of a humanistic and historical approach to contemporary issues, some historians seem to blame political science and IR for sucking up all the attention (and student interest).
Likewise, data scientists are rightly tired of inadequate statistical models and badly interpreted findings. But what many of them seem to miss is that this is not a problem of stupidity: it is one of over-confidence, something data science itself tends to exhibit. I also sense a bit of frustration that political scientists are still seen as the experts on…politics, despite our lack of cutting-edge programming skills. This could be solved by closer collaboration between data scientists and subject matter experts, something that is often lacking.
It's almost like political science and IR have become the Other to our critics, alleviating the need for any deeper reflection. As Cavafy ended his poem: "those people were a kind of solution."
So what should be done?
Well, I am just finishing a fellowship at Edinburgh University's Institute for Advanced Studies in the Humanities (IASH), funded by the Centre for the Study of Islam in the Contemporary World. The Institute includes fellows from across the humanities, as well as the social and natural sciences. Part of the fellowship is a "work in progress" talk, which I gave last week. The empirical subject was my new work using social network analysis to study international religious politics, but the broader theme was my ongoing effort to test concepts from the humanities using quantitative social science methods.
I was unsure about the reaction; I didn't know if the crowd of humanities scholars would react with hostility to me as an interloper. Instead, it was an incredibly fruitful discussion. There were tough questions and critiques, but they were offered in a spirit of collaboration and community. The fellows recognized that I valued their disciplines, and they did not see the fact that I drew on those disciplines and mixed them with others (i.e., my lack of discipline) as a problem.
In this context, interdisciplinary meant each side listened to and learned from the other, rather than one asserting superiority and territoriality. It'd be nice if that attitude spread beyond IASH.
Algorithms were supposed to remake the American justice system, but data can discriminate, says Ngozi Okidegbe, an expert on data and the law.
Algorithms were championed as dispassionate, computer-driven calculations about risk, crime, and recidivism; their deployment in everything from policing to bail, sentencing, and parole was meant to smooth out the often unequal decisions made by fallible, biased humans.
But, so far, this hasn't been the case.
"In theory, if the predictive algorithm is less biased than the decision-maker, that should lead to less incarceration of Black and Indigenous and other politically marginalized people. But algorithms can discriminate," says Okidegbe, associate professor of law and assistant professor of computing and data sciences at Boston University. Her scholarship examines how the use of predictive technologies in the criminal justice system affects racially marginalized communities.
As it is, these groups are incarcerated at nearly four times the rate of their white peers. According to the Bureau of Justice Statistics, an arm of the US Department of Justice, there were 1,186 Black adults incarcerated in state or federal facilities for every 100,000 adults in 2021 (the most recent year for which data are available), and 1,004 American Indians and Alaska Natives incarcerated for every 100,000 adults. Compare these to the rate at which white people were incarcerated in the same year: 222 per 100,000.
In recent papers, Okidegbe has studied the role of algorithms in these inequities and the interwoven consequences of technology and the law, including researching the data behind bail decisions.
In their most basic form, algorithms are problem-solving shortcuts. Engineers can train computers to digest a large amount of data and then produce a simple solution to a complex problem. Spotify, for example, uses algorithms to suggest songs the company thinks its listeners might enjoy, based on what they've listened to previously. The more data a computer model has to go on, the more nuanced and accurate its results should be.
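As a toy illustration of that idea (a sketch only; the songs, genres, and ranking rule here are invented for this example, not Spotify's actual system), a bare-bones recommender can simply count which genres dominate a listener's history and rank unheard songs accordingly:

```python
from collections import Counter

# Invented listening history: (song, genre) pairs the listener has played.
history = [
    ("Song A", "indie"), ("Song B", "indie"),
    ("Song C", "indie"), ("Song D", "jazz"),
]

# Invented catalog of candidate songs the listener has not heard yet.
catalog = [("Song E", "indie"), ("Song F", "jazz"), ("Song G", "metal")]

def recommend(history, catalog, k=2):
    """Rank unheard songs by how often the listener played their genre."""
    genre_counts = Counter(genre for _, genre in history)
    # Counter returns 0 for unseen genres, so unfamiliar songs sink to the bottom.
    ranked = sorted(catalog, key=lambda item: genre_counts[item[1]], reverse=True)
    return [song for song, _ in ranked[:k]]

print(recommend(history, catalog))  # ['Song E', 'Song F']
```

More listening data sharpens the genre counts, which is the (simplified) sense in which more data should yield more nuanced results.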
But a growing body of academic research (including by Okidegbe) and news reports show that algorithms built upon incomplete or biased data can replicate or even amplify that bias when they spit out results. This isn't a huge deal if, for example, your toddler's Peppa Pig obsession leaks into your suggested Spotify playlists, but it can have devastating effects in other contexts.
Consider a judge, says Okidegbe, who receives an algorithmically generated recidivism risk score as part of a report on a convicted criminal. This score tells the judge how likely the person is to commit another crime in the near future: the higher the score, the more likely someone is to be a repeat offender. The judge takes this score into account and assigns more jail time to someone with a high recidivism score. Case closed.
A sprawling report by the nonprofit news organization ProPublica found that because these scores feel impartial, they can carry a lot of weight with the judges who use them. In reality, these scores are neither impartial nor airtight. ProPublica found that one particular system used by courts across the country guessed wrong about twice as often for Black people as for white people: it mislabeled twice as many Black people who didn't reoffend as being at high risk of doing so.
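ProPublica's finding can be restated as a gap in false positive rates: among people who did not go on to reoffend, what share were nonetheless labeled high risk? A minimal sketch with invented numbers (chosen only to mirror the roughly two-to-one gap, not ProPublica's actual figures):

```python
def false_positive_rate(records):
    """Share of non-reoffenders who were nonetheless labeled high risk.

    Each record is a (labeled_high_risk, reoffended) pair of booleans.
    """
    non_reoffenders = [r for r in records if not r[1]]
    if not non_reoffenders:
        return 0.0
    flagged = sum(1 for labeled, _ in non_reoffenders if labeled)
    return flagged / len(non_reoffenders)

# Invented records: 100 non-reoffenders per group, plus some true positives.
group_a = [(True, False)] * 40 + [(False, False)] * 60 + [(True, True)] * 30
group_b = [(True, False)] * 20 + [(False, False)] * 80 + [(True, True)] * 30

print(false_positive_rate(group_a))  # 0.4, mislabeled twice as often
print(false_positive_rate(group_b))  # 0.2
```

A system can look accurate overall while splitting its mistakes unevenly across groups, which is why this per-group measure, rather than a single accuracy number, captures the disparity.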
In a recent article for the Connecticut Law Review, Okidegbe traces this inconsistency back to its source and identifies a three-pronged "input problem."
First, she writes, jurisdictions are opaque about whether and how they use pretrial algorithms, and often adopt them without consulting marginalized communities, "even though these communities are disproportionately affected by their utilization." Second, these same communities are generally shut out of the process for building such algorithms. Finally, even in jurisdictions where members of the public can lodge opinions about the use of such tools, their input rarely changes anything.
"From a racial-justice perspective, there are other harms that come out of the use of these algorithmic systems. The very paradigm that governs if and how we use these algorithms is quite technocratic and not very diverse. Kate Crawford has noted AI's 'white guy problem,'" Okidegbe says, referring to the principal researcher at Microsoft and cochair of a White House symposium on AI and society who coined the term to describe the overrepresentation of white men in the creation of artificially intelligent products and companies.
From the very outset, Okidegbe says, algorithmic systems exclude racially marginalized and other politically oppressed groups.
"I've been looking at the decision-making power of whether and how to use algorithms, and what data they are used to produce. It is very exclusionary of the marginalized communities that are most likely to be affected by it, because those communities are not centered, and often they're not even at the table when these decisions are being made," she says. "That's one way I suggest that the turn to algorithms is inconsistent with a racial justice project, because of the way in which they maintain the marginalization of these same communities."
In addition to producing biased results that disproportionately harm marginalized communities, the data used to train algorithms can be messy, subjective, and discriminatory, Okidegbe says.
"In my work, I've contended with what I think is a misconception: that algorithms are only built with quantitative data. They're not; they're also built with qualitative data," she says. Computer engineers and data designers will meet with policymakers to figure out what problem their algorithm should solve, and which datasets they should pull from to build it, Okidegbe says.
In the criminal and legal context, this might mean working with judges to determine what would help them deliver prison sentences, for example. Once again, though, it's much less likely that data engineers would meet with incarcerated people, say, as part of their early information-gathering process. Instead, as Okidegbe writes in an article for a recent edition of the Cornell Law Review, most large datasets used in pretrial algorithms are built upon and trained on data from "carceral knowledge sources," such as police records and court documents.
"That puts forth this narrative that these communities have no knowledge to add toward the broader question," Okidegbe says.
Really delivering on the promise of algorithms in the criminal justice system (the promise that they make the process more uniform and less biased than human decision-making) requires a radical rethinking of the entire structure, Okidegbe says. It's something she encourages her students to consider as they shape the future of law and criminal justice.
"It means actually accounting for the knowledge from marginalized and politically oppressed communities, and having it inform how the algorithm is constructed. It also means ongoing oversight of algorithmic technologies by these communities, as well. What I am contending requires building new institutional structures; it requires shifting our mindset about who is credible and who should be in power when it comes to the use of these algorithms. And, if that is too much, then we can't, in the same breath, call this a racial justice project."
Source: Boston University
The post Criminal justice algorithms still discriminate appeared first on Futurity.