
All hail International Relations’ lack of discipline!

UPDATE: As a commenter helpfully pointed out, the person whose tweet I’m responding to was a political science professor, not a historian. This kind of messes with the framing of this post, but rather than stealth re-writing it I’ll leave it as is and let you interpret my Freudian slip as you like.

When I was in grad school, my department’s grad student organization made shirts that read, “Political Science: Four sub-fields, no discipline.” Behind this joke is a common observation about political science: that it is defined by its focus rather than a formal set of methods or theories. Not everyone agrees with this characterization, and there have been some efforts to craft political science-specific tools. But generally, political science is a field that draws on insights and tools from other areas to study politics. This is most pronounced in international relations. IR looks not just to other fields but also to other sub-fields of political science to study the world.

Many present IR’s lack of discipline as a critique. They view IR scholars as a group of raiders, pillaging ideas and methods from other disciplines and then returning to our barren homeland. Two recent Twitter kerfuffles, however, demonstrate that this aspect of IR is actually our greatest strength.

Erecting ramparts against the IR hordes

I don’t spend much time on Twitter anymore, but I still seem to discover the latest controversy. Two academic ones were related in their attacks on political science.

First, a historian responded angrily to a new article in the American Political Science Review by Anna Grzymala-Busse on European state formation. The historian suggested she used overly simplistic methods to make a point that “real” experts on early modern Europe already knew (I’ve anonymized the tweet as I don’t like engaging in Twitter attacks).

This is a common complaint I’ve heard from historians studying international issues: IR and comparative politics (CP) either take history’s insights and repackage them as our own, or don’t realize historians have already made the point. At several academic institutions I’ve been a part of, I’ve seen fierce and rather petty attacks by historians on political scientists.

As some respondents to this tweet noted, however, this historian isn’t really being fair. The role of religion in state formation is hardly settled ground; I took an entire class in grad school on debates over the role of religion in nationalism. Also, isn’t it a good thing to test and confirm certain arguments using a different set of data and methods? And when pressed, he couldn’t point to any historical works the author had overlooked.

Critiques of IR and political science take the place of addressing real issues within other areas of study

In a follow-up tweet, the historian also makes an ironic call for interdisciplinarity. Ironic because this is an interdisciplinary work! Grzymala-Busse combined insights from comparative politics and history to generate new knowledge; this is in line with her other work, which involves careful attention to historical detail. Those calling for interdisciplinary engagement should cheer this, unless “interdisciplinary” just means listening to historians…

The second Twitter incident involved data scientists. A data science grad student tweeted a broadside against the replication crisis in psychology, followed by attacks on political science and the social sciences in general. Another data scientist responded by asserting that social scientists don’t conduct their own statistical analyses and instead get “real statisticians” to do them.

Again, people took issue with this. Some noted the data science grad student hadn’t characterized the replication crisis accurately. Others asked for specific examples (which weren’t forthcoming). I’d also point out that while data science does have a real impact on our lives, it’s hardly a positive one; one data science course I took focused on things like getting around CAPTCHA tests and tricking spam filters. And in practice, “interdisciplinary” for data science often means using Python to study political or social issues without engaging actual subject-matter experts.

These are very different controversies, and I’m sure these two people wouldn’t agree on much if forced to have a conversation. But both involve the perennial attack on political science (and IR by extension): we don’t come up with our own insights or methods; we just steal the former and implement the latter badly.

What’s going to happen to us without barbarians?

I thought of this debate recently while visiting the excellent Jorvik Viking Centre in York, England. The popular view of the Norse raiders known as vikings is of pillaging hordes, and that was certainly the case initially. But as often happens, they settled down. And in the case of Jorvik, they created a thriving cosmopolitan society enabled through their wide-ranging travels.

Maybe I’m pushing this metaphor a bit here, but I think of IR as Jorvik.

Yes, we got our start by combining economic models and humanistic insights. Yes, our research tends to include references from disparate traditions. Yes, our data is messier than that of other fields, or even of other sub-fields within political science.

These issues all became strengths, however.

Because of IR’s broad roots, we have to be conversant in different disciplines. When engaging with people from other disciplines, I often get the sense they’ve never really read anything from my field; their critiques are often caricatures. By contrast, many IR scholars are well-read in other fields.

Additionally, we recognize the difficulty of drawing on and testing different disciplines. That’s why you can find IR and political science discussions about combining methodologies or triangulating among competing schools of historiography.

Finally, the challenge of dealing with incredibly messy data has created problems for IR but also led to fertile debates. For example, the problem of selection effects in conflict onset has led to a useful back-and-forth.

Interdisciplinary means each side listens to and learns from the other, not that one asserts superiority and territoriality

Beyond that, these critiques of political science and IR take the place of addressing real issues within other areas of study. In C.P. Cavafy’s poem “Waiting for the Barbarians” (which I referenced above), the inhabitants of a classical city sit and wait for the barbarians to arrive instead of dealing with the problems in their civilization. We can sense this in some of the attacks I’m discussing.

Historians are rightly frustrated at the lack of support for, and interest in, the humanities from universities and the general public. They often, however, see political science as the problem, as when Grzymala-Busse makes a splash by engaging in historical debates. I am also reminded of a seminar my grad school put on to help students prepare to turn their dissertations into books; we went around the room discussing our topics, and one history student made a crack about how “relevant” mine would be in DC. Rather than finding ways to demonstrate the value of a humanistic and historical approach to contemporary issues, some historians seem to blame political science and IR for sucking up all the attention (and student interest).

Likewise, data scientists are rightly tired of inadequate statistical models and badly interpreted findings. But what many of them seem to miss is that this is not a problem of stupidity: it is one of over-confidence, something data science itself tends to exhibit. I also sense a bit of frustration that political scientists are still seen as the experts on…politics despite our lack of cutting-edge programming skills. This could be solved by closer collaboration between data scientists and subject-matter experts, something that is often lacking.

It’s almost like political science and IR have become the Other to our critics, alleviating the need for any deeper reflection. As Cavafy ended his poem: “those people were a kind of solution.”

Interdisciplinary goes both ways

So what should be done?

Well, I am just finishing a fellowship at Edinburgh University’s Institute for Advanced Studies in the Humanities (IASH), funded by the Centre for the Study of Islam in the Contemporary World. The institute includes fellows from across the humanities, as well as the social and natural sciences. Part of the fellowship is a “work in progress” talk, which I gave last week. The empirical subject was my new work using social network analysis to study international religious politics, but the broader theme was my ongoing effort to test concepts from the humanities using quantitative social science methods.

I was unsure about the reaction. I didn’t know whether the crowd of humanities scholars would be hostile to me as an interloper. Instead, it was an incredibly fruitful discussion. There were tough questions and critiques, but they were offered in a spirit of collaboration and community. The fellows recognized that I valued their disciplines, and they did not see the fact that I drew on them and mixed them with others (i.e., my lack of discipline) as a problem.

In this context, interdisciplinary meant each side listening to and learning from the other, rather than one asserting superiority and territoriality. It’d be nice if that attitude spread outside IASH.

Criminal justice algorithms still discriminate


Algorithms were supposed to remake the American justice system, but data can discriminate, says Ngozi Okidegbe, an expert on data and the law.

Championed as dispassionate, computer-driven calculations about risk, crime, and recidivism, these algorithms were deployed in everything from policing and bail to sentencing and parole, in order to smooth out the often unequal decisions made by fallible, biased humans.

But, so far, this hasn’t been the case.

“In theory, if the predictive algorithm is less biased than the decision-maker, that should lead to less incarceration of Black and Indigenous and other politically marginalized people. But algorithms can discriminate,” says Okidegbe, associate professor of law and assistant professor of computing and data sciences at Boston University. Her scholarship examines how the use of predictive technologies in the criminal justice system affects racially marginalized communities.

As it is, these groups are incarcerated at more than four times the rate of their white peers. According to the Bureau of Justice Statistics, an arm of the US Department of Justice, there were 1,186 Black adults incarcerated in state or federal facilities for every 100,000 adults in 2021 (the most recent year for which data are available), and 1,004 American Indians and Alaska Natives incarcerated for every 100,000 adults. Compare that with the rate at which white people were incarcerated in the same year: 222 per 100,000.

In recent papers, Okidegbe has studied the role of algorithms in these inequities and the interwoven consequences of technology and the law, including researching the data behind bail decisions.

Algorithms can amplify bias

In their most basic form, algorithms are problem-solving shortcuts. Engineers can train computers to digest a large amount of data and then produce a simple solution to a complex problem. Spotify, for example, uses algorithms to suggest songs the company thinks its listeners might enjoy, based on what they’ve listened to previously. The more data a computer model has to go on, the more nuanced and accurate its results should be.
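
To make the “shortcut” idea concrete, here is a minimal recommendation sketch in Python. It is a toy built on invented assumptions (the users, songs, and the overlap-counting rule are all made up for illustration), not Spotify’s actual system:

```python
# Toy recommender: suggest songs that users with overlapping taste have
# played but this user has not. Purely illustrative -- not a real system.
from collections import Counter

# Hypothetical listening histories: user -> set of songs played.
histories = {
    "ana":  {"song_a", "song_b", "song_c"},
    "ben":  {"song_a", "song_c", "song_d"},
    "cara": {"song_b", "song_c", "song_e"},
}

def recommend(user, histories, k=2):
    """Return the k songs most often played by users with overlapping taste."""
    seen = histories[user]
    counts = Counter()
    for other, songs in histories.items():
        if other == user or not (songs & seen):
            continue  # skip this user and users with no songs in common
        for song in songs - seen:
            counts[song] += 1  # each overlapping listener "votes" for new songs
    return [song for song, _ in counts.most_common(k)]

print(recommend("ana", histories))  # -> ['song_d', 'song_e']
```

With three users the “votes” are crude; run over millions of histories, the same counting logic is what lets the suggestions grow more nuanced as the data grows.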

But a growing body of academic research (including Okidegbe’s) and news reports show that algorithms built upon incomplete or biased data can replicate or even amplify that bias when they spit out results. This isn’t a huge deal if, for example, your toddler’s Peppa Pig obsession leaks into your suggested Spotify playlists, but it can have devastating effects in other contexts.
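
The mechanism is simple enough to demonstrate in a few lines. The sketch below is entirely synthetic (the groups, rates, and the frequency-based stand-in for a “model” are assumptions for illustration, not anyone’s real system): two groups reoffend at the same true rate, but one is policed more heavily, so its members are rearrested and labeled more often, and a model that learns from those labels inherits the gap:

```python
# Synthetic demonstration of bias propagation -- all numbers are assumptions.
import random

random.seed(0)

def make_record(group):
    reoffends = random.random() < 0.3            # same true rate for both groups
    arrest_prob = 0.9 if group == "B" else 0.5   # group B is policed more heavily
    labeled = reoffends and random.random() < arrest_prob
    return group, labeled

train = [make_record(g) for g in ("A", "B") for _ in range(10_000)]

# Stand-in "model": predicted risk = observed rearrest rate per group.
risk = {
    g: sum(y for grp, y in train if grp == g) / sum(grp == g for grp, _ in train)
    for g in ("A", "B")
}
print(risk)  # roughly {'A': 0.15, 'B': 0.27} -- the data's bias becomes the model's
```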

Consider a judge, says Okidegbe, who receives an algorithmically generated recidivism risk score as part of a report on a convicted criminal. This score tells the judge how likely this person is to commit another crime in the near future: the higher the score, the more likely someone is to be a repeat offender. The judge takes this score into account and assigns more jail time to someone with a high recidivism score. Case closed.

A sprawling report by the nonprofit news organization ProPublica found that because these scores feel impartial, they can carry a lot of weight with the judges who use them. In reality, these scores are neither impartial nor airtight. ProPublica found that one particular system used by courts across the country guessed wrong about twice as often for Black people as for white people: it mislabeled twice as many Black people who didn’t reoffend as being at high risk of doing so.
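
The disparity ProPublica measured is, at bottom, a gap in false positive rates: among people who did not go on to reoffend, how often was each group flagged as high risk? Here is a minimal sketch of that check on invented records (not the COMPAS data; the values are chosen only to mimic the kind of two-to-one gap reported):

```python
# Records: (group, labeled_high_risk, actually_reoffended).
# Invented values -- not the COMPAS data.
records = [
    ("Black", True,  False), ("Black", True,  False), ("Black", False, False),
    ("Black", True,  True),  ("white", True,  False), ("white", False, False),
    ("white", False, False), ("white", True,  True),
]

def false_positive_rate(records, group):
    """Share of non-reoffenders in `group` who were flagged as high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for group in ("Black", "white"):
    print(group, round(false_positive_rate(records, group), 2))
# -> Black 0.67, white 0.33: twice the false positive rate.
```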

Messy data

In a recent article for the Connecticut Law Review, Okidegbe traces this inconsistency back to its source and identifies a three-pronged “input problem.”

First, she writes, jurisdictions are opaque about whether and how they use pretrial algorithms, and often adopt them without consulting marginalized communities, “even though these communities are disproportionately affected by their utilization.” Second, these same communities are generally shut out of the process for building such algorithms. Finally, even in jurisdictions where members of the public can lodge opinions about the use of such tools, their input rarely changes anything.

“From a racial-justice perspective, there are other harms that come out of the use of these algorithmic systems. The very paradigm that governs if and how we use these algorithms is quite technocratic and not very diverse. Kate Crawford has noted AI’s ‘white guy problem,’” Okidegbe says, referring to a principal researcher at Microsoft and cochair of a White House symposium on AI and society who coined the term to describe the overrepresentation of white men in the creation of artificially intelligent products and companies.

From the very outset, Okidegbe says, algorithmic systems exclude racially marginalized and other politically oppressed groups.

“I’ve been looking at the decision-making power of whether and how to use algorithms, and what data they are used to produce. It is very exclusionary of the marginalized communities that are most likely to be affected by it, because those communities are not centered, and often they’re not even at the table when these decisions are being made,” she says. “That’s one way I suggest that the turn to algorithms is inconsistent with a racial justice project, because of the way in which they maintain the marginalization of these same communities.”

Shift the power

In addition to producing biased results that disproportionately harm marginalized communities, algorithms are trained on data that can themselves be messy, subjective, and discriminatory, Okidegbe says.

“In my work, I’ve contended with what I think is a misconception: that algorithms are only built with quantitative data. They’re not; they’re also built with qualitative data,” she says. Computer engineers and data designers will meet with policymakers to figure out what problem their algorithm should solve and which datasets they should pull from to build it, Okidegbe says.

In the criminal and legal context, this might mean working with judges to determine what would help them deliver prison sentences, for example. Once again, though, it’s much less likely that data engineers would meet with incarcerated people, say, as part of their early information-gathering process. Instead, as Okidegbe writes in an article for a recent edition of the Cornell Law Review, most large datasets used in pretrial algorithms are built upon and trained on data from “carceral knowledge sources,” such as police records and court documents.

“That puts forth this narrative that these communities have no knowledge to add toward the broader question,” Okidegbe says.

Really delivering on the promise of algorithms in the criminal justice system (the promise that they make the process more uniform and less biased than humans on their own) requires a radical rethinking of the entire structure, Okidegbe says. It’s something she encourages her students to consider as they shape the future of law and criminal justice.

“It means actually accounting for the knowledge from marginalized and politically oppressed communities, and having it inform how the algorithm is constructed. It also means ongoing oversight of algorithmic technologies by these communities, as well. What I am contending requires building new institutional structures; it requires shifting our mindset about who is credible and who should be in power when it comes to the use of these algorithms. And, if that is too much, then we can’t, in the same breath, call this a racial justice project.”

Source: Boston University
