
Transpositioning, a Hypertext-ethnography

This is a work of hypertext-ethnography.

It is based on my research of a small genetics laboratory in Tokyo, Japan where I am studying the impact of the transnational circulation of scientific materials and practices (including programming) on the production of knowledge. In this piece, I draw primarily from my participant observation field notes along with interviews. I also incorporate other, maybe more atypical, materials such as research papers (mine and others), websites and email.

The timeframe for this work is primarily the spring of 2020 and the setting is largely Zoom. Although I began my research in 2019, physically visiting the lab every week, in April 2020 the lab—and most of the institute where it is located—sent researchers home for seven weeks. That included me. Luckily, the lab quickly resumed its regular weekly meetings online (between the Principal Investigator (PI) and individual post-docs, for example, as well as other group planning and educational meetings), and I was invited to join. I continued my ethnography in this style for an additional year.

Working in Zoom, I found my field notes narrowing to transcript recording, and I eventually grew frustrated with the loss of the texture and diversity of information, including hands-on detail, that I had encountered in person in the lab. However, a good deal of my original field notes from the lab describe scientists working silently and independently at their laptops, on materials and with tools I could hardly, at the time, fathom. Online meetings allowed me to join scientists, inside their computers in a way, where I had more intimate access to their experimental work.

At the same time, the lab was undergoing a transition. Scientists who practiced “wet” experiments (involving human/animal materials in chemical reactions), like many others, were mostly at home and shifting to planning or learning new skills. But even before the pandemic, this laboratory was gradually incorporating more and more “dry” techniques—using computational methods as part of their genetic research. These include programming languages like Python and R (note that R appears in this work as a literary device more than an accurate depiction), and more accessible entry points such as “no-code” and other web-based tools for analysis that require less time-consuming training. More of the researchers began to learn and play with these methods while at home in that time of “slow down,” with more or less success. While coding scripts are not so different from the experimental protocols that scientists use in the lab (each takes time, patience and a kind of careful attention to perfect), they presented a general challenge that was compounded by researchers being separated at home. In my case, just as I felt I was getting a grasp of the technical terms and biological concepts harnessed in the lab’s research projects, I was exposed to, and lost in, a layer of coding practices with which I had no background knowledge. This time of transition, and of destabilization, is ultimately the location of this work. It weaves two threads: a closing down into relative isolation while at home (and a narrowing to the kind of surface data I could collect), and a shared opening up to new practices and forms of lab “work,” including computational research (and for me, remote participant observation). This is the experience that I work to recreate here in interactive form.

[Image: A dendritic tree against a pink background, with dozens of small branches.]

As a kind of ethnographic accounting, hypertext-ethnography remains uncommon. Despite the promise of early works such as Jay Ruby’s Oak Park Stories (2005) and Rodrick Coover’s Cultures in Webs (2003), hypertextual forms have been mostly left for other disciplines like documentary filmmaking (some examples are described by Favero, 2014). For most, its bare textual form—as in this piece—might even be considered horribly outdated. For me, hypertext is a method to tell a different kind of story. I use it as a form of ethnographic representation along a relatively rhizomatic path to convey the feeling of being “always in the middle, between things, interbeing, intermezzo” (Deleuze and Guattari, 1993, 25). Here, interpretation emerges as part of the direction the reader intentionally, or accidentally, takes through the material; it is therefore open in ways different from traditional academic texts. Any “narrative” emerges primarily in the juxtaposition of moments, comments, records and links that also refuses complete(d) analysis. At the same time, hypertext highlights the multivocal and always emergent nature of ethnographic data, destabilizing authorship, if only in small ways. It helps me to raise familiar questions which don’t have (any) easy answers: how do we ever know what we know, and how much do we really need to know and understand to faithfully represent others?

For me, this “story” is only one story among many others which I have yet to fully see.

Start here.


References

 

Coover, R. (2003) Cultures in webs: Working in hypermedia with the documentary image. Watertown, MA: Eastgate Systems.

Deleuze, G. & Guattari, F. (1993) A thousand plateaus. Minneapolis: University of Minnesota Press.

DeSilvey, C. (2006) “Observed decay: Telling stories with mutable things.” Journal of Material Culture 11:3, 318–338, https://doi.org/10.1177/1359183506068808

Droney, D. (2014) “Ironies of laboratory work during Ghana’s second age of optimism.” Cultural Anthropology 29:2, 363-384, https://doi.org/10.14506/ca29.2.10

Favero, P. (2014) “Learning to look beyond the frame: reflections on the changing meaning of images in the age of digital media practices.” Visual Studies 29:2, 166-179, https://doi.org/10.1080/1472586X.2014.887269

Krasmann, S. (2020) “The logic of the surface: on the epistemology of algorithms in times of big data.” Information, Communication & Society 23:14, 2096-2109, https://doi.org/10.1080/1369118X.2020.1726986

Larkin, B. (2008) Signal and noise: Media, infrastructure, and urban culture in Nigeria. Durham: Duke University Press.

Li, N., Jin, K., Bai, Y., Fu, H., Liu, L. & Liu, B. (2020) “Tn5 transposase applied in genomics research.” International Journal of Molecular Sciences 21:21, 8329, https://doi.org/10.3390/ijms21218329

McClintock, B. (1973) Letter from Barbara McClintock to maize geneticist Oliver Nelson.

Pink, S., Ruckenstein, M., Willim, R. & Duque, M. (2018) “Broken data: Conceptualising data in an emerging world.” Big Data & Society January–June, 1–13.

Ravindran, S. (2012) “Barbara McClintock and the discovery of jumping genes.” PNAS 109:50, 20198-20199, https://doi.org/10.1073/pnas.1219372109

Ruby, J. (2005) Oak Park stories. Watertown: Documentary Educational Resources.

Venables, W. N., Smith, D. M. & the R Core Team (2022) An introduction to R. Notes on R: A programming environment for data analysis and graphics, version 4.2.2 (2022-10-31).

Virilio, P. (1997) Open sky. Translated by J. Rose. London: Verso.

Virilio, P. (1999) Politics of the very worst: An interview by Philippe Petit. Edited by S. Lotringer and translated by M. Cavaliere. New York: Semiotext(e).

Virilio, P. (2000) Polar inertia. Translated by P. Camiller. London: Sage.

 

Criminal justice algorithms still discriminate

[Image: A black and white hand pushes down a small blue ball on a balance scale, with a larger blue ball on the opposite, higher side; purple background.]

Algorithms were supposed to remake the American justice system, but data can discriminate, says Ngozi Okidegbe, an expert on data and the law.

Championed as dispassionate, computer-driven calculations about risk, crime, and recidivism, algorithms were deployed in everything from policing to bail, sentencing, and parole in order to smooth out the often unequal decisions made by fallible, biased humans.

But, so far, this hasn’t been the case.

“In theory, if the predictive algorithm is less biased than the decision-maker, that should lead to less incarceration of Black and Indigenous and other politically marginalized people. But algorithms can discriminate,” says Okidegbe, associate professor of law and an assistant professor of computing and data sciences at Boston University. Her scholarship examines how the use of predictive technologies in the criminal justice system affects racially marginalized communities.

As it is, these groups are incarcerated at more than four times the rate of their white peers. According to the Bureau of Justice Statistics, an arm of the US Department of Justice, there were 1,186 Black adults incarcerated in state or federal facilities for every 100,000 adults in 2021 (the most recent year for which data are available), and 1,004 American Indians and Alaska Natives incarcerated for every 100,000 adults. Compare these to the rate at which white people were incarcerated in the same year: 222 per 100,000.

In recent papers, Okidegbe has studied the role of algorithms in these inequities and the interwoven consequences of technology and the law, including researching the data behind bail decisions.

Algorithms can amplify bias

In their most basic form, algorithms are problem-solving shortcuts. Engineers can train computers to digest a large amount of data and then produce a simple solution to a complex problem. Spotify, for example, uses algorithms to suggest songs the company thinks its listeners might enjoy, based on what they’ve listened to previously. The more data a computer model has to go on, the more nuanced and accurate its results should be.

But a growing body of academic research—including by Okidegbe—and news reports show that algorithms built upon incomplete or biased data can replicate or even amplify that bias when they spit out results. This isn’t a huge deal if, for example, your toddler’s Peppa Pig obsession leaks into your suggested Spotify playlists, but it can have devastating effects in other contexts.
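
To make that mechanism concrete, here is a minimal, hypothetical Python sketch with made-up numbers (it is not drawn from Okidegbe’s research or from any deployed system). If two groups behave identically but one group’s behavior is far more likely to be recorded, then the labels a model is trained on already carry the bias, and its risk estimates inherit it:

```python
# Hypothetical illustration (not any real system): a model trained on skewed
# historical records reproduces that skew in its predictions.
import random

random.seed(0)

def simulate_person(group):
    # Assume the *true* rate of the behavior being predicted is identical
    # for both groups.
    does_it = random.random() < 0.30
    # But group "B" is watched more closely, so its behavior is far more
    # likely to end up in the historical record the model is trained on.
    recording_rate = 0.90 if group == "B" else 0.45
    recorded = does_it and (random.random() < recording_rate)
    return does_it, recorded

population = [(g, *simulate_person(g)) for g in ("A", "B") for _ in range(50_000)]

# "Training" here is just estimating P(recorded | group) from the history,
# which is what any model will do with a feature correlated with group.
for group in ("A", "B"):
    rows = [(truth, label) for g, truth, label in population if g == group]
    true_rate = sum(t for t, _ in rows) / len(rows)
    learned_risk = sum(l for _, l in rows) / len(rows)
    print(f"group {group}: true rate {true_rate:.2f}, "
          f"learned 'risk' {learned_risk:.2f}")
```

Running it prints near-identical true rates for both groups but a learned “risk” roughly twice as high for the more heavily monitored one.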

Consider a judge, says Okidegbe, who receives an algorithmically generated recidivism risk score as part of a report on a convicted criminal. This score tells the judge how likely this person is to commit another crime in the near future—the higher the score, the more likely someone is to be a repeat offender. The judge takes this score into account, and assigns more jail time to someone with a high recidivism score. Case closed.

A sprawling report by the nonprofit news organization ProPublica found that because these scores feel impartial, they can carry a lot of weight with the judges who use them. In reality, these scores are neither impartial nor airtight. ProPublica found that one particular system used by courts across the country guessed wrong about twice as often for Black people as for white people: it mislabeled twice as many Black people who didn’t reoffend as being at high risk of doing so.
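
The disparity ProPublica measured is, at its core, a difference in false-positive rates: among people who did not go on to reoffend, the share in each group who were nonetheless labeled high risk. The sketch below shows how that metric is computed on a tiny, invented dataset (the records, scores, and threshold are illustrative only, not ProPublica’s actual data or methodology).

```python
# Illustrative only: compute false-positive rates by group from hypothetical
# (group, score, reoffended) records, the metric behind ProPublica's finding.
from dataclasses import dataclass

@dataclass
class Case:
    group: str        # demographic group (hypothetical labels)
    risk_score: int   # 1 (low) .. 10 (high), a COMPAS-like decile score
    reoffended: bool  # observed recidivism

HIGH_RISK = 5  # illustrative cutoff: scores above this count as "high risk"

def false_positive_rate(cases, group):
    """Share of non-reoffenders in `group` who were labeled high risk."""
    non_reoffenders = [c for c in cases if c.group == group and not c.reoffended]
    flagged = [c for c in non_reoffenders if c.risk_score > HIGH_RISK]
    return len(flagged) / len(non_reoffenders)

# Tiny made-up dataset, just to exercise the calculation.
cases = [
    Case("black", 8, False), Case("black", 7, False), Case("black", 3, False),
    Case("black", 9, True),  Case("black", 6, True),
    Case("white", 4, False), Case("white", 2, False), Case("white", 7, False),
    Case("white", 8, True),  Case("white", 3, True),
]

for group in ("black", "white"):
    print(group, f"false-positive rate: {false_positive_rate(cases, group):.2f}")
```

With these made-up records, non-reoffenders in the first group are flagged at a rate of 0.67 versus 0.33 in the second, the kind of two-to-one gap the report described.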

Messy data

In a recent article for the Connecticut Law Review, Okidegbe traces this inconsistency back to its source, and identifies a three-pronged “input problem.”

First, she writes, jurisdictions are opaque about whether and how they use pretrial algorithms, and often adopt them without consulting marginalized communities, “even though these communities are disproportionately affected by their utilization.” Second, these same communities are generally shut out of the process for building such algorithms. Finally, even in jurisdictions where members of the public can lodge opinions about the use of such tools, their input rarely changes anything.

“From a racial-justice perspective, there are other harms that come out of the use of these algorithmic systems. The very paradigm that governs if and how we use these algorithms is quite technocratic and not very diverse. Kate Crawford has noted AI’s ‘white guy problem,'” Okidegbe says, referring to a principal researcher at Microsoft and cochair of a White House symposium on AI and society who coined the term to describe the overrepresentation of white men in the creation of artificially intelligent products and companies.

From the very outset, Okidegbe says, algorithmic systems exclude racially marginalized and other politically oppressed groups.

“I’ve been looking at the decision-making power of whether and how to use algorithms, and what data they are used to produce. It is very exclusionary of the marginalized communities that are most likely to be affected by it, because those communities are not centered, and often they’re not even at the table when these decisions are being made,” she says. “That’s one way I suggest that the turn to algorithms is inconsistent with a racial justice project, because of the way in which they maintain the marginalization of these same communities.”

Shift the power

In addition to producing biased results that disproportionately harm marginalized communities, the data used to train algorithms can be messy, subjective, and discriminatory, Okidegbe says.

“In my work, I’ve contended with what I think is a misconception: that algorithms are only built with quantitative data. They’re not, they’re also built with qualitative data,” she says. Computer engineers and data designers will meet with policymakers to figure out what problem their algorithm should solve, and which datasets they should pull from to build it, Okidegbe says.

In the criminal and legal context, this might mean working with judges to determine what would help them deliver prison sentences, for example. Once again though, it’s much less likely that data engineers would meet with incarcerated people, say, as part of their early information-gathering process. Instead, as Okidegbe writes in an article for a recent edition of the Cornell Law Review, most large datasets used in pretrial algorithms are built upon and trained on data from “carceral knowledge sources,” such as police records and court documents.

“That puts forth this narrative that these communities have no knowledge to add toward the broader question,” Okidegbe says.

Really delivering on the promise of algorithms in the criminal justice system—the promise that they make the process more uniform and less biased than human decision-making alone—requires a radical rethinking of the entire structure, Okidegbe says. It’s something she encourages her students to consider as they shape the future of law and criminal justice.

“It means actually accounting for the knowledge from marginalized and politically oppressed communities, and having it inform how the algorithm is constructed. It also means ongoing oversight of algorithmic technologies by these communities, as well. What I am contending requires building new institutional structures, it requires shifting our mindset about who is credible and who should be in power when it comes to the use of these algorithms. And, if that is too much, then we can’t, in the same breath, call this a racial justice project.”

Source: Boston University


At This School, Computer Science Class Now Includes Critiquing Chatbots

Move over, coding. Some schools are asking student programmers to think critically about rapid advances in artificial intelligence.

Marisa Shuman challenged her students at the Young Women’s Leadership School of the Bronx to examine the work created by a chatbot.

If I Could Talk to the Algorithm

In the film Doctor Dolittle (1967), the title character yearns to “Talk to the Animals,” as the song goes, to understand their mysterious and often vexing ways. It is interesting to observe a similar impulse to understand and communicate with algorithms, given their current forms of implementation. Recent research shows that intense frustration often emerges from algorithmically driven processes that create hurtful identity characterizations. Our current technological landscape is thus frequently embroiled in “algorithmic dramas” (Ziewitz 2016), in which algorithms are seen and felt as powerful and influential, but inscrutable. Algorithms, or rather the complex processes that deploy them, are entities that we surely cannot “talk to,” although we might wish to admonish those who create or implement them in everyday life. A key dynamic of the “algorithmic drama” involves yearning to understand just how algorithms work given their impact on people. Yet, accessing the inner workings of algorithms is difficult for numerous reasons (Dourish 2016), including how to talk to, or even about, them.

Talking about “Algorithms”

Common shorthand terms such as “the algorithm” or even “algorithms” are problematic considering that commercial algorithms are usually complex entities. Dourish (2016) notes that algorithms are distinct from software programs that may make use of algorithms to accomplish tasks. The term “algorithm” may also refer to very different types of processes, such as those that incorporate machine learning, and they may be constantly changing. For the purposes of this discussion, the term “algorithm” will be used while keeping these caveats in mind. In this post, I relate the findings of ethnographic research projects incorporated into the new volume, The Routledge Companion to Media Anthropology (Costa et al. 2022), in which interviewees use the term to narrativize troubled interactions they experienced with devices and systems that incorporate algorithmic processes.

Simply pinpointing how algorithms work is a limited approach; it may not yield what we hope to find (Seaver 2022). In studying technologists who develop music recommendation algorithms, Nick Seaver (2022) reminds us that algorithms are certainly not the first domain that ethnographers have studied under conditions of “partial knowledge and pervasive semisecrecy” (16). He lists past examples such as magicians, Freemasons, and nuclear scientists. Seaver argues that researching algorithmic development is not really about exposing company secrets but rather more productively aims to reveal “a more generalizable appraisal of cultural understandings that obtain across multiple sites within an industry” (17). A different approach involves exploring algorithms’ entanglements in an “ongoing navigation of relationships” (15). The idea is to reveal the algorithmic “stories that help people deal with contradictions in social life that can never be fully resolved” (Mosco 2005; see also Ziewitz 2016).

Narrativizing Algorithmic Experience

In order for data to “speak” about a domain vis-à-vis a group of users, data must be narrativized (Dourish and Gómez Cruz 2018). In the domain of machine learning, large-scale data sets accumulated from the experiences of many users are subsequently used to train a system to accomplish certain tasks. The system then must translate the processed information back so that the information is applicable to the individual experiences of a single user. Algorithmic data must ultimately be “narrated,” especially for devices that have the potential to “re-narrate users’ identities in ways that they strongly [reject] but [cannot] ignore” (Dourish and Gómez Cruz 2018, 3). Dourish and Gómez Cruz argue that it is only through narrative that data sets can “speak,” thus extending their impact, albeit in ways that may differ significantly from designers’ initial conceptions.
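
As a toy illustration of that translation step (entirely hypothetical, not any vendor’s actual pipeline), the sketch below reduces aggregate click data from many users to a segment-level preference and then “narrates” it back to a single person as an individualized characterization, regardless of whether it fits them:

```python
# Toy illustration (not any vendor's actual pipeline): aggregate data from many
# users is reduced to a segment, and the segment is then "narrated" back to one
# individual as a targeted characterization.
from collections import Counter

# Hypothetical aggregate: (age_band, gender) -> ad categories clicked by many users.
aggregate_clicks = {
    ("50+", "woman"): Counter({"weight loss": 840, "gardening": 310, "travel": 120}),
    ("18-29", "man"): Counter({"gaming": 900, "fitness": 400, "travel": 200}),
}

def narrate_profile(age_band, gender):
    """Turn segment-level statistics into an individualized 'story'."""
    segment = aggregate_clicks[(age_band, gender)]
    top_interest, _ = segment.most_common(1)[0]
    return f"People like you are interested in {top_interest}."

# The individual has contributed no clicks of her own; the system speaks for the segment.
print(narrate_profile("50+", "woman"))  # -> "People like you are interested in weight loss."
```

The individual in this example has added no data of her own; the system simply speaks for the segment, which is precisely the kind of re-narration of identity that interviewees in the chapters discussed below push back against.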

In light of this context, responding to algorithms through ethnographic narrative emerged as an important theme in a volume that I had the pleasure of co-editing with Elisabetta Costa, Nell Haynes, and Jolynna Sinanan. We recently saw the publication of our ambitious collection, The Routledge Companion to Media Anthropology (Costa et al. 2022), which contains forty-one chapters covering ethnographic research in twenty-six countries. The volume contains numerous chapters and themes pertaining to science and technology studies, including a section specifically devoted to “Emerging Technologies.” Additional themes include older populations’ struggles with technology, Black media in gaming ecosystems, and transgender experiences on social media, among others. The volume also collectively tackles numerous media types including digital spaces, virtual reality, augmented reality, and historical media forms.

Talking Back to the Algorithm

One science and technology-related theme that emerged across different sections of the volume involved perceptions of algorithmic usage and its impacts on individuals. Several chapters explored identity characterizations that users of technologized devices, such as the Amazon Echo and the Amazon Echo Look, as well as platforms such as the video-sharing site YouTube, found violative of their sense of self and well-being. In her chapter, “Algorithmic Violence in Everyday Life and the Role of Media Anthropology,” Veronica Barassi (2022) explored the impacts of algorithmic profiling on parents through an analysis of Alexa, the voice-activated virtual assistant in the Amazon Echo home hub. The chapter examines the experiences of families using Alexa in the United Kingdom and the United States during her 2016-2019 study.

[Image: A woman staring at a computer screen. Talking back to the algorithm. Image from Pixabay.]

Participants in Barassi’s study often felt that the product engaged in demeaning algorithmic profiling of them. For example, a woman named Cara, whom Barassi met in West Los Angeles, related how angry and irritated she felt at being automatically targeted and profiled with weight loss ads simply because she was a woman over 50. Feeling belittled by such profiling, she told Barassi that “there is so much more to me as a person.” Amy, another participant in her study who was actively trying to lose weight, felt that algorithmic profiling concentrated on a person’s vulnerabilities, such that every time she went on Facebook she was bombarded with ads for new diets and plus-size clothing. She used exactly the same phrase: there was “so much more to her as a person” than the algorithmically generated profiles suggested.

The commentary that Barassi collected from participants represents an attempt to “talk back” to the “algorithm” or perhaps more accurately the developers, companies, and societal perspectives that have collectively implemented violative profiling. By relating their experiences, these narratives work to counteract the feelings of powerlessness that many interviewees felt in the technologized construction and representation of their perceived identity.

Another important aspect of Barassi’s contribution is to broaden analysis of algorithmic creation and impact beyond a particular piece of technology, and understand how algorithmic profiling and predictive systems are deeply intertwined with forms of bureaucracy. She observes that algorithmic profiling engages in symbolic violence because it “pigeon-holes, stereotypes, and detaches people from their sense of humanity” (Barassi 2022, 489). Barassi argues that we must attend far more closely to the relationship between bureaucracy and structural violence as instantiated in algorithmic profiling.

Similar experiences of feeling belittled by algorithms emerged among interviewees in the chapter by Heather Horst and Sheba Mohammid, entitled “The Algorithmic Silhouette: New Technologies and the Fashionable Body” (2022). Horst and Mohammid studied the Amazon Echo Look, a device and companion app that compares outfits and provides fashion recommendations based on expert and algorithmic information. Priced at $199 and released widely in the US market in 2018, it was ostensibly a democratizing fashion tool, given that the average user would not ordinarily have daily, customized access to fashion expertise.

Horst and Mohammid examined use of the device among women in Trinidad in 2019-2020. One of its features was called Style Check, in which the user selected and tried on two different outfits and submitted images of themselves wearing the outfits for the device to compare. It provided a percentage preference ranking, along with a narrative about the recommendation. Women in the study noted that the system could be very useful for providing recommendations and affirming their choices, particularly for meeting their criteria to appear professional in their wardrobe.

Yet some women felt that the device misrecognized them or their goals in oppressively normative ways. In one instance, a woman was threatened with a ban on adding images for violating “community guidelines” because she tried to compare herself wearing two bikinis. Another woman complained that the device’s recommendations seemed geared toward garments that made her appear slimmer. In an interview she noted:

It’s assuming that you want a slimmer silhouette, less curves, less flare…it doesn’t take into consideration me, like my personal preferences. It’s comparing me basically using its algorithm and how do I know that your algorithm is as inclusive as it should be, you know?

They conclude that these tensions reveal complexities that emerge when devices do not translate across cultural contexts. Their research demonstrates how inherent biases as instantiated in devices and systems reproduce structural inequality. Horst and Mohammid (2022) recommend analyses that can “give feedback to designers and others at particular points in the life of algorithms and automation processes” (527). They recommend taking a “social life of algorithms” approach that considers how algorithmic processes are embedded in cultural and social relations, and how particular values become normative. Feedback from people interacting with algorithmic products needs to be collected and circulated, particularly to challenge the “inevitability” narrative of technical impact that often accompanies the emergence of new technologies.

Zoë Glatt (2022) writes about perceptions of algorithms among YouTube influencers in London and Los Angeles between 2017 and 2021 in her chapter, “Precarity, Discrimination and (In)Visibility: An Ethnography of ‘The Algorithm’ in the YouTube Influencer Industry.” Drawing on fieldwork among hard-working videographers, video editors, performers, and marketers, Glatt’s chapter traces how people respond to algorithmic recommendations on the YouTube platform, which directly impact influencers’ livelihoods. She found that “algorithmic invisibility,” or having work deprioritized or omitted from recommendation lists based on algorithmic rankings, is a common fear even among successful content creators with sizable followings. One vlogger expressed her deep concerns about platform invisibility:

Over the past year it has all gone to hell. There’s just no pattern to what is happening in essentially my business, and it is scary and it’s frustrating. I don’t know if people see my videos, I don’t know how people see my videos, I don’t know what channels are being promoted, I don’t know why some channels are being promoted more than others. There’s just no answers, and that’s scary to me. (Excerpted from a vlog by Lilly Singh 2017)

Glatt makes the important contribution of analyzing the cultural and economic meanings that creators attach to assumptions about algorithms. In triangulating what creators say about algorithms with how they feel about them, and the actions that influencers take in response, Glatt provides an important framework for parsing algorithmic interactions in culture. Her finding that influencers experience the algorithm as unpredictable and stressful underscores the importance of researchers helping to hold developers and implementers accountable for algorithmic processes, particularly with regard to addressing the algorithmic discrimination that participants reported.

Collectively, these chapters in The Routledge Companion to Media Anthropology include crucial analysis of ethnographic interviewees’ perceptions of algorithms while also providing a mechanism for participants to “talk back” to “algorithms” about how they as individuals are represented in everyday life through technology. Indeed, the stories presented serve as a reminder that it is important to think of algorithms relationally and to provide consistent mechanisms for feedback and implementation strategies that reduce harm. It is indeed time to “talk to the algorithms” by engaging users as well as the designers, processes, and societal organizations that implement them in daily life. We can move on from pinpointing exactly how algorithms work and shift attention to establishing ways to meaningfully incorporate feedback and change their impact on human beings around the world.


References

Barassi, Veronica. 2022. “Algorithmic Violence in Everyday Life and the Role of Media Anthropology.” In The Routledge Companion to Media Anthropology. Edited by Elisabetta Costa, Patricia G. Lange, Nell Haynes, and Jolynna Sinanan, 481-491. London: Routledge.

Costa, Elisabetta, Patricia G. Lange, Nell Haynes, and Jolynna Sinanan. 2022. The Routledge Companion to Media Anthropology. London: Routledge.

Dourish, Paul. 2016. “Algorithms and their Others: Algorithmic Culture in Context.” Big Data & Society (July–December): 1-11.

Dourish, Paul and Edgar Gómez Cruz. 2018. “Datafication and Data Fiction: Narrating Data and Narrating with Data.” Big Data & Society 5(2).

Glatt, Zoë. 2022. “Precarity, Discrimination and (In)Visibility: An Ethnography of ‘The Algorithm’ in the YouTube Influencer Industry.” In The Routledge Companion to Media Anthropology. Edited by Elisabetta Costa, Patricia G. Lange, Nell Haynes, and Jolynna Sinanan, 544-556. London: Routledge.

Horst, Heather A. and Sheba Mohammid. 2022. “The Algorithmic Silhouette: New Technologies and the Fashionable Body.” In The Routledge Companion to Media Anthropology. Edited by Elisabetta Costa, Patricia G. Lange, Nell Haynes, and Jolynna Sinanan, 519-531. London: Routledge.

Mosco, Vincent. 2005. The Digital Sublime: Myth, Power, and Cyberspace. Cambridge, MA: The MIT Press.

Seaver, Nick. 2022. Computing Taste: Algorithms and the Makers of Music Recommendation. Chicago: The University of Chicago Press.

Ziewitz, Malte. 2016. “Governing Algorithms: Myths, Mess, and Methods.” Science, Technology, & Human Values 41(1): 3-16.
