
Bard President Received $150,000 From Foundation Created by Jeffrey Epstein

Leon Botstein, the president of Bard College, said that he donated the money to his school as part of a $1 million gift he gave in 2016.

Leon Botstein, president of Bard College, in his study at the President’s House on the Bard College campus in Annandale-on-Hudson.

A College President Defends Seeking Money From Jeffrey Epstein

Leon Botstein, the president of Bard College, said, “Among the very rich is a higher percentage of unpleasant and not very attractive people.”

Leon Botstein has led Bard College for nearly five decades.

Household dust harbors forensic DNA info

It’s possible to retrieve forensically relevant information from human DNA in household dust, a new study finds.

After sampling indoor dust from 13 households, researchers were able to detect DNA from household residents over 90% of the time, and DNA from non-occupants 50% of the time. The work could be a way to help investigators find leads in difficult cases.

Specifically, the researchers were able to obtain single nucleotide polymorphisms, or SNPs, from the dust samples. SNPs are sites within the genome that vary between individuals—corresponding to characteristics like eye color—and can give investigators a “snapshot” of the person.

“SNPs are just single sites in the genome that can provide forensically useful information on identity, ancestry, and physical characteristics—it’s the same information used by places like Ancestry.com—that can be done with tests that are widely available,” says Kelly Meiklejohn, assistant professor of forensic science and coordinator of the forensic sciences cluster at North Carolina State University and corresponding author of the study in the Journal of Forensic Sciences.

“Because they’re single sites, they’re easier to recover for highly degraded samples where we may only be able to amplify short regions of the DNA,” Meiklejohn says.

“Traditional DNA analysis in forensics amplifies regions ranging from 100 to 500 base pairs, so for a highly degraded sample the large regions often drop out. SNPs as a whole don’t provide the same level of discrimination as traditional forensic DNA testing, but they could be a starting place in cases without leads.”

Meiklejohn and her team recruited 13 diverse households and took cheek swabs from each occupant along with dust samples from five areas within each home: the top of the refrigerator, inside the bedroom closet, the top frame of the front door, a bookshelf or photo frame in the living room, and a windowsill in the living room.

Utilizing massively parallel sequencing, or MPS, the team was able to quickly sequence multiple samples and target the SNPs of interest. The researchers detected 93% of known household occupants in at least one dust sample from their household. They also saw DNA from non-occupants in over half of the samples collected from each site.

“This data wouldn’t be used like traditional forensic DNA evidence—to link a single individual to a crime—but it could be useful for establishing clues about the ancestry and physical characteristics of individuals at a scene and possibly give investigators leads in cases where there may not be much to go on,” Meiklejohn says.

“But while we know it is possible to detect occupants versus non-occupants, we don’t know how long an individual has to stay in a household before they leave DNA traces in household dust.”

In future studies, the researchers plan to address how much time it takes for non-occupants to become detectable in household dust. Meiklejohn sees the work as useful in numerous potential investigative scenarios.

“When perpetrators clean crime scenes, dust isn’t something they usually think of,” Meiklejohn says. “This study is our first step into this realm. We could see this being applied to scenarios such as trying to confirm individuals who might have been in a space but left no trace evidence such as blood, saliva, or hair. And for cases with no leads and no hit on the national DNA database, could household dust provide leads?”

The NC State College of Veterinary Medicine funded the work. Additional coauthors are from Massachusetts Institute of Technology and NC State.

Source: NC State

Forensics study clarifies how bones of children decay

A new forensic science study sheds light on how the bones of infants and children decay.

The findings will help forensic scientists determine how long a young person’s remains were at a particular location, as well as which bones are best suited for collecting DNA and other tissue samples that can help identify the deceased.

“Crimes against children are truly awful, and all too common,” says Ann Ross, a professor of biological sciences at North Carolina State University and coauthor of the study in the journal Biology.

“It is important to be able to identify their remains and, when possible, understand what happened to them. However, there is not much research on how the bones of infants and children break down over time. Our work here is a significant contribution that will help the medicolegal community bring some closure to these young people and, hopefully, a measure of justice.”

For the study, the researchers used the remains of domestic pigs, which are widely used as an analogue for human remains in forensic research. Specifically, the researchers used the remains of 31 pigs, ranging in size from 1.8 kilograms (4 pounds) to 22.7 kilograms (50 pounds). The smaller remains served as surrogates for infant humans, up to one year old. The larger remains served as surrogates for children between the ages of one and nine.

The surrogate infants were left at an outdoor research site in one of three conditions: placed in a plastic bag, wrapped in a blanket, or fully exposed to the elements. Surrogate juveniles were either left exposed or buried in a shallow grave.

The researchers assessed the remains daily for two years to record decomposition rate and progression. The researchers also collected environmental data, such as temperature and soil moisture, daily.

Following the two years of exposure, the researchers brought the skeletal remains back to the lab, where they cut a cross section of bone from each set of remains and conducted a detailed inspection to determine how the structure of the bones had changed at the microscopic level.

The researchers found that all of the bones had degraded, but the degree of the degradation varied depending on the way that the remains were deposited. For example, surrogate infant remains wrapped in plastic degraded at a different rate from surrogate infant remains that were left exposed to the elements. The most significant degradation occurred in juvenile remains that had been buried.

“This is because the bulk of the degradation in the bones that were aboveground was caused by the tissue being broken down by microbes that were already in the body,” says corresponding author and PhD candidate Amanda Hale. “Buried remains were degraded by both internal microbes and by microbes in the soil.”

Hale is a research scientist at SNA International working for the Defense POW/MIA Accounting Agency.

The researchers also used statistical tools that allowed them to better assess the degree of bone degradation that took place at various points in time.

“In practical terms, this is one more tool in our toolbox,” Ross says. “Given available data on temperature, weather, and other environmental factors where the remains were found, we can use the condition of the skeletal remains to develop a rough estimate of when the remains were deposited at the site. And all of this is informed by how the remains were found. For example, whether the remains were buried, wrapped in a plastic tarp, and so on.

“Any circumstance where forensic scientists are asked to work with unidentified juvenile remains is a tragic one. Our hope is that this work will help us better understand what happened to these young people.”

Source: NC State

National Oxford Uehiro Prize in Practical Ethics: Why the Responsibility Gap is Not a Compelling Objection to Lethal Autonomous Weapons

This article received an honourable mention in the undergraduate category of the 2023 National Oxford Uehiro Prize in Practical Ethics.

Written by Tanae Rao, University of Oxford student

There are some crimes, such as killing non-combatants and mutilating corpses, so vile that they are clearly impermissible even in the brutal chaos of war. Upholding human dignity, or whatever is left of it, in these situations may require us to hold someone morally responsible for violation of the rules of combat. Common sense morality dictates that we owe it to those unlawfully killed or injured to punish the people who carried out the atrocity. But what if the perpetrators weren’t people at all? Robert Sparrow argues that, when lethal autonomous weapons cause war crimes, it is often impossible to identify someone–man or machine–who can appropriately be held morally responsible (Sparrow 2007; Sparrow 2016). This might explain some of our ambivalence about the deployment of autonomous weapons, even if their use would replace human combatants who commit war crimes more frequently than their robotic counterparts.

This essay rejects Sparrow’s argument, at least as it applies to a wide class of lethal autonomous weapons I call ‘LAW-1’. When LAW-1s cause war crimes, at least one human being can usually be held morally responsible. I acknowledge that there is a subset of accidents for which attributing moral responsibility is murkier, but they do not give us reason to refrain from using LAW-1s as compared with less sophisticated weapons like guns and missiles.

LAW-1s are the weapons systems that most people envision when imagining a lethal autonomous weapon. I predict that most systems developed in the next decade will be LAW-1s, although some may push the boundaries between LAW-1s and the next generation of lethal autonomous weapons. The defining characteristics of an LAW-1 are:

1. Moderate task specificity: An LAW-1 is a model trained to fulfil a relatively specific task, such as ‘fly around this area and kill any enemy combatants identified if and only if this is allowed under international law’. An example of a task too specific for an LAW-1 is ‘fly to these specific coordinates, then explode’ (this would be more akin to an unsophisticated missile, land mine, etc.). An example of a task too general is ‘perform tasks that will help our state win the war’.

2. No human intervention needed: An LAW-1 is capable of identifying targets and using lethal force without human intervention. For example, an unmanned aerial vehicle (UAV) that uses computer vision techniques to discern active combatants from non-combatants, then shoots the combatants with an attached gun without waiting for human approval, would qualify as an LAW-1. An aerial vehicle that requires a remote pilot to operate it is not an LAW-1.

3. No mental states: An LAW-1 does not have mental states, such as pain or regret, and does not have subjective experiences. It is reasonable to believe that all weapons systems currently in operation fulfil this criterion.

I will now outline Sparrow’s argument that lethal autonomous weapons introduce a responsibility gap.

(1) There is a responsibility gap for some war crimes caused by lethal autonomous weapons, meaning that no one can be held morally responsible for the war crime.

(2) Out of basic respect for enemy combatants and non-combatants alike, the legitimate use of any weapon requires that someone can be held responsible if wrongful harm arises as a result of its use.

(C) Therefore, we should not use lethal autonomous weapons during wartime.

I deny the existence of a responsibility gap for an LAW-1. Therefore, the focus of this essay is on the first premise of Sparrow’s argument. There are two reasons why an LAW-1 might commit a war crime. First, this might be intentionally programmed, in which case at least one human being is morally responsible. Second, if the war crime was not a result of human intention, human beings can often be held responsible for gross negligence. I concede that there will be a small number of freak accidents involving the use of LAW-1s for which no human can be held responsible but argue that these cases give us no special reason to reject LAW-1s as compared with less sophisticated weapons.

i. Humans develop and deploy an LAW-1 despite knowing that it will likely commit a war crime.

It should be uncontroversial that humans using an LAW-1 with the knowledge that it will likely commit war crimes are morally responsible for those crimes. For example, a human could knowingly train an LAW-1 with a reward function that incentivises killing non-combatants, even if killing non-combatants is not its explicit goal (e.g., the machine is rewarded for killing anyone who gets in its way, non-combatants included). The programmers of such a horrible weapon are morally responsible for the war crimes committed. If the military officials knew about its criminal programming, then they too would be morally responsible for the war crimes committed. Therefore, if humans knowingly deploy an LAW-1 that will commit war crimes, there is no responsibility gap.

ii. Humans deploy an LAW-1 without knowing that it could commit a war crime.

Here is where the existence of a responsibility gap is most plausible. Sparrow argues that “the more the system is autonomous, then the more it has the capacity to make choices other than those predicted or encouraged by its programmers. At some point then, it will no longer be possible to hold the programmers/designers responsible for outcomes that they could neither control nor predict” (Sparrow 2007, 70).

I make two contentions about accidental war crimes caused by LAW-1s. Firstly, many of these automation failures are a result of gross negligence and should have been foreseen by human programmers. As in other cases of negligence, it is appropriate to hold some human beings morally responsible for the results. For example, weapons company executives and/or military leadership could justifiably be imprisoned for some accidents. Secondly, the accidents which could not have been foreseen or prevented through sensible design practice do not give us special reason to dismiss LAW-1s. These accidents are not dissimilar from the misfiring of a gun, or human mistargeting of an unsophisticated missile.

When considering my arguments, it is prudent to think of why such accidents happen. Not all LAW-1s use machine learning (ML) techniques, but ML is widespread enough in tasks important for LAW-1s, such as computer vision, that it is worth exploring in some detail. In general, a machine learning-powered LAW-1 might fail because a) it is (accidentally) given a goal compatible with war crimes without robust constraints, and/or b) it fails at achieving its goal or staying within its constraints (e.g., misidentifying non-combatants as enemy combatants about to shoot friendly combatants).[1]

A body of machine learning research has identified, anticipated, and discussed these potential failure modes in detail.[2] I think it is reasonable to expect LAW-1 programmers to rigorously test their systems to ensure that the frequency of war crimes committed is exceedingly low. Sensible development of LAW-1s might involve intensive testing on representative datasets, early-stage deployments in real combat zones without weaponry to check whether non-combatants can be consistently identified, etc. Techniques to solve the problem of misspecified goals (in this case, goals compatible with war crimes) continue to be developed (Ouyang et al. 2022). The comparatively specific objectives given to LAW-1s make overcoming these technical challenges easier than for ML models given very general objectives. And, in the worst-case scenario, LAW-1s committing war crimes can be quickly recalled, and either decommissioned or improved to avoid recurrences.

Crucially, developers of LAW-1s need not be able to predict exactly how or why their machines will fail in order to be held morally responsible for that failure. As long as the LAW-1 committed a war crime as a result of a known failure mode (e.g., glitches in computer vision misclassifying non-combatants) that was not ruled out with a sufficient degree of confidence, developers (among others) can be held morally responsible. This is analogous to an unsophisticated missile whose faulty targeting system causes target coordinates to be miscommunicated, resulting in the accidental bombing of a hospital. The weapons manufacturer can plausibly be held morally responsible for not rigorously testing its product before selling it to the military.

Therefore, it is likely that, in many though not all circumstances, humans can be held morally responsible for war crimes caused by LAW-1s, even if no human explicitly intended for a war crime to be committed. In particular, programmers can be held responsible for not carefully checking for common failure modes, military officials can be held responsible for not sufficiently auditing the weapons they choose to deploy, and states can be held responsible for failing to regulate the development of faulty LAW-1s. I acknowledge that careful, rigorous checks might not currently be possible for LAW-1s, let alone more sophisticated lethal autonomous weapons. But ensuring a very low failure rate in such systems is a technical problem to be solved, rather than some sort of mathematical impossibility. Perhaps the deployment of LAW-1s ought to be delayed until further progress on these technical problems is made, but this does not justify a complete ban.

To be clear, LAW-1s still identify and kill people without human intervention. There will likely always be a small risk of accidentally violating international law when using an LAW-1 even if no negligence is involved. But there is no morally relevant difference between this and a human keying in the wrong target for a missile accidentally, or even a gun misfiring and hurting a surrendered enemy combatant. If LAW-1s have a very high rate of accidental killings, then they should not be used, for the same reason that a very inaccurate missile should not be used. The degree of autonomy exhibited by a weapons system is only relevant insofar as it is correlated with the frequency of accidents; the responsibility gap is not a reason to discount the deployment of LAW-1s with low accident rates.

Sparrow’s response to the charge that non-autonomous weapon-related unjust killings sometimes also have responsibility gaps is that “if the nature of a weapon, or other means of war fighting, is such that it is typically impossible to identify or hold individuals responsible for the casualties that it causes then it is contrary to [the] important requirement of jus in bello” (Sparrow 2007, 67). But I have argued that, at least for the LAW-1s currently being deployed and developed by the world’s militaries, the responsibility gap is far from typical. By this, I mean that the overall number of LAW-1-caused war crimes for which no one can be held morally responsible is plausibly smaller than Sparrow needs for his quoted response to be compelling.

Despite being able to use lethal force without human intervention, LAW-1s are not so different from a gun with regard to the attribution of moral responsibility. Just as a gun might misfire, or a human being may accidentally (and understandably) misaim, LAW-1s might not fulfil the task intended by the humans developing and deploying them. If these accidents are just as infrequent as accidents caused by human combatants, then the existence of a responsibility gap does not give us compelling reason to abandon LAW-1s. As technology develops, it seems likely that accident rates will decrease to the point that LAW-1s are superior to human combatants. Clever programming can allow LAW-1s to escape the violence-inducing cognitive biases shown to be present in human militaries, take in and relay relevant information faster than humans, and ultimately render law-abiding decisions in chaotic situations (Arkin 2010).

Therefore, the responsibility gap is not a compelling reason to refrain from developing and deploying certain kinds of lethal autonomous weapons. In fact, the need to minimise accidents may justify more expenditure on developing LAW-1s to be as safe as is feasible. Additionally, further research should establish a clearer classification of the degree of autonomy displayed by different weapons systems, as is relevant to moral responsibility. Not all lethal autonomous weapons have the same ethical implications, and it is dangerous to be overly general in our conclusions about such a consequential subject.

 

Bibliography

Amodei, Dario, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. “Concrete problems in AI safety.” arXiv preprint arXiv:1606.06565 (2016).

Arkin, Ronald C. “The case for ethical autonomy in unmanned systems.” Journal of Military Ethics 9, no. 4 (2010): 332-341.

Di Langosco, Lauro Langosco, Jack Koch, Lee D. Sharkey, Jacob Pfau, and David Krueger. “Goal misgeneralization in deep reinforcement learning.” In International Conference on Machine Learning, pp. 12004-12019. PMLR, 2022.

Ouyang, Long, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang et al. “Training language models to follow instructions with human feedback.” arXiv preprint arXiv:2203.02155 (2022).

Sparrow, Robert. “Killer robots.” Journal of Applied Philosophy 24, no. 1 (2007): 62-77.

Sparrow, Robert. “Robots and respect: Assessing the case against autonomous weapon systems.” Ethics & International Affairs 30, no. 1 (2016): 93-116.

[1] The former category is not limited to models for which goals are misspecified; I intend for ‘inner alignment’ failures, also known as goal misgeneralisation, to be included as well (see Langosco et al. 2022).

[2] See Amodei et al. 2016 for an overview of these research problems.

Chicago school official busted for embezzling $1.5 million in chicken wings

Vera Liddell—director of food services at Harvey School District 152 just outside Chicago—is accused of embezzling $1.5 million worth of chicken wings from the district. According to the Cook County State’s Attorney’s office, the scam dates back to the height of COVID-19, when schools were closed but still providing meals to children via curbside pick-up.

Human penis found at gas station. No cause for concern, say police.

Customers at a Mobile, Alabama gas station were upset by a real dick they encountered in the parking lot. "It was a whole d*ck," one employee said. Police arrived to collect the severed member and later commented that the incident is "not being investigated as an assault or murder."

Charity and solidarity! What responsibilities do nonprofits have towards Ukraine?

In a speech to the UN General Assembly in the fall of 2022, President Biden called on the UN to stand in solidarity with Ukraine. At least 1,000 companies have left Russia because of Putin’s brutal, unprovoked war on Ukraine. Some companies left because of sanctions. Others left for moral reasons, often under pressure from investors and consumers, and out of empathy with their employees in Ukraine. But companies also have human rights responsibilities. Whether they stay in or leave Russia will impact the war and the human rights of the people of Ukraine. When companies leave en masse, Russia faces the possibility of economic oblivion.

Nonprofits can also impact the war. Russian oligarchs have donated lots of money to cultural organizations, universities, and think tanks, such as Harvard, MoMA, and the Council on Foreign Relations. Many of these donations are tainted by corruption and by the close ties oligarchs have with Putin.

Philanthropy is a common way for oligarchs to launder their reputations, sometimes with an eye to future wrongdoing, what social psychologists call moral licensing. Studies show that people often follow their good acts with bad acts as a way to balance out the good with the bad. In the end, whatever good oligarchs do through their giving may be outweighed by the bad they’ve done in the past or will do in the future. But oligarchs are only part of the problem. Nonprofits that solicit and accept their donations are complicit in those harms, too.

What are the responsibilities of nonprofits? How should they meet their moral and human rights responsibilities during Russia’s war on Ukraine? What should we expect from museums, universities, and cultural organizations? If anything, they should be held to a higher standard than for-profit enterprises. After all, nonprofits serve the public good. They may not have had a physical presence in Russia, the way Starbucks and Levi Strauss did, but many of them are connected to Putin by way of Russian oligarchs.

“Philanthropy is a common way for oligarchs to launder their reputations, sometimes with an eye to future wrongdoing.”

How are nonprofits connected to Russia’s oligarchs?

Consider Viktor Vekselberg, a prominent Russian oligarch with close ties to Putin and head of the Skolkovo Foundation. Like many Russian oligarchs, he made his money with the collapse of the Soviet Union. The Skolkovo Foundation donated over $300 million to Massachusetts Institute of Technology (MIT) to support Skoltech, a program aimed at developing Russia’s tech sector. Vekselberg also sat on MIT’s Board of Trustees. It was only in 2018, after the US Treasury sanctioned him for “malign activities,” that MIT found the wherewithal to remove him from the Board. And, it was not until Russia invaded Ukraine that MIT ended the Skoltech Program, explaining, “this step is a rejection of the actions of the Russian government in Ukraine.” MIT finally got it right. Donors, such as Vekselberg, implicate nonprofits in Russia’s war on Ukraine. But had MIT done its due diligence from the outset, it would not have accepted Vekselberg’s donation in the first place. Boycotting oligarchs shows solidarity with the people of Ukraine, while doing nothing renders nonprofits complicit in the human rights violations suffered in Ukraine.

Vladimir Potanin, Russia’s richest oligarch, has supported the Kennedy Center and the Guggenheim Museum, among others. Until recently, he sat on the Board of Trustees at the Guggenheim and on the Advisory Board of the Council on Foreign Relations. Potanin resigned from both in April 2022. Although not a Russian citizen, Len Blavatnik is a Russian insider who has donated millions of dollars to Oxford, the Tate Modern, Yale, Harvard Medical School, and the Council on Foreign Relations, to name a few of the elite recipients of his philanthropy. Aaron Ring, a Yale professor who received support from the Blavatnik Fund, called on Yale to suspend the program, concerned that Yale was endorsing the donor. Yale maintained that since Blavatnik had not been sanctioned, his donation could be accepted. During Russia’s war on Ukraine, stakeholders like Aaron Ring don’t want to benefit from Russia’s oligarchs. They want to stand in solidarity with Ukraine.

How are nonprofits implicated in Russia’s human rights violations?

The Guiding Principles on Business and Human Rights were endorsed by the UN Human Rights Council in 2011. They hold that enterprises are responsible not only for their direct human rights violations, but also for their indirect ones. So, what counts as an indirect violation in the nonprofit sector? When a nonprofit benefits from donors who are implicated in human rights violations, the nonprofit is complicit in the wrongs of the donors. Many Russian oligarchs are tied to Putin, have profited from their relationship with him, and stand to benefit from his war on Ukraine.

“Boycotting oligarchs shows solidarity with the people of Ukraine, while doing nothing renders nonprofits complicit in the human rights violations suffered in Ukraine.”

When nonprofits refuse to accept donations from oligarchs, they stand in solidarity with Ukraine against Russia. Given the tendency of oligarchs to donate to elite and high-profile organizations, boycotting them may create a bandwagon effect, or a little philanthropy warfare!

Russia has a long record of human rights violations. Freedom of expression is one example: the Committee to Protect Journalists confirmed that 82 journalists and media workers were killed in Russia between 1992 and 2022. In 2020, Russia adopted a law banning so-called “disrespect” to authorities. Its violation of the fundamental rights of LGBTQ people is longstanding. In 2013, it penalized so-called “propaganda” about homosexuality. Activists and celebrities have faced fines for supporting the LGBTQ community in Russia.

Now, Russia is under investigation for war crimes such as rape, torture, and execution-style murders of civilians. In 2022, the UN General Assembly resolved that Russia should withdraw its military forces from Ukraine. This came amidst reports of Russian attacks on residences, schools, and hospitals, and on civilians, including women, people with disabilities, and children.

Have nonprofits done enough for human rights?

Have nonprofits done enough for human rights? No, not when it comes to Russian oligarchs. By laundering the reputations of oligarchs, nonprofits have enabled Putin’s war on Ukraine and the horrific suffering it has brought. The Guiding Principles can help nonprofits identify their human rights responsibilities and ensure that they are not complicit in Russia’s human rights violations. All enterprises should practice due diligence, a mechanism that prevents human rights violations and complicity in them. Refusing donations from Russian oligarchs is the very least nonprofits can do.

Transparency is at the heart of due diligence. Yale professor Jeffrey Sonnenfeld has tracked which companies left Russia and which have stayed, providing much-needed transparency on the operations of for-profit enterprises. Not only does Sonnenfeld’s list incentivize companies to pull out of Russia; the companies that left have also outperformed those that remained. Unfortunately, no such list exists for nonprofits. Tracking nonprofits with respect to Russian oligarchs, “knowing and showing,” would go a long way toward ensuring that they meet their human rights responsibilities.

To be sure, there is a risk that nonprofits will receive less money if they boycott Russian oligarchs. But it is also possible that they will be rewarded for doing the right thing, as Hands on Hartford was when it refused donations from the Proud Boys, a white supremacist group. Generous donors may come forward when they learn that nonprofits stand in solidarity with Ukraine. Granted, the impact nonprofits can have on the war in Ukraine is not as great as for-profit companies, if only because of scale. But keep in mind that nonprofits serve the public good, which if anything enhances their human rights responsibilities. In the long run, when nonprofits stand in solidarity with Ukraine, they serve the public good.

Featured image by Elena Mozhilov via Unsplash (public domain)

Corky Lee and the Work of Seeing

On the first anniversary of photographer Corky Lee’s death, Ken Chen sets forth an astounding feat of remembrance: a mosaic of photocriticism from which he teases out an elegy to Lee’s empathic genius, all set against a litany of horrors perpetrated against the Asian American community. Stunning writing, brimming with clarity and anger and love.

I spent a year looking at Corky Lee’s photographs. I saw grandmothers squat on the curb and laugh. I saw girls pluck the guqin. I saw boys pose on their fire escape. I saw women set up a streetside clinic whose sign says without shame: PAP SMEAR / BREAST EXAM / GONORRHEA TEST. I saw tenements, picketers, parades, veterans, and flags. I saw Reyna Elena, Miss Philippines and a B-Boy flying his bare arms wide. I saw a dapper Desi boy protesting Dotbusters. I saw men beat Taiko drums, I saw them hold up tombstones for Vincent Chin. I saw three women from Sakhi say: WE WILL NOT TOLERATE ABUSE. I saw a bride and groom order from a hot dog cart. I saw two cool women throw a cool glance. I saw a man remembering at a table marked POSTON ARIZONA and I wondered how many years had passed since the prison camps. I saw New York City and the tangled warrens of Chinatown. I saw a hollering woman in a hardhat hoist her sign high, the text that also tells her biography: INJURED ON THE JOB, THEN FIRED BY THE BOSS! There is something moving about the sheer number of people Corky Lee thought were worth remembering. His archive is an Aleph in which you can glimpse everyone from an Asian American world bulging vast with time and complexity. Over the past few years, we have asked for someone to finally see us. Looking at these kaleidoscopic images, I found myself thinking the only power that can recognize us is ourselves.
