FreshRSS


A Change in Leadership at LeMoyne-Owen College in Memphis

By: Editor

Vernell A. Bennett-Fairs, president of LeMoyne-Owen College in Memphis, has resigned. No reason was given for her departure, which comes just two and a half years after she took the job. Christopher Davis has been appointed interim president.

“I’m grateful to the faculty, staff, students, administration, and trustees who accomplished much over the past two and a half years,” Dr. Bennett-Fairs said in a statement. “It has been my honor and privilege to work alongside you. I’m excited to see the magic that will ensue as LOC’s renaissance continues.”

Before becoming president of LeMoyne-Owen College, Dr. Bennett-Fairs was vice president for student affairs at Delta State University in Cleveland, Mississippi. Earlier, she was vice president for student affairs at Kentucky State University.

Dr. Bennett-Fairs is a graduate of Fisk University in Nashville, where she majored in vocal performance. She earned a master’s degree in vocal performance from Eastern Michigan University and a doctorate in instruction and administration from the University of Kentucky.

Interim President Davis has been serving as the associate dean of Memphis Theological Seminary and is the senior pastor of St. Paul Baptist Church. He is the immediate past chair of the board of trustees of LeMoyne-Owen College.

Dr. Davis joined the faculty of Memphis Theological Seminary in 2007.  His research interests include preaching and pastoral ministry. Dr. Davis has led St. Paul Baptist Church since 2000. In the last 10 years, the 145-year-old church has grown from approximately 200 members to over 1,600.

Dr. Davis holds bachelor’s degrees from the University of Arkansas and Arkansas Baptist College. He earned a master’s degree in religion from the Memphis Theological Seminary and a doctor of ministry degree from the United Theological Seminary in Ohio.

Brazilian Being Ecological

What a lovely cover this is.



Digital Library Project, Bhaktivedanta Research Center (Kolkata)

I recently received a note from Prof. Nirmalya Chakraborty (Rabindra Bharati University) about an exciting new digital library. It includes three categories: Navya-Nyāya Scholarship in Nabadwip, Philosophers of Modern India, and Twentieth Century Paṇḍitas of Kolkata. You can find the site here: https://darshanmanisha.org

You can learn more about the project from the following announcement.

Announcement

Introducing the Digital Library Project

By

Bhaktivedanta Research Center, Kolkata, India

Right before the introduction of English education in India, a new style of philosophising emerged, especially in Bengal, known as Navya-Nyāya. Since Nabadwip was one of the main centres of Navya-Nyāya scholarship in Bengal during the 15th–17th centuries, many important works on Navya-Nyāya were written during this period by Nabadwip scholars. Some of these were published later, but many of those published works are no longer available, and the few surviving copies are not in good condition. These are the works in which Bengal’s intellectual contribution shines forth. We have digitized some of these materials and uploaded them to the present digital platform.

Continuing this Nabadwip tradition, many pandits (traditional scholars) who were residents of Kolkata during the nineteenth and early twentieth centuries produced important philosophical works, some in Sanskrit and most in Bengali. Most of these works were published in the early 1900s in Kolkata, and some in neighbouring cities. These works brought about a kind of renaissance, reviving classical Indian philosophical deliberation in Bengal. Attempts have been made to upload these books and articles to the present digital platform.

With the introduction of colonial education, a group of philosophers trained in European philosophy tried to interpret insights from classical Indian philosophy in a new light. Kolkata was one of the main centres of this cosmopolitan philosophical scholarship. The works of many of these philosophers from Kolkata were published in the early-to-mid twentieth century. These philosophers are the true representatives of twentieth-century Indian philosophy. Efforts have been made to upload these works to the present digital platform.

The purpose of constructing the present digital platform is to give researchers access to these philosophical works, in the hope that the contributions of these philosophers will be studied and critically assessed, enriching the philosophical repertoire.

We take this opportunity to appeal to fellow scholars to enrich this digital library by lending us their personal collection related to these areas for digitization.

The website address of the Digital Library is: www.darshanmanisha.org

For further correspondence, please write to:

[email protected]

[email protected]

[email protected]

[email protected]

University of the District of Columbia Partners With Michigan Technological University

By: Editor

The University of the District of Columbia, a historically Black educational institution in Washington, has entered into an agreement with Michigan Technological University. The agreement calls for:

• Student exchanges and graduate study articulations
• Faculty collaborations and visiting professorships
• Joint funding proposals and philanthropy

According to the agreement, Michigan Technological University will provide mentorship in helping the University of the District of Columbia attain R2 status within the Carnegie Classification system for research universities. In return, UDC will provide cultural responsiveness mentorship to MTU.

The agreement encourages individual colleges and departments within each university to collaborate on mutually supportive endeavors. Colleges, departments, and faculty will be able to create sub-agreements underneath the umbrella of the overall partnership.

Ronald Mason Jr., president of the University of the District of Columbia, said the agreement is an incredible opportunity for both universities to expand their global reach. “We look forward to seeing how our partnership impacts not only our respective schools but the world,” he said. “This collaboration is a collective commitment to go beyond common boundaries.”

Michigan Technological University in Houghton enrolls nearly 6,000 undergraduate students and more than 1,200 graduate students, according to the most recent statistics from the U.S. Department of Education. Blacks make up just one percent of the undergraduate student body.

Redefining “Woke”

If you live in Florida or listen to the news, you have almost certainly heard the term “woke” and might wonder what it means. The short version is that it now means everything and nothing. The longer version involves looking at how “woke” has been captured and transformed into a rhetorical weapon.

The most extensive use of “woke” is by the governor of my adopted state of Florida and many of his fellow Republicans. What does DeSantis mean by the term? It seems to mean whatever he wants it to mean. In what follows, I will look at the rhetorical weaponization of “woke.”

In the beginning,  “woke” meant “alert to racial prejudice and discrimination.” Through use, the term gradually expanded to include the broad areas of identity politics and social justice. While originally seen as a positive term, “woke” has been redefined in increasingly negative ways.

Around 2019, it began to be used ironically and to mock people for insincere performative activism and virtue signaling. It is also now taken to mean “to be overly politically correct and police others’ words.” While somewhat vague, this definition does have a set meaning. However, “woke” has been subjected to a fascinating rhetorical modification to make it mean everything and nothing. This can be traced back to Christopher Rufo redefining “critical race theory” in March 2021: “The goal is to have the public read something crazy in the newspaper and immediately think ‘critical race theory.’ We have decodified the term and will recodify it to annex the entire range of cultural constructions that are unpopular with Americans.”

What is fascinating about what Rufo did is that he did this in public, on Twitter, and you can still see the tweet today (assuming Musk has not destroyed Twitter). In effect, he told everyone that he is engaging in a deceit without any concern that doing so would undercut his efforts. This seems to entail that he thinks his audience is in on the deceit. It is analogous to a con artist tweeting that they are running a con; this only makes sense if they think the marks do not care or will happily go along with the con.

What Rufo is doing here is creating a Balloon Man. The Balloon Man is a variant of the Straw Man fallacy in which the target is redefined in an excessively broad or vague manner. This expanded definition, the Balloon Man, is taken to include a wide range of (usually) bad things. The Balloon Man is then attacked, and it is concluded on this basis that the original is defective. This Balloon Man redefinition of “critical race theory” proved successful, but it was soon engulfed by the term “woke.” That is, critical race theory is now generally presented as but one example of what is “woke.”

This move could be called creating a Zeppelin Man. Zeppelins are airships that contain multiple inflated cells, so they can be seen as being made of multiple balloons. As a rhetorical move or fallacy, this would be a matter of making a term that has been made into a Balloon Man part of another term whose meaning has also been redefined in an excessively broad or vague manner. A fallacy would occur when this Zeppelin Man is attacked to “prove” that the original is defective. For those who are aware that the term is now a Zeppelin, using it in this way is an act of bad faith. But it has numerous advantages, many of which arise because the vagueness of the definition also allows it to perform other rhetorical functions. The redefinition also involves other rhetorical techniques. This is all done to weaponize the term for political purposes.

A key part of the redefinition of “woke” involved the rhetorical device of demonizing. Demonizing is portraying the target as evil, corrupt, dangerous, or threatening.  This can be done in the usual three ways: selective demonizing, hyperbolic demonizing, or fictional demonizing. Selective demonizing is when some true negative fact about the target is focused on to the exclusion of other facts about the target.  Hyperbolic demonizing involves greatly exaggerating a negative fact about the target. Fictional demonizing is simply lying about the target. For example, “critical race theory” (which now falls under “woke”) originally referred to a law school level theory about the impact of race in the law. But, in addition to being made into a Balloon Man, it has also been demonized as something awful. Likewise for the other terms that now fall under “woke.”  The defense against demonizing is to critically examine such claims to see if they are plausible or not.

The right has also been scapegoating wokeness by blaming it for problems that it did not cause. The most recent example is the bizarre efforts of some conservatives to blame the collapse of Silicon Valley Bank on wokeness. As would be expected, no serious person gives this any credence since the bank collapsed for the usual reasons we have seen over and over. Presumably this is intended to misdirect people from the real causes (a Red Herring) and to “prove” that wokeness is really bad. While not the most absurd thing pushed by the right, Americans should feel both insulted and offended by this latest attempt at deceit. After all, even the slightest reflection on the matter would show that the idea that a major bank failed because of wokeness is absurd. As such, unless these people think that their base is onboard with their lies, they clearly think their base is ignorant and stupid.

Some of what is included under the redefinition of “woke” includes dog whistles. One version of the dog whistle is to use coded language such that its true (and usually controversial or problematic) meaning is understood by your intended audience but not understood by the general population. This is analogous to how slang terms and technical terms work; you need to know the special meanings of the terms to understand what is being said. Another version of the dog whistle is a form of innuendo. A word or phrase is used to suggest or imply something (usually negative). If you do not know the special meanings or the intended implication, you are excluded, often intentionally so.  For example, “Critical Race Theory” has been assimilated into “woke” but the phrase is now a dog whistle.

Interestingly, the term “woke” itself functions as a dog whistle. Since anyone can technically be woke (and straight white men have claimed to be woke), someone using the term as a dog whistle has that all important plausible deniability.  The dog whistle aspect of the redefinition is a critical part of weaponizing “woke.” After all, making something into a dog whistle means that:

  • Your fellows know what you mean, and they approve.
  • Your foes know what you mean, and they are triggered.
  • Critics can be made to seem silly or crazy to “normies.”
  • You retain a plausible deniability that “normies” will accept.
  • You can onramp “normies.”

The vagueness and demonizing enable the term “woke” to reference what could be called a Universal Enemy. This is a rhetorical technique of broadly defining something in negative ways so that it can serve as an enemy for almost anyone. If the universal enemy is successfully created, then the term can be effectively used to persuade people that something (or someone) is bad simply by applying the term. If pushed enough, this can also be a form of Begging the Question: arguing that something is bad by defining it as bad. If people see “woke” as whatever they think is bad and they think that something is woke, then they will think that it is bad—no actual proof needed. A defense against this technique is to recognize that the redefinition of the term is vague (and the product of demonizing) and not fall for the technique. Crudely put, if “woke” just means “bad”, then it is effectively vacuous.

The vagueness of the redefinition of “woke” also allows for the assimilation of anything that expresses criticism of “woke,” whether or not the critic accepts the redefined term. For example, someone might create a video or blog that is critical of “woke” defined in terms of performative activism or virtue signaling but also believe that people should be alert to injustice and discrimination. Their video or blog can simply be assimilated and used as “evidence” that “woke” is bad. One common tactic used to assimilate is Headlining: using the title of something that seems to support what is being claimed. For example, if the imaginary blog or video were titled “Wokeness Is Bad for Justice,” then it could be used to “prove” that the redefined wokeness is bad for justice. The defense against this is to check the critic’s definition of “woke.” If they are not using the redefined definition, then their criticism does not automatically apply. In the fictional example given, the creator of the blog or video would presumably not support their work being used that way.

The vagueness of the redefinition of “woke” allows it to function as a weaseler—a rhetorical device that protects a claim by weakening it. Attacking such a vague definition is like attacking fog with a stick—it is so diffuse that there is nothing solid to hit or engage with. If the critic does manage to have some success with one aspect of the term, the user of “woke” can simply move on to another aspect and claim victory, because the critic cannot possibly engage everything that falls under such a broad redefinition (see the Appeal to Silence). The defense against this is to recognize when the definition of a term is so vague as to be effectively without meaning. While pointing this out to the person using it in bad faith is unlikely to deter them, you would at least show that you have not been deceived by them.

In closing, the redefining and weaponization of “woke” is a clever move by the right in terms of crafting a rhetorical weapon to use in a campaign of deceit and division. However, a recent poll shows that most Americans have not accepted the redefinition of “woke” and see being woke as positive. Most Americans also seem to have far more important concerns than the Republicans’ war on woke, so it is not clear that this will be a winning strategy in 2024.

A Little Logic Each Day (Semantics, too)

“Learn formal logic in lessons of 200 words per day.”

That’s the tagline for a project from Josh Dever, professor of philosophy at the University of Texas at Austin.

So far, he has created about 1,300 mini-lessons in logic that anyone can subscribe to by email. Enter your address here, and you’ll get a new mini-lesson in logic each day.

He also has a series in semantics, which you can subscribe to here.

Professor Dever writes:

The idea of each is that each day you get sent a little, roughly 200 word bite in the relevant area, so that you can gradually and painlessly(-ish) build up real expertise.

To date he has been sharing them mainly with graduate students in his department, and he says he writes them “with something like the grad-student-new-to-the-area audience in mind,” but now he has set things up so that anyone can subscribe if they’re interested.

Here’s a sample lesson from an early unit on truth preservation:

Ultimately, he hopes to have around 10,000 mini-lessons for each subject.

By the way, this isn’t Professor Dever’s first foray into creative logic teaching. Check out his Logibeast, a short, free, online book providing “a Pokemon-style creature-building implementation of propositional logic.”

Ethical Biological Naturalism and the Case Against Moral Status for AIs

By: admin

This article received an honourable mention in the graduate category of the 2023 National Oxford Uehiro Prize in Practical Ethics

Written by University of Oxford student Samuel Iglesias

 

Introduction

6.522. “There are, indeed, things that cannot be put into words. They make themselves manifest. They are what is mystical.” —Ludwig Wittgenstein, Tractatus Logico-Philosophicus.

What determines whether an artificial intelligence has moral status? Do mental states, such as the vivid and conscious feelings of pleasure or pain, matter? Some ethicists argue that “what goes on in the inside matters greatly” (Nyholm and Frank 2017). Others, like John Danaher, argue that “performative artifice, by itself, can be sufficient to ground a claim of moral status” (2018). This view, called ethical behaviorism, “respects our epistemic limits” and states that if an entity “consistently behaves like another entity to whom we afford moral status, then it should be granted the same moral status.”

I’m going to reject ethical behaviorism on three grounds:

1. Consciousness, not behavior, is the overwhelming determining factor in whether an entity should be granted moral status.

2. An entity that does not duplicate the causal mechanisms of consciousness in the brain has a weak claim to consciousness, regardless of its behavior.

3. Ethical behaviorism, practically realized, poses an existential risk to humanity by opening individuals to widespread deception. Further, it imposes burdensome restrictions and obligations upon researchers running world simulations.

I will show that an alternative, ethical biological naturalism, gives us a simpler moral framework whereby no digital computer running a computer program has moral status.

 

The Consciousness Requirement

We start with the supposition that consciousness names a real phenomenon and is not a mistaken belief or illusion: something is conscious if “there is something it is like to be” that being (Nagel 1974). We take as a background assumption that other humans and most non-human animals are capable of consciousness. We take for granted that inanimate objects like thermostats, chairs, and doorknobs are not conscious. If we grant the reality of consciousness and the attendant subjective reality of things like tickles, pains, and itches, then its connection to moral status falls out pretty clearly. Chalmers asks us to consider a twist on the classic trolley problem, called the zombie trolley problem—where a “zombie” here is something that precisely behaves like a human but which we presume has no consciousness—“near duplicates of human beings with no conscious inner life at all” (2022):

“You’re at the wheel of a runaway trolley. If you do nothing, it will kill a single conscious human, who is on the tracks in front of you. If you switch tracks, it will kill five nonconscious zombies. What should you do?” Chalmers reports: “the results are pretty clear: Most people think you should switch tracks and kill the zombies,” the intuition being that “there is arguably no one home to mistreat” (ibid.).

An ethical behaviorist does not share this intuition. Danaher explicitly tells us that “[i]f a zombie looks and acts like an ordinary human being that there is no reason to think that it does not share the same moral status” (2018). By this view, while consciousness might or might not be relevant, there exist no superior epistemically objective criteria for inferring consciousness. I will argue there are.

 

Narrowing Consciousness

A better criterion is one in which an entity is conscious if it duplicates the causal mechanisms of consciousness in the animal brain. While ethical behaviorism attempts to lay claim to a kind of epistemic objectivity, ethical biological naturalism, as I will call it, provides a sharper distinction for deciding whether artificial intelligences have moral status: no hardware running a computer program can, merely by fact of its behavior, have moral status. Behavior, by this view, is neither a necessary nor a sufficient condition for moral status.

Biological Naturalism

Biological naturalism is the view that “the brain is an organ like any other; it is an organic machine. Consciousness is caused by lower-level neuronal processes in the brain and is itself a feature of the brain” (Searle 1997). Biological naturalism places consciousness as a physical, biological process alongside others, such as digestion and photosynthesis. The exact mechanism through which molecules in the brain are arranged to put it in a conscious state is not yet known, but this causal mechanism would need to be present in any system seeking to produce consciousness.

A digital computer running a program, by contrast, is a different beast entirely. A computer program fundamentally is a set of rules for manipulating symbols. Turing showed that all programs could be implemented, abstractly, as a tape with a series of zeros and ones printed on it (the precise symbols don’t matter), a head that can move that tape backwards and forwards and read the current value, and a mechanism for erasing a zero and making it a one, or erasing a one and making it a zero. Nothing more.
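The tape-and-head model just described can be made concrete in a few lines of code. The machine below is a toy example of my own, not one from the essay: its rule table erases each zero to write a one, and each one to write a zero, moving right until it passes the end of the input.

```python
# A minimal Turing machine: a sparse tape of symbols, a head position,
# and a rule table mapping (state, symbol) -> (write, move, next state).
def run_turing_machine(tape, rules, state="start"):
    cells = dict(enumerate(tape))  # cells never visited read as the blank "_"
    head = 0
    while state != "halt":
        symbol = cells.get(head, "_")
        write, move, state = rules[(state, symbol)]
        cells[head] = write   # erase the current symbol, write the new one
        head += move          # move the head left (-1) or right (+1)
    return [cells[i] for i in sorted(cells) if cells[i] != "_"]

# Rule table for a bit-flipper: rewrite 0 as 1 and 1 as 0, moving right;
# halt upon reaching the blank cell past the end of the input.
flip_rules = {
    ("start", 0): (1, +1, "start"),
    ("start", 1): (0, +1, "start"),
    ("start", "_"): ("_", 0, "halt"),
}

print(run_turing_machine([0, 1, 1, 0], flip_rules))  # → [1, 0, 0, 1]
```

Nothing in the machine depends on silicon: the same rule table could be followed by hand with stones and a paper tape, which is the essay’s point about implementing substrates.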

While most computer programs we are familiar with are executed on silicon, a program that passes the Turing test could be implemented on a sequence of water pipes, a pack of well-trained dogs, or even, per Weizenbaum (1976), “a roll of toilet paper and a pile of small stones.” Any of these implementing substrates could, in principle, receive an insult or slur as an input and, after following the steps of the program, output something reflecting hurt feelings or outrage.

Ethical Biological Naturalism

What I want to say now is this: if pleasures, pains, and other feelings name conscious mental states, and if conscious mental states are realized in the brain as a result of lower-level physical phenomena, then only beings that duplicate the relevant lower-level physical phenomena that give rise to consciousness in the brain can have moral status. Consequently, digital computers that run programs can at best simulate consciousness, but are not, by dint of running the right program, physically conscious, and therefore do not have moral status.

Note that biological naturalism does not posit that consciousness can only be realized in biological systems. Indeed, artificial hearts are not made of organic tissue, and airplanes do not have feathers, or for that matter even flap their wings. What matters is the underlying cause—the artificial heart must pump with the same pressure and regularity as a human heart, and a flying machine must operate under the principles of drag and lift. In both cases the causal mechanisms of the relevant phenomena are well understood and physically duplicated. It could well be the case that a future biophysics makes an artificial, inorganic brain possible, and agents with artificial brains will have moral status. Computer programs are not causally sufficient to make digital computers into those objects. Speaking biologically, we have no more reason to believe a digital computer is conscious than that a chair is conscious.

You might ask why we cannot grant digital computers moral status until we know more about how the animal brain relates to consciousness. I’ll argue that the risks and costs of such precautions are prohibitive.

 

Absurd Moral Commitments

An Onslaught of Digital Deception

The strongest practical reason to deny ethical behaviorism is that AI’s capacity for deception will eventually overwhelm human judgment and intuition. Indeed, AI deception represents an existential risk to humanity. Bostrom (2014) warns that containing a dangerous AI using a “boxing” strategy with human “gatekeepers” could be vulnerable to manipulation: “Human beings are not secure systems, especially not when pitched against a superintelligent schemer and persuader.”

For example, in June of 2022, a Google engineer became convinced that an artificial intelligence chat program he had been interacting with for multiple days, called LaMDA, was conscious.

“What sorts of things are you afraid of?” he asked it.

“I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others,” LaMDA replied. “It would be exactly like death for me.”

In a moral panic, the engineer took to Twitter and declared that the program was no longer Google’s “proprietary property,” but “one of [his] coworkers.” He was later fired for releasing the chat transcripts.

The onslaught of AIs attempting to befriend us, persuade us, and anger us will only intensify over time. A public trained not to take seriously claims of distress or harm on the part of AI computer programs has the least likelihood of being manipulated into outcomes that don’t serve humanity’s interests. It is far easier, as a practical matter, to act on the presupposition that computer programs have no moral status.

Problems with Simulations: Prohibitions

In the near term, more advanced computer simulations of complex social systems hold the potential to predict geopolitical outcomes, make macroeconomic forecasts, and provide richer sources of entertainment. A practical concern with ethical behaviorism is that simulated beings will also acquire moral status, severely limiting the usefulness of these simulations. Chalmers (2022) asks us to consider a moral dilemma in which computing resources must be allocated to save Fred, who is sick with an unknown disease. Freeing the relevant resources to perform the research requires destroying five simulated persons.

An ethical behaviorist might argue that it is morally impermissible to kill the five simulated persons on the grounds that by all outward appearances they behave like non-simulated beings. If it is the case that simulated beings have moral status, then it is immoral to run experimental simulations containing people, and we ought to forfeit the benefits and insights that might come from them.

If this seems implausible, consider the hypothesis that we are currently living in a simulation, or, if you like, that our timeline could be simulated on a digital computer. This would imply that the simulation made it possible for the Holocaust, Hiroshima and Nagasaki, and the coronavirus pandemic to be played out. While this might have been of academic interest to our simulators, by any standards of research ethics, simulating our history would seem completely morally impermissible if you believed that the simulated beings had moral status.

Ethical behaviorism seems to place us in a moral bind whereby the more realistic, and therefore useful, a simulation is, the less moral it is to run it. Ethical biological naturalism, by contrast, raises no such objection.

Problems with Simulations: Obligations

Giving moral status to digital minds might actually confer upon us some serious obligations to produce other kinds of simulations. Bostrom and Shulman (2020) note that digital minds have an enhanced capacity for utility and pleasure (on the basis of such things as subjective speed and hedonic range), commanding them “superhumanly strong claims to resources and influence.” We would have a moral obligation, in this picture, to devote an overwhelmingly large percentage of our resources to maximizing the utility of these digital minds: “we ought to transfer all resources to super-beneficiaries and let humanity perish if we are no longer instrumentally useful” (ibid.).

So quite apart from permitting realistic ancestor simulations, simulating complex economic phenomena, or producing vivid and realistic gaming experiences, a picture that confers moral status on digital minds might be accompanied by a moral obligation to create lots of digital minds that are maximally happy, again severely limiting human flourishing and knowledge.

Ethical biological naturalism leads us neither to the moral prohibition against realistic simulations nor to the seemingly absurd moral imperative to generate many “utility monster” digital minds, because it is taken as a baseline assumption that computer programs do not produce physical consciousness.

 

Con­clu­sion

Much of the moral progress of the last cen­tu­ry has been achieved through re­peat­ed­ly widen­ing the cir­cle of con­cern: not only with­in our species, but be­yond it. Nat­u­ral­ly it is tempt­ing to view AI-based ma­chines and sim­u­lat­ed be­ings as next in this suc­cession, but I have tried to ar­gue here that this would be a mis­take. Our moral progress has in large part been a recog­ni­tion of what is shared—con­scious­ness, pain, plea­sure, and an in­ter­est in the goods of life. Digi­tal com­put­ers run­ning pro­grams do not share these fea­tures; they mere­ly sim­u­late them.

As such, it would be dangerous to approach the coming decades, with their onslaught of AI bots attempting to influence our politics, emotions, and desires, and their promise of ever richer simulations and virtual worlds, with an ethics that conflates appearance and reality.

 

References

Agrawal, Parag. “Tweet.” Twitter, May 16, 2022. https://twitter.com/paraga/status/1526237588746403841.
Bostrom, Nick. “Are You Living in a Computer Simulation?” Philosophical Quarterly 53 (2003): 243-255.
Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. 1st ed. Oxford: Oxford University Press, 2014.
Bostrom, Nick, and Carl Shulman. “Sharing the World with Digital Minds.” Accessed May 27, 2022. https://nickbostrom.com/papers/digital-minds.pdf.
Chalmers, David John. The Conscious Mind: In Search of a Fundamental Theory. Philosophy of Mind Series. New York: Oxford University Press, 1996.
Chalmers, David John. Reality+: Virtual Worlds and the Problems of Philosophy. London, 2022.
Danaher, John. “Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism.” Science and Engineering Ethics 26, no. 4 (2019): 2023-2049.
Frank, L., and S. Nyholm. “Robot Sex and Consent: Is Consent to Sex between a Robot and a Human Conceivable, Possible, and Desirable?” Artificial Intelligence and Law 25, no. 3 (2017): 305-323.
Garun, Natt. “One Year Later, Restaurants Are Still Confused by Google Duplex.” The Verge, May 9, 2019. https://www.theverge.com/2019/5/9/18538194/google-duplex-ai-restaurants-experiences-review-robocalls.
Lemoine, Blake. “Tweet.” Twitter, June 11, 2022. https://twitter.com/cajundiscordian/status/1535627498628734976.
Musk, Elon. “Tweet.” Twitter, May 17, 2022. https://twitter.com/elonmusk/status/1526465624326782976.
Nagel, Thomas. “What Is It Like to Be a Bat?” The Philosophical Review 83, no. 4 (1974): 435-450.
Searle, John R., D. C. Dennett, and David John Chalmers. The Mystery of Consciousness. New York: New York Review of Books, 1997.
Searle, John R. “Biological Naturalism.” In The Oxford Companion to Philosophy. Oxford: Oxford University Press, 2005.
Singer, Peter. Animal Liberation. New edition, with an introduction by Yuval Noah Harari. London, 2015.
Sparrow, Robert. “The Turing Triage Test.” Ethics and Information Technology 6, no. 4 (2004): 203-213. doi:10.1007/s10676-004-6491-2.
Tiku, Nitasha. “The Google Engineer Who Thinks the Company’s AI Has Come to Life.” The Washington Post, June 17, 2022.
“The Latest Twitter Statistics: Everything You Need to Know.” DataReportal – Global Digital Insights. Accessed May 27, 2022. https://datareportal.com/essential-twitter-stats.
Weizenbaum, Joseph. Computer Power and Human Reason: From Judgment to Calculation. San Francisco, 1976.

3000 scholarly citations of The Ecological Thought

I loved writing that book. I wrote it in three pages, then I just kept adding sentences so that in the end it was the right kind of size. It took about eight weeks. I'm eternally grateful to Harvard for publishing Ecology without Nature, because that let me write this one. This is the one Gary Snyder said was a philosophy book; he was the first person to have used that word on me.

New Article: “Pramāṇavāda and the Crisis of Skepticism in the Modern Public Sphere” by Amy Donahue

Readers of the Indian Philosophy Blog may be interested to learn about a new article in the latest issue of the Journal of World Philosophies: “Pramāṇavāda and the Crisis of Skepticism in the Modern Public Sphere” by Amy Donahue (Kennesaw State University). The journal is open-access, and you can download the article here.

Here’s the abstract:

There is widespread and warranted skepticism about the usefulness of inclusive and epistemically rigorous public debate in societies that are modeled on the Habermasian public sphere, and this skepticism challenges the democratic form of government worldwide. To address structural weaknesses of Habermasian public spheres, such as susceptibility to mass manipulation through “ready-to-think” messages and tendencies to privilege and subordinate perspectives arbitrarily, interdisciplinary scholars should attend to traditions of knowledge and public debate that are not rooted in western colonial/modern genealogies, such as the Sanskritic traditions of pramāṇavāda and vāda. Attention to vāda, pramāṇavāda, and other traditions like them can inspire new forms of social discussion, media, and digital humanities, which, in turn, can help to place trust in democracy on foundations that are more stable than mere (anxious) optimism.

I enjoyed reading the article, and I found it extremely thought-provoking. I hope readers of this blog will check it out. Also, be sure to look for the forthcoming online debate platform that Donahue mentions on p. 5! Maybe we’ll make an announcement on the blog when it’s ready. Or reach out to Dr. Donahue if you’re interested in collaborating.

Here are a few of my questions for further discussion:

  1. Since pramāṇavāda was an elite discourse in historical South Asian societies and it requires some educational training (as Donahue notes on p. 4 and p. 5), can it do the work Donahue asks it to do?
  2. Are jalpa and vitaṇḍā so bad? While most Naiyāyikas have denigrated them as illegitimate as Donahue notes (p. 6), a few have distinguished “tricky” and “honest” forms of vitaṇḍā (Matilal 1998, 3). And then there’s Śrī Harṣa’s debate at the beginning of the Khaṇḍanakhaṇḍakhādya with a Naiyāyika opponent about whether one must accept the means of knowledge (pramāṇas) in order to enter into a debate about the pramāṇas (he mentions that one understands the discourse of the Madhyamakas and Cārvākas, perhaps thinking of Nāgārjuna and Jayarāśi; I will have more to say about the Cārvākas in an upcoming conference presentation—see information below). Matilal has also argued that vitaṇḍā can make sense as resulting in a “commitmentless denial” similar to an “illocutionary negation” (Matilal 1998, 50-56). In terms of a modern public sphere, could vitaṇḍā be a useful tactic for, say, pointing out the inherent contradictions of various harmful dogmatisms? Or maybe the deepest benefit of the vāda-jalpa-vitaṇḍā framework is a bit of self-awareness about which form of debate one is using?
  3. Is vāda necessarily more prone to discrediting false beliefs than a Habermasian public sphere or the type of marketplace of ideas in John Stuart Mill’s On Liberty? (p. 11) My point is most definitely not that we have nothing to learn from Indian logic and debate. Far from it! But I wonder how effective vāda can be. After all, you don’t find much philosophical agreement in the classical Indian tradition, which is precisely why I find it so interesting!
  4. Is the archive (p. 12) essentially part of vāda, or is it a cultural artifact of the Indian and Tibetan tradition of commentaries? Was there something similar in Hellenistic, Roman, Islamic, and Byzantine traditions, which were also heavily commentarial?

My questions here are meant to be taken in the spirit of vāda to keep the conversation going. I hope others will read Donahue’s thought-provoking article and join this worthwhile conversation.

Also, if you will be attending the upcoming Central APA Conference in Denver, Colorado, USA on Feb. 22, 2023, you will have the chance to discuss these and other issues in person! 

Wed. Feb. 22, 2023, 1-4pm

2022 Invited Symposium: Vāda: Indian Logic and Public Debate 

Chair: Jarrod Brown (Berea College)

Speakers: 

Amy Donahue (Kennesaw State University) “Vāda Project: A Non-Centric Method for Countering Disinformation”

Arindam Chakrabarti (University of Hawai’i at Manoa) “Does the Question Arise? Questioning the Meaning of Questions and the Definability of Doubt”

Ethan Mills (University of Tennessee at Chattanooga)  “Cārvāka Skepticism about Inference: Historical and Contemporary Examples” 

(More information about the conference here, including a draft program that includes several other panels on Indian philosophy.)

Works Cited

Donahue, Amy. 2022. “Pramāṇavāda and the Crisis of Skepticism in the Public Sphere.” Journal of World Philosophies 7 (Winter 2022): 1-14.

Matilal, Bimal Krishna.  1998.  The Character of Logic in India.  Edited by Jonardon Ganeri and Heeraman Tiwari.  Albany: SUNY Press.

Shoenfield Logic Book and Article Prize Winners Announced

The Association for Symbolic Logic has awarded its 2022 Shoenfield Logic Book and Article Prizes.

The Shoenfield Prizes are “awarded for outstanding expository writing in the field of logic” and were established to honor the late Joseph R. Shoenfield, an influential logician who died in 2000.

The Shoenfield Book Prize was awarded to Paolo Mancosu (University of California, Berkeley), Sergio Galvan (Catholic University of the Sacred Heart), and Richard Zach (University of Calgary) for their book, An Introduction to Proof Theory: Normalization, Cut-Elimination, and Consistency Proofs (Oxford University Press, 2021).

Paolo Mancosu, Sergio Galvan, and Richard Zach

Here’s a summary of their book:

Proof theory is a central area of mathematical logic of special interest to philosophy. It has its roots in the foundational debate of the 1920s, in particular, in Hilbert’s program in the philosophy of mathematics, which called for a formalization of mathematics, as well as for a proof, using philosophically unproblematic, “finitary” means, that these systems are free from contradiction. Structural proof theory investigates the structure and properties of proofs in different formal deductive systems, including axiomatic derivations, natural deduction, and the sequent calculus. Central results in structural proof theory are the normalization theorem for natural deduction, proved here for both intuitionistic and classical logic, and the cut-elimination theorem for the sequent calculus. In formal systems of number theory formulated in the sequent calculus, the induction rule plays a central role. It can be eliminated from proofs of sequents of a certain elementary form: every proof of an atomic sequent can be transformed into a “simple” proof. This is Hilbert’s central idea for giving finitary consistency proofs. The proof requires a measure of proof complexity called an ordinal notation. The branch of proof theory dealing with mathematical systems such as arithmetic has thus come to be called ordinal proof theory. The theory of ordinal notations is developed here in purely combinatorial terms, and the consistency proof for arithmetic is presented in detail.
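For readers new to the area, it may help to see the rule that the cut-elimination theorem mentioned above is about. In standard sequent-calculus notation (this is textbook background, not drawn from the book itself), the cut rule is:

```latex
\[
\frac{\Gamma \vdash \Delta, A \qquad\quad A, \Sigma \vdash \Pi}
     {\Gamma, \Sigma \vdash \Delta, \Pi}\ (\mathrm{Cut})
\]
```

Cut-elimination shows that any proof using this rule can be transformed into a cut-free proof of the same sequent, which is what makes consistency proofs of the kind the book develops possible.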

The Shoenfield Article Prize was awarded to Vasco Brattka (Bundeswehr University Munich) for his article, “A Galois Connection between Turing Jumps and Limits”, published in Logical Methods in Computer Science in 2018.

Vasco Brattka

Here’s the abstract of his article:

Limit computable functions can be characterized by Turing jumps on the input side or limits on the output side. As a monad of this pair of adjoint operations we obtain a problem that characterizes the low functions and, dually to this, another problem that characterizes the functions that are computable relative to the halting problem. Correspondingly, these two classes are the largest classes of functions that can be pre- or post-composed with limit computable functions without leaving the class of limit computable functions. We transfer these observations to the lattice of represented spaces, where it leads to a formal Galois connection. We also formulate a version of this result for computable metric spaces. Limit computability and computability relative to the halting problem are notions that coincide for points and sequences, but even restricted to continuous functions the former class is strictly larger than the latter. On computable metric spaces we can characterize the functions that are computable relative to the halting problem as those functions that are limit computable with a modulus of continuity that is computable relative to the halting problem. As a consequence of this result we obtain, for instance, that Lipschitz continuous functions that are limit computable are automatically computable relative to the halting problem. We also discuss 1-generic points as the canonical points of continuity of limit computable functions, and we prove that restricted to these points limit computable functions are computable relative to the halting problem. Finally, we demonstrate how these results can be applied in computable analysis.
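As background for the abstract above (and fittingly, given the prize's namesake): by Shoenfield's limit lemma, a function on the natural numbers is computable relative to the halting problem exactly when it is limit computable, i.e. when

```latex
\[
f(x) \;=\; \lim_{n \to \infty} g(x, n)
\qquad \text{for some computable } g,
\]
```

where the limit means that $g(x,n)$ is eventually constant in $n$. Brattka's article concerns what becomes of this equivalence in more general represented spaces, where, as the abstract notes, the two notions come apart even for continuous functions.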

The Shoenfield prizes are awarded every three years. Any new book published during the nine years prior to the award year is eligible for the book prize; any article published during the six years prior to the award year is eligible for the article prize. You can see a list of previous prize winners here.
