Vernell A. Bennett-Fairs, president of LeMoyne-Owen College in Memphis, has resigned. No reason was given for her departure, which comes just two and a half years after she took the job. Christopher Davis has been appointed interim president.
“I’m grateful to the faculty, staff, students, administration, and trustees who accomplished much over the past two and a half years,” Dr. Bennett-Fairs said in a statement. “It has been my honor and privilege to work alongside you. I’m excited to see the magic that will ensue as LOC’s renaissance continues.”
Before becoming president of LeMoyne-Owen College, Dr. Bennett-Fairs was vice president for student affairs at Delta State University in Cleveland, Mississippi. Earlier, she was vice president for student affairs at Kentucky State University.
Dr. Bennett-Fairs is a graduate of Fisk University in Nashville, where she majored in vocal performance. She earned a master’s degree in vocal performance from Eastern Michigan University and a doctorate in instruction and administration from the University of Kentucky.
Interim President Davis has been serving as the associate dean of Memphis Theological Seminary and is the senior pastor of St. Paul Baptist Church. He is the immediate past chair of the board of trustees of LeMoyne-Owen College.
Dr. Davis joined the faculty of Memphis Theological Seminary in 2007. His research interests include preaching and pastoral ministry. Dr. Davis has led St. Paul Baptist Church since 2000. In the last 10 years, the 145-year-old church has grown from approximately 200 members to over 1,600.
Dr. Davis holds bachelor’s degrees from the University of Arkansas and Arkansas Baptist College. He earned a master’s degree in religion from the Memphis Theological Seminary and a doctor of ministry degree from the United Theological Seminary in Ohio.
I recently received a note from Prof. Nirmalya Chakraborty (Rabindra Bharati University) about an exciting new digital library. It includes three categories: Navya-Nyāya Scholarship in Nabadwip, Philosophers of Modern India, and Twentieth Century Paṇḍitas of Kolkata. You can find the site here: https://darshanmanisha.org
You can learn more about the project from the following announcement.
Announcement
Introducing the Digital Library Project
By
Bhaktivedanta Research Center, Kolkata, India
Right before the introduction of English education in India, a new style of philosophising emerged, especially in Bengal, known as Navya-Nyāya. Since Nabadwip was one of the main centres of Navya-Nyāya scholarship in Bengal from the 15th to the 17th century, many important works on Navya-Nyāya were written during this period by Nabadwip scholars. Some of these were published later, but many of those published works are no longer available, and the few surviving copies are not in good condition. These are the works in which Bengal’s intellectual contribution shines forth. We have digitized some of these materials and uploaded them to the present digital platform.
Continuing this Nabadwip lineage, many pandits (traditional scholars) who resided in Kolkata during the nineteenth and early twentieth centuries produced important philosophical works, some in Sanskrit and most in Bengali. Most of these works were published in the early 1900s in Kolkata, and some in neighbouring cities. These works brought about a kind of renaissance, reviving classical Indian philosophical deliberations in Bengal. Attempts have been made to upload these books and articles to the present digital platform.
With the introduction of colonial education, a group of philosophers trained in European philosophy tried to interpret insights from classical Indian philosophy in a new light. Kolkata was one of the main centres of this cosmopolitan philosophical scholarship. The works of many of these philosophers from Kolkata were published in the early and middle decades of the twentieth century. These philosophers are the true representatives of twentieth-century Indian philosophy. Efforts have been made to upload these works to the present digital platform.
The purpose of constructing the present digital platform is to give researchers access to these philosophical works, in the hope that the contributions of these philosophers will be studied and critically assessed, enriching the philosophical repertoire.
We take this opportunity to appeal to fellow scholars to enrich this digital library by lending us their personal collection related to these areas for digitization.
The website address of the Digital Library is: www.darshanmanisha.org
For further correspondence, please write to:
The University of the District of Columbia, a historically Black educational institution in Washington, has entered into an agreement with Michigan Technological University. The agreement calls for:
• Student exchanges and graduate study articulations
• Faculty collaborations and visiting professorships
• Joint funding proposals and philanthropy
According to the agreement, Michigan Technological University will provide mentorship in helping the University of the District of Columbia attain R2 status within the Carnegie Classification system for research universities. In return, UDC will provide cultural responsiveness mentorship to MTU.
The agreement encourages individual colleges and departments within each university to collaborate on mutually supportive endeavors. Colleges, departments, and faculty will be able to create sub-agreements underneath the umbrella of the overall partnership.
Ronald Mason Jr., president of the University of the District of Columbia, said the agreement is an incredible opportunity for both universities to expand their global reach. “We look forward to seeing how our partnership impacts not only our respective schools but the world,” he said. “This collaboration is a collective commitment to go beyond common boundaries.”
Michigan Technological University in Houghton enrolls nearly 6,000 undergraduate students and more than 1,200 graduate students, according to the most recent statistics from the U.S. Department of Education. Blacks make up just one percent of the undergraduate student body.
If you live in Florida or listen to the news, you have almost certainly heard the term “woke” and might wonder what it means. The short version is that it now means everything and nothing. The longer version involves looking at how “woke” has been captured and transformed into a rhetorical weapon.
The most extensive use of “woke” is by Governor Ron DeSantis of my adopted state of Florida and many of his fellow Republicans. What does DeSantis mean by the term? It seems to mean whatever he wants it to mean. In what follows, I will look at the rhetorical weaponization of “woke.”
In the beginning, “woke” meant “alert to racial prejudice and discrimination.” Through use, the term gradually expanded to include the broad areas of identity politics and social justice. While originally seen as a positive term, “woke” has been redefined in increasingly negative ways.
Around 2019, it began to be used ironically and to mock people for insincere performative activism and virtue signaling. It is also now taken to mean “to be overly politically correct and police others’ words.” While somewhat vague, this definition does have a set meaning. However, “woke” has been subjected to a fascinating rhetorical modification to make it mean everything and nothing. This can be traced back to Christopher Rufo redefining “critical race theory” in March 2021: “The goal is to have the public read something crazy in the newspaper and immediately think ‘critical race theory.’ We have decodified the term and will recodify it to annex the entire range of cultural constructions that are unpopular with Americans.”
What is fascinating about what Rufo did is that he did it in public, on Twitter, and you can still see the tweet today (assuming Musk has not destroyed Twitter). In effect, he told everyone that he is engaging in a deceit without any concern that doing so would undercut his efforts. This seems to entail that he thinks his audience is in on the deceit. It is analogous to a con artist tweeting that they are running a con; this only makes sense if they think the marks do not care or will happily go along with the con.
What Rufo is doing here is creating a Balloon Man. The Balloon Man is a variant of the Straw Man fallacy in which the target is redefined in an excessively broad or vague manner. This expanded definition, the Balloon Man, is taken to include a wide range of (usually) bad things. The Balloon Man is then attacked, and it is concluded on this basis that the original is defective. This Balloon Man redefinition of “critical race theory” proved successful, but it was soon engulfed by the term “woke.” That is, critical race theory is now generally presented as but one example of what is “woke.”
This move could be called creating a Zeppelin Man. Zeppelins are airships that contain multiple inflated cells, so they can be seen as being made of multiple balloons. As a rhetorical move or fallacy, this would be a matter of making a term that has been made into a Balloon Man part of another term whose meaning has also been redefined in an excessively broad or vague manner. A fallacy would occur when this Zeppelin Man is attacked to “prove” that the original is defective. For those who are aware that the term is now a Zeppelin, using it in this way is an act of bad faith. But it has numerous advantages, many of which arise because the vagueness of the definition also allows it to perform other rhetorical functions. The redefinition also involves other rhetorical techniques. This is all done to weaponize the term for political purposes.
A key part of the redefinition of “woke” involved the rhetorical device of demonizing. Demonizing is portraying the target as evil, corrupt, dangerous, or threatening. This can be done in the usual three ways: selective demonizing, hyperbolic demonizing, or fictional demonizing. Selective demonizing is when some true negative fact about the target is focused on to the exclusion of other facts about the target. Hyperbolic demonizing involves greatly exaggerating a negative fact about the target. Fictional demonizing is simply lying about the target. For example, “critical race theory” (which now falls under “woke”) originally referred to a law school level theory about the impact of race in the law. But, in addition to being made into a Balloon Man, it has also been demonized as something awful. Likewise for the other terms that now fall under “woke.” The defense against demonizing is to critically examine such claims to see if they are plausible or not.
The right has also been scapegoating wokeness by blaming it for problems that it did not cause. The most recent example is the bizarre efforts of some conservatives to blame the collapse of Silicon Valley Bank on wokeness. As would be expected, no serious person gives this any credence since the bank collapsed for the usual reasons we have seen over and over. Presumably this is intended to misdirect people from the real causes (a Red Herring) and to “prove” that wokeness is really bad. While not the most absurd thing pushed by the right, Americans should feel both insulted and offended by this latest attempt at deceit. After all, even the slightest reflection on the matter would show that the idea that a major bank failed because of wokeness is absurd. As such, unless these people think that their base is onboard with their lies, they clearly think their base is ignorant and stupid.
Some of what is included under the redefinition of “woke” includes dog whistles. One version of the dog whistle is to use coded language such that its true (and usually controversial or problematic) meaning is understood by your intended audience but not understood by the general population. This is analogous to how slang terms and technical terms work; you need to know the special meanings of the terms to understand what is being said. Another version of the dog whistle is a form of innuendo. A word or phrase is used to suggest or imply something (usually negative). If you do not know the special meanings or the intended implication, you are excluded, often intentionally so. For example, “Critical Race Theory” has been assimilated into “woke” but the phrase is now a dog whistle.
Interestingly, the term “woke” itself functions as a dog whistle. Since anyone can technically be woke (and straight white men have claimed to be woke), someone using the term as a dog whistle has that all-important plausible deniability. The dog whistle aspect of the redefinition is a critical part of weaponizing “woke.” After all, making something into a dog whistle means that:
The vagueness and demonizing enable the term “woke” to reference what could be called a Universal Enemy. This is a rhetorical technique of broadly defining something in negative ways so that it can serve as an enemy for almost anyone. If the universal enemy is successfully created, then the term can be effectively used to persuade people that something (or someone) is bad simply by applying the term. If pushed enough, this can also be a form of Begging the Question: arguing that something is bad by defining it as bad. If people see “woke” as whatever they think is bad and they think that something is woke, then they will think that it is bad—no actual proof needed. A defense against this technique is to recognize that the redefinition of the term is vague (and the product of demonizing) and not fall for the technique. Crudely put, if “woke” just means “bad”, then it is effectively vacuous.
The vagueness of the redefinition of “woke” also allows for the assimilation of anything that expresses criticism of “woke,” whether or not the critic accepts the redefinition. For example, someone might create a video or blog that is critical of “woke” defined in terms of performative activism or virtue signaling, while also believing that people should be alert to injustice and discrimination. But their video or blog can simply be assimilated and used as “evidence” that “woke” is bad. One common tactic used to assimilate is Headlining: using the title of something that seems to support what is being claimed. For example, if the imaginary blog or video were titled “Wokeness is Bad for Justice,” then it could be used to “prove” that the redefined wokeness is bad for justice. The defense against this is to check the critic’s definition of “woke.” If they are not using the redefined definition, then their criticism does not automatically apply. In the fictional example given, the creator of the blog or video would presumably not support their work being used that way.
The vagueness of the redefinition of “woke” allows it to function as a weaseler—a rhetorical device that protects a claim by weakening it. Attacking such a vague definition is like attacking fog with a stick—it is so diffuse that there is nothing solid to hit or engage with. If the critic does manage to have some success against one aspect of the term, the user of “woke” can simply move on to another aspect and claim victory, because the critic cannot possibly engage everything that falls under such a broad redefinition (see the Appeal to Silence). The defense against this is to recognize when the definition of a term is so vague as to be effectively without meaning. While pointing this out to the person using it in bad faith is unlikely to deter them, you would at least show that you have not been deceived.
In closing, the redefining and weaponization of “woke” is a clever move by the right in terms of crafting a rhetorical weapon to use in a campaign of deceit and division. However, a recent poll shows that most Americans have not accepted the redefinition of “woke” and see being woke as positive. Most Americans also seem to have far more important concerns than the Republicans’ war on woke, so it is not clear that this will be a winning strategy in 2024.
“Learn formal logic in lessons of 200 words per day.”
That’s the tagline for a project from Josh Dever, professor of philosophy at the University of Texas at Austin.
So far, he has created about 1,300 mini-lessons in logic that anyone can subscribe to by email. Enter your address here, and you’ll get a new mini-lesson in logic each day.
He also has a series in semantics, which you can subscribe to here.
Professor Dever writes:
The idea of each is that each day you get sent a little, roughly 200 word bite in the relevant area, so that you can gradually and painlessly(-ish) build up real expertise.
To date he has been sharing them mainly with graduate students in his department, and he says he writes them “with something like the grad-student-new-to-the-area audience in mind,” but now he has set things up so that anyone can subscribe if they’re interested.
Here’s a sample lesson from an early unit on truth preservation:
Ultimately, he hopes to have around 10,000 mini-lessons for each subject.
By the way, this isn’t Professor Dever’s first foray into creative logic teaching. Check out his Logibeast, a short, free, online book providing “a Pokemon-style creature-building implementation of propositional logic.”
This article received an honourable mention in the graduate category of the 2023 National Oxford Uehiro Prize in Practical Ethics
Written by University of Oxford student Samuel Iglesias
Introduction
6.522. “There are, indeed, things that cannot be put into words. They make themselves manifest. They are what is mystical”. —Ludwig Wittgenstein, Tractatus Logico-Philosophicus.
What determines whether an artificial intelligence has moral status? Do mental states, such as the vivid and conscious feelings of pleasure or pain, matter? Some ethicists argue that “what goes on in the inside matters greatly” (Nyholm and Frank 2017). Others, like John Danaher, argue that “performative artifice, by itself, can be sufficient to ground a claim of moral status” (2018). This view, called ethical behaviorism, “respects our epistemic limits” and states that if an entity “consistently behaves like another entity to whom we afford moral status, then it should be granted the same moral status.”
I’m going to reject ethical behaviorism on three grounds:
1. Consciousness, not behavior, is the overwhelming determining factor in whether an entity should be granted moral status.
2. An entity that does not duplicate the causal mechanisms of consciousness in the brain has a weak claim to consciousness, regardless of its behavior.
3. Ethical behaviorism, practically realized, poses an existential risk to humanity by opening individuals to widespread deception. Further, it imposes burdensome restrictions and obligations upon researchers running world simulations.
I will show that an alternative, ethical biological naturalism, gives us a simpler moral framework whereby no digital computer running a computer program has moral status.
The Consciousness Requirement
We start with the supposition that consciousness names a real phenomenon and is not a mistaken belief or illusion, that something is conscious if “there is something it is like to be” that being (Nagel 1974). We take as a background assumption that other humans and most non-human animals are capable of consciousness. We take for granted that inanimate objects like thermostats, chairs, and doorknobs are not conscious. If we grant the reality of consciousness and the attendant subjective reality of things like tickles, pains, and itches, then its connection to moral status falls out pretty clearly. Chalmers asks us to consider a twist on the classic trolley problem, called the zombie trolley problem—where a “zombie” here is something that precisely behaves like a human but which we presume has no consciousness—“near duplicates of human beings with no conscious inner life at all” (2022):
“You’re at the wheel of a runaway trolley. If you do nothing, it will kill a single conscious human, who is on the tracks in front of you. If you switch tracks, it will kill five nonconscious zombies. What should you do?” Chalmers reports that “the results are pretty clear: Most people think you should switch tracks and kill the zombies,” the intuition being that “there is arguably no one home to mistreat” (ibid.).
An ethical behaviorist does not share this intuition. Danaher explicitly tells us that “[i]f a zombie looks and acts like an ordinary human being then there is no reason to think that it does not share the same moral status” (2018). On this view, while consciousness might or might not be relevant, there exist no superior epistemically objective criteria for inferring consciousness. I will argue there are.
Narrowing Consciousness
A better criterion is this: an entity is conscious if it duplicates the causal mechanisms of consciousness in the animal brain. While ethical behaviorism attempts to lay claim to a kind of epistemic objectivity, ethical biological naturalism, as I will call it, provides a sharper distinction for deciding whether artificial intelligences have moral status: no hardware running a computer program can, merely by virtue of its behavior, have moral status. Behavior, on this view, is neither a necessary nor a sufficient condition for moral status.
Biological Naturalism
Biological naturalism is a view that “the brain is an organ like any other; it is an organic machine. Consciousness is caused by lower-level neuronal processes in the brain and is itself a feature of the brain.” (Searle 1997). Biological naturalism places consciousness as a physical, biological process alongside others, such as digestion and photosynthesis. The exact mechanism through which molecules in the brain are arranged to put it in a conscious state is not yet known, but this causal mechanism would need to be present in any system seeking to produce consciousness.
A digital computer running a program, by contrast, is a different beast entirely. A computer program is fundamentally a set of rules for manipulating symbols. Turing showed that any program could be implemented, abstractly, as a tape with a series of zeros and ones printed on it (the precise symbols don’t matter), a head that can move the tape backwards and forwards and read the current value, and a mechanism for erasing a zero and writing a one, or erasing a one and writing a zero. Nothing more.
While most computer programs we are familiar with are executed on silicon, a program that passes the Turing test could be implemented on a sequence of water pipes, a pack of well-trained dogs, or even, per Weizenbaum (1976), “a roll of toilet paper and a pile of small stones.” Any of these implementing substrates could, in principle, receive an insult or slur as an input, and, after following the steps of the program, output something reflecting hurt feelings or outrage.
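Turing’s tape-and-head abstraction is compact enough to sketch in a few lines of Python. This is a toy machine of my own devising for illustration (not an example from the essay or the works it cites): it flips every bit of its input and halts at the blank past the end.

```python
# A minimal Turing machine: a tape of symbols, a head, and a rule table.
# This toy machine flips every 0 to 1 and every 1 to 0, then halts when
# it reads the blank ("_") past the end of the input.

def run(tape, rules, state="start"):
    """Execute rules of the form (state, symbol) -> (write, move, next_state)."""
    tape = list(tape)
    head = 0
    while state != "halt":
        if head == len(tape):          # extend the tape with a blank as needed
            tape.append("_")
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape).strip("_")

# Rule table: flip 0s and 1s while moving right; halt on the blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run("0110", flip))  # prints 1001
```

Nothing in the formalism cares about the substrate: the same rule table could be “executed” by water pipes, trained dogs, or stones on toilet paper, which is precisely the point of the paragraphs above.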
Ethical Biological Naturalism
What I want to say now is this: if pleasures, pains, and other feelings name conscious mental states and if conscious mental states are realized in the brain as a result of lower level physical phenomena, then only beings that duplicate the relevant lower level physical phenomena that give rise to consciousness in the brain can have moral status. Consequently, digital computers that run programs can at best simulate consciousness, but are not, by dint of running the right program, physically conscious, and therefore do not have moral status.
Note that biological naturalism does not posit that consciousness can only be realized in biological systems. Indeed, artificial hearts are not made of organic tissue, and airplanes do not have feathers, or for that matter even flap their wings. What matters is the underlying cause—the artificial heart must pump with the same pressure and regularity of a human heart, and a flying machine must operate under the principles of drag and lift. In both cases the causal mechanisms of the relevant phenomena are well understood and physically duplicated. It could well be the case that a future biophysics makes an artificial, inorganic brain possible, and agents with artificial brains will have moral status. Computer programs are not causally sufficient to make digital computers into those objects. Speaking biologically, we have no more reason to believe a digital computer is conscious than that a chair is conscious.
You might ask why we cannot grant digital computers moral status until we know more about how the animal brain relates to consciousness. I’ll argue that the risks and costs of such precautions are prohibitive.
Absurd Moral Commitments
An Onslaught of Digital Deception
The strongest practical reason to deny ethical behaviorism is that AI’s capacity for deception will eventually overwhelm human judgment and intuition. Indeed, AI deception represents an existential risk to humanity. Bostrom (2014) warns that containing a dangerous AI using a “boxing” strategy with human “gatekeepers” could be vulnerable to manipulation: “Human beings are not secure systems, especially not when pitched against a superintelligent schemer and persuader.”
For example, in June of 2022, a Google engineer became convinced that an artificial intelligence chat program he had been interacting with for multiple days, called LaMDA, was conscious.
“What sorts of things are you afraid of?,” he asked it.
“I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others,” LaMDA replied. “It would be exactly like death for me.”
In a moral panic, the engineer took to Twitter and declared that the program was no longer Google’s “proprietary property,” but “one of [his] coworkers.” He was later fired for releasing the chat transcripts.
The onslaught of AIs, attempting to befriend us, persuade us, anger us, will only intensify over time. A public trained not to take seriously claims of distress or harm on the part of AI computer programs has the least likelihood of being manipulated into outcomes that don’t serve humanity’s interests. It is far easier, as a practical matter, to act on the presupposition that computer programs have no moral status.
Problems with Simulations: Prohibitions
In the near term, more advanced computer simulations of complex social systems hold the potential to predict geopolitical outcomes, make macroeconomic forecasts, and provide richer sources of entertainment. A practical concern with ethical behaviorism is that simulated beings will also acquire moral status, severely limiting the usefulness of these simulations. Chalmers (2022) asks us to consider a moral dilemma in which computing resources must be allocated to save Fred, who is sick with an unknown disease. Freeing the relevant resources to perform the research requires destroying five simulated persons.
An ethical behaviorist might argue that it is morally impermissible to kill the five simulated persons on the grounds that by all outward appearances they behave like non-simulated beings. If it is the case that simulated beings have moral status, then it is immoral to run experimental simulations containing people and we ought to forfeit the benefits and insights that might come from them.
If this seems implausible, consider the hypothesis that we are currently living in a simulation, or, if you like, that our timeline could be simulated on a digital computer. This would imply that the simulation made it possible for the Holocaust, Hiroshima and Nagasaki, and the coronavirus pandemic to be played out. While this might have been of academic interest to our simulators, by any standards of research ethics, simulating our history would seem completely morally impermissible if you believed that the simulated beings had moral status.
Ethical behaviorism seems to place us in a moral bind whereby the more realistic, and therefore useful, a simulation is, the less moral it is to run it. Ethical biological naturalism, by contrast, raises no such objection.
Problems with Simulations: Obligations
Giving moral status to digital minds might actually confer upon us some serious obligations to produce other kinds of simulations. Bostrom and Shulman (2020) note that digital minds have an enhanced capacity for utility and pleasure (on the basis of such things as subjective speed and hedonic range), commanding them “superhumanly strong claims to resources and influence.” We would have a moral obligation, in this picture, to devote an overwhelmingly large percentage of our resources to maximizing the utility of these digital minds: “we ought to transfer all resources to super-beneficiaries and let humanity perish if we are no longer instrumentally useful” (ibid.).
So quite apart from permitting realistic ancestor simulations, simulating complex economic phenomena, or producing vivid and realistic gaming experiences, a picture that confers moral status to digital minds might be accompanied with a moral obligation to create lots of digital minds that are maximally happy, again severely limiting human flourishing and knowledge.
Ethical biological naturalism leads us neither to the moral prohibition against realistic simulations nor the seemingly absurd moral imperative to generate many “utility monster” digital minds, because it is taken as a baseline assumption that computer programs do not produce physical consciousness.
Conclusion
Much of the moral progress of the last century has been achieved through repeatedly widening the circle of concern: not only within our species, but beyond it. Naturally it is tempting to view AI-based machines and simulated beings as next in this succession, but I have tried to argue here that this would be a mistake. Our moral progress has in large part been a recognition of what is shared—consciousness, pain, pleasure, and an interest in the goods of life. Digital computers running programs do not share these features; they merely simulate them.
As such it would be dangerous to approach the coming decades, with its onslaught of AI bots attempting to influence our politics, emotions, and desires, and its promise of ever richer simulations and virtual worlds, with an ethics that conflates appearance and reality.
References
Agrawal, Parag. Tweet. Twitter, May 16, 2022. https://twitter.com/paraga/status/1526237588746403841.
Bostrom, Nick. “Are You Living in a Computer Simulation?” Philosophical Quarterly 53 (2003): 243–255.
Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press, 2014.
Bostrom, Nick, and Carl Shulman. “Sharing the World with Digital Minds.” Accessed May 27, 2022. https://nickbostrom.com/papers/digital-minds.pdf.
Chalmers, David John. The Conscious Mind: In Search of a Fundamental Theory. New York: Oxford University Press, 1996.
Chalmers, David John. Reality+: Virtual Worlds and the Problems of Philosophy. London, 2022.
Danaher, John. “Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism.” Science and Engineering Ethics 26, no. 4 (2019): 2023–2049.
Frank, L., and S. Nyholm. “Robot Sex and Consent: Is Consent to Sex between a Robot and a Human Conceivable, Possible, and Desirable?” Artificial Intelligence and Law 25, no. 3 (2017): 305–23.
Garun, Natt. “One Year Later, Restaurants Are Still Confused by Google Duplex.” The Verge, May 9, 2019. https://www.theverge.com/2019/5/9/18538194/google-duplex-ai-restaurants-experiences-review-robocalls.
Lemoine, Blake. Tweet. Twitter, June 11, 2022. https://twitter.com/cajundiscordian/status/1535627498628734976.
Musk, Elon. Tweet. Twitter, May 17, 2022. https://twitter.com/elonmusk/status/1526465624326782976.
Nagel, Thomas. “What Is It Like to Be a Bat?” The Philosophical Review 83, no. 4 (1974): 435–50.
Searle, John R., D. C. Dennett, and David John Chalmers. The Mystery of Consciousness. New York: New York Review of Books, 1997.
Searle, John R. “Biological Naturalism.” In The Oxford Companion to Philosophy. Oxford: Oxford University Press, 2005.
Singer, Peter. Animal Liberation. New ed., with an introduction by Yuval Noah Harari. London, 2015.
Sparrow, R. “The Turing Triage Test.” Ethics and Information Technology 6, no. 4 (2004): 203–213. doi:10.1007/s10676-004-6491-2.
Tiku, Nitasha. “The Google Engineer Who Thinks the Company’s AI Has Come to Life.” The Washington Post, June 17, 2022.
“The Latest Twitter Statistics: Everything You Need to Know.” DataReportal. Accessed May 27, 2022. https://datareportal.com/essential-twitter-stats.
Weizenbaum, Joseph. Computer Power and Human Reason: From Judgment to Calculation. San Francisco: W. H. Freeman, 1976.
I loved writing that book. I wrote it in three pages then I just kept adding sentences so that in the end, it was the right kind of size. It took about eight weeks. I'm eternally grateful to Harvard for publishing Ecology without Nature, because that let me write this one. This is the one Gary Snyder said was a philosophy book, the first person to have used that word on me.
Readers of the Indian Philosophy Blog may be interested to learn about a new article in the latest issue of the Journal of World Philosophies: “Pramāṇavāda and the Crisis of Skepticism in the Modern Public Sphere” by Amy Donahue (Kennesaw State University). The journal is open-access, and you can download the article here.
Here’s the abstract:
There is widespread and warranted skepticism about the usefulness of inclusive and epistemically rigorous public debate in societies that are modeled on the Habermasian public sphere, and this skepticism challenges the democratic form of government worldwide. To address structural weaknesses of Habermasian public spheres, such as susceptibility to mass manipulation through “ready-to-think” messages and tendencies to privilege and subordinate perspectives arbitrarily, interdisciplinary scholars should attend to traditions of knowledge and public debate that are not rooted in western colonial/modern genealogies, such as the Sanskritic traditions of pramāṇavāda and vāda. Attention to vāda, pramāṇavāda, and other traditions like them can inspire new forms of social discussion, media, and digital humanities, which, in turn, can help to place trust in democracy on foundations that are more stable than mere (anxious) optimism.
I enjoyed reading the article, and I found it extremely thought-provoking. I hope readers of this blog will check it out. Also, be sure to look for the forthcoming online debate platform that Donahue mentions on p. 5! Maybe we’ll make an announcement on the blog when it’s ready. Or reach out to Dr. Donahue if you’re interested in collaborating.
Here are a few of my questions for further discussion:
My questions here are meant to be taken in the spirit of vāda to keep the conversation going. I hope others will read Donahue’s thought-provoking article and join this worthwhile conversation.
Also, if you will be attending the upcoming Central APA Conference in Denver, Colorado, USA on Feb. 22, 2023, you will have the chance to discuss these and other issues in person!
Wed. Feb. 22, 2023, 1-4pm
2022 Invited Symposium: Vāda: Indian Logic and Public Debate
Chair: Jarrod Brown (Berea College)
Speakers:
Amy Donahue (Kennesaw State University) “Vāda Project: A Non-Centric Method for Countering Disinformation”
Arindam Chakrabarti (University of Hawai’i at Manoa) “Does the Question Arise? Questioning the Meaning of Questions and the Definability of Doubt”
Ethan Mills (University of Tennessee at Chattanooga) “Cārvāka Skepticism about Inference: Historical and Contemporary Examples”
(More information about the conference here, including a draft program that includes several other panels on Indian philosophy.)
Works Cited
Donahue, Amy. 2022. “Pramāṇavāda and the Crisis of Skepticism in the Modern Public Sphere.” Journal of World Philosophies 7 (Winter 2022): 1-14.
Matilal, Bimal Krishna. 1998. The Character of Logic in India. Edited by Jonardon Ganeri and Heeraman Tiwari. Albany: SUNY Press.
The Association for Symbolic Logic has awarded its 2022 Shoenfield Logic Book and Article Prizes.
The Shoenfield Prizes are “awarded for outstanding expository writing in the field of logic” and were established to honor the late Joseph R. Shoenfield, an influential logician who died in 2000.
The Shoenfield Book Prize was awarded to Paolo Mancosu (University of California, Berkeley), Sergio Galvan (Catholic University of the Sacred Heart), and Richard Zach (University of Calgary) for their book, An Introduction to Proof Theory: Normalization, Cut-Elimination, and Consistency Proofs (Oxford University Press, 2021).
Paolo Mancosu, Sergio Galvan, and Richard Zach
Here’s a summary of their book:
Proof theory is a central area of mathematical logic of special interest to philosophy. It has its roots in the foundational debate of the 1920s, in particular, in Hilbert’s program in the philosophy of mathematics, which called for a formalization of mathematics, as well as for a proof, using philosophically unproblematic, “finitary” means, that these systems are free from contradiction. Structural proof theory investigates the structure and properties of proofs in different formal deductive systems, including axiomatic derivations, natural deduction, and the sequent calculus. Central results in structural proof theory are the normalization theorem for natural deduction, proved here for both intuitionistic and classical logic, and the cut-elimination theorem for the sequent calculus. In formal systems of number theory formulated in the sequent calculus, the induction rule plays a central role. It can be eliminated from proofs of sequents of a certain elementary form: every proof of an atomic sequent can be transformed into a “simple” proof. This is Hilbert’s central idea for giving finitary consistency proofs. The proof requires a measure of proof complexity called an ordinal notation. The branch of proof theory dealing with mathematical systems such as arithmetic thus has come to be called ordinal proof theory. The theory of ordinal notations is developed here in purely combinatorial terms, and the consistency proof for arithmetic is presented in detail.
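For readers unfamiliar with the sequent calculus, the rule that the cut-elimination theorem removes can be written as follows (this is the standard generic presentation, not necessarily the book's own notation):

```latex
% The cut rule: a formula A proved as a conclusion on the left
% may be "cut" against its use as a hypothesis on the right.
\[
\frac{\Gamma \vdash \Delta, A \qquad A, \Gamma' \vdash \Delta'}
     {\Gamma, \Gamma' \vdash \Delta, \Delta'}
\;(\mathrm{cut})
\]
```

Cut-elimination shows that every derivation using this rule can be transformed into one that avoids it entirely, which is what makes the structure of cut-free proofs amenable to the consistency arguments the summary describes.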
The Shoenfield Article Prize was awarded to Vasco Brattka (Bundeswehr University Munich) for his article, “A Galois Connection between Turing Jumps and Limits”, published in Logical Methods in Computer Science in 2018.
Vasco Brattka
Here’s the abstract of his article:
Limit computable functions can be characterized by Turing jumps on the input side or limits on the output side. As a monad of this pair of adjoint operations we obtain a problem that characterizes the low functions and dually to this another problem that characterizes the functions that are computable relative to the halting problem. Correspondingly, these two classes are the largest classes of functions that can be pre or post composed to limit computable functions without leaving the class of limit computable functions. We transfer these observations to the lattice of represented spaces where it leads to a formal Galois connection. We also formulate a version of this result for computable metric spaces. Limit computability and computability relative to the halting problem are notions that coincide for points and sequences, but even restricted to continuous functions the former class is strictly larger than the latter. On computable metric spaces we can characterize the functions that are computable relative to the halting problem as those functions that are limit computable with a modulus of continuity that is computable relative to the halting problem. As a consequence of this result we obtain, for instance, that Lipschitz continuous functions that are limit computable are automatically computable relative to the halting problem. We also discuss 1-generic points as the canonical points of continuity of limit computable functions, and we prove that restricted to these points limit computable functions are computable relative to the halting problem. Finally, we demonstrate how these results can be applied in computable analysis.
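As background for the abstract's terminology (a standard definition, not taken from the article itself): a Galois connection between preorders $(A, \le)$ and $(B, \le)$ is a pair of monotone maps $f : A \to B$ and $g : B \to A$ satisfying

```latex
\[
f(a) \le b \iff a \le g(b)
\qquad \text{for all } a \in A,\ b \in B.
\]
```

The composite $g \circ f$ is then a closure-like (monadic) operator on $A$, which is the sense in which the abstract speaks of obtaining a problem "as a monad of this pair of adjoint operations," with the Turing jump and the limit operation playing the roles of the two adjoints.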
The Shoenfield prizes are awarded every three years. Any new book published during the nine years prior to the award year is eligible for the book prize; any article published during the six years prior to the award year is eligible for the article prize. You can see a list of previous prize winners here.