Psychology of morality


Morality is a part of everyday life and personal experience. It is also, of course, the subject of a large field of philosophy -- philosophical ethics. What principles should I follow in action? What kind of person do I want to be? What do I owe to other people in a range of circumstances? 

We can also study moral thinking and action from the point of view of empirical psychology. Several areas of method and theory have been developed in psychology for the study of moral reasoning and behavior, including cognitive studies of moral thinking, social-psychological studies of the influence of external social factors on moral behavior, evolutionary studies of the evolutionary development of moral emotions, and ethnomethodological studies of "morality in interaction".

So it is worth asking how much we can learn about real everyday moral behavior from the empirical research psychologists have done on these questions to date. What insights can we gain from empirical research into the question, “why do people behave as they do in ‘morally’ salient circumstances”? And of particular interest — are there findings that are useful for understanding the behavior of “ordinary people” in times of catastrophe?

Naomi Ellemers, Jojanneke van der Toorn, Yavor Paunov, and Thed van Leeuwen's "The Psychology of Morality: A Review and Analysis of Empirical Studies Published From 1940 Through 2017" (link) provides an extensive literature review of research in the psychology of morality since 1940. Based on a content analysis of almost 1,300 research articles published since 1996, they have classified research topics and empirical methods into a small number of categories. Here is a cluster graph of their analysis.


Their analysis permits them to categorize the field of moral psychology around several groups of research questions and empirical approaches.

Research questions: The authors find that the roughly 2,000 articles considered permit identification of five large areas of research: moral reasoning; moral judgments; moral behavior; moral emotions; and moral self-views. These categories complement each other, in the sense that findings in one area can serve to explain findings in another area.

Empirical approaches and measures: The authors find several fairly distinctive empirical approaches to problems in moral psychology. Most of these approaches primarily make use of self-reports and questionnaires by subjects in response to morally relevant questions. Topics include —

  • hypothetical moral dilemmas,
  • lists of traits or behaviors,
  • endorsement of abstract moral rules, and
  • position on specific moral issues (Table 1).

The bulk of these studies rely on correlational analysis. Some of the research papers reviewed make use of controlled experiments in which laboratory conditions or a series of questions are presented to the subject, and the researcher hopes to discover causal relationships from variations in behavior across experimental conditions. (It is striking that neither of the two most famous experiments on moral behavior is mentioned or placed within the conceptual structure of the authors' findings: the Milgram experiment and Philip Zimbardo's Stanford Prison Experiment.)

Ellemers et al. further differentiate studies of morality according to the level of mechanism that is the primary object of investigation: intrapersonal, interpersonal, intragroup, and intergroup mechanisms. Here is their brief summary of these levels of mechanisms:

(a) research on intrapersonal mechanisms, which studies how a single individual considers, evaluates, or makes decisions about rules, objects, situations, and courses of action; (b) research on interpersonal mechanisms, which examines how individuals perceive, evaluate, and interact with other individuals; (c) research on intragroup mechanisms, investigating how people perceive, evaluate, and respond to norms or behaviors displayed by other members of the same group, work or sports team, religious community, or organization; and (d) research on intergroup mechanisms, focusing on how people perceive, evaluate, and interact with members of different cultural, ethnic, or national groups. (342)

Here is their tabulation of "number of publications" classified by mechanism and research theme.


"Intrapersonal" mechanisms are the predominant object of research in all research areas except "Moral judgments", and "Intragroup" mechanisms are least frequently examined across the board.

The authors identify three "seminal publications" in the field of the psychology of morality: Haidt 2001, Greene et al. 2001, and Jost, Glaser, Kruglanski, and Sulloway 2003. They also provide the top three seminal publications for each research area. These are selected based on the total number of citations each article received.

This article succeeds in providing an abstract map of topics, methods, and levels of analysis across a reasonably comprehensive set of research articles published between 1940 and 2017. The extensive list of references the authors provide is a course in itself on the current state of empirical moral psychology. (Interested readers will also find much relevant discussion in Ellemers's monograph, Morality and the Regulation of Social Behavior: Groups as Moral Anchors.)

Two other articles are worth considering on the question of how we should go about trying to understand "human morality and moral behavior" using empirical methods.

Kristen Monroe's "Cracking the Code of Genocide: The Moral Psychology of Rescuers, Bystanders, and Nazis during the Holocaust" (link) is particularly interesting in connection with the problem of understanding how "ordinary people" can commit evil actions. Her article provides both a useful survey of a large literature of social-psychology studies of individual genocidal behavior and her own original research based on close analysis of extensive interviews with genociders, bystanders, and rescuers. Especially important among the sources included in Monroe's literature review is The Courage to Care (Rittner & Myers, 1986), which provides a large collection of Holocaust-era survivor interviews from each category. Monroe's 2012 monograph Ethics in an Age of Terror and Genocide: Identity and Moral Choice goes into more extensive detail on the main findings of "Cracking the Code of Genocide" concerning what we can learn from interviews with participants about the nature of moral conduct.

Most interesting is Monroe's own work in which she performs detailed analyses of 100 interviews in order to identify underlying themes and psychological factors. She uses "narrative interpretive analytic methodology" (706) to sort out factors of psychological importance. Monroe's analysis finds that there are distinctive differences in self-images, worldview, and cognitive classifications (700) across these three groups of participants.

A narrative interpretive analysis of in-depth interviews with bystanders, Nazis, and rescuers of Jews during the Holocaust reveals the intricate but critical importance of psychological factors in explaining behavior during genocides.... Bystanders see themselves as passive people, lacking in control and low in efficacy. The Nazi self-image is as victims who need to protect themselves and their community. Rescuers consider themselves connected to all human beings through bonds of a common humanity. The rescuers' idealized cognitive model of what it means to be a human being is far more expansive and inclusive than the model employed by bystanders or Nazis. (700)

She offers six major findings:

  1. Self-image is the central psychological variable
  2. Identity constrains choice for all individuals
  3. Character and self-image are not all. A critical ethical aspect of identity is relational
  4. The ethical importance of values works through the fashion in which values are integrated into the speaker's sense of self and worldview
  5. Personal suffering, in the form of past trauma, heightens awareness of the plight of others for rescuers; for bystanders and Nazis, however, it increases a sense of vulnerability
  6. Speakers' cognitive categorization systems carry strong ethical overtones. (711)

Gabriel Abend looks at the field of moral psychology from the other end of the telescope in "What the Science of Morality Doesn’t Say About Morality" (link). He provides a literature review of the current research area in moral psychology that aims at a neuroscience analysis of morality. This research program attempts to identify neurophysiological correlates of moral judgments: "What brain areas are “activated,” “recruited,” “implicated,” “responsible for,” or “associated with” making moral judgments?" (162). Abend's article provides a sustained critique of the assumptions in use in this field and of what he regards as its over-emphasis on one small aspect of "morality in everyday life": the question of moral judgment. Against the idea that this line of research constitutes the whole of a "new science of morality", Abend asks for methodological and theoretical pluralism: "I call for a pluralism of methods and objects of inquiry in the scientific investigation of morality, so that it transcends its problematic overemphasis on a particular kind of individual moral judgment" (abstract).

The approach to empirical research in moral psychology that appeals most to me is one that begins with a rich conception of the human moral subject — the human being capable of reflective thought and imagination, the person possessed with a social identity and self-image, the person situated within a set of meaningful social relationships, the person embodying a range of moral emotions. With a rich conception like this underlying the research agenda, there is ample space for empirical study of the causal and meaning-laden processes that influence action in difficult circumstances. And this approach brings empirical research into closer dialogue with philosophers like Martha Nussbaum and Susan Neiman.

*     *     *     *     *

The topic of explaining brutal and violent actions in times of social upheaval is directly relevant to the violence of China's Cultural Revolution represented in the photograph above. Here is a brief description of the violence by students against teachers and administrators in the early stages of the Cultural Revolution.

1966; August 5: Ms. Bian Zhongyun, the deputy principal of the Beijing Normal University Female Middle School, along with four other school educators, was attacked by the Red Guards on groundless charges. Bian died after several hours of humiliating treatment and brutal beating. This was the first case of the killing of educators in China by the Red Guards and other militant students. Many more cases followed, and the brutality escalated rapidly. Thousands of educators were publicly denounced and physically abused in “struggle sessions” by the rampaging students in Beijing’s secondary schools and universities. This includes 20 documented cases of killings by the Red Guards (Wang, 2004: 3-16 and Guo, 2006: 12). The mass violence soon spread off campus, as the Red Guards beat seven residents of the same middle school to death in the city’s neighborhoods. In the district where this school was located, 333 residents were killed by the Red Guards at middle schools in August 1966 alone (Wang, 2004: 16). [Yongyi Song, "Chronology of mass killings during the Chinese Cultural Revolution (1966-1976)"; link.]

Song summarizes a wide range of estimates of persons killed during the Cultural Revolution and settles on an estimate in excess of two million people. Many of the participants in these acts of cruelty, violence, humiliation, and murder were ordinary Chinese men and women, as well as teenagers and sub-teenagers. How are we to explain their behavior against their fellow citizens and even their teachers? Here are several earlier posts about the Cultural Revolution (link).

(Also of interest are several earlier posts in Understanding Society reviewing empirical work in psychology on the topic of character as a factor influencing behavior and action; link, link, link, link.)

Reimagining hiring with Open Recognition

Midjourney prompt: "diverse people being recognised for the full spectrum of who they are --no text words letters signatures"

It’s the first day of my three-week holiday so of course I woke up before 6am and I’m writing a work-related post. The reason for this is that on Hacker News just now, I saw a post entitled He who submits a resume has already lost. It’s not only a great title, but a thought-provoking post.

The author, an anonymous blogger who goes by ‘Resident Contrarian’, points out how sending out CVs and resumes is in no way to the benefit of job-seekers:

An [sic] group of 50 applicants submit resumes for a job. 10 or so of them are delusional, and get cut… Of the remaining 40, 35 are rejected… This leaves five candidates… Four of those candidates will eventually be rejected, leaving a best-of-50 candidate who will be paid as if he’s barely qualified.

Last week, I wrote that ‘hiring is broken’. I stand by that. However, we can at least partially solve this problem with Open Recognition based on Open Badges and Verifiable Credentials. This approach allows us to not only foreground knowledge, skills, and understanding, but also behaviours, relationships, and experiences. I’ve written about this before, as has Laura, but here are some additional reasons why we need a different approach to hiring, specifically:

  1. Levelling the playing field — by encompassing behaviours and experiences Open Recognition paints a fuller picture of candidates, ensuring that employers can unearth hidden gems — and not just those who went to the best universities.
  2. Trust, but verify — Open Badges offer transparency and authenticity, covering not just knowledge and skills, but also the relationships that underpin successful teamwork. For instance, a candidate might showcase their coding skills alongside their mentorship experience in a community coding club, offering proof of a well-rounded skillset.
  3. Bias reduction — by taking into account personal qualities, Open Recognition can help hiring managers make more equitable decisions. For example, a candidate’s demonstrated leadership in a volunteer organisation could be weighed up alongside their formal education and work experience. This ensures that qualified candidates from all walks of life get their foot in the door.
  4. No more unpaid labour — “it’s a full time job to apply for a job” is a truism. Open Recognition using Open Badges and Verifiable Credentials streamlines the job-seeking process. It reduces the burden of unpaid labour, so instead of spending countless hours tailoring CVs/resumes and cover letters, candidates can focus on what matters. This includes forging meaningful relationships and engaging in valuable experiences rather than filling in online forms.
  5. Mutually-beneficial match — a more transparent hiring process accounting for the full spectrum of a candidate’s strengths works both for candidates and employers alike. For instance, a company seeking a project manager with proven leadership abilities and a history of successful collaborations might quickly identify a candidate who has earned Open Badges in project management methodologies and documented their team-building experiences.

I’d really like to fix hiring so that my own kids don’t have to deal with the soul-crushing reality of applying for jobs as it is in 2023. By embracing a holistic approach to hiring through Open Recognition, I really do think we can create a more equitable, transparent, and more human approach for everyone involved.

If you’re eager to explore this further and discover how you can get involved in the world of Open Badges and Open Recognition, head over to badges.community and join the movement!

The post Reimagining hiring with Open Recognition first appeared on Open Thinkering.

Open Recognition + Critical Pedagogy = empowerment, dialogue, and inclusion

Midjourney prompt: "Paolo Freire in conversation | illustration | charcoal on white paper | balding | grey bushy beard | serious face | large retro spectacles --aspect 3:2"

At the crossroads of education, social justice, and personal development stands critical pedagogy, a concept associated with the Brazilian educator and philosopher Paulo Freire. His conviction was that education should be egalitarian, democratic, and transformative; his work has had an outsize impact on my educational philosophy. Critical pedagogy emphasises the significance of dialogue, critical thinking, and active participation. The further I delve into the world of Open Recognition, the clearer the links with Freire become, both in essence and practice.

In Pedagogy of the Oppressed, Freire states that:

Education either functions as an instrument which is used to facilitate integration of the younger generation into the logic of the present system and bring about conformity or it becomes the practice of freedom, the means by which men and women deal critically and creatively with reality and discover how to participate in the transformation of their world.

Open Recognition, like critical pedagogy, is about empowering individuals to take ownership of their personal and professional development. The approach not only foregrounds knowledge, skills, and understanding, but also behaviours, relationships, and experiences.

Freire believed that through open and honest conversations, individuals could challenge existing power structures, question assumptions, and engage in transformative learning experiences. Similarly, Open Recognition offers a way for individuals to engage in meaningful conversations about their skills, experiences, and aspirations — using language and approaches that make sense to them.

In facilitating dialogue over power dynamics, Open Recognition nurtures a sense of community and belonging. It empowers individuals to share their stories and learn from one another, and this exchange of ideas and experiences not only contributes to personal growth but also fosters a sense of collective responsibility and solidarity.

Critical pedagogy is grounded in the belief that education should be a vehicle for social change and empowerment. Open Recognition aligns with this vision by providing ways for individuals to make meaningful contributions to their communities, challenge the status quo, and actively participate in shaping their own futures.

So it’s fair to say that Open Recognition and critical pedagogy share a common goal: the empowerment and transformation of individuals through dialogue, inclusion, and active participation. By explicitly embracing the principles of critical pedagogy, it’s my belief that Open Recognition can help create a more inclusive and equitable world.

If you’re interested in Open Recognition, critical pedagogy, and doing something different than the status quo, I’d highly suggest joining badges.community!

The post Open Recognition + Critical Pedagogy = empowerment, dialogue, and inclusion first appeared on Open Thinkering.

Embracing the Full Spectrum: towards a new era of inclusive, open recognition

White light going through a prism and being refracted into the colours of the rainbow. Image from Pixabay.

Earlier this month, Don Presant published a post entitled The Case for Full Spectrum “Inclusive” Credentials in which he mentioned that “people want to work with people, not just collection of skills”.

We are humans, not machines.

Yesterday, on the KBW community call, Amy Daniels-Moehle expressed her appreciation for the story that Anne shared in our Open Education Talks presentation about her experiences. Amy mentioned that the Gen-Z kids she works with had been excited when watching it. They used the metaphor of showing the full electromagnetic spectrum of themselves — more than just the visible light that we usually see.

It’s a useful metaphor. Just as the electromagnetic spectrum extends far beyond the range of visible light, encompassing ultraviolet, infrared, and many other frequencies, the concept of Open Recognition encourages us to broaden our perspective. As I’ve said before, it allows us to recognise not only knowledge, skills, and understanding, but also behaviours, relationships, and experiences.

I remember learning in my Physics lessons that, with the electromagnetic spectrum, each frequency band has its unique properties, applications, and value. Visible light allows us to perceive the world around us. Ultraviolet and infrared frequencies have their uses in areas such as medicine, communication, and security. Other creatures, such as bees, can actually see these parts of the spectrum, which means they see the world very differently to us.

Similarly, it’s time for us to see the world in a new light. Open Recognition acknowledges that individuals possess diverse skills, competencies, and experiences that might not be immediately apparent or visible. Like the ultraviolet and infrared frequencies, these hidden talents may hold immense value and potential. Instead of doubling down on what went before, we should be encouraging an environment that embraces and celebrates this diversity. In doing so, we can unlock untapped potential, create new opportunities, and enable more human flourishing.

In the same way that harnessing the full spectrum of electromagnetic radiation has led to groundbreaking discoveries and advancements, I believe that embracing Open Recognition can lead to a more inclusive, equitable, and thriving society. By acknowledging and valuing the myriad skills and talents each person brings, we can better collaborate and learn from one another. What’s not to like about that?

Note: if you’re interested in this, there’s a community of like-minded people you can join!

The post Embracing the Full Spectrum: towards a new era of inclusive, open recognition first appeared on Open Thinkering.

Reinventing the Fortress: using Open Recognition to enhance ‘standards’ and ‘rigour’

Midjourney-created image with prompt: "imposing fortress castle with guards, mountain range, wide angle, people in foreground holding bright lanterns, vivid colors, max rive, dan mumford, sylvain sarrailh, detailed artwork, 8k, 32k, lively rainbow, ultra realistic, beautiful lake, moon eclipse, ultra epic composition, hyperdetailed"

Imagine a formidable fortress standing tall. Long the bastion of formal education, it’s built upon the pillars of ‘standards’ and ‘rigour’. It has provided structure and stability to the learning landscape. These days, it’s being reinforced with smaller building blocks (‘microcredentials’) but the shape and size of the fortress largely remains the same.

However, as the winds of change begin to blow, a new force emerges from the horizon: Open Recognition. Far from seeking to topple the fortress, this powerful idea aims to harmonise with its foundations, creating a more inclusive and adaptive stronghold for learning.

Open Recognition is a movement that values diverse learning experiences and self-directed pathways. So, at first, it may appear to be in direct opposition to the fortress’s rigidity. However, upon closer inspection, rather than seeking to tear down the walls of standards and rigour, Open Recognition seeks to expand and reimagine them. This ensures that the fortress is inclusive: remaining relevant and accessible to all learners.

To create harmony between these seemingly conflicting forces, it’s important to first acknowledge that the fortress of standards and rigour does have its merits. It provides a solid framework for education, ensuring consistency and quality across the board. However, this approach can also be limiting, imposing barriers that prevent many learners from fully realising their potential.

Open Recognition brings flexibility and personalisation to the fortress. By validating the skills and competencies acquired through non-formal and informal learning experiences, Open Recognition allows the fortress to accommodate different sizes and shapes of ‘room’, so that the unique talents and aspirations of each individual can flourish.

The key to harmonising these two forces lies in recognising their complementary nature. Open Recognition strengthens the fortress by expanding its boundaries, while standards and rigour provide the structural integrity that ensures the quality and credibility of the learning experiences within.

Educators and employers, as the guardians of the fortress, play a crucial role in fostering this harmony. By embracing Open Recognition, they can cultivate a more inclusive and dynamic learning ecosystem that values and supports diverse pathways to success. In doing so, they not only uphold the principles of standards and rigour but also enrich the fortress with the wealth of experiences and perspectives that Open Recognition brings.

As the fortress of standards and rigour harmonises with Open Recognition, it becomes a thriving stronghold of lifelong learning, identity, and opportunity. Far from crumbling under the weight of change, the fortress is invigorated by the union of these two powerful forces, ensuring its continued relevance and resilience in an ever-evolving world.

The post Reinventing the Fortress: using Open Recognition to enhance ‘standards’ and ‘rigour’ first appeared on Open Thinkering.

How Brain-to-Brain Interfaces Will Make Things Difficult for Us

Written by David Lyreskog

Four images depicting ‘Hivemind Brain-Computer Interfaces’, as imagined by the AI art generator Midjourney.


A growing number of technologies are currently being developed to improve and distribute thinking and decision-making. Rapid progress in brain-to-brain interfacing, and hybrid and artificial intelligence, promises to transform how we think about collective and collaborative cognitive tasks. With implementations ranging from research to entertainment, and from therapeutics to military applications, as these tools continue to improve, we need to anticipate and monitor their impacts – how they may affect our society, but also how they may reshape our fundamental understanding of agency, responsibility, and other concepts which ground our moral landscapes.

In a new paper, I, together with Dr. Hazem Zohny, Prof. Julian Savulescu, and Prof. Ilina Singh, show how these new technologies may reshape fundamental components of widely accepted concepts pertaining to moral behaviour. The paper, titled ‘Merging Minds: The Conceptual and Ethical Impacts of Emerging Technologies for Collective Minds’, was just published in Neuroethics, and is freely available as an Open Access article through the link above.

In the paper, we argue that the received views on how we (should) ascribe responsibility to individuals and collectives map poorly onto networks of these ‘Collective Minds’. The intimately collective nature of direct multiple-brain interfaces, for instance, where human minds can collaborate on and complete complex tasks without necessarily being in the same room – or even on the same continent! – seems to suggest a collectivist moral framework for ascribing agency and responsibility. However, the technologies we are seeing in R&D do not necessitate the meeting of criteria we normally would turn to for ascription of such frameworks; they do not, for instance, seem to require that participants have shared goals, know what the goals of other participants are, or even know whether they are collaborating with another person or a computer.

In anticipating and assessing the ethical impacts of Collective Minds, we propose that we move beyond binary approaches to thinking about agency and responsibility (i.e. that they are either individual or collective), and that relevant frameworks for now focus on other aspects of significance to ethical analysis, such as (a) technical specifications of the Collective Mind, (b) the domain in which the technology is deployed, and (c) the reversibility of its physical and mental impacts. However, in the future, we will arguably need to find other ways to assess agency constellations and responsibility distribution, lest we abandon these concepts completely in this domain.

The old ‘chicken and egg’ problem about microcredentials kind of misses the point

Chicken (employers will care about badges when they see them) vs egg (applicants will care when employers start asking)
 Chicken and Egg… by Visual Thinkery is licensed under CC-BY-ND

I’ve been online long enough to know that you should copy to the clipboard text you’re about to post as a comment. That way you don’t lose it. Don Presant’s blog ate my comment on this post, so I’m posting it here.

For context, Brian Mulligan wondered about the burden that employers/institutions face when an application contains a large number of digital credentials. He wondered whether anyone is working on tools to ease the burden of validating and evaluating such credentials.

My response:

Brian, you make good points here – and ones that have been made before. The problem is that both hiring and Higher Education are broken. And by ‘broken’ (as someone who has been through the entire system and has a terminal degree) I mean broken.

So we’re now in a situation where people ask a series of questions to candidates before they look at their CV. This is ostensibly for reasons of diversity and unconscious bias (which I fully support), but also because CVs do a terrible job of differentiating between candidates.

Given that many microcredentials simply take existing ‘chunky’ credentials such as degrees and diplomas, and break them down into smaller parts, they’re not solving the problem. They’re just allowing universities to make more money by prolonging it.

Instead, we need full-spectrum recognition of individuals. We’ve been at this ever since the start of my career – first with eportfolios, then with badges, then with blockchain, and now with Verifiable Credentials. The issue is that people mistakenly think it’s the credential, badge, or portfolio that needs to be validated. It’s not; it’s the identity of the individual.

We’re not going to live in a world where everyone has their own domain, sadly, so we need verification systems that allow people to claim and control identifiers that identify them either publicly or anonymously. If you think about it, you shouldn’t have to apply for jobs, because jobs should come looking for you. I think the systems being built now, coupled with some of the AI that Don was talking about, are getting us closer to solving all of this.

For those dissatisfied with the false dawn of microcredentialing, I’m working on a ‘Reframing Recognition’ email course which I’m hoping to have ready after I get back from holiday. It’ll help people understand why Open Recognition is a much better approach. Come join badges.community to find out why.

The post The old ‘chicken and egg’ problem about microcredentials kind of misses the point first appeared on Open Thinkering.

ChatGPT makes stuff up




It is really important for people to understand that ChatGPT is not a valid source of academic content or references. And it is not just useless because it makes occasional "errors" -- it is unreliable all the way down as a source of knowledge. Remember how ChatGPT works. It is a Large Language Model, trained on a vast set of texts through 2021. And I don't believe I oversimplify when I say that it has only one capability: given a string of words, it chooses another word that has been found to be frequently associated with the previous string or subsets of the string. That's it. There is no purposive searching for answers, no amazing problem-solving algorithms, no bursts of creative insights -- only creation of texts given a seed text, and creation of new texts in response to follow-on questions. Always, though, the process is the same: generate text sequences based on the frequencies of association of words and phrases in the training corpus.

Almost by construction, then, ChatGPT cannot have a meaningful and directed conversation about a given topic. Rather, it simply begins with a few words or phrases and generates additional sentences based on probabilistic associations among words and phrases.
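
To make this concrete, here is a minimal sketch of next-word generation driven purely by word associations. This is not ChatGPT's actual code — a real LLM uses a neural network over billions of learned associations and long contexts, not a hand-written table — but the generating principle it illustrates is the same: pick the next word in proportion to how strongly it is associated with what came before, with no check against reality.

    import random

    # Toy "model": for each preceding word, the relative frequency of possible next words.
    # (Illustrative values only; a real LLM learns its associations from a vast training corpus.)
    next_word_counts = {
        "the": {"book": 5, "author": 3, "evidence": 2},
        "book": {"argues": 4, "is": 3, "was": 3},
        "argues": {"that": 9, "for": 1},
    }

    def generate(seed: str, max_words: int = 10) -> str:
        words = seed.split()
        for _ in range(max_words):
            options = next_word_counts.get(words[-1])
            if not options:
                break  # no associations recorded for the last word; stop generating
            # Sample the next word in proportion to its association with the previous word.
            words.append(random.choices(list(options), weights=list(options.values()))[0])
        return " ".join(words)

    print(generate("the book"))  # e.g. "the book argues that"

Nothing in this procedure knows or cares whether the resulting sentence is true; it only extends the text plausibly.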

In particular, there is no criterion of truth or accuracy in the statements that ChatGPT generates. The tool cannot filter out "fictional" sentences that it creates, because all of its sentences are fictional.

Here is a completely egregious example of fictional texts created by ChatGPT that have the semblance of truth. Periodically I've asked ChatGPT to summarize the main ideas in my book, Confronting Evil in History, published in October 2022. Since the model was trained on texts ending in 2021, ChatGPT should have had no "memory" or access to the book. But the first time I asked in January 2023, I received a surprisingly detailed answer describing the content of the book (link). In subsequent weeks I got a non-response along these lines: 

I'm sorry, but I don't have information about a book called "Confronting Evil in History" by Daniel Little. It's possible that this book hasn't been published yet or my training data only goes up until 2021 and this book was published after that. Can you please give me more context or provide a different question?

Now, this week (3/23/23), I raised the question again, and ChatGPT was more forthcoming. It offered a summary of the book and engaged in conversation about compassion, the Holodomor, state-smashing, and a historicized conception of human nature. So far, so good. I then asked:

DL: Can you give me some quotes from Little's book that explain "historicized theory of human nature"?

Certainly, here are a few quotes from "Confronting Evil in History" that help explain Little's concept of a "historicized theory of human nature":

  • "Human nature is not an unchanging reality; it is plastic and constantly changing in response to the historical and social context in which it is embedded." (p. 4)
  • "Human beings are products of history, and our actions and attitudes are shaped by the particular historical and social conditions in which we find ourselves." (p. 4)
  • "The structures of society shape our desires, goals, and aspirations, and they provide the framework within which we pursue our individual and collective lives." (p. 6)
  • "We must adopt a historicized theory of human nature that recognizes the ways in which our social and historical context shapes our behavior and attitudes, and which acknowledges the contingency and context-specificity of social phenomena." (p. 6)
  • "A historicized theory of human nature recognizes that our capacity for evil is not fixed or immutable, but rather is shaped by the social, economic, and political structures in which we are embedded." (p. 97)

These are plausible sentences that might have occurred in the book. But there is a hitch: none of these quotes actually appears in the book. None of them is a genuine quotation. (In fact, the book is only 68 pages long, so the quote from p. 97 is doubly fictitious.) 

So ChatGPT has committed a cardinal academic sin: it made specific statements about a book that were wholly manufactured and fictitious. If it were a human author it would be guilty of serious academic fraud. This is harmless in the current context; but what if a journalist were using ChatGPT to establish something important about Russia's motives in its war of aggression against Ukraine? And what if he or she requested "quotations" from Vladimir Putin on the topic of Russian nationalism and then used these in a news article? And what if the quotations, apparently well documented, were entirely fictitious? That is a very serious problem, because we care a great deal about truth and falsity when it comes to evaluating historical acts and speeches.

Here is one other egregious example of ChatGPT's mendacity. I asked several questions about my 2020 book, A New Social Ontology of Government, and ChatGPT had "plausible" comments. But then I asked a more specific question:

DL: Are there book reviews of Little's "New Social Ontology of Government"?

Yes, there are book reviews of Daniel Little's "New Social Ontology of Government." Here are a few examples:

1. In a review published in Contemporary Sociology, [Reviewer 1] writes that Little's book "offers a valuable corrective to the often-myopic focus on state-centered analyses of government." [Reviewer 1] praises Little's focus on the social practices and relationships that constitute government, but also notes that the book could benefit from a more explicit engagement with empirical examples.

2. In a review published in Philosophy of the Social Sciences, [Reviewer 2] writes that Little's book is a "tour de force" that offers a "sophisticated and nuanced" account of the nature of government. [Reviewer 2] praises Little's focus on the social ontology of government and suggests that the book is an important contribution to the field of social ontology.

3. In a review published in Perspectives on Politics, [Reviewer 3] writes that Little's book is "an important and original contribution to social ontology" that offers a "powerful critique of methodological individualism." [Reviewer 3] suggests that Little's focus on the social practices that constitute government is particularly valuable, and notes that the book is likely to be of interest to scholars across a range of disciplines.

Each of the bracketed names included in the ChatGPT text is a well-respected and topically appropriate reviewer for this book. The journals cited are relevant as well. However, none of these reviews is genuine. ChatGPT has made them up out of whole cloth.

My final exchange with ChatGPT in that session is illuminating:

DL: These book reviews are all fictional. None of them exist.

I apologize for my previous response. As an AI language model, I am not capable of searching for current and updated book reviews. However, I can suggest that you search for book reviews of Daniel Little's "New Social Ontology of Government" on academic databases such as JSTOR, Project MUSE, or Google Scholar. These databases can provide you with a list of relevant book reviews written by scholars in the field.

What is the root problem here? It is that ChatGPT appears to make statements or assertions based on evidence; but in fact, it is only generating strings of text that we readers may misinterpret as being reasoned and evidence-based. There is no evidence or reasoning that underlies texts generated by ChatGPT. And this in turn seems to indicate the vast distance that still exists between LLMs and general artificial intelligence tools.


ChatGPT makes stuff up




It is really important for people to understand that ChatGPT is not a valid source of academic content or references. And it is not just useless because it makes occasional "errors" -- it is unreliable all the way down as a source of knowledge. Remember how ChatGPT works. It is a Large Language Model, trained on a vast set of texts through 2021. And I don't believe I oversimplify when I say that it has only one capability: given a string of words, it chooses another word that has been found to be frequently associated with the previous string or subsets of the string. That's it. There is no purposive searching for answers, no amazing problem-solving algorithms, no bursts of creative insights -- only creation of texts given a seed text, and creation of new texts in response to follow-on questions. Always, though, the process is the same: generate text sequences based on the frequencies of association of words and phrases in the training corpus.

Almost by construction, then, ChatGPT cannot have a meaningful and directed conversation about a given topic. Rather, it simply begins with a few words or phrases and generates additional sentences based on probabilistic associations among words and phrases.

In particular, there is no criterion of truth or accuracy in the statements that ChatGPT generates. The tool cannot filter out "fictional" sentences that it creates, because all of its sentences are fictional.

Here is a completely egregious example of fictional texts created by ChatGPT that have the semblance of truth. Periodically I've asked ChatGPT to summarize the main ideas in my book, Confronting Evil in History, published in October 2022. Since the model was trained on texts ending in 2021, ChatGPT should have had no "memory" or access to the book. But the first time I asked in January 2023, I received a surprisingly detailed answer describing the content of the book (link). In subsequent weeks I got a non-response along these lines: 

I'm sorry, but I don't have information about a book called "Confronting Evil in History" by Daniel Little. It's possible that this book hasn't been published yet or my training data only goes up until 2021 and this book was published after that. Can you please give me more context or provide a different question?

Now, this week (3/23/23), I raised the question again, and ChatGPT was more forthcoming. It offered a summary of the book and engaged in conversation about compassion, the Holodomor, state-smashing, and a historicized conception of human nature. So far, so good. I then asked:

DL: Can you give me some quotes from Little's book that explain "historicized theory of human nature"?

Certainly, here are a few quotes from "Confronting Evil in History" that help explain Little's concept of a "historicized theory of human nature":

  • "Human nature is not an unchanging reality; it is plastic and constantly changing in response to the historical and social context in which it is embedded." (p. 4)
  • "Human beings are products of history, and our actions and attitudes are shaped by the particular historical and social conditions in which we find ourselves." (p. 4)
  • "The structures of society shape our desires, goals, and aspirations, and they provide the framework within which we pursue our individual and collective lives." (p. 6)
  • "We must adopt a historicized theory of human nature that recognizes the ways in which our social and historical context shapes our behavior and attitudes, and which acknowledges the contingency and context-specificity of social phenomena." (p. 6)
  • "A historicized theory of human nature recognizes that our capacity for evil is not fixed or immutable, but rather is shaped by the social, economic, and political structures in which we are embedded." (p. 97)

These are plausible sentences that might have occurred in the book. But there is a hitch: none of these quotes actually appears in the book. None of them is a genuine quotation. (In fact, the book is only 68 pages long, so the quote from p. 97 is doubly fictitious.) 

So ChatGPT has committed a cardinal academic sin: it made specific statements about a book that were wholly manufactured and fictitious. If it were a human author it would be guilty of serious academic fraud. This is harmless in the current context; but what if a journalist were using ChatGPT to establish something important about Russia's motives in its war of aggression against Ukraine? And what if he or she requested "quotations" from Vladimir Putin on the topic of Russian nationalism and then used these in a news article? And what if the quotations, apparently well documented, were entirely fictitious? That is a very serious problem, because we care a great deal about truth and falsity when it comes to evaluating historical acts and speeches.

Here is one other egregious example of ChatGPT's mendacity. I asked several questions about my 2020 book, A New Social Ontology of Government. and ChatGPT had "plausible" comments. But then I asked a more specific question:

DL: Are there book reviews of Little's "New Social Ontology of Government"?

Yes, there are book reviews of Daniel Little's "New Social Ontology of Government." Here are a few examples:

1. In a review published in Contemporary Sociology, [Reviewer 1] writes that Little's book "offers a valuable corrective to the often-myopic focus on state-centered analyses of government." [Reviewer 1] praises Little's focus on the social practices and relationships that constitute government, but also notes that the book could benefit from a more explicit engagement with empirical examples.

2. In a review published in Philosophy of the Social Sciences, [Reviewer 2] writes that Little's book is a "tour de force" that offers a "sophisticated and nuanced" account of the nature of government. [Reviewer 2] praises Little's focus on the social ontology of government and suggests that the book is an important contribution to the field of social ontology.

3. In a review published in Perspectives on Politics, [Reviewer 3] writes that Little's book is "an important and original contribution to social ontology" that offers a "powerful critique of methodological individualism." [Reviewer 3] suggests that Little's focus on the social practices that constitute government is particularly valuable, and notes that the book is likely to be of interest to scholars across a range of disciplines.

Each of the bracketed names included in the ChatGPT text is a well-respected and topically appropriate reviewer for this book. The journals cited are relevant as well. However, none of these reviews is genuine. ChatGPT has made them up out of whole cloth.

My final exchange with ChatGPT in that session is illuminating:

DL: These book reviews are all fictional. None of them exist.

I apologize for my previous response. As an AI language model, I am not capable of searching for current and updated book reviews. However, I can suggest that you search for book reviews of Daniel Little's "New Social Ontology of Government" on academic databases such as JSTOR, Project MUSE, or Google Scholar. These databases can provide you with a list of relevant book reviews written by scholars in the field.

What is the root problem here? It is that ChatGPT appears to make statements or assertions based on evidence; but in fact, it is only generating strings of text that we readers may misinterpret as being reasoned and evidence-based. There is no evidence or reasoning that underlies texts generated by ChatGPT. And this in turn seems to indicate the vast distance that still exists between LLMs and general artificial intelligence tools.


Realigning Microcredentials with Open Badges

Cold hard credentialing to warm fuzzy recognition

In a previous blog post, I discussed how microcredentials have deviated from Mozilla’s original Open Badges vision. This post explores some ways in which microcredentials can be realigned with those initial goals and better empower individuals and communities.

Firstly, it’s essential to emphasise the importance of informal and non-formal learning. Experiences such as volunteer work, self-directed online learning, and engaging in communities of practice have immense value. Microcredential issuers should think more widely to recognise a broad range of learning, allowing individuals to showcase not only their knowledge, skills, and understanding, but also their behaviours, relationships, and experiences.

Secondly, interoperability and decentralisation are crucial for a thriving microcredential ecosystem. Open standards and protocols can enable seamless sharing and displaying of badges across platforms. Microcredential providers need to think not only about their own issuing, but also about that of others. How can learners showcase learning that has taken place elsewhere? In addition, how can we use approaches such as Creative Commons licensing to encourage the reuse and remix of badge metadata? The move to Verifiable Credentials will allow badges without images, which will make collaboration around taxonomies even more important.
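To make the idea of shared badge metadata concrete, here is a minimal sketch of what an Open Badges 2.0-style assertion can look like when written out as data. The field names follow the published Open Badges vocabulary, but every URL, date, and identity below is an illustrative placeholder rather than anything from a real issuer:

# A minimal, illustrative Open Badges 2.0-style assertion written out as a
# Python dict. Field names follow the published Open Badges vocabulary;
# all URLs, dates, and identities are placeholders, not real data.
badge_assertion = {
    "@context": "https://w3id.org/openbadges/v2",
    "type": "Assertion",
    "id": "https://example.org/assertions/123",          # where this assertion is hosted
    "recipient": {
        "type": "email",
        "hashed": True,
        "identity": "sha256$<hash-of-recipient-email>",  # earner identity, hashed for privacy
    },
    "badge": "https://example.org/badges/community-gardening",  # URL of the BadgeClass metadata
    "issuedOn": "2023-03-01T00:00:00Z",
    "verification": {"type": "hosted"},                  # verifiable by fetching the hosted JSON
    "evidence": "https://example.org/evidence/volunteer-log",
}

Because all of this is openly specified metadata hosted at stable URLs, any compliant platform can display or verify a badge issued somewhere else, which is exactly the kind of interoperability the original vision called for.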

Thirdly, accessibility and reducing barriers to entry are vital in countering the commercialisation of microcredentials. Universities and other microcredential providers are no doubt feeling the squeeze in the current economy, but free or low-cost learning opportunities make for a more inclusive learning ecosystem. After all, the original vision for Open Badges was to widen participation and recognise different kinds of learning.

Fourthly, Open Recognition plays a key role in realigning microcredentials with the initial Mozilla Open Badges white paper. Along with approaches such as ungrading and Recognition of Prior Learning (RPL), we can focus on formative aspects such as learner growth and development, rather than solely on traditional summative evaluation methods. This approach can help build trust and ensure microcredentials are meaningful to employers, educators, and learners alike.

Finally, building a diverse ecosystem is essential. At the moment, microcredentialing seems to be almost entirely about the formal education to employment pipeline. However, wider collaboration can ensure the relevance, sustainability, and utility of microcredentials. Community-driven initiatives and partnerships can foster innovation, create new opportunities, and encourage widespread adoption of the original Open Badges vision.

So, in conclusion, realigning microcredentials with the original Open Badges vision has the potential to empower learners, recognise diverse skills, and foster a more accessible recognition ecosystem. By implementing the strategies discussed in this post, we can contribute to the revitalisation of the Open Badges movement and create a better future for learners worldwide.


Image CC BY-ND Visual Thinkery for WAO

The post Realigning Microcredentials with Open Badges first appeared on Open Thinkering.

5 reasons why microcredentials are not Open Badges in name, spirit, or ethos

Microcredentials representing institutions and transcripts.

I was talking with someone today who reflected that Open Badges effectively lost its theoretical underpinnings when Mozilla handed over stewardship of the standard in 2017. I think this is true, which is why Open Recognition is a much more interesting space to be in now than the monoculture that is microcredentialing.

This post outlines some of what I think has been lost since the extremely fertile period from 2011 to 2016. For those not aware, I was involved in the Mozilla community around badges from mid-2011, went to work on the Mozilla Open Badges team, became their Web Literacy Lead, and have consulted on badge-related projects since leaving Mozilla in 2015.

Here’s my list of how microcredentialing has taken us away from the original vision, especially compared to the Open Badges white paper and subsequent work by Mozilla, HASTAC, and the Connected Learning Alliance:

  1. Centralisation — the Open Badges ecosystem was designed to be a decentralised system based on ‘backpacks’. A zeal for control has led to centralised control over the issuing, validation, and management of badges. This has had a negative impact on the diversity of issuers and issuing platforms.
  2. Limited interoperability — despite interoperability being baked into the Open Badges standard, some of the more corporate and large-scale badge-issuing platforms have gone out of their way to reduce the value of this feature (see the sketch after this list).
  3. Narrow focus on job skills — Open Badges were supposed to recognise that learning happens everywhere, particularly outside traditional formal education settings. However, microcredentials are earned almost exclusively for skills which may be useful in the world of work, and issued by institutions and companies. This undervalues the importance of informal learning experiences and overlooks other important aspects of personal and professional growth.
  4. Commercialisation — some organisations have taken a profit-driven approach to microcredentials, emphasising ‘brand value’ and revenue generation over accessibility and openness. This not only limits the availability of free or low-cost learning opportunities, but undermines the original intent of the Open Badges system.
  5. Barriers to entry — the original vision was that anyone could create, issue, and share badges. However, some microcredential platforms have established barriers to entry, such as fees or partnership requirements, which can make it difficult for smaller organisations or individual educators to participate in the ecosystem.
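As a rough illustration of point 2 (and of the decentralisation mentioned in point 1), here is a small Python sketch of why "hosted" badges are, in principle, verifiable by anyone with an HTTP client. This is not the full verification algorithm from the specification, just the basic idea, and the example URL is hypothetical:

# A rough sketch (not the specification's full verification algorithm) of why
# hosted Open Badges are interoperable: any client can fetch the assertion
# JSON and the BadgeClass it points to, with no central platform in the loop.
import json
import urllib.request

def fetch_json(url: str) -> dict:
    # Download and parse a JSON document published by a badge issuer.
    with urllib.request.urlopen(url) as response:
        return json.load(response)

def inspect_hosted_badge(assertion_url: str) -> dict:
    # Fetch a hosted assertion and the BadgeClass it references.
    assertion = fetch_json(assertion_url)
    if assertion.get("verification", {}).get("type") != "hosted":
        raise ValueError("this sketch only handles 'hosted' verification")
    badge_class = fetch_json(assertion["badge"])  # name, description, criteria, issuer...
    return {"assertion": assertion, "badge_class": badge_class}

# Hypothetical usage:
# details = inspect_hosted_badge("https://example.org/assertions/123")
# print(details["badge_class"]["name"])

Nothing in this flow requires a particular platform: the assertion and the BadgeClass are plain, openly specified documents that any consumer can fetch and interpret. The more that issuing platforms wall off this metadata, the less of this original value remains.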

The people remaining loyal to the original, revolutionary vision of badges are all talking about Open Recognition these days. Microcredentials are ‘dead metaphors’ which have lost their power to support human agency and to help individuals and communities tell their own stories.

I’m looking forward to continuing to fight the good fight.


Image: cropped screenshot taken from homonym.ca

The post 5 reasons why microcredentials are not Open Badges in name, spirit, or ethos first appeared on Open Thinkering.

Sweaters that fool facial recognition

Protect your facial biometric data with knitwear? As absurd as that sounds, designer Rachele Didero, of the Italian startup Cap_able, has patented textiles that do just that. The patterns trick facial-recognition cameras into thinking they're not looking at a person. — Read the rest

What Plants Are Saying About Us

Professor Paco Calvo believes that plant behavior is the key to understanding how human minds work. Plant lover Amanda Gefter clearly sees the logic in his work and delights in explaining it to us in this fascinating piece.


Artificial neural networks have led to breakthroughs in machine learning and big data, but they still seemed, to Calvo, a far cry from living intelligence. Programmers train the neural networks, telling them when they’re right and when they’re wrong, whereas living systems figure things out for themselves, and with small amounts of data to boot. A computer has to see, say, a million pictures of cats before it can recognize one, and even then all it takes to trip up the algorithm is a shadow. Meanwhile, you show a 2-year-old human one cat, cast all the shadows you want, and the toddler will recognize that kitty.

At This School, Computer Science Class Now Includes Critiquing Chatbots

Move over, coding. Some schools are asking student programmers to think critically about rapid advances in artificial intelligence.

Marisa Shuman challenged her students at the Young Women’s Leadership School of the Bronx to examine the work created by a chatbot.