FreshRSS


On the paucity of ‘raising awareness’

This post is about philosophy, memes, and taking action. It’s a reflection on an experience I had this week which caused me to reflect on the paucity of ‘awareness raising’ as a tactic.


I studied Philosophy at university a couple of decades ago. One of the courses was on ethics and involved the trolley problem.

Trolley problem basic setup. A person is standing next to a lever which can divert the trolley (i.e. train/tram) onto a different track. If they do, the trolley will hit one person instead of five. CC BY-SA McGeddon, Wikimedia Commons

The trolley problem is a series of thought experiments in ethics and psychology, involving stylized ethical dilemmas of whether to sacrifice one person to save a larger number. The series usually begins with a scenario in which a runaway tram or trolley is on course to collide with and kill a number of people (traditionally five) down the track, but a driver or bystander can intervene and divert the vehicle to kill just one person on a different track. Then other variations of the runaway vehicle, and analogous life-and-death dilemmas (medical, judicial etc.) are posed, each containing the option to either do nothing, in which case several people will be killed, or intervene and sacrifice one initially “safe” person to save the others.

It’s a powerful tool to generate insights into your own ethical position on certain topics. These days, it’s rolled out to warn about outsourcing decision-making to the systems underpinning self-driving cars. And, of course, it’s now a recognisable meme.

Trolley problem where nobody is tied to the track. The words read "nobody is in danger" and "however, you can pull the lever to make the train get closer just so you can wave at all the people"

In my experience, most of the trolley problem thought experiments lead towards an understanding of supererogation.

In ethics, an act is supererogatory if it is good but not morally required to be done. It refers to an act that is more than is necessary, when another course of action—involving less—would still be an acceptable action. It differs from a duty, which is an act wrong not to do, and from acts morally neutral. Supererogation may be considered as performing above and beyond a normative course of duty to further benefits and functionality.

Interestingly, in a recent episode of the Philosophy Bites podcast, Theron Pummer suggested a twist on this. Pummer, who is a Senior Lecturer in Philosophy at the University of St Andrews and Director of the Centre for Ethics, Philosophy and Public Affairs, has published a book entitled The Rules of Rescue. I haven’t read it yet, but to quote the summary on his own web page about the book:

Pummer argues that we are often morally required to engage in effective altruism, directing altruistic efforts in ways that help the most. Even when the personal sacrifice involved makes it morally permissible not to help at all, he contends, it often remains wrong to provide less help rather than more.

I have issues with Effective Altruism, which I’ll not go into here, but I find Pummer’s framing fascinating. Basically, you don’t have to help others in certain situations; no-one would think it was immoral or illegal to go about your business. However, if you do decide to help, then there’s a minimum amount of help that could reasonably be required.


This week, I was at MozFest House. I had a good time. As with all MozFests I’ve been to, there were exhibits with which you can interact. One of them asked you to use a touch screen to fill in details of the kinds of services you use. It then printed out a long receipt detailing the type of data that is gathered about you when using them. I asked the PhD students who had come up with the machine what I was supposed to do with this data. They intimated that they were merely raising awareness, and didn’t suggest a single thing I could do.

I was left in a worse position than when I began. One could say that’s the point of awareness-raising: it’s about making people feel discomfort so that they take action. But if you’re going to make an intervention, I would agree with Theron Pummer’s stance that there’s a certain minimum level of guidance to give. A first step, at least.

Contrast this with another interactive exhibit in which you received tokens for free coffee if you answered a series of questions about yourself. I managed to get three by lying and not providing personal data. Which, of course, could be said to be the point of the exercise: be careful about the data you put out there, especially for scant reward.


Once you see people putting in the minimum effort of ‘awareness raising’, you start seeing it everywhere. It’s particularly prevalent on social media, where it takes a single tap to reshare news and make others aware of something you’ve just seen. As humans, though, we tend to have a bias towards avoiding harm, so social media timelines become full of doom.

I’m on a bit of a mission to get some more positivity into my life. Not in a mindless way. Not in an avoiding-reality kind of way. But rather following people who have noticed a problem and are doing something about it. Seeking out those who can take a step back and look at the wider picture. And, of course, those who share some of the wonder of the world around us.

The post On the paucity of ‘raising awareness’ first appeared on Open Thinkering.

Using AI to help solve Bloom’s Two Sigma Problem

Three curved lines showing performance. There are two standard deviations (i.e. two sigma) between Conventional Learning and 1:1 Tutoring.

Imagine we’re all surfers. The ocean we’re in is the educational system, and we’re all trying to ride the wave of knowledge to the shore of understanding. Some of us have master surfers as guides – personal tutors who are right there with us, helping us manoeuvre the currents and ride high on the knowledge wave. They know our strengths, they know our fears, and they ensure we don’t wipe out. These fortunate few reach the shore faster, more smoothly and often with a lot more fun.

Then there are the rest of us. We’re in a giant surf class. There’s one instructor and dozens of us learners. The instructor is doing their best, but they can’t give us all the personalised attention we need. Some of us catch the wave, some of us don’t. This is Bloom’s Two Sigma Problem.

Brought to the fore by educational psychologist Benjamin Bloom in the 1980s, the Two Sigma Problem highlights a gap in education. Personal tutoring can propel students’ performance by two standard deviations – like moving from the middle of a class right to the top 2%. The problem is, we can’t give everyone a personal tutor. It’s just not feasible. So, the question is, how do we give each student the benefits of one-on-one instruction, at scale?
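To see why a two-sigma shift is often glossed as “moving to roughly the top 2%”, here’s a minimal sketch using Python’s standard library, assuming (as Bloom’s framing does) that class performance is approximately normally distributed:

```python
from statistics import NormalDist

# Standard normal distribution representing conventional classroom performance
standard = NormalDist(mu=0, sigma=1)

# An average student sits at the mean (50th percentile).
# Tutoring, per Bloom, lifts performance by two standard deviations:
shifted_score = 0 + 2 * 1  # mean + 2 * sigma

# The fraction of the original class this student now outperforms
percentile = standard.cdf(shifted_score)
print(f"{percentile:.1%}")  # about 97.7% — i.e. within the top ~2%
```

So a two-sigma improvement places a formerly average student above roughly 97.7% of peers, which is where the “top 2%” shorthand comes from.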


Enter Artificial Intelligence (AI) and, in particular, Large Language Models (LLMs) such as ChatGPT. I’ve been experimenting with using ChatGPT as a tutor for my son during the revision period for his exams. It’s great at coming up with questions, marking them, and suggesting how to improve. This kind of feedback is absolutely crucial to learning. It’s also great at exploring the world and allowing curiosity to take you in new directions.

So, if we revisit the Two Sigma Problem based on what’s possible with LLMs, it looks like there’s a possible solution with multiple advantages:

  1. Personalisation: Like a master surfer guiding us through the waves, AI offers individualised instruction. It can adapt to each learner’s pace, skill level, and areas of interest. It’s like your own personal Mr. Miyagi, providing the right lesson at the right time. Wax on, wax off.
  2. 24/7 Availability: With AI, it’s always high tide. The learning doesn’t stop when the school bell rings. Whether it’s the middle of the day or the middle of the night, your AI tutor is there to help, guide, and explain.
  3. Scalability: One-to-one tutoring might not be feasible, but AI makes one-to-one-to-many a reality. An AI tutor doesn’t get exhausted or overbooked. It can help an unlimited number of students at once, ensuring everyone gets the ride of their lives on the knowledge wave.
  4. Feedback and Assessment: Picture a surf instructor who can instantly replay your wipeouts, showing you exactly what went wrong and how to fix it. That’s what AI can do. It provides immediate feedback, helping learners understand and correct their mistakes right away.
  5. Enhanced Resources: LLMs are like a treasure trove of knowledge. Trained on a vast array of educational content, they’re like having the British Library at your fingertips, ready to generate explanations, examples, and answers on a multitude of topics.
  6. Removing Bias: AI doesn’t care about your background, your accent, or the colour of your board shorts. When designed and trained properly, it treats all learners equally, providing a level playing field.

No technology is a silver bullet. As an educator, I know that while curiosity and feedback are really important, there’s nothing like another human providing emotional input — including motivation. AI is here to support, not replace, our human guides.

Even though it’s early days, we’re already seeing some really interesting developments in the application of LLMs in education. I’m no fan of Microsoft, but I will acknowledge that a feature they have in development called ‘passage generation’ looks interesting. This tool reviews data to create personalised reading passages based on the words or phonics rules a student finds most challenging. Educators can customise the passage, selecting suggested practice words and generating options, then publish the passage as a new reading assignment. I find this kind of thing really useful in Duolingo for learning Spanish. Context matters.

As a former teacher, I know how important prioritisation can be for the limited amount of time you have with each student. And as a parent, I’m a big believer in the power of deliberate practice for getting better at all kinds of things. Freeing up teachers to be more like coaches than instructors has been the dream ever since someone came up with the pithy phrase “guide on the side, not sage on the stage”.


One of the main concerns I think a lot of people have with AI in general is that it will “steal our jobs”. I’d point out that the main problem here isn’t AI, it’s capitalism. Any tool or system can be used for good or for ill. If you’re not sure how we can approach this post-scarcity world, I’d recommend reading Fully Automated Luxury Communism by Aaron Bastani. Of course, regulation is and should be an issue, too.

The main issue I see with this is centralised LLMs run by companies running opaque models and beholden to shareholders. That’s why I envisage educational institutions running local LLMs, or at least within a network that only connects to the internet when it needs to. Just as Google Desktop used to allow you to search through your local machine and the web, I can imagine us all having an AI assistant that has full context, while preserving our privacy.


So the way to approach any new tool or service is to ask critical questions such as “who benefits?” but also to fully explore what’s possible with all of this. I’m hugely hopeful that AI won’t lead us into a sci-fi dystopia, but rather help to even out the playing field when it comes to human learning and flourishing.

What do you think? I’d love to hear in the comments!


Image remixed from an original on the SkillUp blog. Text written with the help of ChatGPT (it’s particularly good at coming up with metaphors, I’ve found!)

The post Using AI to help solve Bloom’s Two Sigma Problem first appeared on Open Thinkering.

Ep. 317: Character Philosophies in Dostoevsky’s “Brothers Karamazov” (Part One)

Subscribe to get this ad-free, plus a supporter-exclusive final part to this discussion, which you can preview.

Following up on our live episode, we further ponder the 1880 novel, revisiting the "problem of evil" arguments and how the various brothers cope with an imperfect world.

Plus, we relate Dostoevsky's views of freedom and ethics to those of other existentialists.

The post Ep. 317: Character Philosophies in Dostoevsky’s “Brothers Karamazov” (Part One) first appeared on The Partially Examined Life Philosophy Podcast.

The Doctrine of Double Effect: Do Intentions Matter to Ethics?

An introduction to the Doctrine of Double Effect, which claims that it is permissible to harm someone if it will also result in good consequences that outweigh the harm and the harm is just a “side-effect” of your action, rather than something you intend to make happen.


Trolley problems: should he push the man?

Cookie Monster menaces locals in California seaside town

Police are warning locals in Santa Cruz, California, not to approach Cookie Monster, seen lately at the seaside town's wharf. Cookie Monster is belligerent and causing trouble, according to reports.

The man, who wears a costume of the beloved Sesame Street character, has reportedly been creeping out visitors to the Santa Cruz Wharf, bothering them and aggressively yelling.


George Carlin and the Philosophy of Religion

In this clip from the 1999 stand-up comedy special You Are All Diseased, the late comedian George Carlin offers a scathing attack on religion that can be useful in inspiring debate and introducing students to the topic of the philosophy of religion. Using humor, Carlin presents a clearly atheistic view and engages with many classic […]