
10 things I think I might think about AI for teaching and learning

By: dave

I’ve been adding thoughts to this blog since 2005. I come here to get my ideas sorted out, to have something to link to for those ideas, and to get a sense of where my thinking was at a given point in time. The last six months have been AI all the time for me, and I’ve been in some excellent conversations around the topic. Some of these suggestions are the same ones I would have given two years ago, some even 20 years ago, but they all keep coming up one way or another.

I turned this into a list because I just couldn’t come up with a theme for them. 😛

Discipline-specific, AI-inspired learning goals

This one is starting to float to the top every time I talk about AI. One of the challenges of talking about digital strategy for a whole university is that different fields are impacted in such different ways that the same advice can’t be given across disciplines.

I’m feeling more comfortable with this suggestion. We need to be adding learning objectives/goals/whatever-you-call-thems at both the program and maybe even the course level. The details will differ a little by discipline, and even by parts of disciplines, but broadly speaking they would include something like:

  1. How to write good prompts for black-box search systems (ChatGPT/Google) that return useful/ethical/accurate results in your discipline
  2. Choosing appropriate digital locations/strategies for asking questions
  3. Strategies for verifying/improving/cross-referencing results from these systems
  4. How AI is used by professionals in your discipline (good and bad)

You could say ‘yeah, dave, that’s digital literacy, we’ve been doing it (not doing it) for almost a generation now.’ I agree with you, but I think it’s becoming increasingly important. Search results have been (in my non-scientific, anecdotal discussions) getting less useful, and these GPT-based systems are approaching ubiquity. Students aren’t going to understand the subtleties of how these systems work in our professions. Many of us won’t know either.

Teach/model humility

This one’s easy. Say “I don’t know” a lot. Particularly if you’re in a secure work position (this can always be tricky with contingent faculty). Encourage your students to say ‘I don’t know’ and then teach them to try and verify things – sometimes to no avail. It takes practice, but it starts to feel really good after a while. There is NO WAY for anyone to know everything. The more we give in to that and teach people what to do when they don’t know, the better things are going to be.

When we get a result from an AI chatbot, we need to know whether we are the wrong person to analyze it. We might need to get help.

Spend time thinking about the WHY of your old assessments

I had an awesome conversation with a colleague a few days ago about the how and why of teaching people facts before we expect those people to do things with those facts. This keeps coming up.

  1. We teach facts so that they can have them loosely at their fingertips when they are trying to do more complex things.
  2. We often create assessments so that students will have the scaffolding required to get through what is, often, boring memorization. We know they’ll be happy (or at least more competent) later, so we ‘offer to give them grades’ or ‘threaten to take grades away from them’ if they don’t memorize those things.
  3. Those assessments are now often/mostly completed by students using AI systems in ways that no longer require students to memorize things.

If we need students to have things in their heads, or to do critical thinking, or whatever, we need to clearly understand what we want AND explain the reasoning to students. At this point, the encouragements/threats that have been our assessments are just not going to be that effective. Does an essay done with GPT-4 still teach critical thinking or encourage close reading of texts? Does a summary shoved into an AI system and copy/pasted into a text box help anyone?

Spend even more time thinking about your new AI infused assessments

Lots of interesting conversations recently about how to incorporate AI into activities that students are going to be doing. First and foremost, make sure you read some of the excellent articles out there about the ethical implications of making this decision. We have a few thousand years of deep, dark knowledge about the ethical implications of writing the way we’ve been doing it. If we’re going to take a step into this new space, we need to take time to think about the implications. This Mastodon post by Timnit Gebru is a good example of a consideration that just didn’t exist before AI. AI not only produces problematic texts; the more problematic text it produces, the more problematic text there is to influence the AI. It’s a vicious cycle.

https://dair-community.social/@timnitGebru/110328180482499454/embed

No really. There are some very serious societal/racial/gender/so-many-other implications to these tools.

Talk to your students about AI, a lot

This is not something you can just kind of ignore and hope will go away. These AI tools are everywhere. Figure out what your position is (for this term) and include it in your syllabus. Bring it up with your students when you assign work for them to do. Talk to them about why they might or might not want to use it for a given assignment.

Try and care about student data

I know this one is hard. Everyone, two months ahead of a course they are going to teach, will say “oh yes, I care about what happens to my students’ data”. Then they see something cool, or they want to use a tracking tool to ensure the validity of their testing instrument, and it all goes out the window. No one is expecting you to understand the deep, dark realities of what happens to data on the web. My default is “if I don’t know what’s happening to student data, I don’t do the thing”. Find someone at your institution who cares about this issue. They are, most likely, really excited to help you with it.

You don’t want your own stuff, whether it’s your personal information or your professional work, given away to random corporations. Make sure you aren’t doing that to someone else.

Teach less

In my last blog post, I wrote about how to adapt a syllabus and change some pedagogical approaches given all this AI business. The idea from it that I’ll carry forward here is: teach less. If there’s anything in the ‘content’ you teach that isn’t absolutely necessary, get rid of it. The more stuff you give students to remember or to do, the more tempted they’ll be to find new ways to complete the work. More importantly, people can get content anywhere; the more time you spend demonstrating your expertise and getting into meaningful discussions that take a long time to evolve, the more you’re providing an experience they can’t get on YouTube.

Be patient with everyone’s perspective

People are coming at this issue from all sides right now. I’ve seen students who are furious that other students are getting better marks by cheating. I’ve seen faculty who feel betrayed by their students. People focusing on trust. Others on surveillance. The more time we take to explore each other’s values on this issue, the better we’re all going to be.

Take this issue of a teacher putting wrong answers online to trap students. He really seems to think he’s doing his job. I disagree with him, but calling him an asshole is not really going to change his mind.

Engage with your broader discipline on how AI is being used outside of the academy

This is going to be different everywhere you go, but some fields are likely to change overnight. What might have been true for how something worked in your field in the summer of 2022 could be totally different in 2023. Find the conversations in your field and join in.

Focus on trust

Trying to trap your students, or to track or watch them, seems, at the very least, a bad use of your time. It also feels a bit like a scary vision of an authoritarian state. My recommendation is to err on the side of trust. You’re going to be wrong some of the time, but being wrong and trusting feels like putting better things into the world. Building trust with your students ‘can’ lead to them having a more productive, more enjoyable experience.

  1. Explain to them why you are asking them to do the work you are giving them
  2. Explain what kind of learning you are hoping for
  3. Explain how it all fits together
  4. Approach transgressions of the social contract (say a student used AI when you both agreed they wouldn’t) as an opportunity to explain why they shouldn’t.
  5. Focus on care.

As teachers/faculty we already have piles of power over students. Yes, it might be slightly less power than we had 50 years ago, but I’m going to go ahead and suggest that might be a good thing.

11 of 10 – Be aware of where the work is being done

I recognize that full-time, tenured faculty are busy, but their situation is very different from that of a sessional faculty member trying to rewrite a course. Making these adaptations is a lot of work. Much of that work is going to be unpaid. That’s a problem. Also, for that matter, ‘authentic assessment’ is way more work.

Same for students. If you are changing your course, don’t just make it harder/more work.

Final thoughts

I’m wary of posting this, because as soon as I do I’ll think of something else. (As soon as I wrote that, I thought of number 11.) I’ll just keep adding them as they come to mind.

Times are weird. Take care of yourselves.

Adapting your syllabus to an online content/AI generated content world

By: dave

I did another presentation on campus yesterday talking about what it means that students can now generate answers to assignment questions. I often find that writing out some concepts before a presentation helps me focus on what might be important. My pre-writing turned into a bit of a collection of the things that I’ve been pulling together for my session next week on adapting your syllabus.

I’m going to keep working on it, but I figured I would post it on my lonely blog, just to let it know that I still love it.

Here’s the deck

Introduction

The release of ChatGPT on the 30th of November, 2022 has brought into focus a change that has been coming to higher education since the advent of the Internet. Our students have increasing access to people and services on the Internet that provide them with ways to circumvent the intent of many of the assessments that have been traditionally used in higher education. 

Schinske and Tanner (2014) describe four purposes for assessment: feedback, motivation, student ranking and the objective evaluation of knowledge. Regardless of what combination of these purposes you use in your work, or what others you might add, the internet has changed the education landscape through:

  1. The availability of connections to other people who can support our learners with assessments (by text message, online chat, etc.)
  2. The availability of pre-created content (found through search or on sites like Chegg) that can be used to respond to our existing assessments
  3. The availability of generative systems (AI systems like ChatGPT) that can create responses to assessments

This has the potential to undermine the effectiveness of our assessments. It is particularly problematic where our assessments are meant as motivation for students to learn. With the plethora of options for students to circumvent the intent of our assessments, we need to rethink the way we design our courses.

This document takes a neutral position with regard to the student decision to use these connections and tools. These tools exist, and the tracking tools designed to identify students who have used them are resource-heavy, time-consuming to use effectively, ethically suspect and ultimately unreliable. We believe that a combination of good strategy in our assessment choices, a focus on student engagement in our classrooms and the establishment of trust relationships with students will be the most effective way forward.

Considering the purpose of assessments

The assessments we provide in our courses each serve a purpose. It can be helpful, at the start of the process of reconsidering your assessments, to chart the purposes that each of your current assessments serves. The following model is developed from the Schinske and Tanner article.

| Description of assessments | Feedback on performance | Motivator for student effort | Scaling of students | Objective knowledge |
| --- | --- | --- | --- | --- |
|  |  |  |  |  |

In completing this chart, it is likely that many assessments will fall into several categories. The new tools will impact the reliability of each of these purposes, but some more than others. The biggest impact will probably be in the motivation column.

This kind of course redesign is also an excellent time to consider overall equity in your course (see Kansman et al., 2020).

Grades as feedback on student performance 

In this view, grades are given to students to let them know how they are performing in our classes. Evaluative feedback (grades) gives them a quantitative sense of how they are doing based on your measurement of performance. Descriptive feedback (comments) is provided in addition to that grade to explain how they can improve their performance or to indicate places of particular strength.

Questions to ask:

  1. Does my approach provide an opportunity for students to improve on their performance in a way that would encourage them to return to their own work and improve upon it?
  2. Do the affordances of existing content websites and AI generation programs impede my ability to provide feedback on the performance of students to help them improve given my current assessment design?

Grades as motivator of student effort 

“If I don’t grade it, they won’t do it.” Whether you consider it a threat or encouragement, this is the idea that we create assessments in order to get students to ‘do the work’. This could encourage the work we actually want them to do directly (e.g. assessing a piece of writing a student has done to encourage effective writing) or indirectly (e.g. assessing a response to a reading to encourage the student to do the reading).

Grant and Green (2013) tell us that extrinsic motivators like grades are more effective at encouraging algorithmic or repetitive tasks, and less effective at encouraging heuristic tasks that call for creativity or concentration. Content and AI systems are excellent at supporting students to complete algorithmic tasks without the need to learn anything.

Questions to ask:

  1. Does my grading motivate students to learn (as opposed to simply complete the task)?
  2. Is the learning they are doing the learning that I intend?
  3. Do the affordances of existing content websites and AI generation programs impact the motivation of students to learn in response to my assessment motivator?

Scaling – Grades as tools for comparing students 

This is about using grades to create a ranking of students in a given class. There is a great deal of historical precedent for this (Smallwood, 1935), but it is not clear that it is necessary in the modern university. One way or the other, curving depends on the validity of the assessments.

Questions to ask:

  1. Is grading on a curve mandatory in my program?
  2. Given the affordances of existing content websites and AI generation programs, do my grades still accurately reflect the different performances of students?

Grades as an objective evaluation of student knowledge

This view uses grades to objectively reflect the knowledge that a student has of a particular subject. There will doubtless be differing opinions on whether this is possible or even desirable but, as with the scaling conversation, it is subject to the validity of the assessments.

Questions to ask:

  1. Do my grades provide reliable information about student learning?
  2. Do the affordances of existing content websites and AI generation programs allow me to accurately measure objective student knowledge given my current assessment design?
  3. Is it still necessary for students to memorize the same knowledge in the same way, given these new technologies?

Ideas for adapting the syllabus

Teach less

Reducing the amount of content that you are covering in a course can be a great way to focus on critical issues. It also gives students a chance to dig deeper into the material. This opens up new opportunities for assessment.

  1. Iterative assignments – if students are focused on one or a few major themes/projects, you can assign work that builds on a previous submission. Process writing is a good example of this: the student submits a pitch for a piece of writing as one assessment, then continues with the same piece of work, improving it based on feedback from the professor.
  2. Have students give feedback on each other’s projects – when assignments start and end in a week, the feedback that a student gives to another student does not always get reviewed or have a purpose. If students are required to continuously improve their work, this could increase their investment in that work. This approach is, of course, subject to all of the challenges involved in peer assessment.

Lower the stakes for assessment

High-stakes or very difficult assessments (20% or more for a single assessment) make the use of content or AI systems more valuable in the cost/benefit analysis for students. Lowering the stakes (regular assessments worth 10% or less) could reduce student stress levels and encourage students to do their own work. This does, however, run the risk of turning assessments into busy work. It’s a balance.

Consider total work hours

Review your syllabus and consider how many hours it would take a student to be successful in your course. Consider the time it would take a non-expert to read the material for understanding, the time to do research and the time to complete assignments in addition to time spent in class. Would an average student at your school have time to complete the assessments they’ve been assigned given the total amount of work they need to do in all their classes?
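
If you want to make this audit concrete, a rough estimator is easy to build. The sketch below is a minimal example; the reading and writing rates are illustrative rules of thumb (not figures from this document) and should be adjusted for your discipline and students.

```python
# Rough course-workload estimator: a minimal sketch using assumed
# rule-of-thumb rates, not figures from this document.

READING_WPM = 150            # assumed pace for reading-for-understanding
HOURS_PER_WRITTEN_PAGE = 6.0 # assumed hours per researched, polished page
WEEKS = 12

def reading_hours(words: int) -> float:
    """Hours to read `words` of course material carefully."""
    return words / READING_WPM / 60

def writing_hours(pages: float) -> float:
    """Hours to research and write `pages` of finished work."""
    return pages * HOURS_PER_WRITTEN_PAGE

# Hypothetical course: 3 class hours/week, ~8,000 words of reading/week,
# 15 pages of written assignments over the term.
total = (WEEKS * 3
         + reading_hours(WEEKS * 8000)
         + writing_hours(15))

print(f"Estimated total: {total:.0f} hours (~{total / WEEKS:.1f} h/week)")
```

Comparing that weekly number against a student’s full course load (plus work and family obligations) is exactly the question the paragraph above is asking.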

Do more assessed work in class

Doing assessed work in class can be an easy way to ensure that students are at least starting the work themselves. More importantly, it gives students a chance to ask those first questions that can make the work easier to engage with.

Reduce the length of assignments

Based on the total work hours conversation above, consider whether the length of your assignments serves the intent of the assessment. Does a longer essay give students more of a chance to demonstrate their knowledge, or to have their performance fairly reflected? Is it possible that longer assignments only favour students who have more disposable time to complete them?

Change the format of the submission

Not all work needs to be submitted in academic writing form. If you’re looking for students to consider a few points from a reading or from conversations in class, encourage them to submit something in point form. Changing the format of the submission can provide some flexibility for the students and also make the work more interesting to grade.

Ill-Structured (ill-defined) problems

A well-structured problem is one where the question, the approach to solving the problem, and the solution are all known to the problem setter (or, at least, are knowable). An ill-structured problem is one where one, two or all three of those are unknown or unknowable. Well-structured problems encourage students to search for the ‘right answer’. Ill-structured problems require a student to make decisions and apply their own opinion. (see Spiro et al., 1991)

Rhizomatic Learning/Heutagogy

These are approaches to learning where the curriculum in a given course is developed in partnership with students. Often called self-directed learning, these approaches aim to develop a learner’s ability to find, evaluate and incorporate available knowledge, thereby building essential skills for working in a knowledge landscape of information abundance. (see Cormier, 2008 & Blaschke, 2012)

Effort-based grading

While sometimes controversial, effort-based grading has been shown to be effective at encouraging students to take risks and engage more in their assignments while still helping them develop domain-specific knowledge (Swinton, 2010). In this model, students aren’t judged on getting work ‘correct’ but rather on their willingness to engage with the material. This can be done with a rubric, or by making assignments pass/fail.

Ungrading

This refers to finding new ways to motivate students to participate in the work without using grades. Even advanced students struggle with this approach without specific guidance on how it can be done (Koehler & Meech, 2022), so it requires a significant shift in how a course is designed.

Contract Grading

A highly interactive approach to designing a course that gives students the choice of which assignments they wish to work on and, in some cases, allows them to decide on the amount of work they will do for the course. This approach, when applied to an entire course, can potentially conflict with university and departmental guidelines, so you might want to discuss it with colleagues. It might be easier to begin with a section of a course rather than the entire course. (see Davidson, 2020)

Assignment integration across courses

Another approach, which does require coordination between faculty members, is to have assignments apply to more than one course. It could be as simple as an academic writing course supporting the essay writing in another course, or as full as program-wide integration, where a given project grows throughout a student’s movement through the program.

References

Blaschke, L. M. (2012). Heutagogy and lifelong learning: A review of heutagogical practice and self-determined learning. The International Review of Research in Open and Distributed Learning, 13(1), 56–71. https://doi.org/10.19173/irrodl.v13i1.1076

Cormier, D. (2020, June 20). How much ‘work’ should my online course be for me and my students? Dave’s Educational Blog. https://davecormier.com/edblog/2020/06/20/how-much-work-should-my-online-course-be-for-me-and-my-students/

Cormier, D. (2008). Rhizomatic education: Community as curriculum. Innovate: Journal of Online Education, 4(5), 2.

Davidson, C. (2020). Contract Grading and Peer Review. https://pressbooks.howardcc.edu/ungrading/chapter/contract-grading-and-peer-review/

Grant, D., & Green, W. B. (2013). Grades as incentives. Empirical Economics, 44(3), 1563–1592. https://doi.org/10.1007/s00181-012-0578-0

Kansman, J., et al. (2020). Intentionally addressing equity in the classroom. NSTA. Retrieved April 20, 2023, from https://www.nsta.org/journal-college-science-teaching/journal-college-science-teaching-novemberdecember-2022-1

Koehler, A. A., & Meech, S. (2022). Ungrading Learner Participation in a Student-Centered Learning Experience. TechTrends, 66(1), 78–89. https://doi.org/10.1007/s11528-021-00682-w

Radzilowicz, J. G., & Colvin, M. B. (n.d.). Reducing Course Content Without Compromising Quality.

Schinske, J., & Tanner, K. (2014). Teaching More by Grading Less (or Differently). CBE Life Sciences Education, 13(2), 159–166. https://doi.org/10.1187/cbe.CBE-14-03-0054

Smallwood, M. L. (1935). An historical study of examinations and grading systems in early American universities: A critical study of the original records of Harvard, William and Mary, Yale, Mount Holyoke, and Michigan from their founding to 1900. Harvard University Press. http://books.google.com/books?id=OMgjAAAAMAAJ

Spiro, R. J., Feltovich, P. J., Feltovich, P. L., Jacobson, M. J., & Coulson, R. L. (1991). Cognitive Flexibility, Constructivism, and Hypertext: Random Access Instruction for Advanced Knowledge Acquisition in Ill-Structured Domains. Educational Technology, 31(5), 24–33. http://www.jstor.org/stable/44427517

Swinton, O. H. (2010). The effect of effort grading on learning. Economics of Education Review, 29(6), 1176–1182. https://doi.org/10.1016/j.econedurev.2010.06.014

Trail-Constant, T. (2019). Lowering the stakes: Tips to encourage student mastery and deter cheating. FDLA Journal, 4(1). https://nsuworks.nova.edu/fdla-journal/vol4/iss1/11

Eight Wilderness Survival Tips for Adjunct Writing Instructors

You are teaching nine sections of comp classes at four universities, have no health insurance or retirement savings, and barely make ends meet. What happens if conditions suddenly become even more unstable? Enrollment dips, a department merges, or your tiny liberal arts college gets bought by a multinational conglomerate and replaces your course with a semester-long Kahoot quiz? It’s essential to be prepared, as you’re just one paycheck away from needing to scavenge for existence in the wild. Here are some survival tips, should you be so unfortunate.

1. Pack for Emergencies
Always have a “go-bag” ready with the bare necessities: some bottled water, a book of matches, and a Tupperware container filled with the stale Panera bagels left in the faculty lounge from the professional development session you didn’t attend.

2. Leave Plans with a Friend
Make sure your friends know what forgotten, grown-over forest patch you’ll call home. If you have no friends because you spend ninety hours a week grading papers, then inform one of the seven people you share an office with instead.

3. Treat Your Injuries
As you have no real outdoor skills beyond binge-watching episodes of Survivor, make sure you tend to the inevitable wounds you will receive as promptly as possible. Use your belt as a makeshift tourniquet, any extra napkins as bandages, and disinfect any cuts with the Tide stain-remover pen you use to clean off your shirt while scarfing down chicken tikka masala leftovers between classes.

4. Find or Build Shelter
The good news is you’ll no longer have to pay $3,000 a month for a studio apartment with cracked linoleum flooring. The bad news is you’re going to need some mathematical or engineering skills to build a reliable shelter to live in. Given you went to grad school for visual poetics, you should probably just try to nab a toddler’s playhouse from somebody’s backyard while they’re asleep.

5. Identify Clean Water
Let’s be honest, at this point, clean water is probably the least of your concerns.

6. Practice Fire Safety
If you packed matches in your “go-bag,” this will be easy. Just find some wood and light it on fire. Try not to sleep too close to the flames. If you didn’t pack any matches, you’re probably screwed.

7. Be Cautious with Wildlife
You will eventually come face to face with some animal as you now live in their world. Just react to them as you would a student wearing a Punisher American flag T-shirt—don’t look them directly in the eye, and don’t ask any open-ended questions about how to detect misinformation online.

8. Maintain a Positive Attitude
Though your situation might be difficult, try to focus on the positives: you no longer have to tell anybody they can’t use Barstool Sports as an academic source; you might never have to read the words “moreover” or “thusly” again; your work-life balance might even improve, even though you spend all of your free time trying to find berries and leaves you can eat without shitting yourself to death.

You could have gone to teach high school English or listened to your brother-in-law Jared and gotten that online “data analyst” certification, but no, you kept adjunct teaching. You stuck with a job that provides inadequate pay, minimal stability, and zero growth potential because you believed in your work. You stuck with this job because of your principles… principles which you can’t quite remember anymore… but at least you stuck to them. It’s time for you to start investing in your future: one where you might have to fight a raccoon for those pizza crusts the frisbee bros left in the park.

Suppressing quantum errors by scaling a surface code logical qubit

Nature, Published online: 22 February 2023; doi:10.1038/s41586-022-05434-1

A study demonstrating increasing error suppression with larger surface code logical qubits, implemented on a superconducting quantum processor.
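
For readers who want the quantitative intuition behind “increasing error suppression with larger logical qubits”: the standard below-threshold scaling relation from the surface code literature (the generic form, not an equation quoted from this paper) says the logical error rate per cycle falls exponentially with the code distance d:

```latex
% Generic surface-code scaling (standard in the QEC literature):
% \varepsilon_d is the logical error rate per cycle at code distance d,
% and \Lambda is the error-suppression factor gained per step d -> d+2.
\varepsilon_d \propto \Lambda^{-(d+1)/2},
\qquad
\Lambda \approx \frac{\varepsilon_d}{\varepsilon_{d+2}}
```

Error suppression (Λ > 1) only kicks in when physical error rates are below threshold, which is why demonstrating it on real hardware at increasing distance is the headline result.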

We tested the wastewater from planes to detect COVID among travellers – here's what we found


Small traces of many pathogens, such as viruses we may be infected with, are excreted when we go to the toilet. Ultimately, these agents find their way to municipal wastewater treatment plants where sewage samples can be taken and the levels of these pathogens measured.

This field of science is called wastewater-based epidemiology and it may be a way to track the spread of COVID across the world via airports. It’s already a powerful tool to monitor the levels of infectious diseases circulating in a community. It’s also relatively simple, inexpensive, and, most importantly, provides a snapshot of the health of a whole community (not only those people who seek medical help).

Wastewater-based epidemiology has been used for the early detection of poliovirus for decades, and it has been implemented to monitor SARS-CoV-2 (the virus that causes COVID-19) in more than 70 countries since the start of the pandemic. Wastewater surveillance allows us not only to track when SARS-CoV-2 is present, but also to identify different variants of the virus.

In a new study, we tested the wastewater from planes arriving in the UK, and at airport terminals, for SARS-CoV-2. Our findings suggest that wastewater monitoring could be a useful tool for tracking COVID at international airports and other travel hubs. This could potentially help monitor how infectious diseases cross international borders.

Detecting COVID in wastewater

We sought to detect SARS-CoV-2 in sewage taken at the arrival terminals of three international airports in the UK (Heathrow, Bristol and Edinburgh), and from about 30 planes arriving into these airports, during March 2022. For aircraft surveillance, we collected the sewage samples from vacuum trucks which remove wastewater from the aircraft.

Most samples from both the planes and the terminals contained high concentrations of SARS-CoV-2, suggesting there were many people unwittingly bringing COVID back to the UK.

On March 18 2022, the UK government lifted the requirement for unvaccinated passengers to take a pre-departure test and another test on day two after arriving. We studied sewage samples from both before and after these restrictions ended, and found high concentrations of SARS-CoV-2 regardless of when the samples were taken.

This could point to limitations in screening procedures, like pre-departure tests. That said, by March 2022 most of the UK population was vaccinated, so it’s possible the majority of passengers didn’t have to take a pre-departure test.

However, this is still worth noting as many countries, including the UK, reinstated pre-departure COVID testing for people travelling from China in early January 2023.




We previously carried out wastewater monitoring on sewage collected at hotel quarantine facilities for people arriving in the UK from red-list countries between March and July 2021.

Similar to our latest research on airport surveillance, this work confirmed that many passengers entering the UK were carrying SARS-CoV-2. In this case, all passengers arriving would have had to take a pre-departure test.

These cases may not have been caught because the infection was in its early stages when they were first tested, because the testing failed or because they contracted COVID while in transit. But this clearly shows the difficulties associated with trying to prevent infectious diseases crossing international borders.

Senior lab technician Nick Woodhall sets up a qPCR assay as part of the study. Kata Farkas, Author provided

Some limitations

Wastewater testing to monitor air travel passengers does have some drawbacks. For example, not every passenger uses the toilet on the plane. In a recent study we found that only 13% of passengers on short-haul flights and 36% of those on long-haul flights would be likely to defecate on the plane.

These data, combined with the typical concentration of SARS-CoV-2 in faeces, suggest that wastewater monitoring in this context is likely to capture about 8%–14% of all COVID cases on an aircraft. This is still a valuable addition to existing testing methods.
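
As a rough illustration of where an estimate like this comes from, here is a back-of-envelope sketch. The 13% and 36% defecation rates are from the article; the faecal-shedding probability is an invented placeholder, so the output brackets rather than reproduces the article’s 8%–14% figure (the study’s actual shedding model isn’t given here).

```python
# Back-of-envelope: what fraction of COVID cases on a flight could
# aircraft wastewater sampling capture? Defecation rates are from the
# article; the shedding probability is an assumed placeholder.

P_SHEDDING = 0.4  # assumed chance an infected person sheds detectable virus in faeces

def capture_fraction(p_defecate: float, p_shedding: float = P_SHEDDING) -> float:
    """Probability an infected passenger leaves detectable virus in the tank."""
    return p_defecate * p_shedding

for flight, p_defecate in [("short-haul", 0.13), ("long-haul", 0.36)]:
    print(f"{flight}: ~{capture_fraction(p_defecate):.0%} of cases captured")

# short-haul: ~5% of cases captured
# long-haul: ~14% of cases captured
```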




There may also be limitations in sampling and logistics. Sampling directly from aircraft is prohibited in some cases. It can be technically challenging as well, especially at large airports with hundreds of flights arriving every day. Sampling from vacuum trucks is more feasible than going directly onto the aircraft, but there is a potential risk of cross-contamination, as the trucks are not washed or disinfected frequently.

When it comes to sampling sewers in passenger terminals, the origin of the detected pathogen is unknown, and there’s a risk that the pathogen identified is from ground crew members rather than travellers. Even when a variant of interest is detected in an aircraft sample of known origin, it still raises the question of whether the infected person (or people) will remain in the country or transit onward.

A valuable tool

It’s important for all countries to monitor what potential new diseases, and new variants of established diseases, are entering their borders. Despite some possible limitations, our research suggests that wastewater-based epidemiology could be a useful tool for the surveillance of SARS-CoV-2 and other viral pathogens among international travellers.

This approach would help to better understand which pathogens are circulating globally without a need for individual testing, which can face ethical challenges and be complicated and expensive to undertake.

Indeed, a number of countries, including the US, Canada and EU member states, have already implemented wastewater monitoring on planes to identify novel SARS-CoV-2 variants.

The Conversation

Kata Farkas receives funding from the Welsh Government.

Davey Jones receives funding from Welsh Government and the UK Health Security Agency.

ChatGPT search – Autotune for knowledge

By: dave

Lots of interesting conversation going on in my community right now about the implications of ChatGPT style tools for the education system. Will students use it to cheat? Will we incorporate it in our classrooms? Can we use it to do mundane tasks for us? What are the ethical implications of using this kind of software?

My read is that these tools will do to education what the calculator did to math education. And we’re still fighting about that 40 years later.

Those are important conversations, but I want to talk about something else. I’m interested in how these tools are going to change our relationship to learning, work and knowledge. In a conversation with Nick Baker this morning, we were trying to map out the future workflow of the average person doing an average task.

  • Step 1 – Go to FutureGPT search
  • Step 2 – Ask FutureGPT ‘what does a government middle manager need to know about the Martian refugee crisis. Include three references and tell me at a grade 12 level using the voice of an expert talking to their boss’
  • Step 3 – Look over the response, click send message, include your mid-level manager’s email address. (A rough sketch of steps 1 and 2 with today’s tools follows below.)
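
That future is close enough that the first two steps can already be scripted. A minimal sketch, assuming OpenAI’s Python client with an API key in the environment; the model name and prompt are illustrative, and ‘FutureGPT’ itself is this post’s hypothetical:

```python
# A sketch of the hypothetical 'FutureGPT' workflow with today's tools,
# assuming OpenAI's Python client (pip install openai) and OPENAI_API_KEY
# set in the environment. Model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

prompt = (
    "What does a government middle manager need to know about the "
    "Martian refugee crisis? Include three references and answer at a "
    "grade 12 level, in the voice of an expert talking to their boss."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)

# Step 3 is still human: look it over before emailing it to your manager.
print(response.choices[0].message.content)
```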

I figure we’re, maybe, two years away from this? But who knows, we might have this before I post this message.

What does this mean for knowledge?

Thirty years ago, when I was in college, you went to the card catalogue, found a book that might be relevant, and walked down a long row of library shelves to find your book (once you remembered how the system worked). On that shelf were a bunch of other books that had been curated by 50 years of librarians to be similar in nature (in one way or another) to the book you were looking for.

The librarians were my algorithm.

Right now, I’m still using a search engine, and a bunch of different practices, to try to find the information I want, curated by other people somewhere out there on the Internet. I put in a search string, look at what I get back from the algorithm, make some adjustments, and try again. Throughout the process I land on websites created by humans about the issue I’m interested in.

The search engine algorithm brings me to a human (probably) made knowledge space.

Starting this year, we’re going to get back a mishmash of all the information that is available on the Internet, sorted by mysterious practices (popularity, number of occurrences, validity of sources if we’re lucky) and packaged neatly into a narrative. The algorithm is going to convert that information into knowledge for me.

The algorithm presents me with the knowledge, already packaged.

Autotune for knowledge

In 1998, Cher’s ‘Believe’ hit it big as the first autotuned song to sell tons of, I guess, CDs. Autotuning takes a human voice and ‘removes the flaws’: any place where you might be off key or pitchy, or where you slowed down or sped up in your singing. Musical purists have been decrying the process ever since, saying it removes the human part of the music. It’s everywhere now. If you listen carefully to most popular songs you can hear the uniformity in the sound.

That’s what’s going to happen to our daily knowledge use.

This, to me, is the real danger. These tools are so convenient, so useful, and save so much time that it’s hard to see how anyone will rationalize taking the time to actually look into issues and check for nuance. Who is going to pay you to take a week to learn about something deeply enough to give an informed opinion, when something that looks like an informed opinion can be generated in seconds?

The real danger is not to people who are experts in their fields. Super experts in every field will continue to do what they have always done. All of us, however, are novices in almost everything we do. Most of us will never be experts in anything. The vast majority of the human experience of learning about something is done at the novice level.

That experience is about to be autotuned.
