Agile Learning

AI as Answer Machine? Or Wordsmith? Or Something Else?

Last month on the blog, I floated the idea of thinking about generative AI tools like ChatGPT, Bing, and Bard as technologies for learning. I drew an analogy to the digital camera I used as I learned photography. As I was developing a conceptual understanding of photography, I was also learning how to use the tool in my hands, with all its knobs and buttons, and these two kinds of learning informed and supported each other. Could we use generative AI tools in a similar fashion, teaching students to use the tools well as part of learning other skills, like writing and researching? If so, it might help to know what kind of tools we’re dealing with when we talk about ChatGPT and other AI text generation tools.

For instance, I saw this post by Simon Harris go around Mastodon a couple of weeks ago:

15yo: I use ChatGPT quite a bit.
Me: Oh?
15yo: Today, I asked it to give me the last digit of Pi.
Me: 🤔 but there is no…
15yo: …exactly 😁
Me: 😂
15yo: Apparently, it’s 4 😂

This is funny because ChatGPT is so, so wrong about pi. As an irrational number, pi has a nonterminating decimal representation; there’s no last digit of pi. The joke relies on the assumption that ChatGPT is supposed to be some kind of answer machine, a tool for answering questions correctly. ChatGPT, however, is famously not that. It’s well known for “hallucinating” facts and inventing citations. There are, in fact, a lot of educators who worry that students will rely too much on ChatGPT and similar tools because these tools can produce text that is factually incorrect. For example, check out this YouTube video of a chemistry instructor evaluating the accuracy of ChatGPT’s answers to various organic chemistry questions. Spoiler: It didn’t do that great.

A vintage "Underwood" manual typewriter seen in a window displaySo if these tools aren’t answer machines, what are they? I would argue that ChatGPT and similar tools are actually wordsmiths. They are powered by large language models that mix words together to create fluent prose. Ask it to draft an essay for you on a given topic, and it will put words together to do just that. Ask it to revise that draft based on your feedback, and it will do that, too. See, for instance, how Ethan Mollick used Bing to help him craft the final paragraph of his April 26th newsletter. Ask it to summarize a long text, and, because it’s good with words, you’ll get a pretty decent summary. It’s a wordsmith that actually doesn’t know what it’s writing but still produces intelligible texts.

This is, in fact, the other reason educators are worried about AI text generators: because these tools can quickly produce acceptable responses to pretty standard essay prompts, students might use them to cheat, whether the essay question is of the “writing as assessment,” “writing to learn,” or “learn to write” variety. Because ChatGPT and its peers are good with words.

Midjourney-generated image of a team of robots playing basketball

I’m struck by how the initial reporting and framing of these tools affect the ways we think about them. While AI text generators were seen as answer machines (albeit flawed ones), AI image generators have been seen as “smiths” of a different kind, working with pixels to create images. A tool like ChatGPT making stuff up is seen as a problem, but if I ask Midjourney or DALL-E (a couple of popular image generators) to give me a scene of robots playing basketball, no one worries about the result not being real-world realistic.

If AI text generators are tools for learning, as I argued on the blog last month, then I think it’s helpful to think of them as wordsmithing tools. Does a student struggle to come up with counterarguments for their thesis? Ask an AI tool to generate a few. Does a student freeze up when confronted with a blank page? Ask an AI tool to write a first draft. Does a student find it hard to adopt the right voice for their writing? Ask an AI tool to rewrite a draft essay in different voices. These are all wordsmithing tasks that ChatGPT and its ilk can handle pretty well.

A yellow balance bike for kids, with wheels but no pedals, leaning against a wall
“Balance Bike,” Justin Shanks, Flickr (CC)

Please note that I’m not suggesting we shouldn’t help students learn to develop counterarguments or to get past writer’s block or to figure out their writing voice. I am suggesting, however, that AI tools might be used to help students learn these skills, just as my digital camera helped me learn photography. Cynthia Alby uses the metaphor of learning to ride a bicycle with training wheels to talk about how AI tools might provide a kind of scaffolding for students learning writing skills. When I heard that metaphor, I was reminded of a 2012 blog post by Mark Sample, in which he talks about balance bikes as a different way to teach kids to ride bicycles. Training wheels give kids help with balancing while they learn to pedal; a balance bike lets kids ignore pedaling while they learn to balance, which is apparently easier for most kids. We might need to think carefully about where our students need the kind of support that AI tools can provide, and the answer might vary by student.

I’ve been using the training wheels and balance bike metaphors in my recent talks on teaching with AI, and I was all ready to bust out the “AI as wordsmith” argument above at a talk I gave earlier this week. But then I signed up for ChatGPT Plus to see what ChatGPT’s plugins are all about. Back in March, OpenAI, the company behind ChatGPT, announced that the tool would have plugins that allowed it to communicate (in a fashion) with other web tools and websites. It’s taken me three months, but I finally started playing around with these plugins. I quickly realized that some of the advice I had given faculty earlier in the year about designing “ChatGPT-proof” assignments hadn’t been accurate for a while!

For instance, if I ask the free version of ChatGPT, which uses GPT-3.5, to tell me about the May 2023 Supreme Court decision that dealt with Andy Warhol’s art, here’s the response I get:

I apologize for the confusion, but as of my knowledge cutoff in September 2021, I’m not aware of any specific Supreme Court decision in May 2023 related to Andy Warhol’s art. It’s possible that such a decision may have occurred after my knowledge cutoff or it may be a hypothetical scenario that hasn’t actually taken place. I recommend checking reliable news sources or legal references to get the most accurate and up-to-date information on any recent Supreme Court decisions regarding Andy Warhol’s art.

If I think about ChatGPT as an answer machine, I’m not getting the answer I want, but at least the tool is telling me what it doesn’t know. That’s an improvement over how the chatbot presented information at the start of 2023. But watch what happens when I use the same prompt in ChatGPT Plus with the VoxScript plugin enabled:

In May 2023, the U.S. Supreme Court ruled against the Andy Warhol Foundation in a copyright infringement case. The case was centered around a series of silk screen images that Warhol created based on a photograph of the late musician Prince, taken by photographer Lynn Goldsmith in 1981.

The court ruled in a 7-2 vote that Warhol infringed on Goldsmith’s copyright. The case hinged on whether Warhol’s images of Prince transformed Goldsmith’s photograph to a great enough degree to stave off claims of copyright infringement and therefore be considered as fair use under copyright law.

Goldsmith had sued the Andy Warhol Foundation for the Visual Arts (AWF) for copyright infringement after the foundation licensed an image of Warhol’s titled “Orange Prince” based on Goldsmith’s image of the pop artist to Condé Nast in 2016 for use in its publication Vanity Fair…

It goes on for another couple of paragraphs, and it gives me the news source it used to generate this summary of the case, a May 18th NPR report. This version of ChatGPT is indeed acting like an answer machine!
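
As best I can tell, a plugin like VoxScript boils down to a retrieve-then-summarize pattern: fetch current content from the web, drop it into the model’s context window, and let the wordsmith do its thing. I don’t know VoxScript’s internals, so treat the sketch below as an illustration of the general pattern rather than the actual implementation; the URL, prompt, and function name are placeholders, and I’m assuming the OpenAI Python library.

```python
# A toy sketch of the retrieve-then-summarize pattern behind plugins
# like VoxScript. Not VoxScript's actual code; the URL, prompt, and
# function name are invented for illustration.
import requests
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def answer_with_source(question: str, source_url: str) -> str:
    # Step 1: retrieve content that postdates the model's training data.
    # (A real plugin would extract clean article text, not raw HTML.)
    page_text = requests.get(source_url, timeout=10).text[:8000]

    # Step 2: put the retrieved text into the model's context window
    # and ask the model to answer from that source alone.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Answer using only the provided source text, "
                        "and cite the source URL."},
            {"role": "user",
             "content": f"Source ({source_url}):\n{page_text}\n\n"
                        f"Question: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer_with_source(
    "What did the Supreme Court decide in May 2023 about Andy Warhol's art?",
    "https://www.npr.org/...",  # placeholder for a real article URL
))
```

The division of labor is the point: the retrieval step supplies facts the model never saw in training, and the language model contributes only the wordsmithing.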

Quick sidebar: It is also now possible to share one’s ChatGPT conversations with others. A few months ago, you needed a third-party tool to do so, but now ChatGPT has a share button for every chat. Thus the argument that there’s no “paper trail” for how one uses ChatGPT is no longer true.

Screenshot of the Bing query about the 2023 Andy Warhol Supreme Court case

I used ChatGPT for the above examples, but please remember that ChatGPT isn’t the only game in town. I asked Bing about the Andy Warhol case, and I got a similar summary, this time with three different news sources cited. Google Bard gave me a similar summary and added some text about how fair use works in the United States. Its sources, however, were meeting notes from a 2004 Congressional subcommittee hearing and a Quizlet created for maybe a law course? Your mileage may vary with these tools.

So are these tools now, in fact, answer machines? I’m going to argue that while they can be used in that way, a better way to think of them is as interfaces to other systems. Just as Siri or Alexa act as a voice-based interface to the apps on your digital devices, ChatGPT and Bing and such are text-based interfaces to, for instance, all the transcripts attached to all the YouTube videos. These tools are still wordsmiths, but wordsmiths used as interfaces.

To use an example from left field, the fast food chain Wendy’s recently announced that it was working on an AI chatbot to take drive-thru orders. That’s AI-as-interface: the chatbot, which is good with words, takes spoken orders for a cheeseburger and a Frosty and turns them into an order in the restaurant’s food production system. For a more academic example, consider Elicit, an AI research assistant that can help you find research in its corpus of 115 million academic papers that might address your research question. Ask it a question like “How should I form groups of students when using group work in the college classroom?” and (because it’s good with words) it will return a table of potentially relevant papers, summarized and tagged by intervention, sample size, and outcomes.
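
To make the drive-thru example concrete, here’s a toy sketch of the interface pattern: natural language out front, structured data behind. Everything here (the menu, the JSON format, the prompt, the function name) is invented for illustration; it’s certainly not Wendy’s actual system.

```python
# A toy sketch of AI-as-interface: a wordsmith out front, a structured
# ordering system behind. Menu, prompt, and schema are all hypothetical.
import json
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

MENU = ["cheeseburger", "fries", "Frosty"]  # hypothetical menu

def take_order(customer_speech: str) -> list:
    """Turn free-form customer language into a structured order."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You take drive-thru orders. The menu is: "
                        + ", ".join(MENU)
                        + '. Reply with only a JSON array like '
                          '[{"item": "cheeseburger", "quantity": 1}].'},
            {"role": "user", "content": customer_speech},
        ],
    )
    # The structured side of the interface: parse the model's words into
    # data the kitchen's systems could consume. (A real system would
    # validate this output before trusting it.)
    return json.loads(response.choices[0].message.content)

print(take_order("Uh, gimme two cheeseburgers and a Frosty, please."))
# e.g., [{"item": "cheeseburger", "quantity": 2},
#        {"item": "Frosty", "quantity": 1}]
```

The wordsmith handles the messy human language; the JSON it emits is what the rest of the system actually consumes. That hand-off is what makes these tools interfaces rather than answer machines.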

AI as answer machine, AI as wordsmith, AI as interface… I’m not sure where we’ll land with how we think about these tools, but I believe these framings are helpful in assessing what the tools can and cannot do and what roles they might play as learning technologies.

One more caveat: Most of what I’ve presented here is focused squarely on AI text generation tools powered by large language models. There are lots of other kinds of AI tools, like Cornell University’s Merlin app, which can identify birds by their songs, or that AI-powered agent that excelled at the strategic negotiation game Diplomacy, that don’t fall into any of the categories I’ve described here.

Generative AI as a Learning Technology

On last week’s episode of the Intentional Teaching podcast, I talked with educators and authors James Lang and Michelle D. Miller about ways we might rethink our assignments and courses in light of new generative AI tools like ChatGPT. Since we situated that conversation within the context of other technological disruptions to teaching and learning, including the internet and Wikipedia, perhaps it was inevitable that one of us drew a comparison to the advent of handheld calculators in math classes.

Jim pointed out that we don’t just hand kindergartners calculators and expect them to do anything useful. We have to teach kids numeracy skills before they start using calculators so that they know what they’re doing with the calculators. In the same way, Jim argued, we shouldn’t have first-year composition students use ChatGPT to help them outline their essays since those students need to develop their own pre-writing and outlining skills. The chatbot might produce a sensible first draft, but it would also short-circuit the student’s learning. ChatGPT might be more appropriate for use by more experienced writers, who can use the tool to save time, just as a more experienced math student would use a calculator for efficiency.

I generally agree with this analysis, but I had a different kind of experience using calculators in school. When I learned calculus my senior year, we used graphing calculators regularly both in and out of class. My memories are admittedly a little hazy now, but I believe that there was something of a symbiotic relationship between learning the concepts of calculus and learning how to use a calculator to manage the calculations of calculus. For instance, we might try to come up with a polynomial that had certain roots or certain slope properties, then graph our function using the calculator to see if we were correct. The tool provided feedback on our math practice, while we also got better at using the tool.

Four people in the woods looking at distant birds through binoculars and cameras
That’s me there with the telephoto lens on a bird walk.

Here’s another analogy: photography. I’m an amateur photographer. (Actually, I once got paid $75 to photograph an event, so technically I’m a professional photographer.) When I was learning photography, there was a lot of conceptual learning about light and depth of field and composition but also learning how to use my digital camera, what all the knobs and buttons did. As I experimented with taking pictures, my use of the camera helped sharpen my understanding of the relevant concepts of photography. And my better understanding of those concepts in turn informed the ways I used the knobs and buttons on the camera to take better photos.

Might AI tools like ChatGPT serve a similar role, at least for students with a certain level of foundational writing skills? It’s already quite easy to ask ChatGPT (or Bing or one of the other chatbots powered by large language models) to draft a piece of writing for you, and then to give it feedback or corrections to make. For an example, check out this post by John Warner in which he coaches ChatGPT to write a short story and then make it a better story. John is already an accomplished writer, but might a more novice writer use prompt refinement in this way to develop their own writing skills, much like I would use a bunch of different settings on my camera to take the same photo so I could better learn what those settings do?

All metaphors are wrong (to quote my colleague Nancy Chick), and none of the analogies I’ve laid out here are perfect. But I think there is some value in thinking about ChatGPT, etc., as similar to technologies like digital cameras or graphing calculators that we can use to learn skills and sharpen our craft as we learn to manipulate the tools.

Alpha Players and In-Class Group Work

File this under “designing board games is a lot like teaching”…

The cooperative board game Pandemic, designed by Matt Leacock, showing a world map with “virus cubes” spreading

I was recently reading the new issue of Senet magazine, a publication whose tagline is “board games are beautiful.” The issue featured an interview with Matt Leacock, designer of the popular cooperative board game Pandemic. In a cooperative board game, all the players work together to defeat the game. There’s no single winner; either everyone wins or the game wins. What happens when one player starts telling all the other players what they should do? That’s called the alpha player problem, and it can really take the fun out of a cooperative game.

Leacock was asked about the alpha player problem and what a game designer can do about it. “The designer… has a responsibility to create mechanisms where everyone can shine and one player can’t dominate.” What are some mechanisms that can prevent or at least minimize the alpha player problem? Leacock identified three strategies:

  1. Hidden information. Structure the game so that no one player has access to all the relevant information. “It’s difficult to be domineering if the other person has autonomy or ownership of that information,” Leacock said. He also noted that hiding information in a cooperative game can feel artificial.
  2. Wicked problems. If the game is hard enough, no one player can run the table. Leacock described his forthcoming cooperative game about climate change, Daybreak, which I mentioned here on the blog last fall, as hard in this way. “There are so many moving parts that trying to internalize the entire game state is very taxing.”
  3. Nuance problems. These are challenges in a game “where there are many right answers.” Leacock said he enjoys these kinds of challenges, since they “lead to lots of discussions.”

As I was reading the Leacock interview, I couldn’t help but think of analogies to the college classroom. When integrating group work in a class session, there’s a risk that some groups will have an “alpha student,” that is, a group member who takes charge in an unhelpful way. Not only can this make for some uncomfortable social situations, it can also deprive other group members of opportunities to learn.

How can teachers try to prevent or minimize the alpha student problem? Leacock’s three game design strategies transfer very well to educational settings!

  1. Hidden information. When students are given access to different resources or different ways to prepare for group interactions, no one student has all the information needed to tackle the group work. Consider a jigsaw activity where each member of a group brings different ideas or resources to the table, drawn from a previous set of group interactions. Or consider structured reading groups, an approach that involves giving different group members different roles to play as they prepare for and participate in group work.
  2. Wicked problems. Giving students a sufficiently challenging or complex problem, one that no single student can solve, can create a sense of interdependence. Researchers in the Netherlands led by Femke Kirschner studied how individuals and groups went about solving both low-complexity and high-complexity problems. In their 2011 study, they found that group work had little relative impact on student learning over individual work for the low-complexity tasks. For the high-complexity tasks, however, group work shined.
  3. Nuance problems. When there’s no single right answer to a question, it’s a lot harder for one student to dominate group discussions. That can still happen, but if you’ve framed the problem at hand as one that permits multiple interesting and useful answers, there’s more reason for all the students in a group to weigh in and share their perspectives and ideas. And these problems exist in all fields, even “high consensus” fields like the natural sciences. There are often multiple ways to get to a single answer, or ethical questions to explore.

How do you go about structuring group work to avoid “alpha students”? Do your methods map onto any of these three strategies?

For more on the intersection of games and teaching, see my “Learning at Play” blog posts or my Leading Lines podcast interviews with Patrick Rael, Max Seidman, and Kimberly Rogers.

On ChatGPT, Boilerplate, and Ghost Writing

Last week, the Office of Equity, Diversity, and Inclusion at Vanderbilt University’s Peabody College sent an email to students in response to the recent mass shooting at Michigan State University. These emails offering condolences and support and links to resources are all too common in higher education today and they often rely on boilerplate language. What’s not common is the use of ChatGPT, the AI text generator from OpenAI, to write such emails, but that’s what the Peabody administrators did. As reported in the Vanderbilt Hustler, the email included a parenthetical before the signature reading, “Paraphrase from OpenAI’s ChatGPT AI language model, personal communication, February 15, 2023.”

This did not land well. Here are some of the student reactions reported in the Hustler:

  • “Disgusting… Deans, provosts, and the chancellor: Do more. Do anything. Lead us into a better future with genuine, human empathy, not a robot.”
  • “It’s hard to take a message seriously when I know that the sender didn’t even take the time to put their genuine thoughts and feelings into words. In times of tragedies such as this, we need more, not less humanity.”
  • “Automating messages on grief and crisis is the most on-the-nose, explicit recognition that we as students are more customers than a community to the Vanderbilt administration.”

The office sent a follow-up email with an apology, noting that the use of ChatGPT was “poor judgment.” Two administrators in that office have now “stepped back” from their positions as a result of the controversy.

What should we make of this? Let’s see… Someone uses ChatGPT to generate text for a writing task, and then the audience for that writing reacts negatively because they consider the use of an AI text generator to be wildly inappropriate for that kind of writing task. Sound familiar? This is very much how some faculty responded to student use of ChatGPT for writing assignments back in December and January. Many faculty and, indeed, entire universities, moved to ban the use of ChatGPT on assignments.

A variety of old printing plates arranged as an art object

I suspect that administrators at Vanderbilt and elsewhere now see ChatGPT as essentially banned by students for use in administrative communications. Or rather, I suspect administrators will avoid using ChatGPT for certain kinds of communications. As Matt Reed points out in his Inside Higher Ed column this week, the use of boilerplate text or even ChatGPT output for, say, an email about parking logistics probably wouldn’t generate the same kind of student outcry. Students don’t expect that kind of email to be personal in the same way an email about a mass shooting might be.

The problem for faculty, administrators, and students is that we don’t yet have established norms and expectations for the use of AI generated text in higher education. Consider that the Vanderbilt email cited its use of ChatGPT. That email was sent out by an administrator who is also a faculty member, and it’s pretty easy for me to imagine that she felt she was modeling best practices by citing her use of ChatGPT in the email. That’s what many faculty are recommending to their students this semester, particularly the ones who are exploring the use of these tools as writing assistants.

Which norm is “right”? The one that says it’s okay to use ChatGPT to help you write, as long as you acknowledge your use of the tool? Or the one that says for certain kinds of writing, the use of ChatGPT is wildly inappropriate? As a community, we haven’t figured that out yet, and I worry about imposing consequences on administrators or students when we don’t have established norms and expectations yet.

Here’s a sign that the norms haven’t been well established: One of the students quoted in that Hustler article said “that using ChatGPT is presumed to be cheating in academic settings.” That’s not actually true. Some faculty might presume that, and some faculty might have communicated that to the student. But the way the Vanderbilt Honor Code works, it’s up to individual instructors to determine for students what counts as unauthorized aid in their individual courses. And I know there are faculty at Vanderbilt who are exploring the use of ChatGPT in writing with their students, without presuming its use is cheating. The student is wrong in their assumption, but can you fault them for that given the heated discourse about ChatGPT over the last three months?

This week I decided it was finally time to watch the 2013 sci-fi movie Her, directed by Spike Jonze and starring Joaquin Phoenix. The film is about a man who develops a romantic relationship with his artificially intelligent personal assistant, voiced by actress Scarlett Johansson. Without spoiling the movie, I’ll note that the lead character works at a business where people ghost-write handwritten letters for customers, often very personal letters and often working with the same clients for years.

Given the reaction last week to the Vanderbilt email, it struck me as odd that in the future world of Her it’s socially acceptable to hire other humans to ghost-write letters to loved ones. Surely that’s a task that we wouldn’t outsource, right? On the other hand, maybe Her takes place in a world where the use of AIs to write such letters is frowned upon, but having other humans write such letters is fine. That doesn’t make much logical sense to me, since it’s the ghost-writing that strikes me as impersonal, not the ghost writer, but I can also imagine the kinds of super-heated discourse we’re seeing in the real world about ChatGPT leading us to illogical places in how we think about these tools.

When will it be acceptable to have some writing assistance from a tool like ChatGPT? Emails about tragedies, maybe not. Emails about parking logistics, maybe so. Might there be some kinds of writing assignments in the classes we teach where ChatGPT is disallowed, but others where its use is allowed or even encouraged? My hope is that, as norms around these tools are developed in higher education, we’ll take our time and figure out these questions thoughtfully together.

Graduate Teaching Fellows Program Ending (Too Soon)

After 20 years, the Graduate Teaching Fellows program at the Vanderbilt Center for Teaching is coming to an end this year. The GTF program is a fantastic professional development opportunity for Vanderbilt graduate students interested in faculty and/or faculty development careers. GTFs facilitate teaching consultations for graduate students, design and lead workshops on various teaching topics, and partner with CFT senior staff on learning communities, writing projects, and more. GTF alumni have gone on to all kinds of positions around (and outside!) academia, including several who are now working at centers for teaching and learning.

Sadly, Vanderbilt’s Provost is cancelling the program as part of an effort to ensure six years of graduate student funding through students’ home departments and programs. That’s an admirable goal, but for grad students with a keen interest in the teaching missions of colleges and universities, the end of the GTF program is a loss. For many of our GTFs, serving as a teaching assistant for a sixth or seventh time wasn’t going to improve their CV, but having the chance to work at a teaching center for a year helped launch them into academic careers.

I posted the above on LinkedIn, where I am connected with a number of alumni of the program. Several weighed in with comments about what the program and their time at the Center for Teaching meant to them professionally. Here are a few of those comments:

  • “My GTF work coaching my peers and introducing international grad student instructors to the US classroom is still something that I rely on as a lawyer advising clients and coaching colleagues. The things I learned observing the way instruction worked outside my wheel house are still valuable to me when I dig into a new industry or expert report.” – Jeff Sheehan, GTF 2004-2005
  • “I would not have finished my degree if I could not have balanced the disciplinary pressures with reminders of “the outside world” by working at the CfT. Not to mention the friendships I would missed out on! Time away from the department has value beyond lines on a CV.” – Jessica Riviere, GTF 2013-2014
  • “my GTF experience made me a better faculty member, and now (since leaving my faculty position) made me a stronger candidate for my current role in workforce training. My CFT experience was what attracted industry employers to invite me for interviews, which was critical when we consider the lackluster state of the humanities job market.” – Danielle Picard, GTF 2014-2016
  • “I wouldn’t have gotten my three jobs in academia without the CFT, which also prepared me for my current college admissions position. This is such a shame!” – Alexandra Oxner, GTF 2018-2019

And here are a few more endorsements of the program, which I’ve copied from the program webpage while it’s still available:

  • “Having the opportunity to focus on and talk about teaching and best practices and become familiar with research and the literature on the scholarship of teaching and learning is an invaluable experience for any future educator.” – Lily Clairborne, GTF 2010-2011
  • “In deep discussion, collaboration, and teaching practice with colleagues and mentors at the CFT, I grew as a teacher to become better equipped to tackle the social, political, and intellectual challenges of contemporary university teaching. These personal and professional experiences instilled a desire that remains with me today: to develop further as a scholar not only in my field, but also as a teacher in my field.” – Christian Ehret, GTF 2014-2015
  • “Being a GTF required me to critically reflect on my pedagogical choices and to think deeply about the kind of teacher I wanted to be, which prepared me for tough interview questions on the job market and for my first year as a professor. Several search committees commented on my teaching experience and the ways that serving as a GTF made me a more competitive candidate.” – Robert Marx, GTF 2018-2019

Personally, as an assistant director and later director at the Vanderbilt Center for Teaching, I enjoyed getting to know, to mentor, and to learn from our Graduate Teaching Fellows. I was at the CFT for 17 of the program’s 20 years, so I worked with most of the GTFs! They added so much to the work of the CFT over the years, and I’m proud to see what all they’ve accomplished after their time in the program. The end of the program is a loss for graduate students at the university.

Catching Up on the Intentional Teaching Podcast

Last November I launched the Intentional Teaching podcast featuring interviews with a variety of educators aimed at helping listeners develop foundational teaching skills and explore new ideas in teaching. I published the sixth episode last week, and as I write this the podcast has a total of 1,239 downloads. Not too shabby for a brand-new podcast! Also, I’m sure I’ve read somewhere that over half of all podcasts don’t make it past five episodes, and I’m still going strong. I have three interviews already recorded for future episodes and another interview scheduled for later this week. I really appreciate the opportunity the podcast gives me to reach out to and learn from colleagues across higher education who are doing interesting things!

I thought I would share a brief tour of the first six episodes here on the blog with a few of my favorite quotes from my guests. You can listen to particular episodes below, subscribe to the podcast using any number of apps, or search for “Intentional Teaching” in your podcast app of choice.

In Episode 1, I interviewed Susan Hrach, director of the Faculty Center for the Enhancement of Teaching and Learning at Columbus State University and the author of the 2021 book Minding Bodies: How Physical Space, Sensation, and Movement Affect Learning. During the COVID-19 pandemic, many faculty became more aware of the role that our students’ bodies and physical environments play in their learning. I knew I wanted to explore this idea of embodied learning on the podcast, and I knew that there wouldn’t be a better place to start than by talking with Susan!

In the interview, Susan describes some of the ways we use our bodies for learning, and she shares practical advice for faculty teaching on-site or online for recognizing and fostering embodied learning. “I’ve really started thinking about ways to design assignments that do not chain my students to a computer,” Susan said during the interview, and I appreciated thinking through this challenge with her. My favorite quote of the episode was “We need to be braver about moving furniture,” something Susan said as an encouragement for instructors to be more intentional in how they use their physical classrooms.

In Episode 2, I talked with Robert Cummings, associate professor of writing and rhetoric and executive director of academic innovation at the University of Mississippi. I’m working this year at UM’s Center for Excellence in Teaching and Learning, and Bob is one of my favorite new colleagues there. Back in November, ChatGPT had just launched as the first free and easy-to-use interface to OpenAI’s GPT-3.5 large language model, a text generator powered by artificial intelligence. There’s been a lot of talk in higher ed about ChatGPT and what it means for teaching and learning since November, and I am glad I got to talk to Bob about all this at the very start of this big conversation.

Bob helped me frame the advent of these AI tools around the ways they might help students learn and write and create. He also pointed me to the group YACHT, which used generative AI tools to help them write music and lyrics for the songs on their new album. Might we have our students use ChatGPT in a similar fashion? He pointed out that a lot of students have trouble starting a piece of writing; they get overwhelmed by the blank page. Having ChatGPT output a very rough first draft or set of ideas might get students past this hurdle.

Bob also pointed out something provocative about where we find ourselves as humans who write and speak:

“The real problem that we’re going to experience as a public is we’ve never been in a situation… where writing has been divorced from thought. Previously, every time you encountered writing somewhere, behind that writing was thinking. It might not have been great thinking, sure. It might have been a stranger, it might have been a group of people, it might have been plagiarism, but at the core, somewhere there was thinking. There no longer is.”

In Episode 3, I interviewed Melinda Owens, assistant teaching professor in neurobiology at the University of California San Diego and one of the lead developers of Decibel Analysis for Research in Teaching (DART). DART can take an audio recording of a class session and, with fairly good accuracy, identify the time periods in that class session where active learning was taking place. That is, it can distinguish between one person talking (as in a lecture), multiple people talking (as in small group work), and no one talking. For the STEM courses on which DART was trained, this gives a good estimate of the amount of class time spent in active learning (students talking or thinking quietly) and traditional lecturing (the instructor talking).

Here’s Melinda on the value of DART as a self-assessment tool for instructors:

“I think it’s really interesting to see just how much of the time do you spend talking versus do the students spend talking? I think that is the most important thing because when the students are talking or they’re silent, they’re engaging, those are the times when their minds are working… Having it black and white on the paper in the form of [sound] graphs really made it, it’s obvious. People would notice things like, oh, I do activities at the beginning and end of class, but I just talk in the middle. Or things like, oh gosh by the middle of the term it’s all single voice. Or, oh, that lecture, I hate that lecture. I talk too much.”

DART is free for anyone to use. For details, see the DART website.
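
Out of curiosity about how far volume alone can get you, here’s a toy sketch of the kind of decibel-based segmentation DART describes. To be clear, this is not DART’s algorithm (their classifier is trained on annotated recordings); the silence threshold and the variability heuristic below are invented for illustration.

```python
# A toy, volume-only sketch inspired by DART's three categories.
# Not DART's actual model; the silence threshold and the variability
# heuristic are invented for illustration.
import numpy as np

def label_windows(samples: np.ndarray, rate: int, window_sec: float = 2.0):
    """Label each window of a mono recording (floats in [-1, 1]) as
    silence, single voice, or multiple voices."""
    window = int(rate * window_sec)
    labels = []
    for start in range(0, len(samples) - window + 1, window):
        chunk = samples[start:start + window].astype(float)
        rms = np.sqrt(np.mean(chunk ** 2))
        if rms < 0.01:  # invented threshold: quiet room, e.g. silent thinking
            labels.append("silence")
        else:
            # Crude heuristic: a lone lecturer leaves brief quiet gaps, so
            # short-term loudness varies a lot; many overlapping voices
            # keep the room uniformly loud.
            slices = np.array_split(chunk, 20)
            short_rms = np.array([np.sqrt(np.mean(s ** 2)) for s in slices])
            variability = short_rms.std() / (short_rms.mean() + 1e-9)
            labels.append("single voice" if variability > 0.5
                          else "multiple voices")
    return labels

# Usage, assuming a mono WAV loaded with the soundfile library:
# import soundfile as sf
# samples, rate = sf.read("class_session.wav")
# print(label_windows(samples, rate))
```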

In Episode 4, I talked with Juan Gutiérrez, professor and chair of mathematics at the University of Texas at San Antonio. One of the areas I’ve been looking at while at the University of Mississippi is student success in the so-called “service” math courses. Those are the math courses that students not majoring in mathematics take in fulfillment of general education or program requirements. Many students struggle to pass some of these courses, and I heard that UTSA was making great strides in this area.

Juan shared that in just a two-year period, the DFW rate (that’s drop-fail-withdraw) in the UTSA math service courses dropped from 35% to 25%. That’s a huge improvement in student outcomes, especially for a department that teaches eight or nine thousand students each year; a ten-point drop at that scale means roughly 800 to 900 fewer students receiving a D or F or withdrawing each year. In the interview, Juan described the strategies that his department has used to make that kind of change.

For instance, UTSA didn’t focus on course redesign. Instead, they worked to align course learning outcomes up and down the service courses. Here’s Juan with a strong argument for this approach:

“We need to take a systemic approach. This Petri dish approach in which we take every course individually… It’s absurd. It’s creating islands that are disconnected from what is needed, which is a continuing curriculum. We don’t want a collection of cliffs students have to face every time that they go to a new course. We want a smooth ramp. We want a function with an infinite number of derivatives.”

In Episode 5, I interviewed Mary-Ann Winkelmes, founder and director of the TILT Higher Ed project. TILT stands for “transparency in learning and teaching,” and the project works with instructors and institutions to practice transparent course and assignment design. With all the conversation in higher education today about rigor and flexibility, I thought this would be a perfect time to talk with Mary-Ann about transparency in teaching and learning.

In addition to sharing loads of practical strategies for making the goals, tasks, and assessments of assignments more transparent to students, Mary-Ann eloquently argued for the value of such transparency:

“How do we measure [student] success? If what we aim to measure is how well can a person figure out the secret unwritten rules to the discipline they’ve chosen before they need any help, if that’s what we’re trying to measure, then don’t talk with them about the process of learning! But instead, if what we’re trying to measure is, what does the students’ best quality work look like? What does the top of their achievement look like when they spend 100% of their time doing their best quality work, as opposed to 50% of their time doing their best quality work after they’ve spent half of the time figuring out how on earth am I going to do this work?”

Episode 6 featured my first pair of guests: Regan Gurung, associate vice provost and executive director of the Center for Teaching and Learning at Oregon State University, and Dwaine Plaza, professor of sociology at Oregon State. The two of them are editing a forthcoming book titled Onward to Better: How Facing a Pandemic Will Improve Higher Education in the 21st Century. Regan and Dwaine are in the interesting position of having read about two dozen chapter submissions for the book, all authored by faculty, staff, and administrators about lessons learned from teaching during the COVID-19 pandemic. I wanted to know what lessons higher ed has learned from pandemic teaching, and what lessons higher ed should learn.

I was glad to hear Regan confirm something I felt to be true: “There were clear differences between campuses that had a strong, well staffed center for teaching and learning and campuses that did not… We could see that those campuses with well established centers that also had good communication with leadership and administration, they’re the ones who coped better… If you listener are on a campus with a good teaching and learning center, you are lucky because your campus probably dealt with the pandemic better.” And I would suggest that now is a good time for institutions to invest in their teaching centers, before the next massive challenge to teaching and learning hits.

And Dwaine pointed to the ongoing value of the tools for engagement that faculty learned to use during pandemic teaching:

“Something as simple as a Jamboard or a poll or [Zoom reactions], all those things became part of our Zoom world… Many of us would love to have [those tools] in a large class situation where we can actually get that feedback right away and get a chance to react to it… If people are always saying, you know, my students have stopped showing up, in my opinion I think they’ve stopped coming to your class possibly because you’re not engaging them anymore. You’re not doing something for them that actually makes them want to step into that place because you haven’t created that culture for them. You might have created it in the first couple weeks, but then you’ve waned. They haven’t waned. You’ve waned, and so you’ve lost the audience.”

Thanks to Dwaine and Regan and all my guests on the podcast so far. I’m excited to learn from and with my guests on future episodes of Intentional Teaching! As mentioned above, you can subscribe to the podcast using any number of apps, or search for “Intentional Teaching” in your podcast app of choice.

The Perils of Lecture Class

Since I don’t have enough going on (apparently), I recently started another newsletter. It’s called Intentional Learning, and it’s focused on strategies for learning and academic success at college. I spend most of my professional life talking with faculty and other instructors about teaching and learning, and I thought it was time to take that conversation to students.

The Intentional Learning newsletter comes out every Wednesday on LinkedIn. You can read it right here, and, if you’re a LinkedIn user, you can subscribe there, too. It’s written for students, so if you have students in your life that you care about, feel free to share it with them!

The latest issue on lecturing and active learning has gotten a fair amount of attention on LinkedIn, so I thought I would cross-post it here on my blog. 

The Perils of Lecture Class

“I wish Bruff would just work problems at the chalk board like other math professors.”

Ask any professor who has been teaching for a while about the student evaluations they receive at the end of each course, and they will immediately recall a few choice comments. Some of the comments are positive, like this one I received a few years ago: “I was so excited about this course’s podcast project that I shared my episode with my grandmother!” Some are hard to interpret, like “excessively politically correct” (in a math course?) or “Bruff Riders 4 Ever!” (I think that was positive.) And some are just frustrating to read, like the one above wishing for more chalk-and-talk.

That comment came from a student in a statistics course I taught with about a hundred students, mostly engineering students who were taking the course to fulfill a requirement for their major. They weren’t overly excited to be there, but I knew that going in, and I tried to make the course both relevant to their future careers as engineers and engaging as a learning experience. That meant I didn’t just work problems at the chalk board. Every class session asked students to actively participate in their own learning.

A typical class might involve a pre-class assignment with a few textbook pages to read and a couple of simple math questions to answer. Class would start with polling questions, where I asked students to respond to multiple-choice questions about the concepts for the day using their digital devices and to discuss some of the harder questions with their neighbors. Then we might move into group work for a while, maybe tackling a worksheet full of math problems or creating a data visualization with a partner. When I did work a problem at the chalk board, it was only after the students had a chance to try the problem themselves, so they could better understand my solution.

I was practicing what’s called “active learning instruction.” It’s usually held in contrast to traditional lecturing, aka “continuous exposition by the instructor.” Professors don’t practice active learning instruction just to make classes more lively. There’s a ton of educational research showing that these practices lead to greater student learning and student success, whether you measure that by course grades or persistence in the major or graduation rates. The student asking me to work more problems at the board? They were requesting an instructional approach that’s demonstrably inferior.

But I can’t get too frustrated with this student. This student probably thrived in chalk-and-talk classes, both in high school and college. That’s what they were expecting, and I asked them to do something very different, something that involved more work during class, and maybe something that didn’t feel all that useful to the student. And this student is hardly alone in viewing active learning instruction this way.

A team of researchers at Harvard University led by Louis Deslauriers ran an experiment where students were randomly assigned to one of two class sessions, one taught via traditional lecture and one taught via active learning. Same topics, same physics problems, same worked examples, just different instruction. The researchers quizzed students over the material and surveyed the students about how they felt about the classes. The result? Students learned more in the active learning classes, but they felt they learned more in the lecture classes!

Here’s how the researchers put it in their article about the study: “Students rated the quality of instruction in passive lectures more highly, and they expressed a preference to have ‘all of their physics classes taught this way,’ even though their scores on independent tests of learning were lower than those in actively taught classrooms.” Does that sound familiar? My stats student said basically the same thing.

Should we generalize from this Harvard study? Maybe not. Harvard students aren’t like students elsewhere. But wouldn’t you expect Harvard students to be pretty savvy about learning? If they can be led astray by the comforting familiarity of traditional lecturing, I think anyone can. And I certainly hear from faculty colleagues at other institutions (that aren’t Harvard) that they encounter the same student pushback about active learning instruction.

What’s the takeaway here for college students? One, learning is hard work. If it feels super easy, it’s probably not actually changing your brain in useful ways. Two, we’re often pretty poor judges of how well we learn, especially when we’re newbies in a particular area. Three, if you show up to class and there’s not much to do other than listen to someone else talk and maybe take some notes, then you might not actually be learning much.

But you knew that, right? You took a bunch of Zoom classes during COVID where all you had to do was listen to a talking head, and you realized that wasn’t working for you. And now you’re back in a physical classroom and the talking heads are talking and you’re wondering if it’s worth your time to show up to class. Next week in the newsletter, we bring this conversation about active learning and traditional lecturing into 2023 to see what it means for showing up to class in college today.

To subscribe to my Intentional Learning newsletter and receive future issues in your inbox, sign up on LinkedIn.

Chatbot as Audience: A Conversation with Chad

After weeks of reading about other college educators having conversations with ChatGPT, the new artificial intelligence (AI) chatbot from OpenAI that uses the GPT-3 family of large language models, I thought it was time to have a conversation with ChatGPT myself. (Disclaimer: I don’t entirely understand all the terms I used in that last sentence.) See, I’ve been thinking about that thing that Lee Skallerup Bessette said last month about having students interact with chatbots as an audience for their academic work, and I wanted to figure out how that might work. Also, I was preparing to give a talk at Temple University about teaching and technology, and I couldn’t not mention ChatGPT in January 2023, right?

Image generated by DALL-E using the same prompt that Alan Levine used: A know it all man like Cliff Clavin sits at a wooden bar, talking like an expert, photographic style

One of the ideas I had about student activities using ChatGPT, which I’ll call Chad from here out for convenience (after Alan Levine), is asking students to talk to Chad about some topic they already know well (a hobby or maybe a sport), to see what Chad does and doesn’t know about things. (I’m seriously anthropomorphizing here, of course. Chad isn’t a person and doesn’t know anything. Chad is an algorithm that I just barely understand.) Chad presents himself as an authority on lots of stuff, but it’s been clear from other educators’ interactions with Chad that Chad gets stuff wrong a lot. That’s a good lesson about AI text generation tools for students, and having them experience that firsthand about a topic they already know might help that lesson sink in.

I occasionally teach a course on cryptography, which includes discussion of the role of encryption in today’s society, which leads us to topics like surveillance and privacy. This is a topic I know well, and one that I’ve had many students write papers about, so I chose this topic for my conversation with Chad. Here’s how it started:

Me: There has been much debate over the surveillance powers that governments have over their citizens. But if you have nothing to hide, why would you worry if your government is surveilling you?

Chad: The idea that “if you have nothing to hide, you have nothing to worry about” is a common argument in favor of government surveillance, but it is flawed for several reasons.

First, it presumes that the government’s surveillance is only directed at people who are doing something wrong, but that is not always the case. Surveillance programs can be used to target people based on their race, religion, political beliefs, or other factors that have nothing to do with criminal activity. Even if a person is not doing anything wrong, they may still be subject to harassment or discrimination if they are deemed to be “suspicious” because of the way they look or the group of people they associate with.

Additionally, the idea that you have nothing to hide implies that privacy is only important for those who are doing something wrong. However, privacy is a fundamental human right that is necessary for many aspects of our lives, such as the freedom to express ourselves and to make personal choices without fear of retaliation. It also allows us to establish and maintain relationships with others, and to participate in society without fear of surveillance.

Finally, even if you believe that you are not doing anything wrong and you have nothing to hide, the power to surveil still poses a danger to society as a whole. Governments that have the power to surveil their citizens without checks and balances can easily abuse that power to stifle dissent, silence critics, and punish political opponents. This can have a chilling effect on democracy and the free exchange of ideas, which is essential for a healthy society.

In summary the idea of ‘nothing to hide’ is a red herring, that misses the point that privacy is a fundamental human right, it is not only about hiding something illegal but also about the freedom of choice and self-expression, it’s about the preservation of democracy and free speech as well as for personal safety and security.

Did you actually read all of that? If so, good for you. Chad is very, very wordy. He talks in solid, if unexciting, three-point essays. Which is why so many educators have been freaking out these last couple of months, because if your homework involves assigning students to write solid, if unexciting, three-point essays, then you can bet on the fact that Chad has been ghostwriting for your students since November.

Let’s not focus on that, however. Let’s imagine that in the discourse above, I’m a student talking to Chad about a topic I know well, or maybe one that I’m just learning about, like the tradeoffs between privacy and surveillance. I ask a relatively short question, and Chad responds with a long-winded three-point essay. What should I do next, if I’m trying to treat Chad like an audience for my growing understanding of privacy and surveillance? Let me see if I can point out a potential flaw in Chad’s argument. That would be a useful move to make.

Me: Some governments have robust checks-and-balances to prevent the kind of abuse you’re describing. Doesn’t that mitigate the concerns you’ve raised here?

Chad: Checks and balances are an important safeguard against government abuse of surveillance powers, but they are not a guarantee that such abuse will never occur. Even with robust checks and balances in place, there is always a risk that government officials will find ways to circumvent or undermine them.

For example, laws governing surveillance may be written in such a way that they are easily interpreted to allow for wide-ranging surveillance. Or, oversight bodies may lack the resources or independence to effectively monitor government actions. Or, Whistleblowers may be afraid to come forward.

Additionally, even when checks and balances are functioning as intended, they may not be sufficient to protect against certain kinds of abuse. For example, a surveillance program that targets individuals based on their race or political beliefs may not be illegal, but it can still be unjust and harmful.

It’s important to remember that the mere existence of checks and balances does not necessarily mean that a government’s surveillance powers are being used in a way that is consistent with human rights and democratic principles. It’s important to keep monitoring the government’s actions and be vigilant to potential abuse and push for more effective oversight and regulation mechanisms. And also to have transparency, so that citizens can see what their government is doing and have a say in the matter.

It’s also important to have a open public debate on the government’s surveillance powers, the purpose of such powers and the balance between protecting citizens and maintaining individual rights and freedoms. Only then we can have a society where citizens can have trust in their government and vice-versa.

Seriously, Chad, read the room. Short questions need short answers.

When composing my question to Chad, I had a very interesting moment. I typed, “Some governments have robust checks-and-balances to prevent the kind of…” Overreach is the word I wanted to use, but I wasn’t sure if Chad would understand that, so I went with “abuse” instead. Those aren’t entirely synonyms, however, and using “abuse” might take the argument in a direction I don’t want it to go. But would Chad get the nuance that comes with “overreach” in this discussion?

Again, I’m fully anthropomorphizing here, but that’s kind of the point. Chad acts something like an interested (perhaps overly interested) respondent for my questions about this topic. Playing out this exercise is getting me to think about my word choice as I communicate with an audience I don’t know very well. I totally want the students in my writing seminars to think about their word choices like this!

Let’s finish this debate.

Me: It’s true that checks and balances don’t prevent all overreach or abuse, but we citizens already cede a lot of power to our governments, trusting that the benefits of doing so outweigh the costs. You haven’t addressed the benefits of government surveillance for safety and threat prevention.

Chad: You are correct that government surveillance can play a role in keeping citizens safe and preventing threats. Some argue that surveillance can help to detect and prevent terrorism, for example, or to locate and capture dangerous criminals. In some cases, government surveillance may be necessary to protect the public from immediate harm.

However, it’s important to keep in mind that the benefits of surveillance must be balanced against the costs to individual privacy and civil liberties. The decision to use surveillance should be based on a careful consideration of the potential risks and benefits, and it should be subject to robust oversight and accountability mechanisms to ensure that it is not being used in a way that is arbitrary or unjust.

Additionally, it’s important to consider the potential of alternative methods of threat prevention or safety enhancement which may not involve surveillance or may involve less intrusion to individual privacy. Such alternatives may be more effective and less costly in terms of civil liberties.

It’s also worth noting that some of the information gathered through surveillance may not be as useful as it may seem at first glance. For example, indiscriminate mass surveillance may lead to a large number of false leads and distractions, which can actually make it more difficult for law enforcement to identify and investigate real threats. And the collection of vast amount of data can also raise serious concerns about data security, data breaches and mishandling of information.

Finally, it’s important to remember that surveillance must be carried out in a way that respects human rights and the rule of law, as well as being transparent, proportionate and targeted. This means that the surveillance should be conducted in accordance with the law, and be necessary and proportionate to achieving a legitimate aim.

Okay, now Chad is making some rhetorical moves that go beyond the three-point essay. “You are correct that… However…” Again, not an exciting rhetorical move, but a solid one, and one that some of my first-year writing students struggle to make. I feel like I could do something with this kind of exercise in my writing seminars, especially if students are asking Chad about something they already know about. With familiar topics, this exercise might help students focus on the ways that debate happens and how they might make similar moves in their writing.

I might also ask my students to assess Chad’s posture of authority on this topic. “How does that land with you? Is this a posture you adopt in your academic writing?” The answer to the second question is, yes, all the time.

Okay, your turn. How might you use ChatGPT or similar tools as audiences for your students and their writing?
