Librarians' "new normal" includes pain points

Image: Illustration of a man with his head stuffed with fake news.

DENVER—As snow fell from gray skies on Tuesday, higher education professionals, publishers, librarians, information technologists, government researchers and others gathered here for the Coalition for Networked Information spring membership meeting to discuss the use of information technology to advance scholarship and education.

Ithaka S+R shared results from its triennial survey published last week, which sought to capture college library deans’ and directors’ perspectives three years into the pandemic.

But Ioana Hulbert, Ithaka S+R researcher and survey author, confided to a packed ballroom that she had been anxious during the survey’s administration in the fall of 2022—mostly because of question 17.

“Without fail, almost every library director stopped on this question for multiple days,” Hulbert said about the prompt that asked respondents how they would handle budget cuts. “I just sat there hoping they would come back and finish the survey.”

Many of the survey results resonated with librarians present at the Denver meeting. Over meals and in hallways, they discussed an evolving library landscape in which print resources have been demoted, staffing shortages feel urgent and pandemic-era students struggle to engage with libraries.

High and Low Priorities

Question 17, which was new this cycle, asked respondents to indicate the top three areas where they would implement cuts if a 10 percent budget reduction were necessary. This question and another that had been asked in this and earlier cycles concerning how respondents would allocate a 10 percent budget increase sought to highlight librarians’ lowest and highest priorities.

Much to Hulbert’s relief, 612 librarians completed the survey, including the vexing question 17. Still, Hulbert said she learned a lesson.

“It’s going to be the last question in the next cycle.”

When the results were tallied, the librarians’ lowest priorities were print resources. More than half of respondents (54 percent) would cut the print monograph budget, and nearly half (45 percent) would cut print journal subscriptions. In the event of a budget increase, the librarians would prioritize staffing. More than half (56 percent) would direct additional funds toward new or redefined positions, and about two out of five (41 percent) would prioritize employee salary increases.

Budget cuts—real or imagined—are not the only challenge. Fewer than one in five librarians at baccalaureate-level colleges (18 percent) agreed that their library has a well-developed strategy for redressing the influence of disinformation and misinformation. Librarians at master’s-granting colleges and doctoral universities felt similarly dispirited (13 percent and 20 percent, respectively).

“Is the ‘well-developed strategy’ portion of the question really driving the response?” Hulbert said. “Maybe that’s too high of a bar to say that you have an explicit, documented strategy somewhere.”

Nonetheless, the finding stood in stark contrast to the overwhelming majority of librarian respondents (98 percent) who indicated that helping students develop research, critical analysis and information literacy skills is a priority. This near consensus is set against the backdrop of a rise in disinformation during the pandemic.

Pandemic-Era Students Return to the Library

In recent years, students appear to have shifted the ways in which they engage with the library and librarians.

“We’re teaching this generation of post-pandemic, traumatized students who don’t have confidence in information,” said Christina Trunnell, assistant dean of the library at Montana State University. “Our core foundational information literacy programs that we teach don’t reach those students anymore.”

During the early pandemic lockdowns, Montana State students made abundant use of virtual chat reference services, Trunnell said. But that use plummeted more than 60 percent during each of the last two years. Meanwhile, this academic year, demand for one-on-one consultations has skyrocketed.

“We haven’t had time to assess this new cadre of students,” Trunnell said, adding that many college libraries are short-staffed. “How do students look up information? How do they understand it? Until we have time to assess those needs and assess those patterns, we’re behind the game.”

But assessing current students’ needs and offering one-on-one consultations places additional demands on library staff. At the same time, library deans and directors are struggling to retain and hire staff, according to the Ithaka survey. One in five of the librarian respondents (20 percent) is already outsourcing some skills. A similar percentage expects to reduce staff in access and technical services, metadata, and cataloging in the next five years. Technology and programming roles are the most challenging to recruit and retain, according to the survey.

Something’s got to give.

Meanwhile, students who attended high school during the pandemic may have underdeveloped library and literacy skills, according to some of the librarians in Denver.

“There’s a real disconnect in students even knowing what a library does,” Michael Vandenburg, dean of the libraries at Dalhousie University in Nova Scotia, said. “That may have something to do with how students experienced high school research during COVID lockdowns, but it also may reflect the defunding of libraries in secondary schools.”

Pandemic-era high school students working on research projects may not have had abundant opportunities to engage with high school librarians, Vandenburg offered as an example. Many college libraries offer orientation programs that help students understand the library’s resources. But such programming often competes for attention with offerings from other campus offices.

“Information literacy has to be baked into their coursework,” Vandenburg suggested. Some faculty need minimal help with such efforts, while others require extensive assistance over time, he said.

In a library landscape where budgets are strapped and librarians struggle to reach students, artificial intelligence might offer some efficiencies, according to Elias Tzoc, associate dean for teaching, learning and research at Clemson University.

“I know that’s part of the misinformation issue,” Tzoc said. “But when we use it in the right way, it can help scale this and other library services as well.”

Teaching and Learning | Technology
Image Source: sorbetto/DigitalVision Vectors/Getty Images

New Recommendations for Encouraging Open-Access Publishing

Last summer, the White House mandated that any research based on federally funded studies must be made freely available to the public without an embargo. The new requirement, which updates an existing policy that allowed a 12-month embargo for making research freely available, will take effect by the end of 2025.

At the time, many open-access advocates celebrated the decision, but some scholars wondered who would fund the policy, given the high cost to researchers who publish open access.

Now, a paper published in the Journal of Science Policy and Government offers recommendations for colleges, publishers and funding agencies interested in supporting open access moving forward.

Colleges might cancel subscriptions with major publishers in favor of paying for researchers’ open-access article processing charges, according to the paper. They might also “reevaluate the weight that journal impact factor carries in the tenure and promotion review process,” given an absence of evidence correlating journal impact factor with research quality. Such a change would facilitate researchers’ incentives to publish in newer, open-access journals over “established, expensive, higher impact journals.”

Publishers should be more transparent about journal operating costs and how article processing charges are used, according to the paper. They might also offer a wider range of open-access publishing options.

Funding agencies might increase grant budgets to offset the expected higher costs of open-access publishing, according to the paper. The authors of the study described such a measure as “temporary, if expensive.” Funders could also increase their scrutiny of publication costs in researchers’ proposed budgets and “openly endorse non-profit [open-access] journals and platforms with minimal or no fees for researchers.”

The new policy presents hurdles for colleges, publishers and funding agencies, but some expect that it will benefit society.

“When research is widely available to other researchers and the public, it can save lives, provide policy makers with the tools to make critical decisions, and drive more equitable outcomes across every sector of society,” Alondra Nelson, head of the White House Office of Science and Technology Policy, wrote last August when the policy was announced. “The American people fund tens of billions of dollars of cutting-edge research annually. There should be no delay or barrier between the American public and the returns on their investments in research.”


Challenging 'bad' online policies and attitudes

Image: Four people seated onstage at a South by Southwest EDU panel discussion, three men and a woman, who is on the far left.

AUSTIN, Tex.—About a decade ago, nearly all—97 percent—of IBM’s job advertisements required a four-year college degree, according to David Barnes, vice president of global workforce policy at the tech giant, who spoke this week at the annual SXSW EDU conference here. That requirement disqualified the approximately two-thirds of Americans in the labor pool without degrees from applying for jobs at IBM, Barnes said. As a result, IBM struggled with significant, sustained hiring shortfalls.

“We decided to indulge in some self-help,” Barnes said. “We call it skills-first hiring.” First, IBM hiring managers had to erase some biases they had about candidates who had not earned college degrees. Next, they assessed job candidates for the skills they possessed and their abilities to learn new skills, regardless of degree status. Also, the company invested heavily in an online employee education learning platform.

“It’s driven by artificial intelligence,” Barnes said of IBM’s training and reskilling effort. “It’s a Netflix-like interface that pushes content. Or an employee can select content … We can’t use Charles Dickens learning models anymore.” Today, fewer than half of IBM’s job ads mandate a college degree. The company is now better positioned, Barnes said, to maintain its global lead in quantum computing—technology that holds the potential to revolutionize computing power.

Barnes and other academic and industry leaders spoke with conviction and passion during the “Online Backlash: Bad Policy Holds Students Back” panel at SXSW EDU. The leaders discussed the ways in which colleges, policy makers and employers might work together to help more Americans find or advance in viable employment, while also addressing the workforce skills gap. But some “bad” policies and attitudes about online learning undermine their efforts to work together, expand access and deliver outcomes to motivated, capable learners.

‘Bad’ Policy No. 1: Separate Education and Training

IBM is not the only company that has pivoted in response to societal dissatisfaction with traditional college degrees. Cengage, a textbook publisher for over 130 years, now offers educational experiences that lead to certificates and certifications, according to Michael Hansen, CEO of Cengage Group.

“Employers were saying, ‘We have job openings we can’t fill, and we want to work with the education system, but it is so unbelievably frustrating because they’re very rigid, and they don’t want to customize to our needs,’” Hansen said. These employers sought workforce training that could produce a pipeline of learners-turned-employees, and Hansen said they told him, “If you can do that, I’ll pay you.”

In response, Cengage produced online educational experiences that lead to certificates for in-demand jobs. The typical student is a 40-something woman with an income of approximately $45,000 looking to re-enter the workforce or grow in her career, Hansen said. Today, this Cengage initiative enrolls approximately 250,000 individuals annually, which comprises 10 percent of the company’s business. Hansen projects this will grow to over 30 percent within a few years.

“The first bad policy is that [higher ed] separates education and training,” Jane Oates, president of Working Nation and panel moderator, said. “We call them very separate things, and some [in higher ed] don’t even like to use the word ‘train.’”

To be sure, some higher ed institutions embrace the notion that educational experiences should have market value. Western Governors University, for example, offers stackable, workplace-relevant credentials.

“We kind of live by this mantra that a credential with no market value is just a scam,” said Scott Pulsipher, president of Western Governors and another speaker on the panel. “It doesn’t matter whether you’re acquiring knowledge through an academic pathway or whether you’re acquiring that through an experiential pathway or through a work pathway. At the end of the day, it’s still, ‘what capability do you have, and how well does that capability align with what’s needed in the workforce and in a particular role?’”

‘Bad’ Policy No. 2: Assuming Online Learning Is ‘Second Best’

Some policy makers, perhaps based on earlier efforts to root out for-profit online institutions that were seen as abusing federal financial aid programs, may continue to view online education as a “stepchild” in the higher ed ecosystem, according to Oates. Such a view is unwarranted given strides in developing and understanding best online practices, especially during the pandemic, Oates said.

“It’s a red herring if you focus on mode, method or model of instruction,” Pulsipher said. “Good policy would actually advance and incentivize delivery against outcomes, not inputs.” Yet some federal policy governing virtual learning, such as regular and substantive interaction with instructors in online courses, relies on inputs to determine quality, Pulsipher said. He suggests that policy makers ask questions about whether students are gaining mastery in a subject, completing credentials and getting jobs.

One panelist pushed back even harder against the notion that in-person learning is best.

“Online learning is the only way to get learning to scale,” Barnes said, adding that the half-life of his employees’ technical skills is approximately three years. “We could not keep our employees contemporary in any other way. We wouldn’t spend $300 million on online learning every year if it didn’t work.”

Successful IBM job candidates who are hired without college degrees may pursue apprenticeships at the company in fields such as cybersecurity, cloud computing, artificial intelligence and digital design. The apprenticeships have online learning components, and many offer opportunities to earn college credits through the American Council on Education. Ninety percent of the more than 1,000 individuals who have completed IBM-sponsored apprenticeships have become full-time IBM employees, and over 40 percent hail from underrepresented minority groups, Barnes said. Not requiring a college degree upon hiring has improved the company’s efforts to hire and retain a diverse workforce.

“They have a real hunger and a real intensity about succeeding in a job which they thought they did not have a pathway to,” Barnes said. “The people who come down these pathways are incredibly loyal.” The program has been very good for IBM, too, which is why it has invested over $250 million in its apprenticeship programs through 2025, Barnes said.

“You have to have the capacity to understand both talent and potential,” Ruth Simmons, President’s Distinguished Fellow at Rice University, said in her keynote talk. Though Simmons’s talk addressed higher education and American democracy, she echoed some themes in the “bad policy” panel. Simmons, who is president emerita of Smith College and Brown University, currently serves as senior adviser to the president of Harvard University on engagement with historically Black colleges and universities, among other national higher ed leadership positions. “If you understand both [talent and potential] and you don’t have narrow perceptions of what human being can do, you can take flight in all kinds of ways.”

Online college credit earned in an apprenticeship like those at IBM could be transferred to a college, should the learner ultimately want to pursue a college degree. But some colleges construct barriers to accepting transfer credit, possibly to maximize their revenue, Pulsipher said.

“In the federal student aid model, you’re financing the consumption of credit rather than the delivery of value,” Pulsipher said.

Barnes made a plea for higher education leaders to think less about inputs and more about outputs that, in his view, students and employers want.

“As an employer, we don’t measure ourselves by what we spend,” Barnes said. “We measure ourselves by what we create, what we produce, what we deliver and whether it’s quality.”

But Hansen had a different take on the apparently misaligned priorities.

“A lot of people say the education system is broken in the United States,” Hansen said. “It’s not broken. It does exactly what it’s designed to do, which is create degree recipients. It’s our job to challenge that and say maybe there need to be alternatives.”

Online and Blended Learning | Technology
Image Source: Susan D’Agostino for Inside Higher Ed
Image Caption: “In the federal student aid model, you’re financing the consumption of credit rather than the delivery of value,” Scott Pulsipher, Western Governors University president, said.

LSU Reacts to Student’s Viral TikTok Promoting AI Essay Tool

Louisiana State University athlete and social media influencer Olivia Dunne posted a TikTok last weekend promoting Caktus AI, an AI essay-writing tool, to her more than seven million followers. In the paid advertisement, Dunne wrote that the tool “will provide real resources for you to cite at the end of your essays and paragraphs;)”

“Can’t imagine the school is gonna like this one [laughing face emoji],” a TikToker who goes by user588168471905 replied.

Indeed. In the wake of Dunne’s post, Louisiana State University issued a statement: “Technology, including AI, can foster learning and creativity. At LSU, our professors and students are empowered to use technology for learning and pursuing the highest standards of academic integrity. However, using AI to produce work that a student then represents as one’s own could result in a charge of academic misconduct, as outlined in the Code of Student Conduct. More information for faculty can be found here on ‘What College Faculty Should Know about ChatGPT,’” the statement said, according to WBRZ.

Dunne has since released another TikTok that could be interpreted as a response to her university, according to The Comeback. In it, she mimes dialogue from the popular TV series The Office: “What did I say? I talk a lot, so I’ve learned to just tune myself out.”

The news is set against the backdrop of an AI arms race that has struck some of higher ed’s most cherished values, including academic integrity, learning and life itself. In late 2022, OpenAI released ChatGPT—a sophisticated AI chat bot that interacts with users in a conversational way. Household names such as Google and Microsoft, as well as lesser known players such as Moonbeam and Caktus AI, now all offer sophisticated AI writing tools that produce plausible, college-level essays.

Though AI systems are expected to produce text that will one day be indistinguishable from human-written prose, many academics say that AI writing detection is a losing battle worth fighting.


Microcredentials confuse employers, colleges and learners

Image: Businessmen analyze and discover secret documents (business concept vector illustration).

Reskilling. Upskilling. Certificates. Certifications. Badges. Licenses. Microcredentials. Alternative credentials. Digital credentials.

So many terms. So little agreement on what they mean, least of all in higher ed.

“Employers say, ‘It’s great that this individual has these skills, but we’ll ask our own questions to verify the learner’s knowledge,’” Kyle Albert, assistant research professor at the George Washington University Institute of Public Policy, said. “It’s a trust-but-verify situation.”

Demand in the large, growing microcredential market is nonetheless strong, but learners struggle to make sense of offerings. By one count, the United States is home to more than one million unique educational credentials, which represents a more than threefold increase since 2018. (Some are offered by nonacademic providers.)

“Digital credential options are fairly easy to find on the internet where websites describe the curriculum,” Albert said. “But some [learners] say that they click on the first few that come up … and they rely on anecdotal reviews on Reddit, Yelp or Indeed.com,” given the largely absent data and analysis on program quality.

And colleges struggle to deliver what employers want.

“With the economy shifting … we need workforce education training faster and better,” said James Fong, chief research officer at the University Professional and Continuing Education Association (UPCEA).

Three studies on alternative credentials were published recently, and all point to employers’, colleges’ and students’ confusion about microcredentials. But there is good news, too. Abundant nomenclature aside, all parties appear eager to work together to deliver or pursue quality, verifiable, bite-size, low-priced, nondegree online offerings targeted to specific industries. Here’s a round of insights from the three reports.

Employers Have Questions

Most employers (69 percent) are aware of nondegree credentials—they are either “extremely familiar” or “very familiar” with them—but most (65 percent) would also like to see proof of their effectiveness, according to a 2023 report from UPCEA.

When a job applicant lists a nondegree credential on their résumé, close to half of employers do not know what to make of the program’s quality (46 percent) and the acquired skills and competencies (42 percent), according to the report.

“It has been the desire of many entrepreneurs, foundations and policy leaders that microcredentials will become the substitute for expensive degrees,” Sean Gallagher, executive professor of educational policy at Northeastern University, said. “But there’s still very little evidence that microcredentials will necessarily land someone a job in the same way that a degree will.”

The study surveyed 510 individuals who hire, train or offer development to employees within organizations that spanned financial services, health care, manufacturing, business education and other fields. Respondents reported job titles including senior manager, senior director, CEO, executive vice president and human resources manager or director.

The employer respondents laid bare their desire to engage with colleges on curriculum design for these non-credit-bearing offerings. That is, approximately two-thirds (65 percent) said they would collaborate with colleges to develop workforce credentials and gain information about program effectiveness. Approximately half (53 percent) deemed employer engagement a necessity.

“Employers want to be on advisory committees,” Fong said. “They want to be able to say what skills are important … Faculty can’t drive everything.”

This dynamic presents challenges and opportunities for colleges. Some in higher ed worry that alternative credentials may cannibalize their degree programs, experts say. But they also suggest that microcredentials could bolster colleges’ traditional offerings.

“We have 39 million people in the U.S. with some college but no degree,” Fong said, adding that this population grew by another two or three million during the pandemic. “We could reverse [that trend] by giving them educational products that will get them reconnected, that will value their prior learning, that will get them to that degree.”

But success in this realm may require colleges to think beyond degrees, according to Fong.

“Gen Z and millennials are used to taking smaller, bite-size pieces,” Fong said. “The 120-credit degree is such a big bite, considering the way they grew up. They were given rewards at earlier stages and milestones. [Higher ed] can have cake and eat it, too, with a degree, but we’ve also got to reward people for accomplishments along the way.”

To realize that vision, higher ed professionals might consider communicating more with employers, according to the report. Nearly half of the survey’s employer respondents (44 percent) said that no college has approached them with an invitation to collaborate on developing nondegree or alternative credentials. More than two-thirds (68 percent) of employers want to be approached by a college to collaborate on such initiatives.

Part of the communication problem lies in the abundance of terms for non-credit-bearing offerings. An earlier (2022) poll of UPCEA members found that higher ed professionals most often use the term “microcredentials” (31 percent) but that “alternative credentials” (26 percent), “nondegree credentials” (19 percent) and other terms are widely used as well. (Note: An earlier version of this article listed an incorrect year for this UPCEA poll for nomenclature for non-credit-bearing offerings.)

“This whole alternative credential, microcredential, nondegree credential thing is very important to employers,” Fong said. “They want to be able to say what skills are important. Nomenclature is an issue but so is the relationship” between employers and higher ed.

Learners Underestimate Outcomes

Learners who earned MicroMasters credentials from edX and Specializations from Coursera vastly underestimated how much they would learn in these pursuits, according to an EdResearcher study. Few (27 percent) thought that they would learn something upon starting a program, but nearly all (94 percent) reported having learned something new. EdX’s MicroMasters programs offer college-provided, graduate-level courses for developing career skills or earning graduate credit. Specializations on the Coursera platform are college-provided courses focused on career skills.

“That’s great news,” said Fiona Hollands, senior researcher at Teachers College at Columbia University, founder and managing director of EdResearcher, and co-author of the report, adding that students’ reasons for pursuing education vary. “I’ve always been a bit of a skeptic in the past that a lot of higher education offerings are more credentialing vehicles than they are really teaching anybody anything new.”

Most of the learners (75 percent) had already earned an undergraduate degree, and two out of five had already earned a graduate degree, which makes the results on knowledge gained especially striking, Hollands added.

The study considered 25,891 survey responses from learners who started the courses between February 2017 and September 2021. The study also followed up with 2,288 of the learners who completed the courses between April 2018 and November 2022. The courses covered topics related to business, marketing, professional advancement, finance and data science.

The most frequently noted anticipated benefits for program completion were: improving job performance (41 percent), improving job applications (28 percent) and learning something new (27 percent). In contrast, the most frequently reported benefits were: learning something new (94 percent), improved job performance (38 percent) and improved English language skills (23 percent).

The benefits, while relevant to employees, are noteworthy but mostly uncompensated. That is, approximately 66 percent of the course completers paid for the courses themselves, and the vast majority studied during unpaid leisure time.

“Employers should be paying attention and supporting learners’ participation if the topic is relevant to their business,” Hollands said, adding that the learning may aid employers’ productivity and retention efforts. “Learners think they’re improving job performance, and they cost very little.” MicroMasters completers spend on average 412 hours and pay $900 to $1,300, according to the report. Specializations completers take an average of 42 hours and cost $325.

If employers built confidence in higher ed’s microcredential offerings, they might support their employees’ pursuits. But such a win would require enhanced communication between employers and colleges, Fong said, and between learners and employers, Hollands said.

Colleges Overlook Outreach to Employers

Global learners interested in science, technology, engineering and mathematics lack awareness of digital credential career training options, according to a recent IBM study. Even if they understood their options, they worry that such credentials may be costly to obtain.

The study, conducted by Morning Consult on behalf of IBM, considered more than 14,000 interviews of job seekers, students and career changers across 13 countries.

Nearly half (40 percent) of respondents reported that their greatest barrier is not knowing where to start on the digital credential landscape. Most respondents (60 percent) were concerned that the cost would be out of reach. Yet access and cost information would be timely, as most respondents (60 percent) are either already looking for a new job or expect to in the next year.

“There’s still very little evidence that microcredentials will necessarily land someone a job in the same way that a degree will,” Gallagher said. But a sizable percentage of EdResearcher survey respondents (40 percent) said that such offerings improved their job performance, so the lack of evidence does not appear to constrain demand.

Microcredential providers often market on social media platforms such as TikTok, Facebook and Instagram, according to Albert.

There, “they can reach the population of youth who are often really directionless,” Albert said. “Let’s be honest: the state of career advising for young people in the U.S. schools is not great, and once you’re out of high school, it’s quite abysmal.”

But here’s some good news. Most of these learners (90 percent) have confidence that they can develop skills or learn in an online program, the IBM study says. If they could make sense of digital credential options, they might build on their strong confidence.

Here, colleges must walk a fine line, experts say, as microcredential offerings are not a panacea.

“Microcredentials are a field where success stories are highly visible, but the failures are largely hidden,” Albert said. “When a microcredential is unsuccessful, it just kind of disappears from the institution’s website.” That can mean sunk costs if a college invests in a program that does not ultimately succeed. Still, such offerings, when built on higher ed–industry partnerships, can be marketed as stackable credentials that may feed into undergraduate or graduate degree programs.

“Faculty can’t just say no,” Fong said. “They can question the quality, just like they did with online. But in terms of an institutional survival, these new educational credentials are going to be essential if an institution wants to survive and thrive.”

 

Online and Blended Learning
Survey
Technology
Editorial Tags: 
Image Source: 
Yutthana Gaetgeaw/iStock/Getty Images
Is this diversity newsletter?: 
Newsletter Order: 
0
Disable left side advertisement?: 
Is this Career Advice newsletter?: 
Magazine treatment: 
Trending: 
Trending text: 
Microcredential Confusion
Trending order: 
1
Display Promo Box: 
Live Updates: 
liveupdates0
Most Popular: 
3
In-Article Advertisement High: 
9
In-Article related stories: 
12
In-Article Advertisement Low: 
15
Include DNU?: 
Yes
In-Article Careers: 
6

Microcredentials confuse employers, colleges and learners

Image: 
Businessmen analyze and discover secret documents. business concept vector illustration

Reskilling. Upskilling. Certificates. Certifications. Badges. Licenses. Microcredentials. Alternative credentials. Digital credentials.

So many terms. So little agreement on what they mean, least of all in higher ed.

“Employers say, ‘It’s great that this individual has these skills, but we’ll ask our own questions to verify the learner’s knowledge,’” Kyle Albert, assistant research professor at the George Washington University Institute of Public Policy, said. “It’s a trust-but-verify situation.”

Demand in the large and growing microcredential market is nonetheless strong, even as learners struggle to make sense of the offerings. By one count, the United States is home to more than one million unique educational credentials, a more than threefold increase since 2018. (Some are offered by nonacademic providers.)

“Digital credential options are fairly easy to find on the internet where websites describe the curriculum,” Albert said. “But some [learners] say that they click on the first few that come up … and they rely on anecdotal reviews on Reddit, Yelp or Indeed.com,” given the largely absent data and analysis on program quality.

And colleges struggle to deliver what employers want.

“With the economy shifting … we need workforce education training faster and better,” said James Fong, chief research officer at the University Professional and Continuing Education Association (UPCEA).

Three studies on alternative credentials were published recently, and all point to employers’, colleges’ and students’ confusion about microcredentials. But there is good news, too. Abundant nomenclature aside, all parties appear eager to work together to deliver or pursue quality, verifiable, bite-size, low-priced, nondegree online offerings targeted to specific industries. Here’s a round of insights from the three reports.

Employers Have Questions

Most employers (69 percent) are aware of nondegree credentials—they are either “extremely familiar” or “very familiar” with them—but most (65 percent) would also like to see proof of their effectiveness, according to a 2023 report from UPCEA.

When a job applicant lists a nondegree credential on their résumé, close to half of employers do not know what to make of the program’s quality (46 percent) and the acquired skills and competencies (42 percent), according to the report.

“It has been the desire of many entrepreneurs, foundations and policy leaders that microcredentials will become the substitute for expensive degrees,” Sean Gallagher, executive professor of educational policy at Northeastern University, said. “But there’s still very little evidence that microcredentials will necessarily land someone a job in the same way that a degree will.”

The study surveyed 510 individuals who hire, train or offer development to employees within organizations that spanned financial services, health care, manufacturing, business education and other fields. Respondents reported job titles including senior manager, senior director, CEO, executive vice president and human resources manager or director.

The employer respondents laid bare their desire to engage with colleges on curriculum design for these non-credit-bearing offerings. That is, approximately two-thirds (65 percent) said they would collaborate with colleges to develop workforce credentials and gain information about program effectiveness. Approximately half (53 percent) deemed employer engagement a necessity.

“Employers want to be on advisory committees,” Fong said. “They want to be able to say what skills are important … Faculty can’t drive everything.”

This dynamic presents challenges and opportunities for colleges. Some in higher ed worry that alternative credentials may cannibalize their degree programs, experts say. But they also suggest that microcredentials could bolster colleges’ traditional offerings.

“We have 39 million people in the U.S. with some college but no degree,” Fong said, adding that this population grew by another two or three million during the pandemic. “We could reverse [that trend] by giving them educational products that will get them reconnected, that will value their prior learning, that will get them to that degree.”

But success in this realm may require colleges to think beyond degrees, according to Fong.

“Gen Z and millennials are used to taking smaller, bite-size pieces,” Fong said. “The 120-credit degree is such a big bite, considering the way they grew up. They were given rewards at earlier stages and milestones. [Higher ed] can have cake and eat it, too, with a degree, but we’ve also got to reward people for accomplishments along the way.”

To realize that vision, higher ed professionals might consider communicating more with employers, according to the report. Nearly half of the survey’s employer respondents (44 percent) said that no college has approached them with an invitation to collaborate on developing nondegree or alternative credentials. More than two-thirds (68 percent) of employers want to be approached by a college to collaborate on such initiatives.

Part of the communication problem lies in the abundance of terms for non-credit-bearing offerings. An earlier (2022) poll of UPCEA members found that higher ed professionals most often use the term “microcredentials” (31 percent) but that “alternative credentials” (26 percent), “nondegree credentials” (19 percent) and other terms are widely used as well. (Note: An earlier version of this article listed an incorrect year for this UPCEA poll for nomenclature for non-credit-bearing offerings.)

“This whole alternative credential, microcredential, nondegree credential thing is very important to employers,” Fong said. “They want to be able to say what skills are important. Nomenclature is an issue but so is the relationship” between employers and higher ed.

Learners Underestimate Outcomes

Learners who earned MicroMasters credentials from edX and Specializations from Coursera vastly underestimated how much they would learn in these pursuits, according to an EdResearcher study. Few (27 percent) thought that they would learn something upon starting a program, but nearly all (94 percent) reported having learned something new. EdX’s MicroMasters programs offer college-provided, graduate-level courses for developing career skills or earning graduate credit. Specializations on the Coursera platform are college-provided courses focused on career skills.

“That’s great news,” said Fiona Hollands, senior researcher at Teachers College at Columbia University, founder and managing director of EdResearcher, and co-author of the report, adding that students’ reasons for pursuing education vary. “I’ve always been a bit of a skeptic in the past that a lot of higher education offerings are more credentialing vehicles than they are really teaching anybody anything new.”

Most of the learners (75 percent) had already earned an undergraduate degree, and two out of five had already earned a graduate degree, which makes the results on knowledge gained especially striking, Hollands added.

The study considered 25,891 survey responses from learners who started the courses between February 2017 and September 2021. The study also followed up with 2,288 of the learners who completed the courses between April 2018 and November 2022. The courses covered topics related to business, marketing, professional advancement, finance and data science.

The most frequently noted anticipated benefits for program completion were: improving job performance (41 percent), improving job applications (28 percent) and learning something new (27 percent). In contrast, the most frequently reported benefits were: learning something new (94 percent), improved job performance (38 percent) and improved English language skills (23 percent).

These benefits, while relevant to employers, went largely uncompensated: approximately two-thirds (66 percent) of the course completers paid for the courses themselves, and the vast majority studied during unpaid leisure time.

“Employers should be paying attention and supporting learners’ participation if the topic is relevant to their business,” Hollands said, adding that the learning may aid employers’ productivity and retention efforts. “Learners think they’re improving job performance, and they cost very little.” MicroMasters completers spend on average 412 hours and pay $900 to $1,300, according to the report. Specializations completers spend an average of 42 hours and pay $325.
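Taken at face value, the report’s figures imply that the shorter Specializations actually carry a higher price per study hour than the MicroMasters programs. A quick back-of-envelope sketch (the hour and dollar figures come from the report; the helper function is illustrative):

```python
def cost_per_hour(price_usd, hours):
    """Implied cost per study hour, rounded to cents."""
    return round(price_usd / hours, 2)

# MicroMasters: 412 hours, $900-$1,300 (per the EdResearcher report)
micromasters_low = cost_per_hour(900, 412)    # about $2.18/hour
micromasters_high = cost_per_hour(1300, 412)  # about $3.16/hour

# Specializations: 42 hours, $325
specialization = cost_per_hour(325, 42)       # about $7.74/hour

print(micromasters_low, micromasters_high, specialization)
```

On a per-hour basis, then, the MicroMasters programs look like the cheaper commitment, even though their total price is several times higher.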

If employers built confidence in higher ed’s microcredential offerings, they might support their employees’ pursuits. But such a win would require enhanced communication between employers and colleges, Fong said, and between learners and employers, Hollands said.

Colleges Overlook Outreach to Employers

Global learners interested in science, technology, engineering and mathematics lack awareness of digital credential career training options, according to a recent IBM study. And even when they do understand their options, they worry that such credentials may be too costly to obtain.

The study, conducted by Morning Consult on behalf of IBM, considered more than 14,000 interviews of job seekers, students and career changers across 13 countries.

Two in five respondents (40 percent) reported that their greatest barrier is not knowing where to start on the digital credential landscape. Most respondents (60 percent) were concerned that the cost would be out of reach. Yet access and cost information would be timely, as most respondents (60 percent) are either already looking for a new job or expect to be in the next year.

“There’s still very little evidence that microcredentials will necessarily land someone a job in the same way that a degree will,” Gallagher said. But a sizable percentage of EdResearcher survey respondents (40 percent) said that such offerings improved their job performance, so the lack of evidence does not appear to constrain demand.

Microcredential providers often market on social media platforms such as TikTok, Facebook and Instagram, according to Albert.

There, “they can reach the population of youth who are often really directionless,” Albert said. “Let’s be honest: the state of career advising for young people in the U.S. schools is not great, and once you’re out of high school, it’s quite abysmal.”

But here’s some good news. Most of these learners (90 percent) have confidence that they can develop skills or learn in an online program, the IBM study says. If they could make sense of digital credential options, they might build on their strong confidence.

Here, colleges must walk a fine line, experts say, as microcredential offerings are not a panacea.

“Microcredentials are a field where success stories are highly visible, but the failures are largely hidden,” Albert said. “When a microcredential is unsuccessful, it just kind of disappears from the institution’s website.” That can mean sunk costs if a college invests in a program that does not ultimately succeed. Still, such offerings, when built on higher ed–industry partnerships, can be marketed as stackable credentials that may feed into undergraduate or graduate degree programs.

“Faculty can’t just say no,” Fong said. “They can question the quality, just like they did with online. But in terms of an institutional survival, these new educational credentials are going to be essential if an institution wants to survive and thrive.”

 

Online and Blended Learning
Survey
Technology
Image Source: 
Yutthana Gaetgeaw/iStock/Getty Images

AI bots can seem sentient. Students need guardrails

Image: 
Black and white silhouette of a woman with her palm to her forehead as she looks at a laptop.

Facebook founder Mark Zuckerberg once advised tech founders to “move fast and break things.” But in moving fast, some argue that he “broke” those young people whose social media exposure has led to depression, anxiety, cyberbullying, poor body image and loss of privacy or sleep during a vulnerable life stage.

Now, Big Tech is moving fast again with the release of sophisticated AI chat bots, not all of which have been adequately vetted before their public release.

OpenAI launched an artificial intelligence arms race in late 2022 with the release of ChatGPT—a sophisticated AI chat bot that interacts with users in a conversational way, but also lies and reproduces systemic societal biases. The bot became an instant global sensation, even as it raised concerns about cheating and how college writing might change.

In response, Google moved up the release of its rival chat bot, Bard, to Feb. 6, despite employee leaks that the tool was not ready. The company’s stock sank after a series of product missteps. Then, a day later, and in an apparent effort not to be left out of the AI–chat bot party, Microsoft launched its AI-powered Bing search engine. Early users quickly found that the eerily human-sounding bot produced unhinged, manipulative, rude, threatening and false responses, which prompted the company to implement changes—and AI ethicists to express reservations.

Rushed decisions, especially in technology, can lead to what’s called “path dependence,” a phenomenon in which early decisions constrain later events or decisions, according to Mark Hagerott, a historian of technology and chancellor of the North Dakota University system who earlier served as deputy director of the U.S. Naval Academy’s Center for Cyber Security Studies. The QWERTY keyboard, by some accounts (not everyone agrees), may have been designed in the late 1800s to minimize jamming of high-use typewriter letter keys. But the design persists even on today’s cellphone keyboards, despite the suboptimal arrangement of the letters.

“Being deliberate doesn’t mean we’re going to stop these things, because they’re almost a force of nature,” Hagerott said about the presence of AI tools in higher ed. “But if we’re engaged early, we can try to get more positive effects than negative effects.”

That’s why North Dakota University system leaders launched a task force to develop strategies for minimizing the negative effects of artificial intelligence on their campus communities. As these tools infiltrate higher ed, many other colleges and professors have developed policies designed to ensure academic integrity and promote creative uses of the emerging tech in the classroom. But some academics are concerned that, by focusing on academic honesty and classroom innovation, the policies have one blind spot. That is, colleges have been slow to recognize that students may need AI literacy training that helps them navigate emotional responses to eerily human-sounding bots’ sometimes-disturbing replies.

“I can’t see the future, but I’ve studied enough of these technologies and lived with them to know that you can really get some things wrong,” Hagerott said. “Early decisions can lock in, and they could affect students’ learning and dependency on tools that, in the end, may prove to be less than ideal to the development of critical thinking and discernment.”

AI Policies Take Shape—and Require Updates

When Emily Pitts Donahoe, associate director of instructional support at the University of Mississippi’s Center for Teaching and Learning, began teaching this semester, she understood that she needed to address her students’ questions and excitement surrounding ChatGPT. In her mind, the university’s academic integrity policy covered instances in which students, for example, copied or misrepresented work as their own. That freed her to craft a policy that began from a place of openness and curiosity.

Donahoe opted to co-create a course policy on generative AI writing tools with her students. She and the students engaged in an exercise in which they all submitted suggested guidelines for a class policy, after which they upvoted each other’s suggestions. Donahoe then distilled the top votes into a document titled “Academic integrity guidelines for use and attribution of AI.”

Some allowable uses in Donahoe’s policy include using AI writing generators to brainstorm, overcome writer’s block, inspire ideas, draft an outline, edit and proofread. The impermissible uses included taking what the writing generator wrote at face value, including huge chunks of its prose in an assignment and failing to disclose use of an AI writing tool or the extent to which it was used.

Donahoe was careful to emphasize that the rules they established applied to her class, but that other professors’ expectations may differ. She also disclosed that such a policy was as new to her as to the students, given the quick rise of ChatGPT and rival tools.

“It may turn out at the end of the semester that I think that everything I’ve just said is crap,” Donahoe said. “I’m still trying to be flexible for when new versions of this technology emerge or as we adapt to it ourselves.”

Like Donahoe, many professors have designed new individual policies with similar themes. At the same time, many college teaching and learning centers have developed new resource pages with guidance and links to articles such as Inside Higher Ed’s “ChatGPT Advice Academics Can Use Now.”

The academic research community has responded with new policies of its own. For example, ArXiv, the open-access repository of pre- and postprints, and the journals Nature and Science have all developed new policies that share two main directives. First, AI language tools cannot be listed as authors, since they cannot be held accountable for a paper’s contents. Second, researchers must document use of an AI language tool.

Nonetheless, academics’ efforts to navigate the new AI-infused landscape remain a work in progress. ArXiv, for example, first released its policy on Jan. 31 but issued an update on Feb. 7. Also, many have discovered that documenting use is a necessary but insufficient condition for acceptable use. For example, when Vanderbilt University employees wrote an email to students about the recent shooting at Michigan State University in which three people were killed and five were wounded, after which the gunman killed himself, they included a note at the bottom that said, “Paraphrase from OpenAI’s ChatGPT.” Many found such a usage, while acknowledged, to be deeply insensitive and flawed.

Those who are at work drafting such policies are grappling with some of academe’s most cherished values, including academic integrity, learning and life itself. Given the speed and the stakes, these individuals must think fast while proceeding with care. They must be explicit while remaining open to change. They must also project authority while exhibiting humility in the midst of uncertainty.

But academic integrity and accuracy are not the only issues related to AI chat bots. Further, students already have a template for understanding these issues, according to Ethan Mollick, associate professor of management and academic director at Wharton Interactive at the Wharton School at the University of Pennsylvania.

Policies might go beyond academic honesty and creative classroom uses, according to many academics consulted for this story. That is, the bots’ underlying technology—large language models—is intended to mimic human behavior. Though the machines are not sentient, humans often respond to them with emotion. As Big Tech accelerates its use of the public as a testing ground for the suspiciously human-sounding chat bots, students may be underprepared to manage their emotional responses. In this sense, AI chat bot policies that address literacy may help protect students’ mental health.

“There are enough stressors in the world that really are impacting our students,” Andrew Armacost, president of the University of North Dakota, said. AI chat bots “add potentially another dimension.”

An Often-Missing Ingredient in AI Chat Bot Policy

Bing AI is “much more powerful than ChatGPT” and “often unsettling,” Mollick wrote in a tweet thread about his engagement with the bot before Microsoft imposed restrictions.

“I say that as someone who knows that there is no actual personality or entity behind a [large language model],” Mollick wrote. “But, even knowing that it was basically auto-completing a dialog based on my prompts, it felt like you were dealing with a real person. I never attempted to ‘jailbreak’ the chat bot or make it act in any particular way, but I still got answers that felt extremely personal, and interactions that made the bot feel intentional.”

The lesson, according to Mollick, is that users can easily be fooled into thinking that an AI chat bot is sentient.

That concerns Hagerott, who, when he taught college, calibrated his discussions with students based on how long they had been in college.

“In those formative freshman years, I was always so careful,” Hagerott said. “I could talk in certain ways with seniors and graduate students, but boy, with freshmen, you want to encourage them, have them know that people learn in different ways, that they’ll get through this.”

Hagerott is concerned that some students lack AI literacy training that supports understanding of their emotional relationships to the large language models, including potential mental health risks. A tentative student who asks an AI chat bot a question about their self-worth, for example, may be unprepared to manage their own emotional response to a cold, negative response, Hagerott said.

Hollis Robbins, dean of the University of Utah’s College of Humanities, shares similar concerns. Colleges have long used institutional chat bots on their websites to support access to library resources or to enhance student success and retention. But such college-specific chat bots often have carefully engineered responses to the kinds of sensitive questions college students are prone to ask, including questions about their physical or mental health, Robbins said.

“I’m not sure it is always clear to students which is ChatGPT and which is a university-authorized and promoted chat,” Robbins said, adding that she looks forward to a day when colleges may have their own ChatGPT-like platforms designed for their students and researchers.

To be clear, none of the academics interviewed for this article argued that colleges should ban AI chat bots. The tools have infiltrated society as much as higher ed. But all expressed concern that some colleges’ policies may not be keeping pace with Big Tech’s release of undertested AI tools.

And so, new policies might focus on protecting student mental health, in addition to problems with accuracy and bias.

“It’s imperative to teach students that chat bots have no sentience or reasoning and that these synthetic interactions are, despite what they seem, still nothing more than predictive text generation,” Marc Watkins, lecturer in composition and rhetoric at the University of Mississippi, said of the shifting landscape. “This responsibility certainly adds another dimension to the already-challenging task of trying to teach AI literacy.”

Teaching and Learning
Technology
Image Source: 
kieferpix/iStock/Getty Images
Image Caption: 
Eerily human-sounding chat bots sometimes produce disturbing responses to students’ queries. Policies that address AI literacy may help protect students’ mental health.

AI bots can seem sentient. Students need guardrails

Image: 
Black and white silhouette of a woman with her palm to her forehead as she looks at a laptop.

Facebook founder Mark Zuckerberg once advised tech founders to “move fast and break things.” But in moving fast, some argue that he “broke” those young people whose social media exposure has led to depression, anxiety, cyberbullying, poor body image and loss of privacy or sleep during a vulnerable life stage.

Now, Big Tech is moving fast again with the release of sophisticated AI chat bots, not all of which have been adequately vetted before their public release.

OpenAI launched an artificial intelligence arms race in late 2022 with the release of ChatGPT—a sophisticated AI chat bot that interacts with users in a conversational way, but also lies and reproduces systemic societal biases. The bot became an instant global sensation, even as it raised concerns about cheating and how college writing might change.

In response, Google moved up the release of its rival chat bot, Bard, to Feb. 6, despite employee leaks that the tool was not ready. The company’s stock sank after a series of product missteps. Then, a day later, and in an apparent effort not to be left out of the AI–chat bot party, Microsoft launched its AI-powered Bing search engine. Early users quickly found that the eerily human-sounding bot produced unhinged, manipulative, rude, threatening and false responses, which prompted the company to implement changes—and AI ethicists to express reservations.

Rushed decisions, especially in technology, can lead to what’s called “path dependence,” a phenomenon in which early decisions constrain later events or decisions, according to Mark Hagerott, a historian of technology and chancellor of the North Dakota University system who earlier served as deputy director of the U.S. Naval Academy’s Center for Cyber Security Studies. The QWERTY keyboard, by some accounts (not everyone agrees), may have been designed in the late 1800s to minimize jamming of high-use typewriter letter keys. But the design persists even on today’s cellphone keyboards, despite the suboptimal arrangement of the letters.

“Being deliberate doesn’t mean we’re going to stop these things, because they’re almost a force of nature,” Hagerott said about the presence of AI tools in higher ed. “But if we’re engaged early, we can try to get more positive effects than negative effects.”

That’s why North Dakota University system leaders launched a task force to develop strategies for minimizing the negative effects of artificial intelligence on their campus communities. As these tools infiltrate higher ed, many other colleges and professors have developed policies designed to ensure academic integrity and promote creative uses of the emerging tech in the classroom. But some academics are concerned that, by focusing on academic honesty and classroom innovation, the policies have one blind spot. That is, colleges have been slow to recognize that students may need AI literacy training that helps them navigate emotional responses to eerily human-sounding bots’ sometimes-disturbing replies.

“I can’t see the future, but I’ve studied enough of these technologies and lived with them to know that you can really get some things wrong,” Hagerott said. “Early decisions can lock in, and they could affect students’ learning and dependency on tools that, in the end, may prove to be less than ideal to the development of critical thinking and discernment.”

AI Policies Take Shape—and Require Updates

When Emily Pitts Donahoe, associate director of instructional support at the University of Mississippi’s Center for Teaching and Learning, began teaching this semester, she understood that she needed to address her students’ questions and excitement surrounding ChatGPT. In her mind, the university’s academic integrity policy covered instances in which students, for example, copied or misrepresented work as their own. That freed her to craft a policy that began from a place of openness and curiosity.

Donahoe opted to co-create a course policy on generative AI writing tools with her students. She and the students engaged in an exercise in which they all submitted suggested guidelines for a class policy, after which they upvoted each other’s suggestions. Donahoe then distilled the top votes into a document titled “Academic integrity guidelines for use and attribution of AI.”

Some allowable uses in Donahoe’s policy include using AI writing generators to brainstorm, overcome writer’s block, inspire ideas, draft an outline, edit and proofread. Impermissible uses include taking what the writing generator produces at face value, incorporating large chunks of its prose into an assignment and failing to disclose use of an AI writing tool or the extent of that use.

Donahoe was careful to emphasize that the rules they established applied to her class, but that other professors’ expectations may differ. She also disclosed that such a policy was as new to her as to the students, given the quick rise of ChatGPT and rival tools.

“It may turn out at the end of the semester that I think that everything I’ve just said is crap,” Donahoe said. “I’m still trying to be flexible for when new versions of this technology emerge or as we adapt to it ourselves.”

Like Donahoe, many professors have designed new individual policies with similar themes. At the same time, many college teaching and learning centers have developed new resource pages with guidance and links to articles such as Inside Higher Ed’s “ChatGPT Advice Academics Can Use Now.”

The academic research community has responded with new policies of its own. For example, arXiv, the open-access repository of pre- and postprints, and the journals Nature and Science have all developed new policies that share two main directives. First, AI language tools cannot be listed as authors, since they cannot be held accountable for a paper’s contents. Second, researchers must document use of an AI language tool.

Nonetheless, academics’ efforts to navigate the new AI-infused landscape remain a work in progress. arXiv, for example, first released its policy on Jan. 31 but issued an update on Feb. 7. Many have also discovered that documenting use is a necessary but insufficient condition for acceptable use. When Vanderbilt University employees emailed students about the recent shooting at Michigan State University, in which three people were killed and five were wounded before the gunman killed himself, they included a note at the bottom that said, “Paraphrase from OpenAI’s ChatGPT.” Many found that use, even though it was acknowledged, deeply insensitive.

Those who are at work drafting such policies are grappling with some of academe’s most cherished values, including academic integrity, learning and life itself. Given the speed and the stakes, these individuals must think fast while proceeding with care. They must be explicit while remaining open to change. They must also project authority while exhibiting humility in the midst of uncertainty.

But academic integrity and accuracy are not the only issues AI chat bots raise. Further, students already have a template for understanding these issues, according to Ethan Mollick, associate professor of management and academic director of Wharton Interactive at the University of Pennsylvania’s Wharton School.

Policies might go beyond academic honesty and creative classroom uses, according to many academics consulted for this story. That is, the bots’ underlying technology—large language models—is intended to mimic human behavior. Though the machines are not sentient, humans often respond to them with emotion. As Big Tech accelerates its use of the public as a testing ground for the suspiciously human-sounding chat bots, students may be underprepared to manage their emotional responses. In this sense, AI chat bot policies that address literacy may help protect students’ mental health.

“There are enough stressors in the world that really are impacting our students,” Andrew Armacost, president of the University of North Dakota, said. AI chat bots “add potentially another dimension.”

An Often-Missing Ingredient in AI Chat Bot Policy

Bing AI is “much more powerful than ChatGPT” and “often unsettling,” Mollick wrote in a tweet thread about his engagement with the bot before Microsoft imposed restrictions.

“I say that as someone who knows that there is no actual personality or entity behind a [large language model],” Mollick wrote. “But, even knowing that it was basically auto-completing a dialog based on my prompts, it felt like you were dealing with a real person. I never attempted to ‘jailbreak’ the chat bot or make it act in any particular way, but I still got answers that felt extremely personal, and interactions that made the bot feel intentional.”

The lesson, according to Mollick, is that users can easily be fooled into thinking that an AI chat bot is sentient.

That concerns Hagerott, who, when he taught college, calibrated his discussions with students based on how long they had been in college.

“In those formative freshman years, I was always so careful,” Hagerott said. “I could talk in certain ways with seniors and graduate students, but boy, with freshmen, you want to encourage them, have them know that people learn in different ways, that they’ll get through this.”

Hagerott is concerned that some students lack AI literacy training that supports understanding of their emotional relationships to the large language models, including potential mental health risks. A tentative student who asks an AI chat bot a question about their self-worth, for example, may be unprepared to manage their own emotional response to a cold, negative response, Hagerott said.

Hollis Robbins, dean of the University of Utah’s College of Humanities, shares similar concerns. Colleges have long used institutional chat bots on their websites to support access to library resources or to enhance student success and retention. But such college-specific chat bots often have carefully engineered responses to the kinds of sensitive questions college students are prone to ask, including questions about their physical or mental health, Robbins said.

“I’m not sure it is always clear to students which is ChatGPT and which is a university-authorized and promoted chat,” Robbins said, adding that she looks forward to a day when colleges may have their own ChatGPT-like platforms designed for their students and researchers.

To be clear, none of the academics interviewed for this article argued that colleges should ban AI chat bots. The tools have infiltrated society as much as higher ed. But all expressed concern that some colleges’ policies may not be keeping pace with Big Tech’s release of undertested AI tools.

And so, new policies might focus on protecting student mental health, in addition to addressing accuracy and bias.

“It’s imperative to teach students that chat bots have no sentience or reasoning and that these synthetic interactions are, despite what they seem, still nothing more than predictive text generation,” Marc Watkins, lecturer in composition and rhetoric at the University of Mississippi, said of the shifting landscape. “This responsibility certainly adds another dimension to the already-challenging task of trying to teach AI literacy.”

Image caption: Eerily human-sounding chat bots sometimes produce disturbing responses to students’ queries. Policies that address AI literacy may help protect students’ mental health. (kieferpix/iStock/Getty Images)

Should professors eliminate deadlines?

Image: 
Sand running through the bulbs of an hourglass

When Hannah Snyder, assistant professor of psychology at Brandeis University, first began teaching, she did not set multiple midsemester deadlines for students to report progress on their end-of-semester papers. As the weeks passed, she offered her students gentle reminders to begin early and pace themselves, given the approaching course end. But many students nonetheless procrastinated. As the final deadline drew close, many scrambled in a stressful, last-minute burst of work that produced underwhelming results.

“I’ve never had so many incompletes,” Snyder said of the missed opportunity to support the students throughout the term and ensure that all completed the course. “You’d think with all my research on the development of executive function and mental health and stress in emerging adulthood, I would have put together from the get-go that part of our job as faculty is to help students develop those skills.” (Executive function is a set of skills that underlie a human’s ability to plan and achieve goals.)

Some equity-minded professors may believe that a single long-term deadline is better than numerous short-term deadlines, especially for students whose schedules lack flexibility due to significant work or family responsibilities. Others argue that professors should set boundaries with students, not for them. Still others offer anecdotal reports that optional attendance policies, flexible deadlines and ungrading increase student engagement. Yet another faculty contingent resists structuring courses with short-term assignments that build to a large project out of concern that doing so coddles students.

But the science says that a single, far-off deadline for a substantial assignment undermines traditional-aged students’ success, as their self-monitoring and self-regulation skills are still developing.

“Most graduate programs provide very little instruction in teaching” for Ph.D.s who join the faculty ranks, Snyder said. “Professors are doing all sorts of things—sometimes with the best of intentions—that are actually not helpful based on a solid body of cognitive psych research.”

Productivity and Procrastination

Students procrastinate at rates that may be two to three times those observed in working populations, according to a Frontiers in Psychology study. The college environment is one that sometimes affords ample unstructured time, distractions and far-off deadlines—situational characteristics that contribute to what Frode Svartdal, professor of psychology at the Arctic University of Norway and co-author of the Frontiers study, has dubbed a “procrastination-friendly environment.”

At the same time, traditional-aged college students are in a developmental stage in which their executive function skills are often works in progress, Svartdal said. This means that they may, at times, be impulsive, distracted or challenged by efforts to follow through on planned activities. For example, an impulsive person may give up on a boring or difficult task in favor of an activity they prefer, even when doing so is not in their best interest.

When students’ still-developing executive function skills are paired with academe’s procrastination-friendly environment, the result can create a perfect storm. But speeding up students’ developmental growth rarely happens in an instant. That’s why addressing the situational piece of this equation offers more immediate promise. To minimize negative impacts on students’ intellectual growth, Svartdal offers a succinct message.

“Long-term deadlines should be avoided,” Svartdal wrote in an email.

Short-term deadlines serve as motivators for accomplishing accessible tasks. They also imbue each step in the process of completing a longer-term project with more meaning, according to Svartdal. When professors steer clear of single, far-off deadlines, they keep the focus on the course content.

“Presumably, our classes should be assessing students’ mastery of the material and not their executive function skills,” Snyder said. Numerous short-term deadlines “help all students but are critical for those who would otherwise flounder for reasons that have nothing to do with their understanding of the material.” This group includes those with mental health concerns, including attention deficit hyperactivity disorder, anxiety or depression.

Students with significant work or family responsibilities also benefit from more—not fewer—deadlines.

“Those students are more likely to find a particular task aversive. They’re tired, and they’re stressed,” said Akira Miyake, professor of psychology and neuroscience at the University of Colorado at Boulder. Miyake’s research considers holistic classroom interventions that aim to reduce academic procrastination. “To help students, provide scaffolding. Break down the [long-term] task into smaller deadlines.”

Still, students are often vocal about a desire for more independence, including on work for substantial projects. But Snyder knows from her research that students’ perceptions of their learning are often overestimates. Also, when a student submits an outline for a far-off paper, she is able to offer early corrective feedback that bolsters their success. And so, she has a ready response for students who protest.

“‘Current you’ might feel like I’m being annoying, but ‘future you’ is going to be glad not to have a mad scramble at the end, and you’ll get a better grade,” Snyder said. Besides, student procrastinators may gain short-term benefits early in a semester in the form of, for example, more free time. But the long-term academic and personal costs can be more significant.

“Procrastination is associated with psychological distress,” Miyake said. Students who procrastinate suffer from reduced well-being, stress and mental and physical health problems, according to a Psychological Science study. But small changes to assignment-submission protocols can minimize this distress and help students stay focused on course content. “It’s easy to say that college students should be able to manage long-term deadlines on their own, but that’s not the case.”

Caveats

In between the options of “deadlines” and “no deadlines” lies the option of “flexible deadlines.” In recognition that today’s students are studying—and often also juggling work and family—during a global pandemic, among other significant societal concerns, some argue that structure and flexibility must be viewed as compatible.

“I’m all for being flexible about deadlines if a student asks for or needs accommodations,” Miyake said. Even when a request for accommodation is granted, the near-term deadline provides a professor with advance notice of a problem the student is facing. That heads-up offers the professor an opportunity to intervene early with help.

Also, students’ executive function skills, like most other human attributes, are variable. Managing long-term deadlines without supervision can be a skill that is honed over time.

“Some students can do it already, and some of them will never be able to do it,” Snyder said. “Both of those are OK.”

Many employers for whom students will work after graduation rely on structured, short-term deadlines in the workplace, according to the researchers contacted for this article. That’s because employers, too, realize that multiple short-term deadlines keep teams on track and increase productivity.

“Not everyone has to be ready for a future where they’re let loose with months of unstructured time and expected to produce something big at the end,” Snyder said.

Some students, of course, work well when afforded autonomy. They may be innovative, prone to taking risks and proactive in their work. Such students might consider an independent path, as research has correlated these traits with entrepreneurial intention.

Image Source: serggn/iStock/Getty Images


Community colleges' positive, pervasive digital leap

Image: 
A view of snowy Berlin, N.H., and sunny Miami.

Against the backdrop of headlines blaring news about declining community college enrollments, one bright spot has emerged: community college students, faculty and administrators are broadly aligned in their enthusiasm for the digital transformation that has occurred at their institutions since the start of the pandemic. That’s the broad finding of a report, “The Digital Transformation of the Community College,” published today by Bay View Analytics.

First, the (bad) backdrop: between 2012 and 2019, community college enrollments declined by approximately 12 percent. The pandemic accelerated that trend, as enrollment at these institutions fell another 9 percent between 2019 and 2020, according to the report.

But community college students, faculty and administrators—from rural New Hampshire, urban Miami and beyond—have been emboldened by their pandemic-era experiences with digital course materials and online learning. Now, they are calling for more technology use in the future, including in face-to-face classes, according to the report.

[Chart: fall 2022 survey results on how effective technology was for community college student learning. Administrators: 31 percent assigned a grade of A, 55 percent a B, 12 percent a C and 1 percent an F. Faculty: 45 percent A, 43 percent B, 10 percent C and 2 percent F. Students: 48 percent A, 35 percent B, 13 percent C and 4 percent F.]

“If any of your models about student preferences for online and blended learning came from prior to 2020, you should trash them and start over again because so much has changed,” said Jeff Seaman, director of Bay View Analytics and co-author of the report. Julia Seaman, research director at Bay View, also co-authored the report, which was produced with support from Cengage.

The study surveyed 1,206 faculty members and administrators and 2,358 students at 1,252 institutions from all 50 states and the District of Columbia. The report is the seventh round of data collection in a series of surveys that began in April 2020. Most of the findings in the current report come from the fall 2022 survey; if a question was not included on that survey, findings from the fall 2021 survey were used.

Most of the community college student respondents (79 percent) gave their online courses a grade of A or a B in terms of effectively meeting their educational needs. This fall 2022 grade marked a small (3 percent) improvement from the spring 2021 grade students assigned. Few students (6 percent) assigned grades of D or F to the effectiveness of their online courses.

“Pre-pandemic and early in the pandemic in the North Country of New Hampshire, broadband access was an issue,” said Charles Lloyd, president of White Mountains Community College. “Some of that has been relieved by more broadband access and hotspots. Our campus is now an open-access zone all across our parking lot.”

Most of the community college students (56 percent) and faculty members (52 percent) who responded to the survey view online learning more favorably than they did before the pandemic. A small percentage of these online students (15 percent) and faculty (17 percent) are pessimistic.

“This idea that a student is either online or in person is no more,” said Madeline Pumariega, president of Miami Dade College. “Post-COVID, our students are both. They use online as a modality of convenience, to balance their lives with their work schedules.”

More than two-thirds (69 percent) of the community college faculty respondents now prefer to incorporate technology into their in-person classes. Nearly all (97 percent) reported that they had access to effective technology support and training.

[Chart: fall 2022 survey results on changes in attitudes over the past year toward the use of digital materials. Administrators: 60 percent more optimistic, 36 percent no change, 4 percent more pessimistic. Faculty: 45 percent more optimistic, 46 percent no change, 9 percent more pessimistic. Students: 47 percent more optimistic, 38 percent no change, 15 percent more pessimistic. (Bay View Analytics)]

“The digital transformation has real staying power from an access standpoint,” Lloyd said, adding that effective technology support is vital to access. “We cover the northern half of New Hampshire … Even though they may be online students, they still might take an online class from the comfort of our library.”

Community college students’ satisfaction with and optimism about online learning has created a growing demand for more online and blended learning, as most (58 percent) want more. Conveniently, their desires align with faculty desires (64 percent) to teach online courses.

At face value, remote Berlin, N.H.—home to White Mountains Community College—appears to have little in common with urban Miami. Berlin (pronounced “BER-lin”) sits in a heavily forested region of the state that was once home to a thriving paper industry, which, at its peak, employed more than 8,000 people in its mills and nearby woods. But the last of Berlin’s paper mills closed in 2021, and today the (beautiful) town ranks among the poorest in the state.

But Berlin and Miami have something in common. Both are home to community colleges whose presidents attested to positive, pervasive digital transformations in their student, faculty and administrator communities since the start of the pandemic, as the Bay View Analytics report describes.

“Whether a student wants to avoid driving 45 minutes on a country road with no cars or sitting in urban traffic for 45 minutes on a three-mile drive with 100,000 cars, online platforms at community colleges provide access points and the commodity of time, especially for our students that are balancing life along with pursuing college degrees,” Pumariega said.

Students, faculty and administrators also have more favorable opinions of digital course materials than they did in the fall of 2021, according to the report. On this topic, most administrators (60 percent) were optimistic, and close to half of faculty members (45 percent) and students (47 percent) were as well. Many reported no change, and a minority were pessimistic (4 percent of administrators, 9 percent of faculty and 15 percent of students).

“Students are not only seeing all things digital as niceties,” Lloyd said. “They’re coming to expect them.”

A wholesale return to pre-pandemic norms for digital course materials and online and blended learning is unlikely, according to the report. Nearly all (90 percent) of the community college faculty respondents indicated that their teaching had changed due to the pandemic. And even more (97 percent) expect that the changes they made during the pandemic will endure.

Miami Dade, for example, now offers synchronous online course offerings in its MDC Live platform, in addition to in-person, blended and asynchronous online options. MDC Live benefits students with work or family responsibilities who need the convenience of online and the structure of regular live meetings with an instructor and classmates, Pumariega said.

Since the survey considered only enrolled students, it does not shed light on potential community college students who never attended or who dropped out.

“Who are the missing students, and why are some not returning?” Jeff Seaman asked. “The news about two-year institutions’ enrollments going forward has not been good.”

But compared with pre-pandemic times, today’s enrolled community college students are gaining more exposure to remote work and are more at ease with replacing physical items with electronic items. That may have implications for workplaces in which these graduates land, Jeff Seaman hypothesized.

“The challenge we’re trying to mitigate is consistency of quality of technology and the usage of it,” Lloyd said. “ChatGPT reminds us that whatever new thing comes out, we need to be on the forefront of it, embrace it and provide training around it.”

Community Colleges
Online and Blended Learning
Teaching and Learning
Teaching With Technology
Technology
Image Source: 
Left: DenisTangneyJr/Getty Images; right: Gabriele Maltinti/Getty Images
Image Caption: 
Community college students, faculty and administrators—from rural New Hampshire to urban Miami and beyond—have been emboldened by their pandemic-era experiences with digital course materials and online learning.

ChatGPT sparks debate on how to design student assignments now

Is an ice cream sandwich a sandwich? How about a sushi roll, chicken wrap or sloppy joe? These were some of the prompts included in a classification and model-building assignment in the fall 2022 Knowledge-Based AI course that David Joyner taught at the Georgia Institute of Technology.

But when Joyner, executive director of online education and the online master of science in computer science and senior research associate, was scheduled to teach the course again in the spring 2023 semester, he reconsidered the assignment in the presence of ChatGPT—the OpenAI chat bot that burst onto the global stage in late 2022 and sent shock waves across academe. The bot interacts with users in a conversational way, including by answering questions, admitting its mistakes, challenging falsehoods and rejecting inappropriate requests.

“I’d used the questions for five years because they were fun questions,” Joyner said. “But ChatGPT’s answer was so precise that I’m pretty sure it was learning from my own best students,” who he suspected had posted their work online. Joyner replaced several of the sandwich options with avocado toast, shawarma, pigs in a blanket, Klondike bar and Monte Cristo. He also updated the academic misconduct statement on his syllabus to “basically say that copying from ChatGPT isn’t different from copying from other people.” Such efforts, Joyner acknowledges, may be a temporary fix.

As faculty members ponder academe’s new ChatGPT-infused reality, many are scrambling to redesign assignments. Some seek to craft assignments that guide students in surpassing what AI can do. Others see that as a fool’s errand—one that lends too much agency to the software.

Either way, in creating assignments now, many seek to exploit ChatGPT’s weaknesses. But answers to questions concerning how to design and scale assessments, as well as how to help students learn to mitigate the tool’s inherent risks are, at best, works in progress.

“I was all ready to not stress about the open AI shit in terms of student papers, because my assignments are always hyper specific to our readings and require the integration of news articles to defend claims etc. … BUT THEN I TRIED IT …” Danna Goldthwaite Young, professor of communication at the University of Delaware, wrote this week in introducing a thread on Twitter.

Students Should Surpass AI—or Not

When Boris Steipe, associate professor of molecular genetics at the University of Toronto, first asked ChatGPT questions from his bioinformatics course, it produced detailed, high-level answers that he deemed as good as his own. He still encourages his students to use the chat bot. But he also created The Sentient Syllabus Project, an initiative driven by three principles: AI should not be able to pass a course, AI contributions must be attributed and true, and the use of AI should be open and documented.

“When I say AI cannot pass the course, it means we have to surpass the AI,” Steipe said. “But we also must realize that we cannot do that without the AI. We surpass the AI by standing on its shoulders.”

Steipe, for example, encourages students to engage in a Socratic debate with ChatGPT as a way of thinking through a question and articulating an argument.

“You will get the plain vanilla answer—what everybody thinks—from ChatGPT,” Steipe said, adding that the tool is a knowledgeable, infinitely patient and nonjudgmental debate partner. “That’s where you need to start to think. That’s where you need to ask, ‘How is it possibly incomplete?’”

But not every faculty member is convinced that students should begin with ChatGPT’s outputs.

“Even when the outputs are decent, they’re shortcutting the students’ process of thinking through the issue,” said Anna Mills, English instructor at the College of Marin. “They might be taking the student in a different direction than they would have gone if they were following the germ of their own thought.”

Some faculty members also challenge the suggestion that students should compete with AI, as such framing appears to assign the software agency or intelligence.

“I do not see value in framing AI as anything other than a tool,” Marc Watkins, lecturer in composition and rhetoric at the University of Mississippi, wrote in an email. Watkins, his department colleagues and his students are experimenting with ChatGPT to better understand its limitations and benefits. “Our students are not John Henry, and AI is not a steam-powered drilling machine that will replace them. We don’t need to exhaust ourselves trying to surpass technology.”

Still, others question the suggestion that AI-proofing a course is difficult.

“Creating a course that AI cannot pass? Shouldn’t take very long at all,” Robert Cummings, associate professor of writing and rhetoric at the University of Mississippi, wrote in an email. “Most AI writing generators are, at this stage, laughably inaccurate … Testing AI interactions with components of a course might make more sense.”

But Steipe is pondering a possible future in which descendants of today’s AI-writing tools raise existential questions.

“This is not just about upholding academic quality,” Steipe said. “This is channeling our survival instincts. If we can’t do that, we are losing our justification for a contribution to society. That’s the level we have to achieve.”

How Faculty Can Exploit ChatGPT’s (Current) Weaknesses

In the future, faculty members may get formal advice about how to craft assignments in a ChatGPT world, according to James Hendler, director of the Future of Computing Institute and professor of computer, web and cognitive sciences at Rensselaer Polytechnic Institute.

In the meantime, faculty are innovating on their own.

In computer science, for example, many professors have observed that AI writing tools can write code that works, though not necessarily the kind that humans find easy to read and edit, Hendler said. That observation can be exploited to create assignments that distinguish merely functional code from thoughtfully crafted code.

“We try to teach our students how to write code that other people will understand, with comments, mnemonic variable names and breaking code up into meaningful pieces,” Hendler said. “That’s not what’s happening with these systems yet.”
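The qualities Hendler names can be shown with a hypothetical before-and-after: the same logic, first in the terse style he says AI generators often produce, then refactored with the comments, mnemonic names and decomposition he describes (both functions are illustrative, not taken from any course).

```python
# Terse but working code, in the style Hendler says AI generators often emit.
def f(a):
    r = []
    for x in a:
        if x % 2 == 0:
            r.append(x * x)
    return r

# The same logic refactored the way Hendler suggests teaching students:
# a mnemonic name, a docstring and a comment explaining intent.
def squares_of_evens(numbers):
    """Return the square of every even number in `numbers`."""
    # Keep only even values, then square each one.
    return [n * n for n in numbers if n % 2 == 0]

# Both behave identically; only the readability differs.
assert f([1, 2, 3, 4]) == squares_of_evens([1, 2, 3, 4]) == [4, 16]
```

An assignment built on this observation might hand students the first version and grade them on producing the second.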

Also, since ChatGPT’s ability to craft logical arguments can underwhelm, assignments that require critical thinking can work well even when students have access to the tool.

“It’s not very good at introspecting,” Steipe said. “It just generates. You often find non sequiturs or arguments that don’t hold water. When you point it out to the AI, it says, ‘Oh, I got something wrong. I apologize for the confusion.’”

Several faculty members contacted for this article mentioned that lessons learned from the earlier emergence of Wikipedia hint at a path forward. That is, both the online encyclopedia and OpenAI’s chat bot offer coherent prose that is prone to errors. Back then, faculty adapted assignments to mix use of the new tool with fact-checking.

Moving forward, professors can expect students to use ChatGPT to produce first drafts that warrant review for accuracy, voice, audience and integration to the purpose of the writing project, Cummings wrote. As the tools improve, students will need to develop more nuanced skills in these areas, he added.

An Unsolved Problem

Big tech plans to mainstream AI writing tools in its products. For example, Microsoft, which recently invested in ChatGPT, will integrate the tool into its popular office software and sell access to the tool to other businesses. That has applied pressure to Google and Meta to speed up their AI-approval processes.

“My classes now require AI, and if I didn’t require AI use, it wouldn’t matter, everyone is using AI anyway,” Ethan Mollick, associate professor of management and academic director of the Wharton Interactive at the University of Pennsylvania, wrote on his blog that translates academic research into useful insights.

But big tech’s speed in delivering AI products to market has not always been accomplished with care. Social media platforms, for example, were once naïvely celebrated for bringing together people with shared interests; few realized at the time that the platforms would also bring together supporters of terror, extremism and hate.

Meta’s release of a ChatGPT-like chat bot several months before OpenAI’s product received a tepid response, which Meta’s chief artificial intelligence scientist, Yann LeCun, blamed on Meta being “overly careful about content moderation,” according to The Washington Post. (LeCun spoke with Inside Higher Ed about challenges in computer science in September.) Faculty members may need to help students learn to mitigate and address inherent, real-world harm new tech tools may pose.

“The gloves are off,” Steipe said of the huge monetary driver of the emergence of sophisticated chat bots. In higher education, this may mean that the ways in which professors assess students may change. “We’ve heavily been basing assessment on proxy measures, and that may no longer work.”

Professors may assess their students directly, but that level of personal interaction generally does not scale. Still, some are encouraged to find themselves on the same side, so to speak, as their students.

“Our students want to learn and are not in a rush to cede their voices to an algorithm,” Watkins wrote.

Such alignment, when present, may offer comfort amid the heady disruption academics have experienced since ChatGPT’s release, especially as bigger questions—beyond how to assign grades—loom.

“The difference between the AI and the human mind is sentience,” Steipe said. “If we want to teach as an academy in the future that is going to be dominated by digital ‘thought,’ we have to understand the added value of sentience—not just what sentience is and what it does, but how we justify that it is important and important in the way that we’re going to get paid for it.”

Teaching and Learning
Technology
Image Caption: 
Sutthiphong Chandaeng/Getty Images

Report: U of Arkansas system may buy University of Phoenix

The University of Arkansas System is considering a transaction in which it would transform the for-profit University of Phoenix into an independent nonprofit affiliate, spokespeople for both institutions have confirmed. The deal, if it came to pass, would be the latest in a series of absorptions of formerly massive for-profit colleges by public universities, most of which have stirred controversy.

“Because these conversations are ongoing, we are unable to provide much detail,” Nate Hinkel, director of communications at the University of Arkansas System, wrote in a prepared statement. “However, I do want to confirm that the UA System itself would not be acquiring the University of Phoenix, and no public or university funds would be involved in this potential transaction. The contemplated structure would also not include any remaining private ownership of the nonprofit entity or the University of Phoenix.”

The system has created an affiliated nonprofit entity for the purpose of a potential University of Phoenix acquisition, Hinkel wrote.

The Arkansas Times broke the news about the possible deal based on a leak, according to Hinkel, who confirmed that discussions are under way but that “there is nothing imminent at this time.” Hinkel added that the university system has been exploring new educational markets, especially in online education. Also, this week’s regularly scheduled meeting of the Board of Trustees of the University of Arkansas does not have this item on its agenda, Hinkel said.

At its peak in 2010, the University of Phoenix enrolled 470,000 students, including a mix of in-person and online students who were mostly working learners. Over the course of the next decade, economic trends and aggressive regulatory scrutiny from the Obama administration battered the university’s reputation and drove enrollment down. In 2019, the institution and the Federal Trade Commission agreed to a multimillion-dollar settlement following a five-year investigation into whether the university engaged in deceptive advertising by falsely touting its relationships with big employers; the agreement did not include an admission of wrongdoing.

By 2021, the university had 78,600 students, according to the institution’s 2021 annual academic report—the most recent available. The institution, which is owned by Apollo Education Group, once had more than 200 satellite learning centers and campuses but announced last year that it will close all but one, in Phoenix, by 2025.

Andrea Smiley, vice president of public relations at the University of Phoenix, confirmed both that the for-profit university is in talks with the University of Arkansas System and that no deal is imminent.

“Being private equity owned, you’re always having that conversation,” Smiley said. “The aim of our leadership is to find the best structure to sustain the legacy of this institution that’s existed for almost 50 years … Is nonprofit the right path? It could be.”

Should the deal happen, it would not be the University of Arkansas’ first attempt at going big with online education. In 2014, the university system sought to join the growing number of public universities attempting to reach working learners by creating a fully separate online institution known as eVersity. But eVersity never achieved its founders’ enrollment targets or came close to break-even status financially. In 2022, the Arkansas system’s Board of Trustees folded eVersity into a new public institution—University of Arkansas Grantham—after a $1 purchase of the online Grantham University, a for-profit institution that had approximately 5,500 students.

The University of Phoenix “has been on a mission of going from being an advertising juggernaut and accepting everybody, a number of which weren’t able to graduate, to being much more selective and trying to manage their online programs at a much lower enrollment number with a much higher-quality student,” Wallace Boston, co-founder of Green Street Impact Partners, a private equity firm focused on investing in education technology companies, said. Earlier, Boston served as the president and CEO of American Public University System and its parent company, American Public Education.

The potential purchase “would not be surprising in the sense that the Biden administration has tried to put more onerous rules that apply primarily to the for-profit institutions,” Boston said, adding that those rules make the enrollment market especially tough for for-profit institutions.

In recent years, several public universities have absorbed formerly massive for-profit colleges in a series of controversial deals. In 2017, for example, Purdue University, a public institution in Indiana, produced a “tectonic shift” in American higher education when it acquired Kaplan University, including its roughly 32,000 students, 15 campus locations and 3,000 employees. The acquisition produced the nonprofit Purdue University Global. The deal marked a bold entry into the online education market for Purdue, as nearly 85 percent of Kaplan’s students at the time had been enrolled in fully online programs. But critics questioned whether Kaplan’s recruitment tactics and the value of its credentials posed reputational risks to the public institution.

A few years later, in 2020, the University of Arizona, a public land-grant institution, purchased for-profit Ashford University in a deal that included roughly 35,000 students—all online. The acquisition, which became the nonprofit University of Arizona Global Campus, was designed to maintain its own leadership, faculty members, academic programs and accreditation. Several business analysts at the time praised the deal as positive for Zovio, the publicly traded parent company of Ashford, which would run the new institution’s programs. But Arizona’s faculty members were also concerned about reputational risks in associating with a for-profit university that had been accused of predatory recruitment practices. They also questioned why they had largely been sidelined in discussions.

Arguably the most controversial aspect of the Purdue-Kaplan and Arizona-Ashford deals was the fact that in both cases, the newly formed nonprofit online university continued to be managed in part through services offered by the for-profit companies (Kaplan and Zovio). Last summer the University of Arizona ended that arrangement by buying Zovio's assets and taking full control of Arizona Global's operation. 

The Arkansas-Phoenix partnership would appear to avoid that issue through the outright purchase of Phoenix, which the Arkansas Times valued at an estimated $500 million to $700 million. 

Even so, should the University of Arkansas System absorb the University of Phoenix—one of the oldest for-profit universities—the deal is likely to spark discussion in the system’s community, American higher ed and beyond.

“I’ve seen recent articles that say that the Department of Education under Biden has announced that they’re going to scrutinize these for-profit nonprofit conversions much more than they’ve been scrutinized in the past,” Boston said. “That could possibly explain why there’s been no specific announcement yet.”

Correction: An earlier version of this story featured an image with an incorrect logo for the UA System. This has been corrected.

Online and Blended Learning
Technology

Academics work to detect ChatGPT and other AI writing

Image: 
An artist's depiction of a brain from above. On the left side of the brain, straight black-and-white lines suggest technology. On the right, curved lines and colors suggest human creativity.

When humans write, they leave subtle signatures that hint at the prose’s fleshy, brainy origins. Their word and phrase choices are more varied than those selected by machines that write. Human writers also draw from short- and long-term memories that recall a range of lived experiences and inform personal writing styles. And unlike machines, people are susceptible to inserting minor typos, such as a misplaced comma or a misspelled word. Such attributes betray the text’s humanity.

For these reasons, AI-writing detection tools are often designed to “look” for human signatures hiding in prose. But signature hunting presents a conundrum for sleuths attempting to distinguish between human- and machine-written prose.

“If I’m a very intelligent AI and I want to bypass your detection, I could insert typos into my writing on purpose,” said Diyi Yang, assistant professor of computer science at Stanford University.

In this cat-and-mouse game, some computer scientists are working to make AI writers more humanlike, while others are working to improve detection tools. Academic fields make progress in this way. But some on the global artificial intelligence stage say this game’s outcome is a foregone conclusion.

“In the long run, it is almost sure that we will have AI systems that will produce text that is almost indistinguishable from human-written text,” Yoshua Bengio, the “godfather of AI” and recipient of the Turing Award, often referred to as the Nobel of computer science, told Inside Higher Ed in an email exchange. Bengio is a professor of computer science at the University of Montreal.

Nonetheless, the scientific community and higher ed have not abandoned AI-writing detection efforts—and Bengio views those efforts as worthwhile. Some are motivated to ferret out dishonesty in academic pursuits. Others seek to protect public discourse from malicious uses of text generators that could undermine democracies. (Educational technology company CEOs may have dollar signs in their eyes.) Still others are driven by philosophical questions concerning what makes prose human. Whatever the motivation, all must contend with one fact:

“It’s really hard to detect machine- or AI-generated text, especially with ChatGPT,” Yang said.

The ‘Burstiness’ of Human Prose

During the recent holiday break, Edward Tian, a senior at Princeton University, headed to a local coffeeshop. There, he developed GPTZero, an app that seeks to detect whether a piece of writing was written by a human or ChatGPT—an AI-powered chat bot that interacts with users in a conversational way, including by answering questions, admitting its mistakes, challenging falsehoods and rejecting inappropriate requests. Tian’s effort took only a few days but was based on years of research.

His app relies on two writing attributes: “perplexity” and “burstiness.” Perplexity measures the degree to which ChatGPT is perplexed by the prose; a high perplexity score suggests that ChatGPT may not have produced the words. Burstiness is a big-picture indicator that plots perplexity over time.

“For a human, burstiness looks like it goes all over the place. It has sudden spikes and sudden bursts,” Tian said. “Versus for a computer or machine essay, that graph will look pretty boring, pretty constant over time.”

Tian and his professors hypothesize that the burstiness of human-written prose may be a consequence of human creativity and short-term memories. That is, humans have sudden bursts of creativity, sometimes followed by lulls. Meanwhile, machines with access to the internet’s information are somewhat “all-knowing” or “kind of constant,” Tian said.
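As a rough illustration of the two measures (not Tian’s actual implementation, which relies on a large language model’s token probabilities), the toy sketch below scores text against a unigram model: perplexity is the exponentiated average negative log-probability of the words, and burstiness is captured here as the spread of per-sentence perplexity. All names and the tiny corpus are invented for the example.

```python
import math
from collections import Counter

def unigram_model(corpus_words):
    """Build a toy unigram probability model with add-one smoothing."""
    counts = Counter(corpus_words)
    total = sum(counts.values())
    vocab = len(counts) + 1  # +1 reserves mass for unseen words
    return lambda w: (counts[w] + 1) / (total + vocab)

def perplexity(words, prob):
    """Exponentiated average negative log-probability of the words.
    Low means the model finds the text predictable."""
    avg_nll = sum(-math.log(prob(w)) for w in words) / len(words)
    return math.exp(avg_nll)

def burstiness(sentences, prob):
    """Spread (population std. dev.) of per-sentence perplexity:
    high for 'spiky' human prose, low for uniformly predictable text."""
    scores = [perplexity(s.split(), prob) for s in sentences]
    mean = sum(scores) / len(scores)
    return math.sqrt(sum((x - mean) ** 2 for x in scores) / len(scores))

corpus = "the cat sat on the mat the dog sat on the rug".split()
prob = unigram_model(corpus)
print(perplexity("the cat sat".split(), prob))          # low: familiar words
print(perplexity("quantum flux capacitor".split(), prob))  # high: unseen words
```

A real detector replaces the unigram model with a neural language model, but the shape of the calculation is the same: familiar, predictable prose scores low, and surprising prose scores high.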

Upon releasing GPTZero to the public on Jan. 2, Tian expected a few dozen people to test it. But the app went viral. Since its release, hundreds of thousands of people from most U.S. states and more than 30 countries have used the app.

“It’s been absolutely crazy,” Tian said, adding that several venture capitalists have reached out to discuss his app. “Generative AI and ChatGPT technology are brilliantly innovative. At the same time, it’s like opening Pandora’s box … We have to build in safeguards so that these technologies are adopted responsibly.”

Tian does not want teachers to use his app as an academic honesty enforcement tool. Rather, he is driven by a desire to understand what makes human prose unique.

“There is something implicitly beautiful in human writing,” said Tian, a fan of writers like John McPhee and Annie Dillard. “Computers are not coming up with anything original. They’re basically ingesting gigantic portions of the internet and regurgitating patterns.”

Detectors Without Penalties

Much like weather-forecasting tools, existing AI-writing detection tools deliver verdicts in probabilities. As such, even high probability scores may not foretell whether an author was sentient.

“The big concern is that an instructor would use the detector and then traumatize the student by accusing them, and it turns out to be a false positive,” Anna Mills, an English instructor at the College of Marin, said of the emergent technology.

But professors may introduce AI-writing detection tools to their students for reasons other than honor code enforcement. For example, Nestor Pereira, vice provost of academic and learning technologies at Miami Dade College, sees AI-writing detection tools as “a springboard for conversations with students.” That is, students who are tempted to use AI writing tools to misrepresent or replace their writing may reconsider in the presence of such tools, according to Pereira.

For that reason, Miami Dade uses a commercial software platform—one that provides students with line-by-line feedback on their writing and moderates student discussions—that has recently embedded AI-writing detection. Pereira has endorsed the product in a press release from the company, though he affirmed that neither he nor his institution received payment or gifts for the endorsement. He did, however, acknowledge that his endorsement has limits.

“We’re definitely worried about false positives,” Pereira told Inside Higher Ed. “I’m also worried about false negatives.”

Beyond discussions of academic integrity, faculty members are talking with students about the role of AI-writing detection tools in society. Some view such conversations as a necessity, especially since AI writing tools are expected to be widely available in many students’ postcollege jobs.

“These tools are not going to be perfect, but … if we’re not using them for gotcha purposes, they don’t have to be perfect,” Mills said. “We can use them as a tool for learning.” Professors can use the new technology to encourage students to engage in a range of productive ChatGPT activities, including thinking, questioning, debating, identifying shortcomings and experimenting.

Also, on a societal level, detection tools may aid efforts to protect public discourse from malicious uses of text generators, according to Mills. For example, social media platforms, which already use algorithms to make decisions about which content to boost, could use the tools to guard against bad actors. In such cases, probabilities may work well.

“We have to fight to preserve that humanity of communication,” Mills said.

A Long-Term Challenge

In an earlier era, a birth mother who anonymously placed a child with adoptive parents with the assistance of a reputable adoption agency may have felt confident that her parentage would never be revealed. All that changed when quick, accessible DNA testing from companies like 23andMe empowered adoptees to access information about their genetic legacy.

Though today’s AI-writing detection tools are imperfect at best, any writer hoping to pass an AI writer’s text off as their own could be outed in the future, when detection tools may improve.

“We need to get used to the idea that, if you use a text generator, you don’t get to keep that a secret,” Mills said. “People need to know when it’s this mechanical process that draws on all these other sources and incorporates bias that’s actually putting the words together that shaped the thinking.”

Tian’s GPTZero is not the first app for detecting AI writing, nor is it likely to be the last.

OpenAI—ChatGPT’s developer—considers detection efforts a “long-term challenge.” Its research on GPT-2-generated text indicates that the detection tool works approximately 95 percent of the time, which is “not high enough accuracy for standalone detection and needs to be paired with metadata-based approaches, human judgment, and public education to be more effective,” according to OpenAI. Detection accuracy depends heavily on training and testing sampling methods and on whether training included a range of sampling techniques, according to the study.

After-the-fact detection is only one approach to the problem of distinguishing between human- and computer-written text. OpenAI is attempting to “watermark” ChatGPT text. Such digital signatures could embed an “unnoticeable secret signal” indicating that the text was generated by ChatGPT. Such a signal would be discoverable only by those with the “key” to a cryptographic function—a mathematical technique for secure communication. The work is forthcoming, but some researchers and industry experts have already expressed doubt about the watermarking’s potential, citing concerns that workarounds may be trivial.
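The watermarking idea can be sketched with a toy “green list” scheme, an assumption modeled on published watermarking research rather than OpenAI’s undisclosed method: a keyed hash of the previous token splits the vocabulary in two, generation quietly favors the “green” half, and only someone holding the key can measure the bias. The vocabulary and key below are invented for illustration.

```python
import hashlib
import random

VOCAB = ["alpha", "bravo", "charlie", "delta", "echo", "foxtrot",
         "golf", "hotel", "india", "juliet", "kilo", "lima"]

def green_list(key, prev_token):
    """A keyed hash of the previous token selects half the vocabulary."""
    digest = hashlib.sha256(f"{key}:{prev_token}".encode()).digest()
    rng = random.Random(digest)  # deterministic for a given (key, token)
    return set(rng.sample(VOCAB, len(VOCAB) // 2))

def generate(key, length, green_bias=0.9, seed=0):
    """Toy generator that picks a green-list token with probability green_bias,
    embedding an invisible statistical watermark."""
    rng = random.Random(seed)
    tokens, prev = [], "<start>"
    for _ in range(length):
        greens = green_list(key, prev)
        pool = sorted(greens) if rng.random() < green_bias else sorted(set(VOCAB) - greens)
        prev = rng.choice(pool)
        tokens.append(prev)
    return tokens

def green_fraction(key, tokens):
    """Detector: with the key, count how often each token lands on the green
    list implied by its predecessor. About 0.5 means no watermark."""
    hits, prev = 0, "<start>"
    for tok in tokens:
        hits += tok in green_list(key, prev)
        prev = tok
    return hits / len(tokens)

text = generate(key="secret", length=200)
print(green_fraction("secret", text))     # well above 0.5: watermark detected
print(green_fraction("wrong-key", text))  # near 0.5: looks like ordinary text
```

The workaround concern is visible even in the toy: paraphrasing or reordering the tokens breaks the predecessor-based hashes, washing the signal out.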

Turnitin has announced that it has an AI-writing detection tool in development, which it has trained on “academic writing sourced from a comprehensive database, as opposed to solely publicly available content.” But some academics are wary of commercial products for AI detection.

“I don’t think [AI-writing detectors] should be behind a paywall,” Mills said.

Higher Ed Adapts (Again)

“Think about what we want to nurture,” said Joseph Helble, president of Lehigh University. “In the pre-internet and pre-generative-AI ages, it used to be about mastery of content. Now, students need to understand content, but it’s much more about mastery of the interpretation and utilization of the content.”

ChatGPT calls on higher ed to rethink how best to educate students, Helble said. He recounted the story of an engineering professor he knew years ago who assessed students by administering oral exams. The exams scaled with a student in real time, so every student was able to demonstrate something. Also, the professor adapted the questions while administering the test, which probed the limits of students’ knowledge and comprehension. At the time, Helble considered the approach “radical” and concedes that, even now, it would be challenging for professors to implement. “But the idea that [a student] is going to demonstrate ability on multiple dimensions by going off and writing a 30-page term paper—that part we have to completely rethink.”

Helble is not the only academic who floated the idea of replacing some writing assignments with oral exams. Artificial intelligence, it turns out, may help overcome potential time constraints in administering oral exams.

“The education system should adapt [to ChatGPT’s presence] by focusing more on understanding and creativity and using more expensive oral-based evaluations, like oral exams, or exams without permission to use technology,” Bengio said, adding that oral exams need not be done often. “When we get to that point where we can’t detect if a text is written by a machine or not, those machines should also be good enough to run the [oral] exams themselves, at least for the more frequent evaluations within a school term.”

Teaching and Learning
Teaching With Technology
Technology
Editorial Tags: 
Image Source: 
traffic_analyzer/Getty Images
Image Caption: 
“For a human, burstiness looks like it goes all over the place. It has sudden spikes and sudden bursts,” says Edward Tian, a Princeton student who developed an AI-writing detection app. For a machine-written essay, the graph looks “boring.”
Is this diversity newsletter?: 
Newsletter Order: 
0
Disable left side advertisement?: 
Is this Career Advice newsletter?: 
Magazine treatment: 
Trending: 
Trending text: 
Working to Detect ChatGPT
Trending order: 
1
Display Promo Box: 
Live Updates: 
liveupdates0
Most Popular: 
3
In-Article Advertisement High: 
6
In-Article related stories: 
9
In-Article Advertisement Low: 
12

‘Free’ Online Program at Central State Shutters Amid Controversy

Central State University in Ohio has stopped enrolling new students in Career Plus, a controversial free online college program for union members, The Dayton Daily News reported. The public university will also discontinue offerings for current students after the spring 2023 semester. The program accounted for 3,589 of the university’s 3,633 online students last fall, which was nearly double the institution’s in-person enrollment of 1,801 students, according to the newspaper’s investigation.

Career Plus works together with unions, including the AFL-CIO and the American Federation of State, County, and Municipal Employees, to offer a free college benefit to union employees and children of union employees. Students in the program could earn an online associate degree at Eastern Gateway Community College and complete a bachelor’s degree at Central State.

Last July, the U.S. Education Department said that Eastern Gateway’s online program violated federal financial aid rules and that the institution was no longer permitted to disburse Pell Grants to new students. At the time, the Education Department accused Eastern Gateway of charging students it determined to have less financial need less than their Pell-eligible peers. Eastern Gateway is “currently working with the Department of Education to determine if there is a viable way to restructure the program and meet federal financial compliance,” according to the Eastern Gateway website.

Is this diversity newsletter?: 
Hide by line?: 
Disable left side advertisement?: 
Is this Career Advice newsletter?: 
Trending: 
Live Updates: 
liveupdates0

Academics work to detect ChatGPT and other AI writing

[Image: An artist's depiction of a brain from above. On the left side, straight black-and-white lines suggest technology; on the right, curved lines and colors suggest human creativity.]

When humans write, they leave subtle signatures that hint at the prose’s fleshy, brainy origins. Their word and phrase choices are more varied than those selected by machines that write. Human writers also draw from short- and long-term memories that recall a range of lived experiences and inform personal writing styles. And unlike machines, people are susceptible to inserting minor typos, such as a misplaced comma or a misspelled word. Such attributes betray the text’s humanity.

For these reasons, AI-writing detection tools are often designed to “look” for human signatures hiding in prose. But signature hunting presents a conundrum for sleuths attempting to distinguish between human- and machine-written prose.

“If I’m a very intelligent AI and I want to bypass your detection, I could insert typos into my writing on purpose,” said Diyi Yang, assistant professor of computer science at Stanford University.

In this cat-and-mouse game, some computer scientists are working to make AI writers more humanlike, while others are working to improve detection tools. Academic fields make progress in this way. But some on the global artificial intelligence stage say this game’s outcome is a foregone conclusion.

“In the long run, it is almost sure that we will have AI systems that will produce text that is almost indistinguishable from human-written text,” Yoshua Bengio, the “godfather of AI” and recipient of the Turing Award, often referred to as the Nobel of computer science, told Inside Higher Ed in an email exchange. Bengio is a professor of computer science at the University of Montreal.

Nonetheless, the scientific community and higher ed have not abandoned AI-writing detection efforts—and Bengio views those efforts as worthwhile. Some are motivated to ferret out dishonesty in academic pursuits. Others seek to protect public discourse from malicious uses of text generators that could undermine democracies. (Educational technology company CEOs may have dollar signs in their eyes.) Still others are driven by philosophical questions concerning what makes prose human. Whatever the motivation, all must contend with one fact:

“It’s really hard to detect machine- or AI-generated text, especially with ChatGPT,” Yang said.

The ‘Burstiness’ of Human Prose

During the recent holiday break, Edward Tian, a senior at Princeton University, headed to a local coffee shop. There, he developed GPTZero, an app that seeks to detect whether a piece of writing was produced by a human or by ChatGPT—an AI-powered chatbot that interacts with users conversationally, including by answering questions, admitting its mistakes, challenging falsehoods and rejecting inappropriate requests. Tian’s effort took only a few days but was built on years of research.

His app relies on two writing attributes: “perplexity” and “burstiness.” Perplexity measures the degree to which ChatGPT is perplexed by the prose; a high perplexity score suggests that ChatGPT is unlikely to have produced the words. Burstiness is a big-picture indicator that tracks how perplexity varies over the course of a document.

“For a human, burstiness looks like it goes all over the place. It has sudden spikes and sudden bursts,” Tian said. “Versus for a computer or machine essay, that graph will look pretty boring, pretty constant over time.”

Tian and his professors hypothesize that the burstiness of human-written prose may be a consequence of human creativity and short-term memories. That is, humans have sudden bursts of creativity, sometimes followed by lulls. Meanwhile, machines with access to the internet’s information are somewhat “all-knowing” or “kind of constant,” Tian said.
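Tian has not published GPTZero’s internals in detail, but the two signals he describes can be illustrated with a toy model. The sketch below scores sentences with a simple smoothed unigram language model (an assumption for illustration only; GPTZero uses a large neural language model) and treats burstiness as the spread of per-sentence perplexity: a text whose sentences swing between predictable and surprising shows the “spikes” Tian associates with human writing.

```python
import math
from collections import Counter

def perplexity(sentence, counts, total, vocab_size):
    """Perplexity of a sentence under a toy add-one-smoothed unigram model.

    Implements PP = exp(-(1/N) * sum_i log p(w_i)); a real detector would
    compute the same quantity with a large language model's probabilities.
    """
    words = sentence.lower().split()
    log_prob = 0.0
    for w in words:
        p = (counts[w] + 1) / (total + vocab_size)  # add-one smoothing
        log_prob += math.log(p)
    return math.exp(-log_prob / len(words))

def burstiness(sentences, counts, total, vocab_size):
    """Return per-sentence perplexities and their variance.

    A flat, low-variance perplexity profile is the 'boring' graph Tian
    describes for machine text; spikes suggest human-style bursts.
    """
    scores = [perplexity(s, counts, total, vocab_size) for s in sentences]
    mean = sum(scores) / len(scores)
    var = sum((x - mean) ** 2 for x in scores) / len(scores)
    return scores, var

# Toy "training" corpus standing in for the model's knowledge of language.
corpus = "the cat sat on the mat the dog sat on the rug".split()
counts = Counter(corpus)
total, vocab = len(corpus), len(set(corpus))

flat = ["the cat sat on the mat", "the dog sat on the rug"]
spiky = ["the cat sat on the mat", "quantum marmalade perplexes dirigibles"]

_, var_flat = burstiness(flat, counts, total, vocab)
_, var_spiky = burstiness(spiky, counts, total, vocab)
print(var_spiky > var_flat)  # the surprising sentence creates a perplexity spike
```

The out-of-model sentence is maximally perplexing to the toy model, so the second text’s perplexity variance jumps while the first stays flat.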

Upon releasing GPTZero to the public on Jan. 2, Tian expected a few dozen people to test it. But the app went viral. Since its release, hundreds of thousands of people from most U.S. states and more than 30 countries have used the app.

“It’s been absolutely crazy,” Tian said, adding that several venture capitalists have reached out to discuss his app. “Generative AI and ChatGPT technology are brilliantly innovative. At the same time, it’s like opening Pandora’s box … We have to build in safeguards so that these technologies are adopted responsibly.”

Tian does not want teachers to use his app as an academic honesty enforcement tool. Rather, he is driven by a desire to understand what makes human prose unique.

“There is something implicitly beautiful in human writing,” said Tian, a fan of writers like John McPhee and Annie Dillard. “Computers are not coming up with anything original. They’re basically ingesting gigantic portions of the internet and regurgitating patterns.”

Detectors Without Penalties

Much like weather-forecasting tools, existing AI-writing detection tools deliver verdicts in probabilities. As such, even a high probability score does not prove that a given text was machine-written.

“The big concern is that an instructor would use the detector and then traumatize the student by accusing them, and it turns out to be a false positive,” Anna Mills, an English instructor at the College of Marin, said of the emergent technology.

But professors may introduce AI-writing detection tools to their students for reasons other than honor code enforcement. For example, Nestor Pereira, vice provost of academic and learning technologies at Miami Dade College, sees AI-writing detection tools as “a springboard for conversations with students.” That is, students who are tempted to use AI writing tools to misrepresent or replace their writing may reconsider if they know a detection tool is in use, according to Pereira.

For that reason, Miami Dade uses a commercial software platform—one that provides students with line-by-line feedback on their writing and moderates student discussions—that has recently embedded AI-writing detection. Pereira has endorsed the product in a press release from the company, though he affirmed that neither he nor his institution received payment or gifts for the endorsement. He did, however, acknowledge that his endorsement has limits.

“We’re definitely worried about false positives,” Pereira told Inside Higher Ed. “I’m also worried about false negatives.”

Beyond discussions of academic integrity, faculty members are talking with students about the role of AI-writing detection tools in society. Some view such conversations as a necessity, especially since AI writing tools are expected to be widely available in many students’ postcollege jobs.

“These tools are not going to be perfect, but … if we’re not using them for gotcha purposes, they don’t have to be perfect,” Mills said. “We can use them as a tool for learning.” Professors can use the new technology to encourage students to engage in a range of productive ChatGPT activities, including thinking, questioning, debating, identifying shortcomings and experimenting.

Also, on a societal level, detection tools may aid efforts to protect public discourse from malicious uses of text generators, according to Mills. For example, social media platforms, which already use algorithms to make decisions about which content to boost, could use the tools to guard against bad actors. In such cases, probabilities may work well.

“We have to fight to preserve that humanity of communication,” Mills said.

A Long-Term Challenge

In an earlier era, a birth mother who anonymously placed a child with adoptive parents through a reputable adoption agency may have felt confident that her identity would never be revealed. All that changed when quick, accessible DNA testing from companies like 23andMe empowered adoptees to uncover their genetic legacy.

Though today’s AI-writing detection tools are imperfect at best, any writer hoping to pass an AI writer’s text off as their own could be outed in the future, when detection tools may improve.

“We need to get used to the idea that, if you use a text generator, you don’t get to keep that a secret,” Mills said. “People need to know when it’s this mechanical process that draws on all these other sources and incorporates bias that’s actually putting the words together that shaped the thinking.”

Tian’s GPTZero is not the first app for detecting AI writing, nor is it likely to be the last.

OpenAI—ChatGPT’s developer—considers detection efforts a “long-term challenge.” Its research on GPT-2-generated text indicates that its detection tool works approximately 95 percent of the time, which is “not high enough accuracy for standalone detection and needs to be paired with metadata-based approaches, human judgment, and public education to be more effective,” according to OpenAI. Detection accuracy depends heavily on the sampling methods used in training and testing, and on whether training covered a range of sampling techniques, according to the study.
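The warning against standalone use follows from base rates, a point worth making explicit. If most submitted essays are honestly human-written, even an accurate detector flags mostly innocent writers. A back-of-envelope Bayes calculation shows this; the 95 percent figure comes from OpenAI’s study, while the symmetric error rate and the one-in-twenty base rate are illustrative assumptions, not reported numbers:

```python
def flagged_is_machine(sensitivity, false_positive_rate, base_rate):
    """P(text is machine-written | detector flags it), via Bayes' rule."""
    p_flag = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
    return sensitivity * base_rate / p_flag

# Illustrative numbers: a detector right 95% of the time in both directions,
# applied to a class where 1 in 20 essays is machine-written.
ppv = flagged_is_machine(sensitivity=0.95, false_positive_rate=0.05, base_rate=0.05)
print(round(ppv, 2))  # → 0.5: half of all flagged essays are actually human-written
```

Under these assumptions a flag is a coin flip, which is why OpenAI recommends pairing detection with metadata, human judgment and context rather than treating it as a verdict.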

After-the-fact detection is only one approach to the problem of distinguishing between human- and computer-written text. OpenAI is attempting to “watermark” ChatGPT text. Such digital signatures could embed an “unnoticeable secret signal” indicating that the text was generated by ChatGPT. Such a signal would be discoverable only by those with the “key” to a cryptographic function—a mathematical technique for secure communication. The work is forthcoming, but some researchers and industry experts have already expressed doubt about the watermarking’s potential, citing concerns that workarounds may be trivial.
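OpenAI has not published its scheme, so the mechanics below are a hypothetical sketch of how a keyed watermark can work in general, not a description of OpenAI’s design. One published approach (due to academic researchers, not OpenAI) uses the secret key to pseudorandomly split the vocabulary into “green” and “red” words at each step; the generator quietly prefers green words, and only a key holder can recompute the split and count the bias:

```python
import hmac
import hashlib

KEY = b"secret-watermark-key"  # hypothetical key, held only by the generator's operator

def is_green(prev_word, word, key=KEY):
    """Keyed pseudorandom partition of the vocabulary, conditioned on context.

    HMAC-SHA256 makes the green/red split look random to anyone without
    the key, so the watermark is the "unnoticeable secret signal."
    """
    digest = hmac.new(key, f"{prev_word}|{word}".encode(), hashlib.sha256).digest()
    return digest[0] % 2 == 0  # roughly half of all (context, word) pairs are green

def green_fraction(text, key=KEY):
    """Detection side: count the fraction of green transitions in a text.

    Unwatermarked text should score near 0.5; a generator that biases its
    word choices toward green pushes the fraction well above that, and the
    excess is statistical evidence of watermarking.
    """
    words = text.split()
    hits = sum(is_green(a, b, key) for a, b in zip(words, words[1:]))
    return hits / max(len(words) - 1, 1)
```

Without the key, an observer cannot reproduce `is_green`, so the bias is invisible in ordinary reading; with it, detection reduces to a simple count. The doubts quoted above apply here too: paraphrasing or light editing scrambles the word pairs and can wash the signal out.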

Turnitin has announced that it has an AI-writing detection tool in development, which it has trained on “academic writing sourced from a comprehensive database, as opposed to solely publicly available content.” But some academics are wary of commercial products for AI detection.

“I don’t think [AI-writing detectors] should be behind a paywall,” Mills said.

Higher Ed Adapts (Again)

“Think about what we want to nurture,” said Joseph Helble, president of Lehigh University. “In the pre-internet and pre-generative-AI ages, it used to be about mastery of content. Now, students need to understand content, but it’s much more about mastery of the interpretation and utilization of the content.”

ChatGPT calls on higher ed to rethink how best to educate students, Helble said. He recounted the story of an engineering professor he knew years ago who assessed students with oral exams that adapted in real time: the professor adjusted his questions as he went, probing the limits of each student’s knowledge and comprehension so that every student could demonstrate something. At the time, Helble considered the approach “radical,” and he concedes that, even now, it would be challenging for professors to implement. “But the idea that [a student] is going to demonstrate ability on multiple dimensions by going off and writing a 30-page term paper—that part we have to completely rethink.”

Helble is not the only academic who floated the idea of replacing some writing assignments with oral exams. Artificial intelligence, it turns out, may help overcome potential time constraints in administering oral exams.

“The education system should adapt [to ChatGPT’s presence] by focusing more on understanding and creativity and using more expensive oral-based evaluations, like oral exams, or exams without permission to use technology,” Bengio said, adding that oral exams need not be done often. “When we get to that point where we can’t detect if a text is written by a machine or not, those machines should also be good enough to run the [oral] exams themselves, at least for the more frequent evaluations within a school term.”


‘Free’ Online Program at Central State Shutters Amid Controversy

Central State University in Ohio has stopped enrolling new students in Career Plus, a controversial free online college program for union members, The Dayton Daily News reported. The public university will also discontinue offerings for current students after the spring 2023 semester. The program accounted for 3,589 of the university’s 3,633 online students last fall, which was nearly double the institution’s in-person enrollment of 1,801 students, according to the newspaper’s investigation.

Career Plus partners with unions, including the AFL-CIO and the American Federation of State, County, and Municipal Employees, to offer a free college benefit to union members and their children. Students in the program could earn an online associate degree at Eastern Gateway Community College and complete a bachelor’s degree at Central State.

Last July, the U.S. Education Department said that Eastern Gateway’s online program violated federal financial aid rules and that the institution was no longer permitted to disburse Pell Grants to new students. At the time, the department accused Eastern Gateway of charging students whom it determined to have less financial need lower amounts than their Pell-eligible peers. Eastern Gateway is “currently working with the Department of Education to determine if there is a viable way to restructure the program and meet federal financial compliance,” according to the college’s website.
