
Josie Del Castillo

As an art student at the University of Texas Rio Grande Valley, Josie Del Castillo came to appreciate the rich pigment of oils and how easily they spread across the canvas. “I didn’t get it at first, but it made me want to practice,” she says. “The techniques that I’ve developed now—I always go back to oil painting, even though I’ve tried other mediums.” A first-generation, Mexican-American artist raised in the border town of Brownsville, Texas, Del Castillo produces self-portraits and portraits of her friends and family, her canvases depicting the landscape of her hometown, with its palm trees, resacas, and sunsets framed by tall clouds.


  • Que Te Valga, 2021, oil on canvas.
  • El amor de una madre, 2022, acrylic, gouache, and ink on paper.
  • Artist Block, 2020, oil on canvas.

Growing up, Del Castillo and her family crossed the border often to visit family in Mexico. “I never really saw a difference between the two countries until I was older,” she says. “We always saw it as ‘going to the other side.’” Today she celebrates the similarities and differences between two cultures in her work, often by means of natural symbolism. The aloe plant, for example, “has healing powers in Mexican-American culture, so I use that to symbolize healing ourselves.” Del Castillo’s portraits often explore her own resiliency and that of her Brownsville community. “I’m intrigued by people’s personalities, how they present themselves, and what they do for the community,” she says.


Howard University Selects a New President, Ben Vinson III

Ben Vinson III, the provost of Case Western Reserve, will lead an institution that has surged, with record research grants and high-profile academic hires.

Ben Vinson III is a historian whose focus lies outside the United States.

LLMs, Embeddings, Context Injection, and Next Generation OER

By: david

If you can remember the web of 30 years ago(!), you can remember a time when all it took to make a website was a little knowledge of HTML and a tilde account on the university VAXcluster (e.g., /~wiley6/). While it’s still possible to make a simple website today with just HTML, making modern websites requires a dizzying array of technical skills, including HTML, CSS, JavaScript frameworks, databases and SQL, cloud devops, and others. While these websites require far more technical expertise to build, they are also far more feature-rich and functional than their ancestors of 30 years ago. (Imagine trying to code each of the millions of pages on Wikipedia.org or Amazon.com completely by hand in Notepad!)

This is what large language models (LLMs) like ChatGPT are doing to OER. Next generation OER will not be open textbooks that were created faster or more efficiently because LLMs wrote first drafts in minutes. That’s current generation OER simply made more efficiently. The next generation of OER will be the embeddings (from a 5R perspective, these are revised versions of an OER) that are part of the process of feeding domain knowledge into LLMs so that they can answer questions correctly and give you accurate explanations and examples. Creating embeddings and injecting this additional context into an LLM just-in-time as part of a prompt engineering strategy requires significantly more technical skill than typing words into Pressbooks does. But it will also give us OER that are far more feature-rich and functional than their open ancestors of 25 years ago.

Here’s a video tutorial showing how to integrate a specific set of domain knowledge into GPT-3 so that it can hold a dialog with a user based on that specific domain knowledge. This domain knowledge could come from chapters in an open textbook, but in the example in the video it’s coming from software documentation. Granted, this video is almost two months old, which feels more like two years old at the rate AI is changing right now. So this isn’t the exact way we’ll end up doing it, but the video will give you the idea.

Rather than fine tuning an LLM, where the entire model training process has to be repeated, embeddings allow us to find just the right little pieces of OER to provide to the LLM as additional context when we submit a prompt. This is orders of magnitude faster and less expensive than retraining the entire model, and still gives the model access to the domain specific information we want it to have during our conversation / tutoring session / etc. And by “orders of magnitude faster and less expensive” I mean this is a legitimate option for a normal person with some technical skill, unlike retraining a model which can easily cost over $1M in compute alone.
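To make that concrete, here is a minimal sketch of the embed-then-inject pattern described above. It is written against the pre-1.0 openai Python client (openai.Embedding.create and openai.ChatCompletion.create); the chunked textbook sentences, the model names, and the choice of cosine similarity are illustrative assumptions rather than a recommended implementation.

```python
# A minimal sketch of embedding-based retrieval plus context injection.
# Assumes the pre-1.0 openai Python client and that OPENAI_API_KEY is set;
# the OER chunks below are placeholders.
import numpy as np
import openai

# 1. Embed small chunks of OER content once, ahead of time.
chunks = [
    "Supply is the quantity of a good producers are willing to sell at each price.",
    "Demand is the quantity of a good consumers are willing to buy at each price.",
    "Price elasticity measures how quantity demanded responds to a price change.",
]

def embed(text):
    response = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(response["data"][0]["embedding"])

chunk_vectors = [embed(chunk) for chunk in chunks]

# 2. At question time, find the most relevant chunks by cosine similarity...
def top_chunks(question, k=2):
    q = embed(question)
    scores = [float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
              for v in chunk_vectors]
    best = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in best]

# 3. ...and inject them into the prompt so the model answers from the OER.
def answer(question):
    context = "\n".join(top_chunks(question))
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Answer using only this course material:\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return response["choices"][0]["message"]["content"]

print(answer("What does price elasticity of demand tell us?"))
```

In a real course the chunk store would be an open textbook broken into passages and the retrieval step would use a vector database, but the shape of the workflow (embed, retrieve, inject) is the same.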

Every day feels like a year for those of us trying to keep up with what’s happening with AI right now. It would be the understatement of the century to say lots more will happen in this space – we’re literally just scratching the surface. Our collective lack of imagination is the only thing holding us back. What an incredible time to be a learner! What an incredible time to be a teacher! What an incredible time to be working and researching in edtech!

Viki Eagle

Viki Eagle began taking photographs of her Native American friends when she was an undergraduate at the University of Denver. “Being one of the very few Native people on campus, I wanted to tell our story from our perspective,” she says. A member of the Sičháŋǧu Lakȟóta tribe, Eagle (who is also half-Japanese) has dealt with racism firsthand. “When I was growing up in the early ’90s, people really believed that Native Americans didn’t exist,” she says, “or if we did exist, we were still living in teepees or wearing buckskin. Our history erases us.” After completing her series of images of her friends, Real Life Indian, Eagle decided to go even further toward dispelling Native stereotypes by photographing heavy metal bands on reservations. This series, Re(Mapping) a Rez Metal Sonic ReZistance, is now her year-long focus as the Denver Art Museum’s 2023 Native Arts Artist-in-Residence.


  • (Sample photos by Viki Eagle)

Eagle’s interest in heavy metal stems from its ability to push back—“in the most extreme way”—against whitewashed ideas of what Native music is. Her photographs span several genres—documentary, portrait, landscape, and still-life. Eagle would like for her images of musicians and their audiences to demonstrate heavy metal’s popularity on reservations, and in the process, open viewers’ minds to the diversity of contemporary Native life. “I hope that people take away the creativity,” she says. “As contemporary Native people, expressing ourselves, our message and our story is still within that music.”


OER / ZTC Advocates Have an AI Problem

By: david

At some point over the last decade, open educational resources (OER) advocacy in US higher education became zero textbook costs (ZTC) advocacy. The two are intertwined now in a manner that would be difficult to disentangle even if you wanted to try. There are plenty of practical reasons why this might have happened. For example, politicians understand costs much better than they understand learning, which makes policy work and other political advocacy around eliminating textbook costs far easier than advocating for ways that “open” (whatever that word means) might be leveraged to improve student outcomes. But OER / ZTC advocates have had a fundamental problem simmering for many years now, and the recent advent of large language models (LLMs) like GPT-4 will quickly bring that simmer to a boil.

The idea of “zero textbook costs” makes a kind of sense when you believe that the ideal instructional materials are books. Books contain static, unchanging words and pictures on a page. They’re perfect for printing or exporting as a PDF. And when you’re talking about a PDF – a digital file of which you can make seemingly infinite, perfect copies for fractions of a penny – I understand why one might expect the price of these files to eventually approach their true cost to deliver – free.

The elephant in the room is that the interactive capabilities of computers and the internet make it possible to create dramatically more effective educational materials than static books (whether print or PDF). The ability to provide learners with practice and immediate, corrective feedback is very powerful. CMU researchers have demonstrated that the learning effect of this kind of online interactive practice is approximately six times larger than the learning effects of either reading or watching video. However, these kinds of interactive learning materials have a cost associated with delivering them, which means that whether you get them from CMU, OpenStax, Lumen Learning, or elsewhere, they typically have a price that is greater than zero. Consequently, because there is a price associated with using “courseware” (the term many people use for this kind of interactive educational resource), OER / ZTC supporters end up actively advocating against courseware and for static educational materials that will result in inferior learning outcomes for students – all in the name of saving them money. This is why I refer to this line of advocacy as “free no matter the cost.”

OER / ZTC advocates have largely succeeded in turning a blind eye to the courseware elephant in the room. But conversational LLMs like ChatGPT are a towering pyramid of elephants in the room. They take us all the way back to Bloom’s two sigma problem, in which Bloom and colleagues demonstrated that the average student – the average student – who is tutored full-time outperforms 98% of students who learn in a traditional classroom setting. Tutoring is an incredibly powerful teaching method, and LLMs have finally made this capability broadly available at a reasonable cost. There may be ineffective initial attempts, but eventually combining courseware with LLM-powered tutoring will dramatically improve outcomes for students who get the chance to use these tools.

But alas, some students won’t get that opportunity. As with courseware, LLMs have real costs associated with their hosting and delivery, meaning that learning materials that incorporate LLMs will have a price greater than zero. And this means that students whose faculty, department, institution, or state have guaranteed that students will pay zero dollars for their course materials will miss out on these next-generation learning materials.

Conversational LLMs like ChatGPT will confront OER / ZTC advocates with an even starker choice than courseware has in the past. Will they continue to advocate for “free no matter the cost,” even as it becomes more obvious that they are essentially advocating for decreased student learning? Maybe student complaints about OER / ZTC policies that, paradoxically, deny them access to these kinds of highly effective learning materials will finally move the learning materials discussion beyond “zero cost” and back to being focused on how we best support student learning. We can hope.

Is there a role for OER in this emerging learning materials landscape? I believe there will be. But that’s a topic for another time.

Should Social Media Companies Use Artificial Intelligence to Automate Content Moderation on their Platforms and, if so, Under What Conditions?

By: admin

This article received an honourable mention in the graduate category of the 2023 National Oxford Uehiro Prize in Practical Ethics

Written by University of Oxford student Trenton Andrew Sewell 

Social Media Companies (SMCs) should use artificial intelligence (‘AI’) to automate content moderation (‘CM’) presuming they meet two kinds of conditions. Firstly, ‘End Conditions’ (‘ECs’) which restrict what content is moderated. Secondly, ‘Means Conditions’ (‘MCs’) which restrict how moderation occurs.

This essay focuses on MCs. Assuming some form of moderation is permissible, I will discuss how/whether SMCs should use AI to moderate. To this end, I outline how CM AI should respect users’ ‘moral agency’ (‘MA’) through transparency, clarity, and providing an option to appeal the AI’s judgment. I then address whether AI failing to respect MA proscribes its use. It does not. SMCs are permitted[1] to use AI, despite procedural failures, to discharge substantive obligations to users and owners.

This essay will demonstrate that:
1) Respect for users’ MA entails that SMCs should use AI in a:
a. Transparent, reason-giving way,
b. Based on clear rules and,
c. With an option for appeal.
2) But failing to meet these standards does not proscribe using AI. It is a necessary means of discharging important obligations.

Ideal CM AI
People have rights we should respect. This claim is the basis of this essay. However, rights include substantive rights, such as the right to expression. Here, I am presuming that any moderated content is a legitimate target. Hence, moderating this content simpliciter does not violate users’ rights because SMCs could permissibly moderate the post/user.

The question that remains is what ‘procedural-rights’ users possess. How should SMCs respect users whilst moderating? Here, I address the procedural-rights users have because of their ‘moral agency’ (‘MA’).

MA is the capacity of an agent to understand moral reasons. Respecting the dignity of a person involves treating them as a moral-agent[2]. This requires engagement in moral reasoning[3]. Moral reasoning is the process of giving reasons concerning the justification of an act. Engagement in moral reasoning acknowledges one’s MA and dignity – a basic Kantian requirement[4].

Applying MA to Moderation
Moderation is akin to punishment. H.L.A Hart defined punishment “in terms of five elements:
1. …consequences…considered unpleasant.
2. …for an offence against…rules.
3. …of an…offender for his offence.
4. …intentionally administered by [another] and
5. …administered by an authority constituted by the…system.”[5]

Moderation removes posts and restricts access to platform features, which is unpleasant. It occurs to ‘offenders’ for breaching the community guidelines. It is intentionally administered by SMCs, which have created the authorities that impose moderation. It satisfies Hart’s five elements.

If moderation is punishment, then respecting MA in the process of moderation will be similar to respecting MA in the process of criminal punishment. That involves giving reasons why the act/offence was wrongful, and why the response to the act/offence was just.[6]

Hence, SMCs respect users’ MA whilst moderating if they:
1) Provide moral reasons to users why they ought not post certain content and;
2) Provide moral reasons to users why they are moderating[7]

SMCs should give users reasons why the guidelines were violated, and why moderation was the right response. CM AI must be, in other words, transparent[8].

Respect for MA requires more than granting reasons. It requires the option of appealing an AI’s judgement to a human moderator.

“Penalizing someone for violating the rules…reasserts our shared values…calling something hate speech…is a….performative assertion that something should be treated as hate speech and…undoubtedly, it will be disagreed with” [9].

Users should be free to question whether such an assertion is an accurate representation of the guidelines. A moral-agent is also a giver, not merely a receiver, of reasons. To engage in the moral reasoning which respects one’s MA, SMCs should give users the option to justify their post.

Furthermore, to respect users, AI should use rules which are prospectively clear. Respecting people as moral-agents is to regard them as able to follow rules they are aware of[10]. Part of what legitimizes punishment is that the user could have complied with the rule.

To respect users as moral-agents, AI should facilitate users’ compliance with rules. CM AI should be:
i) Based on rules;
ii) Which are published;
iii) Prospective,
iv) Intelligible;
v) Free from contradiction;
vi) Possible to follow;
vii) Not constantly changing and;
viii) With congruence between the rule and official actions.[11]

If CM AI satisfies these eight principles, then it respects users by recognizing their MA and furthermore, providing rational freedom.

Moral-agents should not face ‘bolts-from-the-blue’. Their freedom should not be dependent on an AI’s whims. The guidelines that the AI follows should allow users to know whether they are in, and how to avoid, non-compliance.

This prospective clarity enhances the morality of CM AI by providing ‘freedom from domination’:

“[freedom] is not…the availability of…choices. It is conceivable that a free man might have fewer options…than a slave…[But] we think of slavery as the…embodiment of unfreedom…because…the conditions under which he enjoys…options…are…dependant upon the will of the master.”[12]

Clear rules liberate one from dependence/domination. A user’s freedom is not dependent on the SMC but rather on the rules which equally constrain moderators.

But why accept that ‘punishment’ by SMCs should respect moral agency? State punishment of crimes might need to – but why content moderation?

Because all should respect each other as moral agents. To do otherwise is to disrespect our dignity. Insofar as moral agency is only consistent with certain procedures of punishment by the state, I see no reason why (as an ideal matter) it would impose fundamentally different requirements on punishment by family, friends, strangers, or crucially here – SMCs.

In summary:
Moderation is punishment. To respect MA whilst punishing, SMCs must use transparent AI which gives users reasons to justify SMCs’ response. Furthermore, respecting MA requires that AI decisions are appealable to a human moderator. This provides the opportunity for moral discourse which further respects MA. Lastly, respecting MA requires that the AI’s decisions allow the user to prospectively avoid non-compliance.

Unideal AI?
Whilst the prior section explored how a CM AI can respect users’ MA, it neglected two questions. Does CM AI currently respect MA? If it does not, should SMCs continue to use AI which violates procedural rights?

The answer to the first question is no. “A common critique of automated decision making is the…lack of transparency…[It is]…difficult to decipher…the specific criteria by which…decisions were made”[13]. Furthermore, systems of appeal, such as Facebook’s “Supreme Court”, are available to very few users[14]. Finally, users report not knowing when they will be moderated, leading to confusion and anger[15].

The answer to the second question – should SMCs use unideal AI – is complicated.

One could answer: if MA should be respected, then SMCs are not at liberty to use CM AI unless it respects users’ MA. In short, if CM AI is not transparent, appealable, and prospectively clear, it should not be used.

This view is flawed because SMCs do not only have process obligations. They have substantive obligations to their users and owners.

For their users, SMCs could be obligated to prevent the spread of toxic content, terrorist propaganda, or child exploitation. To do otherwise is to become complicit. Christopher Bennett explained this complicity, and its corollary obligations, as resulting from ‘normative control’: “control over whether the…act is done with permission or not”[16]. The wrong done by “a car owner who permits another to engage in reckless… [driving]…[is]…that the owner could and should have…withdrawn his consent”[17]. SMCs can – through moderation – determine whether an act is ‘permissible or impermissible’. “[W]herever [SMCs] [do] not mark some act as impermissible, it regards it as permissible…It can be complicit in allowing…acts to be permissible where it should have made them impermissible…complicity…comes about through a failure to [moderate]”[18]. SMCs have an obligation to their users to moderate content (the scope of which is a matter for later investigation).

Furthermore, SMCs have shareholders/investors. “A corporate executive is an employee of the owners of the business. He has direct responsibility to his employers. That responsibility is to conduct the business in accordance with their desires, which generally will be to make…money”[19]. When an agent is managing money belonging to another, we traditionally accept she is obliged to act with regard for the principal’s interests. Those same obligations bind all SMCs barring those which are owner operated[20].

These substantive obligations answer whether SMCs should use imperfect AI, because using CM AI is crucial for discharging these duties. Even if AI is imperfect, SMCs are obliged to use it for CM; CM AI is needed to meet SMCs’ obligations to different stakeholders.

Given SMCs’ general size, CM requires AI. Yann LeCun – Facebook’s chief AI scientist – has stated that: “Without AI there would not be any possibility of…speech filtering, detecting harassment, child exploitation, or terrorist propaganda”[21]. To adequately meet SMCs’ substantive obligations to not be complicit in certain harmful conduct, SMCs need to use AI.

A potential response is that it is “size…that makes automation seem necessary… [and]…size can be changed”[22]. Specifically, “if moderation is…overwhelming at…scale, it should be understood as a limiting factor on…growth”[23]. SMCs should accept making less profit to reduce the need for CM AI.

However, this neglects their obligations to owners. Even if SMCs could make moderation respect users’ MA by setting growth aside, they would breach their fiduciary obligations to owners. Furthermore, SMCs are under pressure from the public to moderate. Not moderating could harm their brand, their ability to recruit talent, and so on. Moderation is likely in owners’ interests.

Not using CM AI would result in SMCs failing their substantive obligations to either their users, their owners, or more likely both. Yet, one could say that if a ‘right’ to be recognized as a moral-agent exists, SMCs should not violate it. Procedural-rights are side-constraints which require not using imperfect AI. What this neglects is that X being a right does not mean it is of equal importance to right Y. If all obligations cannot be simultaneously met, then choices must be made about which obligations should go unfulfilled.

I would contend that procedural-rights in CM are some of SMCs’ least important obligations. Users who have posted content eligible for moderation are the reason a trade-off of rights is necessary. If they had not done wrong, then the SMC would not need to decide whether to respect their procedural-rights or the substantive rights of its users or owners. If a set amount of cost must be imposed, then it seems appropriate to apply that cost to the individual most responsible – the user being moderated[24]. Since not using CM AI would result in SMCs failing their substantive obligations, and these obligations are more important, procedural obligations cease to really matter. Human moderation is not feasible, and imperfect CM AI is preferable to no moderation at all. SMCs should use AI because it discharges their more important duties. Nevertheless, insofar as SMCs can improve their CM AI to bring it closer to the ideal, they are obliged to do so. They should work towards the ideal but not let it become the enemy of the good or the necessary.

Conclusion
Social Media Companies should use artificial intelligence to automate content moderation. The use of this technology is needed to meet SMCs’ substantive obligations to their users and owners. That means that the conditions under which it should be used are broad. Even if AI moderation does not respect users’ moral agency, it should still be used. Nevertheless, where possible, SMCs should work to bring their AI moderation more in line with an ideal of respect. This Ideal AI Content Moderation would be transparent (capable of giving users the reasons which underpin the moderation decision), with an option to appeal to a human moderator (as a recognition of the two-sided nature of moral reasoning). Furthermore, the AI should operate on clear, prospective, and reasonably predictable rules, such that users are given freedom from domination and are spared from moderation happening like a ‘bolt-from-the-blue’.
AI moderation is a necessity for SMCs.
They should use AI moderation to meet their substantive obligations whilst striving for the procedural ideal.


Notes:

[1] Perhaps obliged.

[2] (Strawson, 1962).

[3] (Hirsch, 1993).

[4] (Jacobs, 2019, p. 29) (Seelmann, 2014).

[5] (Hart, 2008, pp. 5-6).

[6] (Edwards & Simester, 2014) (von Hirsch A. , 1992).

[7] (Edwards & Simester, 2014, p. 64).

[8] (Suzor & Etal, 2019).

[9] (Gillespie, 2020, p. 3).

[10] (von Hirsch & Hörnle, 1995).

[11] (Fuller, 1969, p. 39) (Simmonds, 2007, p. 64).

[12] (Simmonds, 2007, p. 101).

[13] (Gorwa et al., 2020, p. 11) (Burrell, 2016).

[14] (Kelion, 2020).

[15] (West, 2018).

[16] (Bennett, 2019, pp. 78-81).

[17] Ibid (p. 81).

[18] Ibid.

[19] (Friedman, 1970).

[20] There is thus an interesting question about how these obligations could apply to Twitter post Elon’s takeover.

[21] (LeCun, 2020).

[22] (Gillespie, 2020, p. 4).

[23] Ibid.

[24] (McMahan, 2005) (Øverland, 2014).

Works Cited
Bennett, C. (2019). How Should We Argue for a Censure Theory of Punishment? In A. du Bois-Pedain, & A. Bottoms, Penal Censure (pp. 67-86). Hart Publishing.
Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 1-16.
Cohen-Almagor, R. (2015). Confronting the internet’s dark side: moral and social responsibility on the free highway. Cambridge: Cambridge University Press.
Edwards, J., & Simester, A. (2014). Prevention with a Moral Voice. In A. Simester, A. Du Bois-Pedain, & U. Neumann, Liberal Criminal Theory (pp. 43-65). Hart Publishing.
Friedman, M. (1970, September 13). The Social Responsibility of Business Is to Increase Its Profits. New York Times.
Fuller, L. (1969). The Morality of Law. New Haven: Yale University Press.
Gillespie, T. (2020). Content moderation, AI, and the question of scale. Big Data & Society, 1-5.
Gorwa, R., & et.al. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform goverance. Big Data & Society, 1-15.
Günther, K. (2014). Crime and Punishment as Communication. In A. du Bois-Pedain, A. Simester, & U. Neumann, Liberal Criminal Theory (pp. 123-140). Hart Publishing.
Hart, H. (2008). Punishment and Responsibility: Essays in the Philosophy of Law. Oxford University Press.
Hirsch, A. v. (1993). Censure and Sanctions. Oxford University Press.
Jacobs, J. (2019). Censure, Sanction and the Moral Psychology of Resentment. In A. du Bois-Pedain, & A. Bottoms, Penal Censure (pp. 19-40). Hart Publishing.
Kelion, L. (2020, September 24). Facebook ‘Supreme Court’ to begin work before US Presidential vote. Retrieved from BBC: https://www.bbc.co.uk/news/technology-54278788
LeCun, Y. (2020, June). Deep learning, neural networks and the future of AI. (C. Anderson, Interviewer)
McMahan, J. (2005). Self-Defense and Culpability. Law and Philosophy, 751–774.
Øverland, G. (2014). Moral Obstacles: An Alternative to the Doctrine of Double Effect. Ethics, 481-506.
Seelmann, K. (2014). Does Punishment Honour the Offender? In A. Du Bois-Pedain, A. Simister, & U. Neumann, Liberal Criminal Theory (pp. 111-121). Hart Publishing.
Simmonds, N. (2007). Law as a Moral Idea. Oxford: Oxford University Press.
Strawson, P. (1962). Freedom and Resentment. Retrieved from UCL: https://www.ucl.ac.uk/~uctytho/dfwstrawson1.htm
Suzor, N. P., et al. (2019). What Do We Mean When We Talk About Transparency? Toward Meaningful Transparency in Commercial Content Moderation. International Journal of Communication, 1526-1543.
von Hirsch, A. (1992). Proportionality in the Philosophy of Punishment. Crime and Justice, 16, 55-98.
von Hirsch, A., & Hӧrnle, T. (1995). Postive Generalpravention und Tadel. Goltdammer’s Archiv fur Strafrecht, 142.
West, S. M. (2018). Censored, suspended, shadowbanned: User interpretations of content moderation on social media platforms. New Media and Society, 4366-4383.

Loren Erdrich

Painter and sculptor Loren Erdrich began mixing mediums on large canvases after a friend brought back vials of natural pigments from Morocco. “They did these crazy things on the paper that I was working with,” she says, such as swirling together and blooming into bursts of patterns. “When those ran out, I ended up finding dye that had similar characteristics—my work is very materials forward.” Erdrich now uses a combination of dye, powdered pigment mixed with water, and acrylic paint on canvas, letting the materials guide the final composition. “The dye is part of the fabric,” she says. “It soaks in and colors the actual fabric. I may occasionally have marks of other materials that sit on top of the canvas but really what you’re looking at is a large piece of fabric, the color of which has been changed.” Her most recent works are included in a solo exhibition, In a Certain Light, currently on display at Shrine Gallery in New York.


  • As A Tree, 2022, water, raw pigment, dye, acrylic, colored pencil and water-soluble pastel on muslin, 56 x 52 inches.
  • Old Gods Try Hard, 2022, water, raw pigment, dye, acrylic, colored pencil and water-soluble pastel on muslin, 72 x 66 inches.
  • Saving-Face, 2021, water, raw pigment, dye, acrylic, colored pencil and water-soluble pastel on muslin, 56 x 52 inches.

Erdrich’s images center on ethereal beings. These gauzy, amorphic figures are not portraits of anyone or anything in particular. “I’ve always been interested in that crossover point between inside and outside, what is seen and what can’t be seen,” she says. “That place where you have one foot in the world that you can see and one foot in the invisible world—it’s always been about giving light to everything else around us.” Erdrich says that when she creates these figures, she’s challenging the viewer to imagine what it would be like to live a life of unlimited existence, where dualities aren’t divided and opposites can appear together. “We live in this world that really wants these clear categories and borders,” she says. “But I’ve always seen the world as many things at once. How do I get everything? How do I talk about all of it that isn’t just the actual physical thing I see in front of me?”


He Was Billed as the Next LeBron. But Will Emoni Bates Make It at All?

Bates was on a magazine cover at 15. He was playing Division I college basketball at 17. Now he’s 19, his college career may be over, and N.B.A. success is anything but guaranteed.

Emoni Bates played one season at Eastern Michigan, in his hometown, Ypsilanti, but is expected to enter the upcoming N.B.A. draft.

N.Y.U. Chooses Linda Mills as Its Next President

Dr. Mills will become the first woman to head New York University, one of the largest private universities in the country.

The best monitors for 2023

Computer monitors keep evolving rapidly, with new technology like OLED Flex, QD-OLED and built-in smart platforms just in the last year alone. That’s on top of big improvements in things like color accuracy, image quality, size and resolution.

The choice is nice but overwhelming, as there are a lot of products in this market and a lot of features. Buyers looking for computer monitors now have to consider things like HDR, brightness, color accuracy, type of display technology, input lag and more. And then there are the usual considerations like size, adjustability, inputs and so on.

To help you with all that, we’ve researched the latest models for all kinds of markets, whether you’re a gamer, business user or content creator. Read on to find out which is the best computer monitor for you and, especially, your budget.

The basics

Panel type

The cheapest monitors are still TN (twisted nematic), which are strictly for gaming or office use. VA (vertical alignment) monitors are also relatively cheap, while offering good brightness and a high contrast ratio. However, content creators will probably want an IPS (in-plane switching) LCD display that delivers better color accuracy, image quality and viewing angles.

If maximum brightness is important, a quantum dot LCD display is the way to go — those are typically found in larger displays. OLED monitors are now available and offer the best blacks and color reproduction, but they lack the brightness of LED or quantum dot displays. Plus, they cost a lot. The latest type of OLED monitor, called QD-OLED from Samsung, just came out in 2022. The most notable advantage is that it can get a lot brighter, with monitors shown at CES 2022 hitting up to 1,000 nits of peak brightness.

MiniLEDs are now widely used in high-end displays. They’re similar to quantum dot tech, but as the name suggests, they use smaller LED diodes that are just 0.2mm in diameter. As such, manufacturers can pack in up to three times more LEDs with more local dimming zones, delivering deeper blacks and better contrast.

Screen size, resolution and display format

In this day and age, screen size rules. Where 24-inch displays used to be more or less standard (and can still be useful for basic computing), 27-, 32-, 34- and even 42-inch displays have become popular for entertainment, content creation and even gaming these days.

Nearly every monitor used to be 16:9, but it’s now possible to find 16:10 and other more exotic display shapes. On the gaming and entertainment side, we’re also seeing curved and ultrawide monitors with aspect ratios like 21:9. If you do decide to buy an ultrawide display, however, keep in mind that a 30-inch 21:9 model is about the same height as a 24-inch 16:9 monitor, so you might end up with a smaller display than you expected. As a rule of thumb, a 21:9 monitor needs a diagonal about 25 percent larger than a 16:9 model to give you the same vertical height.
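If you want to check that rule of thumb, a panel’s height follows directly from its diagonal and aspect ratio; the 24-inch and 30-inch figures below are just worked examples:

```latex
\text{height} = d \cdot \frac{h}{\sqrt{w^{2} + h^{2}}}
\qquad
24'' \times \frac{9}{\sqrt{16^{2}+9^{2}}} \approx 11.8''
\qquad
30'' \times \frac{9}{\sqrt{21^{2}+9^{2}}} \approx 11.8''
```

So a 30-inch 21:9 ultrawide stands about as tall as a 24-inch 16:9 monitor, which is where the roughly 25 percent figure comes from (30 is 25 percent more than 24).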

A 4K monitor is nearly a must for content creators, and some folks are even going for 5K or all the way up to 8K. Keep in mind, though, that you’ll need a pretty powerful computer to drive all those pixels. And 4K resolution should be paired with a screen size of 27 inches and up, or you won’t notice much difference from 1440p. At the same time, I wouldn’t get a model larger than 27 inches unless it’s 4K, as you’ll start to see pixelation if you’re working up close to the display.

One new category to consider is portable monitors designed to be carried and used with laptops. Those typically come in 1080p resolutions and sizes from 13-15 inches. They usually have a lightweight kickstand-type support that folds up to keep things compact.


HDR

HDR is the buzzy monitor feature to have these days, as it adds vibrancy to entertainment and gaming – but be careful before jumping in. Some monitors that claim HDR on the marketing materials don’t even conform to a base standard. To be sure that a display at least meets minimum HDR specs, you’ll want to choose one with a DisplayHDR rating with each tier representing maximum brightness in nits.

However, the lowest DisplayHDR 400 and 500 tiers may disappoint you with a lack of brightness, washed-out blacks and mediocre color reproduction. If you can afford it, the best monitor to choose is a model with DisplayHDR 600 or 1000, or with True Black 400, 500 or 600. The True Black tiers are designed primarily for OLED models, with maximum black levels of 0.0005 nits.

Where televisions typically offer HDR10 and Dolby Vision or HDR10+, most PC monitors only support the HDR10 standard, other than a few (very expensive) models. That doesn’t matter much for content creation or gaming, but HDR streaming on Netflix, Amazon Prime Video and other services won’t look quite as punchy. In addition, most models supporting HDR600 (and up) are gaming monitors, rather than content creation monitors – with a few exceptions. 

Refresh rate

Refresh rate is a key feature, particularly on gaming monitors. A bare minimum nowadays is 60Hz, and 80Hz refresh rates and up are much easier on the eyes. However, most 4K displays top out at 60Hz with some rare exceptions, and the HDMI 2.0 spec only supports 4K at 60Hz, so you’d need at least DisplayPort 1.4 (4K at 120Hz) or HDMI 2.1. The latter is now available on a number of monitors, particularly gaming displays. However, it’s only supported on the latest NVIDIA RTX 3000- and 4000-series and AMD RX 6000-series GPUs.
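A quick back-of-the-envelope calculation shows why HDMI 2.0 tops out at 4K/60Hz. Ignoring blanking overhead and assuming 8-bit RGB (24 bits per pixel), 4K at 120Hz already exceeds the roughly 14.4 Gbit/s of payload that HDMI 2.0’s 18 Gbit/s link can actually carry:

```latex
3840 \times 2160 \times 120\,\text{Hz} \times 24\,\text{bits} \approx 23.9\ \text{Gbit/s} \gg 14.4\ \text{Gbit/s}
```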

Inputs

There are essentially three types of modern display inputs: Thunderbolt, DisplayPort and HDMI. Most monitors built for PCs come with the latter two, while a select few (typically built for Macs) will use Thunderbolt. To add to the confusion, USB-C ports may be Thunderbolt 3 and by extension, DisplayPort compatible, so you may need a USB-C to Thunderbolt or DisplayPort cable adapter depending on your display.

Color bit depth

Serious content creators should consider a more costly 10-bit monitor that can display billions of colors. If budget is an issue, you can go for an 8-bit panel that can fake billions of colors via dithering (often spec’d as “8-bit + FRC”). For entertainment or business purposes, a regular 8-bit monitor that can display millions of colors will be fine.
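Those “millions” and “billions” come straight from the per-channel bit depth: with three color channels, the per-channel count is cubed.

```latex
(2^{8})^{3} = 16{,}777{,}216 \ \text{colors (8-bit)} \qquad (2^{10})^{3} \approx 1.07 \ \text{billion colors (10-bit)}
```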

Color gamut

The other aspect of color is the gamut. That expresses the range of colors that can be reproduced, not just the number of colors. Most good monitors these days can cover the sRGB and Rec.709 gamuts (designed for photos and video, respectively). For more demanding work, though, you’ll want one that can reproduce more demanding modern gamuts like AdobeRGB, DCI-P3 and Rec.2020, which encompass a wider range of colors. The latter two are often used for film projection and HDR, respectively.

Console gaming

Both the Xbox Series X and Sony’s PS5 can handle 4K 120Hz HDR gaming, so if you’re into resolution over pure speed, you’ll want a monitor that can keep up. 4K resolution, HDR and at least 120Hz is the minimum starting point, but fortunately there are 27-inch displays with those specs starting at well under $1,000.

Pricing and parts shortages

Though the pandemic has eased, monitor supply is still a bit tighter than pre-pandemic levels due to supply and demand issues. To that end, you may have trouble finding monitors at Amazon, B&H or elsewhere for the suggested retail price. For our guide below, we’re basing our picks on the MSRP, as long as the street price doesn’t exceed that by more than $25.

Best monitors under $200

Samsung T35F

The monitor with the best balance of size, refresh rate and color accuracy is Samsung’s 27-inch 1080p T35F. It’s good for business or light gaming and content work, thanks to the IPS panel and 75Hz refresh rate. Plus, it’s fairly attractive and modern looking. There are some things you don’t get at that price, of course – it can only tilt and has an HDMI 1.4 connection.

LG 24GL600F

If you’re fine with a smaller display and are more into gaming, another solid option is LG’s 24-inch 24GL600F. It offers a high 144Hz refresh rate with AMD FreeSync support, a 1ms response time and low input lag. You also get HDMI and DisplayPort inputs, but like the T35F, there’s no height adjustment.

Buy LG 24GL600F at Amazon - $200

Best monitors under $400

HP U28 4K HDR Monitor

The 28-inch HP U28 4K HDR monitor is a great all around choice, especially for content creators. The 60Hz IPS panel and factory calibration delivers excellent color accuracy and it’s a nice size for creative or business work. It comes with DisplayPort, HDMI and three USB 3.0 ports, along with a USB-C port with 65W of charging for a laptop or tablet. And it’s easy to set just right, thanks to height, swivel and pivot adjustment.

Gigabyte G27QC

If gaming is more your thing, the $300 Gigabyte G27QC is a top pick. The 27-inch, 1440p curved monitor has an ideal size and resolution for gaming, and it has a quick 165Hz refresh rate and 1ms response time. You can connect via HDMI 2.0 or DisplayPort 1.2 connections and get HDR support – albeit, without DisplayHDR certification.

Buy Gigabyte G27QC at Amazon - $300

BenQ 27-inch QHD HDR Monitor

The $400 BenQ 27-inch 2K QHD HDR model is ideal for creative work, particularly photo editing and graphic design. While resolution is limited to 1440p, it covers 100 percent of the sRGB color gamut with a “Delta E” accuracy value of less than 3 for consistent color performance. You also get height, pivot and swivel adjustment (a full 90 degrees), with HDMI 2.0, DisplayPort 1.4 and USB-C daisy chaining and 65W power delivery.

Buy 27-inch BenQ QHD monitor at Amazon - $400

Best monitors under $500

LG 32UN650-W

The 32-inch LG 32UN650-W is a great 4K monitor for entertainment, creative chores and gaming. The 31.5-inch, 60Hz IPS panel covers an excellent 95 percent of the DCI-P3 gamut with 10-bit color, but also supports AMD FreeSync for gaming. It also supports HDR, albeit with just 350 nits of maximum brightness. It has HDMI 2.0 and DisplayPort 1.4 ports, tilt and height adjustments and even built-in speakers.

ASUS ROG Swift PG259QN

Sometimes speed rules over size and resolution, and the 24.5-inch 1080p ASUS ROG Swift PG259QN is fast. It maxes out at a 360Hz refresh rate (with NVIDIA G-Sync support) and a 1ms GtG response time. At the same time, you get 1.07 billion colors with HDR support (up to 400 nits brightness) so you can see your enemies quickly and clearly. Other niceties of this best monitor pick include a fully adjustable stand, ASUS’s GamePlus Hotkey Enhancements and a large heatsink.

Buy ASUS ROG Swift monitor at Amazon - $499

Gigabyte M28U

Gigabyte’s M28U 28-inch 144Hz 4K gaming monitor sure does a lot. It has an IPS panel with a 2ms (MPRT) response time, 94 percent DCI-P3 coverage, DisplayHDR 400 certification, 2 HDMI 2.1 ports and FreeSync Premium Pro support. It comes in a little bit more expensive than $500, but we've often seen it on sale for less.

Buy Gigabyte M28U at Amazon - $649

Best monitors under $1,000

ViewSonic ColorPro VP2786-4K

In this price range you can have resolution, color accuracy or brightness, but not all three. The one with the best balance is ViewSonic’s $1,000 ColorPro VP2786 27-inch 4K HDR monitor. The true 10-bit IPS panel covers 98 percent of the DCI-P3 color palette with an excellent Delta E < 2 accuracy figure, and is certified for soft-proofing by the demanding Fogra print industry. At the same time, it offers HDR10 support, albeit with a limited 350 nits of output. It even includes a “ColorPro” wheel control compatible with Adobe or Capture One apps.

Dell G3223Q

The best 4K gaming monitor under $1,000 is Dell’s G3223Q 4K 32-inch HDR 144Hz monitor because of the speed, brightness and compatibility. It has an IPS panel with a 144Hz refresh rate, 1ms GtG response time, 95 percent DCI-P3 coverage and DisplayHDR 600 certification. Plus, it comes with a pair of HDMI 2.1 ports and is both FreeSync and G-Sync compatible.

Buy Dell G3223Q at Amazon - $1,000

Dell P3223QE

Dell’s P3223QE 4K USB-C Hub monitor is productivity-oriented, thanks to the wired Ethernet connectivity and USB-C ports that offer up to 90W of power delivery for laptops. It’s a 4K IPS panel with a 178-degree viewing angle and 350 nits of brightness and support for a billion colors (8-bit + FRC). It offers height, pivot, swivel and tilt adjustment, a VESA mounting interface and DisplayPort/HDMI inputs.

Buy Dell P3223QE at Amazon - $742

Best monitor for Mac users

Apple Studio Display

In general, monitor compatibility issues with MacBooks and Macs are a thing of the past, though you can still experience issues with things like refresh rates, particularly on M1 Macs. If you’d prefer to stay within the Apple family, the most cost-effective option is still the 27-inch Apple Studio Display. It supports 5K resolution (5,120 x 2,880) with up to 600 nits of brightness, so it can handle creative chores with ease. It even includes a 12-megapixel UltraWide camera that keeps you in frame via Center Stage, along with a three-mic array.

LG Ultrafine 4K and 5K

The best third-party option is LG’s $700 UltraFine 4K display, also sold on Apple’s Store. With a 24-inch 4K panel, you not only get very high resolution but also 500 nits of brightness (albeit without HDR capability). It’s color-accurate out of the box, making it great for video- and photo-editing work on a Mac or MacBook. Finally, it supports Thunderbolt 3 with daisy chaining and power delivery, all of which is very useful for Mac users who may want multiple displays.

Buy LG UltraFine 4K at Amazon - $700

Best Ultrawide monitor

LG 34GP950G-B

Ultrawide 21:9 monitors are a great option for some types of content creation, games (particularly driving and flight sims) and productivity work. The best model this year is LG’s 34GP950G-B, a 34-inch 3,440 x 1,440 curved monitor. The curved IPS panel supports HDR10 with 400 nits of brightness and maximum (via overclocking) 180Hz refresh rate. It’s also G-Sync and FreeSync compatible (the latter over DisplayPort only).

Best portable monitor

LePow C2S

For the best balance of performance and price, LePow’s 15.6-inch C2S is a solid option. It offers decent brightness (220 nits), solid contrast and a very respectable 96.1-percent sRGB gamut coverage. You get a generous selection of ports (one mini DisplayPort, one mini HDMI and two USB-C ports), along with a headphone jack. The metal stand is solid and practical, and it even has built-in speakers of decent quality.

Best luxury monitor

ASUS ProArt PA32UCG-K

ASUS still holds the prize for best luxury monitor, but it discontinued the previous mini-LED $4,000 ProArt PA32UCX monitor and replaced it with the $5,000 PA32UCG-K display. It uses the same mini-LED tech, but ups the ante with 1,600 nits of brightness via 1,152 backlight zones, an HDMI 2.1 port, 4K 120Hz resolution, 10-bit, 98 percent DCI-P3 coverage and an impressive 85 percent Rec.2020 coverage. Oh, and it’s one of the few monitors out there that supports Dolby Vision, along with HDR10 and HLG.

You’re probably doing it wrong if you’re using a $5K monitor for gaming. However, it does support AMD FreeSync (good for game creation) and has a 5-millisecond response time, very respectable for a display essentially designed for professional colorists. And to that end, color accuracy is calibrated to Delta E < 1 and it’s a true 10-bit panel delivering billions of colors. To verify that, it even comes with an X-Rite i1 Display Pro color calibrator, normally sold separately for around $500.

On top of this model, ASUS now makes several slightly less bright and less expensive variants, namely the $4,180 PA32UCX-PK (plus -P and -K variants with slightly different features), offering 1,200 nits of brightness and a 60Hz (not 120Hz) refresh rate. Specs are nearly identical otherwise.

(Image: ASUS ProArt Display PA27UCX-K monitor in a video editing setup.)

Michael McGregor

In 2015, Michael McGregor quit his corporate job and moved to Mexico City, where the vibrant hues of his new home inspired him to reach for his colored pencils. “I hadn’t drawn anything since I was a teenager,” he says. The city “changed my perspective completely. I woke up to the colors that were actually in the universe.” Four years later, he moved to Los Angeles and began creating bright acrylic-and-oil-stick paintings of flowers, as well as interior scenes inspired by the still-life paintings of the Dutch masters and the early 20th-century Fauvist movement. The canvases are semi-autobiographical. “I grew up in a flower store,” he says. “My mom and all of my sisters and I would work in the flower store together so all of my work naturally stemmed from that.”


  • It's A Glamorous World (Salon Hodler, After LL), 2021, charcoal, acrylic and oil stick on canvas, 34 x 42 inches.
  • Just Like The Birds Need The Sky Above, 2022, charcoal, acrylic and oil stick on canvas, 50 x 40 inches.

McGregor’s series Private Party is currently on view at the Hashimoto Contemporary Gallery in Los Angeles. A white piano features in six of the show’s 34 works—a reference to the one his roommate purchased during the pandemic and to several of McGregor’s favorite Matisse paintings, all of which depict a woman seated at a piano. For McGregor, the series is a meditation on the spoils of excess. “I’m always interested in ideas of glamour and elegance, and that sort of seesaw where elegance and glamour can become decadent or hedonistic or maybe a little bit disturbing,” he says. “How can you treat elegance in a way that doesn’t feel so refined?”


AI, Instructional Design, and OER

By: david

2022 saw some significant advancements in artificial intelligence. My threshold for “significant” here being that the advances moved out of labs and arXiv.org preprints and into tools that many people were using and talking about. Lots of people thought text-to-image tools like DALL-E, Stable Diffusion, and Midjourney were fun. But Large Language Models (LLMs), and particularly the recent demo of ChatGPT, seem to have put the fear of God into everyone from middle school English teachers to the CEO of Google. The potential partnership between OpenAI (the makers of ChatGPT) and Microsoft may even present the first substantive challenge to Google’s search monopoly we’ve ever seen – and that’s saying something. While most of the dialog around AI and education seems to be focused on assessment, I think the implications for instructional designers are critically important, too. And, because you’ve got to play the hits, let’s look at what their impact will be on OER as well.

About Instructional Designers

I think a lot of people believe that instructional designers are the people who are really good at using the campus LMS – folks who can create new course shells for faculty, help faculty get their content uploaded into the LMS, and maybe even help with a little multimedia editing and production. Unfortunately, literally none of those things are instructional design. Instructional design is the process of leveraging what we understand about how people learn to create experiences that maximize the likelihood that the people who participate in those experiences will learn. Instructional designers need a deep understanding of both learning science research and educational technologies in order to effectively integrate them in support of learning. Uploading a syllabus PDF into Blackboard is not instructional design.

You know what else isn’t instructional design? The creation of accurate descriptions and explanations of facts, theories, and models. The raw text and images that make up the overwhelming majority of what we call textbooks. I’ve written previously about the difference between informational resources and educational resources. Wikipedia and other encyclopedias are informational resources. Reference materials and technical documentation are informational resources. The overwhelming majority of textbooks are primarily informational resources. What distinguishes an educational resource from an informational resource is that the former shows some intentional application of knowledge about how people learn. I have previously argued that the minimum amount of effort you could invest to convert an informational resource into an educational resource was to add practice with feedback. That simple sounding task quickly explodes in complexity as you consider the research on what form that practice should take, how long it should last, when it should be scheduled, what kind of feedback should be provided, whether the feedback should come immediately or after some delay, etc.

Now, please don’t misunderstand me – there is absolutely an art and science to creating accurate, beautiful informational resources. It’s just a different art and science from the one involved in creating educational resources. All educational resources begin as informational resources, so they are critically important to the work we do as instructional designers. You might say they are a kind of intellectual infrastructure, in the way Brett Frischmann uses the term:

Intellectual infrastructure, such as basic research, ideas, general purpose technologies, and languages, creates benefits for society primarily by facilitating a wide range of downstream productive activities, including information production, innovation, and the development of products and services, as well as education, community building and interaction, democratic participation, socialization, and many other socially valuable activities. (emphasis added)

Informational resources (especially openly licensed ones) facilitate a wide range of downstream productive activities, including the creation of educational resources. Or, to use language I have used often before, informational resources are “content infrastructure” upon which instructional designers build when they create educational resources.

LLMs Will Make Creating the Content Infrastructure Significantly Easier, Faster, and Cheaper

LLMs will dramatically increase the speed of creating the informational resources that comprise the content infrastructure. Of course the drafts of these informational resources will need to be reviewed and improvements will need to be made – just as is the case with all first drafts – to ensure accuracy and timeliness. But it appears that LLMs can get us 80% or so of the way to reasonable first drafts orders of magnitude faster, eliminating the majority of the expense involved in this part of the process. Here’s an example of what I’m talking about. Imagine you’re an SME who has been tasked with writing the content for an introductory economics textbook. (The following examples are from ChatGPT.)

Definitions and Examples

That’s not quite right, but it’s far faster to feed a prompt into ChatGPT and then edit the output to make it accurate than it would be to write that from scratch. And I think this will always be the way to think about these tools – incredibly helpful for creating first drafts for humans to then review, augment, and polish.

But LLMs won’t just help SMEs – they can also provide instructional designers with first drafts of some of the work they do. Imagine you’re an instructional designer who’s been paired with a faculty member to create a course in microeconomics. These tools might help you quickly create first drafts of:

Learning Outcomes

Discussion Prompts and Rubrics

Formative Assessment Items

Again, not exactly right, but an incredibly helpful starting point. Especially if you need to create several thousand assessment items.
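To give a feel for how those first drafts might be generated at scale, here is a short, hypothetical sketch that asks an LLM to draft multiple-choice items for a list of learning outcomes, again using the pre-1.0 openai Python client. The outcomes, the prompt wording, and the model name are illustrative assumptions, and every generated item would still need review by an SME and an instructional designer.

```python
# Hypothetical sketch: drafting formative assessment items with an LLM.
# Uses the pre-1.0 openai Python client; the outcomes and prompt are placeholders.
import openai

outcomes = [
    "Explain the law of demand.",
    "Calculate price elasticity of demand from a change in price and quantity.",
]

PROMPT = (
    "Write two multiple-choice questions assessing this learning outcome: {outcome}\n"
    "For each question, provide four options, mark the correct answer, and give one "
    "sentence of corrective feedback for each incorrect option."
)

drafts = {}
for outcome in outcomes:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": PROMPT.format(outcome=outcome)}],
    )
    drafts[outcome] = response["choices"][0]["message"]["content"]

# Each draft is a starting point for human review, not a finished item bank.
for outcome, draft in drafts.items():
    print(f"--- {outcome}\n{draft}\n")
```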

As we see above, LLMs can engage at a basic level with concepts from the instructional design domain like discussion questions, rubrics, multiple choice questions, and feedback. But in the same way that an SME in a discipline needs to check a draft informational resource for accuracy of information, an instructional designer will need to check these educational resources for pedagogical and psychometric accuracy. And improvements will absolutely need to be made.

Did I Mention this Content Infrastructure Will Be Open?

In his application to register a work he created using AI software like Stable Diffusion, Steven Thaler wrote that the work “was autonomously created by a computer algorithm running on a machine” and that he was “seeking to register this computer-generated work as a work-for-hire to the owner.” In other words, he applied for copyright protection for a work he created by providing a prompt to a generative AI tool. The US Copyright Office rejected his attempt to register copyright in the work – twice. In their final response they wrote:

Copyright law only protects “the fruits of intellectual labor” that “are founded in the creative powers of the [human] mind.” COMPENDIUM (THIRD) § 306 (quoting Trade-Mark Cases, 100 U.S. 82, 94 (1879)); see also COMPENDIUM (THIRD) § 313.2 (the Office will not register works “produced by a machine or mere mechanical process” that operates “without any creative input or intervention from a human author” because, under the statute, “a work must be created by a human being”)….

While the [review] Board is not aware of a United States court that has considered whether artificial intelligence can be the author for copyright purposes, the courts have been consistent in finding that non-human expression is ineligible for copyright protection….

Courts interpreting the Copyright Act, including the Supreme Court, have uniformly limited copyright protection to creations of human authors…. For this reason, the Compendium of U.S. Copyright Office Practices — the practice manual for the Office — has long mandated human authorship for registration…. Because copyright law as codified in the 1976 Act requires human authorship, the Work cannot be registered.

In other words, as far as the US Copyright Office is concerned, outputs from programs like ChatGPT or Stable Diffusion are not eligible for copyright protection. Now, that could change if Congress gets involved (are they actually capable of doing anything?), or if the Supreme Court turns its collective back on decades of precedent (which, admittedly, has been happening recently). But unless something rather dramatic happens along these lines, the outputs of generative AI programs will continue to pass immediately into the public domain. Consequently, they will be open educational resources under the common definition:

Open educational resources (OER) are teaching, learning, and research materials that are either (1) in the public domain or (2) licensed in a manner that provides everyone with free and perpetual permission to engage in the 5R activities.

Generative AI tools could have an incredible impact on the breadth and diversity of OER that exist, since they will dramatically decrease the cost and time necessary to create the informational resources that underlie these learning materials. Current funding for the creation of OER (when it’s available at all) typically focuses on the courses enrolling the largest number of students. This makes sense when your theory of philanthropy is that the money you spend should benefit the most people possible. But that theory of philanthropy also means that the “long tail” of courses that each enroll relatively few students are unlikely to ever receive funding. LLMs will radically alter the economics of creating OER, and should make it possible for OER to come to these courses as well. (And while LLMs will have a significant impact on the economics of creating OER, they may not have as dramatic an impact on the sustainability, maintenance, and upkeep of OER over time.)

Two potentially interesting questions: First, just as teachers are worried about students submitting work written by LLMs, a similar issue is coming for the US Copyright Office. If the output of these tools is not eligible for protection, the Office will soon have a deep interest in understanding which works are created by machines and which are truly the result of human creativity.

Second, how much time and effort would need to be invested in improving the drafts created by one of these tools before the improved version would be recognized as a derivative work that is eligible for copyright protection? (Think about Pride and Prejudice and Zombies. While it was based on a public domain work, it was transformed sufficiently to deserve its own copyright protection.)

The Impact on Traditional Publishers and OER

I’ve written before that OER advocates would eventually be sorry that they based their advocacy almost exclusively on cost savings, because the price of proprietary learning materials will come down and that talking point will evaporate:

[While] the cost of traditionally copyrighted educational materials has historically been much higher than the price of OER, the cost of textbooks has plateaued for the first time in decades (Perry 2020). As publishers respond to the price pressure created by OER in the course materials market, the difference in the prices of OER and traditionally copyrighted resources is likely to continue to decrease. If the access hypothesis holds, the impact of OER on student outcomes attributable to affordability will decrease in parallel. In other words, adopting OER may not be a long-term strategy for saving students significant amounts of money or closing the achievement gap between lower income and higher income students. (2021)

The impact of LLMs on the economics of creating proprietary materials should be the same as it is for OER (assuming publishers adapt to this new reality). As “big name authors” are supplanted by “big name editors” of work initially drafted by LLMs, the royalties publishers owe to authors will either decrease significantly or disappear altogether. Now able to produce and sell proprietary materials much less expensively than in the past, publishers could pass these savings on to students in order to compete even more effectively with the price of OER, simply use the savings to improve their margins, or do a little of both. The key point here is that the boon of LLMs for learning materials production is NOT exclusive to OER. In fact, the benefit may accrue disproportionately to publishers of proprietary materials, since publishers of OER have never had to pay royalties. LLMs may put proprietary materials on equal footing with OER in this regard.

The Age of the Instructional Designer

We’ve been saying for more than a decade that content is a commodity. That is more true in the context of LLMs than it ever has been before. If content doesn’t differentiate learning materials from one another, what will? It’s possible we may finally be reaching a time when effectiveness will come to the forefront as the primary difference between learning materials. And while content accuracy is the domain of SMEs, effectiveness at supporting learning is the domain of instructional designers.

As the role of SMEs changes from author to editor and their time commitment to projects decreases, the role of instructional designers will grow in importance and effort. The instructional designer is likely to be the primary prompt engineer. Instructional design expertise will be reflected in the output of these systems in proportion to the degree that it is embedded in the prompts fed into them. IDs will engineer prompts, feed them into systems, and then do rounds of rapid review of outputs with SMEs at each step in the process, from the beginning (what is missing from this list of learning outcomes?) to the end (does the feedback on this formative assessment item accurately correct the misunderstanding a student likely has when they select option B?). As accuracy is being assured via this process, a significant amount of instructional design expertise will need to be applied to the varied outputs of these systems to bring them together in a cohesive, coherent way that will effectively support learning. Again, the right way to think about these tools is that they are incredibly helpful for creating first drafts for humans to then review, augment, and polish. And, when it comes to creating learning materials that are highly effective for all learners, there will be plenty of augmenting and polishing to do.
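Here is a minimal sketch of that ID-as-prompt-engineer loop, assuming the same OpenAI Python client as above: the instructional design expertise lives in the prompt template, and an SME review round sits between the model's draft and anything a student sees. The template wording, model name, and the review stand-in are illustrative assumptions, not a description of any particular team's process.

```python
# Sketch: an ID-authored prompt template plus a rapid SME review round.
from openai import OpenAI

client = OpenAI()

ID_TEMPLATE = (
    "You are drafting learning materials for {course}.\n"
    "Target learning outcome: {outcome}\n"
    "Audience: first-year undergraduates with no prior coursework in the subject.\n"
    "Draft {artifact} aligned to the outcome. Use plain language, include a "
    "concrete example, and note any prerequisite knowledge the draft assumes."
)

def draft(course: str, outcome: str, artifact: str) -> str:
    """Generate one draft using the ID's prompt template."""
    prompt = ID_TEMPLATE.format(course=course, outcome=outcome, artifact=artifact)
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def review_with_sme(text: str) -> bool:
    """Stand-in for the rapid human review round: an SME accepts or rejects."""
    print(text)
    return input("Accurate and usable as a starting draft? (y/n) ").lower() == "y"

candidate = draft("microeconomics", "Explain the law of demand", "a one-paragraph explanation")
while not review_with_sme(candidate):
    candidate = draft("microeconomics", "Explain the law of demand", "a one-paragraph explanation")
```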

What an incredible time to be an instructional designer! Every school that teaches instructional design needs to update its curriculum immediately to leverage these tools. Using LLMs and other AI effectively (e.g., for creating custom images) will be a key part of preparing instructional designers for the decade ahead.

Harvard Reverses Course on Human Rights Advocate Who Criticized Israel

News that the university had blocked a fellowship for the former head of Human Rights Watch stirred debate over academic freedom and donor influence.

Kenneth Roth, the former director of Human Rights Watch, in New York last April. The Harvard Kennedy School recently reversed its earlier decision to reject his fellowship application because of his criticisms of Israel.

Faculty are Losing Interest in Adopting OER

By: david

I missed this when it was first published, but a few months ago Ithaka S+R shared the results of their US Faculty Survey 2021. The survey went out to 145,099 randomly selected faculty members across the US, and over 5% of invitees responded. I was particularly interested in the survey’s findings about the state of open educational resources in US higher education. Below, I’ll share these results, the authors’ interpretations of them, and some of my own thoughts. (The images and quotes from the survey are (C) Ithaka S+R, 2022 and are licensed CC BY-NC.)


The faculty survey asked the following question in 2018 and again in 2021:

Which, if any, of the following open educational resources have you created and/or used in your courses? Please check all that apply.

The authors then explain these results as follows:

There has been a notable increase in the amount of faculty creating and using OER since 2018 (see Figure 39). Overall, more faculty members use OER than create them, though the share of faculty who have created any OER increased has also increased [sic] along with the share of faculty who have used OER. In 2021, 41 percent of faculty reported using open textbooks, 38 percent have used open video lectures, and 26 percent have used open course modules (see Figure 39).

Given the amount of video on the internet (e.g., YouTube) that is not “open” as that word is used in the phrase “open educational resources”, I find it HIGHLY unlikely that almost 40% of faculty found and used videos that were either in the public domain or licensed under a CC license granting users the 5R permissions. Which means that when faculty answered this question they were probably actually answering the question “which of the following free online resources have you created and/or used in your courses?” And honestly, which was more likely to happen during the emergency shift to online learning caused by the pandemic – faculty finding and incorporating any reasonable-quality free resource they could find online, or faculty taking the time to carefully vet the licensing of the resources they found and specifically choosing OER to include in their courses?

In other words, I think this question likely dramatically overestimates the actual amount of OER use happening in higher education, if by “OER use” you mean (as I believe most OER advocates do) “OER adopted as the primary learning resource instead of an expensive proprietary resource.”

It’s also the case that, given the way the question is worded, using a chapter from an open textbook as a supplementary reading in a course where you adopted a $200 textbook would result in a “yes” answer under “open textbooks” here. Which would lead to an even greater overestimation of OER use. I believe a more informative version of the question would have included language like “as the primary required learning resource.”


I found these questions asked in 2018 and again in 2021 even more interesting:

Figure 40. Please read the following statements and indicate the degree to which you agree or disagree with each.

The authors then explain these results as follows:

Unintuitively, while more faculty are using and creating OER since 2018, interest in using and creating OER has declined in that same period. In 2018, 57 percent agreed they were interested in using OER compared to 51 percent in 2021. We see a similar decrease for those interested in creating and publishing OER–30 percent of faculty in 2018 were interested in creating OER in 2018 compared to 25 percent in 2021 (see Figure 40). This may be due to the continued lack of incentive, either monetary or through new professional development opportunities, for adopting OER into their courses.

Note that Question 39 did not, in fact, ask faculty if they “are using” OER; it asked if they “have used” OER. That distinction is incredibly important, because I believe it points to another possible explanation for the change in level of interest the survey found in using OER (whether that means actual OER or simply free online resources). Some of the faculty who adopted and used those resources simply aren’t interested in doing it again. Compared to the full suite of supports and helps they’re used to getting from commercial publishers – not just the textbook content, but thousands of assessment items, Powerpoints to use in class, pacing guides for 15 and 8-week terms, analytics that help faculty track struggling students, etc. – the overwhelming majority of OER offer very little support for faculty. It is, objectively, more work for faculty when they switch from adopting a full courseware solution to using a free PDF. For example, if you switched from MyMathLab to an OpenStax book for college algebra, you would find yourself hand-grading a lot of homework the courseware used to grade for you. It’s not hard to understand why, having experienced all that additional manual work once, faculty would no longer be interested in adopting OER in the future. That’s how rates of “have used” OER can go up, while rates of “interested in using OER” can go down.

The authors’ explanation of why interest in adopting OER has dropped – that faculty aren’t being paid extra to adopt them – is itself an implicit acknowledgement of how much extra work it is for faculty to adopt and use OER in their classes. No one has to provide incentives for faculty to adopt publisher materials for their classes – faculty are intrinsically motivated to adopt those resources because they save faculty time and effort. Demanding extra pay before adopting OER only makes sense if you believe it will take extra work to adopt and use OER (compared to adopting and using publisher materials).


Of course it’s possible to adopt OER within the courseware context, integrated with the complete set of supports faculty want, but many OER advocates reject that way of using OER because there are some costs associated with it.

In years to come I expect that interest in adopting and using “naked OER” – just the openly licensed content – will continue to decline until only early adopter-types are left. The only real hope I see for a future in which OER adoption goes to scale within US higher education is one in which OER are integrated and combined with all the other helps and supports faculty want. Because faculty decide what gets adopted and used in class.
