
What Color is the Sun?

Quick, without looking, what color is the sun? Would you believe it's green? Also, please don't look directly at the sun.


The Neurotic Dogma of Reality

The world appears to be a certain way, but sometimes appearances are deceiving. This doesn’t seem to undermine what we think we know, for instance, the apparently obvious fact that we have hands. But, how do you know you’re not dreaming right now? Or better, do you know you’re not a handless brain in a […]

How the Press Feels about Philosophy

While major newspapers report on philosophy with, on average, slightly positive sentiment, it seems this positive sentiment becomes slighter by the year.

Charles Lassiter (Gonzaga) has conducted a “sentiment analysis” of news stories about philosophy from 2012 to 2022. What is sentiment analysis? Professor Lassiter says:

Sentiment analysis is exactly what it sounds like: getting an analysis of the positive and negative sentiments in a text. There are a few different ways to do this. The AFINN corpus from Finn Årup Nielsen tags words with a value between -5 (“bastard”, “motherfucker”) and 5 (“outstanding”, “superb”), excluding 0. We’ll be using that.
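As a rough illustration of how this kind of scoring works (a minimal sketch in Python using the afinn package, not Lassiter's actual pipeline):

```python
# A minimal sketch of AFINN scoring; not Lassiter's actual pipeline.
# Requires: pip install afinn
from afinn import Afinn

afinn = Afinn()  # loads the AFINN lexicon of word valences from -5 to +5

# score() sums the valence of every lexicon word it finds, so the words
# quoted above behave as you'd expect:
print(afinn.score("An outstanding, superb result"))    # 5 + 5 = 10.0
print(afinn.score("That bastard"))                     # -5.0
print(afinn.score("An article about epistemology"))    # 0.0: no lexicon words
```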

Applying this analysis to 686 articles about philosophy in national and local newspapers (filtered so as to exclude articles about, say, the Philosophy cosmetics brand, or a basketball coach’s “philosophy” for the upcoming game), he found, on average, a little more positive than negative sentiment:

[Figure: Sentiment analysis of U.S. newspaper articles about philosophy, 2012-2022, by Charles Lassiter]

However, an analysis of how sentiment has changed year by year reveals a negative trend:

[Figure: Sentiment trend in U.S. newspaper articles about philosophy, 2012-2022, by Charles Lassiter]
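The trend itself is essentially a least-squares line through the yearly mean scores. Here's a sketch with made-up numbers (the real per-year values are in Lassiter's post):

```python
# Fitting a trend line to yearly mean sentiment scores.
# The values below are invented for illustration; the real ones
# are in Lassiter's analysis.
import numpy as np

years = np.arange(2012, 2023)
mean_sentiment = np.array([2.1, 2.0, 1.9, 1.7, 1.8, 1.5, 1.4, 1.3, 1.1, 1.0, 0.8])

slope, intercept = np.polyfit(years, mean_sentiment, deg=1)
print(f"{slope:.3f} sentiment points per year")  # negative slope = declining sentiment
```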

You can learn more about Professor Lassiter’s analysis at his site, Soaked Feet, Bone-Dry Cuffs.

I think one could raise some questions about what can be learned from the analysis. For example, it could be that Professor Lassiter’s method for filtering out irrelevant results—he did so by excluding articles that did not mention “liberal arts”—skewed the results negative.

It could also be that neutral or positive articles about philosophy books with “negative” subject matter or the use of “negative” words were mischaracterized by the analysis. Consider Assholes: A Theory by Aaron James (2014), or A Theory of Jerks and Other Philosophical Misadventures by Eric Schwitzgebel (2019), two books widely discussed in the media during the studied period. Or recall that there have been many articles on recent books about women in the history of philosophy, which might have been identified as having negative sentiment because they discussed discrimination, exclusion, and harassment. And of course there have been articles on philosophical work concerning negative-sounding topics, such as The Case for Rage: Why Anger is Essential to Anti-Racist Struggle (2021) by Myisha Cherry or Grief: A Philosophical Guide (2022) by Michael Cholbi (to name just a couple). It would be helpful to know whether the sentiment analysis accurately sorted articles about these and similar books.

Despite such concerns, it may be that, in general, Professor Lassiter’s results hold up. In light of that possibility, we might ask about their causes, whether we should be concerned about them, and what, if anything, we should do to reverse the trend.


Multimodal LLMs Are Here (updated)

“What’s in this picture?” “Looks like a duck.” “That’s not a duck. Then what’s it?” “Looks more like a bunny.”

Earlier this week, Microsoft revealed Kosmos-1, a large language model “capable of perceiving multimodal input, following instructions, and performing in-context learning for not only language tasks but also multimodal tasks.” Or as Ars Technica put it, it can “analyze images for content, solve visual puzzles, perform visual text recognition, pass visual IQ tests, and understand natural language instructions.”

Researchers at Microsoft provided details about the capabilities of Kosmos-1 in “Language Is Not All You Need: Aligning Perception with Language Models”. It’s impressive. Here’s a sample of exchanges with Kosmos-1:

And here are some more:

[Figure: Selected examples generated from KOSMOS-1. Blue boxes are input prompts and pink boxes are KOSMOS-1 outputs. The examples include (1)-(2) visual explanation, (3)-(4) visual question answering, (5) web page question answering, (6) simple math equation, and (7)-(8) number recognition.]

The researchers write:

Properly handling perception is a necessary step toward artificial general intelligence.

The capability of perceiving multimodal input is critical to LLMs. First, multimodal perception enables LLMs to acquire commonsense knowledge beyond text descriptions. Second, aligning perception with LLMs opens the door to new tasks, such as robotics, and document intelligence. Third, the capability of perception unifies various APIs, as graphical user interfaces are the most natural and unified way to interact with. For example, MLLMs can directly read the screen or extract numbers from receipts. We train the KOSMOS-1 models on web-scale multimodal corpora, which ensures that the model robustly learns from diverse sources. We not only use a large-scale text corpus but also mine high-quality image-caption pairs and arbitrarily interleaved image and text documents from the web.
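To make “arbitrarily interleaved image and text documents” concrete, here's a hypothetical sketch of how such inputs can be flattened into a single sequence for a language model to consume. The names and dimensions are illustrative only; this is not Kosmos-1's actual code:

```python
# Hypothetical sketch: flattening interleaved image/text segments into one
# sequence of embeddings for a multimodal LM. Illustrative only; not
# Kosmos-1's actual architecture.
import torch
import torch.nn as nn

d_model, vocab_size, img_feat_dim = 64, 1000, 512
text_embed = nn.Embedding(vocab_size, d_model)  # token id -> d_model vector
img_proj = nn.Linear(img_feat_dim, d_model)     # image feature -> same space

def build_sequence(segments):
    """segments: list of ("text", LongTensor[n]) or ("image", FloatTensor[img_feat_dim])."""
    parts = []
    for kind, data in segments:
        if kind == "text":
            parts.append(text_embed(data))             # (n, d_model)
        else:
            parts.append(img_proj(data).unsqueeze(0))  # (1, d_model)
    return torch.cat(parts, dim=0)  # one interleaved sequence for the LM

doc = [
    ("text", torch.randint(0, vocab_size, (5,))),  # text before the image
    ("image", torch.randn(img_feat_dim)),          # an image's feature vector
    ("text", torch.randint(0, vocab_size, (7,))),  # text after the image
]
print(build_sequence(doc).shape)  # torch.Size([13, 64]): 5 + 1 + 7 positions
```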

Their plans for further development of Kosmos-1 include scaling it up in terms of model size and integrating speech capability into it. You can read more about Kosmos-1 here.

Philosophers, I’ve said it before and will say it again: there is a lot to work on here. There are major philosophical questions not just about the technologies themselves (the ones in existence and the ones down the road), but also about their use, and about their effects on our lives, relationships, societies, work, government, etc.


P.S. Just a reminder that quite possibly the stupidest response to this technology is to say something along the lines of, “it’s not conscious/thinking/intelligent, so no big deal.”


UPDATE (3/3/23): While we’re on the subject of machine “vision,” recently researchers have made advances in machines being able to determine and reconstruct what a human is seeing simply by looking at what’s happening in the person’s brain. Basically, they trained an image-oriented model (the neural network Stable Diffusion) on data obtained by observing what people’s brains are doing as they look at different images—such data included fMRIs of the brains of 4 subjects as they looked at thousands of images, and the images themselves. They then showed the neural network fMRIs that had been excluded from the training set—ones that had been taken while the subjects looked at images that also were excluded from the training set—and had it reconstruct what it thought the person was looking at when the fMRI was taken. Here are some of the results:

[Figure: The leftmost column shows what the four subjects were shown; the remaining four columns are the neural network's reconstructions of what each subject saw, based on their individual fMRIs.]

More details are in the paper: “High-resolution image reconstruction with latent diffusion models from human brain activity”. (via Marginal Revolution)
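For the curious, the general recipe described above can be sketched in a few lines. This is a hedged toy version with random placeholder data; the paper's actual pipeline, which decodes through a latent diffusion model, is considerably more involved:

```python
# Toy sketch of the fMRI-to-image recipe: learn a (here, linear) map from
# brain activity to an image model's latent space, then predict latents
# for held-out scans. Placeholder data throughout; the real pipeline
# decodes those latents into images with a latent diffusion model.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, n_voxels, n_latent = 900, 5000, 64

fmri_train = rng.normal(size=(n_train, n_voxels))     # scans taken while viewing images
latents_train = rng.normal(size=(n_train, n_latent))  # latents of the viewed images

mapper = Ridge(alpha=1.0).fit(fmri_train, latents_train)

fmri_test = rng.normal(size=(10, n_voxels))  # scans excluded from training
predicted = mapper.predict(fmri_test)        # would be fed to the image decoder
print(predicted.shape)                       # (10, 64)
```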


The Perils of Lecture Class

Since I don’t have enough going on (apparently), I recently started another newsletter. It’s called Intentional Learning, and it’s focused on strategies for learning and academic success at college. I spend most of my professional life talking with faculty and other instructors about teaching and learning, and I thought it was time to take that conversation to students.

The Intentional Learning newsletter comes out every Wednesday on LinkedIn. You can read it right here, and, if you’re a LinkedIn user, you can subscribe there, too. It’s written for students, so if you have students in your life that you care about, feel free to share it with them!

The latest issue on lecturing and active learning has gotten a fair amount of attention on LinkedIn, so I thought I would cross-post it here on my blog. 

The Perils of Lecture Class

“I wish Bruff would just work problems at the chalk board like other math professors.”

Ask any professor who has been teaching for a while about the student evaluations they receive at the end of each course, and they will immediately recall a few choice comments. Some of the comments are positive, like this one I received a few years ago: “I was so excited about this course’s podcast project that I shared my episode with my grandmother!” Some are hard to interpret, like “excessively politically correct” (in a math course?) or “Bruff Riders 4 Ever!” (I think that was positive.) And some are just frustrating to read, like the one above wishing for more chalk-and-talk.

That comment came from a student in a statistics course I taught with about a hundred students, mostly engineering students who were taking the course to fulfill a requirement for their major. They weren’t overly excited to be there, but I knew that going in, and I tried to make the course both relevant to their future careers as engineers and engaging as a learning experience. That meant I didn’t just work problems at the chalk board. Every class session asked students to actively participate in their own learning.

A typical class might involve a pre-class assignment with a few textbook pages to read and a couple of simple math questions to answer. Class would start with polling questions, where I asked students to respond to multiple-choice questions about the concepts for the day using their digital devices and to discuss some of the harder questions with their neighbors. Then we might move into group work for a while, maybe tackling a worksheet full of math problems or creating a data visualization with a partner. When I did work a problem at the chalk board, it was only after the students had a chance to try the problem themselves, so they could better understand my solution.

I was practicing what’s called “active learning instruction.” It’s usually held in contrast to traditional lecturing, aka “continuous exposition by the instructor.” Professors don’t practice active learning instruction just to make classes more lively. There’s a ton of educational research showing that these practices lead to greater student learning and student success, whether you measure that by course grades or persistence in the major or graduation rates. The student asking me to work more problems at the board? They were requesting an instructional approach that’s demonstrably inferior.

But I can’t get too frustrated with this student. This student probably thrived in chalk-and-talk classes, both in high school and college. That’s what they were expecting, and I asked them to do something very different, something that involved more work during class, and maybe something that didn’t feel all that useful to the student. And this student is hardly alone in viewing active learning instruction this way.

A team of researchers at Harvard University led by Louis Deslauriers ran an experiment where students were randomly assigned to one of two class sessions, one taught via traditional lecture and one taught via active learning. Same topics, same physics problems, same worked examples, just different instruction. The researchers quizzed students over the material and surveyed the students about how they felt about the classes. The result? Students learned more in the active learning classes but they felt they learned more in the lecture classes!

Here’s how the researchers put it in their article about the study: “Students rated the quality of instruction in passive lectures more highly, and they expressed a preference to have ‘all of their physics classes taught this way,’ even though their scores on independent tests of learning were lower than those in actively taught classrooms.” Does that sound familiar? My stats student said basically the same thing.

Should we generalize from this Harvard study? Maybe not. Harvard students aren’t like students elsewhere. But wouldn’t you expect Harvard students to be pretty savvy about learning? If they can be led astray by the comforting familiarity of traditional lecturing, I think anyone can. And I certainly hear from faculty colleagues at other institutions (that aren’t Harvard) that they encounter the same student pushback about active learning instruction.

What’s the takeaway here for college students? One, learning is hard work. If it feels super easy, it’s probably not actually changing your brain in useful ways. Two, we’re often pretty poor judges of how well we learn, especially when we’re newbies in a particular area. Three, if you show up to class and there’s not much to do other than listen to someone else talk and maybe take some notes, then you might not actually be learning much.

But you knew that, right? You took a bunch of Zoom classes during COVID where all you had to do was listen to a talking head, and you realized that wasn’t working for you. And now you’re back in a physical classroom and the talking heads are talking and you’re wondering if it’s worth your time to show up to class. Next week in the newsletter, we bring this conversation about active learning and traditional lecturing into 2023 to see what it means for showing up to class in college today.

To subscribe to my Intentional Learning newsletter and receive future issues in your inbox, sign up on LinkedIn.
