
Helping verbs are curious, AND fascinating

English has a big bagful of auxiliary verbs. You may have learned these as “helping verbs” in elementary and middle school, since they are sometimes described as verbs that “help” the main verb express its meaning. There are even schoolroom songs about them. They are a curious bunch.

The auxiliaries include the modal verbs (can and could, shall and should, will and would, may and might, and must). The verb that follows a modal is in its bare, uninflected form: can go, could go, must go, and so on. There are also a number of semi-modal auxiliary verbs (such as dare, need, ought to, had better, have to, and used to). Some are compound words spelled with a space and several have unusual grammatical properties as well, such as being resistant to contraction or inversion. And in parts of the English-speaking world, modals can double up, yielding expressions like might could, may can, might should, and more.

Aside from the modals, semi-modals, and double modals, the primary auxiliaries are forms of have, be, and do, which are inflected for tense (is versus was, has versus had, do versus did), number (is versus are, has versus have), and person (is versus am versus are, do versus does). These auxiliaries help to indicate verbal nuances like emphasis, the perfect and progressive aspects, and the passive voice. Here are some examples, adapted from Ernest Hemingway’s The Old Man and the Sea:

Those who did catch sharks had taken them to the shark factory on the other side of the cove … (emphatic do and perfect aspect had)

The old man opened his eyes and for a moment he was coming back from a long way away. (progressive aspect)

His shirt was patched so many times that it was like the sail … (passive voice)

The primary auxiliaries come before the negative adverb not and allow contraction to it.

They didn’t catch sharks.

His shirt wasn’t patched.

He hadn’t taken the sharks.

And they play a role in questions by hopping to the left over the subject

Did they catch sharks?

Was his shirt patched?

Had he taken the sharks?

or by being copied at the end in a tag question.

They caught sharks, didn’t they?

His shirt was patched, wasn’t it?

He had taken the sharks, hadn’t he?

Main verbs like see and go and walk don’t do any of those tricks.

Things get even curiouser, however, because the helping verbs have and do have doppelgangers that actually are main verbs.

The old man did his chores. 

His shirt had a tear in it.

How do we know these are main verbs and not helping verbs? Well, for one thing, they are the only verbs in the sentence. For another, they can occur with other helping verbs:

The old man had done his chores. 

His shirt had had a tear in it all day.

And if you make the sentences questions or negate them, you have to add a form of auxiliary do.

Did the old man do his chores?

Did his shirt have a tear in it?

The helping verb be also has a doppelganger main verb, but the forms of main verb be behave pretty much just like the helping verb. More curious behavior, keeping us on our toes. The first sentence below has past tense main verb was followed by an adjective; the other two have the past tense helping verb was.

The shark was tenacious. (main verb was)

The shark was never caught. (auxiliary was)

The old man was trying his best. (auxiliary was)

But all three was forms hop to the left in questions.

Was the shark tenacious?

Was the shark ever caught?

Was the old man trying his best?

The curious behavior of helping verbs goes on and on, with different dialects doing different things. If you’ve read many British novels or watched British television you might have noticed forms of helping verb do popping up in elliptical sentences. Here’s an example from J. R. R. Tolkien’s The Two Towers: “Sam frowned. If he could have bored holes in Gollum with his eyes, he would have done.” (For a study of these forms, check out Ronald Butters’s 1983 article “Syntactic change in British English propredicates.”)

In African American English, the auxiliary done lends a completive meaning to events. You can see it in these dialogue examples from August Wilson’s Fences and from Walter Mosley’s Blonde Faith: “Now I done give you everything I got to give you!” and “Didn’t she tell you that Pericles done passed on.” For more on this use of done, take a look at the chapters by Lisa J. Green and Walter Sistrunk and by Charles E. DeBose in the Oxford Handbook of African American Language.

We’ve just scratched the surface of auxiliaries. I hope you’ve become curious about these curious words.

Featured image by Alexander Grey via Unsplash (public domain)

OUPblog - Academic insights for the thinking world.

How music benefits your brain

On this episode of the Big Brains podcast, a scholar explains the neuroscience of how listening to and playing music builds our mind.

Music plays an important role in all of our lives. But listening to music or playing an instrument is more than just a creative outlet or hobby—it’s also scientifically good for us. Research shows that music can stimulate new connections in our brains, keeping our cognitive abilities sharp and our memories alive.

In a new book, Every Brain Needs Music: The Neuroscience of Making and Listening to Music (Columbia University Press, 2023), Larry Sherman explores why we all need music for our mental well-being—and how it can even help us later in life.

Sherman is a professor of neuroscience at Oregon Health & Science University.

Listen to the episode below:

Read the transcript of the episode. Subscribe to Big Brains on Apple Podcasts, Stitcher, and Spotify.

Source: University of Chicago

The post How music benefits your brain appeared first on Futurity.

In Memory of Bear Braumoeller

Sometimes you come across people that permanently change the way you think. About life, yourself, or an area of study. They instill a sense of resolute optimism about the world and your abilities. Bear Braumoeller was that person for us. Wise, accomplished, brilliant, humble, and kind. Anyone who can be remembered that way lived life well. Bear is one of those people. He was our professor, mentor, colleague, and friend. We were richer for knowing him, and are poorer for his passing.

We first got the chance to meet Bear during our recruitment process to Ohio State. We gravitated toward him and his research. Bear went out of his way to bring in the best and brightest graduate students to the program, and was absolutely relentless in his efforts. He took phone calls from us, discussed all of our options, and went out of his way to procure funds and opportunities for every student. Bear was known to showcase some of the best places to eat in Columbus, too. We all got along with Bear immediately, and he became a powerful force in our proverbial corner, helping us navigate and thrive in graduate school.

We’ve been fortunate to have terrific professors, but Bear was an unusually good professor. In graduate seminars, we were exposed to a wide breadth of topics in political and social science. The breadth that Bear introduced in his courses was unique for a political science class. Most importantly, he taught us how to read books and articles critically and constructively. Graduate students are often great at tearing apart a piece of scholarship. And that’s important. But published works are generally published for a reason, he reminded us, and so it’s equally important to identify their strengths in addition to their weaknesses. That approach cultivated humility (there are always tradeoffs in research) but was also encouraging. If graduate students think pieces published by top scholars in good journals are bad because we only focus on their downsides, how could we possibly do good work?

Bear’s take on the literature and the discipline was just like his research interests: complex, rich, and nuanced. He loved what he studied, and his knowledge in these areas often seemed encyclopedic. He would recommend a citation and quote on a whim, from memory. He always asked big, important questions, and he did his best to answer them. His two books, The Great Powers and the International System and Only the Dead, address two important questions in international politics: how leaders and historical circumstances jointly shape major historical outcomes, and whether war is declining. He was methodologically sophisticated, but for him it was about getting closer to the truth. He truly didn’t care what method you used if it fit the question. He had a great academic pedigree (University of Chicago, University of Michigan) but he wasn’t elitist. He wanted to hear from smart people, and he believed in demystifying the academy, making it accessible.

Bear was a formal advisor, but also a tremendous mentor to us. He helped guide many important decisions in graduate school, from the type of training we needed and our choice of dissertation topics to the construction of our committees. Bear was ready to provide feedback on any idea or draft, regardless of its stage of development. He was also kind when he didn’t have to be, and when no one would praise him for it publicly. It’s just who he was. His feedback was always constructive and intended to enable better work. When we made mistakes he would correct us – firmly, gently, and privately.

Bear created the MESO (Modeling Emergent Social Order) Lab, which has been supported by NSF and the Carnegie Corporation of New York. It didn’t start as a lab, though. The first day some of us gathered in the conference room, it was just a group of people who Bear thought might be interested in an idea he had. We talked it over – a question about the relationship between hierarchical order and war – and decided it was interesting enough to pursue. One of the first things we did was to gather on a Thursday and just start working, the whole day, with no distractions, putting ideas on paper and into code. He would call them Hackathons, reminiscent of a Silicon Valley start-up. These early days made a huge impact on Bear. Numerous times after that, in presentations or conversations about what we were doing, he would mention that he had never before felt as productive as he did in those early research sessions. He realized that this was it, this was the way forward for him. This was not merely working on a project. This represented a change in how he was going to do research, in how he approached being a professor and working with graduate students.

International Relations is not known for collaborative research. The vast majority of major work in the field has a single author, more rarely two, and very rarely more than two authors. Some of us had co-authored with Bear before, but this was different. Whereas previous partnerships were more traditional co-authored research projects in which each author did their part, this was something bigger. Bear had a vision beyond group publications. He wanted us to grow into scholars who would think big, who wouldn’t be afraid to tackle questions that might seem intimidatingly broad, and who would pull the right minds together to tackle those problems. Our first project was “Hierarchy and War”, which addresses two of the biggest topics in the discipline. We were meant to say something new about both – and the relationship between them – in a single paper. The ambition was daunting, but that was Bear’s way: take big, important questions and swing as hard as you could at answering them.

As membership in the MESO Lab grew, Bear expanded the lab’s projects as well. As always, all projects are led by us, the students. Bear gave us remarkable autonomy and control over these projects: despite our status as graduate students, we had the final say over theoretical framing, modeling decisions, and data analysis. He gave us room to explore different paths, even if it meant delaying the progress of the project. In addition to developing us as scholars, he helped us develop as people. Bear understood that a good life outside of work, with food, travel, and family, was of equal importance to doing great work. He expected high-quality work from us, but the lab never became a source of stress or frustration. Being in the MESO Lab has been one of the greatest blessings of being Bear’s students. Just as a system is not equal to the sum of its parts, our lab produces scholarship that is more creative and fruitful than what we could individually create.

The loss of Bear leaves a gaping hole, not only in our lab but in our profession more broadly. People around the world have so beautifully expressed their appreciation and admiration for Bear, with an outpouring of tributes and memories. As is so often the case with grieving, those left behind expressed a desire for one more conversation, one more snarky comment, one more belly laugh, one more smile. His presence and reputation were felt with the same gravity and strength across the discipline. So many people felt as strongly and warmly about Bear as we did.

It is impossible to properly account for all the things Bear taught us. He taught us to be ambitious in our research. He taught us to be fearless when exploring and implementing new ideas. He taught us to be gentle and kind, with others and ourselves. His ideas and influence are all over our projects and dissertations. We will do our best to carry forward that work and legacy.

Rest in peace, Bear. It was a privilege and honor to have known you as a leader, mentor, and friend. Your memory is a blessing and you are missed.

About the authors

Maryum Alam, Andrew Goodhart, Michael Lopate, Haoming Xiong, and Liuya Zhang are political science Ph.D. candidates at The Ohio State University. Maël van Beek is an incoming postdoctoral research associate at Princeton University. David Peterson is an incoming postdoctoral fellow at the University of Michigan. Jared Edgerton is an Assistant Professor of political science at the University of Texas at Dallas.

Please consider donating to support Bear’s daughter, Molly Braumoeller.

Markets Won’t Stop Fossil Fuels

Global climate institutions have embraced the primacy of capital, private firms, and markets—and in so doing have fatally undermined their own efficacy.

“Lying” in computer-generated texts: hallucinations and omissions

There is huge excitement about ChatGPT and other large generative language models that produce fluent and human-like texts in English and other human languages. But these models have one big drawback, which is that their texts can be factually incorrect (hallucination) and also leave out key information (omission).

In our chapter for The Oxford Handbook of Lying, we look at hallucinations, omissions, and other aspects of “lying” in computer-generated texts. We conclude that these problems are probably inevitable.

Omissions are inevitable because a computer system cannot cram all possibly relevant information into a text that is short enough to be actually read. In the context of summarising medical information for doctors, for example, the computer system has access to a huge amount of patient data, but it does not know (and arguably cannot know) what will be most relevant to doctors.

Hallucinations are inevitable because of flaws in computer systems, regardless of the type of system. Systems which are explicitly programmed will suffer from software bugs (like all software systems). Systems which are trained on data, such as ChatGPT and other systems in the Deep Learning tradition, “hallucinate” even more. This happens for a variety of reasons. Perhaps most obviously, these systems suffer from flawed data (e.g., any system which learns from the Internet will be exposed to a lot of false information about vaccines, conspiracy theories, etc.). And even if a data-oriented system could be trained solely on bona fide texts that contain no falsehoods, its reliance on probabilistic methods will mean that word combinations that are very common on the Internet may also be produced in situations where they result in false information.

Suppose, for example, that on the Internet the word “coughing” is often followed by “… and sneezing.” Then a patient may be described falsely, by a data-oriented system, as “coughing and sneezing” in situations where they cough without sneezing. Problems of this kind are an important focus for researchers working on generative language models. Where this research will lead us is still uncertain; the best one can say is that we can try to reduce the impact of these issues, but we have no idea how to completely eliminate them.
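The mechanism behind this kind of error can be sketched with a toy model. The sketch below is illustrative only: a real generative language model uses neural networks rather than raw bigram counts, and the corpus and function names here are invented for the example. The point is that a purely frequency-driven continuation has no access to the facts of the individual case.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, how often each next word follows it."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def continue_phrase(counts, word, steps=2):
    """Greedily append the most frequent continuation, with no regard to facts."""
    out = [word]
    for _ in range(steps):
        following = counts[out[-1]]
        if not following:
            break
        out.append(following.most_common(1)[0][0])
    return " ".join(out)

# Toy corpus in which "coughing" is usually followed by "and sneezing"
corpus = [
    "the patient was coughing and sneezing",
    "the child kept coughing and sneezing",
    "he was coughing and wheezing",
]
counts = train_bigrams(corpus)

# The model describes any cougher as also sneezing, whether true or not
print(continue_phrase(counts, "coughing"))  # coughing and sneezing
```

Even with perfectly truthful training sentences, the statistically favoured continuation is false for a patient who coughs without sneezing.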

“Large generative language models’ texts can be factually incorrect (hallucination) and leave out key information (omission).”

The above focuses on unintentional-but-unavoidable problems. There are also cases where a computer system arguably should hallucinate or omit information. An obvious example is generating marketing material, where omitting negative information about a product is expected. A more subtle example, which we have seen in our own work, is when information is potentially harmful and it is in users’ best interests to hide or distort it. For example, if a computer system is summarising information about sick babies for friends and family members, it probably should not tell an elderly grandmother with a heart condition that the baby may die, since this could trigger a heart attack.

Now that the factual accuracy of computer-generated text draws so much attention from society as a whole, the research community is starting to realize more clearly than before that we only have a limited understanding of what it means to speak the truth. In particular, we do not know how to measure the extent of (un)truthfulness in a given text.

To see what we mean, suppose two different language models answer a user’s question in two different ways, by generating two different answer texts. To compare these systems’ performance, we would need a “score card” that allowed us to objectively score the two texts as regards their factual correctness, using a variety of rubrics. Such a score card would allow us to record how often each type of error occurs in a given text, and aggregate the result into an overall truthfulness score for that text. Of particular importance would be the weighing of errors: large errors (e.g., a temperature reading that is very far from the actual temperature) should weigh more heavily than small ones, key facts should weigh more heavily than side issues, and errors that are genuinely misleading should weigh more heavily than typos that readers can correct by themselves. Essentially, the score card would work like a fair school teacher who marks pupils’ papers.
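The score card idea above can be made concrete with a minimal sketch. The error categories, weights, and normalisation below are hypothetical placeholders, not values from the chapter; a real protocol would calibrate them against human judgments.

```python
# Hypothetical weights: large, misleading errors cost more than typos.
WEIGHTS = {
    "major_factual": 5.0,   # e.g. a temperature far from the actual reading
    "minor_factual": 2.0,   # a small numeric deviation
    "key_omission": 4.0,    # a missing key fact
    "typo": 0.5,            # readers can correct these themselves
}

def truthfulness_score(errors, max_penalty=20.0):
    """Aggregate evaluator-annotated errors into a 0-1 truthfulness score.

    `errors` is a list of error-type labels assigned to one generated text;
    unknown labels get a default weight of 1.0.
    """
    penalty = sum(WEIGHTS.get(label, 1.0) for label in errors)
    return max(0.0, 1.0 - penalty / max_penalty)

# Comparing two generated answers to the same question:
answer_a = ["typo", "minor_factual"]           # total penalty 2.5
answer_b = ["major_factual", "key_omission"]   # total penalty 9.0
print(truthfulness_score(answer_a))  # 0.875
print(truthfulness_score(answer_b))
```

The hard part, as the text notes, is not the arithmetic but assessing the impact of each individual error so that the weights are defensible.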

We have developed protocols for human evaluators to find factual errors in generated texts, as have other researchers, but we cannot yet create a score card as described above because we cannot assess the impact of individual errors.

What is needed, we believe, is a new strand of linguistically informed research, to tease out all the different parameters of “lying” in a manner that can inform the above-mentioned score cards, and that may one day be implemented into a reliable fact-checking protocol or algorithm. Until that time, those of us who are trying to assess the truthfulness of ChatGPT will be groping in the dark.

Featured image by Google DeepMind Via Unsplash (public domain)

Plain as day?

Etymologists interacting with the public on a day-to-day basis usually receive questions about words like copacetic and shenanigans, but many nouns, verbs, adjectives, adverbs, and even prepositions and conjunctions that do not cry out for attention are no less, and perhaps even more, interesting. Over the years, I have written about summer, winter, ice, sheep, dog, live, leave, good, bad, red, and even such inconspicuous words as but and yet, to mention just a few. We use them like old trusty tools and never stop to ask where they came from. Somebody somewhere coined them, which probably means that once upon a time they were as transparent to speakers as giggle and hiccup. But today their origin is either debatable or unknown. Isn’t a dog called a dog because it looks like a dog, runs like a dog, and barks like a dog? This is what the naïve speaker thinks, though nothing in the group d-o-g suggests a muzzle, swiftness, or any kind of explosive sound. How did day and night get their names? Aren’t the words in some way associated with light and darkness? Dictionaries know a lot about the oldest history of both but do not always provide a clue to the “motivation” that might explain their origin.

The origin of the word “day”

Let us look at day. Its past is not totally hidden (no reference to or pun on clear obscure!). Day has exact cognates everywhere in Old Germanic, including of course Old English, as well as in Sanskrit, Celtic, Slavic, and elsewhere. In Old Icelandic, the proper name Dagr has been recorded. Similar names existed in Gothic and Old High German. Does this fact testify to the word’s significance in some religious ritual? We’ll never know. What conclusion will an etymologist two thousand years from now draw about our names June, Melody, and Makepeace? Day, it should be remembered, does not always mean “a period of twenty-four hours” (as in a few days ago) or half of this period (as in day and night or daytime): it sometimes refers to “a certain period or date” as in Doomsday, the day of reckoning, I’ll remember it until my dying day, and the like. Even if you were born at night, you probably celebrate your birthday. It follows that we are not quite sure where to begin our exploration, though day as “the period of light” looks more promising: after all, law-abiding citizens tend to make their arrangements for the time when there is enough light around, while at night most of us sleep.

DAWN, the beloved sister of DAY.
(Via Pexels, public domain)

Speakers of Modern English no longer realize that dawn has the same root as day. The verb to dawn means “to begin to grow light,” and the same reference is obvious in the phrase it dawned on me. I’ll skip the phonetic part of the story (why dawn? In German, unlike what we observe in English, the connection between the noun Tag and the verb tagen is immediately obvious). Will then a search for the etymology of day take us to the idea of “light,” as suggested above? Perhaps. But first, let us remember Latin diēs “day.” Its root occurs in the English words diurnal, dial “an instrument to tell the time of day by the shadow cast by the sun,” and diary, all three of course borrowed from Romance. Though diēs and day sound somewhat alike, they are not related (a fact often mentioned in dictionaries, to warn readers against what looks like an obvious conclusion), and as though to prove the absence of ties between them, language provides us with a word like Gothic sin-tiens “daily,” in which sin– means “one” and –tien– is a cognate of the Latin word. The correspondence d (Latin) ~ t (Gothic or any other Germanic language) is regular, by the so-called First Consonant Shift: compare Latin duo versus English two.

This is what a dial looks like.
(“Ottoman Sundial at the Debbane Palace museum” by Elias Ziade, via Wikimedia Commons, CC BY-SA 4.0)

This –tien– has unquestionable correspondences all over the Indo-European world: for example, Sanskrit had dínam “day.” In Latin, we find the word nun-dinum “market held every ninth day.” Also, Russian den’ “day” (with cognates elsewhere in Slavic) and many other references elsewhere to burning, ashes, and warmth belong here too. On the strength of, among others, several Greek words meaning “appear” and “visible,” the root of all such words has been understood as “shining.” Since the Sanskrit dèvas (obviously related) means “god,” this idea looks realistic. The Indo-Europeans habitually referred to “god” as “shining” or “sky” (such was Latin Jū-piter “sky father,” known to the ancient Scandinavians as Týr, no longer a sky god; but the name reveals his distant past). Yet it is still odd that both the words related to day and those related to diēs, though unconnected, sound somewhat alike and not only mean “day” but also begin with d. Did d suggest burning, heat, or glowing or refer to things dry and arid? Such ancient sound-symbolic associations are beyond reconstruction. They are often hard to pinpoint even in our modern languages.

Another puzzling lookalike is Sanskrit áhar “day.” It almost rhymes with Proto-Germanic dagaz but lacks d-, to which, above, I ventured to ascribe magical properties. An incredible coincidence? The Sanskrit noun has no correspondences in Germanic, Romance, Celtic, and elsewhere. (At least, none has been discovered.) Did áhar once have d- and lose it? The fertile imagination of historical linguists reconstructed several processes that could be responsible for the loss. Initial d- does sometimes disappear for no known reasons. For instance, t in tear (from the eye), which, as expected, corresponds to d outside Germanic, is sometimes absent altogether: this is true of Sanskrit and Baltic, among others. We have enough trouble with s-mobile (it tends to turn up wherever it wants). Did a d-mobile also exist? Most unlikely.

Jupiter and his degraded Scandinavian counterpart Týr.
(L: Louvre Museum. R: Icelandic National Library. Both via Wikimedia Commons, public domain.)

My point has been to show how intriguing some of our common words sometimes are. Copacetic is late (it was first recorded in the twentieth century), while day is older than most of the hills around us. But the problem of origin remains the same: people coin words, and etymologists wander in a labyrinth of lookalikes, roots, and fleeing initial and final consonants, and emerge with the all-too-familiar verdict: “Origin unknown (uncertain, disputed).” Yet day probably did refer to heat or a bright light. This conclusion sounds reasonable, assuming (and this is a reliable assumption) that the word’s initial sense was “the time of light,” rather than “a certain period, date.” Plain as day? Almost.

Featured image by Ivana Cajina via Unsplash (public domain)

Is a 15-week limit on abortion an acceptable compromise?

A recent opinion piece by George F. Will, “Ambivalent about abortion, the American middle begins to find its voice” in the Washington Post made the startling claim that the overturning of Roe v. Wade (Dobbs v. Jackson Women’s Health Organization, 2022) has resulted in “a partial healing of the nation’s civic culture.” One might think exactly the reverse. The Dobbs decision energized voters, especially women and young people, resulting in numerous Republican electoral defeats across the country. However, Will argues that the return of abortion policy to the states gives voters the opportunity of choosing moderate restrictions on abortion. Since most Americans support early abortion while opposing late-gestation abortion, Will thinks that a 15-week ban on abortion would be an acceptable compromise.

Why 15 weeks? Two reasons can be given. Almost all abortions in the US—93%—occur within the first 15 weeks of pregnancy. For this reason, making abortion illegal after 15 weeks would not, it would seem, impose serious burdens on most people seeking abortions. 

Another reason is that several European countries limit abortion on request to the first trimester, leading some US lawmakers to suggest that a 15-week ban would bring our abortion law in line with theirs. This is disingenuous, to say the least. While elective abortion is limited in some European countries, it is not banned afterwards, but is allowed on other grounds, including economic or social reasons, or a threat to the woman’s physical or mental health. Moreover, in most European countries, patients do not have to pay for abortion; it is covered under universal health coverage. The fact is that the trend in Europe has not been to limit abortion, but to expand access to it. Countries in Europe “… have removed bans, increased abortion’s legality and taken steps to ensure laws and policies on abortion are guided by public health evidence and clinical best practices.”

Were states to guarantee access to abortion prior to 15 weeks, a 15-week ban might be acceptable. However, even before Dobbs, many women in the US lacked access to abortion, due to a dearth of providers, especially in rural areas. They often had to travel many miles to find an abortion clinic, which meant that they had to arrange childcare if they had other children or take time off work. Delay is also caused by the need to raise money for an abortion, which is not paid for by Medicaid in most states, except in cases of rape, incest, or a life-threatening condition. To be sure, even if there were none of these roadblocks, some women would still not be able to have early abortions because they do not know that they are pregnant, due to youth, menopause, chronic obesity, or a lack of pregnancy symptoms. Any time limits will pose hardships for some people. But if access to early abortions were guaranteed, a compromise on a 15-week limit might be worth it.

I suspect that time-limit advocates are not particularly interested in making sure that women who have abortions get them early in pregnancy. They want to place roadblocks in the way of getting abortions, full stop. That these roadblocks increase the numbers of late abortions is of little concern to them, however much they wring their hands over late abortions. Abortion can be reduced by reducing the number of unwanted pregnancies, something that has been shown to be achieved by access to contraceptives and science-based sex education in the schools. Remember when pro-lifers emphasized those methods? Me neither. 

“Some US lawmakers suggest that a 15-week ban would bring our abortion law in line with European countries. This is disingenuous, to say the least.”

My second concern is with abortions sought after 15 weeks. The reason for a late abortion may be that the woman has a medical condition that does not develop, or is not detected, until later in pregnancy. In such cases, the pregnancy is almost always a wanted pregnancy, and the decision to terminate presents a tragic choice.

It may be responded that all states allow abortions to be performed when this is necessary to save the pregnant woman’s life, and many allow for abortions to protect her from a serious health risk. The problem is that these exceptions conflict with standard medical care, especially in the case of miscarriage. Once the woman has begun to miscarry, the failure to remove the fetus is likely to cause her sepsis, which can be life-threatening. However, in states with restrictive abortion laws, doctors cannot perform an immediate abortion, which is the standard of care in such situations. They have to wait until her death is imminent and, in some states, they cannot remove the fetus until its heart stops. 

Ireland’s restrictive abortion law was repealed after a woman who was denied an abortion during a miscarriage died from septicemia. To the best of my knowledge, no woman in the US has died as a result of restrictive abortion laws, but some have come close. An OB-GYN in San Antonio had to wait until the fetal heartbeat stopped to treat a miscarrying patient who had developed a dangerous womb infection. The delay caused complications: the patient required surgery, lost multiple liters of blood, and had to be put on a breathing machine. Texas law essentially requires doctors to commit malpractice.

Conservatives often portray those in the pro-choice camp as advocating abortion until the day of delivery, for trivial reasons. This is deeply unfair. If they want us to compromise on time limits, they should be willing to guarantee access to abortion before 15 weeks. They should be willing to compromise on pregnancy prevention through contraception and sex education. And they should agree to drop all restrictions on late-term abortions that make legislators, rather than doctors, in charge of deciding what is appropriate medical care for their patients.

Featured image: Gayatri Malhotra via Unsplash (public domain)

OUPblog - Academic insights for the thinking world.

Eukaryotes may all derive from an Asgard ancestor

rows of dividing cells

Eukaryotes, complex life forms with nuclei in their cells, including all the world’s plants, animals, insects, and fungi, trace their roots to a common Asgard archaean ancestor, research finds.

That means eukaryotes are, in the parlance of evolutionary biologists, a “well-nested clade” within Asgard archaea, similar to how birds are one of several groups within a larger group called dinosaurs, sharing a common ancestor. The team has found that all eukaryotes share a common ancestor among the Asgards.

No fossils of eukaryotes have been found from farther back than about 2 billion years ago, suggesting that before that, only various types of microbes existed.

“So, what events led microbes to evolve into eukaryotes?” says Brett Baker, associate professor of integrative biology and marine science at the University of Texas at Austin. “That’s a big question. Having this common ancestor is a big step in understanding that.”

Led by Thijs Ettema of Wageningen University in the Netherlands, the research team identified the closest microbial relative to all complex life forms on the tree of life as a newly described order called the Hodarchaeales (or Hods for short). The Hods, found in marine sediments, are one of several subgroups within the larger group of Asgard archaea. The findings appear in Nature.

The Asgard archaea evolved more than 2 billion years ago, and their descendants are still living. Some have been discovered in deep sea sediments and hot springs around the world, but so far only two strains have been successfully grown in the lab. To identify them, scientists collect their genetic material from the environment and then piece together their genomes. Based on genetic similarities with other organisms that can be grown in the lab and studied, the scientists can infer metabolism and other features of the Asgards.

“Imagine a time machine, not to explore the realms of dinosaurs or ancient civilizations, but to journey deep into the potential metabolic reactions that could have sparked the dawn of complex life,” says Valerie De Anda, a researcher in Baker’s lab. “Instead of fossils or ancient artifacts, we look at the genetic blueprints of modern microbes to reconstruct their past.”

The researchers expanded the known Asgard genomic diversity, adding more than 50 undescribed Asgard genomes as input for their modeling. Their analysis indicates that the ancestor of all modern Asgards appears to have been living in hot environments, consuming CO2 and chemicals to live. Meanwhile, Hods, which are more closely related to eukaryotes, are metabolically more similar to us, eating carbon and living in cooler environments.

“This is really exciting because we are looking for the first time at the molecular blueprints of the ancestor that gave rise to the first eukaryotic cells,” De Anda says.

Support for this research came from the Origin of Eukaryotes program at the Moore and Simons Foundations; the US National Science Foundation; the Wellcome Trust Foundation; the European Research Council; the Swedish Research Council; the Dutch Research Council; the National Natural Science Foundation of China; the Wenner-Gren Foundation; the Science for Life Laboratory (Sweden); and the European Commission’s Marie Skłodowska-Curie Actions.

Source: UT Austin

The post Eukaryotes may all derive from an Asgard ancestor appeared first on Futurity.

Real patterns and the structure of language

Real patterns and the structure of language by Ryan M. Nefdt, author of "Language, Science, and Structure: A Journey into the Philosophy of Linguistics" published by Oxford University Press

Real patterns and the structure of language

There’s been a lot of hype recently about the emergence of technologies like ChatGPT and the effects they will have on science and society. Linguists have been especially curious about what highly successful large language models (LLMs) mean for their business. Are these models unearthing the hidden structure of language itself or just marking associations for predictive purposes? 

In order to answer these sorts of questions we need to delve into the philosophy of what language is. For instance, if Language (with a big “L”) is an emergent human phenomenon arising from our communicative endeavours, i.e. a social entity, then AI is still some ways off from approaching it in a meaningful way. If Chomsky, and those who follow his work, are correct that language is a modular mental system innately given to human infants and activated by minuscule amounts of external stimulus, then AI is again unlikely to be linguistic, since even our most impressive LLMs consume so many resources (both in terms of data and energy) that they are far from this childlike learning target. On the third hand, if languages are just very large (possibly infinite) collections of sentences produced by applying discrete rules, then AI could be super-linguistic.

In my new book, I attempt to find a middle ground or intersection between these views. I start with an ontological picture (meaning a picture of what there is “out there”) advocated in the early nineties by the prominent philosopher and cognitive scientist, Daniel Dennett. He draws from information theory to distinguish between noise and patterns. In the noise, nothing is predictable, he says. But more often than not, we can and do find regularities in large data structures. These regularities provide us with the first steps towards pattern recognition. Another way to put this is that if you want to send a message and you need the entire series (string or bitmap) of information to do so, then it’s random. But if there’s some way to compress the information, it’s a pattern! What makes a pattern real is whether or not it needs an observer for its existence. Dennett uses this view to make a case for “mild realism” about the mind and the position (which he calls the “intentional stance”) we use to identify minds in other humans, non-humans, and even artifacts. Basically, it’s like a theory we use to predict behaviour based on the success of our “minded” vocabulary comprising beliefs, desires, thoughts, etc. For Dennett, prediction matters theoretically!
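
The compressibility criterion can be illustrated with a small sketch (the strings and the compressor here are my own illustrative choices, nothing from Dennett): a repetitive string compresses to a fraction of its size, while pseudo-random bytes barely compress at all.

```python
# A toy illustration of "pattern = compressibility" (illustrative only):
# data that a general-purpose compressor can shrink substantially contains
# regularities; data it cannot shrink is, for practical purposes, noise.
import random
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size divided by original size; lower means more pattern."""
    return len(zlib.compress(data)) / len(data)

random.seed(0)
patterned = b"ab" * 500           # a two-byte motif repeated 500 times
noise = random.randbytes(1000)    # pseudo-random bytes of the same length

print(compression_ratio(patterned))  # a small fraction: a real pattern
print(compression_ratio(noise))      # roughly 1.0: nothing to compress
```

The exact ratios depend on the compressor, but the gap between the two is the point: the patterned string admits a description far shorter than itself.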

If it’s not super clear yet, consider a barcode. At first blush, the black lines of varying length set against a background of white might seem random. But the lines (and spaces) can be set at regular intervals to reveal an underlying pattern that can be used to encode information (about the labelled entity or product). Barcodes are unique patterns, i.e. representations of the data from which more information can be drawn (by the way, Nature produces these kinds of patterns too, in fractal formation).

“The methodological chasm between theoretical and computational linguistics can be surmounted.”

I adapt this idea in two ways in light of recent advances in computational linguistics and AI. The first reinterprets grammars, specifically discrete grammars of theoretical linguistics, as compression algorithms. So, in essence, a language is like a real pattern. Our grammars are collections of rules that compress these patterns. In English, noticing that a sentence is made up of a noun phrase and verb phrase is such a compression. More complex rules capture more complex patterns. Secondly, discrete rules are just a subset of continuous processes. In other words, at one level information theory looks very statistical while generative grammar looks very categorical. But the latter is a special case of the former. I show in the book how some of the foundational theorems of information theory can be translated to discrete grammar representations. So there’s no need to banish the kinds of (stochastic) processes often used and manipulated in computational linguistics, as many theoretical linguists have been wont to do in the past. 
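
The idea of a grammar rule as a compression of a pattern can be made concrete with a toy recognizer. The mini-grammar and lexicon below are invented for illustration and are nothing like a serious fragment of English; the point is only that a single rule, S → NP VP, stands in for the entire set of word strings it licenses.

```python
# A toy context-free recognizer: the rule S -> NP VP "compresses" every
# sentence it licenses into one finite description. Grammar and lexicon
# are invented for illustration only.
LEXICON = {
    "the": "Det", "a": "Det",
    "dog": "N", "cat": "N",
    "barks": "V", "sleeps": "V",
}

def is_np(words):
    """NP -> Det N"""
    return [LEXICON.get(w) for w in words] == ["Det", "N"]

def is_vp(words):
    """VP -> V"""
    return [LEXICON.get(w) for w in words] == ["V"]

def is_sentence(text):
    """S -> NP VP, with the NP spanning the first two words."""
    words = text.lower().split()
    return len(words) == 3 and is_np(words[:2]) and is_vp(words[2:])

print(is_sentence("The dog barks"))   # True
print(is_sentence("Barks dog the"))   # False
```

Adding a rule (say, allowing adjectives inside NP) enlarges the set of captured strings without enlarging the description much, which is exactly the sense in which more complex rules capture more complex patterns.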

This just means that the methodological chasm between theoretical and computational linguistics, which has often served to close the lines of communication between the fields, can be surmounted. Ontologically speaking, languages are not collections of sentences, minimal mental structures, or social entities by themselves. They are informational states taken from complex interactions of all of the above and more (like the environment). On this view, linguistics quickly emerges as a complexity science in which the tools of linguistic grammars, LLMs, and sociolinguistic observations all find a homogeneous home. Recent work on complex systems, especially in biological systems theory, has breathed new life into this interdisciplinary field of inquiry. I argue that the study of language, including the inner workings of both the human mind and ChatGPT, belongs within this growing framework.

For decades, computational and theoretical linguists have been talking different languages. The shocking syntactic successes of modern LLMs and ChatGPT have forced them into the same room. Realising that languages are real patterns emerging from biological systems gets someone to break the awkward silence…

Featured image by Google DeepMind via Unsplash (public domain)

OUPblog - Academic insights for the thinking world.

Confronting bud and buddy

Photo of green flower buds. Confronting bud and buddy by the Oxford Etymologist on the OUP blog

Confronting <em>bud</em> and <em>buddy</em>

In the previous installment (14 June 2023), I mentioned several attempts to explain the origin of bud(dy). See also the comment at the bottom of that post. It may perhaps be useful to remember that monosyllabic words beginning with b, d, g ~ p, t, k and ending in one of those consonants (bed, big, pig, kick, gig, dog, gag, keg, dab, bug, and a host of others) are notoriously obscure from the etymological point of view. Sound imitative? Sound-symbolic? Baby words? Borrowed? Some of them once had an ending (e, i, a), which is of no consequence as regards their origin. Incidentally, bud (on a plant) also poses problems. Known from texts since the Middle English period, it has a Dutch lookalike with t at the end and resembles French bouton, which may be of Germanic origin, but this derivation is far from certain. In one widely used but rather unreliable dictionary, buddy is said to be “etymologically identical with the adjective buddy ‘full of buds’.” Thus, our correspondent, who had a similar idea, even though she cannot be said to be in good company, is at least not alone. Another complicating factor deserves mention. Such nouns and verbs may be coined, forgotten, and coined again in the same form. After all, it is not too hard to come up with words like bob, gab, pad, and so forth.

Buddy poses familiar problems. It may or may not be a native word. Everybody is agreed that we are dealing with an Americanism. It appeared in texts around the year 1800, which excludes the idea of its derivation from bud “part of a plant.” Strange things sometimes happen. Guy, one of the most common American words, goes back to a proper name and the 1605 Gunpowder Plot. The plot happened in England, but guy ousted or limited the use of pal, fellow, and their likes in American English. Last week, I mentioned Skeat’s derivation of buddy from booty-fellow and expressed my doubts on this score. Despite my admiration of everything Walter W. Skeat did, I keep thinking that in this case he was wrong.

Jacob and Wilhelm Grimm: two brothers, two buddies.
(Via Wikimedia Commons, public domain)

What follows depends largely on Jeremy Bergerson’s article in the Dutch periodical Leuvense Bijdragen (91, 2002, 63-71). Dutch has the dialectal word boetje “brother.” It has been recorded in numerous variants, and, according to a reasonable suggestion, it is an affectionate form of the baby word boe “brother.” In recent English scholarship, the idea that buddy goes back to brother has gained the upper hand. The direction seems to be correct, but the way may have been less straight than etymologists would like it. The problem is the loss of r after b. In languages in which r is a trill (or a roll: both terms mean the same), for instance, Spanish, Italian, and Russian (to cite a few examples), r is usually the hardest sound to master. Children tend to substitute l for it. But English r is NOT a trill, and the well-known substitution for it is w. Many of us have heard children say: “I am hungwy.” This is also a much-ridiculed pseudo-aristocratic affectation. As a general rule, an English-speaking baby would probably not say buddy for brother.

We may try to ask for help abroad. Dutch bout, beut, boetje, and budde, among many other similar forms, seem to go back to the baby word boe (pronounced with a vowel like English oo in boo). The story is partly reminiscent of the history of English boy. Boy, too, may be a derivative of a baby word for “little brother,” and the Old English name Boia perhaps contains the “root” in its pristine form. Dutch Boio, Boiga, Boga, and even Scandinavian Bo may once have meant “little brother.” (More details on this score can be found in my and J. Lawrence Mitchell’s An Analytic Dictionary of English Etymology: An Introduction, pp. 15-16). If this reconstruction is correct, the many words cited above do go back to “brother,” but not to the word brother. The baby etymon contained no r. This suggestion runs counter to what we find in some of our most respectable sources.

They are happy, rather than hungwy.
(Via Wallpaper Flare, public domain)

Unexpected light on the origin of buddy comes from the English word Boots. The word is known from such compounds as slyboots and lazyboots, but most will probably remember Boots “a name for the servant in hotels who cleans the boots.” The OED cites numerous examples. Apparently, Boots emerged with the meaning “servant; a person at the lowest level of the hierarchy” rather than “shoeshine boy.” This is evidenced by such senses as “the youngest officer in a regiment” and others. The hotel Boots did clean boots and put them outside the gentleman’s door, but this is not why he received his name. George Webbe Dasent used the word Boots for rendering Norwegian Askepott ~ Askeladden (this is what the despised third son in fairytales, “male Cinderella,” is called). If Dasent had associated Askepott only or mainly with the shoe shiner, he would hardly have used the word in such a context. An ingenious correspondent to Notes and Queries once suggested that Puss in Boots might be a misnomer, because allegedly, Puss and Boots, that is, Puss as Servant was meant. But the English title is a translation from French, where Chat botté is unambiguous.

Askeladden in full glory.
(Via Wikimedia Commons, public domain)

Bergerson suggested that Boots is a borrowing of Dutch boet, a childish word for “brother.” We find Dutch boet “a recent recruit in the Indian Army” and a few other words that fit our story well. If boet (pronounced approximately like English boot) was indeed borrowed into English with the meaning “a person of inferior rank,” the rest is plain sailing. The ending s is added freely to English words. I, for example, have dealt with two guinea pigs: Cuddles and Sniffers. In Dickens’s Dombey and Son, the servant who wheeled Mrs Skewton around had the name Withers. Dobbin of ours, a most important character in Thackeray’s Vanity Fair, was called Figs, because his father sold all kinds of groceries. And don’t we occasionally live in digs and have things for keeps?

Dutch words in English are numerous (very numerous indeed). If Boots is one of them, perhaps so is buddy from boetje. The voicing of t between vowels is no problem. Buddy is an American coinage, and in American English, intervocalic t is voiced, so that seated and seeded, Plato and playdough, sweetish and Swedish, futile and feudal (to mention just a few examples), are homophones pairwise. With time, the vowel in boetje acquired, as expected, the value of u in but, and butty ~ buddy was born. Speakers quite correctly interpreted the last sound as a diminutive suffix and produced bud from buddy by back formation. The word has just the expressive value one needs: compare studs “a great virile guy,” as in Studs Lonigan by James T. Farrell.

Have we solved the riddle? No, the etymology of buddy and Boots will probably remain “debatable” (unless it suddenly gains universal recognition—a rare case in etymological studies), but perhaps we have made a step in the right direction.

Featured image via pxfuel (public domain)

OUPblog - Academic insights for the thinking world.

The company we keep, part two: bud(dy)

Coal miners "butt to butt". "The company we keep, part two: buddy" by the Oxford Etymologist on the OUP blog

The company we keep, part two: <em>bud(dy)</em>

I am picking up where I left off two weeks ago. Since there have been no comments or letters connected with part one, I will, as promised, tackle the convoluted history of bud. The Internet is awash in suggestions about the origin of this universally known word. Some such suggestions are reasonable, even clever; others fanciful (for instance, bud has been derived by some from Pashto, from Spanish, and even from the names Buddha and Budweiser!). Of course, we now have the opinion of the OED online, and yet I may add something to the ongoing discussion—no revolutionary hypothesis: rather a glance at the state of the art. Predictably, bud ~ buddy can also be found in the first volume of the original OED, but its editor James A. H. Murray was reticent on the word’s etymology. Today’s editors have much more to say about the history of bud; yet the sought-for origin remains disputable and will probably remain such.

Though the first volume of the OED appeared in 1896, people kept returning to the history of bud, and some of their hypotheses deserve attention. Perhaps the most important part of the tale is whether bud ~ buddy has anything to do with the word butty “comrade.” Here are some quotations from the letters sent to the biweekly Notes and Queries (my constant and inexhaustible source of inspiration). “What is the origin of the word butty, gamekeeper’s slang for ‘comrade’? ‘The dog was took away home to granny by my butty’ (Richard Jefferies, ‘The Amateur Poacher,’ 1896, p. 117).” The editor answered:

“The origin is unknown. The ‘H.E.D.’ [that is, Historical English Dictionary, the original name of the OED; even Murray could not remember when HED became OED!] says that it is a possible corruption of booty. The word is in general use in England. See ‘English Dialect Dictionary’ [by Joseph Wright].”

Volumes 1 to 6 of A New English Dictionary. The beginning of the HED/OED.
The beginning of the HED/OED

Other contributors also cited butty from the regions where they lived, and Walter W. Skeat, ever-ready with a suggestion when it came to etymology, wrote:

I would suggest that butty, comrade, is a mere abbreviation of booty-fellow, one who shares in booty; hence a comrade. The full form occurs in Palsgrave [an early sixteenth-century lexicographer] and is duly explained in the “H.E.D.”, s. v. [sub voce “under the word”] “Booty,” §5.

Since that time the derivation of butty from booty-fellow has been repeated more than once. Other contributors to Notes and Queries mentioned butty collier “a man who contracts with the proprietor of a coalpit to get and raise coal to the bank at so much per ton.” In that context, butty was also connected with boot. Additionally, “in ironworks, where two men frequently manage a forge, one superintendent by day and the other by night, each often describes the other as his butty.” In the same region, “a man and a woman living together irregularly sometimes describe each other as his or her butty, and other people would so describe their relations” (1901). “In Warwickshire [the West Midlands] sweethearts who keep company describe their association as buttying with each other.” Bud “husband” has been used for centuries, but of course no phonetic legerdemain can produce this word from husband. Finally, in Scotland, buddy, buddie, and butty have long since been used as a pet designation for a little child.

Two lovers kiss. In some regions, this activity is called buttying.
In some regions, this activity is called buttying.
(Via Pexels, public domain)

Judging by the meaning of the words mentioned above, it seems that buddy and butty have rather little to do with booty, a French word of Germanic origin. Booty-fellow is close but seems to have little to do with buddy as we know it. Below, I will quote part of a letter from the journal American Speech 4, 1929, p. 389. Though the author’s suggestion looks like an example of folk etymology, bud is such a controversial word that even dubious remarks have some value.

The Standard Dictionary [that is, Funk and Wagnalls] gives butty as a variant of buddy. I raise the query whether precedence should not be the other way. When I was a boy in the Pennsylvania coalfields, one of the commonest words in the miners’ speech [again mining, as in England!] was butty, or buttie, as I have always preferred to spell it, meaning work-fellow. In the cramped underground honeycombs where bituminous coal is dug, the man with a pick and the man with the shovel are literally butties, working all day long buttock to buttock, or in the vulgar but comradely abbreviation, butt to butt. The word was used generally in a technical sense, without any endearing connotation of spiritual nearness. But the boys of the community thirty years ago adopted the word for the idea which later at school they were to express by chum, a word which I confess always sounds emasculated and utterly lacking the vulgar warmth of buttie (!).

The author begins his letter by saying that buddy “is by origin a childish corruption of brother, with a familiar form ‘bud.’ … it is incontrovertible that bud and buddy are diminutives of brother…”.

The “emasculated” chum was discussed briefly and inconclusively in the previous blog post, and I will not comment on the author’s use of the word incontrovertible. I have more than once expressed my objections to terms like certainly, undoubtedly, obviously, and so forth. (Very few things in etymology are incontrovertible.) I am only a bit confused about the order of events. If buddy is from brother, how can it be a variant of butty? Here, I’ll also mention the enigmatic sixteenth-century adjective baddy of unclear meaning, but, apparently, with some negative connotations. This word deserves attention, because Murray devoted a long letter to “the queer phrase” paddy persons in Notes and Queries 9/XII, 1903, 87-88, and nothing that great man wrote should fall through the cracks. It turned out that paddy persons is a misreading: the correct 1585 phrase is baddy persons, and I wondered whether baddy has anything to do with buddy. My question remains unanswered, but the syntax of the quotation is characteristic, and I would like to quote that phrase as a postscript to my blog post.

A book and a lit candle in the dark. The mood of the stories are gloomy.
The mood of the stories are gloomy.
(Via Pexels, public domain)

Here is this phrase: “I doubt [= fear] not that the flower of the pressed English bandes are gone, and that the remnant supplied with such baddy persons as commonly, in voluntary procurements, men are glad to accept.” Baddy will, unfortunately, remain a minor riddle, but another thing should not be overlooked: “The flower of such band(es) are gone.” The subject is of course flower, and the verb should have been is. Yet the writer made the verb agree with the noun closest to it (bands). As is well-known, colonial languages are conservative. American English is no exception. The Revised Version of the Bible has: “Our Father which art in heaven.” And I sometimes hear statements like: “That’s the guy which I told you about.” But this usage is relatively rare. By contrast, the syntax of the phrase quoted above is ineradicable: students’ papers teem with it. I have once quoted my favorite example from an undergraduate’s essay: “The mood of the tales are gloomy.” This usage, well-known to the historians of English syntax, is still rather common in Ireland and in other “colonial” varieties of English. It would be interesting to hear what our readers know about it.

Next week, I’ll finish my discussion of buddy and say what little I think I know about its origin. Nothing in my suggestion will be original or incontrovertible.

Featured image via Wikimedia Commons (public domain)

OUPblog - Academic insights for the thinking world.

What can Large Language Models offer to linguists?

Google Deepmind. "What can Large Language Models offer to linguists?" by David J. Lobina on the OUP blog

What can Large Language Models offer to linguists?

It is fair to say that the field of linguistics is hardly ever in the news. That is not the case for language itself and all things to do with language—from word of the year announcements to countless discussions about grammar peeves, correct spelling, or writing style. This has changed somewhat recently with the proliferation of Large Language Models (LLMs), and in particular since the release of OpenAI’s ChatGPT, the best-known language model. But does the recent, impressive performance of LLMs have any repercussions for the way in which linguists carry out their work? And what is a Language Model anyway?

At heart, all an LLM does is predict the next word given a string of words as context—that is, it predicts the most likely next word. This is of course not what a user experiences when dealing with language models such as ChatGPT, because ChatGPT is more properly described as a “dialogue management system”, an AI “assistant” or chatbot that translates a user’s questions (or “prompts”) into inputs that the underlying LLM can understand (the latest version of OpenAI’s LLM is a fine-tuned version of GPT-4).

“At heart, all an LLM does is predict the next word given a string of words as a context.”

An LLM, after all, is nothing more than a mathematical model in terms of a neural network with input layers, output layers, and many deep layers in between, plus a set of trained “parameters.” As the computer scientist Murray Shanahan has put it in a recent paper, when one asks a chatbot such as ChatGPT who was the first person to walk on the moon, what the LLM is fed is something along the lines of:

Given the statistical distribution of words in the vast public corpus of (English) text, what word is most likely to follow the sequence “The first person to walk on the Moon was”?

That is, given an input such as the first person to walk on the Moon was, the LLM returns the most likely word to follow this string. How have LLMs learned to do this? As mentioned, LLMs calculate the probability of the next word given a string of words, and they do so by representing these words as vectors of values from which to calculate the probability of each word; sentences, too, can be represented as vectors of values. Since 2017, most LLMs have been using “transformers,” which allow the models to carry out matrix calculations over these vectors, and the more transformer layers are employed, the more accurate the predictions are—GPT-3 has some 96 layers of such transformers.
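Stripped of the neural-network machinery, “pick the most likely next word” can be illustrated with a toy counting model. This is a sketch only—a real LLM learns vector representations and attention weights rather than counting word pairs, and the tiny corpus and function names here are invented for illustration:

```python
# Toy next-word predictor: estimate P(next word | current word) from
# bigram counts. Real LLMs condition on the whole preceding string via
# a transformer, but the interface is the same: score every candidate
# word and return the likeliest.
from collections import Counter, defaultdict

corpus = (
    "the first person to walk on the moon was neil armstrong . "
    "the first person to fly solo across the atlantic was charles lindbergh ."
).split()

# Count how often each word follows each word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most probable next word and its estimated probability."""
    counts = follows[word]
    total = sum(counts.values())
    best = max(counts, key=counts.get)  # ties go to the first word seen
    return best, counts[best] / total

print(predict_next("first"))  # → ('person', 1.0)
```

Even this caricature shows why the model’s output depends entirely on the statistical distribution of its training text: change the corpus and the “most likely” continuation changes with it.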

The illusion that one is having a conversation with a rational agent, for it is an illusion, after all, is the result of embedding an LLM in a larger computer system that includes background “prefixes” to coax the system into producing behaviour that feels like a conversation (the prefixes include templates of what a conversation looks like). But what the LLM itself does is generate sequences of words that are statistically likely to follow from a specific prompt.
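The “larger computer system” wrapping the LLM can be pictured, in a deliberately simplified sketch, as a function that pastes a conversation-shaped prefix around the user’s words before handing the result to the sequence predictor. The template text and function names below are invented for illustration and do not correspond to any vendor’s actual interface:

```python
# Sketch of a dialogue wrapper: the "conversation" is just a text
# prefix that the underlying LLM is asked to continue.
DIALOGUE_PREFIX = (
    "The following is a conversation between a helpful assistant "
    "and a user.\n"
)

def build_prompt(history, user_message):
    """Assemble the single string the underlying LLM will continue."""
    turns = "".join(f"User: {u}\nAssistant: {a}\n" for u, a in history)
    return f"{DIALOGUE_PREFIX}{turns}User: {user_message}\nAssistant:"

prompt = build_prompt(
    [("Hi!", "Hello! How can I help?")],
    "Who was the first person to walk on the Moon?",
)
# The model never "sees a conversation"; it sees this one string and
# predicts the text most likely to follow the final "Assistant:".
print(prompt)
```

The conversational feel, in other words, lives in the prefix template, not in the model: the LLM simply continues whatever string it is given.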

It is through the use of prompt prefixes that LLMs can be coaxed into “performing” various tasks beyond dialoguing, such as reasoning or, according to some linguists and cognitive scientists, learning the hierarchical structures of a language (this literature is ever increasing). But the model itself remains a sequence predictor: it does not manipulate the typical structured representations of a language directly, and it has no understanding of what a word or a sentence means—and meaning is a crucial property of language.

An LLM seems to produce sentences and text like a human does—it seems to have mastered the rules of the grammar of English—but at the same time it produces sentences based on probabilities rather than on the meanings and thoughts it aims to express, which is how a person produces language. So, what is language, such that an LLM could learn it?

“An LLM seems to produce sentences like a human does but it produces them based on probabilities rather than on meaning.”

A typical characterisation of language is as a system of communication (or, for some linguists, as a system for having thoughts), and such a system would include a vocabulary (the words of a language) and a grammar. By a “grammar,” most linguists have in mind various components, at the very least syntax, semantics, and phonetics/phonology. In fact, a classic way to describe a language in linguistics is as a system that connects sound (or other means of producing language, such as hand gestures or signs) and meaning, with the connection between sound and meaning mediated by syntax. As such, every sentence of a language is the result of all these components—phonology, semantics, and syntax—aligning with each other appropriately, and I do not know of any linguistic theory for which this is not true, regardless of differences in focus or emphasis.

What this means for the question of what LLMs can offer linguistics, and linguists, revolves around the issue of what exactly LLMs have learned to begin with. They haven’t, as a matter of fact, learned a natural language at all, for they know nothing about phonology or meaning; what they have learned is the statistical distribution of the words of the large texts they have been fed during training, and this is a rather different matter.

As has been the case in the past with other approaches in computational linguistics and natural language processing, LLMs will certainly flourish within these subdisciplines of linguistics, but the daily work of a regular linguist is not going to change much any time soon. Some linguists do study the properties of texts, but this is not the most common undertaking in linguistics. Having said that, how about the opposite question: does a run-of-the-mill linguist have much to offer to LLMs and chatbots at all?   

Featured image: Google Deepmind via Unsplash (public domain)

OUPblog - Academic insights for the thinking world.

Elon Musk, Mars, and bioethics: is sending astronauts into space ethical?

"Elon Musk, Mars, and bioethics: is ending astronauts into space ethical?" by Konrad Szocik on the OUP blog

Elon Musk, Mars, and bioethics: is sending astronauts into space ethical?

The recent crash of the largest-ever space rocket, Starship, developed by Elon Musk’s SpaceX company, has certainly somewhat disrupted optimism about the human mission to Mars being prepared for the next few years. Against the background of this disaster, it is worth raising the issue of the safety of future participants in long-term space missions, especially missions to Mars. And it is not just about safety from disasters like the one that befell Musk’s rocket. Protection from the negative effects of prolonged flight in zero gravity, protection from cosmic radiation, and guaranteeing sufficiently high crew productivity over the course of a multi-year mission also play an important role.

Fortunately, no one was killed in the aforementioned crash, as it was an uncrewed test rocket. However, past disasters in which astronauts died, such as the Space Shuttle Challenger and Space Shuttle Columbia disasters, remind us that it is the seemingly very small details that determine life and death. So far, 15 astronauts and 4 cosmonauts have died in space flights; 11 more have died during testing and training on Earth. It is worth mentioning that space flights are peaceful missions, not military operations. They are carried out relatively infrequently and by a relatively small number of people.

It is also worth noting the longer and more complex human missions upcoming in the near future, such as the mission to Mars. The flight itself, which is expected to last several months, is quite a challenge, and disaster can happen during takeoff from Earth, during landing on Mars, and then on the way back to Earth. And then there are further risks that await astronauts in space.

The first is exposure to galactic cosmic radiation and solar energetic particle events, especially during interplanetary flight, when the crew is no longer protected by Earth’s magnetic field or by a possible shelter on Mars. Protection from cosmic radiation for travel to Mars is a major challenge, and 100% effective protective measures are still lacking. Another challenge remains being in long-term zero-gravity conditions during the flight, followed by altered gravity on Mars. Bone loss and muscle atrophy are the main, but not the only, negative effects of being in these states. Finally, it is impossible to ignore the importance of psychological factors related to stress, isolation, confinement in a small enclosed space, and distance from Earth.

A human mission to Mars, which could take about three years, brings with it a new type of danger not known from the previous history of human space exploration. In addition to the aforementioned amplified impact of factors already known—namely microgravity, cosmic radiation, and isolation—entirely new risk factors are emerging. One of them is the impossibility of evacuating astronauts in need back to Earth, which is possible in missions carried out at the International Space Station. It seems that even the best-equipped and trained crew may not be able to guarantee adequate assistance to an injured or ill astronaut, which could lead to her death—assuming that care on Earth would guarantee her survival and recovery. Another problem is the delay in communication, which will reach tens of minutes between Earth and Mars. This situation will affect not only the degree of autonomy of the crew but also their responsibility. Wrong decisions, made under conditions of uncertainty, can have negative consequences not only for health and life but also for the entire mission.

“It is worth raising the question of the ethicality of the decision to send humans into such a dangerous environment.”

Thus, we can see that a future human mission to Mars will be very dangerous, both as a result of factors already known but intensified, as well as new risk factors. It is worth raising the question of the ethicality of the decision to send humans into such a dangerous environment. The ethical assessment will depend both on the effectiveness of available countermeasures against harmful factors in space and also on the desirability and justification for the space missions themselves. 

Military ethics and bioethics may provide some analogy here. In civilian ethics and bioethics, we do not accept a way of thinking and acting that would mandate the subordination of the welfare, rights, and health of the individual to the interests of the group. In military ethics, however, this way of thinking is accepted, formally in the name of the higher good. Thus, if the mission to Mars is a civilian mission, carried out on the basis of values inherent in civilian ethics and bioethics rather than military ethics, it may be difficult to justify exposing astronauts to serious risks of death, accident, and disease.

One alternative may be to significantly postpone the mission until breakthrough advances in space technology and medicine can eliminate or significantly reduce the aforementioned risk factors. Another alternative may be to try to improve astronauts through biomedical human enhancements. Just as in the army there are known methods of improving the performance of soldiers through pharmacological means, analogous methods could be applied to future participants in a mission to Mars. Perhaps more radical, and thus controversial, methods such as gene editing would be effective, assuming that gene editing of selected genes can enhance resistance to selected risk factors in space. 

But the idea of genetically modifying astronauts, otherwise quite commonsensical, given also the cost of such a mission, as well as the fact that future astronauts sent to Mars would likely be considered representative of the great effort of all humanity, raises questions about the justification for such a mission. What do the organizers of a mission to Mars expect to achieve? Among the goals traditionally mentioned are the scientific merits of such a mission, followed by possible commercial applications for the future. Philosophers, as well as researchers of global and existential catastrophes, often discuss the concept of space refuge, in which the salvation of the human species in the event of a global catastrophe on Earth would be possible only by settling somewhere beyond Earth. However, it seems that the real goals in our non-ideal society will be political and military.

OUPblog - Academic insights for the thinking world.

The Legend of Soft Power

Like millions of other people around the world, I have spent much of the past few weeks playing The Legend of Zelda: Tears of the Kingdom (TotK), the nineteenth installment in Nintendo’s widely acclaimed series. With ten million units reportedly sold in its first three days—and other metrics on the prevalence of gaming and significant industry profits even after a rough 2022—I have started to wonder why the study of popular culture and International Relations (IR) has given video games relatively little attention.

Work on popular culture and IR has identified various ways in which films, television series, popular literature, and other cultural artifacts (often in the science fiction genre) might reflect and even affect real-world politics. It stands to reason that video games could have similar effects, but with few notable exceptions, these products have received much less attention than those in more established media. I will more systematically consider how video games might affect our political world in my next post. For now, I want to focus on TotK.

TotK might not seem like a game that offers much fodder for IR scholars. There is plenty of fun to be had, but at least in the first half of the game that I have completed, there is little explicitly political content. The story is a fairly straightforward tale of good versus evil, and our valiant hero, Link, is asked to find damsel-in-frequent-distress Princess Zelda.

At most, TotK scandalously asks you to corrupt a local mayoral election by gifting mushrooms from one of the candidates to potential voters. [Spoiler alert] Your election interference matters little—the two candidates decide to share power because, as it turns out, “The best way to keep Hateno Village vibrant is to work together to combine traditional culture with new ideas!”

Where TotK might matter most clearly for IR scholars is in the scope of the game’s reach. This will likely end up being one of the best-selling games of all time, and wherever it falls on that list, it will join many other Nintendo products. Given Nintendo’s world-wide popularity—as well as that of other Japanese game developers and publishers—we might consider whether popular cultural exports like TotK act as a source of “soft power” for the exporting country.

As Joseph Nye originally defined the concept in 1990, soft power is “co-optive” rather than “command” power displayed “when one country gets other countries to want what it wants”. Nye identified “culture” as a “soft power resource” because a state that “stands astride popular channels of communication has more opportunities to get its message across and to affect the preferences of others”. (See the Duck’s own Peter Henne on this topic for a more detailed discussion of this concept.)

For Nye, soft power was a central aspect of his argument—developed more fully in Bound to Lead: The Changing Nature of American Power—that the United States would not soon be eclipsed by any other potential competitor. The volume and uptake of American cultural exports constituted evidence that the United States could remain the world’s leading power even if others made some relative gains in the military or economic domains.

Nye saw various kinds of cultural exports as generative of American soft power. “Young Japanese who have never been to the United States wear sports jackets with the names of American colleges. Nicaraguan television broadcast American shows even while the government fought American-backed guerrillas. Similarly, Soviet teenagers wear blue jeans and seek American recordings, and Chinese students used a symbol modeled on the Statue of Liberty during the 1989 uprisings.”

By contrast, Nye saw Japanese cultural exports as unlikely to overtake American popular culture on the world stage. “Although Japanese consumer products and cuisine have recently become more fashionable, they seem less associated with an implicit appeal to a broader set of values than American domination of popular communication.”

Whether one is playing TotK or, say, watching 2020’s highest-grossing film, Demon Slayer: Mugen Train, today’s ubiquity of Japanese cultural exports would suggest that such products have broader appeal and may be a more reliable source of soft power than Nye expected.

Writing a year before Nye, Francis Fukuyama made such an argument—”the triumph of the West” could be seen in part through the spread of its popular culture, and Japan’s popular cultural products had helped make it one of the world’s leading powers. Japan had “follow[ed] in the footsteps of the United States to create a truly universal consumer culture that has become both a symbol and an underpinning of the universal homogenous state”.

Fukuyama was not concerned that Japanese cultural products would rival those of the United States. Rather, the successful post-war infusion of “the essential elements of economic and political liberalism” into Japan produced a popular culture that complemented American cultural products and that affirmed “consumerist” liberal democracy as a path toward prosperity and influence.

For me and many others, the hours we log restoring order to TotK‘s Kingdom of Hyrule represent a fraction of the exposure we have had to Japanese cultural exports. Do all those experiences—perhaps the experiences of watching Studio Ghibli films, reading Haruki Murakami novels, or decluttering with Marie Kondo’s assistance—translate into soft power?

If enough Americans engage with images of Japan that generate fond feelings for (or “affective investment” in) the country, does that mean that the United States as a government will be more likely to “want what [Japan] wants” in at least some areas?

I do not yet have firm answers to these questions. At a time when Chinese officials are seeking to enhance their own country’s soft power, however, and when Japanese game developers are fretting about the rise of the Chinese gaming industry, it would be worth building on some of the scholarship I have cited here to answer such questions. We might thereby bring video games more fully into the study of popular culture and IR.

Author’s note: I have edited the original post to specify that “IR” is an acronym for International Relations and to add a spoiler alert for a side quest.

A Holistic College and Career Readiness Practice

“Bresee helps the youth and those who are most disadvantaged. Serving Koreatown, a primarily Hispanic community, and advocate for the need of bringing peace to our community. By focusing on the youth, Bresee is able to build a better future where everyone is given equal opportunities and leads them to a successful future.” – Youth... Read more »

The post A Holistic College and Career Readiness Practice appeared first on Connected Learning Alliance.

What’s coming down the pike?

What's coming down the turnpike? By Edwin Battistella on the OUPblog

What’s coming down the pike?

During the news coverage of the COVID pandemic, I enjoyed seeing Dr Anthony Fauci on television and hearing his old-school Brooklyn accent, still shining through in his late seventies in words like Pfizer, because, data, here, and that.

But my favorite expression to listen for was his use of “down the pike” to mean “in the future.” Fauci explained once that “you don’t see for days or weeks down the pike.” Another time he said “before you know it, two to three weeks down the pike, you’re in trouble.” Discussing vaccine testing he said “So we go into phase one, it’ll take about three months to determine if it’s safe. That’ll bring us three or four months down the pike.”

“Down the pike” is an expression I grew up hearing all the time in my home state of New Jersey. And, of course, many people say it besides Anthony Fauci. Joe Biden talks about how government and the private sector should “anticipate and respond to shortages that may be coming down the pike.” If you Google “down the pike” you’ll find it everywhere, even in New York Times headlines like “Is a Trans-Atlantic Pact Coming Down the Pike?” and (with an attempted pun) “Hydrogen Cars, Coming Down the Pike.” You can find occasional instances of things coming “up the pike,” and there is an early twentieth-century slang expression “hit the pike,” meaning “hit the road” or “leave.”

Not everyone is familiar with “down the pike.” Some mislearn it as the semantically plausible “down the pipe.” It is “down the pike,” though—but where does the expression come from?

I had assumed that “down the pike” had something to do with the New Jersey Turnpike, the 117-mile toll highway that runs from New York City to Delaware. The New Jersey Turnpike Authority was created in 1948 and the Turnpike itself was completed in 1951. It’s been collecting tolls ever since, along with the Garden State Parkway, which was completed in 1957.  

There are lots of turnpikes, however, and the word goes way back. The Oxford English Dictionary gives examples from the early 1400s. It originally meant “a spiked barrier fixed in or across a road or passage,” and was used as a defense against attacks by men on horseback.

Later turnpike became extended to the sense of a turnstile to block horses. Samuel Johnson offered this definition: “Turnpike… a cross of two bars armed with pikes at the end, and turning on a pin, fixed to hinder horses from entering.” By the late 1600s, the turnpike was a toll booth of sorts, and a late seventeenth-century Act of Parliament refers to “collecting the said [toll]… by setting up a Turnpike or otherwise.”

Turnpike, or the clipping pike, was often used to refer generally to roads in the nineteenth century, and the expression “coming down the pike” was another way of saying “coming down the road.”

By the late 1800s, figurative senses were emerging and taking hold. An 1898 story in the Dayton Herald had the line “Bowling, if dead, is the liveliest corpse that ever ’came down the pike’, as they say on the bowery.” The quote marks suggest that figurative “down the pike” was a newish expression at the time.

Pike was also used as a synonym for boardwalk or midway, and in 1903 the organizers of the St. Louis World’s Fair announced that the upcoming fair would call its promenade “The Pike,” to distinguish it from Chicago’s Midway. Perhaps St. Louis got the idea from Long Beach, California, which debuted a boardwalk amusement zone called “The Pike” even earlier—in 1902. In any event, St. Louis encouraged visitors to “Come Down the Pike,” and there was later a Broadway musical titled “Down the Pike” whose second act took place at the 1904 World’s Fair.

The figurative sense of “coming down the pike” took hold but remained sporadic in the early twentieth century. A 1905 Portland, Maine, newspaper talked about “nothing but anarchy coming down the pike” and “chaos coming down the pike.” A 1936 issue of the International Stereotypers’ and Electrotypers’ Union Journal refers to “The fall election… coming down the pike.” Both examples are clearly oriented toward time and events rather than space.

By the 1950s the sense of “coming down the pike” to mean happening in the future was increasingly common and it took off in the 1960s and 70s, after the completion of the Interstate Highway System.

With language, you never know what’s coming down the pike.

Featured image: the New Jersey Turnpike, 1992, via Wikimedia Commons (CC BY-SA 2.5)

OUPblog - Academic insights for the thinking world.

Oumar Ba

Dr. Oumar Ba of Cornell University visits the Hayseed Scholar podcast. Dr. Ba grew up in Senegal, attending his first school at an early age near the Senegal-Mauritania border. He developed an interest in politics in high school and at his first university (Cheikh Anta Diop in Dakar), where he pursued Geography. Oumar moved to the United States in early 2001, taking a Greyhound bus from New York to Ohio. Following a series of jobs, including one at an auto manufacturing plant, he returned to academia to pursue a Master’s in International Affairs and Political Science at Ohio University. It was there that he worked on his thesis, exploring topics in International Law under the guidance of Professor Andrew Ross. He then went to Gainesville to pursue his PhD, where he met friend of the pod Professor Aida Hozic. Dr. Ba reflects on his time at Morehouse College, an HBCU where he worked in his first tenure-track job, and on the interview (during the pandemic) and recent move to Cornell. He reflects upon his experience of his book being the subject of the 2019 ISA-Northeast circle, discusses his approach to writing (‘chaos’), and what he does to recharge and unwind. Dr. Ba and Brent conclude with a discussion of the World Cup.

The company we keep

Chronos and his child. "The company we keep" by the Oxford Etymologist on the OUP blog

The company we keep

Observing how various words for “friend” originate and develop is a rather curious enterprise. Some etymologies are trivial, that is, they have been known for a long time and are undisputed. Such is, among others, the case of friend. Any good dictionary will tell the same story. The last two letters of friend (-nd) are a trace of an old present participle. In English, present participles end in –ing (a barking dog, flowering wilderness, and so forth), but at one time, the ending of this part of speech was –and. The other Germanic languages still have some easily recognizable traces of –and (cf. German kommend “coming” and so forth). The root of friend is also transparent to the language historian: it once meant “to love” (so already in Gothic, a fourth-century Germanic language, which has come down to us in a rather full form). In Old English, the word did mean “lover.”

Do we love our relatives? It depends. In a society in which kinship determines people’s behavior, love (the sense of belonging, loyalty, devotion, and many other feelings) has numerous “pre-modern” shades. In any case, Icelandic frændi means neither “friend” nor “lover” but “kinsman, relative.” Those who believe that English spelling will gain greatly if reformed may note that friend and fiend are spelled alike and yet do not rhyme. Fiend is also an old present participle. It once meant “hating; hater.” So much for the origin of friends and friendship.

A friend in need.
Via Pexels (public domain)

Another trivial case is fellow. Fellow certainly belongs with the subject announced in the title: compare fellow traveler and good fellow “an affable person.” The word came to English from Scandinavian. Icelandic félagi is a compound: fé- means “property, money” (English fee is its distant cousin), while –lagi is related to English lay (as in to lay something). Consequently, a fellow is a person who lays down money with someone in a joint undertaking. The word bears some similarity to companion, except that companion is a borrowing from French, rather than Scandinavian. The Romance roots of this word are obvious: com– “together” and panis “bread” (a companion is therefore “one who breaks bread with another”). My students have never heard the phrase boon companion “good companion.” In older books, this phrase usually means “someone with whom one drinks and makes merry” (for example, we can remember Tony Lumpkin and his boon companions in She Stoops to Conquer), and there is no reason why it should be forgotten.

Sharing food will now take us to the noun mate. We have received and welcomed linguistic guests from Scandinavian and French. It would therefore be wrong to ignore Low German. Enter mate. The word (ge)mate, from gamato-, had the prefix ga- ~ ge-, denoting association, and the root discernible in English meat. Meat once meant “food,” as it still does in sweetmeats and green meat (two other items of the English vocabulary my students fail to recognize), from the root “to measure,” as in German messen and (!) English mete out. Thus, we again end up in a friendly company of food sharers.

A few more notes about this mess. The word mess, of French origin, first meant “a serving of food; dish.” In the eighteenth century, it could refer to “mixed food for an animal.” The familiar sense “medley; jumble; a state of utter disarray” appeared later. The root of mess can be seen in Latin mittere “to send out” (as in e-mit and mess-age). In a way, the noun messmate is a tautological compound like pathway. Both parts mean approximately the same: here “food-food,” rather than “someone with whom one shares food.”

In the company we keep, three words are more serious etymological puzzles: crony, chum, and buddy.

Crony turned up in books in the 1660s. Samuel Pepys knew the word, and so did Stephen Skinner, the author of the 1671 etymological dictionary of English (the second ever). Both Pepys and Skinner were Cambridge men, and crony, long before it acquired the modern negative connotations, meant “roommate” (at Cambridge). In the entry for 30 May 1665, Pepys speaks of the death of Jack Cole, “who was a great chrony (sic) of mine.” The only reasonable derivation of crony seems to be from Greek chronios “contemporary,” from chronos “time.” The use of Greek and Latin words at British universities and public schools was commonplace. Pepys’s spelling of the word with ch probably shows that this is how he too understood the word, even though at that time and later, “elegant” (Greek or Latin) spelling variants were customary. To be sure, by Pepys’s time, the word may have been used for decades and been garbled more than once. Therefore, the current etymology looks acceptable but not certain. The once suggested derivation of crony from some cognate of the verb croon is nonsense.

A fellow traveler makes a long journey short.
Via PxFuel (public domain)

Chum, the Oxford counterpart of crony, became known from printed texts at almost the same time as crony. (Does it follow that both were, after all, recent seventeenth-century coinages? If so, why did they suddenly come into being?) Both were slang, and those who discussed them two hundred years later sometimes apologized for dealing with such low terms: at that time, slang was a synonym for filth. “I confess I rather like the word, though not a few of those born in the [eighteen]-forties, at least, seem disposed to call it slang” (from an 1895 letter). E. Cobham Brewer, the author of a once tremendously popular 1870 book on the origin of words and idioms, defined chum as “bedfellow.” It is unclear where he found this gloss, but every two students at old Cambridge and Oxford did indeed share a bed.

Cicero and Tacitus used the Latin word con-taberna-lis (literally, “someone sharing a taberna,” roughly “tavern”) with the sense “comrade” (incidentally, a comrade is also a person sharing a camera “chamber” with someone). But contabernalis could hardly have been “Englished” into chum. Even less probable is the derivation of chum from Latin cum “with.” Those old suggestions deserve little consideration.

In 1896, the great Walter W. Skeat offered his etymology of chum. He cast doubt on the derivation of chum from chamber-(fellow), suggested cautiously in the OED, and indeed the path from chamber to chum “does not run smooth.” Skeat found the following entry in the well-known 1767 Bremen Dictionary: “[In my translation]: “Kumpan, abbreviated as Kump, associate, companion, comrade; College socius, consors. Engl. chum.” He believed that chum was not only a gloss on but also a borrowing of German Kump and explained how the change may have happened. He reconstructed a devious way from Kump to chum. Yet if people could alter chamber into chum, they could do the same, and much more easily, with Kump. But why should British students have borrowed a German slang term? Though that is again not improbable, the reconstruction remains guesswork. With time, Skeat probably lost enthusiasm for his hypothesis, because he did not even include chum in the latest edition of his Concise Dictionary. Thus, today, we can remain chummy, without knowing why we use the word everybody knows (not a rare case in etymology in its relation to life).

The story of buddy is long, and I’ll leave it for the next blog post, but I hasten to thank our readers for their corrections, conjectures, and comments on cowardly custard. (Sorry for the alliteration!). I also have to inform them that I am off to a conference of the Dictionary Society of North America in Boulder, Colorado. This is my first trip “abroad” since the beginning of the pandemic. Therefore, the buddy thriller will appear a week later than scheduled. The long wait will, I hope, whet everybody’s appetite for more tales in the series of One Thousand and One Etymologies.

Featured image: “Chronos and his child” by Giovanni Francesco Romanelli, via Wikimedia Commons (public domain)

OUPblog - Academic insights for the thinking world.

The Connected Wellbeing Initiative: Building Understanding and Action Regarding Teens’ Technology Use and Their Mental Health

The positive benefits of youth interacting with technology are often ignored while the negatives are emphasized. It’s time for that to change. In a commitment to this effort, the Connected Learning Alliance, along with the Connected Learning Lab at the University of California, Irvine, are excited to share the new Connected Wellbeing Initiative with the... Read more »

The post The Connected Wellbeing Initiative: Building Understanding and Action Regarding Teens’ Technology Use and Their Mental Health appeared first on Connected Learning Alliance.
