
Off for the US Holiday — More Grammar

We are off today and tomorrow for the US Independence Day holiday. Also included, a song that hews carefully to archaic rules about prepositions at the end of sentences.

The post Off for the US Holiday — More Grammar appeared first on The Scholarly Kitchen.

Helping verbs are curious, AND fascinating

By Edwin Battistella

English has a big bagful of auxiliary verbs. You may have learned these as “helping verbs” in elementary and middle school, since they are sometimes described as verbs that “help” the main verb express its meaning. There are even schoolroom songs about them. They are a curious bunch.

The auxiliaries include the modal verbs (can and could, shall and should, will and would, may and might, and must). The verb that follows a modal is in its bare, uninflected form: can go, could go, must go, and so on. There are also a number of semi-modal auxiliary verbs (such as dare, need, ought to, had better, have to, and used to). Some are compound words spelled with a space and several have unusual grammatical properties as well, such as being resistant to contraction or inversion. And in parts of the English-speaking world, modals can double up, yielding expressions like might could, may can, might should, and more.

Aside from the modals, semi-modals, and double modals, the primary auxiliaries are forms of have, be, and do, which are inflected for tense (is versus was, has versus had, do versus did), number (is versus are, has versus have), and person (is versus am versus are, do versus does). These auxiliaries help to indicate verbal nuances like emphasis, the perfect and progressive aspects, and the passive voice. Here are some examples, adapted from Ernest Hemingway’s The Old Man and the Sea:

Those who did catch sharks had taken them to the shark factory on the other side of the cove … (emphatic do and perfect aspect had)

The old man opened his eyes and for a moment he was coming back from a long way away. (progressive aspect)

His shirt was patched so many times that it was like the sail … (passive voice)

The primary auxiliaries come before the negative adverb not and allow it to contract onto them:

They didn’t catch sharks.

His shirt wasn’t patched.

He hadn’t taken the sharks.

And they play a role in questions by hopping to the left over the subject

Did they catch sharks?

Was his shirt patched?

Had he taken the sharks?

or by being copied at the end in a tag question.

They caught sharks, didn’t they?

His shirt was patched, wasn’t it?

He had taken the sharks, hadn’t he?

Main verbs like see and go and walk don’t do any of those tricks.
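Since these diagnostics are essentially rules you can run mechanically, here is a toy Python sketch of them (my illustration, not the author’s): it assumes an artificially flat subject-auxiliary-rest clause and knows only the three auxiliaries from the examples above.

```python
# Toy sketch of two auxiliary "tricks": inversion and tag-question copying.
# Assumes a flat "subject aux rest..." clause; real syntax is far messier.
AUX_NEGATION = {"did": "didn't", "was": "wasn't", "had": "hadn't"}

def split_clause(sentence):
    subject, aux, *rest = sentence.rstrip(".?!").split()
    if aux not in AUX_NEGATION:
        raise ValueError(f"{aux!r} is not an auxiliary this toy knows")
    return subject, aux, rest

def yes_no_question(sentence):
    # The auxiliary hops leftward over the subject.
    subject, aux, rest = split_clause(sentence)
    return f"{aux.capitalize()} {subject} {' '.join(rest)}?"

def tag_question(sentence):
    # A negated copy of the auxiliary is appended after the clause.
    subject, aux, rest = split_clause(sentence)
    return (f"{subject.capitalize()} {aux} {' '.join(rest)}, "
            f"{AUX_NEGATION[aux]} {subject.lower()}?")

print(yes_no_question("they did catch sharks"))  # Did they catch sharks?
print(tag_question("he had taken the sharks"))   # He had taken the sharks, hadn't he?

# A main-verb clause like "they saw sharks" raises ValueError here,
# mirroring the fact that main verbs don't invert: *"Saw they sharks?"
```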

Things get even curiouser, however, because the helping verbs have and do have doppelgangers that actually are main verbs.

The old man did his chores. 

His shirt had a tear in it.

How do we know these are main verbs and not helping verbs? Well, for one thing, they are the only verbs in the sentence. For another, they can occur with other helping verbs:

The old man had done his chores. 

His shirt had had a tear in it all day.

And if you make the sentences questions or negate them, you have to add a form of auxiliary do.

Did the old man do his chores?

Did his shirt have a tear in it?

The helping verb be also has a doppelganger main verb, but the forms of main verb be behave pretty much just like the helping verb. More curious behavior, keeping us on our toes. The first sentence below has past tense main verb was followed by an adjective; the other two have the past tense helping verb was.

The shark was tenacious. (main verb was)

The shark was never caught. (auxiliary was)

The old man was trying his best. (auxiliary was)

But all three was forms hop to the left in questions.

Was the shark tenacious?

Was the shark ever caught?

Was the old man trying his best?

The curious behavior of helping verbs goes on and on, with different dialects doing different things. If you’ve read many British novels or watched British television you might have noticed forms of helping verb do popping up in elliptical sentences. Here’s an example from J. R. R. Tolkien’s The Two Towers: “Sam frowned. If he could have bored holes in Gollum with his eyes, he would have done.” (For a study of these forms, check out Ronald Butters’s 1983 article “Syntactic change in British English propredicates.”)

In African American English, the auxiliary done lends a completive meaning to events. You can see it in these dialogue examples from August Wilson’s Fences and from Walter Mosley’s Blonde Faith: “Now I done give you everything I got to give you!” and “Didn’t she tell you that Pericles done passed on.” For more on this use of done, take a look at the chapters by Lisa J. Green and Walter Sistrunk and by Charles E. DeBose in the Oxford Handbook of African American Language.

We’ve just scratched the surface of auxiliaries. I hope you’ve become curious about these curious words.

Featured image by Alexander Grey via Unsplash (public domain)


Real patterns and the structure of language

By Ryan M. Nefdt, author of “Language, Science, and Structure: A Journey into the Philosophy of Linguistics” (Oxford University Press)

There’s been a lot of hype recently about the emergence of technologies like ChatGPT and the effects they will have on science and society. Linguists have been especially curious about what highly successful large language models (LLMs) mean for their business. Are these models unearthing the hidden structure of language itself or just marking associations for predictive purposes? 

In order to answer these sorts of questions we need to delve into the philosophy of what language is. For instance, if Language (with a big “L”) is an emergent human phenomenon arising from our communicative endeavours, i.e. a social entity, then AI is still some ways off approaching it in a meaningful way. If Chomsky, and those who follow his work, are correct that language is a modular mental system innately given to human infants and activated by minuscule amounts of external stimulus, then AI is again unlikely to be linguistic, since most of our most impressive LLMs are sucking up so many resources (both in terms of data and energy) that they are far from this child-like learning target. On the third hand, if languages are just very large (possibly infinite) collections of sentences produced by applying discrete rules, then AI could be super-linguistic.

In my new book, I attempt to find a middle ground or intersection between these views. I start with an ontological picture (meaning a picture of what there is “out there”) advocated in the early nineties by the prominent philosopher and cognitive scientist, Daniel Dennett. He draws from information theory to distinguish between noise and patterns. In the noise, nothing is predictable, he says. But more often than not, we can and do find regularities in large data structures. These regularities provide us with the first steps towards pattern recognition. Another way to put this is that if you want to send a message and you need the entire series (string or bitmap) of information to do so, then it’s random. But if there’s some way to compress the information, it’s a pattern! What makes a pattern real is whether or not it needs an observer for its existence. Dennett uses this view to make a case for “mild realism” about the mind and the position (which he calls the “intentional stance”) we use to identify minds in other humans, non-humans, and even artifacts. Basically, it’s like a theory we use to predict behaviour based on the success of our “minded” vocabulary comprising beliefs, desires, thoughts, etc. For Dennett, prediction matters theoretically!

If it’s not super clear yet, consider a barcode. At first blush, the black lines of varying length set to a background of white might seem random. But the lines (and spaces) can be set at regular intervals to reveal an underlying pattern that can be used to encode information (about the labelled entity/product). Barcodes are unique patterns, i.e. representations of the data from which more information can be drawn (by the way, Nature produces these kinds of patterns too, in fractal formation).
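The compressibility test is easy to demo in a few lines of Python (a minimal sketch of my own, not from the book): patterned data shrinks under a general-purpose compressor, while noise does not.

```python
# Compressibility as a pattern detector: patterned data can be described
# in far fewer bits than it takes to spell it out; noise cannot.
import os
import zlib

patterned = b"NP VP " * 1000        # a repeating regularity
noise = os.urandom(len(patterned))  # random bytes, same length

for label, data in [("patterned", patterned), ("noise", noise)]:
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{label}: compressed to {ratio:.1%} of original size")

# Typical run: the patterned string shrinks to well under 1% of its
# size, while the random bytes barely compress at all.
```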

“The methodological chasm between theoretical and computational linguistics can be surmounted.”

I adapt this idea in two ways in light of recent advances in computational linguistics and AI. The first reinterprets grammars, specifically discrete grammars of theoretical linguistics, as compression algorithms. So, in essence, a language is like a real pattern. Our grammars are collections of rules that compress these patterns. In English, noticing that a sentence is made up of a noun phrase and verb phrase is such a compression. More complex rules capture more complex patterns. Secondly, discrete rules are just a subset of continuous processes. In other words, at one level information theory looks very statistical while generative grammar looks very categorical. But the latter is a special case of the former. I show in the book how some of the foundational theorems of information theory can be translated to discrete grammar representations. So there’s no need to banish the kinds of (stochastic) processes often used and manipulated in computational linguistics, as many theoretical linguists have been wont to do in the past. 
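To make the “grammar as compression” point concrete, here is an invented toy fragment (mine, not Nefdt’s formalism): three rewrite rules and seven vocabulary items stand in for the much longer list of sentences they license.

```python
# Toy grammar: S -> NP VP, NP -> Det N, VP -> V NP.
# The rules are a short description of a combinatorially larger pattern.
from itertools import product

DET = ["the", "a"]
N = ["linguist", "pattern", "model"]
V = ["finds", "compresses"]

def sentences():
    # Expand S -> Det N V Det N over the whole vocabulary.
    for d1, n1, v, d2, n2 in product(DET, N, V, DET, N):
        yield f"{d1} {n1} {v} {d2} {n2}"

corpus = list(sentences())
print(len(corpus))  # 72 sentences from 3 rules and 7 words
print(corpus[0])    # "the linguist finds the linguist"
```

Richer rules compress richer patterns, which is the sense in which a discrete grammar can be read as a compression algorithm for a language.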

This just means that the methodological chasm between theoretical and computational linguistics, which has often served to close the lines of communication between the fields, can be surmounted. Ontologically speaking, languages are not collections of sentences, minimal mental structures, or social entities by themselves. They are informational states taken from complex interactions of all of the above and more (like the environment). On this view, linguistics quickly emerges as a complexity science in which the tools of linguistic grammars, LLMs, and sociolinguistic observations all find a homogeneous home. Recent work on complex systems, especially in biological systems theory, has breathed new life into this interdisciplinary field of inquiry. I argue that the study of language, including the inner workings of both the human mind and ChatGPT, belongs within this growing framework.

For decades, computational and theoretical linguists have been talking different languages. The shocking syntactic successes of modern LLMs and ChatGPT have forced them into the same room. Realising that languages are real patterns emerging from biological systems gets someone to break the awkward silence…

Featured image by Google DeepMind via Unsplash (public domain)



What’s coming down the pike?

By Edwin Battistella

During the news coverage of the COVID pandemic, I enjoyed seeing Dr Anthony Fauci on television and hearing his old-school Brooklyn accent, still shining through in his late seventies in words like Pfizer, because, data, here, and that.

But my favorite expression to listen for was his use of “down the pike” to mean “in the future.” Fauci explained once that “you don’t see for days or weeks down the pike.” Another time he said “before you know it, two to three weeks down the pike, you’re in trouble.” Discussing vaccine testing he said “So we go into phase one, it’ll take about three months to determine if it’s safe. That’ll bring us three or four months down the pike.”

“Down the pike” is an expression I grew up hearing all the time in my home state of New Jersey. And, of course, many people say it besides Anthony Fauci. Joe Biden talks about how government and the private sector should “anticipate and respond to shortages that may be coming down the pike.” If you Google “down the pike” you’ll find it everywhere, even in New York Times headlines like “Is a Trans-Atlantic Pact Coming Down the Pike?” and (with an attempted pun) “Hydrogen Cars, Coming Down the Pike.” You can find occasional instances of things coming “up the pike,” and there is an early twentieth-century slang expression “hit the pike,” meaning “hit the road” or “leave.”

Not everyone is familiar with “down the pike.” Some mislearn it as the semantically plausible “down the pipe.” It’s “down the pike,” but where does the expression come from?

I had assumed that “down the pike” had something to do with the New Jersey Turnpike, the 117-mile toll highway that runs from New York City to Delaware. The New Jersey Turnpike Authority was created in 1948 and the Turnpike itself was completed in 1951. It’s been collecting tolls ever since, along with the Garden State Parkway, which was completed in 1957.  

There are lots of turnpikes, however, and the word goes way back. The Oxford English Dictionary gives examples from the early 1400s. It originally meant “a spiked barrier fixed in or across a road or passage,” and was used as a defense against attacks by men on horseback.

Later, turnpike was extended to the sense of a turnstile to block horses. Samuel Johnson offered this definition: “Turnpike… a cross of two bars armed with pikes at the end, and turning on a pin, fixed to hinder horses from entering.” By the late 1600s, the turnpike was a toll booth of sorts, and a late seventeenth-century Act of Parliament refers to “collecting the said [toll]… by setting up a Turnpike or otherwise.”

Turnpike, or the clipping pike, was often used to refer generally to roads in the nineteenth century, and the expression “coming down the pike” was another way of saying “coming down the road.”

By the late 1800s, figurative senses were emerging and taking hold. An 1898 story in the Dayton Herald had the line “Bowling, if dead, is the liveliest corpse that ever ’came down the pike’, as they say on the bowery.” The quote marks suggest that figurative “down the pike” was a newish expression at the time.

Pike was also used as a synonym for boardwalk or midway, and in 1903 the organizers of the St. Louis World’s Fair announced that the upcoming fair would call its promenade “The Pike,” to distinguish it from Chicago’s Midway. Perhaps St. Louis got the idea from Long Beach, California, which debuted a boardwalk amusement zone called “The Pike” even earlier—in 1902. In any event, St. Louis encouraged visitors to “Come Down the Pike,” and there was later a Broadway musical titled “Down the Pike” whose second act took place at the 1904 World’s Fair.

The figurative sense of “coming down the pike” took hold but remained sporadic in the early twentieth century. A 1905 Portland, Maine, newspaper talked about “nothing but anarchy coming down the pike” and “chaos coming down the pike.” A 1936 issue of the International Stereotypers’ and Electrotypers’ Union Journal refers to “The fall election… coming down the pike.” Both examples are clearly oriented toward time and events rather than space.

By the 1950s the sense of “coming down the pike” to mean happening in the future was increasingly common and it took off in the 1960s and 70s, after the completion of the Interstate Highway System.

With language, you never know what’s coming down the pike.

Featured image: the New Jersey Turnpike, 1992, via Wikimedia Commons (CC BY-SA 2.5)


What does a technical writer do?

By Edwin Battistella

When people think about careers in writing, they may focus on writing novels or films, imagining themselves as the next Stephen King or Sofia Coppola. They may aspire to be a poet like Tracy K. Smith or Ada Limón. They may lean toward non-fiction, aiming to become an author like Jill Lepore or Louis Menand. But for steady work, there is nothing like technical writing for science, medicine, manufacturing, finance, retail, and other specialized fields.

According to the US Bureau of Labor Statistics, there are more than 55,000 technical writing jobs in the US, projected to hit 59,000 by 2031. And while there are advanced certifications available in specialty fields and even master’s degrees in technical communication, technical writing is a career path open to writers with undergraduate degrees in English, communication, linguistics, and related fields. It usually helps though to have some experience in design, business, science, or technology.

What do technical writers do?

Technical writers prepare instruction manuals, guides, technical articles, descriptions, posts, and documents for all manner of processes, products, and procedures. One student of mine working as an intern said that her first task was to write instructions for packing jars of peanut butter in shipping boxes. Another developed a safety manual for a supermarket’s deli kitchen. A third got assigned to explain procedures for using water softeners and filtration systems. And I once worked for a time documenting an automated query system for real estate listings.

When I’ve invited career technical writers to visit my classes, they’ve shared some of the work they do at any given time. They’ve written clinical evaluation reports and protocols in the medical field and instructions for financial and business systems and software. They have written explanations of how environmental data is collected and analyzed and procedures for compliance with federal and state regulations. One rewrote hospital incident reports to make sure they were understandable to staff and insurers. Another developed a design proposal for an event venue. The writing itself is tremendously varied. 

A writer friend once told me, “A lot of writing is not writing,” meaning that there was more to it than words on paper. That’s certainly true for technical writing. Research and consultation are large parts of the technical writer’s work. Figuring out the needs of the users takes time, curiosity, and perseverance. Technical writers consult with designers, developers, managers, technical staff, and end-users to understand the products or processes involved. They may be responsible for recommending the most appropriate medium and design for materials and for ensuring that content is uniform across various modes of delivery.

Some technical writers may also be responsible for collecting and analyzing user feedback and usage patterns. And they may work as technical editors for projects developed collaboratively or drafted by others: scientists, clinicians, programmers, and engineers may not be familiar with writing for the general public or for their end-users.

At times, technical writers can also find themselves doing some feature writing in addition to the technical bread and butter. A writer might profile staff members explaining their work and write newsletter, blog, or magazine copy to tell the backstory of a new discovery, promote an innovative product, or explain a life-saving procedure.

Our fiction-writer colleagues like to remind us that “all writing is creative writing.” If you enjoy research and can write clearly, technical writing may be the place to put your creativity to work.

Featured image by Amy Hirschi via Unsplash (public domain)


When meanings go akimbo

By Edwin Battistella

The realization started with the word akimbo. I had first learned it as meaning a stance with hands on the hips, and I associated the stance with the comic book image of Superman confronting evildoers. Body language experts sometimes call this a power pose, intended to project confidence or dominance.  

From time to time, I had encountered akimbo used to mean haphazardly sprawled, in expressions like “arms and legs akimbo,” but I assumed it was an error. And I’d seen the occasional phrasing “legs akimbo,” which referred to a splayed position. 

Then I ran across “studiously akimbo hair,” which seemed to connote an intentional messiness. It was time to check some dictionaries.

I found that akimbo comes from a Middle English phrase in kenebowe, which meant “at a sharp angle.” Arms and hands had been in kenebowe since the 1400s and a-kimbo from the 1800s. In the nineteenth century we find a tailor on a bench with his legs akimbo and a man dancing with knees akimbo. There are hearts akimbo, hats akimbo, curtains akimbo, and, yes, hair akimbo. Today there’s even an action movie called “Guns Akimbo.” The Oxford English Dictionary yields a 1940s Mayfair slang use for being on one’s high horse, with the example “She got terribly akimbo.” Bent out of shape, perhaps?

By now, it was clear to me that my narrow conception of akimbo had gone askew. I got curious about what other lexical preconceptions of mine might be in disarray, so I turned to penultimate and erstwhile.

Penultimate is still defined as “next to last,” though Merriam-Webster notes that it is sometimes used as an “intensified version of ultimate,” but not usually in edited prose. Erstwhile too is still defined as “former,” though it is sometimes used as a synonym for esteemed or as a fancy way of saying worthwhile.

It’s not hard to see how these meanings could shift in use. When someone refers to the “penultimate scene” of a play or “an erstwhile professor,” listeners may understand these as referring to a finale or an eminence. Vagueness permits some drift in common usage, but the specificity of the older definitions provides a strong counterweight. It’s one thing for a dictionary to go from akimbo to askew, but a bit harder to get from the meaning of final to next-to-last or of esteemed to former.

A word that has shifted like akimbo is cohort. From its beginnings as a tenth of a Roman legion, the word was later extended to other bands of warriors and to people united in a common cause. Later it also came to refer to a group sharing a demographic characteristic (such as an age cohort or a cohort of students).  

Since a cohort is a group and groups are made up of individuals, cohort also came to refer to one’s compatriots. The Oxford English Dictionary gives the gloss “assistant, colleague, accomplice,” and one of the citations is a bit of snark from a 1965 issue of the Times Literary Supplement: “The new American vulgarism of ‘cohort’ meaning ‘partner’.” 

And by the way, even though you might be in cahoots with your cohorts, the two words don’t seem to be related. Cahoot is most likely related to the French word cahute referring to a cabin or hut, where one might form a partnership away from prying eyes.

Words shift their meanings for a lot of reasons. For akimbo and cohort, the fuzziness of their early meanings (“limbs askew” and “band of individuals”) has left room for new standard meanings. In other cases, as with penultimate and erstwhile, dictionaries have resisted including the newer, casual uses. For now.

And sometimes, there is a split decision. Recognizing that some people blur amused and bemused, Merriam-Webster gives the sense “wry amusement especially from something that is surprising or perplexing.” The OED sticks with the sense of confused, muddled, or stupefied.

I’m bemused for now but waiting to see how those meanings play out in the future.

Featured image by Leon ellDOT via Unsplash (public domain)


Semantic prosody

By Edwin Battistella, author of “Dangerous Crooked Scoundrels” (Oxford University Press)

When linguists talk about prosody, the term usually refers to aspects of speech that go beyond individual vowels and consonants, such as intonation, stress, and rhythm. Such suprasegmental features may reflect the tone or focus of a sentence. Uptalk is a prosodic effect. So is sarcasm, stress, or the accusatory focus you achieve by raising the pitch in a sentence like “I didn’t forget your birthday.”

Scholars working with computer corpora of texts have extended the notion of prosody to aspects of meaning. The term “semantic prosody” was coined by William Louw in his 1993 essay “Irony in the text or insincerity in the writer: the diagnostic potential of semantic prosodies.” Building on work by John McHardy Sinclair, Louw used the term to refer to the way in which otherwise neutral words can have their meanings shaded by habitually co-occurring with other, positive or negative, words. He referred to it as a “semantic aura.” 

How do you see the aura? Researchers use tools like the Key Word In Context (or KWIC) feature, which produces a listing of collocates of a key word. As the term suggests, collocates are words that are co-located with the key word in the corpus and in some genre. Semantic prosody is not as in-your-face as a connotation, and as Louw’s title suggests, it can be used ironically. Perhaps because of this, dictionary definitions tend not to mention prosodies in a word’s definition.
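A bare-bones KWIC routine is easy to sketch. The toy below (my illustration, with an invented mini-corpus) just prints each hit with a window of context so that collocational habits become visible; real concordancers work over corpora of millions of words.

```python
# Minimal KWIC: show each occurrence of a keyword with context on
# either side, so its habitual collocates stand out.
def kwic(text, keyword, window=3):
    words = text.lower().split()
    for i, word in enumerate(words):
        if word == keyword:
            left = " ".join(words[max(0, i - window):i])
            right = " ".join(words[i + 1:i + 1 + window])
            print(f"{left:>26} | {keyword} | {right}")

# Invented mini-corpus for illustration:
corpus = ("the spill may cause lasting damage and could cause delays "
          "while officials fear it will cause serious concern downstream")
kwic(corpus, "cause")
```

Even in this tiny run, the right-hand column fills up with damage, delays, and concern: the sort of negative aura corpus linguists have documented for cause.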

So, to take an example used by Susan Hunston of the University of Birmingham in her article “Semantic Prosody Revisited,” the word persistent often occurs with a following noun in negative contexts. We find examples like persistent errors, persistent intimidation, persistent offenders, persistent cough, persistent sexism, or persistent unemployment. That tone seems to carry over to examples like persistent talk or persistent reports. The reports and talk have a presumption of negativity to them. Hunston points out that persistent is not necessarily negative, however. One can be a persistent advocate, or a persistent suitor, or reach a goal by persistent efforts. Its aura comes from being often negative.

That carries over to the verb persist as well, I think. When Massachusetts Senator Elizabeth Warren was silenced during a 2017 confirmation debate, the Senate majority leader’s comment that “She was warned. She was given an explanation. Nevertheless, she persisted.” was intended as a rebuke. Quickly, however, “Nevertheless, she persisted” and “Nevertheless, they persisted,” became a rallying cry of women refusing to be silenced and a renewed call to activism. The rebuke of “she persisted” was repurposed as defiance and determination. 

Linguists are fascinated by phenomena like semantic prosody and the potential hidden patterns in language use. For writers who are not linguists, semantic prosody is worth pondering as one drafts and revises. How do our words shade our sentences with positive or negative associations? And how can we play with those associations to surprise readers?

Consider the example of break out, a two-word verb studied by Dominic Stewart in his 2010 book Semantic Prosody: A Critical Evaluation. What sort of things break out? Typically, it’s wars, crises, fires, conflicts, violence, insurrections, diseases, inflammations. As writers, we can reverse that tone with phrasings like “peace broke out” or “hope broke out,” giving peace and hope the sudden eruption often associated with negative events.  

As readers, we ought to be aware of potential semantic prosody in the media we consume. When we encounter words like utterly, symptomatic, chill, threaten, rife, or give rise to, what subtle tones are being communicated to us? There’s not, as yet, a dictionary of semantic prosody where you can look up a word’s preferences, but you can certainly think about them.

Featured image by Pawel Czerwinski via Unsplash (public domain)


Do nouns have tense?

By Edwin Battistella

You may have seen the 2021 BBC story with the heading “Nirvana sued by the baby from Nevermind’s album cover.”  

The 1991 album cover showed the then-baby nude and swimming toward a dollar bill on a fishhook. Not long ago, the thirty-ish grown-up baby sued the former members of Nirvana and the estate of Kurt Cobain, accusing them of child pornography and of causing him emotional damage. But the headlines make it sound like a baby is suing. CBS News tried for tortuous clarity, offering “Man shown as a baby on Nirvana’s Nevermind album appeals ruling in band’s favor.”

The clunkiness of both attempts is a reminder that English noun phrases have something called a “temporal interpretation.” That’s linguist-speak for how we understand their place in time relative to the tense of the verb. You can think of it as a time stamp on a noun phrase.

There is a robust academic literature on temporal nouns and some excellent articles, dissertations, and books by scholars such as Irene Heim, Murvet Enç, Renate Musan, and Judith Tonhauser, among others.

The basic point is that a noun can be understood as existing at a different time than the verb. For example, a tired mom might elbow her partner in the middle of the night and grumble, “It’s your turn to feed the baby.” The baby and the feeding are both in the present. But an adult showing old family photos might comment that “The baby in that picture is me.” The baby is in the past. And a pregnant mother-to-be would be likely to say that “The baby was kicking all day,” using baby rather than fetus or baby-to-be. The baby is in the not too distant future.

“A noun can be understood as existing at a different time than the verb.”

We often gloss over the alignment or misalignment of a noun’s time stamp with its predicate. If I say that “A student of mine was named to the board of Tesla,” it can be understood as referring to a current student or a former student. University students are not typically on corporate boards (even Elon Musk’s), so the likelihood is that I mean a former student. A listener gets that, even if I choose not to make things unambiguous by adding former.  

Modifiers like former or current help to place a noun phrase in time, giving nuance: a current student, a former Governor, a sitting Senator, an erstwhile roommate, a past lover, a one-time contender, an old professor, the present administration, and so on.  

Former baby sounds a bit odd, since babyhood is something that one naturally grows out of, like being a high school student. So, consider a sentence like “A high school student invented a new type of telescope.” The phrase high school student suggests a temporality, so just a date will do. Adding former would confuse matters.

Other nouns have similar temporal limitations: Murvet Enç’s clever example “Every fugitive is now in jail” means that those who were formerly fugitives are now jailed, since fugitive implies flight. And nouns like captive and hostage can be used to refer to those who have been recently released or freed, though after a certain interval former seems necessary. “Every incumbent is running again” is fine, but with current added, we get redundancy.  

Sometimes the mismatch of verb tense and noun temporal interpretation jumps out at us, like when a baby files a lawsuit. Sometimes it slips right by. The lesson here is to keep an eye on both the tense of verbs and the time stamp of nouns.  

Featured image by Sonja Langford on Unsplash (public domain)


A Q&A with Bryan Garner, “the least stuffy grammarian around”

The fifth edition of Garner’s Modern English Usage has recently been published by OUP. I was happy to talk to Bryan Garner—who has been called “the least stuffy grammarian around” and was declared a “genius” by the late David Foster Wallace—about what it means to write a usage dictionary. 


What possesses someone to undertake a usage dictionary?

“Possesses” is a good word for it. In my case, it was a matter of falling in love with the genre as a teenager. I discovered Eric Partridge’s Usage and Abusage (1942) and immediately felt that it was the most fascinating book I’d ever held. Partridge discussed every “problem point” in the language—words that people use imprecisely, phrases that professional editors habitually eliminate, words that get misspelled because people falsely associate them with similar-looking words, the common grammatical blunders, and so on. And then Partridge had essays on such linguistic topics as concessive clauses, conditional clauses, elegancies, hyphenation, negation, nicknames, and obscurity (“It may be better to be clear than clever; it is still better to be clear and correct.”).

At the age of 16, I was going on a ski trip with friends, and the book had just arrived in the mail as I was leaving for New Mexico. I stuck it in my bag and didn’t open it until we arrived at the ski lodge. Upon starting to read it, I was hooked. In fact, I didn’t even ski the first day: I was soaking up all that I could from Usage and Abusage, which kept mentioning some mysterious man named Fowler.

So when I got home, I ordered Fowler’s Modern English Usage (2d ed. 1965), and when it arrived I decided it was even better. By the time I was 17, I’d memorized virtually every linguistic stance taken by Partridge and Fowler, and I was thoroughly imbued with their approach to language. By the time I’d graduated from high school, I added Wilson Follett, Bergen Evans, and Theodore Bernstein to the mix. I was steeped in English usage—as a kind of closet study. I spent far more time on these books than I did on my schoolwork.

I suppose in retrospect it looks predictable that I’d end up writing a usage dictionary. I started my first one (A Dictionary of Modern Legal Usage) when I was 23, and I’ve been at it ever since. That was 41 years ago, and it ended up being my first book with Oxford University Press.

There must be a further backstory to a teenager who suddenly falls in love with usage books. What explains that?

You’re asking me to psychoanalyze myself? Okay, it’s true. When I was four, in 1962, my grandfather used Webster’s Second New International Dictionary as my booster seat. I started wondering what was in that big book.

Then, in 1974, when I was 15, one of the most important events of my life took place. A pretty girl in my neighborhood, Eloise, said to me, with big eyes and a smile: “You know, you have a really big vocabulary.” I had used the word facetious, and that prompted her comment.

It was a life-changing moment. I would never be the same.

I decided, quite consciously (though misguidedly), that if a big vocabulary impressed girls, I could excel at it as nobody ever had. By that time, my grandparents had given me Webster’s Second New International Dictionary, which for years had sat on a shelf in my room. I took it down and started scouring the pages for interesting, genuinely useful words. I didn’t want obsolete words. I wanted serviceable words and remarkable words. I resolved to copy out, by hand, 30 good ones per day—and to do it without fail.

“I decided, quite consciously (though misguidedly), that if a big vocabulary impressed girls, I could excel at it as nobody ever had.”

I soon discovered I liked angular, brittle words, such as cantankerous, impecunious, rebuke, and straitlaced. I liked aw-shucks, down-home words, such as bumpkin, chatterbox, horselaugh, and mumbo-jumbo. I liked combustible, raucous words, such as blast, bray, fulminate, and thunder. I liked arch, high-toned words, such as athwart, calumny, cynosure, and decrepitude. I liked toga-wearing, Socratic-sounding words, such as eristic, homunculus, palimpsest, and theologaster. I liked mellifluous, polysyllabic words, such as antediluvian, postprandial, protuberance, and undulation. I liked the technical and quasi-technical terms of rhetoric, such as asyndeton, periphrasis, quodlibet, and synecdoche. I liked frequentative verbs with an onomatopoetic feel, such as gurgle, jostle, piffle, and topple. I liked evocative words about language, such as billingsgate, logolatry, wordmonger, and zinger. I liked scatological, I-can’t-believe-this-term-exists words, such as coprolalia, fimicolous, scatomancy, and stercoraceous. I liked astonishing, denotatively necessary words that more people ought to know, such as mumpsimus and ultracrepidarian. I liked censoriously yelping words, such as balderdash, hooey, pishposh, and poppycock. I liked mirthful, tittering words, such as cowlick, flapdoodle, horsefeathers, and icky.

In short, I fell in love with language. I filled hundreds of pages in my vocabulary notebooks.

In the end, I decided that I liked the word lexicographer better than copyist, so I tried my hand at it.

What about Eloise? Did she respond well?

I was trying to impress her, it’s true. I never called her. I just started using lots of big words. It took me about two years to realize that big words, in themselves, have no intrinsic value in attracting females. Perhaps the opposite.

But that’s okay. By the time I was 17, I had this prodigious vocabulary. I thought of SAT words as being quite elementary. I had a larger vocabulary then than I do today. You can see why, at the ski lodge in early 1975, this particular teenager was absolutely primed to relish the work of Eric Partridge and H.W. Fowler.

You’re not limited to English usage, are you? You’ve written other language-related books—what, 28 of them with different publishers?

That’s true. But it all began with words and English usage. Then I moved to legal lexicography and other language-related topics.  

Many if not most lexicographers today are interested in slang, in current catchphrases, and in jargon—the more shifting and volatile parts of language. (Always something new!) I’m different. I’ve always been interested in the durable parts. In my usage book, I tackle the difficult question of what, precisely, constitutes Standard Written English. In any era, that’s a complicated question or series of questions. And so I’ve answered it in a 1,200-page book, word by word and phrase by phrase. It’s intended for writers, editors, and serious word lovers.

Bryan Garner, author of Garner’s Modern English Usage, Fifth Edition

Within Garner’s Modern English Usage, you intersperse essays of the kind you mentioned earlier, don’t you?

Of course. I’m very Fowlerian and Partridgean in my mindset. Though all my essays are original, some bear the same category-titles as Fowler’s (for example, “Archaisms,” “Needless Variants,” and “Split Infinitives”) or Partridge’s (“Clichés,” “Johnsonese,” and “Slang” [yes, that]). Meanwhile, I’ve created new essay-categories of my own, much in the mold of my admired predecessors: “Airlinese,” “Estranged Siblings,” “Hypercorrection,” “Irregular Verbs,” “Skunked Terms,” “Word-Swapping,” and the like. I have a dozen new essays in the fifth edition, including “Irreversible Binomials,” “Loanwords,” “Prejudiced and Prejudicial Terms,” “Race-Related Terms,” and “Serial Comma” (a big one). These essays are some fun.

You also have lots of new short entries, don’t you? Didn’t I read that there are 1,500 of them?

Yes, something like that. Consider an example. Note that an asterisk before a term denotes that it’s nonstandard:      

  tic-tac-toe (the elementary game in which two players draw X’s or O’s within a pattern of nine squares, the object being to get three in a row), a phrase dating from the mid-1800s in AmE, has been predominantly so spelled since about 1965. Before that, the variants *tick-tack-toe, *ticktacktoe, and even *tit-tat-toe were about equally common. The British usually call the game noughts and crosses.

                   Current ratio (tic-tac-toe vs. *tit-tat-toe vs. *tick-tack-toe vs. *ticktacktoe): 96:4:3:1

There are thousands of such entries. As you can see, a usage-book entry is entirely different from a normal dictionary entry.

At the ends of your entries, you include ratios about relative frequency in print.

Yes. Those are key. I’m capitalizing on big data, which makes GMEU entries empirically grounded in a way that earlier usage books couldn’t be. This is a great era for lexicographers and grammarians: we can assess word frequencies in various databases that include millions of published and spoken instances of a word or phrase. By comparison, the evidence on which Fowler and Partridge based their opinions was sparse. In my case, opinion is kept to a minimum, and facts come to the fore. Sometimes that entails inconveniently discovering that the received wisdom has been way off base.
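As a rough illustration of how such ratios fall out of raw frequency counts (the counts below are invented, and GMEU’s actual sources and pipeline are of course more elaborate):

```python
# Normalize corpus hit-counts against the rarest variant to get a
# compact ratio like the tic-tac-toe entry's "96:4:3:1".
from collections import Counter

def current_ratio(counts):
    floor = min(counts.values()) or 1
    ordered = counts.most_common()  # most to least frequent
    variants = " vs. ".join(variant for variant, _ in ordered)
    numbers = ":".join(str(round(n / floor)) for _, n in ordered)
    return f"({variants}): {numbers}"

# Hypothetical hit-counts for the four spellings:
hits = Counter({"tic-tac-toe": 9600, "tit-tat-toe": 400,
                "tick-tack-toe": 300, "ticktacktoe": 100})
print("Current ratio", current_ratio(hits))
# Current ratio (tic-tac-toe vs. tit-tat-toe vs. tick-tack-toe vs. ticktacktoe): 96:4:3:1
```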

Some people ask why we need a new edition of Garner’s Modern English Usage after only six years.

“People who say they’re sticking to the original Fowler might as well be driving an original Model-T.”

I’ve heard that. It’s a naive view. For one thing, the empirical statistics on relative word frequencies have been updated from 2008 to 2019. The language has evolved: email is now predominantly solid. There are thousands of updated ratios, and some of the judgments differ from those in past editions. For example, overly and snuck are now declared to be unobjectionable.

Every single page of the book has new material. It’s a big improvement. The six years have allowed for much more research.

People who say they’re sticking to the original Fowler might as well be driving an original Model-T.

Here’s something reference books have in common with medical devices. There’s no reason for a new one unless it’s a significant improvement over its precursors. That’s how the field gets better and better.

The book has been praised as “a stupendous achievement” (Reference Reviews) and “a thorough tour of the language” (Wall Street Journal). You’ve been called “David Foster Wallace’s favorite grammarian” (New Yorker) and “the world’s leading authority on the English language” (Business Insider). That’s heady stuff, isn’t it?

I’m just a dogged researcher. That’s all. Research is simply formalized curiosity, and I seem to have an inexhaustible curiosity about practical problems that arise for writers and editors. I certainly wouldn’t call myself “the world’s leading authority on the English language.”

I’ve also been helped by generous scholars, especially by John Simpson, the Oxford lexicographer, and Geoffrey K. Pullum, the Edinburgh grammarian. And then I had a panel of 34 critical readers who minutely reviewed 55-page segments for suggested improvements. I can’t tell you how grateful I am for the contributions of all these erudite friends.

In any event, a lexicographer must be especially adept at delayed gratification. You labor for years and then wait. You’re lucky, as Samuel Johnson once said, if you can just “escape censure.” That some people have praised my work, after all these years of toil, is certainly pleasing. But for me, the real pleasure is in the toil itself: asking pertinent questions and finding useful, fact-based answers to all the nettlesome problems that arise in our wildly variegated English language.

Find out more and buy Garner’s Modern English Usage, Fifth Edition

