
Eukaryotes may all derive from an Asgard ancestor


Eukaryotes, complex life forms with nuclei in their cells, including all the world’s plants, animals, insects, and fungi, trace their roots to a common Asgard archaean ancestor, research finds.

That means eukaryotes are, in the parlance of evolutionary biologists, a “well-nested clade” within Asgard archaea, similar to how birds are one of several groups within a larger group called dinosaurs, sharing a common ancestor. The team has found that all eukaryotes share a common ancestor among the Asgards.

No fossils of eukaryotes have been found from farther back than about 2 billion years ago, suggesting that before that, only various types of microbes existed.

“So, what events led microbes to evolve into eukaryotes?” says Brett Baker, associate professor of integrative biology and marine science at the University of Texas at Austin. “That’s a big question. Having this common ancestor is a big step in understanding that.”

Led by Thijs Ettema of Wageningen University in the Netherlands, the research team identified the closest microbial relative to all complex life forms on the tree of life as a newly described order called the Hodarchaeales (or Hods for short). The Hods, found in marine sediments, are one of several subgroups within the larger group of Asgard archaea. The findings appear in Nature.

The Asgard archaea evolved more than 2 billion years ago, and their descendants are still living. Some have been discovered in deep sea sediments and hot springs around the world, but so far only two strains have been successfully grown in the lab. To identify them, scientists collect their genetic material from the environment and then piece together their genomes. Based on genetic similarities with other organisms that can be grown in the lab and studied, the scientists can infer metabolism and other features of the Asgards.

“Imagine a time machine, not to explore the realms of dinosaurs or ancient civilizations, but to journey deep into the potential metabolic reactions that could have sparked the dawn of complex life,” says Valerie De Anda, a researcher in Baker’s lab. “Instead of fossils or ancient artifacts, we look at the genetic blueprints of modern microbes to reconstruct their past.”

The researchers expanded the known Asgard genomic diversity, adding more than 50 undescribed Asgard genomes as input for their modeling. Their analysis indicates that the ancestor of all modern Asgards appears to have been living in hot environments, consuming CO2 and chemicals to live. Meanwhile, Hods, which are more closely related to eukaryotes, are metabolically more similar to us, eating carbon and living in cooler environments.

“This is really exciting because we are looking for the first time at the molecular blueprints of the ancestor that gave rise to the first eukaryotic cells,” De Anda says.

Support for this research came from the Origin of Eukaryotes program at the Moore and Simons Foundations; the US National Science Foundation; the Wellcome Trust Foundation; the European Research Council; the Swedish Research Council; the Dutch Research Council; the National Natural Science Foundation of China; the Wenner-Gren Foundation; the Science for Life Laboratory (Sweden); and the European Commission’s Marie Skłodowska-Curie Actions.

Source: UT Austin

The post Eukaryotes may all derive from an Asgard ancestor appeared first on Futurity.

What Color is the Sun?

Quick, without looking, what color is the sun? Would you believe it's green? Also, please don't look directly at the sun.

The post What Color is the Sun? appeared first on The Scholarly Kitchen.

Charles Taylor, Psychological Selfhood, and Disenchantment

This essay is part of Public Discourse’s Who’s Who series, which introduces and critically engages with important thinkers who are often referenced in political and cultural debates, but whose ideas might not be widely known or understood. The series previously considered the life and work of Hannah Arendt, Antonio Gramsci, Jacques Maritain, Michael Oakeshott, Charles De Koninck, and Harry V. Jaffa and Allan Bloom.

Every year I teach a class on the collapse of Christianity in western society, asking the question why it was so easy to believe in God in the year 1500 and yet so difficult today. And in helping students to answer that question, my most useful guide has been Canadian philosopher Charles Taylor. Indeed, the question itself is drawn from one of his major works, A Secular Age.

Taylor is a remarkable philosopher. He has made significant contributions to studies of Hegel, the importance of language, and the nature of politics. He has also developed theories of selfhood, set out in Sources of the Self and its successor, A Secular Age, which together form a stunning tour de force. Taylor has also been active politically, helping to found New Left Review and standing for election in Canada as a candidate of the New Democratic Party on three separate occasions. Even though he’s a man of the left, those of a more conservative bent have much to learn from him.

Given the wide-ranging nature of Taylor’s philosophical interests, an introductory essay such as this must be highly selective. Yet there is a theme that ties Taylor’s work together, from Hegel to his interest in language: philosophical anthropology. This subject investigates the question: what is it that makes human beings and their social existence distinctive? This question is central to his arguments in Sources of the Self and A Secular Age as he investigates how individuals imagine themselves.

The Failure of the Popular Narrative

Perhaps the most common explanation of religious decline in western society is that religious belief is now obsolete thanks to the growth of scientific knowledge. This account views religion as a means of control over nature that, in the wake of scientific and technological developments, has lost its importance. Thus, where once the farmer prayed for rain, now we have irrigation. Where once the villager at the foot of the volcano engaged in sacred rites to placate the volcano god, now we understand that seismology, not animal sacrifice, is a better predictor of eruption. Sometimes this is called the subtraction narrative: it accounts for the modern world by seeing it as what is left after religion has been removed or replaced by science.

This is in many ways the classic Enlightenment account of secularization, found variously in Kant’s notion of enlightenment as humanity reaching adulthood, Freud’s critique of religion as infantile, and Darwin’s proclamation of evolution. Taylor’s concern, however, is that it is too simplistic an account, for it never asks a very important question: why does science come to have such authority that it is able to displace religion? The point is profound. Most people in the West do not believe in the authority of science because they are deeply read in scientific matters. Rather, they live in a world where science is intuitively plausible, just as five hundred years ago people lived in a world where religion was intuitively plausible. The question of the replacement of the latter with the former cannot therefore be answered simply with reference to science. The question of how and why science has been granted such authority is the real issue, and one that the standard narrative assumes rather than explains.

Where once the villager at the foot of the volcano engaged in sacred rites to placate the volcano god, now we understand that seismology, not animal sacrifice, is a better predictor of eruption.

 

The Social Imaginary

In light of this, Taylor points to what he calls the social imaginary. The phrase is somewhat inelegant, using the adjective “imaginary” as a noun. Yet the concept is important. The social imaginary is the set of beliefs and practices that reflect and reinforce the intuitions of a given culture or society. Saluting the flag, celebrating July Fourth as a holiday, and believing in the wisdom embodied in the U.S. Constitution would be three examples of things that have traditionally informed the American social imaginary. Few people ask why they do or believe these things; they are simply intuitive to those who belong to the culture of the United States and provide the framework or the lens through which the nation and its relationship to its citizens and to other nations are understood. Families too have their rituals, rhythms, and assumptions that inform how their members understand themselves and relate to others.

Thus, for Taylor, the question of how religion moves from being the default intuition of the members of a society to being optional or even marginal is a question of how the social imaginary has been transformed. The shift to scientific supremacy is a matter of the imagination, not of the blunt facts of science intruding upon us.

The Disenchantment of the World

Central to this transformation is what Taylor (borrowing from Max Weber) calls disenchantment. While the medieval world was enchanted, the modern world in which we dwell is disenchanted. A naïve response to this might be that our world too is full of interest in the supernatural—not simply in terms of traditional religious commitments, where church, synagogue, temple, and mosque continue to find a place in the lives of many people—but also in the plethora of other spiritualities, from yoga to tarot cards. Do these things not prove that we still live in an enchanted age?

Such an objection carries some weight with those who wish to read “disenchanted” as connoting the wholesale rejection of religion or mystery, but it does not really address what Taylor is pointing to. A disenchanted age is not necessarily characterized by complete repudiation of the supernatural. Rather, it is characterized by a fundamental shift in the function of the supernatural. And a world where we now have a choice of enchantments, so to speak, is a world that is differently enchanted—and arguably disenchanted—because the supernatural no longer stands in the same relation to the world as it once did.

A couple of examples help clarify Taylor’s point. Take a traditional Catholic who believes in the ecumenical creeds. In so doing, he believes the same thing that Christians committed to those creeds have believed throughout the centuries. But there is a difference: today’s Catholic cannot believe them in the same way as, say, a Catholic in 1500. This is because today’s Catholic chooses to believe them, and that in the face of a cultural default that does not do so. The Catholic in 1500 really had no choice and, in believing, reflected the cultural intuitions and dispositions of his day. On this level, such faith represents something different today.

A disenchanted age is not necessarily characterized by complete repudiation of the supernatural. Rather, it is characterized by a fundamental shift in the function of the supernatural.

 

As a second example, imagine being a Christian believer in 1500 and waking up one morning to find that one does not believe in God any more. Everything fundamentally changes at that point. Up until then, you believed that the only thing keeping the universe in order, the only thing that guaranteed that the sun would rise each morning, was the existence of God. To cease believing in him is therefore virtually impossible and, if done, requires a fundamental rethinking of everything.

Today, doubt among religious believers and even complete loss of faith are rarely accompanied by a deep existential crisis about the entire universe, even if they precipitate a certain localized angst about relationships or personal mortality. This is because even religious believers are accustomed to living in a world that seems to operate effectively for believer and unbeliever alike. For example, experience teaches that antibiotics are a more reliable form of medical treatment than prayer alone. One can, of course, see antibiotics as a gift of God and an answer to prayer, but one does not need to do so. Nor does their efficacy depend on that belief. Our world is thus at least much less enchanted than that of 1500, even if individual groups maintain certain supernatural beliefs.

The Buffered Self

At the heart of this disenchantment for Taylor is not the traditional science-versus-religion narrative we noted at the start. Rather, he sees the key as being a transformation in the way in which the self is understood. By self he understands not merely the awareness individuals might have of themselves as individual self-consciousnesses. For Taylor, selfhood is how people understand themselves as individuals in connection to the world around them, and what they see as the nature of being a human person. The contrast between the Middle Ages and today is one that Taylor characterizes as between the porous self and the buffered self.

The porous self is one that does not draw a sharp boundary between the inner and the outer, between the psychological and the material, between the physical world and the spiritual. The buffered self is the self that does make a clear distinction between these things. And it is the rise of the latter that connects to the disenchantment of our current age.

The distinction is important but also complex. Indeed, both Sources of the Self and A Secular Age spend significant time exploring the distinction, and any summary runs the risk of oversimplification. Nonetheless, a couple of examples can again illuminate Taylor’s argument.

One example he uses himself is that of depression. In medieval times, depression—or melancholy as it was called—was connected to the notion of black bile. Today, we connect it to physiological issues such as a hormonal imbalance. One might be tempted to say that the difference between the two is thus simply one of depth of scientific knowledge: we now know that black bile does not exist, but both medievals and moderns see that a physiological cause for psychological dysfunction is in play. But this would be to misunderstand the difference between the two. While we moderns see hormonal imbalance as causing depression, the medieval mind sees black bile as being itself the melancholy. In other words, we distinguish the self—a psychological entity—from the physical, which acts on the real “us” as an external force; the medieval sees the self as in the grip of the physical and inseparable from it or, better still, permeable by it.

The physical world carried a powerful authority that extended to the spiritual and determined the nature of the self. But we moderns do not live in such a world. Ours is a world of immanence, not transcendence, explicable in terms of itself and where the supernatural does not plausibly blend with the natural.

 

A second example is that of the relationship between the supernatural and the natural world. For the medieval mind, the spiritual or supernatural was a physical presence in the natural world: religious relics possessed an intrinsic power, for instance. Thus, when the king touched the one suffering from scrofula, the power of the physical touch healed the illness because the king, by virtue of his status as king, possessed supernatural healing powers. Likewise, when a fragment of the true cross was adored, the pilgrim was blessed. On the negative side, goblins, demons, and even the devil himself were physical realities within the material world. The physical world carried a powerful authority that extended to the spiritual and determined the nature of the self. But we moderns do not live in such a world. Ours is a world of immanence, not transcendence, explicable in terms of itself and where the supernatural does not plausibly blend with the natural. Even Christians who may well believe in a personal devil will typically not imagine him as a discrete physical presence in a particular place, but rather as a supernatural influence that cannot be specifically localized.

The displacement of the porous self by the buffered self is a long story, but with the crisis of the papacy in the fourteenth and fifteenth centuries, and with the Reformation of the sixteenth, the nature and stability of external authority started to become more and more equivocal. Add to this changes in technology, above all the printing press (with the correlative rise in literacy and then private reading), and economies (with the move from dependence on the land and the seasons to production and trade).

These changes meant that the old external framework for identity and a sense of self started to weaken and then plunge into constant flux. The material world became less authoritative. The result was that there was an inward move, so that identity and security came to be found more in the individual psychological sphere than in the given external world. Montaigne gave a literary focus to the first person, just as Descartes did from a philosophical perspective.

The exploration of the inner space became critical. And as this happened, so the porous self gave way to the buffered self. Such a self will tilt toward finding science, for example, an increasingly plausible way of understanding the universe, not because it understands the elaborate arguments, but because the scientific way of looking at the world—as material that in itself possesses no intrinsic spiritual significance—resonates with the intuitions of the buffered self. The really important things for the individual are psychological. The material world is a separate sphere.

There is far more to Taylor’s philosophical analysis of modernity than can ever be covered in the space of this article. But above all is the central key to his thinking: to understand our world, we need to understand how human beings intuitively relate to that world. That requires understanding the changes in the notion of selfhood that have taken place over the last five hundred years. Only then will we come to a better understanding of why religion and religious people find themselves in such a highly contested position in our culture.

Image credit: Makhanets – Own work.

How Economic Crises Make Incumbent Leaders Change Their Regimes from Within

Guest post by Vilde Lunnan Djuve and Carl Henrik Knutsen

In March 2020, COVID-19 generated a major emergency in countries across the world with public fear of the virus, lockdowns, and economies going into a tailspin. Yet, observers and citizens in many countries were worried about one additional thing, namely that their leaders would use the ongoing crisis as a window of opportunity for concentrating power in their own hands and thereby (further) undermine democracy. This was the case in Hungary, for example, where Viktor Orban’s government was granted the power to rule by decree. Such fears are not unfounded: History suggests that whenever leaders declare states of emergency in response to a (perceived or real) crisis, democratic decline becomes much more likely.

The COVID-19 crisis, in many ways, was unprecedented in its global scope and wide-ranging ramifications. Yet even more conventional crises, such as a “regular” economic recession with increased unemployment and reduced incomes, could have notable political consequences. From previous research, we also know that crises are related to various tumultuous political events such as civil war, coups d’état, and revolutions.

But very often regimes are changed not by some outside force such as military officers conducting coups or by revolutionaries in the streets. Instead, global data from the last two centuries show that the incumbent regime elites, including the sitting leaders themselves, are very often involved as key actors in processes of regime change. Does economic crisis also increase the chances of such incumbent-guided transitions?

In our new study, we investigate the relationship between economic crisis and regime changes driven by regime incumbents. We find that the relationship between economic crisis and incumbent-driven transitions (when treating them as one category) is very clear and at least as strong as the relationship between crisis and coups d’état. In other words, in the wake of economic crisis, the risk of regime change driven by sitting presidents or other top leaders increases just about as much as the risk of coups.

Why do we find such a robust relationship between economic crisis and incumbent-guided transitions? We propose two complementary explanations:

Are economic crises “windows of opportunity” for aspiring autocrats?

First, we argue that economic crises can work as windows of opportunity for incumbent leaders who are eager to expand their grip on power, make sure that they stay in power in the future, and diminish the role of the opposition. The idea is that, like during a pandemic (albeit typically on a smaller scale), citizens are more willing to accept extreme measures from their incumbents when crises loom. This gives leaders leeway to blame common enemies, ensure support where they otherwise cannot find it, and pursue regime change in a direction they inherently prefer.

Indeed, we find in our study that there is a strong and systematic relationship between economic crises and non-democratizing regime transitions driven by the regime incumbent. For an example of this unfolding in the real world, we can look to the self-coup of President Fujimori in Perú in April 1992, which took place after a prolonged slump in growth and the rise of the armed group Sendero Luminoso.

Can crises also trigger democratization by cornering sitting autocrats?

In a more hopeful vein for supporters of democracy, we also have reason to believe that crises can trigger incumbent-guided liberalization. Both previous scholarship and real-world examples suggest that crises may force concessions from cornered autocrats because they ultimately would prefer gradual democratization to full-fledged revolution or armed insurgency. Since we know that crises make both coups and revolutions more likely, perceptive autocrats should anticipate the heightened threat levels and thereby be more motivated to, e.g., hold general elections to defuse tensions.

For a classic example of crisis driving popular discontent, rising insurgency, and mediated democratization guided by the incumbent, we can look to Zambia when the rule of the United National Independence Party (UNIP) ended in 1991. Kenneth Kaunda and UNIP had ruled Zambia for 27 years, 18 of them under a formalized one-party state. Yet, in 1991, multi-party elections were held, followed by a relatively peaceful transfer of power to the Movement for Multi-Party Democracy (MMD). Here, the economic crisis built up substantial pressure on the regime by way of widespread protest and increasing opposition alliance building. Under such conditions, the regime ultimately opted for reform toward a regime type less favorable to itself than the status quo, presumably because this outcome was preferable to forced regime change by outside actors.

We thus know that crisis can help push the needle in some instances. However, we do not find in our analyses that there exists a robust, systematic relationship between crisis and incumbent-guided democratization, more specifically. It might be that many cornered dictators, during times of crisis, preempt the need for concessions by consolidating power instead of liberalizing. Or they make policy concessions to the opposition that fall short of democratization but still ease tensions, such as increasing pension payments.

Crises, incumbents, and watchdogs

Overall, then, we find that crises rarely pressure incumbents to democratize. Rather, crises enable regime leaders to alter their regimes either without affecting their democracy score, or by lowering it. In the midst of a global halt in democratic progress, there is thus particularly good reason to pay close attention to the actions of incumbents in weak democracies during times of crises.

Vilde Lunnan Djuve is a Postdoctoral Fellow at the Department of Political Science at the University of Oslo. Carl Henrik Knutsen is a Professor of Political Science at the University of Oslo and a Researcher at the Peace Research Institute Oslo.

New effort IDs the genes that made the mammoth

Artist's view of a mammoth superimposed over DNA sequencing data. (Credit: Beth Zaiken)

An international team of scientists has published the results of their research into 23 woolly mammoth genomes in Current Biology. As of today, we have even more tantalizing insights into their evolution, including indications that, while the woolly mammoth was already predisposed to life in a cold environment, it continued to make further adaptations throughout its existence.

Years of research, as well as multiple woolly mammoth specimens, enabled the team to build a better picture of how this species adapted to the cold tundra it called home. Perhaps most significantly, they included a genome they had previously sequenced from a woolly mammoth that lived 700,000 years ago, around the time its species initially branched off from other types of mammoth. Ultimately, the team compared a remarkable 51 genomes—16 of which are new woolly mammoth genomes: the aforementioned genome from Chukochya, 22 woolly mammoth genomes from the Late Quaternary, one genome of an American mastodon (a relative of mammoths), and 28 genomes from extant Asian and African elephants.

From that dataset, they were able to find more than 3,000 genes specific to the woolly mammoth. And from there, they focused on genes where all the woolly mammoths carried sequences that altered the protein compared to the version found in their relatives. In other words, genes where changes appear to have been naturally selected.


The Shocking Medical History of Electric Fish


Anteros, a freed slave of the Roman emperor Caesar Augustus, had decided it was a good day for a walk on the beach. Damp sand stuck to his bare feet as he walked, probably deep in great thoughts about matters that remain a mystery, when he felt a shock travel from his foot to the rest of his body, knocking him out of breath. The source of the shock, upon close inspection, was a live torpedo ray.

“Although he initially suffered an excruciating cramp, the pain he had long endured from what might have been gout miraculously disappeared,” writes historian of neuroscience Stanley Finger in his book Doctor Franklin’s Medicine.

As painful as it was, the jolt felt by Anteros would lead to the discovery of a treatment for countless diseases such as gout, arthritis, chronic headache, and more.

Long before man had discovered the scientific principles behind electricity, ancient physicians worked with electric current to treat physical and mental disorders such as epilepsy, vertigo and depression. The ancient world depended on nature for many needs now provided by technology, including sourcing electric current. Prior to the discovery of electricity, humans utilized electric fishes for all their ‘shocking’ needs.

A bas-relief at the Mastaba of Ti in Saqqara, Egypt, which depicts the Nile catfish, one of the first identified electric fish.

Ancient Egyptians were familiar with one such shock-generating fish, the Malapterurus electricus or the Nile catfish. Indigenous to the Nile, the catfish was first depicted in a mural inscribed in 02750 BC on the tomb of the architect Ti in Saqqara, Egypt. Egyptians were not the only Mediterranean culture to depict the catfish in their art; a thousand miles north and 3000 years after the Saqqara mural, similar murals could also be found in the Roman city of Pompeii. While these murals fail to reveal whether the natives of the two lands used the electric fishes for any medical purposes, ancient Egyptian writings on papyri record the use of the electric fish to relieve pain 4700 years ago. Later records by Pliny and Plutarch also report the Egyptians' use of electric eel to treat joint pain, migraines, melancholy, and epilepsy.

A translation of Hippocrates' On Regimen, prescribing boiled torpedo as a treatment.

Approximately 2500 years following the first depiction of the Nile catfish, Hippocrates of Cos, the ancient Greek physician regarded as the "father of medicine," recorded in his book On Regimen the medical uses of the Mediterranean electric ray (Torpedo torpedo) and the Nile catfish for the treatment of headaches and arthritis.

In 45 AD, word of Anteros’ miraculously cured gout reached Scribonius Largus, the court physician for the Roman Emperor Claudius. In his role as imperial philosopher — a Roman title roughly equivalent to scientist — he began experimenting with and recording the medical benefits of the live torpedo fish, and suggested placing a live torpedo on the forehead to treat a headache.

An ancient Greek "fish plate", depicting a torpedo ray in the upper left.

“Headache even if it is chronic and unbearable, is taken away and remedied forever by a live torpedo placed on the spot which is in pain, until the pain ceases. As soon as the numbness has been felt, the remedy should be removed lest the ability to feel be taken away from the part,” observed Largus. “Moreover, several torpedoes of the same kind should be prepared because the cure, that is, the torpor which is a sign of betterment, is sometimes effective only after two or three (placement of individual fishes)”.

Noting Anteros’ experience with the live torpedo, Largus recommended placing a live fish under the patient’s feet to treat gout. He also opted to use the electric shock produced by a torpedo in the treatment of conversion hysteria.

Scribonius Largus was a reputable physician of his time, and upon learning of his methods, other medical practitioners soon began mirroring his methodology, treating their patients with the electric currents of the torpedo fish. Some physicians, hoping to find a cure for paralysis, even attempted to shock their patients with the electric fish, possibly aiming to artificially cause a muscle contraction or twitch that would break through the paralysis.

Three decades after Scribonius recorded the first known medical use of electric current in human history, Dioscorides of Anazarbus, a Greek military surgeon, added new fish-based treatments to the existing list. In his book On the Material of Medicine, Dioscorides records a treatment for prolapsed anus that makes use of fish-derived electric current. For centuries, succeeding physicians applied these remedies without much attempt to confirm their worth as treatments.

Claudius Galenus, better known as Galen (130 AD - 201 AD), a Greek physician and surgeon in the Roman Empire, sought to confirm their efficacy by trying the great physicians' remedies on himself. He concluded:

“The whole torpedo, I mean the sea animal, is said by some to cure headache and reduce the prolapsed seat when applied. I indeed tried both of these and found neither to be true. Therefore, I thought that the torpedo should be applied alive to the person who has the headache, and that it could be that this remedy is anodyne and could free the patient from pain as do other remedies which numb the senses: this did so for the above-mentioned reason.”

However, Galen found one use for electric fish: he treated epilepsy with the application of the electric current from the torpedo.

From Galen's "Advice for an Epileptic Boy"

The ancient electro-therapeutic remedies continued to be practiced by physicians around the world until at least the eighteenth century. Steady experimentation with the electric discharge from these fish helped advance medical treatments for disorders such as depression, seizures, arthritis, vertigo, headache, and epilepsy.

While Middle Eastern and Asian physicians were treating diseases with the shock-inducing fishes, the Western world was just beginning to understand animal electricity. In the eighteenth century, a new field in the science of medicine emerged, known as ‘medical electricity’. In 01745, Johann Gottlob Krüger (01715 - 01759), a professor of philosophy and medicine in Halle, Germany, hypothesized that electricity, like all things, must have a utility, and since it had no use in either theology or jurisprudence, “there is obviously nothing left but medicine… The best effect would be found in paralyzed limbs.” Indeed, Krüger wasn’t entirely wrong in his hypothesis. A year later, Christian Gottlieb Kratzenstein, a student of Krüger, successfully treated patients suffering from contracted or otherwise disabled fingers using electrical currents. However, despite several promising results with different mechanisms like the Leyden jar, the first man-made capacitor, the use of electric current as a treatment did not immediately catch on among medical practitioners.

During the eighteenth century, scientists conducted a number of studies on the presence of electricity in nature - with a keen focus on its presence in animals such as the torpedo. By the late eighteenth century, Luigi Galvani, a professor of anatomy, identified the animal body as a source of electricity via his famous frog experiments. Since Galvani regarded animals as the source of electricity, death for him was an extinction of “that most noble electric fluid on which the motion, sensation, blood circulation, life itself seemed to depend.” For the anatomy professor, the idea that “death comes when blood ceases to circulate and to produce the electric fluid by friction in the brain and the nerves” was at least “plausible, if not true.”

Galvani’s explanation of death found reinforcement when Western naturalists and researchers learned of fish said to naturally produce a shock equal to or stronger than that of the man-made Leyden jar. It was John Walsh, a British scientist, who concluded after an in-depth investigation of the torpedo ray and the electric eel of Guyana that the shock produced by the aquatic animal was, in fact, electric in nature. Concluding his research, Walsh wrote:

“That the effect of the Torpedo appears to be absolutely Electrical, by forming its circuit through the same conductors with Electricity, for instance metals, animals and moist substance: and by being intercepted by the same non-conductors, for instance glass and sealing wax.”

Walsh’s research into electric fish, conducted between 01772 and 01775, is often considered as the dawn of electrophysiology.

Right: An illustration showing the torpedo's electric organ as seen from the dorsal surface. Left: A detailed view of the stacked disks that create electrical current in the torpedo. Both illustrations from John Gray McKendrick's Life in Motion.

However, electrophysiology wasn’t the only field to get its start from Walsh’s findings. Working along with Walsh on this research was the Scottish surgeon John Hunter. Focusing on the anatomical structure of the torpedo ray, Hunter found that the organs responsible for generating the electric current, as well as “picking up and directing” it in the fish, were formed by stacking numerous flat disks “one above the other”. Thirty years later, the Italian physicist Alessandro Volta borrowed the same stacked disc structure first identified by Hunter to create what was known initially as an “artificial electric organ,” a device that could provide constant electric current to a circuit. You may be more familiar with Volta’s invention as the first electrochemical battery.

The Voltaic Pile invented by Alessandro Volta – an artificial electric organ directly based on the electrical disks of the torpedo ray.

That artificial electric organ, which produced electricity by chemical means, was at first a breakthrough for the medical field as it allowed the use of Galvani's direct current to treat tumors and other diseases. In the centuries that have followed Galvani’s experiments, electrical current has proven to be far more than a medical curiosity. We have reinvented our daily lives around electricity, taking the currents that were once only found to exist in a few obscure fish species and allowing them to course through a global network of power and information.

Becky Chambers & Annalee Newitz


Join us for a thought-provoking conversation between two Hugo award-winning science fiction authors, Becky Chambers and Annalee Newitz. Known for challenging classic science fiction tropes such as war, violence, and colonialism, both authors create vivid and immersive worlds that are filled with non-human persons, peace, and a subtle sense of hope. The authors will discuss what it means to take these alternative themes seriously, delve into their writing & world building process, and explore how science fiction can help us imagine new futures that can make sense of our current civilizational struggles.

Counterrevolutions Are Much More Successful at Toppling Unarmed Revolutions. Here’s Why.

Guest post by Killian Clarke

Counterrevolutions have historically received much less attention than revolutions, but the last decade has shown that counterrevolutions remain a powerful—and insidious—force in the world.

In 2013, Egypt’s revolutionary experiment was cut short by a popular counterrevolutionary coup, which elevated General Abdel Fattah el-Sisi to the presidency. In neighboring Sudan, a democratic revolution that had swept aside incumbent autocrat Omar al-Bashir in 2019 was similarly rolled back by a military counterrevolution in October 2021. Only three months later, soldiers in Burkina Faso ousted the civilian president Roch Marc Christian Kaboré, who had been elected following the 2014 Burkinabè uprising.

These counterrevolutions all have something in common: they all occurred in the aftermath of unarmed revolutions, in which masses of ordinary citizens used largely nonviolent tactics like protests, marches, and strikes to force a dictator from power. These similarities, it turns out, are telling.

In a recent article, I show that counterrevolutionary restorations—the return of the old regime following a successful revolution—are much more likely following unarmed revolutions than those involving armed guerilla war. Indeed, the vast majority of successful counterrevolutions in the 20th and 21st centuries have occurred following democratic uprisings like Egypt’s, Sudan’s, and Burkina Faso’s.

Why are these unarmed revolutions so vulnerable? After all, violent armed revolutions are usually deeply threatening to old regime interests, giving counterrevolutionaries plenty of motivation to try to claw back power. There are at least two possible explanations.

The first is that, even though counterrevolutionaries may be desperate to return, violent revolutions usually destroy their capacity to do so. Violent revolutions grind down the old regime’s armies through prolonged guerilla war, whereas unarmed revolutions leave these armies largely unscathed. In the three cases above, there was minimal security reform following the ousting of the incumbent, forcing civilian revolutionaries to rule in the shadow of a powerful old regime military establishment.

A second explanation focuses on the coercive resources available to revolutionaries. During revolutions waged through insurgency or guerilla war, challengers build up powerful revolutionary armies, like Fidel Castro’s Rebel Army in Cuba or Mao’s Red Army in China. When these revolutionaries seize power, their armies serve as strong bulwarks against counterrevolutionary attacks. The Bay of Pigs invasion in Cuba is a good example: even though that campaign had the backing of the CIA, it quickly ran aground in the face of Castro’s well-fortified revolutionary defenses. In contrast, unarmed revolutionaries rarely build up these types of coercive organizations, leaving them with little means to fend off counterrevolutions.

After looking at the data, I found that the second explanation carries more weight than the first. I break counterrevolution down into two parts—whether a counterrevolution is launched, and then whether it succeeds—and find that armed revolutions significantly lower the likelihood of counterrevolutionary success, but not counterrevolutionary challenges. In other words, reactionaries are just as likely to attempt a restoration following both armed and unarmed revolutions. But they are far less likely to succeed against the armed revolutions, whose loyal cadres can be reliably called up to defend the revolution’s gains.

Unarmed revolutions are increasing around the world, especially in regions like Latin America, Eastern Europe, the Middle East, and sub-Saharan Africa. At the same time, violent revolutions are declining in frequency, particularly those involving long, grueling campaigns that seek transformational impacts on state and society, what some call social revolutions. In one sense, these should be welcome trends, since unarmed revolutions result in far less destruction and have a record of producing more liberal orders. But given their susceptibility to reversal, should we be concerned that we are actually at the threshold of a new era of counterrevolution?

There are certainly reasons for worry. Counterrevolutions are rare events (by my count, there have only been about 25 since 1900), and the fact that there have been so many in recent years does not augur well. Counterrevolutionaries’ prospects have also been bolstered by changes in the international system, with rising powers like Russia, Saudi Arabia, and the United Arab Emirates acting as enthusiastic champions of counterrevolution, particularly against democratic revolutions in their near-abroads. Today’s unarmed revolutions, already facing uphill battles in establishing their rule, with fractious coalitions and a lack of coercive resources, must now also contend with counterrevolutionary forces drawing support from a muscular set of foreign allies.

But though they may struggle to consolidate their gains, unarmed revolutions have a record of establishing more open and democratic regimes than armed ones do. Violent revolutions too often simply replace one form of tyranny with another. The question, then, is how to bolster these fledgling revolutionary democracies and help them to fend off the shadowy forces of counterrevolution.

International support can be crucial. Strong backing from the international community can deter counterrevolutionaries and help new regimes fend off threats. Ultimately, though, much comes down to the actions of revolutionaries themselves—and whether they can keep their coalitions rallied behind the revolutionary cause. Where they can, they are typically able to defeat even powerful counterrevolutions, by relying on the very same tactics of people power and mass protest that brought them success during the revolution itself.

Killian Clarke is an assistant professor at Georgetown University.

Mutant queens are clues to social parasite ants


A type of ant called a workerless social parasite could help clarify the molecular mechanisms behind caste differentiation, or how an ant develops into a worker or a queen.

Delving into the genetics of these unique ants could also help illuminate the biological processes that drive the development of all animals.

Ants are known as hard workers, tirelessly attending to their assigned tasks—foraging for food, nurturing larvae, digging tunnels, tidying the nest. But workerless social parasite ants are total layabouts.

The rare species exist only as queens, and they die without workers to tend to them. To survive, parasitic ants infiltrate a colony of closely related ants, where, as long as they keep their numbers relatively low, they and their offspring become the leisure class of the colony.

It’s long been thought that these determinedly lazy insects likely evolved their queenly characteristics one by one, through a series of mutations, in an isolated setting.

Now scientists in the Laboratory of Social Evolution and Behavior at Rockefeller University, together with their collaborators at Harvard University, have a new theory.

As they report in Current Biology, they’ve discovered queen-like mutants—parasitic ants that spontaneously appeared in colonies of clonal raider ants, which are typically queenless.

“This mutant is like the precursor to other parasitic species,” says lead author Waring Trible. “It’s a new way of understanding how ants evolve to become socially parasitic.”

Ant parasite subterfuge

Among the more than 15,000 identified ant species are hundreds that qualify as social parasites. Born inside a host colony, a parasitic ant will leave the colony, use a sex pheromone to attract a male from another colony to mate with, and once pregnant, will infiltrate the original colony or find another nearby. She’ll often use subterfuge to sneak past colony guards.

The shampoo ant, for example, will snatch a few ants just outside a nest entrance, lick them to acquire the colony’s signature chemical scent, and then lick herself all over to transfer it to her own body. Chemically cloaked, she then can slip inside to live out her life and reproduce both new queens and males who mate outside the colony. The males die, and the queens begin the cycle again.

Because of their uniqueness, they’ve been studied extensively by biologists since Charles Darwin. Still, there’s been a sticking point in a prevailing theory of their evolution, says Daniel Kronauer, associate professor at Rockefeller University and head of the lab. They’re closely related to their hosts, but if they had to acquire these parasitic traits over time, they would need to be isolated during reproduction; otherwise, interbreeding with their hosts would wash out their unique traits. But no one has found intermediately evolved ants—ones with some social parasite traits but not others—in the wild, says Kronauer.

When queen-like ants suddenly appeared among the clonal raider ants in Kronauer’s lab in 2015, Trible—who was looking to investigate genetic mechanisms behind caste differentiation—took notice.

Because clonal raider ants typically have no queens and reproduce asexually, the queen-like mutants stood out: they were born with wings, larger eyes, and ovaries, and as adults showed a general indifference towards labor.

But it turned out they were nothing new—they’d been hiding for years in dense colonies whose numbers obscured their presence. Genetic analysis revealed that they had mutated into existence inside the colony in which they were first detected—a community of otherwise normal ants that Kronauer had collected in Okinawa, Japan, in 2008, and that still lived in the lab. It was a clue that the typical story of parasite ant invasion might need rethinking.

The researchers then ran a series of experiments and genetic analyses. One of the first experiments was to isolate them to see whether the phenotype was heritable. Because clonal raider ants reproduce asexually, they didn’t have to worry about interbreeding with other ants.

The queen-like mutants lay eggs that developed into copies of themselves. “We knew we had something cool,” Kronauer says.

They also tested behavior. Foraging parties composed entirely of queen-like mutants were half the size of those of worker ants, and they were far less likely to try to enlist other ants to track down food. These behaviors were a sort of intermediary between the diligence of worker ants and the dependence of queens, and allowed the mutant ants to avoid the dangers inherent to leaving the safety of the colony.

Despite laying twice as many eggs as their hosts, the ants self-regulate their head count. As long as their numbers stay below about 25 percent of the host population, they do well. More than that and they run into trouble. Queens need help from workers to free their wings as they emerge from the pupae, and if there are too many queens for the workers to look after, they’ll die entangled in their pupal skin.

“They seem to have the ability to regulate their own reproduction so that they don’t drive their host colony extinct, which is a very smart thing for a parasite to do,” says Trible, a former member of Kronauer’s lab who now runs his own lab at Harvard studying these and other mutants. “This provides these mutants the capacity to survive for long periods of time.”

Ant caste mystery

Whole-genome sequencing revealed that the parasitic queens have a mutation in chromosome 13, which is structurally similar to chromosomes that regulate colony social structure in other ants. This mutant chromosome seems to contain a “supergene,” a set of genes that work together to create a phenotype. In this case, the supergene contains more than 200 individual genes, a disproportionate number of which assist in the metabolism of hormones.

These include genes that code for cytochrome p450 enzymes, which are required to synthesize hormones in both ants and humans, and may play a role in the creation of these highly unusual mutants. (This enzyme family may be familiar to anyone who has been warned to not drink grapefruit juice while taking certain medications because the juice inhibits the enzymes from detoxifying the drugs.)

It appears that with this single mutation, “their form, the higher egg production, the behavior—it can all shift in a single mutational step,” Kronauer says.

And if that’s the case, Trible says, “it would be a way that it’s actually possible to go from a normal ant to a parasite within a single species.”

That idea—that two very different forms of an animal can arise in a single species—gets at the heart of the mystery of ant castes. Because workerless social parasites arise from a very specific type of mutation affecting ant caste development, studying the queen-like mutants has the potential to reveal insights into the still-unknown molecular mechanisms that allow ant larvae to develop distinct caste morphologies. “It provides a very comprehensive framework in which to study their evolution,” says Kronauer.

The findings could also lead to further research on an important process of development known as allometric scaling that happens in all animals, including humans, Trible says. Allometric scaling keeps the tissues of an organism proportional to its body size as it grows.

Its mechanisms are unknown, but understanding them likely has relevance to many aspects of human biology, including disease, says Trible. Perhaps queen-like mutants can provide a new avenue of investigation.

“We don’t have good examples of mutations in fruit flies or mice or human genetic disorders that break allometric scaling in such a dramatic way,” he says. “We think this queen-like mutant will be a powerful tool for understanding caste development, and caste development is in turn an ideal model to investigate these larger questions about how allometric scaling works.”

Source: Joshua Krisch for Rockefeller University

The post Mutant queens are clues to social parasite ants appeared first on Futurity.

Eppur si muove: The Legend of Galileo

There are few images of the modern world more powerful than that of the humbled Galileo, kneeling before the cardinals of the Holy Roman and Universal Inquisition, being forced to admit that the Earth did not move. The story is familiar: that Galileo represents science fighting to free itself from the clutches of blind faith, biblical literalism, and superstition. The story has fascinated generations, from the philosophes of the Enlightenment to scholars and politicians in the nineteenth and twentieth centuries.

The specter of the Catholic Church’s condemnation of Galileo continues to influence the modern world’s understanding of the relationship between religion and science. In October 1992, Pope John Paul II appeared before the Pontifical Academy of the Sciences to accept formally the findings of a commission tasked with historical, scientific, and theological inquiry into the Inquisition’s treatment of Galileo. The Pope noted that the theologians of the Inquisition who condemned Galileo failed to distinguish properly between particular biblical interpretations and questions pertaining to scientific investigation.

The Pope also observed that one of the unfortunate consequences of Galileo’s condemnation was that it has been used to reinforce the myth of an incompatibility between faith and science. That such a myth is alive and well was immediately apparent in the way the American press described the event in the Vatican. The headline on the front page of The New York Times was representative: “After 350 Years, Vatican Says Galileo Was Right: It Moves.” Other newspapers, as well as radio and television networks, repeated essentially the same claim.

The New York Times story is an excellent example of the persistence and power of the myths surrounding the Galileo affair. The newspaper claimed that the Pope’s address would “rectify one of the Church’s most infamous wrongs—the persecution of the Italian astronomer and physicist for proving the Earth moves about the Sun.” For some, the story of Galileo serves as evidence for the view that the Church has been hostile to science, and the view that the Church once taught what it now denies, namely, that the Earth does not move. Some take it as evidence that teachings of the Church on matters of sexual morality or of women’s ordination to the priesthood are, in principle, changeable. The “reformability” of such teachings is, thus, the real lesson of the “Galileo Affair.”

But modern treatments of the affair not only miss key context surrounding the Inquisition’s condemnation of Galileo; they also misinterpret what the Catholic Church has always taught about faith, science, and their fundamental complementarity.

For some, the story of Galileo serves as evidence for the view that the Church has been hostile to science, and the view that the Church once taught what it now denies, namely, that the Earth does not move.

 

Galileo and the Inquisition in the Seventeenth Century

Galileo’s telescopic observations convinced him that Copernicus was correct. In 1610, Galileo’s first astronomical treatise, The Starry Messenger, reported his discoveries that the Milky Way consists of innumerable stars, that the moon has mountains, and that Jupiter has four satellites. Subsequently, he discovered the phases of Venus and spots on the surface of the sun. He named the moons of Jupiter the “Medicean Stars” and was rewarded by Cosimo de’ Medici, Grand Duke of Tuscany, with appointment as chief mathematician and philosopher at the Duke’s court in Florence. Galileo relied on these telescopic discoveries, and arguments derived from them, to bolster public defense of Copernicus’s thesis that the Earth and the other planets revolve about the sun.

When we speak of Galileo’s defense of the thesis that the Earth moves, we must be especially careful to distinguish between arguments in favor of a position and arguments that prove a position to be true. Despite the claims of The New York Times, Galileo did not prove that the Earth moves about the sun. In fact, Galileo and the theologians of the Inquisition alike accepted the prevailing Aristotelian ideal of scientific demonstration, which required that science be sure and certain knowledge, different in some ways from what we today accept as scientific. Furthermore, to refute the geocentric astronomy of Ptolemy and Aristotle is not the same as to demonstrate that the Earth moves. Danish astronomer Tycho Brahe (1546–1601), for example, had created another account of the heavens. He argued that all the planets revolved about the sun, which itself revolved about a stationary Earth. In fact, Galileo himself did not think that his astronomical observations provided sufficient evidence to prove that the Earth moves, although he did think that they called Ptolemaic geocentric astronomy into question. Galileo hoped eventually to argue from the fact of ocean tides to the double motion of the Earth as the only possible cause, but he did not succeed.

Cardinal Robert Bellarmine, Jesuit theologian and member of the Inquisition, told Galileo in 1615 that if there were a true demonstration for the motion of the Earth, then the Church would have to abandon its traditional reading of those passages in the Bible that appeared to be contrary. But in the absence of such a demonstration (and especially in the midst of the controversies of the Protestant Reformation), the Cardinal urged prudence: treat Copernican astronomy simply as a hypothetical model that accounts for the observed phenomena. It was not Church doctrine that the Earth did not move. If the Cardinal had thought that the immobility of the Earth was a matter of faith, he could not have argued, as he did, that it might be possible to demonstrate that the Earth does move.

The theologians of the Inquisition and Galileo adhered to the ancient Catholic principle that, since God is the author of all truth, the truths of science and the truths of revelation cannot contradict one another. In 1616, when the Inquisition ordered Galileo not to hold or to defend Copernican astronomy, there was no demonstration for the motion of the Earth. Galileo expected that there would be such a demonstration; the theologians did not. It seemed obvious to the theologians in Rome that the Earth did not move and, since the Bible does not contradict the truths of nature, the theologians concluded that the Bible also affirms that the Earth does not move. The Inquisition was concerned that the new astronomy seemed to threaten the truth of Scripture and the authority of the Catholic Church to be its authentic interpreter.

The Inquisition did not think that it was requiring Galileo to choose between faith and science. Nor, in the absence of scientific knowledge of the motion of the Earth, would Galileo have thought that he was being asked to make such a choice. Again, both Galileo and the Inquisition thought that science was absolutely certain knowledge, guaranteed by rigorous demonstrations. Being convinced that the Earth moves is different from knowing that it moves.

The Inquisition’s disciplinary decree was unwise and imprudent. Note, however, that the Inquisition was subordinating scriptural interpretation to a scientific theory, geocentric cosmology, that would eventually be rejected. Subjecting scriptural interpretation to a scientific theory is just the opposite of subjecting science to religious faith.

In 1632, Galileo published his Dialogue Concerning the Two Chief World Systems, in which he supported the Copernican “world system.” As a result, Galileo was charged with disobeying the 1616 injunction not to defend Copernican astronomy. The Inquisition’s injunction, however ill-advised, only makes sense if we recognize that the Inquisition saw no possibility of a conflict between science and religion, both properly understood. Thus, in 1633, the Inquisition, to ensure Galileo’s obedience, required that he publicly and formally affirm that the Earth does not move. Galileo, however reluctantly, acquiesced.

From beginning to end, the Inquisition’s actions were disciplinary, not dogmatic, although they were based on the erroneous notion that it was heretical to claim that the Earth moves. Erroneous notions remain only notions; opinions of theologians are not the same as Christian doctrine. The error the Church made in dealing with Galileo was an error in judgment. The Inquisition was wrong to discipline Galileo, but discipline is not dogma.

The Development of the Legend of Galileo

The mythic view of the Galileo affair as a central chapter in the warfare between science and religion became prominent during debates in the late nineteenth century over Darwin’s theory of evolution. In the United States, Andrew Dickson White’s History of the Warfare of Science with Theology in Christendom (1896) enshrined what has become a historical orthodoxy difficult to dislodge. White used Galileo’s “persecution” as an ideological tool in his attack on the religious opponents of evolution. Since it was so obvious by the late nineteenth century that Galileo was right, it was useful to see him as the great champion of science against the forces of dogmatic religion. The supporters of evolution were seen as nineteenth-century Galileos; the opponents of evolution were seen as modern inquisitors. The Galileo affair was also used to oppose claims about papal infallibility, formally affirmed by the First Vatican Council in 1870. As White observed: had not two popes (Paul V in 1616 and Urban VIII in 1633) officially declared that the Earth does not move?

The persistence of the legend of Galileo, and of the image of “warfare” between science and religion, has played a central role in the modern world’s understanding of what it means to be modern. Even today the legend of Galileo serves as an ideological weapon in debates about the relationship between science and religion. It is precisely because the legend has been such an effective weapon that it has persisted.

A discussion in bioethics from several years ago, for example, drew on the myths of the Galileo affair. In March 1987, when the Catholic Church published condemnations of in vitro fertilization, surrogate motherhood, and fetal experimentation, a page of cartoons appeared in one of Rome’s major newspapers, La Repubblica, under the headline “In Vitro Veritas.” In one of the cartoons, two bishops are standing next to a telescope, and in the distant night sky, in addition to Saturn and the Moon, there are dozens of test tubes. One bishop turns to the other, who is in front of the telescope, and asks: “This time what should we do? Should we look or not?” The historical reference to Galileo was clear.

In fact, at a press conference at the Vatican, then-Cardinal Joseph Ratzinger was asked whether he thought the Church’s response to the new biology would result in another “Galileo affair.” The Cardinal smiled, perhaps realizing the persistent power—at least in the popular imagination—of the story of Galileo’s encounter with the Inquisition more than 350 years before. The Vatican office Cardinal Ratzinger then headed, the Congregation for the Doctrine of the Faith, is the direct successor to the Holy Roman and Universal Inquisition into Heretical Depravity.

There is no evidence that in 1633, when Galileo acceded to the Inquisition’s demand that he formally renounce the view that the Earth moves, he muttered under his breath, eppur si muove, “but still it moves.” What continues to move, despite evidence to the contrary, is the legend that Galileo represents reason and science in conflict with faith and religion. Galileo and the Inquisition shared common first principles about the nature of scientific truth and the complementarity between science and religion. In the absence of scientific knowledge that the Earth moves, at least as both the Inquisition and Galileo understood scientific knowledge, Galileo was required to affirm that it did not. However unwise it was to insist on such a requirement, the Inquisition did not ask Galileo to choose between science and faith.

Super ancient fish skull holds oldest backboned animal brain fossil

A dark fossil shows the head of the fish with a penny next to the fossil on a white surface for scale.

A 319-million-year-old fossilized fish skull holds the oldest example of a well-preserved vertebrate brain.

Scientists pulled the skull from a coal mine in England more than a century ago. The brain and its cranial nerves are roughly an inch long and belong to an extinct bluegill-size fish. The discovery opens a window into the neural anatomy and early evolution of the major group of fishes alive today, the ray-finned fishes, according to the study in Nature.

The serendipitous find also provides insights into the preservation of soft parts in fossils of backboned animals. Most of the animal fossils in museum collections were formed from hard body parts such as bones, teeth, and shells.

The CT-scanned brain analyzed for the new study belongs to Coccocephalus wildi, an early ray-finned fish that swam in an estuary and likely dined on small crustaceans, aquatic insects, and cephalopods, a group that today includes squid, octopuses, and cuttlefish. Ray-finned fishes have backbones and fins supported by bony rods called rays.

When the fish died, the soft tissues of its brain and cranial nerves were replaced during the fossilization process with a dense mineral that preserved, in exquisite detail, their three-dimensional structure.

“An important conclusion is that these kinds of soft parts can be preserved, and they may be preserved in fossils that we’ve had for a long time—this is a fossil that’s been known for over 100 years,” says senior author Matt Friedman, a paleontologist and director of the Museum of Paleontology at the University of Michigan.

Is this really a brain?

“Not only does this superficially unimpressive and small fossil show us the oldest example of a fossilized vertebrate brain, but it also shows that much of what we thought about brain evolution from living species alone will need reworking,” says lead author Rodrigo Figueroa, a doctoral student who did the work as part of his dissertation, under Friedman, in the earth and environmental sciences department.

“With the widespread availability of modern imaging techniques, I would not be surprised if we find that fossil brains and other soft parts are much more common than we previously thought. From now on, our research group and others will look at fossil fish heads with a new and different perspective.”

The skull fossil from England is the only known specimen of its species, so only nondestructive techniques could be used during the study.

The work on Coccocephalus is part of a broader effort by Friedman, Figueroa, and colleagues that uses computed tomography (CT) scanning to peer inside the skulls of early ray-finned fishes. The goal of the larger study is to obtain internal anatomical details that provide insights about evolutionary relationships.

In the case of C. wildi, Friedman wasn’t looking for a brain when he fired up his micro-CT scanner and examined the skull fossil.

“I scanned it, then I loaded the data into the software we use to visualize these scans and noticed that there was an unusual, distinct object inside the skull,” he says.

The unidentified blob was brighter on the CT image—and therefore likely denser—than the bones of the skull or the surrounding rock.

“It is common to see amorphous mineral growths in fossils, but this object had a clearly defined structure,” Friedman says.

The mystery object displayed several features found in vertebrate brains: It was bilaterally symmetrical, it contained hollow spaces similar in appearance to ventricles, and it had multiple filaments extending toward openings in the braincase, similar in appearance to cranial nerves, which travel through such canals in living species.

“It had all these features, and I said to myself, ‘Is this really a brain that I’m looking at?'” Friedman says. “So I zoomed in on that region of the skull to make a second, higher-resolution scan, and it was very clear that that’s exactly what it had to be. And it was only because this was such an unambiguous example that we decided to take it further.”

Fish evolution

Though preserved brain tissue has rarely been found in vertebrate fossils, scientists have had better success with invertebrates. For example, the intact brain of a 310-million-year-old horseshoe crab was reported in 2021, and scans of amber-encased insects have revealed brains and other organs. There is even evidence of brains and other parts of the nervous system recorded in flattened specimens more than 500 million years old.

The preserved brain of a 300-million-year-old shark relative was reported in 2009. But sharks, rays, and skates are cartilaginous fishes, which today contain relatively few species compared with the ray-finned fish lineage that includes Coccocephalus.

Early ray-finned fishes like Coccocephalus can tell scientists about the initial evolutionary phases of today’s most diverse fish group, which includes everything from trout to tuna, seahorses to flounder.

There are roughly 30,000 ray-finned fish species, and they account for about half of all backboned animal species. The other half is split between land vertebrates—birds, mammals, reptiles, and amphibians—and less diverse fish groups like jawless fishes and cartilaginous fishes.

The Coccocephalus skull fossil is on loan to Friedman from England’s Manchester Museum. It was recovered from the roof of the Mountain Fourfoot coal mine in Lancashire and was first scientifically described in 1925. The fossil was found in a layer of soapstone adjacent to a coal seam in the mine.

Though only its skull was recovered, scientists believe that C. wildi would have been 6 to 8 inches long. Judging from its jaw shape and its teeth, it was probably a carnivore, Figueroa says.

When the fish died, scientists suspect it was quickly buried in sediments with little oxygen present. Such environments can slow the decomposition of soft body parts.

In addition, a chemical micro-environment inside the skull’s braincase may have helped to preserve the delicate brain tissues and to replace them with a dense mineral, possibly pyrite, Figueroa says.

Evidence supporting this idea comes from the cranial nerves, which send electrical signals between the brain and the sensory organs. In the Coccocephalus fossil, the cranial nerves are intact inside the braincase but disappear as they exit the skull.

“There seems to be, inside this tightly enclosed void in the skull, a little micro-environment that is conducive to the replacement of those soft parts with some kind of mineral phase, capturing the shape of tissues that would otherwise simply decay away,” Friedman says.

Skull scans

Detailed analysis of the fossil, along with comparisons to the brains of modern fish specimens from the University of Michigan Museum of Zoology collection, revealed that the brain of Coccocephalus has a raisin-size central body with three main regions that roughly correspond to the forebrain, midbrain, and hindbrain in living fishes.

Cranial nerves project from both sides of the central body. Viewed as a single unit, the central body and the cranial nerves resemble a tiny crustacean, such as a lobster or a crab, with projecting arms, legs, and claws.

Notably, the brain structure of Coccocephalus indicates a more complicated pattern of fish-brain evolution than is suggested by living species alone, according to the authors.

“These features give the fossil real value in understanding patterns of brain evolution, rather than simply being a curiosity of unexpected preservation,” Figueroa says.

For example, all living ray-finned fishes have an everted brain, meaning that the brains of embryonic fish develop by folding tissues from the inside of the embryo outward, like a sock turned inside out.

All other vertebrates have evaginated brains, meaning that neural tissue in developing brains folds inward.

“Unlike all living ray-finned fishes, the brain of Coccocephalus folds inward,” Friedman says. “So, this fossil is capturing a time before that signature feature of ray-finned fish brains evolved. This provides us with some constraints on when this trait evolved—something that we did not have a good handle on before the new data on Coccocephalus.”

Comparisons to living fishes showed that the brain of Coccocephalus is most similar to the brains of sturgeons and paddlefish, which are often called “primitive” fishes because they diverged from all other living ray-finned fishes more than 300 million years ago.

Friedman and Figueroa are continuing to CT scan the skulls of ray-finned fish fossils, including several specimens that Figueroa brought to Ann Arbor on loan from institutions in his home country, Brazil. Figueroa says his doctoral dissertation was delayed by the COVID-19 pandemic but is expected to be completed in summer 2024.

Friedman and Figueroa say the discovery highlights the importance of preserving specimens in paleontology and zoology museums.

“Here we’ve found remarkable preservation in a fossil examined several times before by multiple people over the past century,” Friedman says. “But because we have these new tools for looking inside of fossils, it reveals another layer of information to us.

“That’s why holding onto the physical specimens is so important. Because who knows, in 100 years, what people might be able to do with the fossils in our collections now.”

The study includes data produced at University of Michigan’s Computed Tomography in Earth and Environmental Science facility, which is supported by the Department of Earth and Environmental Sciences and the College of Literature, Science, and the Arts.

Sam Giles of London’s Natural History Museum and the University of Birmingham is a senior author of the study. Additional coauthors are from the University of Chicago and the University of Michigan Museum of Paleontology.

Source: University of Michigan

The post Super ancient fish skull holds oldest backboned animal brain fossil appeared first on Futurity.

The next de-extinction target: The dodo

Image of a medium-sized bird with iridescent feathers

The Nicobar pigeon, the dodo's closest living relative, is quite a bit smaller and capable of flight. (credit: Samuel Hambly / EyeEm)

Colossal is a company that got its start with a splashy announcement about plans to do something that many scientists consider impossible with current technology, all in the service of creating a product with no clear market potential: the woolly mammoth. Since that time, the company has settled into a potentially viable business model and set its sights on a species where the biology is far more favorable: the thylacine, a marsupial predator that went extinct in the early 1900s.

Today, the company is announcing a third de-extinction target and its return to the realm of awkward reproductive biology that will force the project to clear many technical hurdles: It hopes to bring back the dodo.

A shifting symbol

The dodo was a large (up to 1 meter tall), flightless bird that evolved on the island of Mauritius in the Indian Ocean. As European sailors reached the island, it quickly became a source of food for them and for the invasive species that accompanied them. It went extinct within a century of the first descriptions reaching Europe.

Hannah Arendt: Insurrection and Constitutionalism

Even though the post-war consensus over the meaning and value of specifically liberal democratic institutions seems more fragile than ever—polls show that trust in government experts and elected representatives has rarely been lower—democracy as furious dissent flourishes as it rarely has before, in vivid and vehement outbursts of anger at remote elites and shadowy enemies. ...

Antibiotic resistance induced by the widespread use of… antidepressants?

Image of a smiley face with a frown, with the lines drawn using pills.

(credit: Larry Washburn)

Jianhua Guo is a professor at the Australian Centre for Water and Environmental Biotechnology. His research focuses on removing contaminants from wastewater and on the environmental dimensions of antimicrobial resistance. One of those dimensions is the overuse of antibiotics, which promotes resistance to these drugs.

Guo wondered whether the same might hold true for other types of pharmaceuticals as well. His lab found that it does: specific antidepressants—SSRIs and SNRIs—promote resistance to different classes of antibiotics, and that resistance is heritable over 33 bacterial generations, even after the antidepressant is removed.

So much work

Antidepressants are among the most widely prescribed and ingested drugs. They account for roughly 5 percent of the pharmaceutical market share—about the same as antibiotics—and make up four of the top 10 most prescribed psychiatric meds in the US.

Ice Age effects still show up in crocodiles today

An American crocodile walks from water onto the shore.

While changing temperatures and rainfall had little impact on crocodiles’ gene flow over the past three million years, changes to sea levels during the Ice Age had a different effect.

“The American crocodile tolerates huge variations in temperature and rainfall. But about 20,000 years ago—when much of the world’s water was frozen, forming the vast ice sheets of the last glacial maximum—sea levels dropped by more than 100 meters [about 328 feet],” says José Avila-Cervantes, a postdoctoral fellow working under the supervision of Hans Larsson, a professor of biology at the Redpath Museum of McGill University. “This created a geographical barrier that separated the gene flow of crocodiles in Panama.”

The researchers point out that the crocodiles are good swimmers, but they can’t travel long distances on land. As a result, the Caribbean and Pacific crocodile populations were isolated from each other and have consequently accumulated different genetic mutations.

For the study in the journal Evolution, the team compared the climate tolerance of living populations of American crocodiles (Crocodylus acutus) to the paleoclimate estimates for the region over the past 3 million years—the time span of extreme climate variation during the Ice Age.

“This is one of the first times Ice Age effects have been found in a tropical species. It’s exciting to discover effects of the last Ice Age glaciation still resonate in the genomes of Pacific and Caribbean American crocodiles today,” Larsson says.

“Discovering that these animals would have easily tolerated the climate swings of the Ice Age speaks to their resilience over geological time. Only humans in recent decades of hunting and land development seem to really affect crocodiles,” he says.

The findings offer new insight into how environmental drivers affect genetic evolution and where conservation efforts of particular crocodile populations in Panama should be focused.

Source: McGill University

The post Ice Age effects still show up in crocodiles today appeared first on Futurity.

Researchers look a dinosaur in its remarkably preserved face

(credit: Royal Tyrrell Museum of Palaeontology)

Borealopelta markmitchelli found its way back into the sunlight in 2017, millions of years after it had died. This armored dinosaur is so magnificently preserved that we can see what it looked like in life. Almost the entire animal—the skin, the armor that coats its skin, the spikes along its side, most of its body and feet, even its face—survived fossilization. It is, according to Dr. Donald Henderson, curator of dinosaurs at the Royal Tyrrell Museum, a one-in-a-billion find.

Beyond its remarkable preservation, this dinosaur is an important key to understanding aspects of Early Cretaceous ecology, and it shows how this species may have lived within its environment. Since its remains were discovered, scientists have studied its anatomy, its armor, and even what it ate in its last days, uncovering new and unexpected insight into an animal that went extinct approximately 100 million years ago.

Down by the sea

Borealopelta is a nodosaur, a type of four-legged ankylosaur with a straight tail rather than a tail club. Its discovery in 2011 in an ancient marine environment was a surprise, as the animal was terrestrial.
