FreshRSS


Our Solar System possibly survived a supernova because of how the Sun formed


Artist's conception of the early Solar System, which was at risk of a nearby supernova. (credit: NASA/JPL-Caltech)

Stars are thought to form within enormous filaments of molecular gas. Regions where one or more of these filaments meet, known as hubs, are where massive stars form.

These massive stars, located nearby, would have put the early Solar System at risk of a powerful supernova. This risk is more than just hypothetical; a research team at the National Astronomical Observatory of Japan, led by astrophysicist Doris Arzoumanian, looked at isotopes found in ancient meteorites, finding possible evidence of a massive star’s turbulent death.

So why did the Solar System survive? The gas within the filament seems to be able to protect it from the supernova and its onslaught of radioactive isotopes. “The host filament can shield the young Solar System from stellar feedback, both during the formation and evolution of stars (stellar outflow, wind, and radiation) and at the end of their lives (supernovae),” Arzoumanian and her team said in a study recently published in The Astrophysical Journal Letters.


The Rigor of Philosophy & the Complexity of the World (guest post)

“Analytic philosophy gradually substitutes an ersatz conception of formalized ‘rigor’ in the stead of the close examination of applicational complexity.”

In the following guest post, Mark Wilson, Distinguished Professor of Philosophy and the History and Philosophy of Science at the University of Pittsburgh, argues that a kind of rigor that helped philosophy serve a valuable role in scientific inquiry has, in a sense, gone wild, tempting philosophers to the fruitless task of trying to understand the world from the armchair.

This is the third in a series of weekly guest posts by different authors at Daily Nous this summer.


[Roy Lichtenstein, “Bull Profile Series”]

The Rigor of Philosophy & the Complexity of the World
by Mark Wilson

In the course of attempting to correlate some recent advances in effective modeling with venerable issues in the philosophy of science in a new book (Imitation of Rigor), I realized that under the banner of “formal metaphysics,” recent analytic philosophy has forgotten many of the motivational considerations that had originally propelled the movement forward. I have also found that like-minded colleagues have been similarly puzzled by this paradoxical developmental arc. The editor of Daily Nous has kindly invited me to sketch my own diagnosis of the factors responsible for this thematic amnesia, in the hopes that these musings might inspire alternative forms of reflective appraisal.

The Promise of Rigor

Let us return to beginnings. Although the late nineteenth century is often characterized as a staid period intellectually, it actually served as a cauldron of radical reconceptualization within science and mathematics, in which familiar subjects became strongly invigorated through the application of unexpected conceptual adjustments. These transformative innovations were often resisted by the dogmatic metaphysicians of the time on the grounds that the innovations allegedly violated sundry a priori strictures with respect to causation, “substance” and mathematical certainty. In defensive response, physicists and mathematicians eventually determined that they could placate the “howls of the Boeotians” (Gauss) if their novel proposals were accommodated within axiomatic frameworks able to fix precisely how their novel notions should be utilized. The unproblematic “implicit definability” provided within these axiomatic containers should then alleviate any a priori doubts with respect to the coherence of the novel conceptualizations. At the same time, these same scientists realized that explicit formulation within an axiomatic framework can also serve as an effective tool for ferreting out the subtle doctrinal transitions that were tacitly responsible for the substantive crises in rigor that had bedeviled the period.

Pursuant to both objectives, in 1894 the physicist Heinrich Hertz attempted to frame a sophisticated axiomatics to mend the disconnected applicational threads that he correctly identified as compromising the effectiveness of classical mechanics in his time. Unlike his logical positivist successors, Hertz did not dismiss terminologies like “force” and “cause” out of hand as corruptly “metaphysical,” but merely suggested that they represent otherwise useful vocabularies that “have accumulated around themselves more relations than can be completely reconciled with one another” (through these penetrating diagnostic insights, Hertz emerges as the central figure within my book). As long as “force” and “cause” remain encrusted with divergent proclivities of this unacknowledged character, methodological strictures naively founded upon the armchair “intuitions” that we immediately associate with these words are likely to discourage the application of more helpful forms of conceptual innovation through their comparative unfamiliarity.

There is no doubt that parallel developments within symbolic logic sharpened these initial axiomatic inclinations in vital ways that have significantly clarified a wide range of murky conceptual issues within both mathematics and physics. However, as frequently happens with an admired tool, the value of a proposed axiomatization depends entirely upon the skills and insights of the workers who employ it. A superficially formalized housing in itself guarantees nothing. Indeed, the annals of pseudo-science are profusely populated with self-proclaimed geniuses who fancy that they can easily “out-Newton Newton” simply by costuming their ill-considered proposals within the haberdashery of axiomatic presentation (cf., Martin Gardner’s delightful Fads and Fallacies in the Name of Science).

Inspired by Hertz and Hilbert, the logical empiricists subsequently decided that the inherent confusions of metaphysical thought could be eliminated once and for all by demanding that any acceptable parcel of scientific theorizing must eventually submit to “regimentation” (Quine’s term) within a first order logical framework, possibly supplemented with a few additional varieties of causal or modal appeal. As just noted, Hertz himself did not regard “force” as inherently “metaphysical” in this same manner, but simply held that it comprised a potentially misleading source of intuitions to rely upon in attempting to augur the methodological requirements of an advancing science.

Theory T Syndrome

Over analytic philosophy’s subsequent career, these logical empiricist expectations with respect to axiomatic regimentation gradually solidified into an agglomeration of strictures upon acceptable conceptualization that have allowed philosophers to criticize rival points of view as “unscientific” through their failure to conform to favored patterns of explanatory regimentation. I have labelled these logistical predilections as the “Theory T syndrome” in other writings.

A canonical illustration is provided by the methodological gauntlet that Donald Davidson thrusts before his opponents in “Actions, Reasons and Causes”:

One way we can explain an event is by placing it in the context of its cause; cause and effect form the sort of pattern that explains the effect, in a sense of “explain” that we understand as well as any. If reason and action illustrate a different pattern of explanation, that pattern must be identified.

In my estimation, this passage supplies a classic illustration of Theory T-inspired certitude. In fact, a Hertz-like survey of mechanical practice reveals many natural applications of the term “cause” that fail to conform to Davidson’s methodological reprimands.

As a result, “regimented theory” presumptions of a confident “Theory T” character equip such critics with a formalist reentry ticket that allows armchair speculation to creep back into the philosophical arena with sparse attention to the real life complexities of effective concept employment. Once again we witness the same dependencies upon a limited range of potentially misleading examples (“Johnny’s baseball caused the window to break”), rather than vigorous attempts to unravel the entangled puzzlements that naturally attach to a confusing word like “cause,” occasioned by the same developmental processes that make “force” gather a good deal of moss as it rolls forward through its various modes of practical application. Imitation of Rigor attempts to identify some of the attendant vegetation that likewise attaches to “cause” in a bit more detail.

As a result, a methodological tactic (axiomatic encapsulation) that was originally championed in the spirit of encouraging conceptual diversity eventually develops into a schema that favors methodological complacency with respect to the real life issues of productive concept formation. In doing so, analytic philosophy gradually substitutes an ersatz conception of formalized “rigor” in the stead of the close examination of applicational complexity that distinguishes Hertz’ original investigation of “force”’s puzzling behaviors (an enterprise that I regard as a paragon of philosophical “rigor” operating at its diagnostic best). Such is the lesson from developmental history that I attempted to distill within Imitation of Rigor (whose contents have been ably summarized within a recent review by Katherine Brading in Notre Dame Philosophical Reviews).

But Davidson and Quine scarcely qualified as warm friends of metaphysical endeavor. The modern adherents of “formal metaphysics” have continued to embrace most of their “Theory T” structural expectations while simultaneously rejecting positivist doubts with respect to the conceptual unacceptability of the vocabularies that we naturally employ when we wonder about how the actual composition of the external world relates to the claims that we make about it. I agree that such questions represent legitimate forms of intellectual concern, but their investigation demands a close study of the variegated conceptual instruments that we actually employ within productive science. But “formal metaphysics” typically eschews the spadework required and rests its conclusions upon Theory T-inspired portraits of scientific method.

Indeed, writers such as David Lewis and Ted Sider commonly defend their formal proposals as simply “theories within metaphysics” that organize their favored armchair intuitions in a manner in which temporary infelicities can always be pardoned as useful “idealizations” in the same provisional manner in which classical physics allegedly justifies its temporary appeals to “point masses” (another faulty dictum with respect to actual practice in my opinion).

Philosophy’s Prophetic Telescope

These “Theory T” considerations alone can’t fully explicate the unabashed return to armchair speculation that is characteristic of contemporary effort within “formal metaphysics.” I have subsequently wondered whether an additional factor doesn’t trace to the particular constellation of doctrines that emerged within Hilary Putnam’s writings on “scientific realism” in the 1965-1975 period. Several supplementary themes there coalesce in an unfortunate manner.

(1) If a scientific practice has managed to obtain a non-trivial measure of practical capacity, there must be underlying externalist reasons that support these practices, in the same way that external considerations of environment and canvassing strategy help explicate why honey bees collect pollen in the patterns that they do. (This observation is sometimes called Putnam’s “no miracles argument”).

(2) Richard Boyd subsequently supplemented (1) (and Putnam accepted) with the restrictive dictum that “the terms in a mature scientific theory typically refer,” a developmental claim that strikes me as factually incorrect and supportive of the “natural kinds” doctrines that we should likewise eschew as descriptively inaccurate.

(3) Putnam further aligned his semantic themes with Saul Kripke’s contemporaneous doctrines with respect to modal logic which eventually led to the strong presumption that the “natural kinds” that science will eventually reveal will also carry with them enough “hyperintensional” ingredients to ensure that these future terminologies will find themselves able to reach coherently into whatever “possible worlds” become codified within any ultimate enclosing Theory T (whatever it may prove to be like otherwise). This predictive postulate allows present-day metaphysicians to confidently formulate their structural conclusions with little anxiety that their armchair-inspired proposals run substantive risk of becoming overturned in the scientific future.

Now I regard myself as a “scientific realist” in the vein of (1), but firmly believe that the complexities of real life scientific development should dissuade us from embracing Boyd’s simplistic prophecies with respect to the syntactic arrangements to be anticipated within any future science. Direct inspection shows that worthy forms of descriptive endeavor often derive their utilities from more sophisticated forms of data registration than thesis (2) presumes. I have recently investigated the environmental and strategic considerations that provide classical optics with its astonishing range of predictive and instrumental successes, but the true story of why the word “frequency” functions as such a useful term within these applications demands a far more complicated and nuanced “referential” story than any simple “‘frequency’ refers to X” slogan adequately captures (the same criticism applies to “structural realism” and allied doctrines).

Recent developments within so-called “multiscalar modeling” have likewise demonstrated how the bundle of seemingly “divergent relations” connected with the notion of classical “force” can be more effectively managed by embedding these localized techniques within a more capacious conceptual architecture than Theory T axiomatics anticipates. These modern tactics provide fresh exemplars of novel reconceptualizations in the spirit of the innovations that had originally impressed our philosopher/scientist forebears (Imitation of Rigor examines some of these new techniques in greater detail). I conclude that “maturity” in a science needn’t eventuate in simplistic word-to-world ties but often arrives at more complex varieties of semantic arrangement whose strategic underpinnings can usually be decoded after a considerable expenditure of straightforward scientific examination.

In any case, Putnam’s three supplementary theses, taken in conjunction with the expectations of standard “Theory T thinking” outfits armchair philosophy with a prophetic telescope that allows it to peer into an hypothesized future in which all of the irritating complexities of renormalization, asymptotics and cross-scalar homogenization will have happily vanished from view, having appeared along the way only as evanescent “Galilean idealizations” of little metaphysical import. These futuristic presumptions have convinced contemporary metaphysicians that detailed diagnoses of the sort that Hertz provided can be dismissed with an airy wave of the hand, “The complications to which you point properly belong to epistemology or the philosophy of language, whereas we are only interested in the account of worldly structure that science will eventually reach in the fullness of time.”

Science from the Armchair

Through such tropisms of lofty dismissal, the accumulations of doctrine outlined in this note have facilitated a surprising reversion to armchair demands that closely resemble the constrictive requirements on viable conceptualization against which our historical forebears had originally rebelled. As a result, contemporary discussion within “metaphysics” once again finds itself flooded with a host of extraneous demands upon science with respect to “grounding,” “the best systems account of laws” and much else that doesn’t arise from the direct inspection of practice in Hertz’ admirable manner. As we noted, the scientific community of his time was greatly impressed by the realization that “fresh eyes” can be opened upon a familiar subject (such as Euclidean geometry) through the exploration of alternative sets of conceptual primitives and the manner in which unusual “extension element” supplements can forge unanticipated bridges between topics that had previously seemed disconnected. But I find little acknowledgement of these important tactical considerations within the current literature on “grounding.”

From my own perspective, I have been particularly troubled by the fact that the writers responsible for these revitalized metaphysical endeavors frequently appeal offhandedly to “the models of classical physics” without providing any cogent identification of the axiomatic body that allegedly carves out these “models.” I believe that they have unwisely presumed that “Newtonian physics” must surely exemplify some unspecified but exemplary “Theory T” that can generically illuminate, in spite of its de facto descriptive inadequacies, all of the central metaphysical morals that any future “fundamental physics” will surely instantiate. Through this unfounded confidence in their “classical intuitions,” they ignore Hertz’ warnings with respect to tricky words that “have accumulated around [themselves], more relations than can be completely reconciled amongst themselves.” But if we lose sight of Hertz’s diagnostic cautions, we are likely to return to the venerable realm of armchair expectations that might have likewise appealed to a Robert Boyle or St. Thomas Aquinas.


Discussion welcome.

Elon Musk, Mars, and bioethics: is sending astronauts into space ethical?

"Elon Musk, Mars, and bioethics: is ending astronauts into space ethical?" by Konrad Szocik on the OUP blog

Elon Musk, Mars, and bioethics: is sending astronauts into space ethical?

The recent crash of the largest-ever space rocket, Starship, developed by Elon Musk’s SpaceX company, has somewhat dampened optimism about the human mission to Mars being planned for the coming years. It is worth raising the issue of the safety of future participants in long-term space missions, especially missions to Mars, against the background of this disaster. And it is not just about safety from disasters like the one that happened to Musk. Protection from the negative effects of prolonged flight in zero gravity, protection from cosmic radiation, as well as guaranteeing sufficiently high crew productivity over the course of a multi-year mission also play an important role.

Fortunately, no one was killed in the aforementioned crash, as it was an uncrewed test flight. However, past disasters in which astronauts died, such as the Space Shuttle Challenger and Space Shuttle Columbia disasters, remind us that it is the seemingly very small details that determine life and death. So far, 15 astronauts and 4 cosmonauts have died in space flights; 11 more have died during testing and training on Earth. It is worth mentioning that space flights are peaceful missions, not military operations. They are carried out relatively infrequently and by a relatively small number of people.

It is also worth noting the longer and more complex human missions planned for the near future, such as the mission to Mars. The flight itself, which is expected to last several months, is quite a challenge, and disaster can happen during takeoff from Earth, during landing on Mars, and then on the way back to Earth. And then there are further risks that await astronauts in space.

The first is exposure to galactic cosmic radiation and solar energetic particle events, especially during interplanetary flight, when the crew is no longer protected by either Earth’s magnetic field or a possible shelter on Mars. Protection from cosmic radiation for travel to Mars is a major challenge, and 100% effective protective measures are still lacking. Another challenge is prolonged exposure to zero gravity during the flight, followed by altered gravity on Mars. Bone loss and muscle atrophy are the main, but not the only, negative effects of being in these states. Finally, it is impossible to ignore the importance of psychological factors related to stress, isolation, being in an enclosed small space, and distance from Earth.

A human mission to Mars, which could take about three years, brings with it a new type of danger not known from the previous history of human space exploration. In addition to the aforementioned amplified impact of factors already known—namely microgravity, cosmic radiation, and isolation—entirely new risk factors are emerging. One of them is the impossibility of evacuating astronauts in need back to Earth, which is possible in missions carried out at the International Space Station. It seems that even the best-equipped and trained crew may not be able to guarantee adequate assistance to an injured or ill astronaut, which could lead to her death—assuming that care on Earth would have guaranteed her survival and recovery. Another problem is the delay in communication between Earth and Mars, which will reach tens of minutes. This situation will affect not only the degree of autonomy of the crew but also their responsibility. Wrong decisions, made under conditions of uncertainty, can have negative consequences not only for health and life but also for the entire mission.

“It is worth raising the question of the ethicality of the decision to send humans into such a dangerous environment.”

Thus, we can see that a future human mission to Mars will be very dangerous, both as a result of factors already known but intensified, as well as new risk factors. It is worth raising the question of the ethicality of the decision to send humans into such a dangerous environment. The ethical assessment will depend both on the effectiveness of available countermeasures against harmful factors in space and also on the desirability and justification for the space missions themselves. 

Military ethics and bioethics may provide some analogy here. In civilian ethics and bioethics, we do not accept a way of thinking and acting that would mandate the subordination of the welfare, rights, and health of the individual to the interests of the group. In military ethics, however, this way of thinking is accepted, formally in the name of the higher good. Thus, if the mission to Mars is a civilian mission, carried out on the basis of values inherent in civilian ethics and bioethics rather than military ethics, it may be difficult to justify exposing astronauts to serious risks of death, accident, and disease.

One alternative may be to significantly postpone the mission until breakthrough advances in space technology and medicine can eliminate or significantly reduce the aforementioned risk factors. Another alternative may be to try to improve astronauts through biomedical human enhancements. Just as in the army there are known methods of improving the performance of soldiers through pharmacological means, analogous methods could be applied to future participants in a mission to Mars. Perhaps more radical, and thus controversial, methods such as gene editing would be effective, assuming that gene editing of selected genes can enhance resistance to selected risk factors in space. 

But the idea of genetically modifying astronauts, otherwise quite commonsensical, given also the cost of such a mission, as well as the fact that future astronauts sent to Mars would likely be considered representative of the great effort of all humanity, raises questions about the justification for such a mission. What do the organizers of a mission to Mars expect to achieve? Among the goals traditionally mentioned are the scientific merits of such a mission, followed by possible commercial applications for the future. Philosophers, as well as researchers of global and existential catastrophes, often discuss the concept of space refuge, in which the salvation of the human species in the event of a global catastrophe on Earth would be possible only by settling somewhere beyond Earth. However, it seems that the real goals in our non-ideal society will be political and military.


Sarvagatatva in Nyāya and Vaiśeṣika: ātman, aether and materiality (mūrtatva)

The Sanskrit philosophical school called Vaiśeṣika is the one most directly dealing with ontology. Its fundamental text is the Vaiśeṣikasūtra, which is commented upon by Praśastapāda in the Padārthadharmasaṅgraha (from now on PDhS). The following is a summary of PDhS ad 8.7.

The school distinguishes substances and qualities. The first group includes four types of atoms (earth, water, fire, air) and then aether, time, space, ātmans and internal organs (manas). The latter are needed as a separate category, because they are point-sized and therefore not made of atoms, unlike the external sense faculties.
Among the 17 qualities, it recognises parimāṇa or ‘dimension’. This encompasses at first two possibilities, namely atomic (aṇu) or extended (mahat). The former covers partless entities that have allegedly no spatial dimension, like points in Euclidean geometry and atoms themselves. These are considered to be without extension and permanent through time (nitya). The latter is subdivided into mahat and paramahat. The first covers all objects one encounters in normal life, from triads of atoms (imagined to be of the size of a particle of dust, the first level of atomic structure to be extended) to the biggest mountain. These entities have parts and extension, and have an origin and an end in time. The second subdivision covers special substances, listed as ākāśa ‘aether’, space, time and ātmans, which need to be imagined to be present at each location. Such entities are also imagined to be nitya, that is, permanent through time. In other words, they are present at each location of time and space.
The above also implies that entities considered to be permanent through time can only be either atomic or all-pervasive.

However, space, time, aether and selves (ātman) are present at all locations in different ways.

About aether, to begin with, texts like Jayanta’s Nyāyamañjarī say that it needs to be accepted as a fifth substance in order to justify the diffusion of sound across multiple media. Texts of the Vaiśeṣika school, and of the allied school of Nyāya, specify that aether does not occupy all locations, but rather is in contact with each individual atom:

[The aether’s] all-pervasiveness consists in the fact that it is in contact with each corporeal (mūrta) substance.
(sarvamūrtadravyasaṃyogitvam vibhutvam (Tarkasaṃgrahadīpikā ad 14).)

This means that aether does not pervade atoms, but is in contact (saṃyoga) with each one of them.

This point is already explicit in the Nyāyabhāṣya of the allied school of Nyāya, and is needed because of the point-sized nature of atoms. If these were pervaded by aether, then they would have parts, and thus not be permanent. These undesired consequences are examined in the following:

This is impossible, because of the penetration through aether || NS 4.2.18 ||

It is impossible for an atom [to be] partless and permanent. Why? Because of the penetration through aether, that is, because an atom, if it were permeated, that is ‘penetrated’, by aether, within and outside, then, because of this penetration it would have parts, and due to having parts it would be impermanent.

Or, the aether is not all-located || 4.2.19 ||

Alternatively, we don’t accept that. There is no aether within the atoms, and therefore aether ends up not being all-located.

(ākāśavyatibhedāt tadanupapattiḥ || 4.2.18 ||
tasyāṇor niravayasya nityasyānupapattiḥ. kasmāt. ākāśavyatibhedāt. antarbahiścāṇur ākāśena samāviṣṭo vyatibhinno vyatibhedāt sāvayavaḥ sāvayavatvād anitya iti.
ākāśāsarvagatatvaṃ vā || 4.2.19 ||
athaitan neṣyate paramāṇor antar nāsty ākāśam ity asarvagatatvaṃ prasajyeta iti.)

Aether is postulated as a substrate of sound (which can move through solids, liquids and air, thus proving that it has neither earth, nor water, nor air as substrate). Thus, it needs to be unitary (multiple aethers would not explain the propagation of sound, since sound would stop at the end of the respective aether) and it needs to be present at all locations (for the same reason). In more detail: only because of the unitary nature of aether is it possible for sound to travel between different loci. Otherwise, one would have to posit some mechanism to explain how the sound encountered in one aether travels to another one. Instead, the simpler solution is to posit that aether is necessarily both single (eka) and present at all locations (vibhu).

As for ātman, the self is by definition permanent (otherwise, no afterlife nor cycle of rebirths would be possible). It cannot be atomic, though, because the ātman is the principle of awareness and people become aware of things potentially everywhere. The fact that they don’t become perceptually aware of things that are, e.g., behind a wall, by contrast, is only due to the fact that the ātman needs to be in touch with the sense faculties (indriya), via the internal sense organ (manas, which is believed to be atomic and to move quickly from one sense faculty to another), in order for perceptual awareness to take place. Yogins are able to perceive things their bodies are not in contact with because their ātmans are omnipresent, like our ātman, and are able, unlike our ātman, to connect with other bodies’ sense faculties.
Within Sanskrit philosophy, Jaina philosophers suggested that the ātman is co-extensive with the body, since it can experience whatever the body can experience. Vaiśeṣika and other non-Jaina authors disagree, because this would lead to the absurd consequence of an ātman changing in size through one’s life.

A further element to be taken into account with regard to theories of location, and in particular while adjudicating whether they are about occupation or non-occupation, is materiality.
Occupation of space seems to occur only from the level of atomic triads up to big, but not all-located, objects. Atoms are said to be mūrta, and mūrta is usually translated as ‘material’, but taken in isolation atoms do not have parts and are only point-sized. In this sense, their being mūrta refers more to their being fundamental to material entities than to their being material when taken in isolation. The distinction is theoretically relevant, but less evident at the pragmatic level, given that atoms are never found in isolation. Being mūrta is attributed to atoms of the four elements (not to aether) as well as to the inner sense organ (Nyāyakośa, s.v.), but neither to ātman nor to aether.

Experiencing different ultimate unities

Defenders of cross-cultural mystical experience are right to note that in many widely varying cultures, respected sages have referred to the experience of an ultimate nonduality: a perception that everything, including oneself, is ultimately one. But one might also then rightly ask: which ultimate nonduality?

Nondualism may be the world’s most widespread philosophy, but it can mean different things – not merely different things in different places, but different things in the same place. Members of the Indian Vedānta tradition frequently proclaimed that everything is “one, without a second”, in the words of the Upaniṣads they followed. But they disagreed as to what that meant. Śaṅkara founded the Advaita Vedānta tradition – a-dvaita literally meaning non-dual – which argued that only the one, ultimate truth (sat, brahman) was real, and all multiplicity and plurality was an illusion. His opponent Rāmānuja agreed that everything is “one, without a second” – but in his Viśiṣṭādvaita (qualified nondual) school, that meant something quite different. All the many things and people we see around us – what Chinese metaphysicians called the “ten thousand things” – are parts of that ultimate one, and they are real, not illusory.

I was reminded of this point in the great comments on my previous post about cross-cultural mysticism. I had cited W.T. Stace as an influential advocate of the view that mysticism is cross-cultural, and noted how Robert Forman’s book defended Stace by pointing to contentless experiences of void, from the Yoga Sūtras to Hasidism, that “blot out” sense perception. Seth Segall made the important point that in Stace’s own work not all mystical experiences are contentless in this way. Leaving aside the “hot” or “visionary” experiences (like St. Teresa and the angel) which Stace does not count as mystical experiences – even among what Stace counts as genuine mystical experiences, he makes a key distinction between introvertive and extrovertive mystical experiences. This isn’t just a distinction between the interpretations applied to the experiences, but between the experiences themselves. The contentless “Pure Consciousness Events” described in Forman’s book, where distinctions fade into void, are introvertive; experiences of merging with a unified natural world, like Teresa saying “it was granted to me in one instant how all things are seen and contained in God”, are extrovertive.

And here’s where I find this all really interesting: that introvertive/extrovertive distinction, between different types of experiences, corresponds to the metaphysical difference between Śaṅkara and Rāmānuja! Neither Śaṅkara nor Rāmānuja cites experience, mystical or otherwise, as the source of their philosophy. Both claim to be deriving it from the Upaniṣads (and other texts like the Bhagavad Gītā), and they each defend their view (of the scriptures and of reality) with logical arguments. Yet even so, the distinction Stace observed in descriptions of mystical experiences turns out to correspond pretty closely to the distinction between their philosophies.

In Śaṅkara’s philosophy, as in an introvertive experience, the many things of the world, including oneself, all fall away; what remains is the one reality alone. In Rāmānuja’s philosophy, as in an extrovertive experience, the things of the world, including oneself, remain, but they are all unified together: they continue to have a real existence, but as connected members of a larger unity.

All this is a major caveat for perennialist-leaning ideas: even if you were to argue that mystical experience pointed to a cross-culturally recognized nondualism, you would still have to specify which nondualism. The smartass response is to say “all the nondualisms are one”, but that’s not really satisfactory, not even to the nondualists themselves. Rāmānuja attacked Śaṅkara’s view, and while Śaṅkara lived centuries before Rāmānuja, he attacked other thinkers who had views like Rāmānuja’s.

Some mystically inclined thinkers take a moderate or intermediate position that compromises between an absolute nondual view and the view of common sense or received tradition. Such was the approach of Shaykh Ahmad Sirhindī, the Indian Sufi who reconciled Sufi experiences of mystical oneness with Qur’anic orthodoxy by proclaiming “not ‘All is Him’ but ‘All is from Him'”. It’s tempting to view Rāmānuja’s approach to Śaṅkara as similar, tempering an absolute mysticism with a common-sense view of the world as real: Śaṅkara’s mystical excesses take him way out there and Rāmānuja pulls him back. But such an approach doesn’t really work. It’s flummoxed not only by the fact that Śaṅkara claimed no mystical grounding for his philosophy, but also by the existence of extrovertive mysticism: the many who have felt an experience of oneness with the grass and trees would not have been drawn by that experience to Śaṅkara’s view, but directly to Rāmānuja’s. (I have previously suggested that Rāmānuja is indeed moderating Śaṅkara’s overall approach – but with respect to Śaṅkara’s possible autism rather than to mysticism.)

None of this is intended as a refutation of mystical views of reality, or even necessarily of perennialism. It seems to me that both introvertive and extrovertive experiences are found across a wide range of cultures, often accompanied by a sense of certainty, and are worth taking seriously for that reason. But we then need to take both seriously: if the world is one, then are our many differing perceptions illusory or real? Here, I think, it helps that both illusionist and realist forms of nondual philosophy – experientially based or otherwise – also occur in multiple places. The debates between them might help us sort out what reality – if any – the experiences are pointing to.

Cross-posted at Love of All Wisdom.

Two conversations about nature and creativity

 Featuring two theistic naturalists (panentheists), Robert S. Corrington (Drew University) and Robert Cummings Neville (Boston University).  These are two towering figures in the history of American philosophy of religion, philosophical naturalism, and philosophical theology. The conversations in these two videos span discussion of the meaning of nature, theism versus pantheism versus panentheism

Digital Library Project, Bhaktivedanta Research Center (Kolkata)

I recently received a note from Prof. Nirmalya Chakraborty (Rabindra Bharati University) about an exciting new digital library. It includes three categories: Navya-Nyāya Scholarship in Nabadwip, Philosophers of Modern India, and Twentieth Century Paṇḍitas of Kolkata. You can find the site here: https://darshanmanisha.org

You can learn more about the project from the following announcement.

Announcement

Introducing the Digital Library Project

By

Bhaktivedanta Research Center, Kolkata, India

Right before the introduction of English education in India, a new style of philosophising emerged, especially in Bengal, known as Navya-Nyāya. Since Nabadwip was one of the main centres of Navya-Nyāya scholarship in Bengal during the 15th to 17th centuries, many important works on Navya-Nyāya were written during this period by Nabadwip scholars. Some of these were published later, but many of these published works are not available now. The few copies which are available are also not in good condition. These are the works where Bengal’s intellectual contribution shines forth. We have digitized some of these materials and have uploaded them to the present digital platform.

As part of the lineage of this Nabadwip tradition, many pandits (traditional scholars) who were residents of Kolkata during the nineteenth and early twentieth centuries produced important philosophical works, some in Sanskrit and most in Bengali. Most of these works were published in the early 1900s in Kolkata, and some in neighbouring cities. These works brought about a kind of Renaissance, reviving classical Indian philosophical deliberations in Bengal. Attempts have been made to upload these books and articles to the present digital platform.

With the introduction of colonial education, a group of philosophers got trained in European philosophy and tried to interpret insights from classical Indian philosophy in a new light. Kolkata was one of the main centres of this cosmopolitan philosophical scholarship. The works of many of these philosophers from Kolkata were published in the early to mid-twentieth century. These philosophers are the true representatives of twentieth-century Indian philosophy. Efforts have been made to upload these works to the present digital platform.

The purpose of constructing the present digital platform is to enable researchers to have access to these philosophical works, with the hope that the philosophical contributions of these philosophers will be studied and critically assessed, resulting in the enrichment of the philosophical repertoire.

We take this opportunity to appeal to fellow scholars to enrich this digital library by lending us their personal collection related to these areas for digitization.

The website address of the Digital Library is: www.darshanmanisha.org

For further correspondence, please write to:

[email protected]

[email protected]

[email protected]

[email protected]

New value for W boson mass dims 2022 hints of physics beyond Standard Model


Event display of a W-boson candidate decaying into a muon and a muon neutrino inside the ATLAS experiment. The blue line shows the reconstructed track of the muon, and the red arrow denotes the energy of the undetected muon neutrino. (credit: ATLAS Collaboration/CERN)

It's often said in science that extraordinary claims require extraordinary evidence. Recent measurements of the mass of the elementary particle known as the W boson provide a useful case study as to why. Last year, Fermilab physicists caused a stir when they reported a W boson mass measurement that deviated rather significantly from theoretical predictions of the so-called Standard Model of Particle Physics—a tantalizing hint of new physics. Others advised caution, since the measurement contradicted prior measurements.

That caution appears to have been warranted. The ATLAS collaboration at CERN's Large Hadron Collider (LHC) has announced a new, improved analysis of their own W boson data and found that the measured value for its mass was still consistent with the Standard Model. Caveat: It's a preliminary result. But it lessens the likelihood of Fermilab's 2022 measurement being correct.

"The W mass measurement is among the most challenging precision measurements performed at hadron colliders," said ATLAS spokesperson Andreas Hoecker. "It requires extremely accurate calibration of the measured particle energies and momenta, and a careful assessment and excellent control of modeling uncertainties. This updated result from ATLAS provides a stringent test, and confirms the consistency of our theoretical understanding of electroweak interactions.”


Team detects neutrinos created by particle collider for the first time


In a scientific first, researchers have detected neutrinos created by a particle collider.

The discovery promises to deepen scientists’ understanding of the subatomic particles, which were first spotted in 1956 and play a key role in the process that makes stars burn.

The work could also shed light on cosmic neutrinos that travel large distances and collide with the Earth, providing a window on distant parts of the universe.

It’s the latest result from the Forward Search Experiment, or FASER, a particle detector designed and built by an international group of physicists and installed at CERN, the European Council for Nuclear Research in Geneva, Switzerland. There, FASER detects particles produced by CERN’s Large Hadron Collider.

“We’ve discovered neutrinos from a brand-new source—particle colliders—where you have two beams of particles smash together at extremely high energy,” says Jonathan Feng, a particle physicist at the University of California, Irvine, and a co-spokesperson for the FASER Collaboration.

Neutrinos, which were co-discovered nearly 70 years ago by the late physicist and Nobel laureate Frederick Reines, are the most abundant particle in the cosmos and “were very important for establishing the standard model of particle physics,” says FASER co-spokesperson Jamie Boyd, a particle physicist at CERN. “But no neutrino produced at a collider had ever been detected by an experiment.”

Since the groundbreaking work of Reines and others like Hank Sobel, professor of physics and astronomy, the majority of neutrinos studied by physicists have been low-energy neutrinos. But the neutrinos detected by FASER are the highest energy ever produced in a lab and are similar to the neutrinos found when deep-space particles trigger dramatic particle showers in our atmosphere.

“They can tell us about deep space in ways we can’t learn otherwise,” says Boyd. “These very high-energy neutrinos in the LHC are important for understanding really exciting observations in particle astrophysics.”

FASER itself is new and unique among particle-detecting experiments. In contrast to other detectors at CERN, such as ATLAS, which stands several stories tall and weighs thousands of tons, FASER is about one ton and fits neatly inside a small side tunnel at CERN. And it took only a few years to design and construct using spare parts from other experiments.

“Neutrinos are the only known particles that the much larger experiments at the Large Hadron Collider are unable to directly detect, so FASER’s successful observation means the collider’s full physics potential is finally being exploited,” says Dave Casper, an experimental physicist.

Beyond neutrinos, one of FASER’s other chief objectives is to help identify the particles that make up dark matter, which physicists think comprises most of the matter in the universe, but which they’ve never directly observed.

FASER has yet to find signs of dark matter, but with the LHC set to begin a new round of particle collisions in a few months, the detector stands ready to record any that appear.

“We’re hoping to see some exciting signals,” says Boyd.

Brian Petersen, a particle physicist at CERN, announced the results at the 57th Rencontres de Moriond Electroweak Interactions and Unified Theories conference in Italy.

Source: UC Irvine


This Yellowstone hot spring’s rhythmic thump makes it a geo-thermometer

The vibrating water surface of Doublet Pool in Yellowstone National Park. Credit: Jamie Farrell/University of Utah

Yellowstone National Park is most famous for Old Faithful, a geyser with fairly predictable periodic eruptions that delight visiting tourists. But it's also home to many other geothermal features like Doublet Pool, a pair of hot springs connected by a small neck with the geothermic equivalent of a pulse. The pool "thumps" every 20-30 minutes, causing the water to vibrate and the ground to shake. Researchers at the University of Utah have measured those thumping cycles with seismometers to learn more about how they change over time. Among other findings, they discovered that the intervals of silence between thumps correlate with how much heat is flowing into the pool, according to a new paper published in the journal Geophysical Research Letters.

“We knew Doublet Pool thumps every 20-30 minutes,” said co-author Fan-Chi Lin, a geophysicist at the University of Utah. “But there was not much previous knowledge on what controls the variation. In fact, I don’t think many people actually realize the thumping interval varies. People pay more attention to geysers.”

Yellowstone's elaborate hydrothermal system is the result of shallow groundwater interacting with heat from a hot magma chamber. The system boasts some 10,000 geothermal features, including steam vents (fumaroles), mud pots, and travertine terraces (chalky white rock), as well as geysers and hot springs.


What happened to Dianna Cowern, aka Physics Girl

Many Boing Boing readers are likely familiar with science and physics educator Dianna Cowern, aka YouTube's Physics Girl. With nearly 3 million subscribers, Dianna is beloved for her exuberant personality, her clear and entertaining explanations of complex physics concepts, and her love for all things science.

Feast your eyes on this image of remnant from earliest recorded supernova


The tattered shell of the first recorded supernova (SN185) was captured by the Dark Energy Camera. This image covers an impressive 45 arcminutes in the sky—a rare view of the entirety of this supernova remnant. (credit: CTIO/NOIRLab/DOE/NSF)

In early December 185 CE, Chinese astronomers recorded a bright "guest star" in the night sky that shone for eight months in the direction of Alpha Centauri before fading away—most likely the earliest recorded supernova in the historical record. The image above gives us a rare glimpse of the entire tattered remnant of that long-ago explosion, as captured by the Dark Energy Camera (DECam), mounted on the 4-meter telescope at the Cerro Tololo Inter-American Observatory in the Andes in Chile. DECam has been operating since 2012, and while it was originally designed to be part of the ongoing Dark Energy Survey, it's also available for other astronomers to use in their research. This new wide-view perspective of the remains of SN185 should help astronomers learn even more about stellar evolution.

As we've written previously, there are two types of known supernovae, depending on the mass of the original star. An iron-core collapse supernova occurs with massive stars (greater than 10 solar masses), which collapse so violently that the collapse triggers a huge, catastrophic explosion. The temperatures and pressures become so high that the carbon in the star's core fuses. This halts the core's collapse, at least temporarily, and this process continues, over and over, with progressively heavier atomic nuclei. When the fuel finally runs out entirely, the (by then) iron core collapses into a black hole or a neutron star.

Then there is a Type Ia supernova. Smaller stars (up to about eight solar masses) gradually cool to become dense cores of ash known as white dwarfs. If a white dwarf that has run out of nuclear fuel is part of a binary system, it can siphon off matter from its partner, adding to its mass until its core reaches high enough temperatures for carbon fusion to occur. These are the brightest supernovae, and they also shine with a remarkably consistent peak luminosity, making them invaluable "standard candles" for astronomers to determine cosmic distances.
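
For readers who want to see the arithmetic behind "standard candles": because Type Ia supernovae peak at a nearly uniform absolute magnitude (roughly -19.3 in the B band), measuring the apparent magnitude alone fixes the distance through the distance modulus. The short sketch below is purely illustrative and is not taken from the article; the example apparent magnitude is invented.

    # Illustrative sketch: how a "standard candle" turns a brightness
    # measurement into a distance via the distance modulus
    #   m - M = 5*log10(d_pc) - 5
    # M = -19.3 is a commonly quoted peak absolute magnitude for Type Ia
    # supernovae; the apparent magnitude below is an invented example.

    def luminosity_distance_pc(apparent_mag, absolute_mag=-19.3):
        """Distance in parsecs implied by the distance modulus."""
        return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

    if __name__ == "__main__":
        d_pc = luminosity_distance_pc(apparent_mag=24.0)
        print(f"{d_pc:.2e} pc, about {d_pc * 3.2616:.2e} light-years")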


Scientists have mapped a secret hidden corridor in Great Pyramid of Giza


Map of the known corridors and rooms inside the Great Pyramid of Giza. Evidence of a secret corridor was detected in 2016 behind the famed chevron blocks on the north face (h). Another mysterious large void (i) was discovered in 2017—a possible hidden chamber. (credit: Procureur et al., 2023)

In 2016, scientists using muon imaging picked up signals indicating a hidden corridor behind the famous chevron blocks on the north face of the Great Pyramid of Giza in Egypt. The following year, the same team detected a mysterious void in another area of the pyramid, believing it could be a hidden chamber. Two independent teams of researchers, using two different muon imaging methods, have now successfully mapped out the corridor for the first time, according to a new paper published in the journal Nature Communications. Zahi Hawass, Egypt’s former antiquities minister, called it "the most important discovery of the 21st century."

As we've reported previously, there is a long history of using muons to image archaeological structures, a process made easier because cosmic rays provide a steady supply of these particles. An engineer named E.P. George used them to make measurements of an Australian tunnel in the 1950s. But Nobel-prize-winning physicist Luis Alvarez really put muon imaging on the map when he teamed up with Egyptian archaeologists to use the technique to search for hidden chambers in the Pyramid of Khafre at Giza. Although it worked in principle, they didn't find any hidden chambers.

There are many variations of muon imaging, but they all typically involve gas-filled chambers. As muons zip through the gas, they collide with the gas particles and emit a telltale flash of light, which is recorded by the detector, allowing scientists to calculate the particle's energy and trajectory. It's similar to X-ray imaging or ground-penetrating radar, except with naturally occurring high-energy muons rather than X-rays or radio waves. That higher energy makes it possible to image thick, dense substances like the stones used to build pyramids. The denser the imaged object, the more muons are blocked, casting a telltale shadow. Hidden chambers in a pyramid would show up in the final image because they blocked fewer particles.
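
To make the "blocked fewer particles" idea concrete, here is a deliberately crude toy model, not the method used by the research teams: treat the transmitted muon count along a sight line as decaying exponentially with the amount of rock traversed. Real muography relies on the measured cosmic-ray muon spectrum and detailed energy-loss tables; the attenuation length, densities, and path lengths below are invented for illustration.

    # Toy attenuation model: a sight line crossing a hidden void lets
    # through more muons than one through solid stone, producing the
    # "excess" that reveals a chamber. All numbers are illustrative.
    import math

    STONE_DENSITY = 2.5          # g/cm^3, rough figure for limestone
    ATTENUATION_LENGTH = 2500.0  # g/cm^2, assumed effective value for this toy
    INCIDENT_MUONS = 1_000_000   # muons arriving along each sight line

    def transmitted(segments):
        """segments: list of (length_cm, density_g_per_cm3) along one sight line."""
        opacity = sum(length * density for length, density in segments)  # g/cm^2
        return INCIDENT_MUONS * math.exp(-opacity / ATTENUATION_LENGTH)

    solid_rock = [(3000.0, STONE_DENSITY)]                # 30 m of stone
    with_void = [(2800.0, STONE_DENSITY), (200.0, 0.0)]   # 2 m of it is an empty corridor

    print(f"through solid stone: {transmitted(solid_rock):,.0f} muons")
    print(f"through hidden void: {transmitted(with_void):,.0f} muons")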


The Buddhist Theory of No-Self (Anātman/Anattā)

The Buddhist denial of the existence of the self is known as anātman (or anattā). This essay explores some of the basics of anātman/anattā.


Woman looking in a mirror: does she see her self?

Leonardo noted link between gravity and acceleration centuries before Einstein


Caltech researchers re-created an experiment on gravity and acceleration that Leonardo da Vinci sketched out in his notebooks. (credit: Caltech)

Caltech engineer Mory Gharib was poring over the digitized notebooks of Leonardo da Vinci one day, looking for sketches of flow visualization to share with his graduate students for inspiration. That's when he noticed several small sketches of triangles, whose geometry seemed to be determined by grains of sand poured out from a jar. Further investigation revealed that Leonardo was attempting to study the nature of gravity, and the little triangles were his attempt to draw an equivalence between gravity and acceleration—well before Isaac Newton came up with his laws of motion, and centuries before Albert Einstein would demonstrate the equivalence principle with his general theory of relativity. Gharib was even able to re-create a modern version of the experiment.

Gharib and his collaborators described their discovery in a new paper published in the journal Leonardo, noting that, by modern calculations, Leonardo's model produced a value for the gravitational constant (G) to around 97 percent accuracy. What makes this finding even more astonishing is that Leonardo did all this without a means of accurate timekeeping and without the benefit of calculus, which Newton invented in order to develop his laws of motion and universal gravitation in the 1660s.

"We don't know if [Leonardo] did further experiments or probed this question more deeply," Gharib said. "But the fact that he was grappling with the problems in this way—in the early 1500s—demonstrates just how far ahead his thinking was."


Study: Mexican jumping beans use random walk strategy to find shade


Mexican jumping beans are actually seed pods from a shrub native to Mexico with moth larvae living inside. (credit: Devon McKee and A. Pasha Tabatabai/APS 2023)

Mexican jumping beans have been a curiosity for many an inquisitive child, and yes, they really do "jump," thanks to the presence of tiny moth larvae inside the seed pods. According to a recent paper published in the journal Physical Review E by physicists at Seattle University, those jumps can help the moth larvae inside find shade to survive on hot days. And the jumping movements seem to follow a random walk strategy in order to do so.

The notion of a random walk is based in part on the physics concept of Brownian motion. Even though this technically describes random collisions between particles, it's a useful model that can easily be adapted to lots of different systems, biological, physical, or otherwise. The concept dates back to 1827, when a scientist named Robert Brown was studying pollen particles floating in water under a microscope. He noted a strange jittery motion and thought the pollen might perhaps be alive. But when he repeated the experiment using particles of dust, which he knew were not "alive," he still saw the jittery motion.

Brown never determined what caused the motion, but Albert Einstein did, in a 1905 paper in which he sought to confirm the existence of atoms and molecules. Einstein's relevant insight was that molecules in a fluid like water would randomly move about and collide with other small particles suspended in the fluid, like pollen or dust, giving rise to the "jittering" Brown had observed some 80 years earlier.
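
For readers who want to see what a random walk strategy looks like in practice, the toy simulation below is not the authors' model; the step size, shade geometry, and step budget are invented. It has a "larva" hop a fixed distance in a uniformly random direction each move and simply checks whether it has stumbled into a shaded patch.

    # Toy 2D random walk: fixed-length hops in uniformly random directions,
    # stopping if the walker wanders into a circular "shade" patch.
    # All parameters are made up for illustration.
    import math
    import random

    def hops_to_shade(shade_center=(2.0, 0.0), shade_radius=0.5,
                      step=0.1, max_steps=200_000, seed=1):
        rng = random.Random(seed)
        x, y = 0.0, 0.0
        for n in range(1, max_steps + 1):
            angle = rng.uniform(0.0, 2.0 * math.pi)  # direction chosen at random
            x += step * math.cos(angle)
            y += step * math.sin(angle)
            if math.hypot(x - shade_center[0], y - shade_center[1]) <= shade_radius:
                return n
        return None  # never reached the shade within the budget

    if __name__ == "__main__":
        n = hops_to_shade()
        print("never reached the shade" if n is None else f"reached the shade after {n} hops")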


The physics of James Joyce’s Ulysses


An early edition of one of Dublin's most famous literary masterpieces: Ulysses by James Joyce, published in 1922. (credit: Fran Caffrey/AFP/Getty Images)

Ulysses, the groundbreaking modernist novel by James Joyce, marked its 100-year anniversary last year; it was first published on February 2, 1922. The poet T.S. Eliot declared the novel to be "the most important expression which the present age has found," and Ulysses has accumulated many other fans in the ages since. Count Harry Manos, an English professor at Los Angeles City College, among those fans. Manos is also a fan of physics—so much so that he penned a December 2021 paper published in The Physics Teacher, detailing how Joyce had sprinkled multiple examples of classical physics throughout the novel.

"The fact that Ulysses contains so much classical physics should not be surprising," Manos wrote. "Joyce’s friend Eugene Jolas observed: 'the range of subjects he [Joyce] enjoyed discussing was a wide one … [including] certain sciences, particularly physics, geometry, and mathematics.' Knowing physics can enhance everyone’s understanding of this novel and enrich its entertainment value. Ulysses exemplifies what physics students (science and non-science majors) and physics teachers should realize, namely, physics and literature are not mutually exclusive."

Ulysses chronicles the life of an ordinary Dublin man named Leopold Bloom over the course of a single day: June 16, 1904 (now celebrated around the world as Bloomsday). While the novel might appear to be unstructured and chaotic, Joyce modeled his narrative on Homer's epic poem the Odyssey; its 18 "episodes" loosely correspond to the 24 books in Homer's epic. Bloom represents Odysseus; his wife Molly Bloom corresponds to Penelope; and aspiring writer Stephen Dedalus—the main character of Joyce's semi-autobiographical A Portrait of the Artist as a Young Man (1916)—represents Telemachus, son of Odysseus and Penelope.


Neil deGrasse Tyson, High School Wrestling Team Captain, Once Invented a Physics-Based Wrestling Move

By: OC

We know that Neil deGrasse Tyson was something of a wunderkind during his high school years. If you’re an OC regular, you’ve read all about how Carl Sagan personally recruited Tyson to study with him at Cornell. Deftly, politely, the young Tyson declined and went to Harvard.

There’s perhaps another side of the precocious Tyson you might not know as much about. The athletic side. While a student at The Bronx High School of Science, Tyson (class of 1976) wore basketball sneakers belonging to the Knicks’ Walt “Clyde” Frazier. He ran an impressive 4:25 mile. And he captained the school’s wrestling team, during which time he conjured up a new-fangled wrestling move. In professional wrestling, Ric Flair had the dreaded Figure Four Leg Lock, and Jimmy Snuka, a devastating Superfly Splash. Tyson? He had the feared “Double Tidal Lock.” He explains and demonstrates the physics-based move in the video below, originally recorded at the University of Indianapolis.

Related Content:

Andy Warhol’s One Minute of Professional Wrestling Fame (1985)

The Ultimate Warrior, Professional Wrestler & Philosopher, Created a Glossary of World Philosophies

Neil deGrasse Tyson Lists 8 (Free) Books Every Intelligent Person Should Read

When Samuel Beckett Drove Young André the Giant to School

This mutant Venus flytrap mysteriously lost its ability to “count”

Comparing stimulation of a Venus flytrap and the mutant DYSC. Credit: Ines Kreuzer, Rainer Hedrich, Soenke Scherzer

In 2011, a horticulturist named Mathias Maier stumbled across an unusual mutant of a Venus flytrap, a carnivorous plant that traps and feeds on insects. Scientists recently discovered that the typical Venus flytrap can actually "count" to five, sparking further research on how the plant manages this remarkable feat. The mutant flytrap might hold the key. According to a new paper published in the journal Current Biology, this mutant flytrap doesn't snap closed in response to stimulation like typical Venus flytraps.

"This mutant has obviously forgotten how to count, which is why I named it Dyscalculia (DYSC)," said co-author Rainer Hedrich, a biophysicist at Julius-Maximilians-Universität Würzburg (JMU) in Bavaria, Germany. (It had previously been called "ERROR.")

As we've reported previously, the Venus flytrap attracts its prey with a pleasing fruity scent. When an insect lands on a leaf, it stimulates the highly sensitive trigger hairs that line the leaf. When the pressure becomes strong enough to bend those hairs, the plant will snap its leaves shut and trap the insect inside. Long cilia grab and hold the insect in place, much like fingers, as the plant begins to secrete digestive juices. The insect is digested slowly over five to 12 days, after which the trap reopens, releasing the dried-out husk of the insect into the wind.

