FreshRSS


The link rot spreads: GIF-hosting site Gfycat shutting down Sept. 1

[Image: Array of GIFs on the Gfycat website. A myriad of ways one might react to Gfycat's closure, trending on Gfycat itself at the moment. (credit: Gfycat)]

The Internet continues to get a bit more fragmented and less accessible every week. Within the past seven days, Reddit finished its purge of third-party clients, Twitter required accounts to view tweets (temporarily or not), and Google News started pulling news articles from its Canadian results.

Now there's one more to add: Gfycat, a place where users uploaded, created, and distributed GIFs of all sorts, is shutting down as of September 1, according to a message on its homepage.

Users of the Snap-owned service are asked to "please save or delete your Gfycat content," because "[a]fter September 1, 2023, all Gfycat content and data will be deleted from gfycat.com."


Ask the Community: What Did SSP 2023 Mean to You?

In the last of this series of posts about this year's Annual Meeting, SSP's Marketing and Communications Committee asked members of our community what the conference meant to them.

The post Ask the Community: What Did SSP 2023 Mean to You? appeared first on The Scholarly Kitchen.

Burnyeat vs Strauss, Again

[This post was first published at digressions.impressions.substack here. To receive new posts and support my work, consider becoming a paid subscriber at <digressionsimpressions.substack.com>]

In a famous polemical essay (1986) in NYRB, my (recall) teacher, Myles Burnyeat, distinguished between two ways of entering Strauss’ thought: either through his “writings” or “one may sign up for initiation with a Straussian teacher.” That is, as Burnyeat notes, Strauss founded a school – he quotes Lewis Coser’s claim that it is “an academic cult” -- with an oral tradition. In the 1986 essay, Burnyeat spends some time on the details of Strauss’ teaching strategy and style that he draws from autobiographical writings by Bloom and Dannhauser[1] as well as by aptly quoting Strauss’ famous (1941) essay "Persecution and the Art of Writing." Somewhat peculiarly, given what follows, Burnyeat does not comment on the surprising fit he [Burnyeat] discerns between Strauss’ writing and teaching!

Burnyeat goes on to imply that without the oral tradition, Strauss’ writings fall flat, or (and these are not the same thing, of course) lack political influence. I quote:

It is the second method that produces the sense of belonging and believing. The books and papers are freely available on the side of the Atlantic from which I write, but Strauss has no discernible influence in Britain at all. No one writing in the London Review of Books would worry—as Stephen Toulmin worried recently in these pages about the State Department’s policy-planning staff—that Mrs. Thatcher’s civil servants know more about the ideas of Leo Strauss than about the realities of the day. Strauss has no following in the universities where her civil servants are educated. Somehow, the interchange between teacher and pupil gives his ideas a potency that they lack on the printed page.

I want to draw out two themes from this quote: first, I’ll focus on the reception of Strauss in the UK; and, second, on the way governing elites are educated. So much for setup.

 

First, this is an extraordinary passage once we remember that already in 1937 Michael Oakeshott wrote an admiring and insightful review of Strauss’ The Political Philosophy of Hobbes: Its Basis and Its Genesis (1936) that is very much worth re-reading. (In his earlier, 1935, essay in Scrutiny on Hobbes, Oakeshott alerts the reader that he is familiar with Strauss’ (1932) French article on Hobbes.) It matters to Burnyeat’s empirical claim because while Oakeshott, who did have an impact on British political thinking, certainly is not a slavish follower of Strauss, one would have to be confident that none of Oakeshott’s teachings were taken from Strauss at all. Writing in the London Review of Books a few years later (1992), Perry Anderson alerts his readers to non-trivial differences between Strauss and Oakeshott (which is compatible with my claim), and more importantly for present purposes, treats Strauss as a major influence on the then newish resurgence of the intellectual right (although that can be made compatible with Burnyeat’s claim about Strauss’ purported lack of influence in the UK).[2]

 

But even when taken on its own terms, there is something odd about Burnyeat’s claim. For, even if we grant that Strauss has no following at all in British universities by the mid-1980s, this could have other sources than the lack of potency of Strauss’ ideas. After all, there had been a number of influential polemics against Strauss in the United Kingdom. Most notably, the so-called ‘Cambridge school’ of historiography (associated with Pocock, Dunn, and Skinner amongst others) polemically self-defined, in part, against Strauss and his school; this can be readily ascertained by, for example, word-searching ‘Strauss’ in Quentin Skinner’s (1969) "Meaning and Understanding in the History of Ideas."[3] One can also discern, as I have noted before, such polemics by reading Yolton’s (1958) "Locke on the Law of Nature" in The Philosophical Review.[4] Yolton was then at Kenyon, but he had been an Oxford DPhil student of Ryle’s, who supervised his dissertation on John Locke.[5] Polemic is simply unnecessary with writings one foresees would have no influence or potency at all. So, I am afraid to say that Burnyeat’s presentation does little justice to even the broad outlines of the early reception of Strauss in the U.K.

 

As I noted, there is a second theme lurking in the quoted passage, namely Burnyeat’s interest in how civil servants are educated at university. This theme is developed by Burnyeat as follows in the NYRB essay:

The leading characters in Strauss’s writing are “the gentlemen” and “the philosopher.” “The gentlemen” come, preferably, from patrician urban backgrounds and have money without having to work too hard for it: they are not the wealthy as such, then, but those who have “had an opportunity to be brought up in the proper manner.” Strauss is scornful of mass education. “Liberal education is the necessary endeavor to found an aristocracy within democratic mass society. Liberal education reminds those members of a mass democracy who have ears to hear, of human greatness.” Such “gentlemen” are idealistic, devoted to virtuous ends, and sympathetic to philosophy. They are thus ready to be taken in hand by “the philosopher,” who will teach them the great lesson they need to learn before they join the governing elite.

The name of this lesson is “the limits of politics.” Its content is that a just society is so improbable that one can do nothing to bring it about. In the 1960s this became: a just society is impossible. In either case the moral is that “the gentlemen” should rule conservatively, knowing that “the apparently just alternative to aristocracy open or disguised will be permanent revolution, i.e., permanent chaos in which life will be not only poor and short but brutish as well.”

Burnyeat infers these claims from a number of Strauss’s writings in the 1950s and ’60s, which he has clearly read carefully. In fact, at the end of the second paragraph, Burnyeat adds in his note (after citing Strauss’ What is Political Philosophy? p. 113), “where Strauss indicates that when this argument is applied to the present day, it yields his defense of liberal or constitutional democracy—i.e., modern democracy is justified, according to him, if and because it is aristocracy in disguise.”

Now, even friends of mass education can admit that modern democracy is an aristocracy in disguise. This is not a strange claim at all when we remember that traditionally ‘democracy’ was associated with what we now call ‘direct’ or ‘popular’/’plebiscite’ democracy, whereas our ‘liberal’ or ‘representative’ democracy was understood as aristocratic in form if only because it functionally preserves rule by the relatively few, as Tocqueville intimates. This fact is a common complaint from the left, and, on the right, taken as a vindication of the sociological ‘elite’ school (associated with Mosca, Pareto, etc.). It is not limited to the latter, of course, because the claim can be found in the writing of Max Weber on UK/US party politics (which Strauss knew well).

Of course, what matters is what kind of aristocracy modern liberal democracy is, and can be. And now we return, anew, to the theme of the education of the governing elite(s) that Burnyeat put front and center in NYRB. That a liberal education can produce a ‘natural’ aristocracy is, in fact, a staple of writings in what we may call ‘the conservative tradition’ as can be found in Russell Kirk’s The Conservative Mind. The idea is given a famous articulation in Edmund Burke’s (1791) “An Appeal from the New to the Old Whigs.” (I have put the passage from Burke in a note.)[6]

So, as summarized by Burnyeat, Strauss simply echoes a commonplace about how Burke is understood by post-WWII conservatives. What’s distinctive, then, is that Strauss is presented as claiming that the ancient wisdom he discloses is that trying to bring about a fully just society would re-open the Hobbesian state of nature/war, that is, permanent chaos. It won’t surprise that Burnyeat denies this is the unanimous teaching of the ancients (although when I took a seminar with him about fifteen years later he came close to endorsing this himself as a reading of the Republic). For the closing two paragraphs of his essay drive this point home:

Strauss believed that civil society must, of necessity, foster warlike habits and make its citizens apply different rules of conduct to one another and to foreigners. The impossibility of international justice was a considerable part of what persuaded him that “the justice which is possible within the city, can only be imperfect or cannot be unquestionably good.” But Strauss spent his life extolling what he believed to be “the truth” on the grounds that it is the unanimous “wisdom of the ancients.” Hence something more than an academic quarrel is taking place when Strauss defends his eccentric view that Plato’s Socrates agrees with Xenophon’s in teaching that the just citizen is one who helps his friends and harms his enemies.

Plato’s Socrates attacks this very notion early in the Republic. No matter: Strauss will demonstrate that it is the only definition of justice from Book I which is “entirely preserved” in the remainder of the Republic. Plato’s Socrates argues passionately in the Gorgias for a revolutionary morality founded on the thesis that one should not return wrong for wrong. Strauss’s unwritten essay on Plato’s Gorgias would have summoned all his Maimonidean skills to show that Socrates does not mean what he says. Much more is at stake here than the correctness or otherwise of the common scholarly opinion that Xenophon, a military man, was incompetent at philosophy and did not understand Socrates. The real issue is Strauss’s ruthless determination to use these old books to “moderate” that idealistic longing for justice, at home and abroad, which grew in the puppies of America during the years when Strauss was teaching and writing.

 

That Xenophon was incompetent at philosophy and did not understand Socrates is, in the context of the debate with Strauss, a petitio principii. That’s compatible with the claim that Burnyeat is right about this. But it's worth noting that this is characteristic of analytic historiography. For example, in his early (1951) review of Strauss, Vlastos also describes Xenophon as having a “pedestrian mind.” (593)

Even so, that international justice between states is impossible is not a strange reading of the Republic (or the other ancients). Plato and Aristotle are not Kant, after all. (Plato may have thought that Kallipolis could have just relations with other Hellenic polities, but I see no reason he thought that this was enduringly possible with non-Greek barbarians.) And if we permit the anachronism by which it is phrased, it strikes me that Strauss is right that for the ancients civil society must, of necessity, foster warlike habits and make its citizens apply different rules of conduct to one another and to foreigners (even if many foreigners could be treated in pacific fashion and in accord with supra-national moral norms). Part of Plato’s popularity (recall; and here) in the nineteenth century was undoubtedly due to the plausibility of reading him as a pan-hellenic nationalist. It doesn’t follow from this, of course, that for Socrates whatever justice is possible within the city has to be attenuated or imperfect. It is, however, peculiar that even if one rejects Strauss’ purported “great lesson,” and if one grants that Kallipolis is, indeed, the ideal city, one should treat the effort to bring it into being as anything more than a dangerous fantasy; and while I wouldn’t want to claim that a “just society is so improbable that one can do nothing to bring it about,” it is not odd to wish to moderate those that try knowing, as we do, the crimes of the Gulag or the Great Leap Forward, if that's really what Strauss taught.

 

 


[1] Allan Bloom, “Leo Strauss: September 20, 1899–October 18, 1973,” Political Theory 2 (1974), pp. 372–392, which Burnyeat commends, and Werner J. Dannhauser, “Leo Strauss: Becoming Naive Again,” The American Scholar 44 (1974–1975).

[2] Anderson, Perry. "The intransigent right at the end of the century." London Review of Books 14.18 (1992): 7-11. Reprinted in Anderson, Perry. Spectrum. Verso, 2005.

[3] Skinner, Quentin. "Meaning and Understanding in the History of Ideas." History and Theory 8.1 (1969): 3-53. (There is a huge literature on the debates between the Cambridge school and Straussianism.)

[4] John W. Yolton (1958) "Locke on the Law of Nature." The Philosophical Review 67.4: 478.

[5] Buickerood, James G., and John P. Wright. "John William Yolton, 1921-2005." Proceedings and Addresses of The American Philosophical Association. American Philosophical Association, 2006.

[6] “A true natural aristocracy is not a separate interest in the state, or separable from it. It is an essential integrant part of any large body rightly constituted. It is formed out of a class of legitimate presumptions, which, taken as generalities, must be admitted for actual truths. To be bred in a place of estimation; to see nothing low and sordid from one’s infancy; to be taught to respect one’s self; to be habituated to the censorial inspection of the public eye; to look early to public opinion; to stand upon such elevated ground as to be enabled to take a large view of the widespread and infinitely diversified combinations of men and affairs in a large society; to have leisure to read, to reflect, to converse; to be enabled to draw and court the attention of the wise and learned, wherever they are to be found; to be habituated in armies to command and to obey; to be taught to despise danger in the pursuit of honour and duty; to be formed to the greatest degree of vigilance, foresight, and circumspection, in a state of things in which no fault is committed with impunity and the slightest mistakes draw on the most ruinous consequences; to be led to a guarded and regulated conduct, from a sense that you are considered as an instructor of your fellow-citizens in their highest concerns, and that you act as a reconciler between God and man; to be employed as an administrator of law and justice, and to be thereby amongst the first benefactors to mankind; to be a professor of high science, or of liberal and ingenious art; to be amongst rich traders, who from their success are presumed to have sharp and vigorous understandings, and to possess the virtues of diligence, order, constancy, and regularity, and to have cultivated an habitual regard to commutative justice: these are the circumstances of men that form what I should call a natural aristocracy, without which there is no nation.”

See also Kirk, Russell. "Burke and natural rights." The Review of Politics 13.4 (1951): 454.

Guest Post — Towards Global Equity for Open Access Books 

The Directory of Open Access Books (DOAB) is celebrating its 10-year anniversary, a great opportunity to reflect on how far we have come with open infrastructures for the distribution and discoverability of open access books (monographs, edited collections, and other long-form publications).

The post Guest Post — Towards Global Equity for Open Access Books  appeared first on The Scholarly Kitchen.

Joy! H5P in Web Article… Alas, No Metadata

By: cogdog

I sure miss the days of supporting the H5P Kitchen project — if anything really hits the elements of the olde 5 Rs, to me, it’s the portability, platform independence, downloadability, and reusability of H5P plus, the thing few really love, built-in metadata.

So when I spotted a reshare of this University Affairs online article, “ChatGPT? We need to talk about LLMs,” my interest was in the writing — and it is a worthy read about getting beyond the AI inevitability to how we grapple with the murk of ethics.

But here is what jumped out at me in the middle of the article: OMG, it’s H5P! I can tell from a kilometer away that’s what it is, an Interactive Hotspot Diagram.

Typical of H5P, this has a Reuse button (so you could download the .h5p source) and an Embed code button (I could have inserted it here in my blog), but one is missing… the one labeled “Rights,” which is actually the item’s metadata. You see, there is nothing that identifies the author of this content or how it is licensed — well, until I squinted: in the image itself is © REBECCA SWEETMAN 2023. So what we have here is a fraction of the 5Rs.

Metadata, metadata, rarely loved or appreciated beyond librarians, archivists, data nerds. In the H5P Kitchen I wrote a guide to why/how this is used:

If you look at any of the H5P content there, the three bottom buttons are all present. The Rights of Use button not only gives the license for the overall H5P, but provides a place to give attribution to all media used within the H5P. It’s a beautiful thing. Oh here, I will just show you by embedding something.

But I was curious about that LLM Hotspot, and it was 15 seconds of a web search on the title plus “H5P” that got me to a source, of course, in the eCampusOntario H5P Studio — where we at least see the author credit, but alas, it was shared without specifying a license. Oh, I could have gotten there faster if I had inspected the embed code, since the source is in the URL.

This is minor quibbling of course. I was tickled to see an interactive document in a web article. It’s just so close to making the best use of tools, but as the word “virtual” goes, it’s always “almost there”.


Featured Image: “Almost Where?” flickr photo by cogdogblog shared under a Creative Commons (BY) license

The Rigor of Philosophy & the Complexity of the World (guest post)

“Analytic philosophy gradually substitutes an ersatz conception of formalized ‘rigor’ in the stead of the close examination of applicational complexity.”

In the following guest post, Mark Wilson, Distinguished Professor of Philosophy and the History and Philosophy of Science at the University of Pittsburgh, argues that a kind of rigor that helped philosophy serve a valuable role in scientific inquiry has, in a sense, gone wild, tempting philosophers to the fruitless task of trying to understand the world from the armchair.

This is the third in a series of weekly guest posts by different authors at Daily Nous this summer.


[Roy Lichtenstein, “Bull Profile Series”]

The Rigor of Philosophy & the Complexity of the World
by Mark Wilson

In the course of attempting to correlate some recent advances in effective modeling with venerable issues in the philosophy of science in a new book (Imitation of Rigor), I realized that under the banner of “formal metaphysics,” recent analytic philosophy has forgotten many of the motivational considerations that had originally propelled the movement forward. I have also found that like-minded colleagues have been similarly puzzled by this paradoxical developmental arc. The editor of Daily Nous has kindly invited me to sketch my own diagnosis of the factors responsible for this thematic amnesia, in the hopes that these musings might inspire alternative forms of reflective appraisal.

The Promise of Rigor

Let us return to beginnings. Although the late nineteenth century is often characterized as a staid period intellectually, it actually served as a cauldron of radical reconceptualization within science and mathematics, in which familiar subjects became strongly invigorated through the application of unexpected conceptual adjustments. These transformative innovations were often resisted by the dogmatic metaphysicians of the time on the grounds that the innovations allegedly violated sundry a priori strictures with respect to causation, “substance” and mathematical certainty. In defensive response, physicists and mathematicians eventually determined that they could placate the “howls of the Boeotians” (Gauss) if their novel proposals were accommodated within axiomatic frameworks able to fix precisely how their novel notions should be utilized. The unproblematic “implicit definability” provided within these axiomatic containers should then alleviate any a priori doubts with respect to the coherence of the novel conceptualizations. At the same time, these same scientists realized that explicit formulation within an axiomatic framework can also serve as an effective tool for ferreting out the subtle doctrinal transitions that were tacitly responsible for the substantive crises in rigor that had bedeviled the period.

Pursuant to both objectives, in 1894 the physicist Heinrich Hertz attempted to frame a sophisticated axiomatics to mend the disconnected applicational threads that he correctly identified as compromising the effectiveness of classical mechanics in his time. Unlike his logical positivist successors, Hertz did not dismiss terminologies like “force” and “cause” out of hand as corruptly “metaphysical,” but merely suggested that they represent otherwise useful vocabularies that “have accumulated around themselves more relations than can be completely reconciled with one another” (through these penetrating diagnostic insights, Hertz emerges as the central figure within my book). As long as “force” and “cause” remain encrusted with divergent proclivities of this unacknowledged character, methodological strictures naively founded upon the armchair “intuitions” that we immediately associate with these words are likely to discourage the application of more helpful forms of conceptual innovation through their comparative unfamiliarity.

There is no doubt that parallel developments within symbolic logic sharpened these initial axiomatic inclinations in vital ways that have significantly clarified a wide range of murky conceptual issues within both mathematics and physics. However, as frequently happens with an admired tool, the value of a proposed axiomatization depends entirely upon the skills and insights of the workers who employ it. A superficially formalized housing in itself guarantees nothing. Indeed, the annals of pseudo-science are profusely populated with self-proclaimed geniuses who fancy that they can easily “out-Newton Newton” simply by costuming their ill-considered proposals within the haberdashery of axiomatic presentation (cf., Martin Gardner’s delightful Fads and Fallacies in the Name of Science).

Inspired by Hertz and Hilbert, the logical empiricists subsequently decided that the inherent confusions of metaphysical thought could be eliminated once and for all by demanding that any acceptable parcel of scientific theorizing must eventually submit to “regimentation” (Quine’s term) within a first order logical framework, possibly supplemented with a few additional varieties of causal or modal appeal. As just noted, Hertz himself did not regard “force” as inherently “metaphysical” in this same manner, but merely held that it comprised a potentially misleading source of intuitions to rely upon in attempting to augur the methodological requirements of an advancing science.

Theory T Syndrome

Over analytic philosophy’s subsequent career, these logical empiricist expectations with respect to axiomatic regimentation gradually solidified into an agglomeration of strictures upon acceptable conceptualization that have allowed philosophers to criticize rival points of view as “unscientific” through their failure to conform to favored patterns of explanatory regimentation. I have labelled these logistical predilections as the “Theory T syndrome” in other writings.

A canonical illustration is provided by the methodological gauntlet that Donald Davidson thrusts before his opponents in “Actions, Reasons and Causes”:

One way we can explain an event is by placing it in the context of its cause; cause and effect form the sort of pattern that explains the effect, in a sense of “explain” that we understand as well as any. If reason and action illustrate a different pattern of explanation, that pattern must be identified.

In my estimation, this passage supplies a classic illustration of Theory T-inspired certitude. In fact, a Hertz-like survey of mechanical practice reveals many natural applications of the term “cause” that fail to conform to Davidson’s methodological reprimands.

As a result, “regimented theory” presumptions of a confident “Theory T” character equip such critics with a formalist reentry ticket that allows armchair speculation to creep back into the philosophical arena with sparse attention to the real life complexities of effective concept employment. Once again we witness the same dependencies upon a limited range of potentially misleading examples (“Johnny’s baseball caused the window to break”), rather than vigorous attempts to unravel the entangled puzzlements that naturally attach to a confusing word like “cause,” occasioned by the same developmental processes that make “force” gather a good deal of moss as it rolls forward through its various modes of practical application. Imitation of Rigor attempts to identify some of the attendant vegetation that likewise attaches to “cause” in a bit more detail.

As a result, a methodological tactic (axiomatic encapsulation) that was originally championed in the spirit of encouraging conceptual diversity eventually develops into a schema that favors methodological complacency with respect to the real life issues of productive concept formation. In doing so, analytic philosophy gradually substitutes an ersatz conception of formalized “rigor” in the stead of the close examination of applicational complexity that distinguishes Hertz’ original investigation of “force”’s puzzling behaviors (an enterprise that I regard as a paragon of philosophical “rigor” operating at its diagnostic best). Such is the lesson from developmental history that I attempted to distill within Imitation of Rigor (whose contents have been ably summarized within a recent review by Katherine Brading in Notre Dame Philosophical Reviews).

But Davidson and Quine scarcely qualified as warm friends of metaphysical endeavor. The modern adherents of “formal metaphysics” have continued to embrace most of their “Theory T” structural expectations while simultaneously rejecting positivist doubts with respect to the conceptual unacceptability of the vocabularies that we naturally employ when we wonder about how the actual composition of the external world relates to the claims that we make about it. I agree that such questions represent legitimate forms of intellectual concern, but their investigation demands a close study of the variegated conceptual instruments that we actually employ within productive science. But “formal metaphysics” typically eschews the spadework required and rests its conclusions upon Theory T-inspired portraits of scientific method.

Indeed, writers such as David Lewis and Ted Sider commonly defend their formal proposals as simply “theories within metaphysics” that organize their favored armchair intuitions in a manner in which temporary infelicities can always be pardoned as useful “idealizations” in the same provisional manner in which classical physics allegedly justifies its temporary appeals to “point masses” (another faulty dictum with respect to actual practice in my opinion).

Philosophy’s Prophetic Telescope

These “Theory T” considerations alone can’t fully explicate the unabashed return to armchair speculation that is characteristic of contemporary effort within “formal metaphysics.” I have subsequently wondered whether an additional factor doesn’t trace to the particular constellation of doctrines that emerged within Hilary Putnam’s writings on “scientific realism” in the 1965-1975 period. Several supplementary themes there coalesce in an unfortunate manner.

(1) If a scientific practice has managed to obtain a non-trivial measure of practical capacity, there must be underlying externalist reasons that support these practices, in the same way that external considerations of environment and canvassing strategy help explicate why honey bees collect pollen in the patterns that they do. (This observation is sometimes called Putnam’s “no miracles argument”).

(2) Richard Boyd subsequently supplemented (1) (and Putnam accepted) with the restrictive dictum that “the terms in a mature scientific theory typically refer,” a developmental claim that strikes me as factually incorrect and supportive of the “natural kinds” doctrines that we should likewise eschew as descriptively inaccurate.

(3) Putnam further aligned his semantic themes with Saul Kripke’s contemporaneous doctrines with respect to modal logic which eventually led to the strong presumption that the “natural kinds” that science will eventually reveal will also carry with them enough “hyperintensional” ingredients to ensure that these future terminologies will find themselves able to reach coherently into whatever “possible worlds” become codified within any ultimate enclosing Theory T (whatever it may prove to be like otherwise). This predictive postulate allows present-day metaphysicians to confidently formulate their structural conclusions with little anxiety that their armchair-inspired proposals run substantive risk of becoming overturned in the scientific future.

Now I regard myself as a “scientific realist” in the vein of (1), but firmly believe that the complexities of real life scientific development should dissuade us from embracing Boyd’s simplistic prophecies with respect to the syntactic arrangements to be anticipated within any future science. Direct inspection shows that worthy forms of descriptive endeavor often derive their utilities from more sophisticated forms of data registration than thesis (2) presumes. I have recently investigated the environmental and strategic considerations that provide classical optics with its astonishing range of predictive and instrumental successes, but the true story of why the word “frequency” functions as such a useful term within these applications demands a far more complicated and nuanced “referential” story than any simple “‘frequency’ refers to X” slogan adequately captures (the same criticism applies to “structural realism” and allied doctrines).

Recent developments within so-called “multiscalar modeling” have likewise demonstrated how the bundle of seemingly “divergent relations” connected with the notion of classical “force” can be more effectively managed by embedding these localized techniques within a more capacious conceptual architecture than Theory T axiomatics anticipates. These modern tactics provide fresh exemplars of novel reconceptualizations in the spirit of the innovations that had originally impressed our philosopher/scientist forebears (Imitation of Rigor examines some of these new techniques in greater detail). I conclude that “maturity” in a science needn’t eventuate in simplistic word-to-world ties but often arrives at more complex varieties of semantic arrangement whose strategic underpinnings can usually be decoded after a considerable expenditure of straightforward scientific examination.

In any case, Putnam’s three supplementary theses, taken in conjunction with the expectations of standard “Theory T thinking,” outfit armchair philosophy with a prophetic telescope that allows it to peer into a hypothesized future in which all of the irritating complexities of renormalization, asymptotics and cross-scalar homogenization will have happily vanished from view, having appeared along the way only as evanescent “Galilean idealizations” of little metaphysical import. These futuristic presumptions have convinced contemporary metaphysicians that detailed diagnoses of the sort that Hertz provided can be dismissed with an airy wave of the hand: “The complications to which you point properly belong to epistemology or the philosophy of language, whereas we are only interested in the account of worldly structure that science will eventually reach in the fullness of time.”

Science from the Armchair

Through such tropisms of lofty dismissal, the accumulations of doctrine outlined in this note have facilitated a surprising reversion to armchair demands that closely resemble the constrictive requirements on viable conceptualization against which our historical forebears had originally rebelled. As a result, contemporary discussion within “metaphysics” once again finds itself flooded with a host of extraneous demands upon science with respect to “grounding,” “the best systems account of laws” and much else that doesn’t arise from the direct inspection of practice in Hertz’ admirable manner. As we noted, the scientific community of his time was greatly impressed by the realization that “fresh eyes” can be opened upon a familiar subject (such as Euclidean geometry) through the exploration of alternative sets of conceptual primitives and the manner in which unusual “extension element” supplements can forge unanticipated bridges between topics that had previously seemed disconnected. But I find little acknowledgement of these important tactical considerations within the current literature on “grounding.”

From my own perspective, I have been particularly troubled by the fact that the writers responsible for these revitalized metaphysical endeavors frequently appeal offhandedly to “the models of classical physics” without providing any cogent identification of the axiomatic body that allegedly carves out these “models.” I believe that they have unwisely presumed that “Newtonian physics” must surely exemplify some unspecified but exemplary “Theory T” that can generically illuminate, in spite of its de facto descriptive inadequacies, all of the central metaphysical morals that any future “fundamental physics” will surely instantiate. Through this unfounded confidence in their “classical intuitions,” they ignore Hertz’s warnings with respect to tricky words that “have accumulated around [themselves] more relations than can be completely reconciled amongst themselves.” But if we lose sight of Hertz’s diagnostic cautions, we are likely to return to the venerable realm of armchair expectations that might have likewise appealed to a Robert Boyle or St. Thomas Aquinas.


Discussion welcome.

 

The post The Rigor of Philosophy & the Complexity of the World (guest post) first appeared on Daily Nous.

A Plea for Synthetic Philosophy (guest post)

“There need not be strict disciplinary boundaries between philosophy and other disciplines.”

In the following post, Catarina Dutilh Novaes, Professor of Philosophy at VU Amsterdam and Professorial Fellow at Arché at the University of St. Andrews explains and makes a case for synthetic philosophy.

This is the first in a series of weekly guest posts by different authors at Daily Nous this summer.


[“Abandoned Schoolhouse” by Gary Simmons]

A Plea for Synthetic Philosophy 
by Catarina Dutilh Novaes

A few years ago, the philosopher (and prolific blogger) Eric Schliesser used the term ‘synthetic philosophy’ to describe the work of Daniel Dennett and Peter Godfrey-Smith. Schliesser presented synthetic philosophy as “a style of philosophy that brings together insights, knowledge, and arguments from the special sciences with the aim to offer a coherent account of complex systems and connect these to a wider culture or other philosophical projects (or both). Synthetic philosophy may, in turn, generate new research in the special sciences…” Schliesser did not coin the term itself: the once-influential but now largely forgotten 19th-century polymath Herbert Spencer (of ‘survival of the fittest’ fame) titled his mammoth 10-volume work covering biology, psychology, sociology and ethics System of Synthetic Philosophy.

But Schliesser can be credited for re-introducing the term to denote an approach in philosophy that has become more pervasive and widely accepted over the last two decades, namely one where philosophers engage extensively with work done in relevant (empirical) disciplines to inform their philosophical investigations and theories. Other recent examples of synthetic philosophers include Neil Levy (see his Bad Beliefs) and Kim Sterelny, who describes his book The Evolved Apprentice in the following terms: “The essay is an essay in philosophy in part because it depends primarily on the cognitive toolbox of philosophers: it is work of synthesis and argument, integrating ideas and suggestions from many different research traditions. No one science monopolizes this broad project though many contribute to it. So I exploit and depend on data, but do not provide new data” (Sterelny, 2012, p. xi). If this is a good description of synthetic philosophy, then it is fair to say that I have been (trying to be!) a synthetic philosopher for about 15 years now, so when Schliesser introduced the term, I adopted it wholeheartedly. (It is for sure much catchier than alternatives such as the cumbersome ‘empirically-informed philosophy’.)

The idea that there need not be strict disciplinary boundaries between philosophy and other disciplines enjoyed some popularity in the 20th century, in particular in the tradition of ‘scientific philosophy’ initiated by Bertrand Russell and continued by the Vienna Circle and later by their ‘heirs’ in the United States such as W.V.O. Quine (who used the ambiguous term ‘naturalism’ to describe the idea of continuity between philosophy and other disciplines) and Hilary Putnam. (Much before that, over the centuries, there was arguably no strict separation between philosophy and other disciplines either; Aristotle was first and foremost a biologist.) But there was also much resistance within analytic philosophy to the idea that philosophical inquiry should in any way be informed by scientific findings (see this piece that I co-wrote on the dissonant origins of analytic philosophy). This resistance can be traced back to G.E. Moore, who insisted that moral philosophy and ethics in particular were strictly non-scientific, purely conceptual domains. It continued with the so-called ‘ordinary language philosophers’, as exemplified by the damning critique of Carnap’s notion of explication in Strawson’s piece for the Carnap Living Philosophers volume.

To motivate a strict separation between science and philosophy, a point sometimes made is that scientists are involved in the merely descriptive inquiry of telling us how things are, while philosophers are involved in conceptual and/or normative inquiry as well, which includes looking at how things ought to be understood, and how they ought to be. If true, this point has important methodological implications, as different methods are used for different types of investigation. Methods to investigate how things are include data collection, experiments, field work, etc. Methods to investigate how things should be include conceptual analysis, ‘intuitions’, thought experiments, etc. Some decades ago, however, this presumed neat separation was challenged by the so-called experimental philosophy approach, which prompted what might be described as a small methodological crisis in analytic philosophy. Could philosophy be empirical/experimental after all? The X-Phi challenge made it clear that more sustained methodological reflection was needed, and philosophers spent much of the first two decades of the 21st century discussing the ins and outs of different methods for philosophical inquiry (see Williamson’s The Philosophy of Philosophy and the Oxford Handbook of Philosophical Methodology).

It is fair to say that scientific/synthetic philosophy ‘won’ the battle in that the Moorean rejection of engagement with empirical findings in philosophical inquiry has become much less widespread in the last decade. I speak from (anecdotal) personal experience: when I was working on the project that culminated in my monograph Formal Languages in Logic (2012), I often got remarks to the effect that it was all very interesting, but what I was doing wasn’t really philosophy. (My standard response was: well, I’m glad no one is saying that it’s all very philosophical, but not interesting.) By contrast, in later years the research that culminated in my monograph The Dialogical Roots of Deduction (2020) did not typically prompt the same kind of reaction, even though it was just as empirically oriented as my earlier work. The fact that The Dialogical Roots of Deduction won the Lakatos Award in 2022 (sorry folks, time for some shameless self-promotion!) attests to the widespread acceptance of synthetic philosophy as an approach to philosophical inquiry (though philosophers of science are, of course, from the start more amenable to the general idea of engaging with other disciplines).

In my opinion, philosophy is especially well-placed to facilitate much-needed interdisciplinary collaboration between different disciplines. It is now widely recognized that so-called ‘wicked problems’ require complementary approaches to be addressed, as each methodology has its limitations and blind spots. Experimental methods may lack ‘ecological validity’ (lab situations do not really reproduce the phenomena in the wild); quantitative methods are often not very fine-grained and may give rise to spurious correlations and ‘noise’; qualitative methods may be instances of ‘cherry-picking’ and have limited reach. Thus, what has become clear, in particular during the Covid-19 pandemic, is that triangulation of methods is essential for investigating complex problems.

It seems to me that philosophers have much to contribute to interdisciplinarity and triangulation efforts for two main reasons: philosophers are trained to engage in careful conceptual inquiry, clarifying and sharpening significant (scientific) concepts and sometimes introducing new ones (Carnapian explication, conceptual engineering); and philosophers may be better able to see the forest rather than only the trees, as it were, by drawing on various scientific disciplines (as suggested in the Sterelny quote above) and noticing connections that may remain unnoticed within each specific discipline. Philosophy has much more potential for synthesis than is often recognized (even by us philosophers), and this is perfectly compatible with the centrality of analysis in analytic philosophy. (Traditionally, analysis and synthesis were viewed as two complementary rather than incompatible processes: you break things down, then put them back together again, usually in a different, more fruitful configuration.) (Note: commenting on an earlier draft of this post, Eric Schliesser remarked that my conception of synthetic philosophy differs in some important respects from his. He thinks that my conception resembles Kitcher’s, which he discussed in this blog post.)

True enough, there are also a number of difficulties, pitfalls and risks involved in attempting to do synthetic philosophy. For starters, the approach requires that the philosopher be conversant with various different scientific disciplines; she has to be a ‘polymath’ in some sense, which in the current scenario of scientific hyper-specialization is a formidable challenge. Secondly, there is a perennial risk of conceptual confusion/equivocation and ‘talking past each other’ for lack of a common vocabulary (a familiar problem in interdisciplinarity studies). Third, scientific studies themselves are not always reliable guides, with many important results not being replicated (see the famous ‘replication crisis’ in psychology and other disciplines).

However, while these issues are real and must be taken seriously, I submit that they should not be viewed as knock-down arguments against synthetic philosophy. (I have responses to each of them but I’m running out of space!) Sustained methodological reflection on the ins and outs of synthetic philosophy is still needed, but I hope to have established here at least that synthetic philosophy is an interesting and viable approach for philosophical inquiry.


The post A Plea for Synthetic Philosophy (guest post) first appeared on Daily Nous.

Sarvagatatva in Nyāya and Vaiśeṣika: ātman, aether and materiality (mūrtatva)

The Sanskrit philosophical school called Vaiśeṣika is the one most directly dealing with ontology. Its fundamental text is the Vaiśeṣikasūtra, which is commented upon by Praśastapāda in the Padārthadharmasaṅgraha (from now on, PDhS) (the following is a summary of Padārthadharmasaṅgraha ad 8.7).

The school distinguishes substances and qualities. The first group includes four types of atoms (earth, water, fire, air) and then aether, time, space, ātmans and internal organs (manas). The latter are needed as a separate category, because they are point-sized and therefore not made of atoms, unlike the external sense faculties.
Among the 17 qualities, it recognises parimāṇa, or ‘dimension’. This at first encompasses two possibilities, namely atomic (aṇu) or extended (mahat). The former covers partless entities that allegedly have no spatial dimension, like points in Euclidean geometry and atoms themselves. These are considered to be without extension and permanent through time (nitya). The latter is subdivided into mahat and paramahat. The first covers all objects one encounters in normal life, from triads of atoms (imagined to be the size of a particle of dust, the first level of atomic structure to be extended) to the biggest mountain. These entities have parts and extension, and have an origin and an end in time. The second subdivision covers special substances, listed as ākāśa ‘aether’, space, time and ātmans, which need to be imagined to be present at each location. Such entities are also imagined to be nitya, that is, permanent through time. In other words, they are present at each location of time and space.
The above also implies that entities considered to be permanent through time can only be either atomic or all-pervasive.

However, space, time, aether and selves (ātman) are present at all locations in different ways.

About aether, to begin with, texts like Jayanta’s Nyāyamañjarī say that it needs to be accepted as a fifth substance in order to justify the diffusion of sound across multiple media. Texts of the Vaiśeṣika school, and of the allied school of Nyāya, specify that aether does not occupy all locations, but rather is in contact with each individual atom:

[The aether’s] all-pervasiveness consists in the fact that it is in contact with each corporeal (mūrta) substance.
(sarvamūrtadravyasaṃyogitvam vibhutvam, Tarkasaṃgrahadīpikā ad 14.)

This means that aether does not pervade atoms, but is in contact (saṃyoga) with each one of them.

This point is already explicit in the Nyāyabhāṣya of the allied school of Nyāya, and is needed because of the point-sized nature of atoms. If these were pervaded by aether, then they would have parts, and thus not be permanent. These undesired consequences are examined in the following:

This is impossible, because of the penetration through aether || NS 4.2.18 ||

It is impossible for an atom [to be] partless and permanent. Why? Because of the penetration through aether: that is, because an atom, if it were permeated, that is ‘penetrated’, by aether, within and outside, then, because of this penetration it would have parts, and due to having parts it would be impermanent.

Or, the aether is not all-located || 4.2.19 ||

Alternatively, we don’t accept that. There is no aether within the atoms, and therefore aether ends up not being all-located.

(ākāśavyatibhedāt tadanupapattiḥ || 4.2.18 ||
tasyāṇor niravayasya nityasyānupapattiḥ. kasmāt. ākāśavyatibhedāt. antarbahiścāṇur ākāśena samāviṣṭo vyatibhinno vyatibhedāt sāvayavaḥ sāvayavatvād anitya iti.
ākāśāsarvagatatvaṃ vā || 4.2.19 ||
athaitan neṣyate paramāṇor antar nāsty ākāśam ity asarvagatatvaṃ prasajyeta iti.)

Aether is postulated as a substrate of sound (which can move through solids, liquids and air, thus proving that it has neither earth, nor water, nor air as substrate). Thus, it needs to be unitary (multiple aethers would not explain the propagation of sound; sound would stop at the boundary of the respective aether) and it needs to be present at all locations (for the same reason). In more detail: only because of the unitary nature of aether is it possible for sound to travel between different loci. Otherwise, one would have to posit some mechanism to explain how the sound encountered in one aether travels to another. Instead, the simpler solution is to posit that aether is necessarily both single (eka) and present at all locations (vibhu).

As for the ātman, the self is by definition permanent (otherwise, no afterlife nor cycle of rebirths would be possible). It cannot be atomic, though, because the ātman is the principle of awareness and people become aware of things potentially everywhere. That they don’t become perceptually aware of things that are, e.g., behind a wall, by contrast, is only due to the fact that the ātman needs to be in touch with the sense faculties (indriya), via the internal sense organ (manas, which is believed to be atomic and to move quickly from one sense faculty to another), in order for perceptual awareness to take place. Yogins are able to perceive things their bodies are not in contact with because their ātmans are omnipresent, like ours, and are able, unlike ours, to connect with other bodies’ sense faculties.
Within Sanskrit philosophy, Jaina philosophers suggested that the ātman is co-extensive with the body, since it can experience whatever the body can experience. Vaiśeṣika and other non-Jaina authors disagree, because this would lead to the absurd consequence of an ātman changing in size through one’s life.

A further element to be taken into account with regard to theories of location, and in particular while adjudicating whether they are about occupation or non-occupation, is materiality.
Occupation of space seems to occur only from the level of atomic triads up to big, but not all-located, objects. Atoms are said to be mūrta, and mūrta is usually translated as ‘material’, but taken in isolation atoms do not have parts and are only point-sized. In this sense, their being mūrta refers more to their being fundamental for material entities than to their being material when taken in isolation. The distinction is theoretically relevant, but less evident at the pragmatic level, given that atoms are never found in isolation. Being mūrta is attributed to the atoms of the four elements (not to aether) as well as to the inner sense organ (Nyāyakośa, s.v.), but neither to the ātman nor to aether.

Experiencing different ultimate unities

Defenders of cross-cultural mystical experience are right to note that in many widely varying cultures, respected sages have referred to the experience of an ultimate nonduality: a perception that everything, including oneself, is ultimately one. But one might also then rightly ask: which ultimate nonduality?

Nondualism may be the world’s most widespread philosophy, but it can mean different things – not merely different things in different places, but different things in the same place. Members of the Indian Vedānta tradition frequently proclaimed that everything is “one, without a second”, in the words of the Upaniṣads they followed. But they disagreed as to what that meant. Śaṅkara founded the Advaita Vedānta tradition – a-dvaita literally meaning non-dual – which argued that only the one, ultimate truth (sat, brahman) was real, and all multiplicity and plurality was an illusion. His opponent Rāmānuja agreed that everything is “one, without a second” – but in his Viśiṣṭādvaita (qualified nondual) school, that meant something quite different. All the many things and people we see around us – what Chinese metaphysicians called the “ten thousand things” – are parts of that ultimate one, and they are real, not illusory.

I was reminded of this point in the great comments on my previous post about cross-cultural mysticism. I had cited W.T. Stace as an influential advocate of the view that mysticism is cross-cultural, and noted how Robert Forman’s book defended Stace by pointing to contentless experiences of void, from the Yoga Sūtras to Hasidism, that “blot out” sense perception. Seth Segall made the important point that in Stace’s own work not all mystical experiences are contentless in this way. Leaving aside the “hot” or “visionary” experiences (like St. Teresa and the angel) which Stace does not count as mystical experiences – even among what Stace counts as genuine mystical experiences, he makes a key distinction between introvertive and extrovertive mystical experiences. This isn’t just a distinction between the interpretations applied to the experiences, but between the experiences themselves. The contentless “Pure Consciousness Events” described in Forman’s book, where distinctions fade into void, are introvertive; experiences of merging with a unified natural world, like Teresa saying “it was granted to me in one instant how all things are seen and contained in God”, are extrovertive.

And here’s where I find this all really interesting: that introvertive/extrovertive distinction, between different types of experiences, corresponds to the metaphysical difference between Śaṅkara and Rāmānuja! Neither Śaṅkara nor Rāmānuja cites experience, mystical or otherwise, as the source of their philosophy. Both claim to be deriving it from the Upaniṣads (and other texts like the Bhagavad Gītā), and they each defend their view (of the scriptures and of reality) with logical arguments. Yet even so, the distinction Stace observed in descriptions of mystical experiences turns out to correspond pretty closely to the distinction between their philosophies.

In Śaṅkara’s philosophy, as in an introvertive experience, the many things of the world, including oneself, all fall away; what remains is the one reality alone. In Rāmānuja’s philosophy, as in an extrovertive experience, the things of the world, including oneself, remain, but they are all unified together: they continue to have a real existence, but as connected members of a larger unity.

All this is a major caveat for perennialist-leaning ideas: even if you were to argue that mystical experience pointed to a cross-culturally recognized nondualism, you would still have to specify which nondualism. The smartass response is to say “all the nondualisms are one”, but that’s not really satisfactory, not even to the nondualists themselves. Rāmānuja attacked Śaṅkara’s view, and while Śaṅkara lived centuries before Rāmānuja, he attacked other thinkers who had views like Rāmānuja’s.

Some mystically inclined thinkers take a moderate or intermediate position that compromises between an absolute nondual view and the view of common sense or received tradition. Such was the approach of Shaykh Ahmad Sirhindī, the Indian Sufi who reconciled Sufi experiences of mystical oneness with Qur’anic orthodoxy by proclaiming “not ‘All is Him’ but ‘All is from Him'”. It’s tempting to view Rāmānuja’s approach to Śaṅkara as similar, tempering an absolute mysticism with a common-sense view of the world as real: Śaṅkara’s mystical excesses take him way out there and Rāmānuja pulls him back. But such an approach doesn’t really work. It’s flummoxed not only by the fact that Śaṅkara claimed no mystical grounding for his philosophy, but also by the existence of extrovertive mysticism: the many who have felt an experience of oneness with the grass and trees would not have been drawn by that experience to Śaṅkara’s view, but directly to Rāmānuja’s. (I have previously suggested that Rāmānuja is indeed moderating Śaṅkara’s overall approach – but with respect to Śaṅkara’s possible autism rather than to mysticism.)

None of this is intended as a refutation of mystical views of reality, or even necessarily of perennialism. It seems to me that both introvertive and extrovertive experiences are found across a wide range of cultures, often accompanied by a sense of certainty, and are worth taking seriously for that reason. But we then need to take both seriously: if the world is one, then are our many differing perceptions illusory or real? Here, I think, it helps that both illusionist and realist forms of nondual philosophy – experientially based or otherwise – also occur in multiple places. The debates between them might help us sort out what reality – if any – the experiences are pointing to.

Cross-posted at Love of All Wisdom.

Two conversations about nature and creativity

 Featuring two theistic naturalists (panentheists), Robert S. Corrington (Drew University) and Robert Cummings Neville (Boston University).  These are two towering figures in the history of American philosophy of religion, philosophical naturalism, and philosophical theology. The conversations in these two videos span discussion of the meaning of nature, theism versus pantheism versus panentheism

Digital Library Project, Bhaktivedanta Research Center (Kolkata)

I recently received a note from Prof. Nirmalya Chakraborty (Rabindra Bharati University) about an exciting new digital library. It includes three categories: Navya-Nyāya Scholarship in Nabadwip, Philosophers of Modern India, and Twentieth Century Paṇḍitas of Kolkata. You can find the site here: https://darshanmanisha.org

You can learn more about the project from the following announcement.

Announcement

Introducing the Digital Library Project

By

Bhaktivedanta Research Center, Kolkata, India

Right before the introduction of English education in India, a new style of philosophising emerged, especially in Bengal, known as Navya-Nyāya. Since Nabadwip was one of the main centres of Navya-Nyāya scholarship in Bengal during the 15th–17th centuries, many important works on Navya-Nyāya were written during this period by Nabadwip scholars. Some of these were published later, but many of these published works are not available now. The few copies which are available are also not in good condition. These are the works in which Bengal’s intellectual contribution shines forth. We have digitized some of these materials and have uploaded them to the present digital platform.

As a lineage of this Nabadwip tradition, many pandits (traditional scholars) who were residents of Kolkata during the early nineteenth and twentieth centuries produced important philosophical works, some in Sanskrit and most in Bengali. Most of these works were published in the early 1900s in Kolkata, and some in neighbouring cities. These works brought about a kind of Renaissance in reviving classical Indian philosophical deliberations in Bengal. Attempts have been made to upload these books and articles to the present digital platform.

With the introduction of colonial education, a group of philosophers got trained in European philosophy and tried to interpret insights from classical Indian philosophy in a new light. Kolkata was one of the main centres of this cosmopolitan philosophical scholarship. The works of many of these philosophers from Kolkata were published in the early-to-mid twentieth century. These philosophers are the true representatives of twentieth-century Indian philosophy. Efforts have been made to upload these works to the present digital platform.

The purpose of constructing the present digital platform is to enable researchers to have access to these philosophical works, with the hope that the philosophical contributions of these philosophers will be studied and critically assessed, resulting in the enrichment of the philosophical repertoire.

We take this opportunity to appeal to fellow scholars to enrich this digital library by lending us their personal collection related to these areas for digitization.

The website address of the Digital Library is: www.darshanmanisha.org

For further correspondence, please write to:

[email protected]

[email protected]

[email protected]

[email protected]

Artists astound with AI-generated film stills from a parallel universe

An AI-generated image from an #aicinema still series called "Vinyl Vengeance" by Julie Wieland, created using Midjourney. (credit: Julie Wieland / Midjourney)

Since last year, a group of artists have been using an AI image generator called Midjourney to create still photos of films that don't exist. They call the trend "AI cinema." We spoke to one of its practitioners, Julie Wieland, and asked her about her technique, which she calls "synthography," short for synthetic photography.

The origins of “AI cinema” as a still image art form

Last year, image synthesis models like DALL-E 2, Stable Diffusion, and Midjourney began allowing anyone with a text description (called a "prompt") to generate a still image in many different styles. The technique has been controversial among some artists, but other artists have embraced the new tools and run with them.

While anyone with a prompt can make an AI-generated image, it soon became clear that some people possessed a special talent for finessing these new AI tools to produce better content. As with painting or photography, the human creative spark is still necessary to produce notable results consistently.


the Oppenheimer Principle revisited

By: ayjay

Eight years ago, I wrote about a dominant and pernicious ideology that features two components: 

Component one: that we are living in an administrative regime built on technocratic rationality whose Prime Directive is, unlike the one in the Star Trek universe, one of empowerment rather than restraint. I call it the Oppenheimer Principle, because when the physicist Robert Oppenheimer was having his security clearance re-examined during the McCarthy era, he commented, in response to a question about his motives, “When you see something that is technically sweet, you go ahead and do it and argue about what to do about it only after you’ve had your technical success. That is the way it was with the atomic bomb.”

The topic of that essay was the prosthetic reconstruction of bodies and certain incoherent justifications thereof, so I went on: “We change bodies and restructure child-rearing practices not because all such phenomena are socially constructed but because we can — because it’s ‘technically sweet.’” Then:

My use of the word “we” in that last sentence leads to component two of the ideology under scrutiny here: Those who look forward to a future of increasing technological manipulation of human beings, and of other biological organisms, always imagine themselves as the Controllers, not the controlled; they always identify with the position of power. And so they forget evolutionary history, they forget biology, they forget the disasters that can come from following the Oppenheimer Principle — they forget everything that might serve to remind them of constraints on the power they have … or fondly imagine they have.

In light of current debates about the development of AI – debates that have become more heated in the wake of an open letter pleading with AI researchers to pause their experiments and take some time to think about the implications – the power of the Oppenheimer Principle has become more evident than ever. And it’s important, I think, to understand what in this context is making it so powerful.

Before I go any further, let me note that the term Artificial Intelligence may cover a very broad range of endeavors. Here I am discussing a recently emergent wing of the overall AI enterprise, the wing devoted to imitating or counterfeiting actions that most human beings think of as distinctively human: conversation, image-making (through drawing, painting, or photography), and music-making.

I think what’s happening in the development of these counterfeits – and in the resistance to asking hard questions about them – is the Silicon Valley version of what the great economist Thorstein Veblen called “trained incapacity.” As Robert K. Merton explains in a famous essay on “Bureaucratic Structure and Personality,” Veblen’s phrase describes a phenomenon identified also by John Dewey – though Dewey called it “occupational psychosis” – and by Daniel Warnotte – though Warnotte called it “Déformation professionnelle.” It is curious that this same phenomenon gets described repeatedly by our major social scientists; that suggests that it is a powerful and widespread phenomenon indeed. 

Peggy Noonan recently wrote in the Wall Street Journal of the leaders of the major Silicon Valley companies,

I am sure that as individuals they have their own private ethical commitments, their own faiths perhaps. Surely as human beings they have consciences, but consciences have to be formed by something, shaped and made mature. It’s never been clear to me from their actions what shaped theirs. I have come to see them the past 40 years as, speaking generally, morally and ethically shallow—uniquely self-seeking and not at all preoccupied with potential harms done to others through their decisions. Also some are sociopaths.

I want to make a stronger argument: that the distinctive “occupational psychosis” of Silicon Valley is sociopathy – the kind of sociopathy embedded in the Oppenheimer Principle. The people in charge at Google and Meta and (outside Silicon Valley) Microsoft, and at the less well-known companies that are being used by the mega-companies, have been deformed by their profession in ways that prevent them from perceiving, acknowledging, and acting responsibly in relation to the consequences of their research. They have a trained incapacity to think morally. They are by virtue of their narrowly technical education and the strong incentives of their profession moral idiots.

The ignorance of the technocratic moral idiot is exemplified by Sam Altman of OpenAI – an increasingly typical Silicon Valley type, with a thin veneer of moral self-congratulation imperfectly obscuring a thick layer of obedience to perverse incentives. “If you’re making AI, it is potentially very good, potentially very terrible,” but “The way to get it right is to have people engage with it, explore these systems, study them, to learn how to make them safe.” He can’t even imagine that “the way to get it right” might be not to do it at all. (See Scott Alexander on the Safe Uncertainty Fallacy: We have absolutely no idea what will result from this technological development, therefore everything will be fine.) The Oppenheimer Principle trumps all.

These people aren’t going to fix themselves. As Jonathan Haidt (among others) has often pointed out – e.g. here – the big social media companies know just how much damage their platforms are doing, especially to teenage girls, but they do not care. As Justin E. H. Smith has noted, social media platforms are “inhuman by design,” and some of the big companies are tearing off the fig leaf by dissolving their ethics teams. Deepfakes featuring Donald Trump or the Pope are totally cool, but Chairman Xi gets a free pass, because … well, just follow the money.

Decisions about these matters have to be taken out of the hands of avaricious professionally-deformed sociopaths. And that’s why lawsuits like this one matter. 

Some Pre-History on the History and Sociology of Multiple Discovery: Merton, Dicey, Stigler (etc.)

It may very well, owing to the condition of the world, and especially to the progress of knowledge, present itself at the same time to two or more persons who have had no intercommunication. Bentham and Paley formed nearly at the same date a utilitarian system of morals. Darwin and Wallace, while each ignorant of the other’s labours, thought out substantially the same theory as to the origin of species.--A.V. Dicey [2008] (1905) Lectures on the Relation between Law & Public Opinion in England during the Nineteenth Century, Indianapolis: Liberty Fund, p. 18 n. 6 (based on the 1917 reprint of the second edition).

As regular readers know (recall), I was sent to Dicey because he clearly shaped Milton Friedman's thought at key junctures in the 1940s and 50s. So, I was a bit surprised to encounter the passage quoted above. For, I tend to associate interest in the question of simultaneous invention or multiple discovery with Friedman's friend, George J. Stigler (an influential economist), and his son Stephen Stigler (a noted historian of statistics). In fairness, the Stiglers are more interested in the law of eponymy. In his (1980) article on that topic, Stephen Stigler cites Robert K. Merton's classic and comic (1957) "Priorities in Scientific Discovery: A Chapter in the Sociology of Science." (Merton's project was revived in Liam Kofi Bright's well-known "On fraud.")

When Merton presented (and first published) it, he was a colleague of George Stigler (and also of Ernest Nagel) at Columbia University. In his (1980) exploration of the law of eponymy, Stephen Stigler even attributes to Merton the claim that “all scientific discoveries are in principle multiple." (147) Stigler cites here p. 356 of Merton's 1973 book, The Sociology of Science: Theoretical and Empirical Investigations, which is supposed to be the chapter that reprints the 1957 article. I put it like that because I was unable to find the quoted phrase in the 1957 original (the idea can certainly be discerned in it, but I don't have the book available to check that page).

Merton himself makes clear that reflection on multiple discovery is co-extensive with modern science because priority disputes are endemic in it. In fact, his paper is, of course, a reflection on why the institution of science generates such disputes. Merton illustrates his points with choice quotes from scientific luminaries on the mores and incentives of science that generate such controversies, many of which are studies in psychological and social acuity and would not be out of place in Rochefoucauld's Maximes. Merton himself places his own analysis in the ambit of the social theory of Talcott Parsons (another important influence on George Stigler) and Durkheim. 

The passage quoted from Dicey is a mere footnote, which occurs in a broader passage on the role of public opinion in shaping the development of the law, and, in particular, on the claim that many developments are the effect of changes in prevailing public opinion, which are in turn the effect of the inventiveness of "some single thinker or school of thinkers." (p. 17) The quoted footnote is attached to the first sentence of a remarkably long paragraph (which I reproduce at the bottom of this post).* The first sentence is this: "The course of events in England may often at least be thus described: A new and, let us assume, a true idea presents itself to some one man of originality or genius; the discoverer of the new conception, or some follower who has embraced it with enthusiasm, preaches it to his friends or disciples, they in their turn become impressed with its importance and its truth, and gradually a whole school accept the new creed." And the note is attached to 'genius.'

Now, when one reads about multiple discovery (or simultaneous invention), it is often immediately contrasted with a 'traditional' heroic or genius model (see Wikipedia for an example, but I have found more in a literature survey, often influenced by Wikipedia). But Dicey's footnote recognizes that with the progress of knowledge, and presumably a division of labor with a (perhaps imperfect) flow of ideas, multiple discovery should become the norm (and the traditional lone-genius model out of date).

In fact, Dicey's implicit model of the invention and dissemination of new views is explicitly indebted to Mill's and Taylor's account of originality in chapter 3 of On Liberty. (Dicey only mentions Mill.) Dicey quotes Mill's and Taylor's text: "The initiation of all wise or noble things, comes and must come from individuals; generally at first from some one individual." (Dicey adds that this is also true of  folly or a new form of baseness.)   

The implicit model is still very popular. MacAskill's account (recall) of Benjamin Lay's role in Quaker abolitionism (and itself a model for social movement building among contemporary effective altruists) is quite clearly modelled on Mill and Taylor's account. I don't mean to suggest Mill and Taylor invented the model; it can be discerned in Jesus and his Apostles and has been quite nicely theorized by Ibn Khaldun in his account of prophetic leadership. Dicey's language suggests he recognizes the religious origin of the model, because he goes on (in the very next sentence of the long paragraph) as follows: "These apostles of a new faith are either persons endowed with special ability or, what is quite as likely, they are persons who, owing to their peculiar position, are freed from a bias, whether moral or intellectual, in favour of prevalent errors. At last the preachers of truth make an impression, either directly upon the general public or upon some person of eminence, say a leading statesman, who stands in a position to impress ordinary people and thus to win the support of the nation."

So far so good. But Dicey goes on to deny that acceptance of a new idea depends "on the strength of the reasoning" by which it is advocated or "even on the enthusiasm of its adherents." He ascribes the uptake of new doctrines to skillful opportunism, in particular by a class of political entrepreneurs or statesmen (exhibiting a Machiavellian virtù), in the context of "accidental conditions." (This anticipates Schumpeter, of course, and echoes the elite theorists of the age like Mosca and Michels.) Dicey's main example is the way Bright and Cobden made free trade popular in England. There is space for new directions only after older ideas have been generally discredited and the political circumstances allow for a new orientation.

It's easy to see that Dicey's informal model (or should I say Mill and Taylor's model?) lends itself to a lot of post hoc ergo propter hoc reasoning. So I am by no means endorsing it. But the wide circulation of some version of the model helps explain the relentless repetition of much public criticism (of woke-ism, neoliberalism, capitalism, etc.) that has no other goal than to discredit some way of doing things. If the model is right, these are a functional part of a strategy of preparing the public for a dramatic change of course. As I have noted, Milton Friedman was very interested in this feature of Dicey's argument [Recall: (1951) “Neo-Liberalism and its Prospects” Farmand, 17 February 1951, pp. 89-93 [recall this post] and his (1962) "Is a Free Society Stable?" New Individualist Review [recall here]].

I admit we have drifted off from multiple discovery. But obviously, after the fact, multiple discovery in social theory or morals can play a functional role in the model as a signpost that the world is getting ready to hear a new gospel. By the end of the eighteenth century, utilitarianism was being re-discovered or invented along multiple dimensions (one may also mention Godwin, and some continental thinkers) as a reformist even radical enterprise. It was responding to visible problems of the age, although its uptake was not a foregone conclusion. (And the model does not imply such uptake.)

It is tempting to claim that this suggests a dis-analogy with multiple discovery in science. But all this suggestion shows is that our culture mistakenly expects or (as I have argued) tacitly posits an efficient market in ideas in science, with near-instantaneous uptake of the good ones; in modern scientific metrics the expectation is that these are assimilated within two to five years on the research frontier. But I resist the temptation to go into an extended diatribe about why this efficient-market-in-ideas assumption is so dangerous.

*Here's the passage:

The course of events in England may often at least be thus described: A new and, let us assume, a true idea presents itself to some one man of originality or genius; the discoverer of the new conception, or some follower who has embraced it with enthusiasm, preaches it to his friends or disciples, they in their turn become impressed with its importance and its truth, and gradually a whole school accept the new creed. These apostles of a new faith are either persons endowed with special ability or, what is quite as likely, they are persons who, owing to their peculiar position, are freed from a bias, whether moral or intellectual, in favour of prevalent errors. At last the preachers of truth make an impression, either directly upon the general public or upon some person of eminence, say a leading statesman, who stands in a position to impress ordinary people and thus to win the support of the nation. Success, however, in converting mankind to a new faith, whether religious, or economical, or political, depends but slightly on the strength of the reasoning by which the faith can be defended, or even on the enthusiasm of its adherents. A change of belief arises, in the main, from the occurrence of circumstances which incline the majority of the world to hear with favour theories which, at one time, men of common sense derided as absurdities, or distrusted as paradoxes. The doctrine of free trade, for instance, has in England, for about half a century, held the field as an unassailable dogma of economic policy, but an historian would stand convicted of ignorance or folly who should imagine that the fallacies of protection were discovered by the intuitive good sense of the people, even if the existence of such a quality as the good sense of the people be more than a political fiction. The principle of free trade may, as far as Englishmen are concerned, be treated as the doctrine of Adam Smith. 
The reasons in its favour never have been, nor will, from the nature of things, be mastered by the majority of any people. The apology for freedom of commerce will always present, from one point of view, an air of paradox. Every man feels or thinks that protection would benefit his own business, and it is difficult to realise that what may be a benefit for any man taken alone, may be of no benefit to a body of men looked at collectively. The obvious objections to free trade may, as free traders conceive, be met; but then the reasoning by which these objections are met is often elaborate and subtle, and does not carry conviction to the crowd. It is idle to suppose that belief in freedom of trade—or indeed any other creed—ever won its way among the majority of converts by the mere force of reasoning. The course of events was very different. The theory of free trade won by degrees the approval of statesmen of special insight, and adherents to the new economic religion were one by one gained among persons of intelligence. Cobden and Bright finally became potent advocates of truths of which they were in no sense the discoverers. This assertion in no way detracts from the credit due to these eminent men. They performed to admiration the proper function of popular leaders; by prodigies of energy, and by seizing a favourable opportunity, of which they made the very most use that was possible, they gained the acceptance by the English people of truths which have rarely, in any country but England, acquired popularity. Much was due to the opportuneness of the time. Protection wears its most offensive guise when it can be identified with a tax on bread, and therefore can, without patent injustice, be described as the parent of famine and starvation. 
The unpopularity, moreover, inherent in a tax on corn is all but fatal to a protective tariff when the class which protection enriches is comparatively small, whilst the class which would suffer keenly from dearness of bread and would obtain benefit from free trade is large, and having already acquired much, is certain soon to acquire more political power. Add to all this that the Irish famine made the suspension of the corn laws a patent necessity. It is easy, then, to see how great in England was the part played by external circumstances—one might almost say by accidental conditions—in determining the overthrow of protection. A student should further remark that after free trade became an established principle of English policy, the majority of the English people accepted it mainly on authority. Men, who were neither land-owners nor farmers, perceived with ease the obtrusive evils of a tax on corn, but they and their leaders were far less influenced by arguments against protection generally than by the immediate and almost visible advantage of cheapening the bread of artisans and labourers. What, however, weighed with most Englishmen, above every other consideration, was the harmony of the doctrine that commerce ought to be free, with that disbelief in the benefits of State intervention which in 1846 had been gaining ground for more than a generation.

 

Better Philosophy Through Time Travel

Here’s one way of thinking about progress in philosophy.

Having determined that progress in philosophy has been too slow, the leaders of the Galactic Philosophy Federation (GPF) take on the mission of improving it. Realizing that the earlier an intervention can be made, other things equal, the more progress is likely to result, they begin by considering changes that can be implemented immediately. Unfortunately, there are not many inspiring options. They then learn about a new invention, the “Passed to the Past” (P2P) device, which allows people in the present to send messages back in time. The past is earlier than the present, so, they figure, we could in principle have even more progress in philosophy if we changed something in the past.

Still in beta, P2P has certain limits. First, it can only send short messages—no more than around 600 characters (roughly the size of the previous paragraph). Second, the recent past is unavailable as a destination—messages have to be sent to a time prior to 1900. And third, it is very expensive. Still, they find it promising and decide to try to make it the case that there has been (and perhaps will continue to be) more progress in philosophy by sending messages back in time to earlier philosophers.

When it comes time to budget for this project, the GPF’s leaders find, alas, that they have enough money to fund only one message. Hopeful that one message could make a difference, they turn to the matter of settling on its content, recipient, and timing. For this, they ask you, the philosophers of the world, for suggestions:

Given the aim of improving philosophy’s progress, what brief message would you send to which past philosopher?
Keep in mind that the message must be around 600 characters or less, and that the message must be sent back to a year prior to 1900; if it matters, be specific about when in the philosopher’s life they should receive the message.

(The question is intentionally open-ended in a few ways, and “progress” is intentionally left unspecified.)

What’s your answer?



The Virtuous Image: Femininity and Portraiture on the Internet

Images of bodies impact young people, especially young girls and women. The normative implications of those images—what a body ought to look like and what a body ought not to look like—affect their self-esteem. A 2021 exposé of internal research conducted by Facebook (now Meta) on its photo-sharing app Instagram revealed the company itself tracked […]

Optimism about Philosophy

“I know a lot of people on twitter and social media complain about the current state of philosophy but I tend to be an optimist.”

That’s Gregg Caruso, professor of philosophy at SUNY Corning, in a new interview at What Is It Like To Be A Philosopher?. 

He continues:

I think the future of philosophy is strong. There is more interesting and diverse work being done today in philosophy than perhaps ever before. In fact, I can barely keep up with all the excellent work being done in areas of philosophy that never previously existed.

The days of philosophy being dominated by one or two figures (or methodologies) at a time is over, and I think that’s a good thing. Let a thousand flowers bloom, as they say.

This isn’t to say there aren’t things to be concerned about:

If I have any fears, they are not about philosophy itself but with direction of higher education, which has been moving away from providing students with a well-rounded liberal arts education and toward vocational training. This trend is bad, not only for the discipline of philosophy but for society as a whole.  

The interview, interesting throughout, ranges over Professor Caruso’s life, education, and work. You can read the whole thing here.


Meow Wolf enters the metaverse mini-golf space

Immersive art collective Meow Wolf is taking its unique brand of creativity to the virtual world with a new mini-golf course set to launch later this year. Meow Wolf has teamed up with Walkabout Mini Golf, one of the most popular multiplayer VR games on the market, to create a course that promises to be unlike anything seen before.

By: ayjay

Mary Harrington:

Increasingly, wave after wave of young people reaches adulthood armed with pop-Butlerism via university and Tumblr alike. No wonder growing numbers long to edit their meat avatars as they might their online ones, and that this isn’t confined to young girls pursuing unattainable beauty ideals. Reddit hosts anecdotal reports from individuals who decided to transition after using the digital funhouse mirror to feminise themselves, and deciding they liked that look better.

But the trouble is that this is only true until you log off. The digital age holds out a promise of total emancipation from material reality — one that, in politics, is now driving an increasingly bitter divide between those who can sustain this illusion and those still forced to deal with the real world. And, implicitly, we’re told we can apply this digital Prometheanism to our bodies, too. But it doesn’t work: the gap between protean sex-swap fantasy and sutured, bleeding, often complication-filled reality can be the stuff of nightmares — one that’s now prompting a surge of lawsuits. All that happens is that we open up a new, futile (but still highly profitable) war of attrition against our own nature. 

As I have often noted, the highlighted phrase is absolutely key. Maybe one way to talk to people who have been captured by the allure of transformation-by-biotech is to ask them to think about all the really cool things they could do with that money. (Though, come to think of it, I’m sure they expect insurance — i.e., everyone contributing insurance premiums — to pay for whatever they want.) 

Meta Quest Pro sees 33 percent price drop after less than five months

The Meta Quest Pro.

When we reviewed the Meta Quest Pro headset less than five months ago, we balked at the device's $1,500 price point, which represented a whopping 275 percent price premium over the Quest 2 (with much less than a 275 percent increase in quality). Meta is already taking steps to scale back that massive asking price, though; as of Sunday, the headset is now available for $1,000 in the US and Canada (a similar price drop will take place March 15 in other Quest Pro countries).

The price drop puts the Quest Pro in line with other high-end headsets, including the untethered $1,100 HTC Vive XR Elite and the $1,000 Valve Index (which requires tethering to a gaming PC). That said, for practically the same money, you can get a $550 PSVR2 and the $500 PlayStation 5 to tether it to. And the Quest Pro is still 150 percent more expensive than the cheapest Quest 2, which supports almost all the same software and delivers a sufficient VR experience for most users.

Speaking of the Quest 2, Meta has also announced a 14 percent price drop for the 256GB version of that headset, from $500 to $430. That price drop brings that expanded-storage option almost all the way back to the $400 that Meta was charging for it before last year's unprecedented price increase.
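The percentage figures in this post all follow from the standard percent-change formulas applied to the reported US prices; a minimal sketch (prices in USD as given above) shows how they line up:

```python
def pct_drop(old: float, new: float) -> float:
    """Percentage decrease when a price falls from `old` to `new`."""
    return (old - new) / old * 100

def pct_premium(price: float, base: float) -> float:
    """How much more expensive `price` is than `base`, in percent."""
    return (price - base) / base * 100

print(round(pct_drop(1500, 1000)))    # Quest Pro cut, $1,500 -> $1,000: ~33 percent
print(round(pct_premium(1500, 400)))  # launch Quest Pro vs. $400 Quest 2: 275 percent
print(round(pct_premium(1000, 400)))  # discounted Quest Pro vs. $400 Quest 2: 150 percent
print(round(pct_drop(500, 430)))      # 256GB Quest 2 cut, $500 -> $430: 14 percent
```

Note that the drop is measured against the old (higher) price, while the premium is measured against the cheaper baseline, which is why a 275 percent premium can shrink to 150 percent after only a 33 percent cut.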

