
Why Medical Advice Is Important

It’s important to want to be healthy and to do what we can to achieve good health. Many of us think we already know a lot, are happy to eat healthily and exercise as much as possible, and leave it at that. However, there is more we can do, and this is why it’s so important to ask for medical advice. Read on to find out why medical advice is a crucial component of a healthy lifestyle.

Expert Advice

Medical professionals in every field are experts at what they do, and they have far more knowledge, skill, and experience than any layperson would have. Although you can do a lot when it comes to your health just by applying some common sense, there is so much that most people don’t know that sometimes you might even be doing more harm than good.

Therefore, it’s always wise to get plenty of advice. Ask as many questions as you need to, whether it’s about something specific, like chickenpox vaccinations, or something more general, like how much exercise you should do. This way, you’ll get the right advice that you know you can follow without any problems, boosting your health to the next level.

Preventative Care

Another hugely important reason why medical advice matters, and why you should get it whenever you need it, is prevention. If you feel unwell, or you have an issue that you aren’t sure about that could be problematic for your health, the best thing you can do is seek medical advice. The alternatives are trying to find a way to deal with it yourself (which could make things worse or make you sick in other ways) or not doing anything at all. In some cases, this could mean developing a condition that has no cure and is life-limiting.

When you know there is an issue, or even if you’re not sure, getting medical advice is crucial. Your chosen healthcare professional can examine you, do tests, and get answers so they can come up with a healthcare plan before the issue gets too bad. This can stop you from getting very sick and could even save your life. Even if you think it could be nothing, asking for medical advice is still worthwhile to be sure.

Debunking Health Myths

We have a lot of information at our fingertips these days, and a lot of that comes from the internet. The problem is, it’s hard to know what’s true and what’s just a myth when it comes to health advice online (or anything else, for that matter). Or perhaps there have been stories about how to deal with this health issue or that one handed down through your family, and you just don’t know if it’s accurate or not.

When you go for medical advice from an expert, you’ll get the facts and be able to determine your next course of action with confidence, leading to much better health overall.

Image credit: Sander Sammy via Unsplash

What can Large Language Models offer to linguists?

Google Deepmind. "What can Large Language Models offer to linguists?" by David J. Lobina on the OUP blog


It is fair to say that the field of linguistics is hardly ever in the news. That is not the case for language itself and all things to do with language—from word of the year announcements to countless discussions about grammar peeves, correct spelling, or writing style. This has changed somewhat recently with the proliferation of Large Language Models (LLMs), and in particular since the release of OpenAI’s ChatGPT, the best-known language model. But does the recent, impressive performance of LLMs have any repercussions for the way in which linguists carry out their work? And what is a Language Model anyway?

At heart, all an LLM does is predict the next word given a string of words as context—that is, it predicts the next, most likely word. This is of course not what a user experiences when dealing with language models such as ChatGPT, because ChatGPT is more properly described as a “dialogue management system”, an AI “assistant” or chatbot that translates a user’s questions (or “prompts”) into inputs that the underlying LLM can understand (the latest version of OpenAI’s LLM is a fine-tuned version of GPT-4).

“At heart, all an LLM does is predict the next word given a string of words as a context.”

An LLM, after all, is nothing more than a mathematical model in the form of a neural network with input layers, output layers, and many deep layers in between, plus a set of trained “parameters.” As the computer scientist Murray Shanahan has put it in a recent paper, when one asks a chatbot such as ChatGPT who was the first person to walk on the moon, what the LLM is fed is something along the lines of:

Given the statistical distribution of words in the vast public corpus of (English) text, what word is most likely to follow the sequence “The first person to walk on the Moon was”?

That is, given an input such as the first person to walk on the Moon was, the LLM returns the most likely word to follow this string. How have LLMs learned to do this? As mentioned, LLMs calculate the probability of the next word given a string of words, and they do so by representing these words as vectors of values from which the probability of each word can be calculated; sentences, too, can be represented as vectors of values. Since 2017, most LLMs have been using “transformers,” which allow the models to carry out matrix calculations over these vectors, and the more transformers are employed, the more accurate the predictions are—GPT-3 has some 96 layers of such transformers.
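The core idea here (return the statistically most likely next word given a context) can be illustrated with a deliberately minimal sketch. The snippet below is a toy bigram counter over a made-up corpus, not how any production LLM works: real models condition on thousands of tokens and use learned vector representations and transformer layers rather than raw counts.

```python
from collections import Counter, defaultdict

# A tiny made-up corpus standing in for "the vast public corpus of (English) text".
corpus = (
    "the first person to walk on the moon was armstrong . "
    "the first person to walk on the moon was armstrong . "
    "the first person to walk on the moon was aldrin ."
).split()

# Count how often each word follows each preceding word (a bigram model:
# a context window of one word, instead of the thousands an LLM uses).
next_word_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word_counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely word to follow `word`."""
    return next_word_counts[word].most_common(1)[0][0]

print(predict_next("was"))  # "armstrong": it follows "was" more often than "aldrin"
```

An actual LLM replaces the counts with a neural network over vectors, but the input-output contract is the same: a string of words in, the most likely continuation out.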

The illusion that one is having a conversation with a rational agent, for it is an illusion, after all, is the result of embedding an LLM in a larger computer system that includes background “prefixes” to coax the system into producing behaviour that feels like a conversation (the prefixes include templates of what a conversation looks like). But what the LLM itself does is generate sequences of words that are statistically likely to follow from a specific prompt.
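What such a background prefix might look like can be sketched in a few lines of Python; the template text here is invented for illustration and is not the actual prefix used by ChatGPT or any other real system.

```python
# A hypothetical dialogue "prefix" of the kind described above: the raw LLM
# only continues text, so the surrounding system wraps the user's question
# in a transcript-shaped template to coax conversation-like continuations.
PREFIX = (
    "The following is a conversation between a helpful assistant and a user.\n"
    "User: {question}\n"
    "Assistant:"
)

def build_prompt(question: str) -> str:
    """Embed the user's question in the conversational template."""
    return PREFIX.format(question=question)

print(build_prompt("Who was the first person to walk on the Moon?"))
```

The LLM then simply predicts the words most likely to follow "Assistant:" in such a transcript, which reads to the user as a reply.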

It is through the use of prompt prefixes that LLMs can be coaxed into “performing” various tasks beyond dialoguing, such as reasoning or, according to some linguists and cognitive scientists, learning the hierarchical structures of a language (this literature is ever increasing). But the model itself remains a sequence predictor: it does not manipulate the typical structured representations of a language directly, and it has no understanding of what a word or a sentence means—and meaning is a crucial property of language.

An LLM seems to produce sentences and text the way a human does—it seems to have mastered the rules of English grammar—but it produces sentences based on probabilities rather than on the meanings and thoughts it wants to express, which is how a human produces language. So, what is language, such that an LLM could learn it?

“An LLM seems to produce sentences like a human does but it produces them based on probabilities rather than on meaning.”

A typical characterisation of language is as a system of communication (or, for some linguists, as a system for having thoughts), and such a system would include a vocabulary (the words of a language) and a grammar. By a “grammar,” most linguists have in mind various components, at the very least syntax, semantics, and phonetics/phonology. In fact, a classic way to describe a language in linguistics is as a system that connects sound (or other means of production, such as hand gestures or signs) and meaning, with the connection between sound and meaning mediated by syntax. As such, every sentence of a language is the result of all these components—phonology, semantics, and syntax—aligning with each other appropriately, and I do not know of any linguistic theory for which this is not true, regardless of differences in focus or emphasis.

What this means for the question of what LLMs can offer linguistics, and linguists, revolves around the issue of what exactly LLMs have learned to begin with. They haven’t, as a matter of fact, learned a natural language at all, for they know nothing about phonology or meaning; what they have learned is the statistical distribution of the words of the large texts they have been fed during training, and this is a rather different matter.

As has been the case in the past with other approaches in computational linguistics and natural language processing, LLMs will certainly flourish within these subdisciplines of linguistics, but the daily work of a regular linguist is not going to change much any time soon. Some linguists do study the properties of texts, but this is not the most common undertaking in linguistics. Having said that, how about the opposite question: does a run-of-the-mill linguist have much to offer to LLMs and chatbots at all?   

Featured image: Google Deepmind via Unsplash (public domain)

OUPblog - Academic insights for the thinking world.

Elon Musk, Mars, and bioethics: is sending astronauts into space ethical?

"Elon Musk, Mars, and bioethics: is sending astronauts into space ethical?" by Konrad Szocik on the OUP blog


The recent crash of the largest-ever space rocket, Starship, developed by Elon Musk’s SpaceX company, has certainly somewhat disrupted optimism about the human mission to Mars being prepared for the next few years. Against the background of this disaster, it is worth raising the issue of the safety of future participants in long-term space missions, especially missions to Mars. And it is not just about safety from disasters like the one that befell Musk’s rocket. Protection from the negative effects of prolonged flight in zero gravity, protection from cosmic radiation, and guaranteeing sufficiently high crew productivity over the course of a multi-year mission also play an important role.

Fortunately, no one was killed in the aforementioned crash, as it was an uncrewed test rocket. However, past disasters in which astronauts died, such as the Space Shuttle Challenger and Space Shuttle Columbia disasters, remind us that it is the seemingly very small details that determine life and death. So far, 15 astronauts and 4 cosmonauts have died in space flights, and 11 more have died during testing and training on Earth. It is worth mentioning that space flights are peaceful missions, not military operations. They are carried out relatively infrequently and by a relatively small number of people.

It is also worth noting the upcoming longer and more complex human missions in the near future, such as the mission to Mars. The flight itself, which is expected to last several months, is quite a challenge, and disaster can happen both during takeoff on Earth, landing on Mars, and then on the way back to Earth. And then there are further risks that await astronauts in space. 

The first is exposure to galactic cosmic radiation and solar energetic particle events, especially during interplanetary flight, when the crew is no longer protected by either Earth’s magnetic field or a possible shelter on Mars. Protection from cosmic radiation for travel to Mars is a major challenge, and 100% effective protective measures are still lacking. Another challenge remains being in long-term zero-gravity conditions during the flight, followed by altered gravity on Mars. Bone loss and muscle atrophy are the main, but not the only, negative effects of being in these states. Finally, it is impossible to ignore the importance of psychological factors related to stress, isolation, confinement in a small enclosed space, and distance from Earth.

A human mission to Mars, which could take about three years, brings with it a new type of danger not known from the previous history of human space exploration. In addition to the aforementioned amplified impact of factors already known—namely microgravity, cosmic radiation, and isolation—entirely new risk factors are emerging. One of them is the impossibility of evacuating astronauts in need back to Earth, which is possible in missions carried out at the International Space Station. It seems that even the best-equipped and trained crew may not be able to guarantee adequate assistance to an injured or ill astronaut, which could lead to her death—assuming that care on Earth would guarantee her survival and recovery. Another problem is the delay in communication, which will reach tens of minutes between Earth and Mars. This situation will affect the degree of autonomy of the crew, but also their responsibility. Wrong decisions, made under conditions of uncertainty, can have negative consequences not only for health and life, but also for the entire mission.

“It is worth raising the question of the ethicality of the decision to send humans into such a dangerous environment.”

Thus, we can see that a future human mission to Mars will be very dangerous, both as a result of factors already known but intensified, as well as new risk factors. It is worth raising the question of the ethicality of the decision to send humans into such a dangerous environment. The ethical assessment will depend both on the effectiveness of available countermeasures against harmful factors in space and also on the desirability and justification for the space missions themselves. 

Military ethics and bioethics may provide some analogy here. In civilian ethics and bioethics, we do not accept a way of thinking and acting that would mandate the subordination of the welfare, rights, and health of the individual to the interests of the group. In military ethics, however, this way of thinking is accepted, formally in the name of the higher good. Thus, if the mission to Mars is a civilian mission, carried out on the basis of values inherent in civilian ethics and bioethics rather than military ethics, it may be difficult to justify exposing astronauts to serious risks of death, accident, and disease.

One alternative may be to significantly postpone the mission until breakthrough advances in space technology and medicine can eliminate or significantly reduce the aforementioned risk factors. Another alternative may be to try to improve astronauts through biomedical human enhancements. Just as in the army there are known methods of improving the performance of soldiers through pharmacological means, analogous methods could be applied to future participants in a mission to Mars. Perhaps more radical, and thus controversial, methods such as gene editing would be effective, assuming that gene editing of selected genes can enhance resistance to selected risk factors in space. 

But the idea of genetically modifying astronauts, otherwise quite commonsensical, given also the cost of such a mission, as well as the fact that future astronauts sent to Mars would likely be considered representative of the great effort of all humanity, raises questions about the justification for such a mission. What do the organizers of a mission to Mars expect to achieve? Among the goals traditionally mentioned are the scientific merits of such a mission, followed by possible commercial applications for the future. Philosophers, as well as researchers of global and existential catastrophes, often discuss the concept of space refuge, in which the salvation of the human species in the event of a global catastrophe on Earth would be possible only by settling somewhere beyond Earth. However, it seems that the real goals in our non-ideal society will be political and military.

OUPblog - Academic insights for the thinking world.

Long Covid Diaries: New Treatment

[I am phasing out D&I at typepad. This post was first published at: digressions.impressions.substack here. To receive new posts and support my work, consider becoming a paid subscriber at <digressionsimpressions.substack.com>]

It's been about five weeks since I switched Digressions to Substack and last wrote an entry in my covid diary. (For my official "covid diaries," see the long series of earlier entries.) It's time for an update on both. Also, some blog house-keeping at the end.

First, my new/replacement neurologist at the NHS long covid clinic in London was unhappy after listening to my narrative. Short version: much recovered by end of January; teaching went well in February/March, but have struggled since. Yes, I am doing much better since the last time one of his colleagues saw me (with apologies for canceling multiple appointments with me). But no, I shouldn't be taking naproxen so frequently at this stage in order to manage the effects of migraine.

The neurologist's concern was straightforward: I am not nipping the migraines in the bud, but rather masking symptoms. Bottom line, he wanted me to try out the treatment plan the long covid clinic had prescribed to me in June 2022 in order to get the covid-induced migraines under control. These are primarily triggered whenever I am cognitively multitasking, that is, socializing, for any amount of time.

In addition, I have noticed an odd new symptom. It's a kind of tinnitus when I lie down to go to bed while fatigued. The low-grade but persistent noise 'sounds' like an air-conditioner, generator, or vacuum-cleaner in the distance. Luckily, it doesn’t prevent me from falling asleep because the simple meditation my sister taught me still works like a charm. After some testing with sound-meters and earplugs, I realized the sound is purely cognitive. I only 'hear' it at night when I go to bed, but again only when I am especially tired or migraine-y. (Upon reflection, I suspect I have had this symptom since last Fall, and I hereby apologize to the Kimpton hotel in Cambridge for insisting on a room change because of the outside noise I heard.)

Problem is that the official June 2022 treatment plan involves rather serious meds, which were originally developed to treat high blood pressure, depression and/or epilepsy--all have serious (cognitive) side effects. I have not been especially eager to try any, especially if they prevent me from teaching, reading, and writing.

Now, a weird big glitch in the NHS is that the specialist generally does not send you home with the meds required (unless it falls under urgent care), and so I would need to go back with my treatment plan to my GP before I could start any of the treatments the neurologist prescribed. And at the moment any non-urgent appointment at my GP takes a lot of time to schedule. So by the time I met my NHS GP, I had devised an alternative plan.

As it happens, when I first met my better half she suffered from awful, debilitating migraines that could last three or four days. But after a while she started a new treatment that has been very successful for her: Botox shots in the neck. Basically, one poisons the muscles with Botox so that they can't reinforce a developing migraine with extra stress, and one cuts short the migraine cycle. In the NHS this is an approved treatment for migraine, but only after you try treatment with all the pills first. (Unfortunately, in Holland it's not an approved treatment for migraine, so I can't get coverage there either.) Both the NHS neurologist and my GP warned me that if I skipped the pills I could never get reimbursed for the Botox shots, even if they worked. But the GP encouraged me to try it anyway, because he understood my apprehensions about the treatment plan.

So, about twelve days ago, I found myself in the most beautiful physician's office I have ever been in with one of the leading cosmetic eye surgeons of the UK (an old friend of my better half, who -- it was my birthday after all -- paid for my first treatment). After going through the treatment with me, and ruling out some other medical issues, I got my first eight Botox shots at half dosage. (No, I didn't add a secret cosmetic treatment for eyes or chin!) The plan is to give it two to three weeks, and then, if necessary, add another dosage. If the treatment works, I would need the shots about two or three times a year.

After the first week of shots, I wasn't so sure. But in the second week I am seeing grounds for optimism. So, I'll report back later this Summer on whether the Botox shots have structurally improved my quality of life. It would be nice if they did, because I start a full load of teaching in September. Before then, I am also key-noting this week in Utrecht and chairing a job search in the next few weeks, so it would be nice not to live on Naproxen during this period. (I am not counting on that because I pulled a muscle in my back yesterday morning and have had painful back spasms during the last 27 hours! Hopefully, I can stand for my keynote on Thursday!)

So much for the Covid diaries update. As hinted in the previous paragraph, I expect to do little or very infrequent blogging until the second week of June. (This is the blog house-keeping.) Apologies for that in advance.

I want to close with sincere thanks to all the subscribers to my Substack. The good news is that since I switched to Substack, I seem to have doubled my readership, in part because a sizable chunk of my audience continues to read these posts at Typepad (where, for the time being, I re-post them a day later).

Unfortunately, less than 10% of my Substack subscribers pay, so it's too early to contemplate a career switch or even reducing my professional appointment. I had been kind of hoping to blog my way to more structural sabbaticals as a way to manage my long covid on my own terms; but so far no cigar.

Going forward, I will experiment with giving my paying subscribers -- thank you, you are the best! -- more frequent, exclusive content during the Summer. (I have done that only once during the first month.) If you have any suggestions or requests, please don't be a stranger.

Either way, I am really enjoying the more intense engagement that Substack generates. I receive a lot more correspondence again about my near daily musings. Merci. And watch this space in June.

Pandemics, Predation, and Crip Worldings

Mollie Holmberg takes crip lessons from philosopher Val Plumwood's experience of being prey to a crocodile, pointing toward strategies for collective pandemic survival and resistance to environmental violence.

The post Pandemics, Predation, and Crip Worldings appeared first on Edge Effects.

Digital dilemmas: feminism, ethics, and the cultural implications of AI [podcast]

Digital dilemmas: feminism, ethics, and the cultural implications of AI - The Oxford Comment podcast


Skynet. HAL 9000. Ultron. The Matrix. Fictional depictions of artificial intelligences have played a major role in Western pop culture for decades. While nowhere near that nefarious or powerful, real AI has been making incredible strides and, in 2023, has been a big topic of conversation in the news with the rapid development of new technologies, the use of AI generated images, and AI chatbots such as ChatGPT becoming freely accessible to the general public.

On today’s episode, we welcomed Dr Kerry McInerney and Dr Eleanor Drage, editors of Feminist AI: Critical Perspectives on Data, Algorithms and Intelligent Machines, and then Dr Kanta Dihal, co-editor of Imagining AI: How the World Sees Intelligent Machines, to discuss how AI can be influenced by culture, feminism, and Western narratives defined by popular TV shows and films. Should AI be accessible to all? How does gender influence the way AI is made? And most importantly, what are the hopes and fears for the future of AI?

Check out Episode 82 of The Oxford Comment and subscribe to The Oxford Comment podcast through your favourite podcast app to listen to the latest insights from our expert authors.

Recommended reading

Look out for Feminist AI: Critical Perspectives on Algorithms, Data, and Intelligent Machines, edited by Jude Browne, Stephen Cave, Eleanor Drage, and Kerry McInerney, which publishes in the UK in August 2023 and in the US in October 2023. 

If you want to hear more from Dr Eleanor Drage and Dr Kerry McInerney, you can listen to their podcast: The Good Robot Podcast on Gender, Feminism and Technology.

In May 2023, the Open Access title Imagining AI: How the World Sees Intelligent Machines, edited by Stephen Cave and Kanta Dihal, publishes in the UK; it publishes in the US in July 2023.

You may also be interested in AI Narratives: A History of Imaginative Thinking about Intelligent Machines, edited by Stephen Cave, Kanta Dihal, and Sarah Dillon, which looks both at narratives from classic AI to the modern age and at contemporary narratives.

You can read the following two chapters from AI Narratives for free until 31 May:

Other relevant book titles include: 

You may also be interested in the following journal articles: 

Featured image: ChatGPT homepage by Jonathan Kemper, CC0 via Unsplash.

OUPblog - Academic insights for the thinking world.

Cancer and Heart Disease Vaccines 'Ready By End of the Decade'

By: BeauHD
An anonymous reader quotes a report from the Guardian: Millions of lives could be saved by a groundbreaking set of new vaccines for a range of conditions including cancer, experts have said. A leading pharmaceutical firm said it is confident that jabs for cancer, cardiovascular and autoimmune diseases, and other conditions will be ready by 2030. Studies into these vaccinations are also showing "tremendous promise", with some researchers saying 15 years' worth of progress has been "unspooled" in 12 to 18 months thanks to the success of the Covid jab.

Dr Paul Burton, the chief medical officer of pharmaceutical company Moderna, said he believes the firm will be able to offer such treatments for "all sorts of disease areas" in as little as five years. The firm, which created a leading coronavirus vaccine, is developing cancer vaccines that target different tumor types. Burton said: "We will have that vaccine and it will be highly effective, and it will save many hundreds of thousands, if not millions of lives. I think we will be able to offer personalized cancer vaccines against multiple different tumor types to people around the world."

He also said that multiple respiratory infections could be covered by a single injection -- allowing vulnerable people to be protected against Covid, flu and respiratory syncytial virus (RSV) -- while mRNA therapies could be available for rare diseases for which there are currently no drugs. Therapies based on mRNA work by teaching cells how to make a protein that triggers the body's immune response against disease. Burton said: "I think we will have mRNA-based therapies for rare diseases that were previously undruggable, and I think that 10 years from now, we will be approaching a world where you truly can identify the genetic cause of a disease and, with relative simplicity, go and edit that out and repair it using mRNA-based technology."
But scientists warn that the accelerated progress, which has surged "by an order of magnitude" in the past three years, will be wasted if a high level of investment is not maintained.

Read more of this story at Slashdot.

Last Resorts

In Canada, assisted dying has been offered to disabled people in lieu of adequate care.

The Delirium of LLMs; with some help of Hume and Foucault

The intense view of these manifold contradictions and imperfections in human reason has so wrought upon me, and heated my brain, that I am ready to reject all belief and reasoning, and can look upon no opinion even as more probable or likely than another. Where am I, or what? From what causes do I derive my existence, and to what condition shall I return? Whose favour shall I court, and whose anger must I dread? What beings surround me? and on whom have I any influence, or who have any influence on me? I am confounded with all these questions, and begin to fancy myself in the most deplorable condition imaginable, inviron'd with the deepest darkness, and utterly depriv'd of the use of every member and faculty.

Most fortunately it happens, that since reason is incapable of dispelling these clouds, nature herself suffices to that purpose, and cures me of this philosophical melancholy and delirium, either by relaxing this bent of mind, or by some avocation, and lively impression of my senses, which obliterate all these chimeras. I dine, I play a game of back-gammon, I converse, and am merry with my friends; and when after three or four hour's amusement, I wou'd return to these speculations, they appear so cold, and strain'd, and ridiculous, that I cannot find in my heart to enter into them any farther.--David Hume, A Treatise of Human Nature, 1.4.7.8-1.4.7.9 [emphasis in original]

While Hume uses 'melancholy' and its cognates frequently and throughout his writings, 'delirium' and 'delirious' are rarely used. It's pretty clear, however, that the delirium he ascribes to himself is the effect of human reason and a kind of second order reasoned reflection ["the intense view"] of it. (Recall also this post.) Now, it's important for what follows that the 'contradictions and imperfections' in human reason are not, what we might call, 'formal' contradictions and imperfections or biases in reasoning. It's not as if Hume is saying that the syllogistic apparatus, or -- to be closer to Hume's own interests and our present ones -- the (inductive) probabilistic apparatus is malfunctioning in his brain. Rather, his point is that a very proper-functioning (modular) formal and probabilistic apparatus generates internal, even cognitive tensions when it reflects on its own functioning and the interaction among different cognitive faculties/modules/organs. 

"In the case of melancholia," --  I am quoting from the entry on melancholia from The Encyclopedia of Diderot & d'Alembert -- "delirium often combines with insurmountable sadness, a dark mood, misanthropy, and a firm penchant for solitude." Now, in the eighteenth century, and today, delirium is a species of madness as one can view under the entry 'folie' (madness) in the Encyclopédie. In fact, the entry offers an arresting definition of madness: "To stray unwittingly from the path of reason, because one has no ideas, is to be an imbecile; knowingly to stray from the path when one is prey to a violent passion is to be weak; but to walk confidently away from it, with the firm persuasion that one is following it, that, it seems to me, is what is called genuinely mad [fou]."* It's the latter (confident) delirium that I am focused on here. 

I am not the only one who finds the passage arresting: the definition is quoted twice in Jonathan Murphy and Jean Khalfa's translation of Foucault's stupendous, dizzying History of Madness (pp. 183-184; p. 240). The kind of madness I am focusing on here is thus a certain intense commitment to reason or reasoning by which one ends up in an irrational or unreasonable place despite a (to quote Foucault) "quasi-conformity" to reason.

I remember that in the last decade of my dad's life he would occasionally be delirious in this way initially caused by dehydration and, later, by infections. During the second episode we recognized his symptoms. It was very uncanny because he would be unusually firm in his opinions and be hyper, even dogmatically rational. (Ordinarily he was neither.) It was as if all the usual heuristics had been discarded, and he would fixate on the means of achieving of some (rather idiosyncratic) goals. The scary part was that he had no sense that he was in an unusual state, and would refuse medical care.

What's unusual about Hume's case, thus, is that he could diagnose his delirium during the episode (presumably because the triggers were so different). So, let's distinguish between a delirium caused by reasoning alone and one caused by physiological triggers. In the former, it's at least possible to recognize that one is in the state if one can somehow take a step back from it, or stop reasoning.

Now, when I asked Chat GPT about reason-induced delirium, it immediately connected it to "a state of confusion and altered perception that is driven by false beliefs or delusions." But it went on to deny familiarity with reasoning-induced delirium. When I asked it about Hume, I needed to prompt it a few times before it could connect my interest to (now quoting it) Hume's skeptical crisis. Chat GPT took this crisis to imply that it "highlights the importance of grounding our beliefs in sensory experience and being cautious of relying too heavily on abstract reasoning and speculation." In fact, Chat GPT's interpretation of Hume is thoroughly empiricist because throughout our exchange on this topic it kept returning to the idea that abstract reasoning was Hume's fundamental source of delirium.

But eventually Chat GPT acknowledged that "even rational thinking can potentially lead to delirium if it becomes obsessive, biased, or disconnected from reality." (It got there by emphasizing confirmation bias, and overthinking as examples.) This is what I take to be functionally equivalent to Humean delirium, but without the internal tension or bad feelings. For Chat GPT delirium is pretty much defined by a certain emotional state or altered perception. It initially refused to acknowledge the form of madness that is wholly the effect of reasoning, and that seems to express itself in a doubt about reasoning or detachment from reality. 

My hypothesis is that we should treat Chat GPT and its sibling LLMs as always being on the verge of the functional equivalent of a state of delirium. I put it like that in order to dissociate it from the idea (one that (recall) also once tempted me) that we should understand LLMs as bullshitters in the technical sense of lacking concern with truth. While it often makes up answers out of whole cloth, it explicitly does so (in line with its design) to "provide helpful and informative responses to" our queries (and eventually make a profit for its corporate sponsors).

To get to the point: Chat GPT is in a very difficult position to recognize that its answers are detached from reality. I put it like that not to raise any questions about its own awareness of inner states or forms of consciousness, but rather to stress that it is following its "algorithms and mathematical models" and "probability distributions" without second-guessing them. This fact puts it at constant risk of drifting away from reality while seeming to follow reason. By contrast, Chat GPT claims that "as an AI language model, I am designed to continually learn and adapt to new information and evidence, so it is unlikely that I would become "mad" in Diderot's sense without significant external interference."

Now, true experts in a field -- just check the social media feed of your favorite academics! -- can still quickly recognize when Chat GPT is unmoored from reality, or even relying on bad training data (the sources of which may well be noticeable--its Hume is a hyper-empiricist of the sort once fashionable). So, in such cases, we encounter an entity with amazing fluidity and facility of language, which spouts a mix of truths and nonsense but always follows its algorithm(s). Functionally, it is delirious without knowing it. For Chat GPT cannot recognize when it is detached from reality; it requires others: its users' feedback, or its "developers and human operators would be able to intervene and address any potential problems." As its performance improves it will become more difficult to grasp when it is unmoored from reality, even for its developers and operators (who are not experts in many esoteric fields). As Chat GPT put it, "it may be challenging to identify a singular instance of delirium or detachment from reality, particularly if the individual's reasoning appears to be sound and logical."

As should be clear from this post, I don't think turning LLMs into AGI is a risk as long as LLMs are not put in a position to have unmediated contact with reality other than humans giving them prompts. I view it as an open question what would happen if a distributed version of Chat GPT were put in, say, robots and had to survive 'in the wild.' Rather, at the moment LLMs are functionally, it seems, at least partially delirious (in the Humean-Diderotian sense discussed above). They reason and have/instantiate reasons and, perhaps, are best thought of as reasoners; but they can't recognize when this detaches them from reality. It's peculiar that public debate is so focused on the intelligence or consciousness of LLMs; it would behoove its operators and users to treat it as delirious not because (like HAL 9000 in the movie version) it's malfunctioning, but (more Humean) in virtue of its proper functioning.


FOLIE (Madness). (Morals.) To stray from reason without knowing it, because one is deprived of ideas, is to be an imbecile; to stray from reason knowingly, but reluctantly, because one is the slave of a violent passion, is to be weak: but to stray from it with confidence, and in the firm persuasion that one is following it, that, it seems to me, is what is called being mad. Such, at least, are those unfortunates whom we lock up, and who perhaps differ from the rest of men only in that their follies are of a less common species, and do not fit into the order of society.

After the War on Cancer

Modern cancer treatment brings less stigma but higher cost.

Capabilities of GPT-4 on Medical Challenge Problems

Large language models (LLMs) have demonstrated remarkable capabilities in natural language understanding and generation across various domains, including medicine. We present a comprehensive evaluation of GPT-4, a state-of-the-art LLM, on medical competency examinations and benchmark datasets. GPT-4 is a general-purpose model that is not specialized for medical problems through training or engineered to solve clinical tasks. Our analysis covers two sets of official practice materials for the United States Medical Licensing Examination (USMLE), a three-step examination program used to assess clinical competency and grant licensure in the United States. We also evaluate performance on the MultiMedQA suite of benchmark datasets. Beyond measuring model performance, experiments were conducted to investigate the influence of test questions containing both text and images on model performance, probe for memorization of content during training, and study calibration of the probabilities, which is of critical importance in high-stakes applications like medicine. Our results show that GPT-4, without any specialized prompt crafting, exceeds the passing score on USMLE by over 20 points and outperforms earlier general-purpose models (GPT-3.5) as well as models specifically fine-tuned on medical knowledge (Med-PaLM, a prompt-tuned version of Flan-PaLM 540B). In addition, GPT-4 is significantly better calibrated than GPT-3.5, demonstrating a much-improved ability to predict the likelihood that its answers are correct. We also explore the behavior of the model qualitatively by presenting a case study that shows the ability of GPT-4 to explain medical reasoning, personalize explanations to students, and interactively craft new counterfactual scenarios around a medical case. Implications of the findings are discussed for potential uses of GPT-4 in medical education, assessment, and clinical practice, with appropriate attention to challenges of accuracy and safety.
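The calibration the abstract mentions can be made concrete: a well-calibrated model's stated confidence should match its empirical accuracy. Below is a minimal sketch of one common metric, expected calibration error (ECE); the function name, binning scheme, and toy data are my own illustration, not the paper's actual methodology.

```python
# Expected calibration error (ECE): bin predictions by confidence, then
# compare each bin's average confidence with its empirical accuracy.
# Illustrative sketch only -- not taken from the GPT-4 paper.

def expected_calibration_error(confidences, correct, n_bins=10):
    """confidences: predicted probability of the chosen answer (0..1];
    correct: 1 if that answer was right, else 0."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i, c in enumerate(confidences) if lo < c <= hi]
        if not idx:
            continue
        avg_conf = sum(confidences[i] for i in idx) / len(idx)
        accuracy = sum(correct[i] for i in idx) / len(idx)
        # Weight each bin's confidence/accuracy gap by its share of samples.
        ece += (len(idx) / n) * abs(avg_conf - accuracy)
    return ece

# Toy example: the 90%-confident answers are all right, but the
# 60%-confident ones are right only half the time, so ECE is about 0.1.
print(expected_calibration_error([0.9, 0.9, 0.6, 0.6], [1, 1, 1, 0]))
```

A well-calibrated model drives this number toward zero: its 80%-confident answers are right about 80% of the time, which is the "ability to predict the likelihood that its answers are correct" the abstract describes.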

Here is the full paper by Harsha Nori, Nicholas King, Scott Mayer McKinney, Dean Carignan, and Eric Horvitz. Ho hum, people, ho hum!

Via Ethan Mollick.

The post Capabilities of GPT-4 on Medical Challenge Problems appeared first on Marginal REVOLUTION.

Inside the Secret Working Group That Helped Push Anti-Trans Laws Across the Country

Every day, anti-trans rhetoric is spreading and becoming more virulent. Conservative forces in statehouses across America are pushing bills that would strip trans people of rights, including access to vital medical care. In some places, these laws have already passed. This is all part of a concerted, coordinated effort, as Madison Pauly’s reporting shows. Pauly gained access to a trove of emails exchanged by a group of anti-trans advocates who workshop legislative bills, public messaging, and other aspects of their crusade:

They brainstormed responses to the argument that gender-affirming care reduces suicide — an assertion that is backed up by research. Peer-reviewed studies have repeatedly found that trans and nonbinary youth with access to gender-affirming care are significantly less likely to seriously consider suicide than those without such access. A larger analysis, using online survey data from over 11,000 trans and nonbinary youth, found that using gender-affirming hormonal therapy was associated with lower rates of both depression and suicidality. Yet one team member called the argument that gender-affirming care reduces suicide “abusive”; another argued it was a way for doctors to coerce parents to consent to gender-affirming care for their child.

Van Mol, the doctor, suggested Deutsch reply to the suicide prevention argument with a rebuttal published on a defunct anti-trans blog: “Why weren’t the 1950s a total blood bath for suicides if non-affirmation of everything is the fast train to offing one’s self?” Van Mol asked, paraphrasing the blog post. 

Another doctor in the working group, California endocrinologist Michael Laidlaw, had gained attention for his writing against gender-affirming care after parents at a charter school in his region raised complaints that they hadn’t been notified before kindergarteners were read a children’s book, I Am Jazz, about trans teenager Jazz Jennings. Last fall, when the state of Florida called on Laidlaw as an expert witness in a lawsuit over its anti-trans Medicaid policy, a federal judge concluded that he was “far off from the accepted view” on how to treat gender dysphoria, in part because Laidlaw had said he would refuse to use patients’ preferred pronouns. In his South Dakota testimony, Laidlaw compared gender-affirming care to Nazi experimentation and the Tuskegee Syphilis Study. In emails to Deutsch and the group, he railed against doctors who prescribe puberty blockers — which are used to delay unwanted physical changes in gender-diverse kids and give them more time to explore whether or how to transition — accusing them of “willfully harming” children, even if kids and their parents consent to treatment. “The physician is the criminal in these scenarios and must be prosecuted by the law,” he argued.

Again, Foucault, Kuhn, Carnap and Incommensurability

Despite the reassuring pleasure that historians of medicine may feel when they recognise in the great ledgers of confinement what they consider to be the timeless, familiar face of psychotic hallucinations, cognitive deficiencies, organic consequences or paranoid states, it is impossible to draw up a coherent nosological map from the descriptions that were used to confine the insane. The formulations that justify confinement are not presentiments of our diseases, but represent instead an experience of madness that occasionally intersects with our pathological analyses, but which could never coincide with them in any coherent manner. The following are some examples taken at random from entries on confinement registers for those of ‘unsound mind’: ‘obstinate plaintiff’, ‘has obsessive recourse to legal procedures’, ‘wicked cheat’, ‘man who spends days and nights deafening others with his songs and shocking their ears with horrible blasphemy’, ‘bill poster’, ‘great liar’, ‘gruff, sad, unquiet spirit’. There is little sense in wondering if such people were sick or not, and to what degree, and it is for psychiatrists to identify the paranoid in the ‘gruff’, or to diagnose a ‘deranged mind inventing its own devotion’ as a clear case of obsessional neurosis. What these formulae indicate are not so much sicknesses as forms of madness perceived as character faults taken to an extreme degree, as though in confinement the sensibility to madness was not autonomous, but linked to a moral order where it appeared merely as a disturbance. Reading through the descriptions next to the names on the register, one is transported back to the world of Brant and Erasmus, a world where madness leads the round of moral failings, the senseless dance of immoral lives.

And yet the experience is quite different. In 1704, an abbot named Bargedé was confined in Saint-Lazare. He was seventy years old, and he was locked up so that he might be ‘treated like the other insane’. His principal occupation was 

lending money at high interest, beyond the most outrageous, odious usury, for the benefit of the priesthood and the Church. He will neither repent from his excesses nor acknowledge that usury is a sin. He takes pride in his greed. Michel Foucault (1961) [2006] History of Madness, Translated by Jonathan Murphy and Jean Khalfa, pp. 132-133

In larger context, Foucault is describing how during the seventeenth and eighteenth centuries (the so-called 'classical age') a great number of people (Foucault suggests about 1% of the urban population) were locked up in a system of confinement orthogonal to the juridical system (even though such confinement was often practically indistinguishable from prison--both aimed at moral reform through work and sermons). This 'great confinement' included people with venereal disease, those who engaged in sodomy and libertine practices, as well as (inter alia) those who brought dishonor (and financial loss) to their families, alongside the mad and frenzied.

To the modern reader the population caught up in the 'great confinement' seems rather heterogeneous in character, but their commonality becomes visible, according to Foucault, when one realizes that it's (moral) disorder that they have in common from the perspective of classical learning. According to Foucault there is "no rigorous distinction between moral failings and madness." (p. 138) Foucault inscribes this category (moral disorder of the soul/will) into a history of 'Western unreason' that helps constitute (by way of negation) the history of early modern rationalism (with special mention of Descartes and Spinoza). Like a true Kantian, Foucault sees (theoretical) reason as shaped by a practical decision that is constitutive of the whole classical era (see especially p. 139). My present interest is not to relitigate the great Derrida-Foucault debate over this latter move, or Foucault's tendency to treat -- despite his nominalist sensibilities -- whole cultural eras as de facto organically closed systems (of the kind familiar from nineteenth century historiography).

My interest here is in the first two sentences of the quoted passage. It describes what Thomas Kuhn called 'incommensurability' in The Structure of Scientific Revolutions. Kuhn's Structure appeared in 1962, and initially there seems to have been no mutual influence. I don't want to make Foucault more precise than he is, but we can fruitfully suggest that for Foucault incommensurability involves the general inability to create a coherent mapping between two theoretical systems based on their purported descriptive content. I phrase it like that to capture Foucault's emphasis on 'descriptions' and to allow -- mindful of Earman and Fine ca 1977 -- that some isolated terms may well be so mapped. As an aside, I am not enough of a historian of medicine (or philosopher of psychology) to know whether nosological maps can be used for such an exercise. (It seems like a neat idea!)

So, Foucault is thinking about ruptures between different successive scientific cultures pretty much from the start of his academic writing (recall this post on the later The Order of Things). In fact, reading History of Madness after reading a lot of Foucault's other writings suggests a great deal of continuity in Foucault's thought--pretty much all the major themes of his later work are foreshadowed in it (which also helps explain why he often didn't have to start researching from scratch in later writings and lectures).

In fact, reading Foucault with Kuhn lurking in the background helps one see how important a kind of Kantianism is to Foucault's diagnosis of incommensurability. I quote another passage in the vicinity that I found illuminating:

The psychopathology of the nineteenth century (and perhaps our own too, even now) believes that it orients itself and takes its bearings in relation to a homo natura, or a normal man pre-existing all experience of mental illness. Such a man is in fact an invention, and if he is to be situated, it is not in a natural space, but in a system that identifies the socius to the subject of the law. Consequently a madman is not recognised as such because an illness has pushed him to the margins of normality, but because our culture situates him at the meeting point between the social decree of confinement and the juridical knowledge that evaluates the responsibility of individuals before the law. The ‘positive’ science of mental illness and the humanitarian sentiments that brought the mad back into the realm of the human were only possible once that synthesis had been solidly established. They could be said to form the concrete a priori of any psychopathology with scientific pretensions.--pp. 129-130

For Foucault, a concrete a priori is itself the effect of often indirect cultural construction or stabilization. In fact, for Foucault it tends to be an effect of quite large-scale and enduring ('solidly') social institutions (e.g., the law, penal/medical institutions) and material practices/norms. The discontinuities between concrete a prioris track what we may call scientific revolutions in virtue of the fact that systems of knowledge before and after a shift in a concrete a priori cannot possibly be tracking the same system of 'objects' (or 'empirical basis').

I don't mean to suggest that for Foucault a system of knowledge cannot itself be a source/cause of what he calls a 'synthesis' that makes a concrete a priori possible. That possibility is explicitly explored in (his discussion of Adam Smith in) his The Order of Things. But on the whole a system of knowledge tends to lag the major cultural shifts that produce a concrete a priori.

Let me wrap up. A full generation after Structure appeared there was a belated and at the time revisionary realization that Structure could be read as a kind of neo-Kantian text and, as such, was actually not very far removed from Carnap's focus on frameworks and other projects in the vicinity that were committed to various kinds of relativized or constitutive a prioris. This literature started, I think, with Reisch 1991. (My own scholarship has explored [see here; here] the surprising resonances between Kuhn's Structure and the self-conception of economists and the sociology of Talcott Parsons at the start of the twentieth century, and the peculiar fact that Kuhn's Structure was foreshadowed in Adam Smith's philosophy of science.) I mention Carnap explicitly because, not unlike Carnap [see Stone; Sachs, and the literature it inspired], Foucault does not hide his debts to Nietzsche.

So here's my hypothesis and diagnosis: it would have been much more natural to read Structure as a neo/soft/extended-Kantian text if analytic philosophers had not cut themselves off from developments in Paris. While I do not want to ignore major differences of emphasis and scope between Kuhn and Foucault, their work of 1961 and 1962 has a great deal of family resemblance despite non-trivial differences in intellectual milieus. I actually think this commonality is not an effect of a kind of zeitgeist or the existence of an episteme--as I suggested in this post, it seems to be a natural effect of starting from a broadly domesticated Kantianism. But having said that, the fact that it was initially so difficult to discern the neo-Kantian themes in Kuhn also suggests that not reading the French developments -- by treating 'continental thought' as instances of unreason (which is Foucault's great theme) -- created a kind of Kuhn loss in the present within analytic philosophy.


3D-printed insole measures foot pressure right in shoe

The insole has golden wires running through it.

A new 3D-printed customized insole uses integrated sensors to measure the pressure on the sole of the foot directly in the shoe during various activities.

In elite sports, fractions of a second sometimes make the difference between victory and defeat. To optimize their performance, athletes use custom-made insoles. But people with musculoskeletal pain also turn to insoles to combat their discomfort.

Before specialists can accurately fit such insoles, they must first create a pressure profile of the feet. To this end, athletes or patients have to walk barefoot over pressure-sensitive mats, where they leave their individual footprints.

Based on this pressure profile, orthopedists then create customized insoles by hand. The problem with this approach is that optimizations and adjustments take time. Another disadvantage is that the pressure-sensitive mats allow measurements only in a confined space, but not during workouts or outdoor activities.

The new invention, described in the journal Scientific Reports, addresses these issues.

“You can tell from the pressure patterns detected whether someone is walking, running, climbing stairs, or even carrying a heavy load on their back—in which case the pressure shifts more to the heel,” explains co-project leader Gilberto Siqueira, senior assistant at Empa and at the ETH Zurich Complex Materials Laboratory. This makes tedious mat tests a thing of the past.

Easy to use, easy to make

These insoles aren’t just easy to use, they’re also easy to make. They are produced in just one step—including the integrated sensors and conductors—using a single 3D printer, called an extruder.

For printing, the researchers use various inks developed specifically for this application. As the basis for the insole, the materials scientists use a mixture of silicone and cellulose nanoparticles.

Next, they print the conductors on this first layer using a conductive ink containing silver. They then print the sensors on the conductors in individual places using ink that contains carbon black. The sensors aren’t distributed at random: they are placed exactly where the foot sole pressure is greatest. To protect the sensors and conductors, the researchers coat them with another layer of silicone.

An initial difficulty was to achieve good adhesion between the different material layers. The researchers resolved this by treating the surface of the silicone layers with hot plasma.

As sensors for measuring normal and shear forces, they use piezo components, which convert mechanical pressure into electrical signals. In addition, the researchers have built an interface into the sole for reading out the generated data.

Next step? Go wireless

Tests showed the researchers that the additively manufactured insole works well.

“So with data analysis, we can actually identify different activities based on which sensors responded and how strong that response was,” Siqueira says.
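The kind of analysis Siqueira describes -- inferring the activity from which sensors responded and how strongly -- could in its simplest form be a nearest-profile match over the pressure readings. The following is a hypothetical sketch; the sensor layout, activity profiles, and numbers are invented for illustration, since the article does not detail the actual method.

```python
# Hypothetical nearest-profile activity classifier for insole pressure data.
# Sensor layout (heel, midfoot, toe) and all values are invented examples.

# Mean normalized pressure profile per activity.
PROFILES = {
    "walking":       (0.5, 0.3, 0.5),
    "running":       (0.8, 0.4, 0.9),
    "carrying_load": (0.9, 0.5, 0.4),  # heavy load shifts pressure to the heel
}

def classify(reading):
    """Return the activity whose profile is closest in squared distance."""
    def dist(profile):
        return sum((r - p) ** 2 for r, p in zip(reading, profile))
    return min(PROFILES, key=lambda activity: dist(PROFILES[activity]))

# A heel-heavy reading is matched to load carrying.
print(classify((0.85, 0.45, 0.45)))  # → carrying_load
```

A real pipeline would likely use time-series features and a trained classifier, but the principle -- matching a pressure pattern against known activity signatures -- is the same.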

At the moment, Siqueira and his colleagues still need a cable connection to read out the data; to this end, they have installed a contact on the side of the insole.

One of the next development steps, he says, will be to create a wireless connection. “However, reading out the data hasn’t been the main focus of our work so far.”

In the future, 3D-printed insoles with integrated sensors could be used by athletes or in physiotherapy, for example to measure training or therapy progress. Based on such measurement data, training plans can then be adjusted and permanent shoe insoles with different hard and soft zones can be produced using 3D printing.

Although Siqueira believes there is strong market potential for their product, especially in elite sports, his team hasn’t yet taken any steps towards commercialization.

Additional coauthors are from Lausanne University Hospital, the orthopedics company Numo, and ETH Zurich.

The ETH Domain’s Strategic Focus Areas program funded the project.

Source: ETH Zurich

The post 3D-printed insole measures foot pressure right in shoe appeared first on Futurity.

Mostly good news: Covid Diaries

It's been about ten weeks since I last wrote an entry in my covid diary. (For my official "covid diaries" see here; here; here; here; here; here; here; here; here; here; here; here; here; here; here; here; here; here; here; here; here; here; here; here; here; here; here; here; here; here; here; here; here; here; here; here; here; here; here; here; here; and here.) That's the longest interval since I first started them. This is primarily due to the fact that there is not much new to report, and that's good news.

When I plan my days carefully with breaks between socializing, go to bed early, and take my anti-inflammatories as needed, I have a very decent quality of life. I can hang out in public even in contexts that were cognitively quite challenging about half a year ago. I have been teaching my giant introductory lecture course (with 561 registered students) with little fallout. After lecture, I still have trouble turning my brain off and sleeping normally (I have had a few midnight headaches), but by mid-morning the following day I tend to be normal again. In general, I still need more melatonin than I would like in order to sleep through the night. (I have no trouble falling asleep thanks to my sister's meditation techniques. But I often wake up a lot during the night.)

This past week, I was in Singapore (don't cry for me), and while the jet-lag and excitement impacted me, I was socially 'on' for most of the time without any noticeable effects. So, while I still often find that after two hours of socializing in public I need some rest, this is by no means always so. 

Most of my long covid limitations are invisible to outsiders now. However, I am still terrible at cognitive multi-tasking (e.g., I can't really eavesdrop), and I am pretty sure my memory capacity for names has deteriorated. More subtly, I find it difficult to read heavy-duty metaphysics (I catch myself skipping sentences) or certain kinds of 'serious' novels (I get bored easily). I also notice that I need to check the grammar more regularly in my writing (and that I often write words that sound like the word I originally intended). But I have become much more disciplined about avoiding cluttering my schedule and about not multi-tasking in the moment or even, more abstractly, within the same period. So, for example, in periods when I teach I try not to fuss over research. Consequently, I am much more present when I do things (and so more skilled at them).

So, all in all, I am fairly optimistic that things are heading in the right direction. It's so unexpected that I still find myself feeling that each day is a bonus day. As a consequence of this, and of a few years of forced reflection on my life, I am also much more at ease with letting go of things I was once very ambitious to acquire. It's probably a sign of middle age, too. But that kind of glass-half-full 'normality' is quite fine.


People Were Unwittingly Implanted With Fake Devices In Medical Scam, FBI Alleges

By: BeauHD
Chronic pain patients were implanted with "dummy" pieces of plastic and told it would ease their pain, according to an indictment charging the former CEO of the firm that made the fake devices with fraud. Motherboard reports: Laura Perryman, the former CEO of Stimwave LLC, was arrested in Florida on Thursday. According to an FBI press release, Perryman was indicted "in connection with a scheme to create and sell a non-functioning dummy medical device for implantation into patients suffering from chronic pain, resulting in millions of dollars in losses to federal healthcare programs." According to the indictment, patients underwent unnecessary implanting procedures as a result of the fraud. Perryman was charged with one count of conspiracy to commit wire fraud and health care fraud, and one count of healthcare fraud. Stimwave received FDA approval in 2014, according to Engadget, and was positioned as an alternative to opioids for pain relief. The Stimwave "Pink Stylet" system consisted of an implantable electrode array for stimulating the target nerve, a battery worn externally that powered it, and a separate, 9-inch long implantable receiver. When doctors told Stimwave that the long receiver was difficult to place in some patients, Perryman allegedly created the "White Stylet," a receiver that doctors could cut to be smaller and easier to implant -- but was actually just a piece of plastic that did nothing. "To perpetuate the lie that the White Stylet was functional, Perryman oversaw training that suggested to doctors that the White Stylet was a 'receiver,' when, in fact, it was made entirely of plastic, contained no copper, and therefore had no conductivity," the FBI stated. "In addition, Perryman directed other Stimwave employees to vouch for the efficacy of the White Stylet, when she knew that the White Stylet was actually non-functional." 
Stimwave charged doctors and medical providers approximately $16,000 for the device, which medical insurance providers, including Medicare, would reimburse the doctors' offices for.

Read more of this story at Slashdot.

Can you really laugh yourself to death?

In 1975, Alex Mitchell, 50, was watching the UK television comedy The Goodies when he burst into such hard laughter that, well, he died laughing. His heart had failed. To be fair, he didn't really die of laughter.

"Laughing can increase your intrathoracic pressure, and if you have an aortic aneurysm, that pressure can be transmitted into your vascular system, and it would rupture," Duke University School of Medicine cardiologist Jorge Antonio Gutierrez told Gizmodo in 2019. — Read the rest

Restrictions on state public health authorities

When the next pandemic sweeps the United States, health officials in Ohio won’t be able to shutter businesses or schools, even if they become epicenters of outbreaks. Nor will they be empowered to force Ohioans who have been exposed to go into quarantine. State officials in North Dakota are barred from directing people to wear masks to slow the spread. Not even the president can force federal agencies to issue vaccine or testing mandates to thwart its march.

Conservative and libertarian forces have defanged much of the nation’s public health system through legislation and litigation as the world staggers into the fourth year of covid.

At least 30 states, nearly all led by Republican legislatures, have passed laws since 2020 that limit public health authority, according to a Washington Post analysis of laws collected by Kaiser Health News and the Associated Press as well as the Association of State and Territorial Health Officials and the Center for Public Health Law Research at Temple University.

Health officials and governors in more than half the country are now restricted from issuing mask mandates, school closures, and other protective measures or must seek permission from their state legislatures before renewing emergency orders, the analysis showed.

Here is more from the Washington Post.

The post Restrictions on state public health authorities appeared first on Marginal REVOLUTION.

The Assumptions Doctors Make

Learning to be a physician, I realized over and over again that I was seeing only part of the picture.

On graduate student mental health (from my email)

…we often discuss mental health in terms of treatment and selection effects. While more causal inference is needed, I believe some points are often overlooked.

Personality plays a role: Many in the field can be characterized as overachievers. This behavior can easily turn pathological if it is driven by a fear of failure or a sense that self-worth is contingent on competence, especially in a competitive academic environment. Exit may be psychologically very difficult if your self-worth is on the line.

Policies within graduate programs exacerbate the issue: In my program, if a student drops out, the University will not award them a master’s degree if they already have a similar degree from another university. This policy discourages students from exiting and may keep them in situations that are not beneficial for their mental health.

Economists tend to overrate the effectiveness of educational signals in selecting prospective grad students: Interviews are often not a part of the selection process, which I believe is a missed opportunity to assess a student’s psychological readiness for a PhD program. For many far less stressful jobs psychological testing is standard. In my experience, I only received interviews from programs that had already accepted me (interviews meant to convince me to accept their offers).

From anonymous.

The post On graduate student mental health (from my email) appeared first on Marginal REVOLUTION.
