FreshRSS

Hippo Reads

Why Medical Advice Is Important

By: hipporeads — June 15th 2023 at 15:13

It’s important to want to be healthy and to do what we can to achieve good health. But as much as we might think we know a lot, and are happy to eat healthily and exercise as much as possible, for many of us that is where it ends. There is more we can do, and this is why it’s so important to ask for medical advice. Read on to find out why medical advice is a crucial component of a healthy lifestyle.

Expert Advice

Medical professionals in every field are experts at what they do, and they have far more knowledge, skill, and experience than any layperson. Although you can do a lot for your health just by applying some common sense, there is so much that most people don’t know that you might sometimes be doing more harm than good.

Therefore, it’s always wise to get plenty of advice. Ask as many questions as you need to, whether it’s about something specific, like chickenpox vaccinations, or something more general, like how much exercise you should do. This way, you’ll get the right advice that you know you can follow without any problems, boosting your health to the next level.

Preventative Care

Another hugely important reason to seek medical advice whenever you need it is prevention. If you feel unwell, or you have an issue you aren’t sure about that could be problematic for your health, the best thing you can do is seek medical advice. The alternative is trying to deal with it yourself (which could make things worse or make you sick in other ways) or doing nothing at all. In some cases, that could mean developing a condition that has no cure and that is life-limiting.

When you know there is an issue, or even if you’re not sure, getting medical advice is crucial. Your chosen healthcare professional can examine you, run tests, and get answers so they can come up with a healthcare plan before the issue gets too bad. This could stop you from getting very sick and might even save your life. Even if you think it could be nothing, asking for medical advice is still worthwhile, just to be sure.

Debunking Health Myths

We have a lot of information at our fingertips these days, and a lot of that comes from the internet. The problem is, it’s hard to know what’s true and what’s just a myth when it comes to health advice online (or anything else, for that matter). Or perhaps there have been stories about how to deal with this health issue or that one handed down through your family, and you just don’t know if it’s accurate or not.

When you go for medical advice from an expert, you’ll get the facts and be able to determine your next course of action with confidence, leading to much better health overall.

Image credit: Sander Sammy via Unsplash

OUPblog » Language

What can Large Language Models offer to linguists?

By: Becky Clifford — June 13th 2023 at 09:30


It is fair to say that the field of linguistics is hardly ever in the news. That is not the case for language itself and all things to do with language—from word of the year announcements to countless discussions about grammar peeves, correct spelling, or writing style. This has changed somewhat recently with the proliferation of Large Language Models (LLMs), and in particular since the release of OpenAI’s ChatGPT, the best-known language model. But does the recent, impressive performance of LLMs have any repercussions for the way in which linguists carry out their work? And what is a Language Model anyway?

At heart, all an LLM does is predict the next word given a string of words as context—that is, it predicts the most likely next word. This is of course not what a user experiences when dealing with language models such as ChatGPT, because ChatGPT is more properly described as a “dialogue management system”: an AI “assistant”, or chatbot, that translates a user’s questions (or “prompts”) into inputs that the underlying LLM can understand (the latest version of OpenAI’s LLM is a fine-tuned version of GPT-4).

“At heart, all an LLM does is predict the next word given a string of words as a context.”

An LLM, after all, is nothing more than a mathematical model, in the form of a neural network with input layers, output layers, and many deep layers in between, plus a set of trained “parameters.” As the computer scientist Murray Shanahan has put it in a recent paper, when one asks a chatbot such as ChatGPT who was the first person to walk on the Moon, what the LLM is fed is something along the lines of:

Given the statistical distribution of words in the vast public corpus of (English) text, what word is most likely to follow the sequence “The first person to walk on the Moon was”?

That is, given an input such as the first person to walk on the Moon was, the LLM returns the most likely word to follow this string. How have LLMs learned to do this? As mentioned, LLMs calculate the probability of the next word given a string of words, and they do so by representing these words as vectors of values from which the probability of each word can be calculated; whole sentences can likewise be represented as vectors of values. Since 2017, most LLMs have used “transformers,” which allow the models to carry out matrix calculations over these vectors, and the more transformer layers are employed, the more accurate the predictions—GPT-3 has some 96 layers of such transformers.
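To make the core idea concrete, here is a minimal sketch in Python of next-word prediction over a toy vocabulary. The candidate words and their scores are invented for illustration; a real LLM computes such scores with a deep transformer network over vector representations, not a hand-written table.

```python
import math

# Toy next-word predictor: the model assigns a score to every candidate
# word given the context, then the most probable word wins.
context = "The first person to walk on the Moon was"

# Hypothetical raw scores ("logits") for a few candidate next words.
logits = {"Armstrong": 9.1, "Aldrin": 5.3, "Gagarin": 2.7, "banana": -4.0}

# Softmax turns the scores into a probability distribution.
total = sum(math.exp(v) for v in logits.values())
probs = {word: math.exp(v) / total for word, v in logits.items()}

# The model "predicts" by choosing the most probable continuation.
prediction = max(probs, key=probs.get)
print(f"{context!r} -> {prediction} (p = {probs[prediction]:.3f})")
```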

The illusion that one is having a conversation with a rational agent (for it is an illusion, after all) is the result of embedding an LLM in a larger computer system that includes background “prefixes” to coax the system into producing behaviour that feels like a conversation (the prefixes include templates of what a conversation looks like). But what the LLM itself does is generate sequences of words that are statistically likely to follow from a specific prompt.
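A rough sketch of that embedding, again in Python. The prefix template and the complete() function are hypothetical stand-ins, not any real system’s API; the point is simply that the bare model only ever sees one long string and continues it.

```python
# Conversation-shaped prefix prepended to every user message (assumed
# template, for illustration only).
DIALOGUE_PREFIX = (
    "The following is a conversation between a helpful assistant and a user.\n"
    "Assistant: Hello! How can I help?\n"
)

def complete(text: str) -> str:
    """Stand-in for an LLM call: returns a likely continuation of `text`."""
    return "Neil Armstrong, in July 1969."  # canned answer for the sketch

def chat(user_message: str) -> str:
    # The user's question is embedded in the conversation-shaped prefix,
    # so the statistically likely continuation *looks like* a reply.
    prompt = f"{DIALOGUE_PREFIX}User: {user_message}\nAssistant:"
    return complete(prompt)

print(chat("Who was the first person to walk on the Moon?"))
```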

It is through the use of prompt prefixes that LLMs can be coaxed into “performing” various tasks beyond dialoguing, such as reasoning or, according to some linguists and cognitive scientists, learning the hierarchical structures of a language (this literature is ever growing). But the model itself remains a sequence predictor: it does not manipulate the typical structured representations of a language directly, and it has no understanding of what a word or a sentence means—and meaning is a crucial property of language.

An LLM seems to produce sentences and text the way a human does—it seems to have mastered the rules of English grammar—but it produces those sentences based on probabilities rather than on the meanings and thoughts it intends to express, which is how a person produces language. So what is language, such that an LLM could learn it?

“An LLM seems to produce sentences like a human does but it produces them based on probabilities rather than on meaning.”

A typical characterisation of language is as a system of communication (or, for some linguists, as a system for having thoughts), and such a system would include a vocabulary (the words of a language) and a grammar. By a “grammar,” most linguists have in mind various components, at the very least syntax, semantics, and phonetics/phonology. In fact, a classic way to describe a language in linguistics is as a system that connects sound (or other means of producing language, such as hand gestures or signs) and meaning, with the connection between sound and meaning mediated by syntax. As such, every sentence of a language is the result of all these components—phonology, semantics, and syntax—aligning with each other appropriately, and I do not know of any linguistic theory for which this is not true, regardless of differences in focus or emphasis.

What this means for the question of what LLMs can offer linguistics, and linguists, revolves around the issue of what exactly LLMs have learned to begin with. They haven’t, as a matter of fact, learned a natural language at all, for they know nothing about phonology or meaning; what they have learned is the statistical distribution of the words of the large texts they have been fed during training, and this is a rather different matter.

As has been the case in the past with other approaches in computational linguistics and natural language processing, LLMs will certainly flourish within these subdisciplines of linguistics, but the daily work of a regular linguist is not going to change much any time soon. Some linguists do study the properties of texts, but this is not the most common undertaking in linguistics. Having said that, how about the opposite question: does a run-of-the-mill linguist have much to offer to LLMs and chatbots at all?   

Featured image: Google DeepMind via Unsplash (public domain)

OUPblog - Academic insights for the thinking world.

Philosophy Archives | OUPblog

Elon Musk, Mars, and bioethics: is sending astronauts into space ethical?

By: Becky Clifford — June 12th 2023 at 09:30
"Elon Musk, Mars, and bioethics: is ending astronauts into space ethical?" by Konrad Szocik on the OUP blog


The recent crash of the largest-ever space rocket, Starship, developed by Elon Musk’s SpaceX company, has somewhat dampened optimism about the human mission to Mars being prepared for the next few years. Against the background of this disaster, it is worth raising the issue of the safety of future participants in long-term space missions, especially missions to Mars. And it is not just about safety from disasters like the one that befell Musk’s rocket. Protection from the negative effects of prolonged flight in zero gravity and from cosmic radiation, as well as guaranteeing sufficiently high crew productivity over the course of a multi-year mission, also play an important role.

Fortunately, no one was killed in the aforementioned crash, as it was an uncrewed test rocket. However, past disasters in which astronauts died, such as the Space Shuttle Challenger and Space Shuttle Columbia disasters, remind us that it is the seemingly very small details that determine life and death. So far, 15 astronauts and 4 cosmonauts have died in space flights, and 11 more have died during testing and training on Earth. It is worth mentioning that space flights are peaceful missions, not military operations, and that they are carried out relatively infrequently and by a relatively small number of people.

It is also worth considering the longer and more complex human missions planned for the near future, such as the mission to Mars. The flight itself, which is expected to last several months, is quite a challenge, and disaster can strike during takeoff from Earth, during the landing on Mars, and on the way back to Earth. And then there are further risks that await astronauts in space.

The first is exposure to galactic cosmic radiation and solar energetic particle events, especially during interplanetary flight, when the crew is protected neither by Earth’s magnetic field nor by a possible shelter on Mars. Protection from cosmic radiation for travel to Mars is a major challenge, and 100% effective protective measures are still lacking. Another challenge is prolonged exposure to zero gravity during the flight, followed by altered gravity on Mars; bone loss and muscle atrophy are the main, but not the only, negative effects of these states. Finally, it is impossible to ignore the importance of psychological factors related to stress, isolation, confinement in a small enclosed space, and distance from Earth.

A human mission to Mars, which could take about three years, brings with it a new type of danger not known from the previous history of human space exploration. In addition to the aforementioned amplified impact of factors already known—namely microgravity, cosmic radiation, and isolation—entirely new risk factors are emerging. One of them is the impossibility of evacuating astronauts in need back to Earth, which is possible on missions to the International Space Station. It seems that even the best-equipped and best-trained crew may not be able to guarantee adequate assistance to an injured or ill astronaut, which could lead to her death—assuming that care on Earth would have guaranteed her survival and recovery. Another problem is the delay in communication, which will reach tens of minutes between Earth and Mars. This situation will increase both the crew’s autonomy and their responsibility. Wrong decisions, made under conditions of uncertainty, can have negative consequences not only for health and life but also for the entire mission.
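A back-of-the-envelope calculation makes the scale of that delay concrete. The Earth-Mars distances below are approximate orbital figures, not numbers from the article:

```python
# Rough one-way light-travel times between Earth and Mars.
SPEED_OF_LIGHT_KM_S = 299_792  # km per second

distances_km = {
    "closest approach": 54.6e6,    # ~54.6 million km
    "farthest separation": 401e6,  # ~401 million km
}

for label, d in distances_km.items():
    one_way_min = d / SPEED_OF_LIGHT_KM_S / 60
    print(f"{label}: one-way delay ≈ {one_way_min:.1f} min, "
          f"round trip ≈ {2 * one_way_min:.1f} min")

# Roughly 3 minutes one way at closest approach and about 22 minutes at
# farthest separation, i.e. up to ~45 minutes between a question and its
# answer: real-time guidance from Earth is simply not possible.
```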

“It is worth raising the question of the ethicality of the decision to send humans into such a dangerous environment.”

Thus, we can see that a future human mission to Mars will be very dangerous, both as a result of factors already known but intensified, as well as new risk factors. It is worth raising the question of the ethicality of the decision to send humans into such a dangerous environment. The ethical assessment will depend both on the effectiveness of available countermeasures against harmful factors in space and also on the desirability and justification for the space missions themselves. 

Military ethics and bioethics may provide some analogy here. In civilian ethics and bioethics, we do not accept a way of thinking and acting that would mandate the subordination of the welfare, rights, and health of the individual to the interests of the group. In military ethics, however, this way of thinking is accepted, at least formally, in the name of a higher good. Thus, if the mission to Mars is a civilian mission, carried out on the basis of values inherent in civilian ethics and bioethics rather than military ethics, it may be difficult to justify exposing astronauts to serious risks of death, accident, and disease.

One alternative may be to significantly postpone the mission until breakthrough advances in space technology and medicine can eliminate or significantly reduce the aforementioned risk factors. Another alternative may be to try to improve astronauts through biomedical human enhancements. Just as in the army there are known methods of improving the performance of soldiers through pharmacological means, analogous methods could be applied to future participants in a mission to Mars. Perhaps more radical, and thus controversial, methods such as gene editing would be effective, assuming that gene editing of selected genes can enhance resistance to selected risk factors in space. 

But the idea of genetically modifying astronauts, however commonsensical it might otherwise seem given the cost of such a mission and the fact that the astronauts sent to Mars would likely be seen as representatives of a great effort of all humanity, raises questions about the justification for the mission itself. What do the organizers of a mission to Mars expect to achieve? Among the goals traditionally mentioned are the scientific merits of such a mission, followed by possible commercial applications for the future. Philosophers, as well as researchers of global and existential catastrophes, often discuss the concept of a space refuge, in which the survival of the human species after a global catastrophe on Earth would be possible only by settling somewhere beyond Earth. However, it seems that the real goals in our non-ideal society will be political and military.

OUPblog - Academic insights for the thinking world.

Philosophy Archives | OUPblog

Digital dilemmas: feminism, ethics, and the cultural implications of AI [podcast]

By: Becky Clifford — April 25th 2023 at 09:30


Skynet. HAL 9000. Ultron. The Matrix. Fictional depictions of artificial intelligences have played a major role in Western pop culture for decades. While nowhere near that nefarious or powerful, real AI has been making incredible strides and, in 2023, has been a big topic of conversation in the news, with the rapid development of new technologies, the use of AI-generated images, and AI chatbots such as ChatGPT becoming freely accessible to the general public.

On today’s episode, we welcomed Dr Kerry McInerney and Dr Eleanor Drage, editors of Feminist AI: Critical Perspectives on Algorithms, Data, and Intelligent Machines, and then Dr Kanta Dihal, co-editor of Imagining AI: How the World Sees Intelligent Machines, to discuss how AI can be influenced by culture, feminism, and Western narratives defined by popular TV shows and films. Should AI be accessible to all? How does gender influence the way AI is made? And most importantly, what are the hopes and fears for the future of AI?

Check out Episode 82 of The Oxford Comment and subscribe to The Oxford Comment podcast through your favourite podcast app to listen to the latest insights from our expert authors.

Recommended reading

Look out for Feminist AI: Critical Perspectives on Algorithms, Data, and Intelligent Machines, edited by Jude Browne, Stephen Cave, Eleanor Drage, and Kerry McInerney, which publishes in the UK in August 2023 and in the US in October 2023. 

If you want to hear more from Dr Eleanor Drage and Dr Kerry McInerney, you can listen to their podcast: The Good Robot Podcast on Gender, Feminism and Technology.

In May 2023, the Open Access title Imagining AI: How the World Sees Intelligent Machines, edited by Stephen Cave and Kanta Dihal, publishes in the UK; it publishes in the US in July 2023.

You may also be interested in AI Narratives: A History of Imaginative Thinking about Intelligent Machines, edited by Stephen Cave, Kanta Dihal, and Sarah Dillon, which looks both at classic narratives of intelligent machines, up to the modern age, and at contemporary narratives.

You can read the following two chapters from AI Narratives for free until 31 May:

Other relevant book titles include: 

You may also be interested in the following journal articles: 

Featured image: ChatGPT homepage by Jonathan Kemper, CC0 via Unsplash.

OUPblog - Academic insights for the thinking world.
