Standing on the shoulders of Chinese (Scientific) Giants – Evidence for a citation discount for Chinese Researchers

By: Taster
Chinese researchers are increasingly leading scientific research, yet their contributions are not fully recognized, notably by US researchers. Shumin Qiu, Claudia Steinwender and Pierre Azoulay discuss the reasons why articles written by Chinese academics receive significantly fewer citations from US researchers than those written by non-Chinese researchers. Following China’s unprecedented rise as an exporter of goods, …

Norms for Publishing Work Created with AI

What should our norms be regarding the publishing of philosophical work created with the help of large language models (LLMs) like ChatGPT or other forms of artificial intelligence?

[Manipulation of M.C. Escher’s “Drawing Hands” by J. Weinberg]

In a recent article, the editors of Nature put forward their position, which they think is likely to be adopted by other journals:

First, no LLM tool will be accepted as a credited author on a research paper. That is because any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.

Second, researchers using LLM tools should document this use in the methods or acknowledgements sections. If a paper does not include these sections, the introduction or another appropriate section can be used to document the use of the LLM.

A few comments about these:

a. It makes sense not to ban use of the technology. Doing so would be ineffective, would incentivize hiding its use, and would stand in opposition to the development of new, effective, and ethical uses of the technology in research.

b. The requirement to document how LLMs were used in the research and writing is reasonable but vague. Perhaps it should be supplemented with more specific guidelines, or with examples of the variety of ways in which an LLM might be used and of the proper way to acknowledge each of those uses.

c. The requirements say nothing about conflicts of interest. The creators of LLMs are themselves corporations with their own interests to pursue. (OpenAI, the creator of ChatGPT, for example, has been bankrolled by Elon Musk, Sam Altman, Peter Thiel, Reid Hoffman, and other individuals, along with companies like Microsoft, Amazon Web Services, and Infosys.) Further, LLMs are hardly “neutral” tools. It’s not just that they learn from and echo existing biases in the materials on which they’re trained; their creators can also incorporate constraints and tendencies into their functions, affecting the outputs they produce. Just as we would expect a researcher to disclose any funding that has an appearance of conflict of interest, ought we to expect researchers to disclose any apparent conflicts of interest concerning the owners of the LLMs or AI they use?

Readers are of course welcome to share their thoughts.

One question to take up, of course, is what publishing norms philosophy journals should adopt in light of the continued development of LLMs and AI tools. Are there distinctive concerns for philosophical work? Would some variation on Nature’s approach be sufficient?

Discussion welcome.