
Philosophy Journal Insight Project

Sam Andrews (University of Birmingham) writes in:

I lead the Philosophy Journal Insight Project [pjip.carrd.co] and believe that it might be of interest to readers at the Philosophers' Cocoon.

The main resource of the project is a spreadsheet that provides a comprehensive overview of around 50 philosophy journals. It contains:

1) Standardisation of key information about journal submissions: word counts, peer-review anonymity, open-access status, etc.

2) A collection of journal rankings from blogs and ranking sites: Leiter Rankings, SJR Rankings, SNIP Rankings, etc.

3) Estimates drawn from journals and APA surveys: acceptance rates, chance of receiving comments, average days to a desk rejection, average days in external review, etc.

4) A compilation of various impact statistics: total citations, CiteScore percentile, etc.

The site also contains a resources section that links to various places relevant to journal submission.

This is a great service to the profession. Do check it out!

Incorporating your previous work as assumptions in a paper without violating anonymity?

In our newest "how can we help you?" thread, a reader asks:

Let's say I'm working on topic X. I have a novel interpretation of Y, which is related to X, and I published a paper defending this interpretation of Y. In my new paper on X, I want to assume this interpretation of Y. How to do this without violating anonymity? Here are my thoughts:

- I can just cite myself as a third party, explain the view briefly, and then say that I will assume this view. But I think this approach would not realistically protect anonymity since I'm a very early career person, and no one is going to assume my views except for me.
- I can briefly explain the interpretation without citing myself. But this raises worries like "this interpretation needs much more work to get off the ground" or claims like "this view is already defended in an earlier paper that the author isn't aware of." So honestly, I don't know what to do.

How do you incorporate your published work into a new paper without violating the rules of anonymized peer review?

These are good questions--in fact, it's an issue that I've run into many times myself, both as an early-career scholar and now as a mid-career scholar. While I've heard from many that it's best to cite oneself in the third person to preserve anonymity, this doesn't help all that much in a paper that primarily builds on work one has published previously.

Another reader submitted the following reply:

I think you should worry about what you assume in a paper if, as you note, "no one is going to assume my views except for me". Assumptions in philosophical papers are supposed to be widely accepted (hence, widely held). Alternatively, you can just say, I will assume "...". But if the assumption is not granted by the referee, then the paper is unlikely to be accepted.

I don't think this is exactly right. Assumptions don't have to be widely held to be legitimately invoked in a philosophical argument; what they need is to be defended. But still, the practical problem here is real: if you're the only one who has defended the relevant assumption(s), then, particularly if you're early-career, any referee is likely to suspect that you're the author of the previous paper defending them.

So, what to do? Do any readers have any helpful tips? 

Rejection Rates Should Not Be a Measure of Journal Quality (guest post)

“If philosophy relies too heavily on rejection rates as a measure for journal quality or prestige, we run the risk of further degrading the quality of peer review.”

In the following post, Toby Handfield, Professor of Philosophy at Monash University, and Kevin Zollman, Professor of Philosophy and Social and Decision Sciences at Carnegie Mellon University, explain why they believe the common practice of using journal rejection rates as a proxy for journal quality is bad.

This is the second in a series of weekly guest posts by different authors at Daily Nous this summer.


 

[Mel Bochner, “Counting Alternatives: The Wittgenstein Illustrations” (selections)]

Rejection Rates Should Not Be a Measure of Journal Quality
by Toby Handfield and Kevin Zollman

Ask any philosopher about the state of publishing in academic philosophy and they will complain. Near the top of the list will be the quality of reviews (they’re poor) and rejection rates (they’re high). Indeed, philosophy does have extremely high rejection rates relative to other fields. It’s extremely hard to understand why we have such high rejection rates. Perhaps there is simply more low-quality work in philosophy than other fields. Or, perhaps, rejection rates are themselves something that philosophy journals strive to maintain. Many journals strive to publish only the very best work within their purview, and perhaps they use their rejection rates to show themselves that they are succeeding.

Like many fields, philosophy also has an implicit hierarchy of journals. Of course, people disagree at the margins, but there seems to be widespread agreement among anglophone philosophers (at least) about what counts as a top 5 or top 10 journal. Looking at some (noisy) data about rejection rates, it does appear that the most highly regarded journals have high rejection rates. So, while we complain about rejection rates, we also seem to—directly or indirectly—reward journals that reject often.

It is quite natural to use rejection rates as a kind of proxy for the quality of the journal, especially in a field like philosophy where other qualitative and quantitative measures of quality are somewhat unreliable. We think it is quite common for philosophers to use the rejection rates of journals as a proxy for paper quality when thinking about hiring, promotion, and tenure. It’s impressive when a graduate student has published in The Philosophical Review, in large part because The Philosophical Review rejects so many papers. Rejection rates featured prominently—among many other things—in the recent controversy surrounding the Journal of Political Philosophy.

We, along with co-author Julian García, argue that this might be a dangerous mistake. (This paper is forthcoming in Philosophy of Science—a journal that, we feel obligated to point out, has a high rejection rate.) Our basic argument is that as journals become implicitly or explicitly judged by their rejection rates, the quality of peer review will go down, thus making journals worse. We do so by using a formal model, but the basic idea is not hard to understand.

We start by asking a very basic question: what is it that a journal is striving to achieve? We consider two alternatives: (1) that the journal is trying to maximize the average quality of its published papers or (2) that the journal is trying to maximize its rejection rate. The journal must decide both what threshold counts as good enough for their journal and also how much effort to invest in peer review. They can always make peer review better, but it comes at a cost (something that is all too familiar).

This already shows why judging journals by rejection rates can potentially be quite harmful. If a journal is merely striving to maximize its rejection rate, it doesn’t much care who it rejects. So, it has less incentive to invest in high quality peer review than does a journal that is judged by the average quality of papers in the journal. After all, if a journal only cares about rejection rates, it doesn’t much matter if a rejected paper was good or bad.

This already is probably sufficient to give one pause, but it actually gets much worse. In that quick argument, we implicitly assumed that there was a fixed population of authors who mindlessly submitted to the journal, hoping to get lucky. However, in the real world, authors might be aware of their chance of acceptance and choose not to submit if they regard the effort as not worth the cost.

A journal editor who wants to maintain a high rejection rate now has a problem. If they are too selective, authors of bad papers might opt not to submit, and a paper that isn’t submitted can’t be rejected. If a journal very predictably rejects papers below a given standard, their rejection rates will go down because authors of less good papers will know they don’t stand a chance of being accepted. A journal editor who cares about their journal’s rejection rate will then be motivated to tolerate more error in its peer review process in order to give authors a fighting chance to be accepted. They use their unreliable peer review as a carrot to encourage authors to submit, which in turn allows the journal to keep their rejection rates high.
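This dynamic is easy to see in a toy simulation. The sketch below is our own illustrative Monte Carlo, not the model from the forthcoming paper: the uniform distribution of paper quality, the Gaussian review noise, the acceptance threshold, and the benefit and cost of submitting are all assumed parameters chosen only to make the mechanism visible.

```python
import random

def simulate(threshold, noise_sd, benefit=1.0, cost=0.2,
             n_authors=20_000, trials=50):
    """Toy model: author quality q ~ U(0,1); a paper is accepted when
    q plus Gaussian review error exceeds the journal's threshold.
    An author submits only if their estimated acceptance chance times
    the benefit of publishing exceeds the cost of submitting."""
    random.seed(0)
    submitted = rejected = 0
    for _ in range(n_authors):
        q = random.random()
        # The author estimates their acceptance chance by mentally
        # simulating the noisy review process a few times.
        p_hat = sum(q + random.gauss(0, noise_sd) > threshold
                    for _ in range(trials)) / trials
        if p_hat * benefit > cost:
            submitted += 1
            if q + random.gauss(0, noise_sd) <= threshold:
                rejected += 1
    return submitted, (rejected / submitted if submitted else 0.0)

# Reliable review (tiny noise): weak papers self-select out, so few
# submitted papers are rejected. Noisy review: weaker papers take their
# chances, so the journal sees more submissions AND a higher rejection rate.
for sd in (0.01, 0.3):
    subs, rej_rate = simulate(threshold=0.8, noise_sd=sd)
    print(f"review noise sd={sd}: {subs} submissions, "
          f"rejection rate {rej_rate:.2f}")
```

In this sketch, raising the review noise simultaneously increases the number of submissions and the rejection rate, which is exactly the perverse incentive described above: unreliable review is the carrot that keeps weak papers coming in to be rejected.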

We consider several variations on our model to demonstrate how this result is robust to different ways that authors might be incentivized to publish in different journals. We would encourage the interested reader to look at the details in the paper.

Of course, our method is to use simplified models, and in doing so we run the risk that a simplification might be driving the results. Most concerning, in our mind, is that our model features a world with only one journal. Philosophy has multiple journals, although in some fields of philosophy a single journal might dominate the area as the premier outlet for work in that area. Future work would need to determine if this is a critical assumption, although our guess is that it is not.

Although we don’t investigate this in our paper, we think that the process we identify might also exist in other selection processes like college and graduate school admission or hiring. In the US, colleges often advertise the selectivity of their admissions process, and we suspect that they face the same perverse incentives we identify.

Whether you share our intuition about this or not, we think the process we identify is concerning. If philosophy relies too heavily on rejection rates as a measure for journal quality or prestige, we run the risk of further degrading the quality of peer review. We think it is potentially problematic that journals sometimes advertise their rejection rates, lest it contribute to rejection rates being a sought after mark of prestige. Furthermore, we think it’s important that philosophy as a discipline walk back its use of rejection rates as a proxy for journal quality. To the extent that we are doing that now, it may actually serve to undermine the very thing we are hoping to achieve.


 

 

The post Rejection Rates Should Not Be a Measure of Journal Quality (guest post) first appeared on Daily Nous.

Guest Post — Addressing Paper Mills and a Way Forward for Journal Security

Wiley's Jay Flynn discusses the impact that paper mills had on Hindawi's publishing program and how all stakeholders must collaborate to address behaviors that undermine research integrity.

The post Guest Post — Addressing Paper Mills and a Way Forward for Journal Security appeared first on The Scholarly Kitchen.

Inquiring about the status of journal submissions

In our newest "how can we help you?" thread, an anxious grad student asks:

I am sure there are topics on this, but it never gets old: how long after the submission is it appropriate to reach out to a journal to inquire about the status of the submission? 2 months? 3 months? 6 months? Is there any difference in norms relative to journals?

I'm pretty sure that we've discussed this on previous occasions, but it could still be good to take people's temperature on this again. Personally, I think I'd probably wait 6 months. The average turnaround time for many journals seems to be 2-4 months, so asking within that timeframe might not have much of a point. Then again, different journals have different average turnaround times (sometimes listed on their website), so maybe go by that?

Anyway, these are just my thoughts. What are yours?

Putting papers in progress on one's website?

In our newest "how can we help you?" thread, a reader asks:

What's the risk of putting the titles of one's "in progress" or "under review" papers on one's website? Sometimes I'll see a website where someone has replaced the title of their paper with something to the effect of "title redacted for review". And sometimes they'll even redact the titles of presentations on their CV (if they have a paper under review with the same title). But redacting titles of papers and presentations is a pain. So what's the risk?

Is the thought that reviewers might discover your identity and form biases? If so, are some authors more likely than others to suffer from deanonymization? Might those who study or work at prestigious institutions, or who have excellent publication records, even benefit from a positive presumption after deanonymization?

Good questions. What do you all think?

Reviewing a paper again for another journal?

In our two most recent "how can we help you?" threads, two comments were posted (by the same reader?) on whether to accept a reviewer assignment for a paper that they already recommended rejecting at another journal:

I've been asked to review a paper for a journal that I've already reviewed for a different journal, where I recommended rejection. I haven't seen the new version, but the title is the same. I was quite sure the paper was not ready for publication then, and it's been only about 3 months since then, so I think it's unlikely the paper has substantially improved. Should I still take up the review request?

----

Should you review a paper for one journal that you've already reviewed, and recommended rejecting, for another journal? It seems unfair to the authors to be reviewed again by someone who already didn't like their work the first time; maybe they deserve to have someone else read it?

Good questions. Another reader submitted the following reply:

I have refused to review a paper a second time. The reason is that I believe I may be mistaken. And from the other end: my currently most-cited paper (among the top-cited papers in the relevant debate) was rejected by a particular reviewer several times at different journals, always with the same set of comments. (It almost prompted me to leave academia.) I think it would be bad to compound one's mistake.

This is my policy as well, and for the same reasons. Peer review is such a crapshoot, and referees so different in their judgments, that I think an author deserves a shot with different reviewers in different places.

What do you all think?

Guest Post: Start at the Beginning – The Need for ‘Research Practice’ Training

Danny Kingsley suggests that research integrity begins with the training researchers receive at university. Achieving Open Research and increasing reproducibility requires systematic research training that focuses specifically on research practice.

The post Guest Post: Start at the Beginning – The Need for ‘Research Practice’ Training appeared first on The Scholarly Kitchen.

When anonymous review is compromised?

In our newest "how can we help you?" thread, a reader asks:

What is the protocol for when a journal (accidentally) reveals identities in a blind peer review? I've experienced this a few times lately, from both sides: As an author, I got the reviewer's comments and could still see their name. As a reviewer, I got a paper that still contained the author's name (both top-level special journals in my field, btw). What should one do in such a case? Report to the journal? Pretend nothing happened? Decline to review?

Good questions, and I'm curious to hear what people think. First, it's worth noting that not all journals have anonymized review. In some science-y fields (including some philosophy of science journals), journals have authors submit papers with their identifying information visible to reviewers. So, although I don't know for sure, something like that might have happened here. But, if not, I think the right thing to do is to draw the problem to the editors' attention and, if you're a reviewer, decline to review. I've heard some people say that they tell the editor and then let the editor decide whether the review should proceed. But I think this is bad practice. If a journal is supposed to have anonymized review and that anonymity is compromised, then the review should not proceed, whatever the editors might say or do--it goes against the journal's own stated editorial policy.

But these are just my thoughts. What are yours?

For Epistemic Respect – Against Reviewer 2

By: Taster
Despite the efforts of journals and editors to the contrary, the well-known academic folk-devil, Reviewer 2, continues to make the lives of researchers miserable. Gorgi Krlev and Andre Spicer draw on a recent encounter with Reviewer 2 and the subsequent Twitterstorm over what counts as epistemic respect. They explain what epistemic respect means, what needs …

How much time do editors and referees spend on submissions?

In our newest "how can we help you?" thread, a reader asks:

How much time do editors/referees spend on average reading submissions before making a decision? Do they read them more than once?

Interesting questions. It would be great to hear from readers (both editors and reviewers), but I suspect there is huge variance. Another reader submitted the following reply:

I suspect the average would be misleading -- a distribution would be much more helpful. Such info, while likely not feasibly accessible, would be interesting to see (probably there's a *huge* variety).

In my own case as a reviewer, how long I take on a paper depends on many things: how long and complex the paper is, how good I think it is, how certain I initially am about the recommendation I plan to make to the editors (sometimes I know right away that I want to recommend reject, accept, R&R-major revisions, etc.; other times I waffle), how much work I think a paper needs if I recommend an R&R (in these cases I write extremely detailed reports specifying exactly which issues need to be addressed for me to recommend acceptance), and so on.

Then, of course, there is my personal and professional schedule. Some times of the year are busier than others with teaching, family obligations, service obligations at my university, and research (sometimes I have revise-and-resubmits and other deadlines to meet too!), and sometimes these things can be planned for, but other times not. All of this results in huge variance in how long it takes me to get a referee report done. Sometimes I finish reports well in advance of the journal's deadline (in fact, sometimes after just a few days or weeks), but sometimes it takes considerably longer. In any case, I have a policy of meeting journal deadlines, and I'd never fail to complete a report on time (or at most a day or two late).

But this is just my experience. What's yours? It would be great to hear from reviewers and editors!

Guest Post – AI and Scholarly Publishing: A View from Three Experts

A recap of a recent SSP webinar on artificial intelligence (AI) and scholarly publishing. How can this set of technologies help or harm scholarly publishing, and what are some current trends? What are the risks of AI, and what should we look out for?

The post Guest Post – AI and Scholarly Publishing: A View from Three Experts appeared first on The Scholarly Kitchen.
