Open Access Sting

[Comic: PhD Comics]
There is an article published in Science today that details a “pentest” of open access science journals, finding that many of them accepted a ridiculous fake article. I was amused to see a range of reactions scroll by on Facebook today, but a bit surprised at some of the interpretations of the article. Without naming names…

Of Course Open Access Journals Suck

As a science journalist, the author should recognize that “suckage” is not an absolute: a measurement can only be evaluated in contrast to other measurements. From the write-up in Science, I gather that the acceptance rate shows that many open access journals have very poor (or non-existent) peer review mechanisms, despite advertising to the contrary. You can conclude from this that they are poor venues for scholarly communication, as the author rightly does.

But this is a condemnation of open access journals only because he limited his submissions to open access journals. In the “coda” of the article, he puts this criticism in the mouth of biologist David Roos, who notes that there isn’t any reason to believe a similar level of incompetence or fraud wouldn’t be found in subscription journals. The author then defends the design with a non-response: there are so many more open access journals!

This is true enough, and I do expect you would find more fraud among open access journals. But why are we left to speculate, when the same test could have been run on “traditional” publishers just as easily?
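To make the missing comparison concrete, here is a minimal sketch of the test the study skipped: a two-proportion z-test on acceptance rates across the two groups. The counts below are invented placeholders, not the study’s actual figures.

```python
import math

def two_proportion_z(accepted_a, total_a, accepted_b, total_b):
    """Two-proportion z-test: are two acceptance rates plausibly the same?"""
    p_a = accepted_a / total_a
    p_b = accepted_b / total_b
    # Pool the counts under the null hypothesis that the true rates are equal.
    p_pool = (accepted_a + accepted_b) / (total_a + total_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se

# Invented placeholder counts -- NOT the study's data. Suppose 150 of 250
# open access journals accepted the bogus paper, and a parallel submission
# to 250 subscription journals yielded 100 acceptances.
z = two_proportion_z(150, 250, 100, 250)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a real difference at p < .05
```

Without the subscription-journal arm of that comparison, the acceptance rate among open access journals is a single measurement with nothing to stand against.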

It’s Because of the Money

I’ve seen the inference elsewhere in reporting on the study that the reason you find more fraud in open access journals (again, a conclusion this approach cannot support) is that they prey on authors and have no incentive to defend the journal’s reputation, since the journal doesn’t actually have to sell subscriptions.

This is a bit strange to me. I engage in lots of services (my doctor, my lawyer) that are, effectively, “author pays.” Yes, especially in medicine, there are debates about whether that is ideal, but it certainly doesn’t mean there is more fraud than if those services were sold as outcomes to, for example, employers. What the reputation argument essentially says is that librarians, sometimes with fairly limited subject matter expertise, are better at assessing journals than researchers are. While I might actually agree with that, I can see arguments on either side.

I haven’t published in a “gold” (author-pays) journal, though I have published in, and prefer, open access journals. Would I pay for publication? Absolutely. If, for example, Science accepted a paper from me and told me I would need to pay a grand to have it published, I’d pony that up very quickly. For a journal I’d never heard of? Very unlikely. In other words, author-pays makes me a more careful consumer of the journals I seek out for publication.

Some of this also falls at the feet of those conducting tenure reviews. If any old publication venue counts, then authors have no reason to seek out venues in which their work will be read, criticized, and cited. Frankly, if someone comes up for tenure with conference papers or journal articles in venues I don’t know, those publications require more than a passing examination. While I don’t like “whitelists” for tenure (in part because they often artificially constrain the range of publishing venues; that is, they are not long enough), they would address some of this.

Finally, one could argue that Science, as a subscription journal, has a vested interest in slowing the move to open access, and that this might color its objectivity on the topic. I don’t make that argument: I just think this is a case where the evidence doesn’t clearly support the conclusion.

Paging Sokal

A few people saw this as comeuppance for Alan Sokal, who famously published a nonsense paper in Social Text. Without getting into the details of what often riles people on both sides, I actually think there is a problem with calling what happens in science journals and in philosophical journals both “peer review” without recognizing some significant epistemological differences. But even beyond this, it’s apples and oranges: Sokal placed his hoax in a prominent journal, while this sting succeeded largely with obscure ones. Had the fake paper landed in a top science journal, we would be looking at something else.

The problem is that this article and Sokal’s attempt both take fairly modest findings and blow them out of proportion. If anything, both show that peer review is a flawed process, easily tricked. Is this a surprising outcome? I can think of very few academics who hold peer review to be anything other than “better than nothing,” and quite a few who think it doesn’t even rise to that faint level of praise. So, this tells us what we already knew: substandard work gets published all the time.

To Conclude

I don’t want to completely discount work like this. What it shows, though, is that we are in desperate need of transparency. When a journal lists its editorial board, there should be an easy way to find out whether those people really are on the board (a clearinghouse of editorial boards? see the sketch below). We should have something akin to RealName certification that editors are non-fake people. And we should train our students to evaluate journals more carefully.
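As a gesture at what that clearinghouse might look like, here is a minimal sketch. It assumes a hypothetical registry that exposes editorial board rosters over HTTP; the endpoint, the response format, and the journal and editor names are all invented for illustration.

```python
import json
import urllib.request

# Hypothetical clearinghouse endpoint: this service does not exist, and the
# URL and response shape here are invented purely for illustration.
REGISTRY_URL = "https://editorial-registry.example.org/boards/{journal}"

def editor_is_listed(journal: str, editor: str) -> bool:
    """Check a journal's claimed editor against the (hypothetical) registry."""
    with urllib.request.urlopen(REGISTRY_URL.format(journal=journal)) as resp:
        roster = json.load(resp)  # expected shape: {"editors": ["name", ...]}
    return editor in roster.get("editors", [])

# A claimed board membership could then be spot-checked before submitting:
# editor_is_listed("journal-of-international-medicine", "Jane Smith")
```

Even a static, community-maintained roster file would do; the point is that the claim be checkable at all.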

