Fake science

One of the standard features of rumors is that they tend to amplify certain parts of a story and ignore others, so that as the story is passed from person to person it gets shaped, often into a meme.

Thus, when three grad students submitted a computer-generated paper to a conference, they were surprised to see it accepted. I probably first heard about this on the MEA list, which referenced a Technology Review blog carrying the story.

This has spread across the net even faster than an earlier story of vigilante justice for plagiarism. The Reuters article, like much of the other commentary, draws a parallel to the Sokal hoax.

While there are certain similarities, it’s important to note that the objective here was not to show that scientific conferences fail to provide competent peer review (though that case could certainly be argued separately). Rather, it was to show that this particular conference, which has somehow assembled a huge list of academics to spam with its CFP on a monthly basis, appears to be either a cash cow for the organizers or a “vanity conference.”

I actually looked into the conference when it first came into being, and again in its second year. Lacking any backing from a reputable scientific society, and with no particularly well-known scholars in the field on its organizing committee, it was already suspect. Having attempted to organize an ill-fated conference as a graduate student myself, I am sympathetic to these failings. But the spamming, both on lists and via personal email, is what has turned many against these conferences. The submission of two entirely computer-generated papers (think of them as Mad Libs for scholarly papers; see the sketch below) yielded one acceptance, which is striking because any high-school student would recognize such a paper as nonsense after the first paragraph or figure.
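
The “Mad Libs” description is roughly how SCIgen works: it expands a hand-written context-free grammar, recursively filling slots in sentence templates with random jargon (the real tool also fabricates figures and citations). As a rough illustration only, here is a minimal sketch of that kind of generator in Python; the rules and vocabulary below are my own invention, not SCIgen’s actual grammar:

```python
import random

# A toy context-free grammar in the spirit of SCIgen's "Mad Libs" approach.
# Every rule and vocabulary item here is invented for illustration; the real
# SCIgen grammar is far larger and also fabricates figures and citations.
GRAMMAR = {
    "SENTENCE": [
        "We present NAME, a novel ADJ methodology for the study of NOUN.",
        "Our evaluation of NAME proves that NOUN and NOUN are largely incompatible.",
        "Many researchers would agree that NAME, our ADJ framework for NOUN, is optimal.",
    ],
    "NAME": ["Rooter", "Flep", "Snark"],
    "ADJ": ["stochastic", "homogeneous", "extensible", "lossless"],
    "NOUN": ["lambda calculus", "write-back caches", "the Ethernet", "Markov models"],
}

def expand(token: str) -> str:
    """Expand one token: if (after stripping trailing punctuation) it names a
    grammar rule, pick a random production and expand it recursively."""
    core = token.rstrip(".,")
    tail = token[len(core):]
    if core not in GRAMMAR:
        return token  # a terminal word: emit as-is
    production = random.choice(GRAMMAR[core])
    return " ".join(expand(t) for t in production.split()) + tail

if __name__ == "__main__":
    # Print a three-sentence "abstract" of grammatical nonsense.
    print(" ".join(expand("SENTENCE") for _ in range(3)))
```

Run it and you get a few sentences of fluent nonsense; scale the grammar up by a few hundred rules and add fake graphs and a bibliography, and you have something this conference apparently could not tell from scholarship.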

So, really, this was a kind of check to make sure that the conference was as fake as it seemed. The question remains whether even marginally more carefully constructed fake work could make it into “real” conferences, and anyone who seriously asks this question hasn’t been to a large academic conference lately. The truth is that, without replication, an article can only cloak itself in credibility. This may be acceptable in the “hard” sciences, where replication is not uncommon (though becoming less common, I suspect, due to funding models), but in the social sciences, where such replication is a rarer beast, it raises concerns.

[Edited to assign blame appropriately :)]


5 Comments

  1. Posted 4/16/2005 at 1:28 am

    I’m not so sure that the difference you are pointing out between the MIT paper and the Sokal hoax (which was the first thing I thought of when I heard of it, Reuters notwithstanding) is a distinction that makes any real difference.

    Sokal’s hoax was to discredit a particular publication, with the intention of making it the stand-in for a much larger school of thought. He elaborates on this ad nauseam in his incredibly wrong-headed and tedious book.

    My point is that scads of folks who never read Social Text *anyway* used the Sokal publication as a way to jab at post-structuralist literary theory with their rather limp sabres, sure that Sokal’s success was a sign of the apocalypse.

    I find the robotic turnabout to be eminently fair play, and rather amusing to boot.

  2. Posted 4/16/2005 at 9:46 am

    The difference, I think, is that Social Text was seen as a central organ of American cultural studies, published by a university press, with a pretty well-respected (within the field) editorial board. It published people like Edward Said, Fredric Jameson, Terry Eagleton, Michel de Certeau, Gayatri Spivak, Simone de Beauvoir, Nancy Fraser, Lawrence Grossberg, Cornel West, Donna Haraway, Henry Giroux, (our own, sort of) Ernesto Laclau, and of course, Stanley Aronowitz, among others. It was a peer-reviewed journal with a serious editorial board, and publication of the Sokal article strongly suggested that a group of peer reviewers thought it was worthy.

    WMSCI, on the other hand, despite its website and *very* widely spread CFP, is not a reputable conference (let alone a journal). The authors of the program that generated the paper suggest that one good use of it is to “auto-generate submissions to ‘fake’ conferences; that is, conferences with no quality standards, which exist only to make money. A prime example, which you may recognize from spam in your inbox, is SCI/IIIS and its dozens of co-located conferences (for example, check out the gibberish on the WMSCI 2005 website). Using SCIgen to generate submissions for conferences like this gives us pleasure to no end.” If you look over the organizing committee for the conference, I doubt you will recognize any of the names, and it is not clear whether any of them have institutional affiliations.

    So, in the former case, it’s clear the intent was to show that the emperor had no clothes. That was neither the intent nor, I think, the outcome of the latter. If you wanted a similar target for the latter, you would need to show that a “fake” paper could get into, say, Machine Learning, Computational Linguistics, or Artificial Intelligence.

    Note that I’m not saying doing so is impossible. In fact, I think it is pretty likely you could. After all, academic publishing is built on the assumption of integrity, which in some ways invalidates Sokal’s attempt to discredit cultural studies.

  3. Chheng Hong
    Posted 4/16/2005 at 10:56 pm

    Actually, I felt very nervous when I saw this news: since my writing is not so far from these automatically generated texts, will the faculty here suspect that I used this program to generate my final paper?

  4. Posted 4/18/2005 at 7:05 am

    Suggesting that the fact that some percentage of papers gets through a system proves anything is shocking, regardless of whether you’re qual or quant, or bi. Or a journalist. I think your analysis of the vanity conference, based on how the conference is configured, tells us much more. As for computer-generated papers, we should be more concerned with research and design errors in the papers that are legitimately produced, IMHO.

  5. Posted 4/18/2005 at 8:36 pm

    Alex: that makes sense. The paradox is that big names often get published without editorial scrutiny anyway; it has little to do with general principles of rigor.

    Jason makes sense too; it’s why the Sokal publication really didn’t mean anything!!

    I still maintain that those who used Sokal as a means to try to discredit contemporary theory should be using this to discredit hard-science publications; it would demonstrate their equal wrong-headedness… not doing so simply shows that they were driven solely by existing ideology…
