Cruising Wikipedia

I have just been re-reading some of the material Larry Sanger has posted as a critique of Wikipedia, the open source encyclopedia, including his essay on Kuro5hin (Why Wikipedia Must Jettison Its Anti-Elitism) and a memoir posted on Slashdot.
I am of two minds on this issue, and the reason for this is in part personal. When I heard about the Nupedia project (the edited, open source forerunner to Wikipedia), I was thrilled. This seemed to me to be the most obvious way of making the internet live up to its potential, a potential hinted at by the World Brain and the memex. And so, I emailed and said “sign me up.”
There was a problem, though: I didn’t have a Ph.D. That meant that I could not be an editor in the communication section. After a little back and forth, though, I was installed as a peer reviewer, which would have been fine except that for many months there was no communication editor, and when one did show up (I don’t recall who it was), his arrival didn’t have any impact. In the end, as far as I know, there were never any communication articles in the pipe for Nupedia.
Wikipedia was successful precisely because it adhered to the open source development ethos. The approach was something along the lines of “ask for forgiveness rather than permission.” For me, it seemed like the perfect way to develop articles that could then be peer reviewed. Of course, that isn’t how things turned out. Those who are regular readers of this blog know that I think Wikipedia is spectacular. I think it is the best thing to ever happen to the web. As I tell my classes, I think our grandkids, a century from now, are going to ask what we contributed to the early days of the Wikipedia. That said, I think that Sanger correctly identifies a flaw, and that is the lack of a stable, credible “freeze” that might be effectively cited and integrated with more academic work.
One of the reasons for this is that the claims of validity and salience fall back on Raymond’s open source dictum that “given enough eyeballs, all bugs are shallow.” This is a great idea, and in practice it is largely correct, though there are certainly counterexamples. Wikipedia, likewise, mostly validly identifies what I will call “interested general knowledge.” That is, while it is true that most Americans think that Saddam Hussein was responsible for the events of 9/11, most of them do not have an interest in proclaiming this from the rooftops, or even in everyday conversation. When forced to give an opinion, they may express this attitude, but I doubt that it is something that they would be prepared to argue.
Not all public misinformation is of this type. Tom Cruise considers himself an expert on psychiatry, and especially on psychopharmacology. (Note that his claim to authority on this matter is not some sort of gnosis, delivered by deity or faith, but that he has “read the literature.”) Though I doubt Mr. Cruise is a Wikipedia contributor, the place of Scientology on the site is contentious, precisely because the standards of science are held in only as high regard on the Wikipedia as they are among the general public.
Even so, in practice, Wikipedia’s entries on Scientology or Intelligent Design seem to maintain a degree of balance. I have little doubt that there are errors in Wikipedia, of omission and of commission, but I also have little doubt that most of the entries are trustworthy. The problem is why we trust them. Do we believe in knowledge as a democratic process?
Sanger’s push is for the accreditation of an encyclopedia by “acknowledged experts.” Those experts are established primarily through institutions of acceptance. One of the processes for marking someone as an expert has traditionally been publication. The web is already undermining this. I think one of the reasons publication has been a way of demonstrating expertise is that there has been financial risk associated with it. Someone would not be published unless there was some guarantee that their book or article would sell, and one of those guarantees is expertise. (Another is simply an ability to be compelling — non-fiction is not the only sort of publishing.)
The other kind of imprimatur that Sanger originally used is the Ph.D. You would think, having spent a goodly number of my bestest years pursuing just such a degree, I would be happy with this hurdle. But having been up close and personal with the process of “earning” a Ph.D., I am convinced of two things. First, some absolute fools manage to get the doctorate. Some of these fools graduate from the best schools out there, and some of the less able programs graduate more fools than scholars. So the Ph.D. is certainly not a measure of insight. Indeed, how many business cards have you received with “, Ph.D.” after the name and thought that this was a replacement for any obvious signs of intelligence?
Second, there are plenty of brilliant people who will never get a Ph.D. The degree has a lot to do with conforming to a particular set of social and economic conditions, and it is not the best learning fit for many people. It was good for me, I think (still working that one out), but it’s not for everybody. And I have flunked some truly brilliant people out of our own program because, while they were smart and able, they were not going to complete the degree.
I do think that the average Ph.D., and perhaps the average faculty member, is an expert in their field and able to teach about it. But just as with the Wikipedia, that expertise holds only most of the time, on most of their specialized topics. The difference is that there is social acceptance of this form of authority. And the question is how to lend that authority to Wikipedia.
Sanger sees this need:
Nevertheless, everyone familiar with Wikipedia can now see the power of the basic Wikipedia idea and the crying need to get more experts on board and a publicly credible review process in place (so that there is a subset of “approved” articles–not a heavy-handed, complicated process, of course). The only way Wikipedia can achieve these things is to jettison its anti-elitism and to moderate its openness to trolls and fools; but it will almost certainly not do these things. Consequently, as Wikipedia increases in popularity and strength, I do not see how there can not be a more academic fork of the project in the future.
I hope that a university, academic consortium, or thinktank can be found to pursue a project to release vetted versions of Wikipedia articles, and I hope that the new project’s managers will understand very well what has made Wikipedia work as well as it has, before they adopt any policies.
Likewise, others who are directly involved in Wikipedia are looking for a process for vetting and testing the truth claims found on the site. So far, I think these approaches have been stymied by the sheer scale of the project involved. At least two viable approaches have been suggested in various fora:
1. Estimate the degree to which Wikipedia reflects knowledge as expressed in the scientific literature. By taking a sample of pages and “fact-checking” them using human coders, some indication of the reliability (in the colloquial sense) of the resource could be established (a back-of-the-envelope sketch of such an estimate appears below). Then, someone citing Wikipedia could be confident that 99.44% of the content, for example, was reliable.
2. Use Wikipedia as a free source of pre-written material to peer review and publish as a reliable subsection. This need not be an entire encyclopedia, as such an undertaking would be tremendously difficult, but might work quite well as a way of seeding a peer-reviewed reference work within a narrower field.
In each case, we have to discount material that is simply not reflected in the scholarly literature. As an aside, I think that this material is particularly interesting, but there really isn’t much to check it against.
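To put some flesh on the first approach, here is a minimal sketch of the arithmetic, with entirely made-up numbers: human coders check a sample of claims against the literature, and the proportion verified (plus a rough confidence interval) becomes the kind of headline reliability figure a citer could point to. None of this reflects real data or any existing tool.

```python
import math

# Hypothetical fact-checking results: for each sampled article, how many
# claims were checked by human coders and how many were verified against
# the scholarly literature. All numbers are invented for illustration.
sample = [
    {"title": "Article A", "checked": 40, "verified": 39},
    {"title": "Article B", "checked": 25, "verified": 24},
    {"title": "Article C", "checked": 60, "verified": 57},
]

checked = sum(a["checked"] for a in sample)
verified = sum(a["verified"] for a in sample)
p = verified / checked  # observed proportion of verifiable claims

# Rough 95% confidence interval via the normal approximation; a real study
# would want a much larger, stratified sample of pages.
se = math.sqrt(p * (1 - p) / checked)
low, high = max(0.0, p - 1.96 * se), min(1.0, p + 1.96 * se)

print(f"Estimated reliability: {p:.1%} (95% CI roughly {low:.1%} to {high:.1%})")
```

The hard part, of course, is not the arithmetic but the coding: deciding what counts as a claim, and what counts as verification.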
It may be that this is a turning point for the resource, a kind of coming-of-age that requires a meta-conversation. Given its successes so far, I would not want this to impede its further progress. My thought is that for a reliability check of any sort to have a good chance of success, it needs to remain in some way distributed. In particular, I’m thinking about one of a number of approaches to developing trust that will allow for interesting extrusions of the Wikipedia.
One way of doing this would be to use something like Outfoxed, a trust-based plug-in for Firefox. There is real value in egocentric trust networks, but here I am looking for the imprimatur of a set of experts. So, instead:
1. Create a transparent set of criteria, perhaps field-specific, that allows for the creation of a set of people who have wide acceptance as experts. This might have something to do with educational credentials, academic appointments, and publications, though this really does differ from field to field. What is an acceptable set of criteria for a scholar of physics is unlikely to be the same as what is acceptable for a scholar of music.
2. Present frozen articles to the peer group and allow them to rate the articles as “acceptable” or “needs more work” (a rough sketch of the record-keeping this might involve follows this list). Naturally, given the open nature of Wikipedia, they would be encouraged to contribute to that work, within Wikipedia.
3. Identify gaps in Wikipedia and create stubs, perhaps recruiting authors for the item.
4. Present a frozen subset of Wikipedia as authoritative, maintaining all of the licensing requirements. Revisit the version periodically, on a cycle of years rather than minutes.
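To make the review step a little more concrete, here is a minimal sketch of what the record-keeping behind steps 2 and 4 might look like. The field names, the two rating labels, and the two-reviewer threshold are my own assumptions, invented for illustration; they are not part of Wikipedia or of any existing vetting tool.

```python
from dataclasses import dataclass, field

ACCEPTABLE = "acceptable"
NEEDS_WORK = "needs more work"

@dataclass
class Review:
    reviewer: str      # a vetted expert, per the criteria in step 1
    rating: str        # ACCEPTABLE or NEEDS_WORK
    notes: str = ""

@dataclass
class FrozenArticle:
    title: str
    revision_id: int   # the specific Wikipedia revision being judged
    subject: str       # e.g. "communication"
    reviews: list[Review] = field(default_factory=list)

    def approved(self, threshold: int = 2) -> bool:
        """Admit the article to the frozen release only if enough experts
        rate it acceptable and none say it needs more work."""
        accepts = sum(r.rating == ACCEPTABLE for r in self.reviews)
        rejects = sum(r.rating == NEEDS_WORK for r in self.reviews)
        return accepts >= threshold and rejects == 0

# A hypothetical article with two favorable reviews.
article = FrozenArticle("Two-step flow of communication",
                        revision_id=123456, subject="communication")
article.reviews.append(Review("Reviewer One", ACCEPTABLE))
article.reviews.append(Review("Reviewer Two", ACCEPTABLE, "Minor copyedits made."))
print(article.approved())  # True
```

The point of the sketch is only that a frozen release is a set of specific revisions plus their reviews, so the vetting can proceed without touching the live wiki.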
This has a further advantage in that, as long as experts were picked in a fairly demanding way, association with the project would be something akin to association with the editorial board of a journal or “real” encyclopedia. In other words, it would provide the sort of reputational currency that many scholars require in order to devote time to a project.
This isn’t to say that some of the other projects to check Wikipedia are not good. I am involved in two such projects, both in a very cursory way, and I am in favor of letting a thousand flowers bloom. And perhaps the best place to start is within the (relatively empty) field of communication. The above could be accomplished relatively easily, I think, for such a small slice.