[the making of, pt. 6] Are you experienced?
This is the sixth in a series of posts about the piece of research I am doing on Digg. You can read it from the beginning if you are interested. In the last section I showed a correlation between how much of a response people got from their comments and their propensity to contribute future comments to the community. In this section, I question whether we can observe some form of “learning” or “training” over time among Digg commenters. Do they figure out how to garner more Diggs, either by learning on an individual basis, or by attrition?
Are later comments better Dugg?
You will recall that we have numbered the comments for each user in our sample from the earliest to the most recent. If people are learning to become more acceptable to the community, we should see a significant difference in responses (Diggs and replies) between people’s first posts and their 25th posts.
Loading all the data into R, I find a fairly strong correlation between post number and upward diggs (.28), along with weaker correlations with downward diggs (.11) and replies (.08). I’d like to show this as a boxplot, so you can clearly see the growing abilities of users, but R is giving me problems. The issue is simple enough: although I can “turn off” the plotting of outliers outside the boxes, it still scales the chart to make room for them. Since one of the comments received over 1,500 diggs up, my boxes (which have median values in the twos and threes) end up sitting at the bottom of the graph as little lines. After a little digging in the help file, I figure out how to set limits on the y axis (with ylim=c(0,10)), and I generate the figure seen to the right.
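For the record, the R steps look something like this. This is a sketch rather than my actual script: the data frame and column names (comments, post_num, diggs_up, diggs_down, replies) are placeholders.

    # Correlations between a user's comment number and the response it drew
    comments <- read.csv("digg_comments.csv")
    cor(comments$post_num, comments$diggs_up)    # roughly .28
    cor(comments$post_num, comments$diggs_down)  # roughly .11
    cor(comments$post_num, comments$replies)     # roughly .08

    # Boxplot of up-diggs by comment number: hide the outliers and cap the
    # y axis so the 1,500-digg comment doesn't squash the boxes flat
    boxplot(diggs_up ~ post_num, data = comments,
            outline = FALSE, ylim = c(0, 10))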
But this raises the question of what creates the increase. As with failing public high schools, some of the rise in positive marks might just be because the less capable Diggers are dropping out. We need to figure out whether this is messing with our results.
Dropping out the Dropouts
In order to filter out the dropouts, I turn to… nope, not Python this time. I could, but it’s just as easy to sort all the comments in Excel by comment number, so that all of the 30th comments are in one place on the worksheet. I then copy and paste these 812 usernames to a new sheet. In the last column of the main sheet, I write a formula that says: if the username is on that list, and if this comment’s number is 30 or less, print a 1 in this column; otherwise, print a 0. If you are curious what that formula looks like, it is along these lines:
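(A sketch rather than the exact original: assume usernames sit in column A, comment numbers in column B, and the pasted list of 30-comment users lives on a sheet I’ll call Keepers.)

    =IF(AND(COUNTIF(Keepers!$A:$A,A2)>0,B2<=30),1,0)

Dragged down the column, this marks with a 1 every row I want to keep.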
I can now sort the list by this new column, and I have all of the first 30 comments, by users who have made at least 30 comments, in one place. I pull these into R and rerun the correlations. It turns out, no surprise, that they are reduced. The correlations with buries and replies are near zero, and the correlation with diggs drops to 0.19.
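Roughly, the rerun looks like this; again a sketch, pretending the Excel flag came in as a column called keeper.

    # Keep only the first 30 comments from users with at least 30 comments
    veterans <- subset(comments, keeper == 1)
    cor(veterans$post_num, veterans$diggs_up)    # about 0.19
    cor(veterans$post_num, veterans$diggs_down)  # near zero
    cor(veterans$post_num, veterans$replies)     # near zero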
I’m actually pretty happy with a 0.19 correlation. It means that there is something going on. But I’m curious as to what reviewers will think. What counts as a strong correlation is a bit arbitrary: it depends on what you are doing. If I designed a program that, over a six-month period, correlated at -0.19 with body weight, or crime rates, or whatever, it would be really important. The open question is whether there are other stable factors that can explain this, or if the rest of the variability is due to, say, the fact that humans are not ants and tend to do unpredictable stuff now and again. Obviously, this cries out for some form of factor analysis, but I’m not sure how many of the other factors are measurable, or what they might be.
Hidden in these numbers, I suspected, were trolls: experienced users who were seeking out the dark side, learning to be more and more execrable over their first 30 comments. I wanted to get at the average scores of these folks, so I used the “subtotal” function in Excel (which can give you “subaverages” as well), and did some copying, pasting, and sorting to be able to identify the extreme ends. The average average was a score of about 3. The most “successful” poster managed an average score of over 33. She had started out with a bit of a bumpy ride: her first 24 posts had an average score of less than zero. But she cracked the code by the 25th comment, and received scores consistently in the hundreds for the last five comments in this chunk of data.
On the other end was someone with an average score of -11. Among her first thirty entries, only one rose above zero, and the rest drew progressively worse ratings as she worked through a litany of racist and sexist slurs, along with attacks on various sacred cows on Digg. It may have been that she was just after the negative attention, and not paying any mind to its quantification in the form of a Digg score, but it is clear that the objective was not to fit in.
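For completeness, the same “subaverage” step could be done back in R instead of Excel; a sketch, assuming a score column holding each comment’s net Digg score and the filtered data frame from above.

    # Average comment score per user, then peek at both extremes
    user_means <- aggregate(score ~ username, data = veterans, FUN = mean)
    head(user_means[order(user_means$score), ])   # the resident trolls
    head(user_means[order(-user_means$score), ])  # the crowd-pleasers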
Enough with the numbers!
I wanted to balance out the questions of timing and learning with at least an initial look at content. I always like to use mixed methods, even though it tends to make things harder to publish. At some point I really need to learn the lesson of Least Publishable Units, and split my work into multiple papers, but I’m not disciplined enough to do that yet. So, in the next sections I take on the question of what kinds of content seem to affect ratings.
[Update: I pretty much ran out of steam on documenting this. The Dr. Seuss-inspired presentation for the Internet Research conference is here, and a version of the article was eventually published in Information, Communication & Society.]