One of the themes of my book (you know, the book I keep talking about but keep failing to snatch from the outer atmosphere of my imagination, where it seems to reside) is that by measuring, you can create change in yourself and in others. Given that, and the immense non-being of the book, its chapters, or the words that make it up, engaging in #AcWriMo seems painfully obvious. This is a take-off on the wildly successful National Novel Writing Month, and an effort to produce a lot of drafty text, not worrying so much about editing, making sense, or the like. You know: blogging.
Noodle knows I’ve got a ton of writing that needs to be done, like yesterday. Just so I can keep it straight in my head:
* The #g20 paper. This was an awesome paper that was nearly done two years ago. The data is dated, which is going to make publishing harder, but the ideas and analysis are still really interesting and good, I think. I just need to finish off a small bit of analysis (oh no! that has little to do with writing!) and write the sucker up.
* The aforementioned book. Or at least a couple of the chapters for it, which are now about five years overdue.
* A short piece on Ender's Game.
* Updating some research (eek, more non-writing) and writing it up (phew).
* A dozen other little projects.
I also, however, have a bunch of other pressing things: planning for two new courses, maybe coding up a new version of my badge system (although, unless somehow funded, that needs to be a weekend project), and of course the ever-present AoIR duties.
Oh, did I say weekends? Yes, the first caveat to my pledge: I’m trying not to work weekends. My family is my first priority, and while that is easy to say, it’s harder to do. So I will endeavor not to do any work on the weekends. I’ve been trying that so far, and it hasn’t really been possible, but it’s a good reach goal. Oh, and I’m taking a chunk of Thanksgiving week off, since my Mom and all her kids and grandkids (including the ones in Barcelona) are coming together at our house for the first time in probably more than two decades. But I’ll do a make-up in December.
Second caveat: I’m counting posting to the blog (since this is where I used to do a lot of my pre-writing). I’d like to count email too, since I did a solid 6 hours of catching up on email today, but I think that’s a no-go.
Really what we are talking about then is four consecutive weeks of completing 6,000 words each week. That may not sound very ambitious, but given how hard it was to push out the last 5,000 words (it took way more than a week–sorry editors!), I think that 1,200 words a day is plenty ambitious. Oh, and by doing it as Monday-Friday weeks, I get to start next Monday. Procrastination FTW.
Now that I’ve managed to negotiate myself down, it doesn’t seem like much of a challenge, but there it is. I will report on my goals here on a weekly basis. I may try to add some other metrics (time on task, people mad at me, etc.) as we go forward. But for now: words, words, words. (Though only 573 of them for this post.)
Gaming Amazon Reviews
I will readily admit it: I trust Amazon reviews. I just bought a toy at Toys R Us for my eldest son for his birthday. It kind of sucks, though he’s a bright kid and can make sucky things work. If I had read the Amazon reviews, I would have found this out before making the purchase.
I’m not stupid–I know that astroturf exists and that Amazon Reviews are a good place for it to exist. Heck, I even bought a book on Amazon that tells authors to do it. I bought the book because it was well-reviewed. It was not a good book. It did get me to plead for folks on Facebook to review my book on Amazon, and a few of them (mostly former students) took me up on the offer.
First Review
I don’t write many Amazon reviews, but I happened across some of them recently. One of the first I wrote was in 2007 for a first novel by David J. Rosen called “I Just Want My Pants Back.” I picked it up in the lobby of our apartment building where I suspect someone in publishing or media worked and discarded her readers’ copies. I got a lot of early reads from big publishers this way, and then returned them to our communal library.
Not wanting to slam an early author–I’m professorial like that–I gave the book three stars. I’ll admit here, that is where the reviewing system first failed. It really deserved two. The extra star was to pull it up to a “Gentleman’s C.” As of today, it has an average of four stars, with 32 reviews. It was made into a series for MTV, which, if the novel is any kind of indication, is probably shown to inmates in Guantánamo. (If this sounds harsh to Mr. Rosen or anyone else, pull out those fat paychecks from MTV and feel better.)
Second Review
Now, 32 reviews is usually an indicator to me that an average review is actually pretty legitimate, so where did things go so wrong? Let’s start with the review directly after mine, posted about two weeks later, which is titled “Jason Strider is a modern day Holden Caulfield” and penned by first-time reviewer R. Kash “moneygirl”. Whoa–not shy with the praise there, and it seems we differ in our perception of the work. How did we come to such a failure in intercoder reliability?
We know that Ms. Moneygirl is a real person because her name is followed by the coveted Real Name™ badge, which means that she has ponied up a credit card that presumably reads “R Kash.” That this is her first review may be a warning, but frankly it was very nearly my first Amazon review as well, preceded only by an over-the-top accolade for this very blog. (Given my tendency to dissociation, this is only a mild conflict of interest.) Her other two reviews are also 5-star–but we will get to that.
Despite the “real name” and a self-reported nickname and home city (New York), it’s difficult to find out much more about Ms. Kash without some guessing. But a little noodling around suggests that the Rachel Kash on Twitter is a fan of the MTV show. Despite the demure pony-tail photo, I think there are some clear connections to a Rachel Kash who writes for the Huffington Post. Her profile there notes that she writes fiction, as does her husband, David Rosen. Yes, the author of the book and the subject of Ms. Kash’s first and second reviews on Amazon. Given this, “I hope to see a lot more from Rosen in the future” could be read in multiple ways. But I think it’s awesome that his spouse is also his number one fan.
Third and Fourth Reviews
The third review comes from someone who was writing his second review. The first review he had written was for the game NBA Live 2004, which coincidentally (?) had close ties to hip-hop artists. (This conspiracy stuff makes you paranoid.) If it is astroturf, it is very forthright astroturf: “This book was passed on to me by a colleague at MTV and I read it in one day.” Perhaps I’m the only one who wonders whether he read it at work. For those keeping score, we now have two 5-star reviews, in addition to my three-star.
The next review is the first one published after the book’s actual release date of August 7, 2007, and it is from “Honest Reader,” who may very well be just that, but doesn’t review on Amazon much: this was his only review. He loved it, though.
Fifth Review
The fifth review was from a Lisa Violi in Philadelphia, making it our first review from outside the New York area. That is her “real name,” unlike the previous two contributors, and some quick Googling suggests she’s not directly connected either to the publishers or MTV. (Though more thorough searching might turn up a connection.) Her review was one star: “A snore.” This was only her second review; all her others are five stars, including one for Christopher Moore’s Lamb. Perhaps we just have similar tastes, though–and the crowd is against us.
Sixth and Seventh Reviews
The next two reviews bring us more unbridled praise. This is the second review from M. Gilbar, “Handsome Donkey,” of a total of four reviews. Each of his reviews garnered five stars. The name really doesn’t get us anywhere. We could take a wild shot in the dark and guess that it might be Marc Gilbar who does something called “branded entertainment” for Davie Brown Entertainment. Given that there is a “Marc Gilbar” who has used the handle “Handsome Donkey” before, perhaps this is not too much of a stretch.
Anne Marie Hollanger is not “real name” certified, and if the person goes by that name elsewhere, she’s hard to find. I suspect it might be a pseudonym. Another five-star review.
Et Cetera
I am not suggesting that all the reviews that disagree with mine are plants. Erik Klee, number eight, has over 200 reviews under his belt, and while I might not agree with his taste in books, I cannot but admire his dedication and careful reviews. Even in this case, where he gives the book five stars, I find enough in his review to form my own opinion, which is strong praise.
There are some interesting other names. Is number nine, Mat Zucker, that Mat Zucker? Is the book appropriately skewered by an acupuncturist? How many of the reviewers are also book authors? Why has Mike Congdon gone to the trouble of setting up a second RealName-certified account to write back-to-back five-star reviews of the book? (I am assuming that, if he is the Congdon who works for the company that does MTV’s books, it’s coincidental–after all, very few companies are more than a degree from Viacom.)
I could spend all day noodling around the interwebs. Many of these people have public profiles on Facebook naming their friends. I could start printing out photos and hanging yarn connections from wall to wall. I am not sure where that would get me. But I am bothered by that average review, particularly when it seems so heavily influenced by the first few reviews. There is great enthusiasm for the book in the first few reviews, and then again when the MTV series comes out, but the four-star average isn’t entirely representative of the reviews outside these peaks.
It also seems like at least some of those earlier commenters might be more than a little interested in the success of the book. I suspect this isn’t an aberration, except perhaps in how mild the influence is. I picked this example pretty much at random when I ran into my short list of reviews for things on Amazon. So, the question is, what can we take from this and is it something we can fix?
Satisficing and BS
You are going to say you don’t actually care about the reviews, and I believe you. I am not an idiot–I read them with a huge grain of salt. But I do read them and they do influence me.
And I am not suggesting that you do what I just did, and launch an investigation of the reviews every time you want to buy something. At present, you can buy a used copy of the book for $0.01, and frankly, you can read it in less time than it took me to track down a handful of the early reviewers and write this up. In other words, you could know the answer to whether you would like the book, with 100% certainty, faster than it would take to play amateur detective online. So what we are looking for here is a heuristic; and maybe a heuristic that can be automated.
How much reviewing do you do?
One metric might be how much reviewing you do. Frankly, I trust reviewers who review a lot. It may be that they are paid off as well–it can happen. But when I look on, for example, TripAdvisor and see a bunch of one-star reviews from people who have reviewed only this one hotel, and four-star reviews from people with a dozen reviews under their belt, I am going to assume the one-stars are written by the competition.
That’s why TripAdvisor provides badges for those with more reviews, and indicates how many of those reviews have been considered “helpful.” Without clicking through on Amazon, it’s difficult to know how many reviews each of the contributors has made.
Meta-reviews
One might expect that meta-reviews would just replicate the review totals, with people voting against those who disagree with them. But by name, they are less about agreement and more about “helpfulness.” In fact, Amazon does provide a summary based on the most helpful positive and negative reviews. In the case of this book, the most helpful favorable review gave the book four stars, and was written by someone with 854 reviews to his name. The most helpful unfavorable review gave it two stars, and was written by someone with 59 reviews to his name.
You could weight the averages by their helpful/unhelpful votes. Doing so in this case actually ends up closer to two stars than four. But this isn’t really the best solution. Research has shown that Amazon’s ratings are, unsurprisingly, bimodal–you generally only review something if you either like it or dislike it. This favorable/unfavorable breakdown is far more useful than the average number of stars, but Amazon continues to show the latter.
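To make the weighting idea concrete, here is a minimal sketch in Python. The review data is entirely invented (not the book's actual reviews), and the `smoothing` parameter is my own assumption, added so that reviews with zero helpful votes don't vanish from the average.

```python
# Weight each star rating in proportion to its "helpful" votes.
# (stars, helpful_votes) pairs -- hypothetical data for illustration.
reviews = [(5, 1), (5, 0), (5, 2), (1, 14), (3, 9), (2, 21), (4, 30)]

def helpfulness_weighted_average(reviews, smoothing=1):
    """Weighted mean of stars; `smoothing` keeps zero-vote reviews in play."""
    total_weight = sum(votes + smoothing for _, votes in reviews)
    return sum(stars * (votes + smoothing)
               for stars, votes in reviews) / total_weight

plain = sum(stars for stars, _ in reviews) / len(reviews)  # ~3.6 stars
weighted = helpfulness_weighted_average(reviews)           # ~2.9 stars
```

With this (made-up) distribution, a cluster of unhelpful five-star reviews gets pulled down by a few well-voted low reviews, which is exactly the effect described above.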
Reliable Sources
The Real Name badge is intended to indicate that a reviewer is more likely to be credible, since they are putting their name behind their review. But other metrics of performance would be helpful as well. How often do they review? Are their reviews generally helpful? These are revealed on profile pages, but not next to the review itself. Also–many of the five-star reviews in this case were from people who only gave five-star reviews. I can understand why–maybe you only feel moved to review the stuff you really love (or really hate). But maybe surfacing these signals next to the review would help.
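These signals could be rolled into a naive "trust score" per reviewer. The sketch below is purely illustrative: the weights and the combining formula are my own assumptions, not anything Amazon actually computes.

```python
import math

def trust_score(num_reviews, helpful_votes, total_votes, star_history):
    """Toy credibility score combining three signals from the post."""
    # Experience: grows with review count, with diminishing returns.
    experience = math.log1p(num_reviews)
    # Helpfulness ratio, defaulting to neutral when nobody has voted.
    helpfulness = helpful_votes / total_votes if total_votes else 0.5
    # A reviewer who only ever gives one rating is less informative.
    variety = 1.0 if len(set(star_history)) > 1 else 0.5
    return experience * helpfulness * variety

veteran = trust_score(200, 150, 180, [5, 4, 2, 5, 3])  # seasoned reviewer
first_timer = trust_score(1, 0, 0, [5])                # single 5-star review
```

Under these invented weights, the 200-review veteran scores far higher than the first-time five-star reviewer, which matches the intuition in the TripAdvisor example.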
Flock of Seagulls
Finally, there is the question of collaborative filtering. Taste is a tough thing to measure. Some research has suggested that average movie reviews have little impact on the decision to see a movie. Part of that is because we don’t trust the person we don’t know to give us movie advice. My mother told me she enjoyed Howard the Duck and I knew that I would never trust her opinion on movies again.
Likewise, it is perhaps unsurprising that I agree with the review from a person who gave five stars to a book by Christopher Moore, but disagree with the review from the person who gave five stars to a book by Danielle Steel. There is nothing inherently wrong with liking either of these authors, but de gustibus non est disputandum.
Of course, Amazon was a pioneer of pushing the “those who viewed/bought this item also liked…”. I’m a little surprised they don’t do something similar for reviews.
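One simple version of "collaborative filtering for reviews" would score how closely another reviewer's past ratings agree with yours on shared books, then weight their opinion accordingly. A toy sketch, with invented titles and ratings:

```python
# Hypothetical rating histories: book title -> stars (1-5).
mine = {"Lamb": 5, "I Just Want My Pants Back": 3, "Ender's Game": 5}
theirs = {"Lamb": 5, "I Just Want My Pants Back": 1, "Moby-Dick": 4}

def taste_similarity(a, b, max_gap=4):
    """1.0 = perfect agreement on shared books, 0.0 = maximal disagreement."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0  # no overlap: no basis for trusting their taste
    mean_gap = sum(abs(a[t] - b[t]) for t in shared) / len(shared)
    return 1 - mean_gap / max_gap

sim = taste_similarity(mine, theirs)  # agree on Lamb, disagree on Pants Back
```

A recommender would then show (or up-weight) reviews from people whose `sim` with you is high, which is roughly the "those who bought this also liked…" logic applied to reviewers instead of products.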
More action research needed
There’s actually quite a bit of lit out there that does things like trying to summarize Amazon reviews automatically, discovering what makes a review helpful, and testing whether helpful reviews have more influence on purchase decisions. It would be fun to write plugins that helped you construct your own shortcut metrics for reviews, but that would require more time than I have.