Research – A Thaumaturgical Compendium
Things that interest me.
http://alex.halavais.net

My Christmas (To Do) List
Wed, 18 Nov 2015

Whenever I’m feeling overwhelmed by my “to do” list I get the impulse to make it public, because by doing so, I somehow bracket it and make it feel less overwhelming. These are my “work” to-dos. I have family coming for the holidays, there’s a lot that needs to happen around the house (we are considering selling and finding a place with more space and less commute), and some long-delayed major dental work that needs to be done, but you really don’t care about that. And I’ve left off all the exciting things like committee work, advising, writing letters of reference for people, refereeing, etc.

Honestly, you probably don’t care about any of this, but I’m putting it out there because (a) you might care, and might actually want to be involved in some way, and (b) nobody really reads this blog any more, so it’s not too self-involved to do this. I guess I could always make my To Do list entirely public–but I suspect that would increase rather than decrease my stress. I’ll put that project on a back burner for now.

1. Funding Proposal. Trying to put together a proposal to the NSF for extending the work on BadgePost, applying it to peer-certification of social science methodology expertise. This is a bit of a last-minute push, but I’m hoping to throw it together pretty quickly. Two of the people I asked to serve on the advisory committee are applying to the same solicitation, so my suspicion is that the odds are pretty low here, but it’s worth a try, at least. I’m also a Co-PI on a different proposal with a colleague.

2. Book Proposal. I’ve been saying I’m writing this book, All Seeing, for years, but haven’t actually proposed it to a press. I need to do that before Christmas, and getting the book in some kind of drafty state will be the major project of the first part of 2016.

3. Talk at Harvard. If you are in chilly Cambridge in late January, I’ll be giving a short talk and having a discussion with Alison Head about social search, looking especially at the potential for tracing search patterns in a more discoverable way.

4. Article: Death of the Blogosphere. I’ve proposed a chapter that looks at what the blogosphere meant, and what influence it had, and how it might stand as a counter-example to “platformed” social media.

5. Article: Badgifying Linked-In. I have been sitting on survey data that shows what people think when you swap badges in for LinkedIn skills. I really need to write it up, but–to be honest–it’s one of those things that doesn’t quite fit anywhere. I have a potential target journal, but it’s way outside of my normal submission space. At least it might provide interesting reviews. Hard to get excited about this one for some reason.

6. Course: Technology & Collaboration. This is a new course for our new master’s program. I actually have a syllabus for it, but it was for the course proposal, and it is kind of horrible. This week, I need to get a new syllabus in working order so people know whether they want to take it. It’s online, which means it has to be more structured than my usual approach, and the hope is that students will have early drafts of Real Research™ by the end of the semester.

7. Course: Sex Online. I’m really hoping I can fall back a lot on what is already “in the can” for this course, though I have a long list of updates I need to make to it, and because FB killed my private group last year, I have to figure out whether to go with Reddit or Slack for discussion.

Projects on Hold

1. International Communication Association. I really wanted to get more than one thing in. I have a great panel on peer-veillance and IoT in, with a great group of people, but ICA is always a crapshoot. I’ve only ever had one (maybe two) submissions rejected, but I thought they were my best work. And the worst thing I ever submitted got an award. So, waiting to hear on reviews. This is a long haul (Fukuoka), but if accepted, I will try to see if anyone in Japan wants me to give a talk (or potentially in Korea, Taiwan, Singapore, etc.). And there looks to be a fun post-conf on Human-Machine communication, which is of interest to me.

2. Mapping StackOverflow Achievement Sequences. Did some work with Hazel Kwon on figuring out how to make sense of the order in which people achieve certain badges on Stack Overflow. It was kind of a sticky analysis question, but we finally worked out something, and I think I even ran the analysis. But something came up, and it fell off my radar. Now I have to figure out if I took good enough notes to be able to recover it or if I have to re-do a lot of thinking and work on it. Probably won’t touch it until the new year.

Open Access Sting
Fri, 04 Oct 2013

[PhD Comics strip]
There is an article published in Science today that details a pentest of open access science journals, and finds that many of them accepted a ridiculous fake article. I was amused to see a range of reactions scroll by on Facebook today, but was a bit surprised at their interpretation of the article. Without naming names…

Of Course Open Access Journals Suck

As a science journalist, the author should recognize that “suckage” is not an absolute: the measurement can only be evaluated in contrast to other measurements. From the write-up in Science, I gather that the acceptance rate shows that many open access journals have very poor (or non-existent) peer review mechanisms, despite advertising to the contrary. You can conclude from this that they are poor forms of scholarly communication, as the author rightfully does.

But this is a condemnation of open access journals only because he limited his submissions to open access journals. In the “coda” of the article, he places this criticism in the mouth of biologist David Roos, noting that there isn’t any reason to believe that a similar level of incompetence or fraud wouldn’t be found in subscription journals. But the author then defends this with a non-response: there are so many more open access journals!

This is true enough, and I do expect you would find more fraud in open access journals. But why are we left to speculate, when the same test could have been done with “traditional” publishers just as easily?

It’s Because of the Money

I’ve seen the inference elsewhere in reporting on the study that the reason you find more fraud in open access journals (again, a conclusion that couldn’t be reached by this approach) is that they prey on authors and have no reason to defend the journal’s reputation, since the journal doesn’t actually have to sell.

This is a bit strange to me. I mean, I engage in lots of services (my doctor, my lawyer) that are, effectively, “author pays.” Yes, especially in medicine, there are discussions about whether that is ideal, but it certainly doesn’t follow that an author-pays arrangement produces more fraud than selling those services as outcomes to, for example, employers. What this argument essentially says is that librarians, sometimes with fairly limited subject matter expertise, are better at assessing journals than researchers are. While I might actually agree with that, I can certainly see arguments on either side.

I haven’t published in a “gold” (author pays) journal, though I have published and prefer open access journals. Would I pay for publication? Absolutely. If, for example, Science accepted a paper from me and told me I would need to pay a grand to have it published, I’d pony that up very quickly. For a journal I’d never heard of? Very unlikely. In other words, author-pays makes me a more careful consumer of the journal I seek for publication.

Some of this also falls at the feet of those doing reviews for tenure. If any old publication venue counts, then authors have no reason to seek out venues in which their work will be read, criticized, and cited. Frankly, if someone comes up for tenure with conference papers or journal articles in venues I don’t know, those require more than a passing examination. While I don’t like “whitelists” for tenure (in part because they often artificially constrain the range of publishing venues–that is, they are not long enough), they would address some of this.

Finally, one could argue that Science, as a subscription-paid journal, has a vested interest in reducing the move to open access, and this might color their objectivity on this topic. I won’t make that argument: I just think this was a case where the evidence doesn’t clearly imply the conclusion.

Paging Sokal

Finally, a few people saw this as a come-uppance for Alan Sokal, who famously published a paper in Social Text that was likewise nonsense. Without getting into the details of what often riles people on both sides, I actually think there is a problem with calling what happens in science journals and philosophical journals “peer review” and not recognizing some of the significant epistemological differences. But even beyond this, it’s apples and oranges. Had the sting gotten a fake paper into a top science journal, we would be looking at something else.

The problem is that this article and Sokal’s attempt both take fairly modest findings and blow them out of proportion. If anything, both show that peer review is a flawed process, easily tricked. Is this a surprising outcome? I can think of very few academics who hold peer review to be anything other than “better than nothing”–and quite a few who think it doesn’t even rise to that faint level of praise. So, this tells us what we already knew: substandard work gets published all the time.

To Conclude

I don’t want to completely discount work like this. I think what it shows, though, is that we are in desperate need of transparency. When an editorial board lists editors, there should be an easy way to find out if they really are on board (a clearinghouse of editorial boards?). We should have something akin to RealName certifications of editors as non-fake people. And we should train our students to more carefully evaluate journals.

Getting Glass
Wed, 17 Apr 2013

Google selected me as one of (the many) Google “Glass Explorers,” thanks to a tweet I sent saying how I would use Google Glass.

What this means is that I will, presumably over the next few months, be offered the opportunity to buy Google Glass before most other people get to. Yay! But it is not all good news. I get to do this only if I shell out $1,500 and head out to L.A. to pick them up.

Fifteen hundred dollars is a lot of money. I’d be willing to spend a sizable amount of money for what I think Glass is. Indeed, although $1,500 is on the outside of that range, if it did all I wanted it to, I might still be tempted. But it is an awful lot of money. And that’s before the trip to L.A.

To be clear, the decision is mostly “sooner or later.” I’ve wanted something very like Glass for a very long time. At least since I first read Neuromancer, and probably well before that. So the real question is whether it’s worth the premium and risk to be a “Glass Explorer.”

As with all such decisions, I tend to make two lists: for and against.

For:

  • I get to play with a new toy first, and show it off. Have to admit, I’m not a big “gadget for the sake of gadgets” guy. I don’t really care what conclusions others draw relating to my personal technology: either whether I am a cool early adopter or a “glasshole.” I use tech that works for me. So, this kind of “check me out I got it first” doesn’t really appeal to me. I guess the caveat there is that I would like the opportunity to provide the first reviews of the thing.
  • I get to do simple apps: This is actually a big one. I’m not a big programmer, and I don’t have a lot of slack time this year for extra projects, but I would love to create tools for lecturing, for control, for class management, and the like. And given that one of the languages they support for app programming is Python–the one I’m most comfortable in–I can see creating some cool apps for this thing. But… well, see the “Against” column below.
  • I could begin integrating it now, and have a better feel for whether I think it will be mass adopted, and what social impacts it might have. I am, at heart, a futurist. I think some people who do social science hope to explain. I am interested in this, but my primary focus is being able to anticipate (“predict” is too strong) social changes and find ways to help shape them. Glass may be this, or it may not, but having hands on early on will help me to figure that out.

Against:

  • Early adopter tax. There is a lot of speculation as to what these things will cost when they are available widely, and when that will be. The only official indication so far is “something less than $1,500.” I suspect they will need to be much less than that if they are to be successful, and while there are those throwing around numbers in the hundreds, I suspect that price point will be right around $1,000, perhaps a bit higher. That means you are paying a $500 premium to be a beta tester, and shouldering a bit of risk in doing so.
  • Still don’t know its weak points. Now that they are actually getting shipped to developers and “thought leaders,” we might start to hear about where they don’t quite measure up. Right now, all we get is the PR machine. That’s great, but I don’t like putting my own money toward something that Google says is great. I actually like most of what Google produces, but “trust but verify” would make me much more comfortable. In particular, I already suspect it has two big downvotes for me. First, I sincerely hope it can support a bluetooth keyboard. I don’t want to talk to my glasses. Ideally, I want an awesome belt- or forearm-mounted keyboard–maybe even a gesture aware keyboard (a la Swype) or a chording keyboard. Or maybe a hand-mounted pointer. If it can’t support these kinds of things, it’s too expensive. (There is talk of a forearm-mounted pad, but not a lot of details.)
  • Strangleware. My Android isn’t rooted, but one of the reasons I like it is that it *could* be. Right now, it looks like Glass can only run apps in the cloud, and in this case, it sounds like it is limited to the Google cloud. This has two effects. First, it means it is harder for the street to find new uses for Glass–the uses will be fairly prescribed by Google. That’s a model that is not particularly appealing to me. Second, developers cannot charge for Glass apps. I can’t imagine this is an effective strategy for Google, but I know from a more immediate perspective that while I am excited to experiment with apps (see above) for research and learning, I also know I won’t be able to recoup my $1,500 by selling whatever I develop. Now, if you can get direct access to Glass from your phone (and this would also address the keyboard issue), that may be another matter.
  • No resale. I guess I could hedge this a bit if I knew I could eBay the device if I found it wasn’t for me. But if the developer models are any indication, you aren’t permitted to resell. You are out the $1,500 with no chance of recovering this.

I will keep an open mind, and check out reviews as they start to trickle in from developers, as well as reading the terms & conditions, but right now, I am leaning toward giving up my invite and waiting with the other plebes for broad availability. And maybe spending less on a video-enabled quadcopter or a nice Mindstorms set instead.

Or, someone at Google will read this, and send me a dozen of the things as part of a grant to share with grad students so we can do some awesome research in the fall. But, you know, I’m not holding my breath. (I do hope they are doing this for someone though, if not me. If Google is interested in education, they should be making these connections.)

Empty Endorsements
Fri, 05 Apr 2013

It seems like every day, I get another message from LinkedIn that someone has endorsed me. I suppose my first reaction is a short burst of pride or happiness. It’s hard not to feel this when someone says you are good at something. Then the resentment takes over. Because LinkedIn endorsements are meaningless. At best, they are a craven attempt to get you coming back to the site.

That’s not to say endorsements are generally meaningless, although, for reasons I’ll discuss below, even the full text endorsements on LinkedIn have a systematic problem. But the basic issue here is: who are these people and are they qualified to judge?

I Like You as an X

As a friend noted upon receiving an endorsement in a field she has had only marginal experience with, and that the endorser knew nothing about: “how can it possibly make sense for someone to endorse me for something I know nothing about? He might just as well endorse me for operating a crane :).” It is because endorsements are merely proxies for an expression of trust. There are no criteria for endorsement, nor anything beyond the binary “skilled or not.”

And it seems the interface is designed to encourage endorsements, with one recent implementation letting you do mass endorsement of a set of skills. The truth is, even with close colleagues, I have only a passing knowledge of, say, many of my LinkedIn connections’ teaching abilities. Some of them have been my students, and so they probably can say with some authority that I have the skill “teaching,” but even then, are they saying I am a “good” teacher, a “great” teacher, or just a “minimally acceptable” teacher?

Paging Mauss

One of the root issues of the new endorsement system is one it shares with the old endorsement process: implicit reciprocity. There was nothing built into the old system that provided this, but there was certainly the feeling that if you endorsed someone, they should endorse you back.

Perhaps this is in some general sense true of such textual endorsements in the real world, but if so, the connection is very tenuous. If I write a letter of recommendation for a student, I don’t expect her to write one back for me–not immediately at least, and probably not at all. Likewise, if I write a short endorsement for a consultant, for use in getting new clients, I have no expectation of a similar endorsement back. But on LinkedIn, it seems that one endorsement directly begets another. I suppose you could analyze this and see how many one-way endorsements there are, but I suspect there aren’t very many. I now generally don’t endorse people with textual statements, unless they specifically ask, because I don’t want it to look like I am attempting to get endorsements back. And, just to make this more complicated, if they don’t endorse me back, I wonder what this means.

This reciprocity is made even more extreme in the case of the new endorsements. When I get an email, and follow it to LinkedIn, it prompts me: “Now it’s your turn.”


The idea of turn-taking is deeply ingrained in our social lives. Someone has done us a turn, and now we are expected to reciprocate. And just to make matters easier, I can by-pass all this messy “thinking” and just endorse-’em-all.

Brand Will Eat Itself

It’s not clear why LinkedIn would do something like this: increasing traffic at the cost of making their system laughable. Yes, I suppose they could just quietly kill off the project, but I suspect that a lot of people would be hopping mad if their hundreds of meaningless endorsements suddenly were no longer featured on their page.

Imagine an alternative LinkedIn–one that included elements of a portfolio, and asked for you to assess the work presented, or indicate the basis of your endorsement. Not just a collection of mutual back-scratchers (I’m forgoing the more obvious metaphor as this is a family blog), but a space in which people could say something real about their colleagues and their competencies. I suspect such a network would blow LinkedIn off the map.

The Badges of Oz
Fri, 15 Mar 2013

Almost a year ago I wrote a post about being a “skeptical evangelist” when it comes to the uses of badges in learning. This was spurred, in large part, by a workshop run by Mitch Resnick at DML2012 that was critical of the focus on badges. This year Resnick was back, as part of a panel, and as the designated “chief worrier.” Then, as now, I find nothing to disagree with in his skepticism.

To provide what is perhaps too brief a gloss on Mitch Resnick’s critique, he is concerned that badges come to replace authentic learning experiences. He illustrated this by relaying a story about hiking the Appalachian Trail, and having people talk about “peaking”–hitting as many peaks as possible in a given day. This misses the reason for doing the hike in the first place. He worries–as Alfie Kohn did about gold stars–that badges will be used to motivate students. He showed a short conversation between Salman Khan and Bill Gates in which they joke about how badges shape kids’ motivations. I am really glad that Resnick raises (and keeps raising) these issues. When badges end up replacing learning, rather than enhancing it, we are producing an anti-learning technology. We should be creating not a technology of motivation, but one that provides recognition, authentic assessment, and an effective alternative to traditional credentials and learning records.

Which brings us to Oz, and a charlatan wizard from Kansas. You may not remember this, but when Dorothy and her friends show up to get their hearts and minds, the wizard instead awards them with badges. To go back to the source:

“I think you are a very bad man,” said Dorothy.

“Oh, no, my dear; I’m really a very good man, but I’m a very bad Wizard, I must admit.”

“Can’t you give me brains?” asked the Scarecrow.

“You don’t need them. You are learning something every day. A baby has brains, but it doesn’t know much. Experience is the only thing that brings knowledge, and the longer you are on earth the more experience you are sure to get.”

“That may all be true,” said the Scarecrow, “but I shall be very unhappy unless you give me brains.”

The false Wizard looked at him carefully.

“Well,” he said with a sigh, “I’m not much of a magician, as I said; but if you will come to me tomorrow morning, I will stuff your head with brains. I cannot tell you how to use them, however; you must find that out for yourself.”

“Oh, thank you–thank you!” cried the Scarecrow. “I’ll find a way to use them, never fear!”

“But how about my courage?” asked the Lion anxiously.

“You have plenty of courage, I am sure,” answered Oz. “All you need is confidence in yourself. There is no living thing that is not afraid when it faces danger. The True courage is in facing danger when you are afraid, and that kind of courage you have in plenty.”

“Perhaps I have, but I’m scared just the same,” said the Lion. “I shall really be very unhappy unless you give me the sort of courage that makes one forget he is afraid.”

“Very well, I will give you that sort of courage tomorrow,” replied Oz.

“How about my heart?” asked the Tin Woodman.

“Why, as for that,” answered Oz, “I think you are wrong to want a heart. It makes most people unhappy. If you only knew it, you are in luck not to have a heart.”

“That must be a matter of opinion,” said the Tin Woodman. “For my part, I will bear all the unhappiness without a murmur, if you will give me the heart.”

“Very well,” answered Oz meekly. “Come to me tomorrow and you shall have a heart. I have played Wizard for so many years that I may as well continue the part a little longer.”

In the end, in the book, he gives them tokens which the three companions take to be real. But in the movie, these mere tokens are replaced by their modern equivalents: a diploma, a testimonial, and a purple heart.

Now, as someone who sees badges as useful and helpful, it may seem odd to raise this as an example. After all, the Wizard keeps his eyes wide open about the value of things like military badges or diplomas. He has no illusions about the ways in which these things are abused in the strange world of “Kansas.” And, as I said, he is a faker.

On the other hand, the Wizard’s actions are about recognizing the achievements of the three. The viewer, of course, knows that the three already have demonstrated their desired abilities, through their journey along the YBR, and their experience meeting with a significant challenge. They have already achieved more than they themselves knew. Badges represent recognition, and as those in the badge community who like the game mechanics metaphor (I don’t) say “leveling up.” In this case, the badges are being used not just to let the world know about the protagonists’ achievements and experience, but also to open their eyes to their own accomplishments–to mark that learning as important.

There will continue to be a tension between motivation–stepping up to meet others’ achievement–and recognizing the achievements of learners. It’s an important tension, and I think there needs to be a significant amount of focus on how we can effectively walk that line. How can we avoid the worst kinds of badging?

I don’t have a good answer to that, but I have two suggestions:

First, the evidence behind the badge should not–cannot–be ignored. Right now the “evidence link” is optional for the OBI. I am happy it is there at all, but I wish that it were required. Of course, it’s wide open–that “evidence” could just be a score on a quiz. But there is the potential for backing badges with authentic assessment. I would love for badges to essentially be pointers to portfolios.

Second, I think it’s vital that learners be involved in the creation of badges. People often drag out the apocryphal quote from Napoleon about soldiers giving their lives for bits of ribbon. There is a significant danger that the future of badges will be dictated by the state (at whatever level) or standardized curricula. I think it is important to keep badging weird. One of the best ways to do that, and to undermine the colonization of badging by commercial interests and authoritative educational institutions, is to make sure the tools to create and issue badges are widely available and dead simple to use.

Re-Presenting Badges
Thu, 03 Jan 2013

Yes, it’s another badge post. Feel free to skip, or take a look at some of the other badge-related stuff I’ve posted earlier to get some background.

One of my earliest questions about badges and the Open Badge Infrastructure, asked a couple of years ago, was whether you could put badges into the OBI that the issuers didn’t specifically intend to go there. And here we have a great example: the “I walked 5,000 steps” badge you see here. When I earned it, via FitBit, it let me share it via Facebook or Twitter, and so I did. Now, on my FB page is a note that I was awarded the badge.

The question is simple. Can you have a “helper app” that takes badges earned on FitBit, or on StackOverflow, or on Foursquare (you get the idea), and places them in my badge backpack? Let’s just assume for the moment that these badges, like the FitBit badge, are sitting somewhere out in the open. So, here are some of the questions this raises:

Who owns the badge image?

Can I assume, since they are allowing me to put the image on FB, that I can use the image to represent myself in various venues, particularly if the badge is “properly earned”? E.g., is my use above “fair” or am I infringing on their copyright/trademark by using the image? Legally, it seems to me that they could easily claim that they haven’t granted me an explicit license, though I think it would be a mistake to *stop* the flow. After all, why give a badge if you don’t want people to display it? So, at this point, I would say it is an issue of asking forgiveness rather than permission…

More technically, I would assume that the “helper site” would cache the image, rather than providing the OBI with the original image location on, e.g., the FitBit site. That means the helper site would be taking on some liability, but I assume they could easily claim to be a DMCA safe harbor and have appropriate take-down processes?

More generally, the ownership of badge images is likely to become a pretty hot topic. All of a sudden, a lot more people are creating trademarks, and they are likely going to be coming too close for comfort to one another.

Finally, on this topic, I could create a badge that suggests that the FitBit badge was earned, but that uses my own badge design. At that point, I think there isn’t much FitBit could do to complain. But it would make so much more sense if you could just use the initial badge image.

Are you that person?

The issue of stolen glory is tougher. If you have this middle layer, how does it know you are who you say you are on the other services? If they have some form of data sharing or identity API (e.g., OAuth) there may be ways to “connect” to the account and demonstrate ownership, but these are not universal.

For systems that do not provide an open API for authentication, you would have to play with some kind of work-around to get users to prove they have access to the site. That is possible, but likely too cumbersome to make sense. Luckily, OAuth seems to be more common these days than it once was, and it might be possible to set up a kind of middle layer that helps users on systems that already employ badges move those badges to an OBI backpack. (Some of this could likely be made easier with broad adoption of Persona, but even though I am encouraging my students this semester to set up a Persona account, I’m not going to put too many eggs in that basket until it gets widespread adoption.)
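Concretely, the middle layer I have in mind would do something like the sketch below. I am assuming the user has already gone through an OAuth authorization with the badge-granting site and with a backpack; every URL and field name in the sketch is a placeholder of my own invention, not the actual FitBit or Backpack API.

```python
# A rough sketch of the "middle layer" idea, assuming the user has already
# authorized us via OAuth on the badge-granting site and on a backpack.
# Every URL and field name here is a placeholder, not a real API.
import hashlib
import requests

BADGE_SOURCE_URL = "https://api.fitbit.example/1/user/-/badges.json"  # hypothetical
BACKPACK_URL = "https://backpack.example.org/api/assertions"          # hypothetical

def fetch_badges(access_token):
    """Listing the user's badges with their OAuth token doubles as proof
    that they control the account the badges were earned on."""
    resp = requests.get(BADGE_SOURCE_URL,
                        headers={"Authorization": "Bearer " + access_token},
                        timeout=10)
    resp.raise_for_status()
    return resp.json().get("badges", [])

def to_assertion(badge, email, issuer_url):
    """Wrap a third-party badge in a simplified OBI-style assertion."""
    salt = "sea-salt"
    identity = "sha256$" + hashlib.sha256((email + salt).encode()).hexdigest()
    return {
        "recipient": {"type": "email", "hashed": True, "salt": salt,
                      "identity": identity},
        "badge": {
            "name": badge["name"],
            "description": badge.get("description", ""),
            "image": badge["image"],       # better: a copy cached by the middle layer
            "criteria": badge.get("criteria", issuer_url),
            "issuer": issuer_url,          # the middle layer is the re-issuer, not FitBit
        },
        "verify": {"type": "hosted",
                   "url": issuer_url + "/assertions/" + str(badge["id"])},
    }

def push_to_backpack(assertion, backpack_token):
    resp = requests.post(BACKPACK_URL, json=assertion,
                         headers={"Authorization": "Bearer " + backpack_token},
                         timeout=10)
    resp.raise_for_status()
    return resp.json()
```

One nice thing about writing it out: the policy questions above show up right in the data structure. The middle layer, not FitBit, ends up named as the issuer, and the image field is exactly where a cached copy (rather than a hotlink) would go.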

BadgePost2

I unceremoniously killed off the BadgePost system. It was a great learning system, and I can see how it might be useful to others. That said, I think it makes more sense to leverage existing systems. And I like the idea of building a kind of “middle layer” that can draw on badge systems that already exist. I expect my early targets to be:

  • FitBit: why not?
  • FourSquare: Since even people who no longer use it seem to have 4sq badges :).
  • Reddit: They have no badges, really, but we can layer our own on based on karma alone; see the sketch after this list. (Like the Reddit badge for this course, though more automagically generated.)
  • WordPress: There is an OAuth plug-in for WordPress already, and several badge plugins. Should be possible to leverage a WordPress site running the right stack…
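Here is the kind of thing I have in mind for the Reddit item above, since there are no native badges to import: read a user’s public karma and mint badges locally at whatever thresholds we like. The about.json endpoint is Reddit’s public user listing; the badge names and thresholds below are entirely made up.

```python
# Sketch: award locally defined badges from a Reddit user's public karma.
# The thresholds and badge names are invented for illustration.
import requests

KARMA_BADGES = [
    (10_000, "Reddit Sage"),
    (1_000, "Karma Collector"),
    (100, "Lurker No More"),
]

def reddit_karma(username):
    resp = requests.get(
        f"https://www.reddit.com/user/{username}/about.json",
        headers={"User-Agent": "badge-middle-layer/0.1"},  # Reddit rejects default UAs
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()["data"]
    return data.get("link_karma", 0) + data.get("comment_karma", 0)

def karma_badges(username):
    karma = reddit_karma(username)
    return [name for threshold, name in KARMA_BADGES if karma >= threshold]

print(karma_badges("spez"))
```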
Gaming Amazon Reviews
Wed, 28 Nov 2012
I will readily admit it: I trust Amazon reviews. I just bought a toy at Toys R Us for my eldest son for his birthday. It kind of sucks, though he’s a bright kid and can make sucky things work. If I had read the Amazon reviews, I would have found this out before making the purchase.

I’m not stupid–I know that astroturf exists and that Amazon Reviews are a good place for it to exist. Heck, I even bought a book on Amazon that tells authors to do it. I bought the book because it was well-reviewed. It was not a good book. It did get me to plead for folks on Facebook to review my book on Amazon, and a few of them (mostly former students) took me up on the offer.

First Review

I don’t write many Amazon reviews, but I happened across some of them recently. One of the first I wrote was in 2007 for a first novel by David J. Rosen called “I Just Want My Pants Back.” I picked it up in the lobby of our apartment building where I suspect someone in publishing or media worked and discarded her readers’ copies. I got a lot of early reads from big publishers this way, and then returned them to our communal library.

Not wanting to slam an early author–I’m professorial like that–I gave the book three stars. I’ll admit here, that is where the reviewing system first failed. It really deserved two. The extra star was to pull it up to a “Gentleman’s C.” As of today, it has an average of four stars, with 32 reviews. It was made into a series for MTV, which, if the novel is any kind of indication, is probably shown to inmates in Guantánamo. (If this sounds harsh to Mr. Rosen or anyone else, pull out those fat paychecks from MTV and feel better.)

Second Review

Now, 32 reviews is usually an indicator to me that an average review is actually pretty legitimate, so where did things go so wrong? Let’s start with the review directly after mine, posted about two weeks later, which is titled “Jason Strider is a modern day Holden Caulfield” and penned by first-time reviewer R. Kash “moneygirl”. Whoa–not shy with the praise there, and it seems we differ in our perception of the work. How did we come to such a failure in intercoder reliability?

We know that Ms. Moneygirl is a real person because her name is followed by the coveted Real Name™ badge, which means that she has ponied up a credit card that presumably reads “R Kash.” That this is her first review may be a warning, but frankly it was very nearly my first Amazon review as well, preceded only by an over-the-top accolade for this very blog. (Given my tendency to dissociation, this is only a mild conflict of interest.) Her other two reviews are also 5-star–but we will get to that.

Despite the “real name” and a self-reported nickname and home city (New York), it’s difficult to find out much more about Ms. Kash without some guessing. But a little noodling around suggests that Rachel Kash on Twitter is a fan of the MTV show. Despite the demure pony-tail photo, I think there are some clear connections to a Rachel Kash who writes for the Huffington Post. Her profile there notes she

is a fiction writer who tweets under the name @MomsBitching. She is also the founder and principal of KashCopy, Inc., a copywriting and brand strategy consultancy. Rachel currently lives with her husband and young son in Brooklyn, New York.

She writes fiction, as does her husband, David Rosen. Yes, the author of the book and the subject of Ms. Kash’s first and second reviews on Amazon. Given this, “I hope to see a lot more from Rosen in the future,” could be read in multiple ways. But I think it’s awesome that his spouse is also his number one fan.

Third and Fourth Reviews

The third review comes from someone who was writing his second review. The first review he had written was for the game NBA Live 2004, which coincidentally (?) had close ties to hip-hop artists. (This conspiracy stuff makes you paranoid.) If it is astroturf, it is very forthright astroturf: “This book was passed on to me by a colleague at MTV and I read it in one day.” Perhaps it is only me who wonders if this was at work? For those keeping score, we now have two 5-star reviews, in addition to my three-star.

The next review is the first one published after the book’s actual release date of August 7, 2007, and it is from “Honest Reader” who may very well be just that, but doesn’t review on Amazon much. This was it. He loved it though.

Fifth Review

The fifth review was from a Lisa Violi in Philadelphia, making it our first review from outside the New York area. That is her “real name,” unlike the previous two contributors, and some quick Googling suggests she’s not directly connected either to the publishers or MTV. (Though more thorough searching might turn up a connection.) Her review was one star: “A snore.” This was only a second review from her. All the rest are five star, including Christopher Moore’s Lamb. Perhaps we just have similar tastes, though–and the crowd is against us.

Six and Seven

The next two reviews bring us more unbridled praise. This is the second review from M. Gilbar, “Handsome Donkey,” of a total of four reviews. Each of his reviews garnered five stars. The name really doesn’t get us anywhere. We could take a wild shot in the dark and guess that it might be Marc Gilbar who does something called “branded entertainment” for Davie Brown Entertainment. Given that there is a “Marc Gilbar” who has used the handle “Handsome Donkey” before, perhaps this is not too much of a stretch.

Anne Marie Hollanger is not “real name” certified, and if the person goes by that name elsewhere, she’s hard to find. I suspect it might be a pseudonym. Another five-star review.

Et Cetera

I am not suggesting that all the reviews that disagree with mine are plants. Erik Klee, number eight, has over 200 reviews under his belt, and while I might not agree with his taste in books, I cannot but admire his dedication and careful reviews. Even in this case, where he gives the book five stars, I find enough in his review to form my own opinion, which is strong praise.

There are some interesting other names. Is number nine, Mat Zucker, that Mat Zucker? Is the book appropriately skewered by an acupuncturist? How many of the reviewers are also book authors? Why has Mike Congdon gone through the trouble of setting up a second RealName certified account to write back-to-back five-star reviews of the book? (I am assuming that even if he is the Congdon who works for the company that does MTV’s books, that is coincidental–after all, very few companies are more than a degree from Viacom.)

I could spend all day noodling around the interwebs. Many of these people have public profiles on Facebook naming their friends. I could start printing out photos and hanging yarn connections from wall to wall. I am not sure where that would get me. But I am bothered by that average review, particularly when it seems so heavily influenced by the first few reviews. It seems like there is great enthusiasm for the book in the first few reviews, and then again when the MTV series comes out, but that the four-star average isn’t entirely representative of the reviews outside these peaks…

It also seems like at least some of those earlier commenters might be more than a little interested in the success of the book. I suspect this isn’t an aberration, except perhaps in how mild the influence is. I picked this example pretty much at random when I ran into my short list of reviews for things on Amazon. So, the question is, what can we take from this and is it something we can fix?

Satisficing and BS

You are going to say you don’t actually care about the reviews, and I believe you. I am not an idiot–I read them with a huge grain of salt. But I do read them and they do influence me.

And I am not suggesting that you do what I just did, and launch an investigation of the reviews every time you want to buy something. At present, you can buy a used copy of the book for $0.01, and frankly, you can read it in less time than it took me to track down a handful of the early reviewers and write this up. In other words, you could know the answer to whether you would like the book, with 100% certainty, faster than it would take to play amateur detective online. So what we are looking for here is a heuristic; and maybe a heuristic that can be automated.

How much reviewing do you do?

One metric might be how much reviewing you do. Frankly, I trust reviewers who review a lot. It may be that they are paid off as well–it can happen. But when I look on, for example, TripAdvisor and see a bunch of one-star reviews from people who have reviewed only this one hotel, and four-star reviews from people with a dozen reviews under their belt, I am going to assume the one-stars are written by the competition.

That’s why TripAdvisor provides badges for those with more reviews, and indicates how many of those reviews have been considered “helpful.” Without clicking through on Amazon, it’s difficult to know how many reviews each of the contributors has made.

Meta-reviews

One might expect that meta-reviews would just replicate the review totals, with people voting against those who disagree with them. But by name, they are less about agreement and more about “helpfulness.” In fact, Amazon does provide a summary based on the most helpful positive and negative reviews. In the case of this book, the most helpful favorable review gave the book four stars, and was written by someone with 854 reviews to his name. The most helpful unfavorable review gave it two stars, and was written by someone with 59 reviews to his name.

You could weight the averages by their helpful/unhelpful votes. Doing so in this case actually ends up with closer to two stars than four. But this isn’t really the best solution. Research has shown that Amazon’s ratings are unsurprisingly bi-modal–you generally only have a review of something if you like it or dislike it. This favorable/unfavorable breakdown is far more useful than the average number of stars, but Amazon continues to show the latter.

Reliable Sources

The Real Name badge is intended to indicate that a reviewer is more likely to be credible, since they are putting their name behind their review. But other metrics of performance would be helpful as well. How often do they review? Are their reviews generally helpful? These are revealed on profile pages, but not next to the review itself. Also–many of the five-star reviews in this case were from people who only gave five-star reviews. I can understand why–maybe you only feel moved to review the stuff you really love (or really hate). But maybe these could help?

Flock of Seagulls

Finally, there is the question of collaborative filtering. Taste is a tough thing to measure. Some research has suggested that average movie reviews have little impact on the decision to see a movie. Part of that is because we don’t trust the person we don’t know to give us movie advice. My mother told me she enjoyed Howard the Duck and I knew that I would never trust her opinion on movies again.

Likewise, it is perhaps unsurprising that I agree with the review from a person who gave five stars to a book by Christopher Moore, but disagree with the review from the person who gave five stars to a book by Danielle Steele. There is nothing inherently wrong with liking either of these authors, but de gustibus non est disputandum.

Of course, Amazon was a pioneer of pushing the “those who viewed/bought this item also liked…”. I’m a little surprised they don’t do something similar for reviews.
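If I were sketching what that might look like for reviews, it could be as simple as weighting each reviewer’s stars by how closely their past ratings agree with mine on books we have both rated. Again, the titles and numbers below are invented.

```python
# A toy sketch of "collaborative filtering for reviews": weight each
# reviewer's rating of a book by how closely their past ratings agree
# with mine on books we have both rated. All data invented.
def taste_agreement(mine, theirs):
    """1.0 for identical ratings on shared books, 0.0 for maximally
    different (on a 1-5 scale), and 0.0 if we share no rated books."""
    shared = set(mine) & set(theirs)
    if not shared:
        return 0.0
    mean_gap = sum(abs(mine[k] - theirs[k]) for k in shared) / len(shared)
    return max(0.0, 1.0 - mean_gap / 4.0)

def personalized_average(my_ratings, reviews):
    """`reviews` is a list of (reviewer_past_ratings, stars_for_this_book)."""
    num = den = 0.0
    for past, stars in reviews:
        w = taste_agreement(my_ratings, past)
        num += w * stars
        den += w
    return num / den if den else None

me = {"Lamb": 5, "Microserfs": 4}
reviews = [
    ({"Lamb": 5, "Microserfs": 5}, 1),  # shares my taste, panned the book
    ({"Lamb": 1, "Microserfs": 2}, 5),  # opposite taste, loved it
]
print(round(personalized_average(me, reviews), 1))  # ~1.9, vs. a plain mean of 3.0
```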

More action research needed

There’s actually quite a bit of lit out there that does things like trying to summarize Amazon reviews automatically, discover what makes a review helpful, and whether helpful reviews have more influence on purchase decisions. Would be fun, if I were looking to waste even more time, to write plugins that helped you to construct your own shortcuts for metrics on reviews, but this would require more time than I have.

Quantified Scholar
Fri, 02 Nov 2012

One of the themes of my book (you know, the book I keep talking about but keep failing to snatch from the outer atmosphere of my imagination, where it seems to reside) is that by measuring, you can create change in yourself and in others. Given that, and the immense non-being of the book, its chapters, or the words that make it up, engaging in #AcWriMo seems painfully obvious. This is a take-off on the wildly successful National Novel Writing Month, and an effort to produce a lot of drafty text, not worrying so much about editing, making sense, or the like. You know: blogging.

Noodle knows I’ve got a ton of writing that needs to be done, like yesterday. Just so I can keep it straight in my head:

* The #g20 paper. This was an awesome paper that was nearly done two years ago. The data is dated, which is going to make publishing harder, but the ideas and analysis are still really interesting and good, I think. I just need to finish off a small bit of analysis (oh no! that has little to do with writing!) and write the sucker up.

* The aforementioned book. Or at least a couple of the chapters for it, which are now about five years overdue.

* A short piece on Ender’s Game.

* Updating some research (eek, more non-writing) and writing it up (phew).

* A dozen other little projects.

I also, however, have a bunch of other pressing things: planning for two new courses, maybe coding up a new version of my badge system (although, unless somehow funded, that needs to be a weekend project), and of course the ever-present AoIR duties.

Oh, did I say weekends? Yes, the first caveat to my pledge: I’m trying not to work weekends. My family is my first priority, and while that is easy to say, it’s harder to do. So I will endeavor not to do any work on the weekends. I’ve been trying to do that so far, and it’s not really possible, but it’s a good reach goal. Oh, and I’m taking a chunk of Thanksgiving week off, since my Mom and all her kids and grandkids (including the ones in Barcelona) are coming together at our house for the first time in probably more than two decades. But I’ll do a make-up in December.

Second caveat: I’m counting posting to the blog (since this is where I used to do a lot of my pre-writing). I’d like to count email too, since I did a solid 6 hours of catching up on email today, but I think that’s a no-go.

Really what we are talking about then is four consecutive weeks of completing 6,000 words each week. That may not sound very ambitious, but given how hard it was to push out the last 5,000 words (it took way more than a week–sorry editors!), I think that 1,200 words a day is plenty ambitious. Oh, and by doing it as Monday-Friday weeks, I get to start next Monday. Procrastination FTW.

Now that I’ve managed to negotiate myself down, it doesn’t seem like much of a challenge, but there it is. I will report on my goals here on a weekly basis. I may try to add some other metrics (time on task, people mad at me, etc.) as we go forward. But for now: words, words, words. (Though only 573 of them for this post.)

Welcome to IR13.0
Wed, 10 Oct 2012

Here’s my welcome letter for the IR13.0 program(me):

I have always considered 13 a lucky number, and I feel particularly lucky that we have the opportunity to come back to the United Kingdom for IR13.0.

I recently moved to a new house, and in the process ran across a T-shirt from the first IR conference I attended in Minneapolis a dozen years ago. At the time I was a graduate student in communication, and while there were a few faculty members and other students in my home department who were quickly trying to come to grips with what the internet might mean for our field of study, we were certainly not a program that focused on the internet in particular. It was a wonderful experience to come to a conference and not only meet a group of scholars who were reading the same things I was and thinking about many of the same problems, but to encounter theoretical approaches and practical methods of inquiry that were unfamiliar.

Twelve years covers a lot of ground in “Internet Time” and today it is hard to find a disciplinary conference that doesn’t have a crowd of people looking at the social and cultural effects of networked information technologies. But it remains difficult to think of another conference that attracts such a significant number of the most influential scholars in our field. Just as importantly, you will find that the Internet Research conference opens its arms to students and other newcomers to our field, and remains at a scale that–while large–still encourages the kinds of real conversations that make great conferences great.

I hope you will find time to talk to some of the many volunteers who have helped this year’s conference happen, including the members of the conference committee: Ben Light, Feona Attwood, and Lori Kendall. (Michael Zimmer is with us in spirit.) It takes a lot of volunteers to make this work each year, and I want to thank the volunteers here in Salford, the editorial committees, the awards committees, and the many reviewers who have helped us get here. And finally, I would like to thank our sponsors–both those who have been with us for many years, and those who are new to us in Salford.

I always come home from an Internet Research conference with more ideas than I know what to do with. Coming to a conference with that as an expectation is a high bar for success, but one that I hope we meet for as many attendees this year as we have in past years.

The Coming Gaming Machine, 1975-1985
Fri, 13 Jul 2012

Was going through old backups in the hope of finding some stuff I’ve lost and I ran into this, a draft I was working on around the turn of the millennium. Never went anywhere, and was never published. I was just pressing delete, when I realized it might actually be of interest to someone. It was a bit of an attempt at a history of the future of gaming, during the heyday of the console. Please excuse any stupidity–I haven’t even looked at it, just copied it over “as is.”

The Coming Gaming Machine, 1975-1985

Abstract

The early 1980s are sometimes referred to as the ‘Golden Age’ of computer games. The explosion of video games–in arcades, as home consoles, and eventually on home computers–led many to question when the fad would end. In fact, rather than an aberration, the decade from 1975 to 1985 shaped our view of what a computer is and could be. In gaming, we saw the convergence of media appliances, the rise of professional software, and the first ‘killer app’ for networking. During this period, the computer moved from being a ‘giant brain’ to a home appliance, in large part because of the success of computer gaming.

Introduction

Sony’s offering in the game console arena, the Playstation 2, was among the most anticipated new products for the 2000 Christmas season. Although rumors and reviews added to the demand, much of this eagerness was fueled by an expensive international advertising campaign. One of the prominent television spots in the US listed some of the features of a new gaming console, including the ability to ‘tap straight into your adrenal gland’ and play ‘telepathic personal music.’ The product advertised was not the Playstation 2, but the hypothetical Playstation 9, ‘new for 2078.’ The commercial ends with an image of the Playstation 2 and a two-word tag line: ‘The Beginning’ [1].

The beginning, however, came over twenty-five years earlier with the introduction of home gaming consoles. For the first time, the computer became an intimate object within the home, and became the vehicle for collective hopes and fears about the future. In 1975 there were hundreds of thousands of gaming consoles sold, and there were dozens of arcade games to choose from. By 1985, the year the gaming console industry was (prematurely) declared dead, estimates put the number of Atari 2600 consoles alone at over 20 million world-wide [2].

The natural assumption would be that gaming consoles paved the way for home computers, that the simple graphics and computing power of the Atari 2600 were an intermediate evolutionary step toward a ‘real’ computer. Such a view would obscure both the changes in home computers that made them more like gaming consoles, and the fact that many bought these home computers almost exclusively for gaming. But during the decade following 1975, the view of what gaming was and could be changed significantly. Since gaming was the greatest point of contact between American society and computing machinery, gaming influenced the way the public viewed and adopted the new technology, and how that technology was shaped to meet these expectations.

The Place of Gaming

When the University of California at Irvine recently announced that they may offer an undergraduate minor in computer gaming, many scoffed at the idea. The lead of an article in the Toronto Star quipped, ‘certainly, it sounds like the punchline to a joke’ [3]. As with any academic study of popular culture, many suggested the material was inappropriate for the university. In fact, despite the relatively brief history of computer gaming, it has had an enormous impact on the development of computing technology, how computers are seen and used by a wide public, and the degree to which society has adapted to the technology. Games help define how society imagines and relates to computers, and how they imagine future computers will look and how they will be used. The shift in the public view of computers from ‘giant brains’ to domestic playthings occurred on a broad scale during the ten years between 1975 and 1985, the period coincident with the most explosive growth of computer gaming.

Games have also played a role in both driving and demonstrating the cutting edge of computing. While they are rarely the sole purpose for advances in computing, they are often the first to exploit new technology and provide a good way for designers and promoters to easily learn and demonstrate the capabilities of new equipment. Programmers have used games as a vehicle for developing more sophisticated machine intelligence [4], as well as graphic techniques. Despite being seen as an amusement, and therefore not of import, ‘the future of “serious” computer software—educational products, artistic and reference titles, and even productivity applications—first becomes apparent in the design of computer games’ [5]. Tracing a history of games then provides some indication of where technology and desire meet. Indeed, while Spacewar might not have been the best use of the PDP-1’s capabilities, it (along with adventure games created at Stanford and the early massively multiplayer games available on the PLATO network) foreshadowed the future of computer entertainment surprisingly well. Moreover, while the mainstream prognostications of the future of computing are often notoriously misguided, many had better luck when the future of computing technology was looked at through the lens of computer games.

Computer Gaming to 1975

The groundwork of computer gaming was laid well before computer games were ever implemented. Generally, video games grew out of earlier models for gaming: board and card games, war games, and sports, for example. William Higinbotham’s implementation of a Pong-like game (‘Tennis for Two’) in 1958, using an oscilloscope as a display device, deserves some recognition as being the first prototype of what would come to be a popular arcade game. Generally, though, the first computer game is credited to Steve Russell, who with the help of a group of programmers wrote the first version of the Spacewar game at MIT in 1961. The game quickly spread to other campuses, and was modified by enterprising players. Although Spacewar remained ensconced within the milieu of early hackers, it demonstrated a surprisingly wide range of innovations during the decade following 1961. The earliest versions were quite simple: two ships that could be steered in real time on a CRT and could shoot torpedoes at one another. Over time, elaborations and variations were added: gravity, differing versions of hyperspace, dual monitors, and electric shocks for the losing player, among others. As Alan Kay noted: ‘The game of Spacewar blossoms spontaneously wherever there is a graphics display connected to a computer’ [6].

In many ways, Spacewar typified the computer game until the early 1970s. It was played on an enormously expensive computer, generally within a research university, often after hours. Certainly, there was little thought of this being the sole, or even a ‘legitimate,’ use of the computer. While time was spent playing the game, equally important was the process of creating the game. The differentiation between player and game author had yet to be drawn, and though a recreational activity—and not the intended use of the system—this game playing took place in a research environment. There was no clear relationship between computer gaming and the more prosaic pinball machine.

However, after a ten-year diffusion, Spacewar marked a new kind of computing: a move from the ‘giant brain’ of the forties to a more popular device in the 1970s. Stewart Brand wrote an article in Rolling Stone in 1972 that clearly hooked the popular diffusion of computing to ‘low-rent’ development in computer gaming. Brand begins his article by claiming that ‘ready or not, computers are coming to the people.’ It was within the realm of gaming that the general public first began to see computers as personal machines.

Perhaps more importantly, by taking games seriously, Brand was able to put a new face on the future of computing. At a time when Douglas Engelbart’s graphical user interfaces were being left aside for more traditional approaches to large-scale scientific computing, Brand offered the following:

… Spacewar, if anyone cared to notice, was a flawless crystal ball of things to come in computer science and computer use:
1. It was intensely interactive in real time with the computer.
2. It encouraged new programming by the user.
3. It bonded human and machine through a responsive broadhand (sic) interface of live graphics display.
4. It served primarily as a communication device between humans.
5. It was a game.
6. It functioned best on stand-alone equipment (and disrupted multiple-user equipment).
7. It served human interest, not machine. (Spacewar is trivial to a computer.)
8. It was delightful. (p. 58.)

Brand’s focus was on how people could get hold of a computer, or how they could build one for themselves. The article ends with a listing of the code for the Spacewar game, the first and only time computer code appeared in Rolling Stone. He mentions off-handedly that an arcade version of Spacewar was appearing on university campuses. Brand missed the significance of this. Gaming would indeed spread the use of computing technology, but it would do so without the diffusion of programmable computers. Nonetheless, this early view of the future would be echoed in later predictions over the next 15 years.

On the arcade front, Nolan Bushnell (who would later found Atari) made a first foray into the arcade game market with a commercial version of Spacewar entitled Computer Space in 1971. The game was relatively unsuccessful, in large part, according to Bushnell, because of its complicated game play. His next arcade game was much easier to understand: a game called Pong that had its roots both in a popular television gaming console and in earlier experimentation in electronic gaming. Pong’s simple game play (with instructions easily comprehended by inebriated customers: ‘Avoid missing ball for high score’) drove its success and encouraged the development of a video gaming industry.

Equally important were the tentative television and portable gaming technologies that began to sprout up during the period. Though Magnavox’s Odyssey system enjoyed some popularity with its introduction in 1972, the expense of the television gaming devices and their relatively primitive game play restricted early diffusion. It would take the combination of microprocessor-controlled gaming with the television gaming platform to drive the enormous success of the Atari 2600 and its successors. At the same time, the miniaturization of electronics generally allowed for a new wave of hand-held toys and games. These portable devices remained at the periphery of gaming technology, though these early hand-held games would be forerunners to the Lynx, Game Boy, and PDA-based games that would come later.

By 1975, it was clear that computer gaming, at least in the form of arcade games and home gaming systems, was more than an isolated trend. In the previous year, Pong arcade games and clones numbered over 100,000. In 1975, Sears pre-sold 100,000 units of Atari’s Pong home game, selling out before it had shipped7. Gaming had not yet reached its greatest heights (the introduction of Space Invaders several years later would set off a new boom in arcade games, and drive sales of the Atari 2600), but the success of Pong in arcades and at home had secured a place for gaming.

The personal computer market, on the other hand, was still dominated by hobbyists. This would be a hallmark year for personal computing, with the Altair system soon to be joined by the Commodore PET, Atari’s 400 and 800, and Apple’s computers. Despite Atari’s presence and the focus on better graphics and sound, the computer hobbyists remained somewhat distinct from the console gaming and arcade gaming worlds. Byte magazine, first published in 1975, made infrequent mention of computer gaming, and focused more heavily on programming issues.

Brand was both the first and among the most emphatic in using gaming as a guide to the future of computing and society. In the decade between 1975 and 1985, a number of predictions were made about the future of gaming, but most of these were off-handed comments of a dismissive nature. It is still possible to draw out a general picture of what was held to be the future of gaming—and with it the future of computing—by examining contemporaneous accounts and predictions8.

Many of these elements are already present in Brand’s prescient view from 1972. One that he seemed to have missed is the temporary bifurcation of computer gaming into machines built specifically for gaming and more general computing devices. (At the end of the article, it is clear that Alan Kay—who was at Xerox PARC at the time and would later become chief scientist for Atari—had suggested that Spacewar could be programmed on a computer or created on a dedicated machine, a distinction that Brand appears to have missed.) That split, and its continuing re-combinations, have driven the identity of the PC as both a computer and a communications device. As a corollary, there are periods in which the future seems to be dominated by eager young programmers creating their own games, followed by a long period in which computer game design is increasingly thought of as an ‘art,’ dominated by a new class of pop stars. Finally, over time there evolves an understanding of the future as a vast network, and of how this will affect gaming and computer use generally.

Convergence

1975 marks an interesting starting point, because it is in this year that the microprocessor emerges as a unifying element between personal computers and video games. Although early visions of the home gaming console suggested the ability to play a variety of games, most of the early examples, like their arcade counterparts, were limited to a single sort of game, and tended to be multi-player rather than relying upon complex computer-controlled opponents. Moreover, until this time console games were more closely related to television, and arcade video games to earlier forms of arcade games. Early gaming systems, even those that made extensive use of microprocessors, were not, at least initially, computers ‘in the true sense’9. They lacked the basic structure that allowed them to be flexible, programmable machines. The emerging popularity of home computers, meanwhile, was generally limited to those with an electronics and programming background, as well as a significant disposable income.

As consoles, arcade games, and personal computers became increasingly similar in design, their futures also appeared to be more closely enmeshed. At the high point of this convergence, home computers were increasingly able to emulate gaming systems—an adaptor for the Vic-20 home computer allowed it to play Atari 2600 console game cartridges, for example. On the other side, gaming consoles were increasingly capable of doing more ‘computer-like’ operations. As an advertisement in Electronic Gaming for Spectravideo’s ‘Compumate’ add-on to the Atari 2600 asks, ‘Why just play video games? … For less than $80, you can have your own personal computer.’ The suggestion is that rather than ‘just play games,’ you can use your gaming console to learn to program and ‘break into the exciting world of computing.’ Many early computer enthusiasts were gamers who tinkered with the hardware in order to create better gaming systems10. This led some to reason that video game consoles might be a ‘possible ancestor of tomorrow’s PC’11. As early as 1979, one commentator noted that the distinction between home computers and gaming consoles seemed to have ‘disappeared’12. An important part of this world was learning to program and using the system to create images and compose music. Just before console sales began to lose momentum in the early 1980s and home computer sales began to take off, it became increasingly difficult to differentiate the two platforms.

Those who had gaming consoles often saw personal computers as the ultimate gaming machines, and ‘graduated’ to these more complex machines. Despite being termed ‘home computers,’ most were installed in offices and schools13. Just as now, there were attempts to define the home computer and the gaming console in terms of previous and future technologies, particularly those that had a firm domestic footing. While electronic games (and eventually computer games) initially looked like automated versions of traditional games, they eventually came to be more closely identified with television and broadcasting. With this association came a wedding of their futures. It seemed natural that games would be delivered by cable companies and that videodisks with ‘live’ content would replace the blocky graphics of the current systems. This shift influenced not only the gaming console but the home computer itself. Once the home computer was associated with this familiar technology, it seemed clear that the future of gaming lay in the elaborations of Hollywood productions. This similarity played itself out in the authoring of games and in attempts to network them, but also in the hardware and software available for the machines.

Many argued that the use of cartridges (‘carts’) for the Atari 2600, along with the use of new microprocessors and the availability of popular arcade games like Space Invaders, catapulted the product to success. Indeed, the lack of permanent storage for early home computers severely limited their flexibility. A program (often in the BASIC programming language) would have to be painstakingly typed into the computer, then lost when the computer was turned off. As a result, this appealed only to the hard-core hobbyist, and kept less expert users away14. Early on, these computers began using audio cassette recorders to store programs, but loading a program into memory remained a painstaking process. More importantly, perhaps, this process of loading programs made copy-protection very difficult. By the end of the period, floppy disk drives were in wide use, though in the early days they remained an expensive technology whose cost could easily exceed that of the computer itself. Taking a cue from the gaming consoles, many of these new home computers accepted cartridges, and most of these cartridges were games.

The effort to unite the computer with entertainment occurred on an organizational level as well. Bushnell’s ‘Pizza Time Theaters’ drew together food and arcade gaming and were phenomenally successful, at one point opening a new location every five days. Not surprisingly, the traditional entertainment industry saw electronic gaming as an opportunity for growth. Since the earliest days of gaming, the film industry served as an effective ‘back story’ for many of the games. It was no coincidence that 1975’s Shark Jaws (with the word ‘shark’ in very small type), for example, was released very soon after Jaws hit the theaters. The link eventually went the other direction as well, from video games and home computer gaming back into motion pictures, with such films as Tron (1982), WarGames (1983) and The Last Starfighter (1984).

In the early 1980s the tie between films and gaming was well established, with a partnership between Atari and Lucasfilm yielding a popular series of Star Wars-based games, and the creation of the E.T. game (often considered the worst mass-marketed game ever produced for the 2600). Warner Communications acquired Atari—the most successful of the home gaming producers, and eventually a significant player in home computing—in 1976. By 1982, after some significant work in other areas (including the ultimately unsuccessful Qube project, which was abandoned in 1984), Atari accounted for 70% of the group’s total profits. Despite these clear precedents, it is impossible to find any predictions that the ties between popular film and gaming would continue to grow as they have over the intervening fifteen years.

This new association did lead to one of the most widespread misjudgments about the future of gaming: the rise of the laserdisc and interactive video. Dragon’s Lair was the first popular game to make use of this technology. Many predicted that this (or furtive attempts at holography15) would save arcade and home games from the dive in sales suffered after 1983, and that just as the video game market had rapidly introduced computers to the home, it would also bring expensive laserdisc players into the home. The use of animated or live-action video, combined with decision-based narrative games or shooting games, provided a limited number of possible outcomes. Despite the increased attractiveness of the graphics, the lack of interactivity made the playability of these games fairly limited, and it was not long before the Dragon’s Lair machines were collecting dust. Because each machine required (at the time) very expensive laserdisc technology, and because the production costs of games for the system rivaled those of film and television, it eventually became clear that arcade games based on laserdisc video were not profitable, and that home-based laserdisc systems were impractical.

The prediction that laserdiscs would make up a significant part of the future of gaming is not as misguided as it at first seems. The diffusion of writable CD-ROM drives, DVD drives, and MP3 as domestic technologies owes a great deal to gaming—both computer- and console-based. At present, few applications make extensive use of the storage capacities of CD-ROMs in the way that games do, and without the large new computer games, there would be little or no market for DVD-RAM and other new storage technologies in the home. Unfortunately, neither the software nor the hardware of the mid-1980s could make good use of the video capability of laserdiscs, and the technology remained too costly to be effective for gaming. A few saw the ultimate potential of optical storage. Arnie Katz, in his column in Electronic Games in 1984, for example, suggested that new raster graphics techniques would continue to be important, and that ‘ultimately, many machines will blend laserdisc and computer input to take advantage of the strengths of both systems’16 (this despite the fact that eight months earlier he had predicted that laserdisc gaming would reach the home market by the end of 1983). Douglas Carlston, the president of Broderbund, saw a near future in which Aldous Huxley’s ‘feelies’ were achieved and a user ‘not only sees and hears what the characters in the films might have seen and heard, but also feels what they touch and smells what they smell’17. Overall, it is instructive to note the degree to which television, gaming systems, and home computers each heavily influenced the design of the others. The process continues today, with newer gaming consoles like the PlayStation 2 and Microsoft’s new Xbox being internally virtually indistinguishable from the PC. Yet where, in the forecasting of industry analysts and the work of social scientists, is the video game?

A Whole New Game

Throughout the 1970s and 1980s, arcade games and console games were heavily linked. New games were released first as dedicated arcade games, and later as console games. The constraints of designing games for the arcade—those which would encourage continual interest and payment—often guided the design of games that also appeared on console systems. In large part because of this commercial constraint, many saw video games (as opposed to computer games) as a relatively limited genre. Even the more flexible PC-based games, though, were rarely seen as anything but an extension of traditional games in a new modality. Guides throughout the period suggested choosing computer games using the same criteria one would apply to choosing traditional games. Just as importantly, it was not yet clear how wide the appeal of computerized versions of games would be in the long run. As one board game designer suggested, while video games would continue to become more strategic and sophisticated, they would never capture the same kind of audience enjoyed by traditional games18.

Throughout the rapid rise and fall of gaming during the early 1980s, two changes came about in the way people began to think about the future of gaming. On the one hand, there emerged a new view of games not merely as direct translations of traditional models (board games, etc.), but as an artistic pursuit. The media and meta-discourse surrounding the gaming world gave rise to a cult of personality. At the same time, it became increasingly difficult for a single gaming author to create a game in its entirety. The demand cycle for new games, and for increasingly complex and intricate games, not only excluded the novice programmer but made the creation of a game a team effort by necessity. As such, the industrial scale of gaming increased, leaving smaller companies and individuals unable to compete in the maturing market.
This revolution began with home computers that were capable of more involved and long-term gaming. As one sardonic newspaper column in 1981 noted:

The last barriers are crumbling between television and life. On the Apple II you can get a game called Soft Porn Adventure. The Atari 400 and 800 home computers already can bring you games on the order of Energy Czar or SCRAM, which is a nuclear power plant simulation. This is fun? These are games?19

The capabilities of new home computers were rapidly exploited by the new superstars of game design. An article in Popular Computing in 1982 noted that game reviewers had gone so far overboard in praising Chris Crawford’s Eastern Front that they recommended buying an Atari home computer, if you didn’t have one, just to be able to play the game20. Crawford was among the most visible of the programmers who were pushing game design beyond the limits of traditional games:

Crawford hopes games like Eastern Front and Camelot will usher in a renaissance in personal computer games, producing games designed for adults rather than teenagers. He looks forward to elaborate games that require thought and stimulate the mind and even multiplayer games that will be played cross-country by many players at the same time, with each player’s computer displaying only a part of the game and using networks linked by telephone lines, satellites, and cable TV.

Crawford extended his views in a book entitled, naturally, The Art of Computer Game Design (1982), in which he provided a taxonomy of computer games and discussed the process of creating a video game. He also devoted a chapter to the future of the computer game, noting that changes in technology were unlikely to define the world of gaming. Instead, he hoped for new diversity in gaming genres:

I see a future in which computer games are a major recreational activity. I see a mass market of computer games not too different from what we now have, complete with blockbuster games, spin-off games, remake games, and tired complaints that computer games constitute a vast wasteland. I even have a term for such games—cyberschlock. I also see a much more exciting literature of computer games, reaching into almost all spheres of human fantasy. Collectively, these baby market games will probably be more important as a social force than the homogenized clones of the mass market, but individual games in this arena will never have the economic success of the big time games.21

In an interview fifteen years later, Crawford laments that such hopes were well off base. Though those hopes were modest—that in addition to the ‘shoot the monsters!’ formula, as he called it, there would be a ‘flowering of heterogeneity’ that would allow for ‘country-western games, gothic romance games, soap-opera games, comedy games, X-rated games, wargames, accountant games, and snob games’ and that eventually games would be recognized as ‘a serious art form’—he suggests that over fifteen years they proved to be misguided22. In fact, there were some interesting developments in the interim years: everything from SimCity and Lemmings to Myst and Alice. A new taxonomy would have to include the wide range of ‘god games’ in addition to the more familiar first-person shooters. In suggesting the diversification of what games could be, Crawford was marking out a new territory, and reflecting the new-found respectability of an industry that was at the peak of its influence. The view that ‘programmer/artists are moving toward creating an art form ranging from slapstick to profundity’ appeared throughout the next few years23.

During the same period, there was a short window during which the future of gaming was all about the computer owner programming games rather than purchasing them. Indeed, it seemed that the ability to create your own arcade-quality games would make home computers irresistible24. Listings in the BASIC programming language could be found in magazines and books into the early 1980s. It seemed clear that in the future, everyone would know how to program. Ralph Baer noted in a 1977 interview that students ‘should be able to speak one or two computer languages by the age of 18, those who are interested. We’re developing a whole new generation of kids who won’t be afraid to generate software’25. By the time computers began to gain a foothold in the home, they increasingly came with a slot for gaming cartridges, much like the consoles that were available. In part, this was dictated by economic concerns—many of the new manufacturers of home computers recognized that software was both a selling point for the hardware and a long-term source of income26—but part of it came with a new view of the computer as an appliance, and not the sole purview of the enthusiast. Computer games during the 1980s outgrew the ability of any single programmer to create them, and it became clear that, in the future, games would be designed more often by teams27.

Connected Gaming

By the 1980s, there was little question that networking would be a part of the future of gaming. The forerunners of current networked games were already in place. The question, instead, was what form these games would take and how important they would be. The predictions regarding networking tended to shift from the highly interactive experiments in networked computing to the experiments in cable-television and telephone distribution of games in the 1980s. A view from 1981 typifies the importance given to communications and interfaces for the future of gaming. It suggests that in five years’ time:

Players will be able to engage in intergalactic warfare against opponents in other cities, using computers connected by telephone lines. With two-way cable television, viewers on one side of town might compete against viewers on the other side. And parents who think their children are already too attached to the video games might ponder this: Children in the future might be physically attached to the games by wires, as in a lie detector28.

A 1977 article suggests the creation of persistent on-line worlds that ‘could go on forever,’ and that your place in the game might even be something you list in a will29. Others saw these multi-player simulations as clearly a more ‘adult’ form of gaming, one that began to erase the ‘educational/entertainment dichotomy’30. The short-term reality of large-scale on-line gaming remained in many ways a dream during this period, at least for the general public. But the ability to collect a subscription fee led many to believe that multiplayer games were ‘too lucrative for companies to ignore’31. Indeed, multiplayer games like Mega Wars could cost up to $100 a week to play, and provided a significant base of subscribers for Compuserve32.

The software industry had far less ambitious plans in mind, including a number of abortive attempts to use cable and telephone networks to distribute gaming software for specialized consoles. Despite failures in cable and modem delivery, this was still seen as a viable future into the mid-1980s. Even with early successes in large-scale on-line gaming, it would be nearly a decade before the mainstream gaming industry became involved in a significant way.

Retelling the Future

The above discussion suggests that when predictions are made about the future of gaming, they are often not only good predictors of the future of computing technology, but also indicators of general contemporaneous attitudes toward the technology. Given this, it would seem to make sense that we should turn to current games to achieve some kind of grasp on the future of the technology. It is not uncommon to end a small piece of history with a view to the future, but here I will call for just the opposite: we should look more closely at the evolution of gaming and its social consequences at present.

Despite a recognition that games have been important in the past, we seem eager to move ‘beyond’ games to something more serious. Games seem, by definition, to be trivial. Ken Uston, in an article on the future of video games appearing in Creative Computing in 1983, expressed the feeling:

Home computers, in many areas, are still a solution in search of a problem. It is still basically games, games, games. How can they seriously expect us to process words on the low-end computers? The educational stuff will find a niche soon enough. But home finance and the filing of recipes and cataloguing of our stamp collections has a long way to go.

A similar contempt for gaming was suggested by a New York Times article two years later: ‘The first generation of video games swept into American homes, if ever so briefly. And that was about as far as the home-computer revolution appeared ever destined to go’33. More succinctly, in the issue in which Time named the personal computer its ‘Man’ of the Year, it notes that the ‘most visible aspect of the computer revolution, the video game, is its least significant’34. Though the article later goes on to suggest that entertainment and gaming will continue to be driving forces over the next decade, the idea of games (at least in their primitive state) is treated disdainfully.

This contempt for gaming, for the audience, and for popular computing neglects what has been an extremely influential means by which society and culture have come to terms with the new technology. Increasingly, much of the work with computers is seen from the perspective of game-playing35. Games are also central to our social life. Certainly, such a view is central to many of the post-modern theorists who have become closely tied to new technologies, and who view all discourse as gaming36. Within the more traditional sociological and anthropological literature, games have been seen as a way of acculturating our young and ourselves. We dismiss this valuable window on society at our own peril.

A recognition of gaming’s central role in computer technology, as a driving force and early vanguard, should also turn our attention to today’s gamers. Given recent advances in gaming, from involved social simulations like The Sims, to ‘first-person shooters’ like Quake that have evolved new communal forms around them, to what have come to be called ‘massively multiplayer on-line role-playing games’ (MMORPGs) like Everquest and Ultima Online, the games of today are hard to ignore. They have the potential not only to tell us about our relation to technology in the future, but also about the values of our society today. Researchers lost out on this opportunity in the early days of popular computing; we should not make the same mistake.

Notes

1. A copy of this advertisement is available at ‘AdCritic.com’: http://www.adcritic.com/content/sony-playstation2-the-beginning.html (accessed 1 April 2001).
2. Donald A. Thomas, Jr., ‘I.C. When,’ http://www.icwhen.com (accessed 1 April 2001).
3. David Kronke, ‘Program Promises Video Fun N’ Games’, Toronto Star, Entertainment section, 19 March 2000.
4. Ivars Peterson, ‘Silicon Champions of the Game,’ Science News Online, 2 August 1997, http://www.sciencenews.org/sn_arc97/8_2_97/bob1.htm (accessed 1 April 2000).
5. Ralph Lombreglia, ‘In Games Begin Responsibilities,’ The Atlantic Unbound, 21 December 1996, http://www.theatlantic.com/unbound/digicult/dc9612/dc9612.htm (accessed 1 April 2001).
6. Stewart Brand, ‘Spacewar: Fanatic Life and Symbolic Death Among the Computer Bums,’ Rolling Stone, 7 December 1972, p 58.
7. Thomas.
8. While there is easy access to many of the popular magazines of the period, it remains difficult to obtain some of the gaming magazines and books, and much of the ephemera. The reasons are two-fold: First, academic and public libraries often did not subscribe to the gaming monthlies. Often these were strong advertising vehicles for the gaming industry, and as already suggested, the subject matter is not ‘serious,’ and is often very time-sensitive. More importantly, there has been a strong resurgence of nostalgia for gaming during the period, and this has led to the theft of many periodical collections from libraries. It is now far easier to find early copies of Electronic Games magazine on Ebay than it is to locate them in libraries.
9. Martin Campbell-Kelly and William Aspray, Computer: A History of the Information Machine (New York: BasicBooks, 1996), p. 228.
10. Jake Roamer, ‘Toys or Tools,’ Personal Computing, Nov/Dec, 1977, pp. 83-84.
11. Jack M. Nilles, Exploring the World of the Personal Computer (Englewood Cliffs, NJ: Prentice-Hall, 1982), p. 21.
12. Peter Schuyten, ‘Worry Mars Electronics Show,’ New York Times, 7 June 1979, sec. 4, p. 2, col. 1.
13. Richard Schaffer, ‘Business Bulletin: A Special Background Report,’ Wall Street Journal, 14 September 1978, p.1, col. 5.
14. Mitchell C. Lynch, ‘Coming Home,’ Wall Street Journal, 14 May 1979, p. 1, col. 4.
15. Stephen Rudosh, Personal Computing, July 1981, pp. 42-51, 128.
16. Arnie Katz, ‘Switch On! The Future of Coin-Op Video Games,’ Electronic Games, September 1984. Also available on-line at http://cvmm.vintagegaming.com/egsep84.htm (accessed 1 April 2001).
17. Douglas G. Carlston, Software People: An Insider’s Look at the Personal Computer Industry (New York: Simon & Schuster, 1985), p. 269.
18. William Smart, ‘Games: The Scramble to Get On Board,’ Washington Post, 8 December 1982, p. C5.
19. Henry Allen, ‘Blip! The Light Fantastic,’ Washington Post, 23 December 1981, p. C1.
20. A. Richard Immel, ‘Chris Crawford: Artist as a Game Designer,’ Popular Computing 1(8), June 1982, pp. 56-64.
21. Chris Crawford, The Art of Computer Game Design (New York: Osborne/McGraw-Hill, 1984). Also available at http://www.vancouver.wsu.edu/fac/peabody/game-book/ and at http://members.nbci.com/kalid/art/art.html (accessed 1 April 2001).
22. Sue Peabody, ‘Interview With Chris Crawford: Fifteen Years After Excalibur and the Art of Computer Game Design,’ 1997, http://www.vancouver.wsu.edu/fac/peabody/game-book/Chris-talk.html (accessed 1 April 2001).
23. Lee The, ‘Giving Games? Go with the Classics,’ Personal Computing, Dec. 1984, pp. 84-93.
24. ‘Do it yourself,’ Personal Computing, Nov/Dec 1977, p. 87.
25. Ralph Baer, ‘Getting Into Games’ (Interview), Personal Computing, Nov/Dec 1977.
26. Carlston, p. 30.
27. Ken Uston, ‘Whither the Video Games Industry?’ Creative Computing 9(9), September 1983, pp. 232-246.
28. Andrew Pollack, ‘Game Playing: A Big Future,’ New York Times, 31 December 1981, sec. 4, p. 2, col. 1.
29. Rick Loomis, ‘Future Computing Games,’ Personal Computing, May/June 1977, pp. 104-106.
30. H. D. Lechner, The Computer Chronicles (Belmont, CA: Wadsworth Publishing, 1984).
31. Richard Wrege, ‘Across Space & Time: Multiplayer Games are the Wave of the Future,’ Popular Computing 2(9), July 1983, pp. 83-86.
32. Jim Bartimo, ‘Games Executives Play,’ Personal Computing, July, 1985, pp. 95-99.
33. Erik Sandberg, ‘A Future for Home Computers,’ New York Times, 22 September 1985, sec. 6, part 2, p. 77, col. 5.
34. Otto Friedrich, ‘Machine of the Year: The Computer Moves In,’ Time, 3 January 1983.
35. Richard Thieme, ‘Games Engineers Play,’ CMC Magazine 3(12), 1 December 1996, http://www.december.com/cmc/mag/ (accessed 1 April 2001).
36. For an overview, see Ronald E. Day, ‘The Virtual Game: Objects, Groups, and Games in the Works of Pierre Levy,’ Information Society 15(4), 1999, pp. 265-271.

Research Universities and the Future of America http://alex.halavais.net/research-universities-and-the-future-of-america/ http://alex.halavais.net/research-universities-and-the-future-of-america/#respond Wed, 04 Jul 2012 21:21:20 +0000 http://alex.halavais.net/?p=3244 In case you haven’t seen this yet…

This is one of those cases where fostering the elite is a good thing. There’s nothing wrong with funding community colleges or making tuition at four-year institutions more reasonable, but we are systematically undermining our country, both economically and culturally, by undercutting our large research universities. All hail the MOOC, and for goodness’ sake, make higher education function more effectively, but don’t use it as an excuse to take the “research” out of the research university…

The Privacy Trade Myth http://alex.halavais.net/the-privacy-trade-myth/ http://alex.halavais.net/the-privacy-trade-myth/#respond Wed, 06 Jun 2012 16:35:56 +0000 http://alex.halavais.net/?p=3214
Crow Tengu Riding Boar (Karasu Tengu 烏天狗騎猪)
Cory Doctorow has a new essay in Technology Review entitled “The Curious Case of Internet Privacy”. He begins by outlining the idea of “the trade”, an idea he rightly suggests has risen to the level of myth.

“The trade” is simply that you are permitted to use a system like Facebook for free, and in return you give them permission to sell information about what you say and do on the service. This trade has been criticized on a number of grounds. The user often does not understand what she is giving up, either because it isn’t clear what damage that loss of privacy might bring in the future, or because the deal is cleverly concealed in 30 pages of legalese that constitute the End-User License Agreement. Others suggest that privacy itself is a human right and no more subject to barter than is your liver.

But Doctorow doubles down on the myth of the trade, suggesting merely that it is a bad deal, a deal with the devil. You are trading your immortal privacy for present-day reward. I don’t disagree with the details of his argument, but in this case I don’t know that the devil really is in the details. Maybe it’s not a deal with the devil, but a deal with a Tengu.

A tengu, for those who are not familiar, is a long-nosed beastie from Japanese mythology, often tied to esoteric Buddhism and specifically the yamabushi. (Those of you who have visited me in the office have probably seen one or two tengu masks, left over from when I lived near the Daiyuzan Saijyouji temple.) The deal with the tengu is sometimes told a bit differently: in one telling, the human claims that he is afraid of gold or mochi (and the tengu produces these in abundance to scare him off); in another, a tengu gets nailed with a splinter while a woodcutter is doing his work, and complains about the human tendency not to think about the consequences of their actions. In other words, there is a deal, but maybe the end user is making out like a bandit.

Right now, it’s not clear what value Facebook, to take our earlier example, is extracting from this personal data. Clearly it is part of some grail of behavioral marketing. Yes, they present ads based on browsing behavior now, and yes, I suspect those targeted ads are more effective (they’ve worked on me at least once), but I’m not sure that the marginal price Facebook can command for this data adds up to all that much, except in the aggregate. Indeed, for many users of the service, the bet against future value of privacy is a perfectly reasonable one to make.

I’ll put off for now an argument that comes dangerously close to “Zuck is right,” and suggests that our idea of “privacy” is pretty unstable, and that we are seeing a technologically mediated change in what “privacy” means not unlike the change we saw at the beginning of the last century. In other words “it’s complicated.”

Doctorow seems to suggest that all we are getting from this deal is a trickle of random emotional rewards in the form of responses from our social network. Is this the same guy who invented Whuffie‽ Those connections are not mere cheap treats; they are incredibly valuable. They are not provided by Facebook (or Twitter or Google, etc.), but they are brokered by them. Facebook is the eBay of social interaction, and so they take a small slice out of each deal. Can Facebook be disintermediated? Of course! But for now they are the disintermediator, making automatic the kinds of introductions and social maintenance that in earlier times were handled by a person.

In other words, if there is an exchange–and again, I’m not sure this idea of a trade adequately represents the complexity of the relationship–it isn’t at all clear that it is zero-sum, or that the user loses as much as she gains.

This does not at all obviate some of the solutions Doctorow suggests. Strategically lying to systems is, I think, an excellent way of mediating the ability of systems to tie together personal data in ways you would prefer they did not. But I suspect that people will continue to cede personal data not just because the EULA is obscure, or because they poorly estimate the future cost of sharing, but because they find it to be a good deal. Providing them the tools to make these decisions well is good practice, because arming citizens with both information and easy ways of making choices is essentially a Good Thing™. But I would be surprised if it led to less sharing. I expect just the opposite.

Review: Planned Obsolescence http://alex.halavais.net/review-planned-obsolescence/ http://alex.halavais.net/review-planned-obsolescence/#respond Thu, 12 Apr 2012 20:58:48 +0000 http://alex.halavais.net/?p=3165
It is rare that how a book is made is as important as its content. Robert Rodriguez’s El Mariachi stands on its own as an outstanding action film, yet it is a rare review that does not mention the tiny budget with which it was accomplished. And here it is difficult to resist the urge to note that I, like many others, read Kathleen Fitzpatrick’s new book, Planned Obsolescence, before ink ever met paper. Fitzpatrick opened up the work at various stages of its creation, inviting criticism openly from the public. But in this case, the making of the book–the process of authorship and the community that came together around it–also has direct bearing on the content of the work.

Fitzpatrick’s book is a clear and well-thought-out response to what is widely accepted as a deeply dysfunctional form of scholarly dissemination: the monograph. In the introduction, Fitzpatrick suggests that modern academic publishing in many ways operates via zombie logic, reanimating dead forms and feeding off of the living. As a result, it is tempting to conclude that the easiest way to deal with academic publishing is similar to the best cure for zombies: a quick death.

Planned Obsolescence does not take this easy path, and instead seeks to understand what animates the undead book. For Fitzpatrick, this begins with questioning the place and process of peer review, and this in turn forces us to peel back the skin of what lies beneath: authorship, texts, preservation, and the university. The essential question here is whether the cure for the zombification of scholarly communication may be found in a new set of digital tools for dissemination, and the book explores what the side effects of that cure may be.

In what constitutes the linchpin of her argument, the first chapter takes a bite out of one of the sacred cows of modern academia, exposing the flawed nature of peer review as it is currently practiced. Unlike those who have argued–perhaps tactically–that open access and online journals will preserve sacrosanct peer review in its current form, Fitzpatrick suggests that new bottles need new wine, and draws on a wide-reaching review of the history and problems of the present system of peer review, a system driven more by credentialing authors than by promoting good ideas.

Fitzpatrick does not offer an alternative so much as suggest some existing patterns that may work, including successful community-filtered websites. She acknowledges that these sites tend to promote an idiosyncratic view of “quality,” and that problems like that of the male-dominated discourse on Slashdot would need to be addressed if we do not want to replace one calcified system of groupthink with another. The argument would be strengthened here, I think, with a clearer set of requirements for a proposed alternative system. She presents MediaCommons, an effort she has been involved in that provides a prototype for “peer-to-peer review,” as itself a work in progress. It is not clear that the dysfunctional ranking and rating function of the current peer review system is avoided in many of the alternative popular models she suggests, in which “karma whoring” is often endemic. As such, the discussion of what is needed, and how it might be effectively achieved, could have been expanded; meta-moderation of texts is important, but it is not clear whether this is a solution or a temporary salve.

If we move from peer review to “peer-to-peer review,” it will have a significant effect on what we think of as “the author.” In her discussion of the changing nature of authorship, Fitzpatrick risks either ignoring a rich theoretic discussion of the “death of the author” or becoming so embroiled in that discussion that she misses the practical relationship to authors working in new environments of scholarly discourse. She does neither, masterfully weaving together a history of print culture, questions of authorship, and ways in which digital technologies enable and encourage the cutting up and remixing of work, and complicate the question of authorship.

The following two chapters discuss texts and their preservation. As the process of authorship changes, we should expect this to be revealed in the texts produced. Naturally, this includes the potential for hypermedia, but Fitzpatrick suggests a range of potential changes, not least those that make the process of scholarly review and conversation more transparent. This discussion of the potential edges of digital scholarship provides some helpful examples of the variety of scholarly discourse that is afforded by new media forms–a set of potentialities that is richer than the future that is sometimes presented by academic publishers whose visions are clouded by models and technologies that require profitability. The following chapter on the processes of disseminating and preserving this work I found to be particularly enlightening. As in earlier chapters, Fitzpatrick manages to draw together a surprisingly broad set of experiments and analyses into an intriguing and concise synthesis.

The penultimate chapter of the book discusses the question of how to support and sustain the creation of these new texts. The chapter argues that university presses should not try to beat commercial presses at their own game, but should instead invent a new game. It presents a number of models and strategies through which this might be achieved, and suggests those that show the most promise: notably, providing for open access and drawing the university press more directly into the work of the university and its library.

Planned Obsolescence itself was born of the realization that Fitzpatrick’s previous book was rejected not on the quality of its thought but on its potential for press profit. That her next book is now in a bound physical volume, published by New York University Press, and that this review will itself appear in a bound journal, published by Sage, seems to suggest that in some ways this born-digital scholarly conversation has itself succumbed to the slow-moving process of traditional scholarly publication, and as such might appear as something of a rebuttal to the argument that the only good zombie is a dead one. On the other hand, any criticism I provide above represents a form of slow, printed conversation that is largely outmoded by digital scholarly communication.

In fact, this neatly reflects the complexity of the new structures of scholarly publishing, and the promise for its future: a future in which we stop hiding from zombie books and invite them to a more convivial scholarly conversation. Anyone who is serious about understanding the future of scholarly publishing–and anyone who cares about knowledge and society should share this concern–will find Fitzpatrick’s book an essential, thought-provoking, and highly approachable introduction to the conversation.

Kathleen Fitzpatrick, Planned Obsolescence: Publishing, Technology, and the Future of the Academy. New York: NYU Press, 2011, vii+245 pp. ISBN 0-8147-2788-1, $23 (pbk)

A version of this review is to appear in New Media & Society, which provided me with a review copy.

Brief Introduction to BadgePost Prototype http://alex.halavais.net/brief-introduction-to-badgepost-prototype/ http://alex.halavais.net/brief-introduction-to-badgepost-prototype/#comments Wed, 07 Mar 2012 22:02:00 +0000 http://alex.halavais.net/?p=3127

Badges: The Skeptical Evangelist http://alex.halavais.net/badges-the-skeptical-evangelist/ http://alex.halavais.net/badges-the-skeptical-evangelist/#comments Tue, 06 Mar 2012 06:53:57 +0000 http://alex.halavais.net/?p=3103
I have been meaning to find a moment to write about learning badges for some time. I wanted to respond to the last run of criticisms of learning badges, and the most I managed was a brief comment on Alex Reid’s post. Now, with the announcement of the winners of this year’s DML Competition, there comes another set of criticisms of the idea of badges in learning. This isn’t an attempt to defend badges–I don’t think such a defense is necessary. It is instead an attempt to understand why so many people find them so easy to dismiss.

Good? Bad?

My advisor one day related the story of a local news crew that came to interview him in his office. This would have been in the mid-1990s. The first question the reporter asked him was: “The Internet: Good? Or Bad?”

Technologies have politics, but the obvious answer to that obvious question is “Yes.” Just as when people ask about computers and learning, the answer is that technology can be a force for oppressive, ordered, adaptive multiple-choice “Computer Aided Teaching,” or it can be used to provide a platform for autonomous, participatory, authentic interaction. If there is a tendency, it is one that is largely reflective of existing structures of power. But that doesn’t mean you throw the baby out with the bathwater. On the whole, I think computers provide more opportunities for learning than threats to it, but I’ll be the first to admit that outcome was neither predestined nor obvious. It still isn’t.

Are there dangers inherent to the very idea of badges? I think there are. I’ve written a bit about them in a recent article on the genealogy of badges. But just as I can find Herb Schiller’s work on the role of computer technology in cultural hegemony compelling and still entertain its emancipatory possibilities, I can acknowledge that badges have a long and unfortunate past and still recognize in them a potential tool for disrupting the currently dominant patterns of assessment in institutionalized settings, and for building bridges between informal and formal learning environments.

Ultimately, what is so confusing to me is that I agree wholeheartedly with many of the critics of badges, and yet reach different conclusions. To look at how some badges have been used in the past and not be concerned about the ways they might be applied in the future would require a healthy amount of selective perception. I have no doubt that badges, badly applied, are dangerous. But so are table saws and genetic engineering. The question is whether they can also be used to positive ends.

Over the last year, I’ve used badges to such positive ends. My own experience suggests that they can be an effective way of improving and structuring peer learning communities and forms of authentic assessment. I know others have had similar successes. So, I will wholeheartedly agree with many of the critics: badges can be poorly employed. Indeed, I suspect they will be poorly employed. But the same can be said of just about any technology. The real question is whether there is also some promise that they could represent an effective tool for opening up learning, and for providing the leverage needed to create new forms of assessment.

Gold Stars

One of the main critiques of badges suggests that they represent extrinsic forms of motivation to the natural exclusion of intrinsic motivation. Mitch Resnick makes the case here:

I worry that students will focus on accumulating badges rather than making connections with the ideas and material associated with the badges – the same way that students too often focus on grades in a class rather than the material in the class, or the points in an educational game rather than the ideas in the game.

I worry about the same thing. I will note in passing that at worst, he is describing a situation that does no harm: replacing a scalar (A-F letter grades) with a system of extrinsic motivation that is more multidimensional. But the problem remains: if badges are being used chiefly as a way of motivating students, this is probably not going to end well.

And I will note that many educators I’ve met are excited about badges precisely because they see them as ways of motivating students. I think that if you had to limit the influences of using badges to three areas, they would be motivation, assessment, and credentialing. The first of these is often seen as the most important, and not just by the “bad” badgers, but by many who are actively a part of the community promoting learning badges.

(As an aside, I think there are important applications of badges beyond these “big three.” I think they can be used, for example, as a way for a community to collaboratively structure and restructure their view of how different forms of local knowledge are related, and I think they can provide a neophyte with a map of this knowledge, and an expert with a way of tracing their learning autobiography over time. I suspect there are other implications as well.)

Perhaps my biggest frustration is the ways in which badges are automatically tied to gamification. I think there are ways that games can be used for learning, and I know that a lot of the discussion around badges comes from their use in computer games, but for a number of reasons I think the tie is unfortunate; not least, badges in games are often seen primarily as a way of motivating players to do something they would otherwise not do.

Badges and Assessment

The other way in which I worry about computer gaming badges as a model is the way they are awarded. I think that both learning informatics and “stealth assessment” have their place, but if misapplied they can be very dangerous. My own application of badges puts formative assessment by actual humans (especially peers) at the core. Over time I have come to believe that the essential skill of the expert is an ability to assess. If someone can effectively determine whether something is “good”–a good fit, a good solution, aesthetically pleasing, interesting, etc.–she can then apply that to her own work. Only through this critical view can learning take place.

For me, badges provide a framework for engaging effectively in assessment within a learning community. This seems also to be true for Barry Joseph, who suggests some good and bad examples of badge use here. Can this kind of re-imagination of assessment happen outside of a “badge” construct? Certainly. But badges offer a way of structuring assessment that provides scaffolding without significant constraints. This is particularly true when the community is involved in the continual creation and revision of the badges and what they represent.

Boundary Objects

Badges provide the opportunity to represent knowledge and power within a learning community. Any such representation comes with a dash of danger. The physical structuring of communities–who gets to talk to whom and when, where people sit and stand, the direction of their gaze–all these things are dangerous. But providing markers of knowledge is not inherently a bad thing, and particularly as learning communities move online and lose some of the social and cultural context, finding those who know something can be difficult.

This becomes even more difficult as people move from one learning community to another. Georg Simmel described the intersection of such social circles as the quintessential property of modern society. You choose your circles, and you have markers of standing that might travel with you to a certain degree. We know what these markers are, and the college degree is one of the most significant.

I went to graduate school with students who finished their undergraduate degrees at Evergreen State College, and have been on admissions committees that considered Evergreen transcripts in making admissions decisions. Evergreen provides narrative assessments of student work, and while I wholeheartedly stand by the practice–as a great divergence if not a model–it makes understanding a learning experience difficult for those outside the community. Wouldn’t it be nice to have a table of contents? A visual guide through a learning portfolio and narrative evaluation? A way of representing abilities and learning to those unfamiliar with the community in which it occurred?

I came to badges because I was interested in alternative ways of indicating learning. I think that open resources and communities of learning are vitally important, but I know that universities will cling to the diploma as a source of tuition dollars and social capital. Badges represent one way of nibbling at the commodity of the college diploma.

Badges, if done badly, just become another commodity: a replacement of authentic learning with a powerful image. To me, badges when done well are nothing more than a pointer. In an era when storing and transmitting vast amounts of content is simple, there is no technical need for badges as a replacement. But as a way of structuring and organizing a personal narrative, and relating knowledge learned in one place to the ideas found in another, badges represent a bridge and a pointer.

This is one reason I strongly endorsed the inclusion of an “evidence” URL in the Mozilla Open Badge Infrastructure schema. Of course, the OBI is not the only way of representing badges, nor does it intend to represent only learning badges–there is a danger here of confusing the medium and the message. Nonetheless, it does make for an easier exchange and presentation of badges, and importantly, a way of quickly finding the work that undergirds a personal learning history.
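
To make the idea of an “evidence” URL a little more concrete, here is a minimal, hypothetical sketch in Python of the kind of assertion the OBI passes around. The field names only loosely follow the early Open Badges assertion format, and every URL and value below is an invented placeholder rather than part of the actual schema.

# Illustrative sketch only: field names loosely follow the early Open Badge
# Infrastructure assertion format, and all URLs below are hypothetical.
import json

assertion = {
    "recipient": "sha256$0a1b2c...",  # hashed identifier for the earner (placeholder)
    "badge": "https://example.edu/badges/peer-assessment.json",  # badge definition
    "evidence": "https://example.edu/portfolios/learner-42/project",  # the actual work
    "issued_on": "2012-03-06",
    "issuer": "https://example.edu/issuer.json",
}

# The "evidence" URL is the pointer: anyone inspecting the badge can follow it
# back to the portfolio, narrative evaluation, or artifact that it summarizes.
print(json.dumps(assertion, indent=2))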

All the Cool Kids Are Doing It

Henry Jenkins provides one of the most compelling cases against badges I’ve seen, though it’s less a case against badges and more a case against the potential of a badgecopalypse, in which a single sort of badging system becomes ubiquitous and totalizing. Even if such a badge system followed more of the “good” patterns on Barry Joseph’s list than the “bad,” it would nonetheless create a space in which participation was largely expected and required.

Some of this comes from the groups that came together around the badge competition. If it were still, as it was several years ago, something that a few people were experimenting with on the periphery, I suspect we would see little conversation. But when foundations and technologists, the Department of Education and NASA, all get behind a new way of doing something, I think it is appropriate to be concerned that it might obliterate other interesting approaches. I share Jenkins’ worry that interesting approaches might easily be cast aside by the DML Competition (though I will readily concede that may be because I was a loser–an “unfunded winner”–in the competition) and hope that the projects that move forward do so with open, experimental eyes, allowing their various communities to help iteratively guide the application of badges to their own ends. I worry that by winnowing 500 applications to 30, we may have already begun to centralize what “counts” in approaches to badges. But perhaps the skeptical posts I’ve linked to here provide evidence to the contrary: that the competition has encouraged a healthy public dialog around alternative assessment, and that badges represent a kind of “conversation piece.”

Ultimately, it is important that critical voices remain at the core of the discussion about how badges are approached. My greatest concern is that the perceived divide between badge evangelists and badge skeptics turns out to be real. I certainly think of myself as both, and I hope that others feel the same way.

Open Analytics and Social Fascination Talk http://alex.halavais.net/open-analytics-and-social-fascination-talk/ http://alex.halavais.net/open-analytics-and-social-fascination-talk/#comments Sat, 17 Dec 2011 04:48:05 +0000 http://alex.halavais.net/?p=3066

IRBs and Clean Secrets http://alex.halavais.net/irbs-clean-secrets/ http://alex.halavais.net/irbs-clean-secrets/#comments Thu, 08 Dec 2011 19:49:57 +0000 http://alex.halavais.net/?p=3057 There’s a comment piece I wrote that appears in today’s issue of the journal Nature, talking a bit about the role of open data and IRBs. But I worry that, over the several iterations it went through before publication, the main point got muddied a bit. So here it is:

Funding agencies and journals should require authors to submit and openly publish protocols as submitted to the IRB.

There are a bunch of reasons for this. Above all, IRB protocols should be public. Right now, the process works a bit like the Napoleonic code: it doesn’t matter what others have decided, the board decides entirely on the basis of your submitted application. This has some real negative implications.

First, the same protocol may be accepted at one campus and rejected at another. Or research in the same stream (or seeking to replicate) may be rejected at a later date.

Second, IRBs have to make an original determination each time, wasting, in many respects, the efforts of other competent IRBs who have already made a determination. They don’t need to be bound by earlier determinations, but don’t you think it would be worthwhile to at least be aware of them?

Third, when applicants feel as though they have been handled unfairly, a transparent system is better for everyone.

Fourth, the best way to learn to do ethical research is to be able to observe the process from the periphery, and listen to the queries of IRBs and the responses by researchers. Releasing the approved protocols may not get at that deeper conversation completely, but it at least provides a small window.

Fifth, the protocols are an excellent way of “indexing” open data. Open research data often is published with codebooks and other ancillary material, but IRB protocols in many ways are the ideal introduction to an open collection of data, explaining why it was collected, how it was collected, and how it might be used.

For these reasons, among others, there should be open repositories of IRB protocols. Now, we could just try to convince individual IRBs on campuses to open up their process and publish the protocols they approve, and I hope that they might. But IRBs are by nature a conservative group, intended to protect, not to disrupt. In many cases, they are made even more conservative by the institutions they are housed in, and by those institutions’ concerns that they might be sued by subjects or investigated by federal regulators. (Those regulators, naturally, have access to the protocols and the decision process of the IRB once an investigation or audit begins, but institutions might not want to hand them any sort of “probable cause.”)

Individuals might be encouraged to submit their own protocols to a repository, and in fact, self-archiving has made an important impact on the way publishing happens. But there are enough open questions surrounding this that it’s a hard place to start the ball rolling.

Funding agencies, and more recently journals, either insist on or facilitate the sharing of research data. In many ways IRB protocols are an important part of those research data. If funders required that IRB protocols be shared just like any other research data, and if journals provided authors the resources to share these protocols, it would in some ways revitalize the role of scholarly publishers, and it would make ethical oversight more robust and transparent.

But what if you were not IRB approved? It may be you didn’t need pre-approval of research by the IRB, and as I argue in the article above, I think this should be the case for much of the research that is currently placed under some level of review by IRBs. But if you don’t have to have IRB approval, I think funders should still require you to talk about the ethical considerations of your research, and journals should require you to publish this online when you do not have an IRB-approved protocol to provide.

What this does is create an environment in which ethical post-review is encouraged. Certainly, when it comes to drug trials–and even to invasive forms of research on vulnerable populations in the social sciences–there should be some sort of oversight before the research occurs. But even after it occurs, peer reviewers and the reading public should be able to see how the researchers weighed the needs and rights of subjects against the importance of their research questions.

Mozilla Drumbeat: Enter the Lizard http://alex.halavais.net/mozilla-drumbeat-enter-the-lizard/ http://alex.halavais.net/mozilla-drumbeat-enter-the-lizard/#comments Tue, 16 Nov 2010 20:33:04 +0000 http://alex.halavais.net/?p=2943 Who knew so much could be packed into so few hours. I’ve spent the last week burrowing out from things that piled up while I was in Barcelona at the Mozilla Drumbeat festival, an event dedicated to creating a learning web. Over this period, some things have marinated a bit, and so this is not really “conference blogging,” but rather a series of posts that have been triggered by what happened at the conference. In this entry, I just want to provide a broad evaluation of the conference itself: what worked, and a few things that could have been improved.

Overall, it was an outstanding conference, and I wanted to mark why both in terms of the content (which should find legs in other places) and the structure (which is finding its way slowly into other conferences).

Why I Conference

I think for most people, the main thing that holds them back from going to conferences is that they are expensive: in terms of registration costs, travel costs, and–probably most importantly–in terms of time and logistics. This is certainly the case for me. Also, like many others, I am introverted–while I like people in theory, actually being around them, especially if they are not people I already know pretty well, is uncomfortable. And on top of all of that, I have a soon-to-be-two-year-old who doesn’t want me to leave, and it’s very difficult because I would rather stay with him as well.

With all that, why go to conferences at all? There are a few reasons, but personally I judge the success of a conference by two main criteria. The first is whether I learn new things or get new ideas from being there. Now, I have yet to go to a conference where I didn’t learn at least something (even if it was only “I don’t ever want to go to this conference again”), but there needs to be a particular saturation of ideas in order to make it worthwhile. The second is whether there is a tangible collaboration or opportunity to work together, either at the conference or coming out of it, with a good chance of coming to fruition; if so, I consider the conference a success. Most conferences are designed explicitly to meet the first criterion, and implicitly to meet the second.

By those two goalposts alone, Drumbeat was one of the best conferences I’ve been to, and I would go again in a minute. It managed to neatly strike the balance between drawing together likeminded people (including, to echo Rafi Santo’s comment, people who may not have previously known they were likeminded) and putting together some diversity of thought and background. There was certainly some feeling of “drinking the Kool-Aid”–Mozilla represents as much a movement as it does a topical area, and there are some sometimes unspoken, and often shouted, underlying ideals in play. These ideals are ones I share, but it is always difficult to walk the line between coordinated movement and groupthink.

There were also those at this conference who come from the Open Ed world, where “Ed” is still pretty operative. They think seriously about the way materials are shared and how they can be improved, but are often not so radical when it comes to the alteration or avoidance of educational institutions. And I think some of those present from the technical community think about learning in a very particular and practical way. They may recognize that there are more than purely technical skills in play when it comes to designing and building software (and hardware and movements) but some have not thought about what this means outside of a fairly limited range of training opportunities, or at least see learning through the lens of what they have experienced as learning.

In all, there seemed to me to be just the right amount of common ground and uncommon territory. At times, I felt like there was some preaching to the choir, or a bit of redundancy, but this was surprisingly rare, and that allowed people to come more quickly to some of the deeper questions and problems that needed to be addressed.

What Worked and Didn’t

As someone in the midst of planning a conference, I think it’s worth briefly noting what things worked at Drumbeat and what things did not.

First, what did not: I think there were two technical details that the organizers would agree just went wrong. The first was the lightweight WiFi infrastructure. Yes, I recognize that not having internet is not the end of the world, but for a conference like this, it is really important to be bathed in the glow of high-speed access. I also know personally how hard it is to plan for 400 people who are not just briefly checking their mail, but constantly tweeting, uploading photos, and in some cases streaming media. Most providers look at the number of attendees and make certain assumptions about usage that are just way off. But I am kicking myself for spending the hours after I arrived sleeping off the jetlag rather than heading out to Orange to pick up a PAYG SIM for the iPad.

The acoustics were terrible, especially on the first night but really throughout. If you weren’t trying to make sense of discussions as they were amplified by cavernous, ancient rooms, you were outside fighting against the scratch-and-roll of the skateboarders or the real tweets of the birds. I feel particularly bad for those for whom English was not their first language, at a largely English-language conference. That said, there are tradeoffs to be made, and it was likely worth the hassle for such a great venue.

I recognize and appreciate the openness to chaos, but it would have been great, especially as the second day trailed on, if there could have been a bit more discipline with the schedule. Given the calls to end the repressive factory nature of schools, calling for bells or chimes is probably misplaced, but I did miss the gentle cowbell we had at the IR conference this year reminding us that the next session was getting started. So, Drumbeat: more cowbell!

Finally, as the conference wore on, it started to feel like we were missing some people here. In particular, the Prof. Hacker crowd would have been a welcome middle ground between some of the more academic people, some of those working in informal learning, and some of the techies. There are people out there who live and breathe this stuff and who weren’t at the conference. I suspect it’s because it didn’t quite make their radar–or because Barcelona was a long haul–and that’s a shame.

There was too much that went right to list here. This was not, strictly speaking, a BarCamp–the organizers front-loaded it with more scaffolding than is normally the case–something I think was necessary given the size (number of people) and scope (breadth of participation). As I noted above, I think there were opportunities for even more scaffolding, but I was really glad to be freed from the stand-and-deliver. There were a few plenary sessions, but these were universally excellent and short–two attributes that likely run together. Most of the sessions were organized around structured conversation, something missing from most academic conferences, where the format swings pretty wildly between presentation (sometimes fine, but often boring and unproductive) and unstructured conversation (often excellent but sometimes without clear goals or outcomes). I think Drumbeat did a nice job of zeroing in on semi-structured conversations with a dedication to making outcomes and building tangible and intangible products.

Facilitating this requires a bit of self-reflection, a bit of a reminder of why we are putting up with small bumps along the way, and quite a bit of “follow me.” And so it is important to note just how important the dedicated organizers were to making it work as well as it did. The sessions were exciting because those leading them were excited about what they were doing–and knew what they wanted out of it. And it only functioned smoothly because of a really dedicated group of volunteers.

Fantastic conference food and drink, as you would expect in Barcelona. And it’s hard to beat the setting, both in terms of the structures we met in and the ability to walk in the city. And if anyone does a conference in El Raval again, I think they should designate a tiger team of bag snatchers with a prize for the most competent, or invite local pickpockets to run a session on what we can learn about theft as social engineering.

No Respect for Authority

It seemed to me that there was a great charge of revolution in the room at a number of moments, with the traditional school and university structures firmly in the crosshairs. Two of the plenary speakers were proud dropouts of traditional educational institutions, and there was a general feeling that we can do it better ourselves. As Cathy Davidson noted in one of the early talks, we needed to find the “joy in insurgency.”

And you will find no one more responsive to that general feeling than I am. But I think it is worth tempering. After all, I am a high school dropout with a Ph.D.–a condition that probably reflects my intermediate position on the issue fairly well. Are schools and universities broken? Of course they are, always have been, and always will be. IE is broken too. The solution, however, was not to throw the browser out with the bathwater, it was to make a better browser. (Oh, and BTW, Firefox is broken; there is nothing fundamentally wrong with brokenness, as long as you are also always in the process of fixing, and the ability to fix is not impeded.)

I think that a hard stance against the university is strategically the wrong way to go. As Mitchell Baker noted in her brief introduction, one of the successes of openness is that it kills with kindness. I think that in the case of free and open software, that means adoption by commercial software producers, and for open learning, it means universities and schools that embrace open learning as an obvious rather than a radical concept. This is not total war; the objective for me is a quiet but unstoppable change that leads to the crumbling of structures that do not adapt, not their explosion.

There is too much good in universities to throw them out. There is a certain strategic value in rhetoric and actions that challenge the university’s existence, at least in its current form, as leverage for making substantial changes. Still, the university model represents so much that is good that the most valuable approach is one probably more familiar to those in Barcelona: opening up institutions that are traditional, authoritative, and highly structured, so that we may walk off with their resources, ideas, people, and capital. Since only the last of these is really alienable, we are not robbing, but liberating.

Just Do It

One of the reasons I like this group more than most is the willingness to, to borrow from the esteemed Tim Gunn, make it work. As academics, we are extraordinarily good at talking, and not always as good at actually doing. This is a problem worth building our way out of, and the people at Drumbeat are essentially learning bricoleurs, willing to disassemble, take the parts that work, and repurpose them. This is necessarily a process of experimentation, and of research through practice. I’m going to drop this 30-minute presentation right in the middle of the blog post because it embodies this spirit better than I can.

This presentation, by Aza Raskin, includes a nice overview of what participatory design means (not just what it is) and was quickly put on my “must watch” list for our starting students. (If the mention of jQuery freaks you out, feel free to abandon half-way through!) I think most of the people in the room were thinking about prototyping socio-technical systems in the form of web software, but it is equally true of design in other contexts. The push to prototype your way through a project–both as a way of creating and as a way of getting people “co-excited” about an idea–is important.

I think the one-day prototype needs to find its way into our learning environments far more often.

Turn up the base

A lot of the direction here was toward learning for us by us, and that is fine. But it is worth noting that those at the conference did not represent all learners. I think one of the more inspiring plenary talks was by Anna Debenham. (Assuming I get the chance to edit it down, I’ll post that talk shortly.) There were a few other young people at the conference, but it would have been great to see more. Of course, youth are not the only target, and they may not even be as much of a special case as they are often considered. Some have suggested we need to stop treating adult education like education for kids–I think we also need to stop treating learning environments for kids like education for kids.

The other problem is that we may be designing for users that do not yet exist. Of course, this is always the case to a certain extent–users are a moving target–but particularly when it comes to learning, our ultimate aim is to change the users, even when they are ourselves. So, it’s important to get an early view from potential users, but is also difficult. When we are successful, our systems help to co-create new kinds of users.

Given that difficulty, it is especially important to do two things. First, we need to create resources that assume very little about the end user, and make it as simple as possible for them to customize the tools and materials that we create. Second, users need to know about those tools as soon as the tools have a bit of resilience and polish–advertising matters. Firefox was important as a piece of software, but putting it into people’s hands was another project, and one at least as important.

Moving Forward

We covered a lot of ground in this meeting, and although I have not been involved in its planning it is clear that a lot of people spent a lot of time on both the projects presented and on organizing the conference itself. It takes a great deal of scaffolding to provide a conference that is this open to tinkering. There were, throughout the conference, calls to make sure that this was not a one-off or end-point, but rather a starting point. Despite efforts to encapsulate one unifying starting point, what I saw was a broad spectrum of starting points. As I continue thinking about the conference, I’m going to be focusing on one aspect that really got me excited about moving forward: badges. I am sure that others found their own pet projects, and I hope that in many cases these were different than the projects they had going in.

The New University Press http://alex.halavais.net/the-new-university-press/ http://alex.halavais.net/the-new-university-press/#comments Wed, 16 Jun 2010 23:10:53 +0000 http://alex.halavais.net/?p=2821 The future of the book, and of the publishing industry, has far less to do with what you produce, and far more to do with enabling an ongoing conversation. This isn’t news to any of you, you live it. But it’s easy, in the midst of a project, to get seduced by the myth that all you do is take ideas and make them into physical objects. The work of scholarly publishing begins and ends in conversation, and always has.

Digitizing my personal library

Where does this leave books? I love books. In fact, I may love books too much. Each time I’ve moved, books made up 90% of the weight of the move; far more than that in my moves to and from Japan. I’m still paying off credit card bills for books I bought years ago and haven’t yet had time to read.

When I first started to rip my library, it didn’t come easily. Unlike ripping CDs, for most of my library going digital means literally ripping: destroying books in the process of scanning them. Even describing this process feels a bit–and this isn’t hyperbole–blasphemous. This is probably different for a publisher–after all, many of you end up destroying books as a matter of your trade–but for me, it is still not an easy thing to do.

First I cut the boards off, and then slice the bindings. I have tried a table saw, but a cheap stack cutter works better. Then I feed the pages into my little page-fed scanner, OCR them (imperfectly) in Acrobat, and back them up to a small network-attached storage device. This is a slow process: I only manage a few hundred books a year, at best. I’ve only just started experimenting with non-destructive scanning. My hope is that the industry and technology will catch up enough that I don’t have to keep this up.
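
To give a sense of how little software is involved, here is a minimal sketch of the post-scan steps, assuming a folder of scanner output and a command-line OCR tool such as ocrmypdf standing in for Acrobat. The paths and the tool choice are illustrative assumptions, not a description of my actual setup:

import subprocess
from pathlib import Path

# Illustrative paths only: raw scanner output goes in, searchable copies land
# on a hypothetical NAS mount.
scans = Path("~/scans/incoming").expanduser()
library = Path("/mnt/nas/library")

for pdf in sorted(scans.glob("*.pdf")):
    target = library / pdf.name
    # Basic ocrmypdf invocation: input PDF in, searchable PDF out.
    subprocess.run(["ocrmypdf", str(pdf), str(target)], check=True)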

So why am I doing it? There are lots of reasons. One is simply a matter of space. I live in a Manhattan apartment and have a one-year-old. I suggested to my partner that we keep the room we use as a library and let him sleep in the closet, but this didn’t go over well. More importantly, my home and my school office are now a sizable commute away from each other. It was hard enough to decide which books should go where when I lived only a few minutes from the university, but now it’s even more important that I can get at my library at either location. More to the point, I work with research groups on two coasts of the US, and spend a decent amount of time on the road or in the field. I need to be able to access not just journals, but books when I am travelling.

The second reason is that although I still do read books, starting on one end and ending at the other, I just as frequently “gut” them, reading the bits that I find most useful, often out of order, often in conjunction with other bits from other books. These days, when looking for something, I am less likely to page through a book than I am to do a keyword search. Even for the books that are not yet scanned, if Google Books has scanned them I can search them for a phrase half-remembered, find the page, and then pull it down from my shelves. My primary use of books these days is not to engage with them individually, but to see how they engage with the other books in my library and in my head.
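
For what it’s worth, the keyword search itself can be as simple as the toy sketch below. It assumes each OCRed book has already been dumped to a plain-text file in one directory; the directory name and that workflow are invented for illustration, not what I actually run:

from pathlib import Path

def find_phrase(library_dir, phrase):
    """Yield (book, line) pairs whose text contains a half-remembered phrase."""
    for txt in Path(library_dir).glob("*.txt"):
        for line in txt.read_text(errors="ignore").splitlines():
            if phrase.lower() in line.lower():
                yield txt.stem, line.strip()

# Example: hunt down a phrase across the whole library.
for book, line in find_phrase("/mnt/nas/library-text", "web of social circles"):
    print(book, "->", line)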

My hope is that one day I can do even more with this. That I can move beyond keyword searching to do some level of concept mapping and networking authors’ ideas and citations. Maybe even imposing new structures–geographic, chronological, or social–that were not originally present. That day isn’t here yet, but having the books in a fungible form is a necessary prerequisite.

It’s worth noting here the things that I cannot do, because it’s important. I can no longer share my books with students and colleagues. Or rather I can, but if I do I tread on uncomfortably unstable legal ground. No one would object to my lending a book to a friend, but lending them the PDF of a book I’ve purchased is another question. As a result, my private library is essentially even more private than when I started. Which brings us to the field of digital humanities.

Digital Humanities

It’s clear that humanists work with texts. All academics work with texts, of course; scholarship is based in production and exchange of texts–otherwise it is not scholarship. But humanists also work with the idea of working with texts, and for that reason they have what may be a privileged perspective on the transition of scholarship in a networked world. And particularly important in this transition is a movement toward transliteracy, and an acceptance of the idea that scholarly expression happens on different platforms in different ways at different times and that ideas form pathways through these platforms.

The book–in its traditional ink on dead-trees format–remains one of these platforms. And I expect that in twenty years, I will still be able to walk into my local bookshop and plonk down a hundred dollars for a beautifully printed and bound book. After all, Western Union missed the boat on telephony and the internet, and still didn’t send its last telegram until 2006. Media have staying power. It’s true that I see Kindles and other electronic readers more frequently than bound books on flights these days, but there remain certain books that I will want to keep in bound form, for myself and for my son.

And even as the book is changing form, that change is not radical. Most of the ebooks we are talking about are really not that different than what we’ve been doing for two hundred years. And for those of us who write books, it’s a non-step. After all, we provide you with an e-manuscript, in the vast majority of cases. The step to electronic books is actually a pretty small one, though it is important in what it enables.

For the digital humanities, it has opened up new scales of analysis. The model of one person and one book is no longer the only way. Texts are no longer found only in books, and understanding them can be done in ways besides deep reading. None of this removes the possibility of studying books, or of studying them by reading them carefully and deeply. But having the material in electronic format allows for new perspectives, both by examining work at micro scales–the study of stylistics, for example–and at macro scales–the networking of books in a wider literature.

I’m most interested, however, in the ways in which making books electronic provides the opportunity to link them to other kinds of conversations that exist in the online world. Before talking a bit about what kinds of conversations I mean, I should pause for a moment to talk about whether we are all becoming a bit too shallow.

Your Brain Without Books

Nicholas Carr is in the news a lot lately promoting his new book, The Shallows. I had dinner with John Seely Brown last week and he admitted he really couldn’t get further than the first few pages of the book. It’s easy to toss that off as situational irony, but I also have done no more than skim his new book, because once I noted some of the conclusions he was drawing from his evidence, I honestly found it not worth my time to engage it more deeply. I could learn more by investing my attention elsewhere.

He suggests that in order to write the book, he stopped following Facebook and Twitter. He relates what anyone who writes has known for many years: if you want to write, it’s good to shape your informational environment appropriately. In fact, I would suggest if you want to remain undistracted, a traditional library is perhaps the worst place to be. I’ve wasted hours at libraries and bookstores–wasted them enjoyably, but wasted them nonetheless.

This is not an argument against books; again, I am a book lover. But it is important, I think, to notice that books are a particular kind of conversation–and a peculiar one at that. If I were to tell you that I planned to talk to you today for three hours–and be assured, as a professor, I am perfectly capable of sustaining a three hour talk–most of you would walk out the door. One of the nice things about a book is precisely that you don’t have to read it deeply, that it is open to other uses, and that you can gut it intellectually just as easily as I am gutting my books physically.

Publishing

This will be my first visit to the AAUP conference, and I don’t tend to spend a lot of time with those in the publishing industry. I guess I hold out some romantic hope that I will see some ink-stained hands, but I’m not counting on it. Some university presses may actually retain printing & binding facilities in-house, but I am also sure that is pretty unusual. At the other end, bookstores are now printing on demand, which raises the question of what a “press” does. It is far too easy to get tied up in the idea of product, when the only reason presses continue to exist is because of what they do really well: process.

Dan Cohen has a great blog post in which he discusses the social contract surrounding the book, one in which authors work with publishers to put together a work that is thoroughly researched, well structured, and presented well in terms of language and visual design. The readers, in turn, enter into the contract by being willing to attend to the work seriously, think about it, and incorporate it into their own work. He goes on to suggest that some of the elements of this contract, and particularly the fact that it allows for only one genre of scholarly communication, are flawed, but that the idea of a social contract between author and reader is not. It’s a matter of evolving that contract.

Part of that evolution is to recognize that the process of the book is as important as the product, and that a book finds its success in that process, and in the conversation that happens around that process. I don’t buy that many books these days, but I can tell you some that I know that I will. I know I’m going to buy Kathleen Fitzpatrick’s Planned Obsolescence and Siva Vaidhyanathan’s Googlization of Everything. Both of these books were presented to the world before they were entirely baked, and are being reviewed openly by peers ahead of publication. Of course, we have always shared manuscripts and email has made this easier. But by making the process even more transparent, there is an opportunity for this to extend beyond the constraints of personal social networks. I’ll say more about that in a moment, because it is important. The other book I’m going to be buying is Hacking the Academy. The initial draft of this was written in seven days by a distributed group of over 200 authors who tagged their posts with a common hashtag. This is now being edited together to be presented as a cohesive work.

Part of the reason that I’m going to be buying these books is that I am already connected to them, before they have ever been printed. I’ve read the work in blog posts and in tweets, in conversations both in real life and online. To be crass about it, it is about the best possible marketing for a book you can imagine. It’s cheap, honest, and effective.

And the connections will extend beyond the physical manifestation of the books. Books are great for a lot of reasons: you can read them in the bath, you can cite them and know they will not change their mind, they work during power outages. But they also tend to freeze conversation in time and be difficult to update. Of course, it’s an author’s responsibility to make their text timeless, but timeless texts are not always good for scholarship. We may be reading Plato’s words a couple of millennia after he drank himself to death, but if he were around, he’d likely be the first to tell us we’re doing it wrong.

The ability to open up and recontextualize–even when that does not immediately happen–is vital. It futureproofs a text, and makes it more likely to be taken up by later authors and in later conversations. This means asking at each step in the process, Can this be more open? Can we invite more people to this conversation?

These are precisely the kinds of questions that are being asked in other places where scholarly communication happens. At conferences, folks are sharing work before, during, and after. I’m working with the Digital Media and Learning Hub at UCHRI to try to create a collaboratory that provides a virtual space for researchers to talk more openly with their peers about work currently underway. It’s unfortunate that the publishing process is often seen as somehow an appendage to these conversations rather than a partner from the outset. I can understand why this might be the case for mass-printed trade fiction, for example, where the audience might be more clearly disjoint from the authors, but it certainly does not make any sense in the academy, where presses cannot afford to remain on the margins.

The unbinding and reconnecting of texts across media ultimately has little to do with texts and everything to do with people. The creation of a book is a social process. When people begin to talk about the ways in which the internet has changed publishing they think of the web as largely a publishing platform, which is fair enough. But the real changes are in how people connect, how they maintain relationships, how they work together, and how they coalesce into publics.

The process of the book is intimately tied to these networks. When I write a book, I certainly don’t do it to make money–I’m in the wrong business for that. I do it in part to get attention for some ideas that I think are important. But I do it most of all as a kind of dating profile, an indication of the kinds of things I’m interested in, with the hope that I can meet other scholars who share my interests. In other words, the social network isn’t just the most important input into the book process, it is the most important outcome.

Game Plan

What does this mean from a practical perspective? It means drawing on what you already do well. You already have to make decisions about what is important in a field, who has ideas worth sharing, and coordinating the review process. You already, across the board, are able to organize and schedule review and production processes. You already are creating some conversation around your books by creating a web presence and connecting with various publics. In other words, you’re already doing a lot of the things you need to, it just seems to be unevenly distributed.

The first thing you need to be doing more of is monitoring the environment, sharing with your colleagues things that work and things that don’t. I have no doubt that most of you are already following some of the things that are happening with the projects I mentioned earlier. It’s important that as you create your own projects, you keep your colleagues aware of them, recording both successes and failures. This conference is a great start, but a once-a-year update isn’t good enough. Make sure those in your personal networks know what you are doing, and you know what they are doing. Make sure that scholars in your field are part of that network as well.

Second, connect at a much deeper level with those in the fields you work with. Some of you have acquisitions editors who are very good at networking in the old sense, and getting to know the leaders in a field. You need to go beyond this and look for and foster the new leaders. That means tracking those early in their careers–particularly those who are leading the charge in new forms of scholarly communication. Scholarly associations play an important role here, and I suspect they will continue to do so. But it isn’t enough to follow their lead. If you want to show your value, you need to show that you can innovate, and that you aren’t just adding your name and imprimatur to innovations supported by scholarly associations, universities, and funding agencies.

In the end, you need to look seriously at your value proposition. What can you bring to the table? What can you add to networked scholarly discussion? I know that there are a lot of bright folks in publishing, and that you have a great deal of experience and intellectual capital to bring forward. I also know that some of you are content to coast on your names for as long as possible, hoping to wait until the wind has shifted before hoisting your own sails.

Don’t Watch Your Polls

I’ve talked a little bit of what I think university presses should be doing to move scholarly communication forward. I cannot say that I represent the average scholar in my field or in any field. Everything I have seen suggests to me that while we may indeed drink lattes and drive Volvos, academics tend to be conservative in many ways. We are conservative in part because we work in institutions that update medieval traditions with twentieth-century bureaucracy. I’ve yet to hear a tenure committee say “We like the two books you published with Oxford University Press, but that blog post really wowed us.”

On the other hand, change doesn’t come from the center. Many of you are concerned about the future of scholarly publishing, and you should be. But don’t look to academics to lead the way alone, and don’t assume that you can rely on a second-mover advantage. There are still successful music labels and newspaper publishers, but they didn’t get there by waiting out the storm. Success over the next five years is pinned to a willingness to take risks, open up texts, and create new spaces for conversation.

Finally, can we get over the “books are (not) dead” trope? It’s boring. Scholarly communication is thriving today more than ever in history. We are in the midst of a new Renaissance, and university presses find themselves at the center of that revolution. Please don’t waste your good fortune or the opportunities around you.

[In case it isn’t obvious, the above is the text of a talk I planned to give to the Association of American University Presses meeting. I ended up presenting something that only vaguely resembled this, but you get the idea. ]

Rethinking the human subjects process http://alex.halavais.net/rethinking-the-human-subjects-process/ Mon, 14 Jun 2010 17:51:41 +0000 http://alex.halavais.net/?p=2814 Get a group of social scientists together to talk about prospective research and it won’t take long before the conversation turns to the question of human subjects board approval. Most researchers have a war story, and all have an opinion of the Institutional Review Board (IRB), the committee in US universities that must approve any planned investigation to make certain that the subjects of the research are protected. Before too long, someone will suggest doing away with the IRB, or avoiding human subjects altogether.

Research in the field of Digital Media and Learning (DML) tends to focus on youth participants, occur in dynamic, mediated environments, and often consists of researchers working in different locations and sharing their observations. All of these factors can complicate the process of seeking and receiving approval from local IRBs, leading to a substantial amount of effort by researchers and unnecessary delay in doing good research. Particularly vexing is the difficulty in sharing data among researchers at different universities, a vital prerequisite to collaborative social science. In the hope of improving this process for everyone involved–the researchers, members of the IRBs, the participants in the research, and the public at large–the Digital Media and Learning Research Hub supported the first of a pair of one-day workshops intended to discuss potential solutions. A number of groups have been looking at how IRBs are working and how they might work better, and we were lucky to be able to bring to Irvine a group of people with significant experience working with the IRB process in various contexts, including Tom Boellstorff, Alex Halavais, Heather Horst, Montana Miller, Dan Perkel, Ivor Pritchard, Jason Schultz, Laura Stark, and Michael Zimmer. Each of the participants shared their research and other materials with the group beforehand, as did others who were unable to join us.

We found that while there might be some fairly intractable issues, as there are for any established institution, some of the difficulties that IRBs and investigators encountered were a result of reinventing the wheel locally, and a general lack of transparency in the process of approving human subjects research. The elements required to make good decisions on planned research tend to be obscure and unevenly distributed across IRBs. From shared vocabularies between IRBs and investigators, to knowledge of social computing contexts, to a clear understanding of the regulations and empirical evidence of risk, many of the elements that delay the approval of protocols and frustrate researchers and IRBs could be addressed if the information necessary was more widely accessible and easily discoverable.

Rather than encouraging the creation of national or other centralized IRBs, more awareness and transparency would allow local solutions to be shared widely. Essentially, this is a problem of networked learning: how is it that investigators, IRB members, and administrators can come quickly to terms with the best practices in DML research? Not surprisingly, we think digital media in some form can be helpful in that process of learning.

The devil is in the details. First, it’s important to identify what should be shared, how to share that information in a way that is most helpful, and how to get from where we are now to that point. Much of this information sharing already takes place today informally, with colleagues contacting one another for advice on protocols, technologies, and the like. Our hope is to create a resource that opens this sharing up a bit more, highlights a core set of ideas held commonly in the disciplines that make up DML, and makes the IRB process quicker and more effective.

As a group, we would love to hear your suggestions on how best to improve the IRB process, or any questions you might have.

NB: This post originally appeared on DMLcentral. Please make any comments there.

8 hours of TV? http://alex.halavais.net/8-hours-of-tv/ http://alex.halavais.net/8-hours-of-tv/#comments Tue, 11 May 2010 20:30:08 +0000 http://alex.halavais.net/?p=2762
We talked quite a bit about TV when Jasper was about to be born. We talked about getting rid of it entirely. We watch what I consider to be a lot of television, though it is perhaps not as much as in some households. It was a lot less before we got a PVR; like many Americans, we are watching more hours than we used to. I would say we probably watch somewhere in the neighborhood of 8 hours a week. We’ll usually watch a show during dinner, and maybe the Daily Show as a chaser. That’s a lot of time, and yes, we could replace that with scintillating conversation, but–well, for all the reasons lots of people don’t, we don’t.

But back to Jasper. He’s been exposed to exactly the same amount of TV we have, and until recently, to the same shows. During his first year of life, he generally was uninterested in what appeared on the screen, with the exception of dancing. Now, he will watch our shows for a little bit if something interesting is going on. And, for the first time, we’ve been watching kids shows: Sesame Street and Dora the Explorer.

I know perhaps better than most that exposure to TV has lots of negative outcomes. I more recently ran into a study that looked at TV viewing at 29 months and 53 months, and found that it made the kids fat, innumerate, and picked on by fourth grade. Yes, there are more influential factors, like mother’s education (naturally, Dad’s education doesn’t matter in the aggregate because Dad’s at work?), but I think the evidence is pretty overwhelming at this point: TV is bad for kids.

Only… It sure doesn’t seem that way. Sesame Street is way better than I remember it being. Elmo is as cloying and annoying as ever, but his name is easy to say. And Dora, while not ideal, does appeal to my love of puzzles and maps. I can see the thinking that goes into these programs, not just by the creators, but by Jasper when he watches. Yes, he gets the thousand-mile stare sometimes, but he also knows a little bit more about backpacks and maps now.

And this is further confused by his addiction to a particular form of television. Our TV, of course, is just a computer, and so he generally wants us to play his music. This is usually accompanied by the visualizations of whatever music player we are using. He doesn’t actually ask for TV, but he certainly asks for his music. (He goes to a Music Together class each week, and there is an associated CD.) The natural tendency is to think of TV as bad and a desire to listen to music as good, but I’m not at all sure that’s the case. And it’s not like he’s asking for particularly good music.

In the end, I’ve decided it’s stupid to worry about it. He’ll continue to watch TV, even at this young age, limited to about 3 hours each week, and watched with heavy involvement by us. Yes, this is in violation of the AAP recommendations, but since when did we follow the rules?

Networked Teaching http://alex.halavais.net/networked-teaching/ http://alex.halavais.net/networked-teaching/#comments Thu, 06 May 2010 18:01:59 +0000 http://alex.halavais.net/?p=2758 Abstract for my “Internet Research 11.0” paper, to be presented this coming October…

Networked Teaching: Institutional Changes to Support Personal Learning Networks

Much of the educational literature of late has made a marked shift to the perspective of the individual learner at the center of a network of learning resources in the form of other people, environments, and information artifacts. These provide affordances for learning, shaping a highly individualized environment that corresponds, in many cases, only imperfectly to the structures of educational institutions that the individual may engage. With the attention on the learner, and on informal learning, the words “education” and “teacher” seem somehow archaic. Yet schools, including institutions of higher education, have always supported learning networks to some degree. We are rapidly encountering a crisis in higher education: structures that were established (and ossified) over the last two centuries to formalize and standardize knowledge seem ill-suited to the needs of today’s citizens.

Faculty in universities who attempt to support what have come to be called “personal learning networks” often find their institutions to be challenges rather than partners in attempts to better serve the broader learning environment. Part of the reason for this is that, despite digital media and learning research that has examined networked learning in situ, particularly over the last several years, there is not yet a consensus regarding the appropriate role of the teacher in a learning network. Particularly in schools during the years of compulsory education, and especially in the United States over the last few years, there has been a renewed effort to assess students along the “fundamental” skills of reading, writing, and mathematics, leaving more comprehensive understanding and areas like art and music unassessed and generally ignored. Higher education has provided more opportunities for experimentation, but has also been influenced by new efforts at standardization of process and assessment. Without strong evidence of the mechanisms of informal learning and their benefits, working against such policies is difficult.

On the other hand, many of these policies exist largely as a result of inertia and tradition. Conservative views as to what a good education consists of are rarely evidence-based, but are privileged by policy-makers because they represent familiar “common knowledge.” This paper presents the current state of understanding regarding personal learning networks, and suggests ways in which university policy makes it difficult for university faculty to actively engage in such networks. The suggestion is not that universities should necessarily retool their curricula to engage networked learning environments, but rather that small changes in policy, and particularly changes in the ways in which teaching is assessed for junior scholars and for departments, will provide greater room for experimentation with new forms of learning. The evolution of open source within the traditional software industry provides some indication of ways in which openness can be allowed without radical shifts in values or practices. Efforts at opening up the education process, beginning with open educational resources and open access to research, have already gained a foothold in many institutions. By building on these successes and creating incentives for teacher-scholars to engage in broader learning networks, it is possible to provide for new spaces for teaching experimentation.

The paper concludes by suggesting some concrete measures that can be taken by faculty and by students to encourage policy frameworks that provide for networked research and teaching. Equally important is bringing together these experiments with the means to assess their effectiveness and communicate this with broader publics. The best teachers have always engaged in experiments within their own teaching, often without the consent or notice of the institutions in which they teach. There is a danger–as we have seen with the charter school movement in K-12 institutions in the US–that working models will not find their way back into the mainstream of educational practices, and that failed models will not be shared widely enough to avoid reinventing a broken wheel. By creating spaces for engaging personal learning networks within universities, and by creating the infrastructure for sharing this active experimentation in teaching and learning, we can ensure the relevance of the institutions of higher education to the new learning society.

Abstract of a Non-Existent Paper http://alex.halavais.net/abstract-of-a-non-existent-paper/ http://alex.halavais.net/abstract-of-a-non-existent-paper/#comments Sun, 28 Feb 2010 06:51:16 +0000 http://alex.halavais.net/?p=2555 [Image: a view of a quipu]

A brief paper on the long history of mobile ICTs

Especially over the last decade, the rapid diffusion of mobile telephones and related worn technologies left many struggling to understand how they might relate to social change. Although there can be little argument that we have seen rapid development in the technology of mobile communication and computation, at least some of our surprise must be related to a flawed overall frame for understanding technology and place. After all, these technologies seem on first blush to be very different from the kinds of communication devices we are more familiar with: technologies of the screen. When faced with technologies that are inherently displacing us, the literature tends to frame them from the perspective of dwelling and settlement, drawing on metaphors of cyberspace and virtual settlements. But mobile telephony is not as new as it first appears, and our focus on dwelling as a metaphor for all technology leads to a gap in understanding the social role of these new devices.

Understanding communication technologies and networks through the lens of the built environment is natural. The evolution of modern human society might be traced through a shift from biological to social change. Rather than adapting to our environments, we change our environments to suit our needs. No particularly acute skills are needed to discover human habitation: we build. And the creation of the built environment has been seen as key to creating physical proximity and urbanity at the core of the modern human experience. We have, however, been unable to build ourselves out of significant human ills, and in many cases the magical and spiritual nature of our built environment has been engineered away. The problems of modern society can be found most acutely in its characteristic environment: the metropolis.

But the rapid diffusion of the mobile phone within both the more and the less developed world has turned this seemingly unbreakable bond between urbanity and evolution on its ear. Rich Ling, Mimi Ito, and others write about the new uses of these media to tightly control collaborative processes, particularly among the younger generation. The ability to act in coordination without being co-present, though of course always possible, is now more easily available to larger and larger populations. The favoring of these personal, ephemeral networks brings the magical and spectral world back to us. These days, we all hear voices.

This article counters claims to novelty by suggesting that there are long-standing historical precedents to many of the social functions of modern mobile devices, and that our tendency to think in terms of physical environments has blinded us to these long-term social uses of mobile technologies. Moreover, it is useful to understand a range of worn technologies, from sidearms to spectacles, as inherently information, communication, and control technologies. By providing an outline and taxonomy of worn technologies, it is possible to more easily distinguish dimensions along which change may be occurring, and find historical precedents to seemingly novel arrangements, like Castells’ “insurgent communities of practice.” It’s dangerous to assume that social arrangements made possible through the affordances of new technologies represent a revolution. As Robert Darnton has suggested, we tend to forget earlier technologies (like the “Tree of Cracow”) and social organization isomorphic to these modern shifts.

Within that “longue durée” of worn technologies, I suggest we can identify a set of functions for addressing and manipulating social networks, from communicating authority, to record keeping and surveillance, to command and control. While these mobile technologies are inextricably spatial, it is important to recognize that thinking of them from the perspective of geography and place represents only one way of framing the understanding of such technologies, and unfortunately such framing is often done unconsciously and relatively uncritically. What does it mean to move beyond debates of space and place, of cybernomadism and locative technologies? Does the mobile device–from quipu and ehekachichtli, to the saber and flounce, to the iPhone and pacemaker–represent a technology of binding the Bund as much as binding space or time?

TL;DR: The Future of Attention http://alex.halavais.net/tldr-the-future-of-attention/ http://alex.halavais.net/tldr-the-future-of-attention/#comments Mon, 11 Jan 2010 18:24:07 +0000 http://alex.halavais.net/?p=2549 I am proposing a session for Internet Research 11.0 (Gothenburg, Sweden, 21-23 Oct 2010) that focuses on the role of attention in internet-mediated communication. The panelists will be asked to present very briefly on a topic relating to attention and networked technologies, with the aim of spurring a lively conversation.

While there are a range of potential topics, some might include:

  • Has mediated culture changed to fit new regimes of attention?
  • With an infinite number of channels, is it still possible to get citizens to talk about the same topic?
  • How do marketers get attention when technologies, from TiVo to pop-up blockers, allow for filtering?
  • Does the ability to work anywhere, thanks to mobile devices, break down the idea of attending to home or family within particular temporal and spatial blocks?
  • How does a new disciplining of attention (or a lack of such disciplining?) affect learning inside and outside of schools and universities?
  • Are “Lifehacker” and “Four Hour Work Week” just continuations of a long interest in efficiency, or do they mark a move beyond the workplace for such efforts?
  • Is there a consensus regarding “multi-tasking” and “continuous partial attention” vs. task focus in terms of effectiveness?
  • How do individuals create their own “situational awareness”? To what degree is our attention locationally based?
  • How have technologies of social networking affected who we attend to and how we attend to them?

Presentations will be in “pecha kucha” format (http://en.wikipedia.org/wiki/Pecha_kucha), and presenters are expected to abide by that formalism. These should represent positions and perspectives, or thinking-in-progress. Papers are welcome, but not expected. These presentations should be designed to create controversy and conversation.

If you’ve gotten this far, and are interested in presenting on the panel, please post your proposal to Twitter, using the #tldr11 hashtag, no later than February 1, 2010.

Please forward widely.
