Computing – A Thaumaturgical Compendium
https://alex.halavais.net
Things that interest me.

Getting Glass
https://alex.halavais.net/getting-glass/
Wed, 17 Apr 2013 20:35:38 +0000

Google selected me as one of (the many) Google “Glass Explorers,” thanks to a tweet I sent saying how I would use Google Glass.

What this means is that I will, presumably over the next few months, be offered the opportunity to buy Google Glass before most other people get to. Yay! But it is not all good news. I get to do this only if I shell out $1,500 and head out to L.A. to pick them up.

Fifteen hundred dollars is a lot of money. I’d be willing to spend a sizable amount of money for what I think Glass is. Indeed, although $1,500 is on the outside of that range, if it did all I wanted it to, I might still be tempted. But it is an awful lot of money. And that’s before the trip to L.A.

To be clear, the decision is mostly “sooner or later.” I’ve wanted something very like Glass for a very long time. At least since I first read Neuromancer, and probably well before that. So the real question is whether it’s worth the premium and risk to be a “Glass Explorer.”

As with all such decisions, I tend to make two lists: for and against.

For:

  • I get to play with a new toy first, and show it off. I have to admit, I’m not a big “gadget for the sake of gadgets” guy. I don’t really care what conclusions others draw about my personal technology: whether I am a cool early adopter or a “glasshole.” I use tech that works for me. So this kind of “check me out, I got it first” doesn’t really appeal to me. I guess the caveat there is that I would like the opportunity to provide the first reviews of the thing.
  • I get to do simple apps: This is actually a big one. I’m not a big programmer, and I don’t have a lot of slack time this year for extra projects, but I would love to create tools for lecturing, for control, for class management, and the like. And given that one of the languages they support for app programming is Python–the one I’m most comfortable in–I can see creating some cool apps for this thing (I sketch what I mean right after this list). But… well, see the con column.
  • I could begin integrating it now, and get a better feel for whether I think it will be mass adopted, and what social impacts it might have. I am, at heart, a futurist. I think some people who do social science hope to explain. I am interested in this, but my primary focus is being able to anticipate (“predict” is too strong) social changes and find ways to help shape them. Glass may be this, or it may not, but having hands-on experience early will help me figure that out.
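
To make that second bullet a bit more concrete, here is roughly what I have in mind for a lecture tool, sketched in Python. Treat it as a back-of-the-envelope sketch rather than working code: it assumes the REST-style Mirror API Google has been describing, and the endpoint URL, payload fields, and token handling are my guesses, not anything I have tested.

# Sketch of a lecture helper: push a prompt card to my Glass timeline.
# The Mirror API endpoint and payload fields below are assumptions, untested.
import requests

TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"  # assumed endpoint

def push_card(access_token, text):
    """Send a simple text card to the Glass timeline (sketch, untested)."""
    response = requests.post(
        TIMELINE_URL,
        headers={
            "Authorization": "Bearer " + access_token,  # OAuth token obtained elsewhere
            "Content-Type": "application/json",
        },
        json={"text": text},
    )
    response.raise_for_status()
    return response.json()

# Hypothetical use during a seminar: push the next discussion prompt.
# push_card(token, "Break into groups: what would Glass etiquette look like?")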

Against:

  • Early adopter tax. There is a lot of speculation as to what these things will cost when they are widely available, and when that will be. The only official indication so far is “something less than $1,500.” I suspect they will need to be much less than that if they are to be successful, and while there are those throwing around numbers in the hundreds, my guess is that the price point will be right around $1,000, perhaps a bit higher. That means you are paying a $500 premium to be a beta tester, and shouldering a bit of risk in doing so.
  • Still don’t know its weak points. Now that they are actually getting shipped to developers and “thought leaders,” we might start to hear about where they don’t quite measure up. Right now, all we get is the PR machine. That’s great, but I don’t like putting my own money toward something that Google says is great. I actually like most of what Google produces, but “trust but verify” would make me much more comfortable. In particular, I already suspect it has two big downvotes for me; the first is input, and the second gets its own item below. I sincerely hope it can support a Bluetooth keyboard. I don’t want to talk to my glasses. Ideally, I want an awesome belt- or forearm-mounted keyboard–maybe even a gesture-aware keyboard (a la Swype) or a chording keyboard. Or maybe a hand-mounted pointer. If it can’t support these kinds of things, it’s too expensive. (There is talk of a forearm-mounted pad, but not a lot of details.)
  • Strangleware. This is the second downvote. My Android isn’t rooted, but one of the reasons I like it is that it *could* be. Right now, it looks like Glass can only run apps in the cloud, and in this case, it sounds like it is limited to the Google cloud. This has two effects. First, it means it is harder for the street to find new uses for Glass–the uses will be fairly prescribed by Google. That’s a model that is not particularly appealing to me. Second, developers cannot charge for Glass apps. I can’t imagine this is an effective strategy for Google, but more immediately, while I am excited to experiment with apps (see above) for research and learning, I know I won’t be able to recoup my $1,500 by selling whatever I develop. Now, if you can get direct access to Glass from your phone (and this would also address the keyboard issue), that may be another matter.
  • No resale. I guess I could hedge this a bit if I knew I could eBay the device if I found it wasn’t for me. But if the developer models are any indication, you aren’t permitted to resell. You are out the $1,500 with no chance of recovering it.

I will keep an open mind, check out reviews as they start to trickle in from developers, and read the terms and conditions, but right now I am leaning toward giving up my invite and waiting with the other plebes for broad availability. And maybe spending less on a video-enabled quadcopter or a nice Mindstorms set instead.

Or, someone at Google will read this, and send me a dozen of the things as part of a grant to share with grad students so we can do some awesome research in the fall. But, you know, I’m not holding my breath. (I do hope they are doing this for someone though, if not me. If Google is interested in education, they should be making these connections.)

Undo It Yourself (U.i.Y.)
https://alex.halavais.net/undo-it-yourself-u-i-y/
Sat, 16 Mar 2013 01:41:44 +0000

There is a TV show called (in the US) Junkyard Wars. The premise of the show is simple enough. Two teams meet in a junkyard and are assigned to build something: a trebuchet, a crane, or some other device. I think we can assume that the collection of stuff is, let us say, “semi-random.” I don’t know whether they start with a real junkyard and just make sure to seed it with useful bits, or start with useful bits and cover them in random crap, or what, but I just cannot assume that they do this in a real, random scrapyard. The challenge is to make the most of the stuff at hand, and to create something that will work for the purposes of the challenge.

I was thinking about this during the Digital Media and Learning conference in Chicago this week, and especially during the session titled Make, Do, Engage. The whole conference has a double set of themes. The official theme has to do with civic culture, and my favorite sessions this year have talked about new forms of activism and ways of encouraging social justice. But there is also a focus (including a pre-conference) on making stuff. Panelists spoke about ways students subvert game construction, the idea of jugaad, and thoughts about hacking-based media literacies. There seemed to be an interweaving here between building “stuff” (technology), building government, and learning. This nexus (learning, politics, and making) was very present at the conference, and hits directly on my specific intersection of interests, so it has been an especially engaging conference for me this year.

In particular, the question is how to lead people to be more willing to engage in hacking, and how to create environments and ecosystems that encourage hacking of the environment. Rafi Santo talked a bit about the “emergence” of the hashtag as an example of Twitter’s relative hackability when compared with Facebook. (The evolution of features of Twitter is something I write about in a short chapter in the upcoming volume Twitter and Society.) Chris Hoadley also talked about how the absence of any sort of state support for physical infrastructure led people to engage in their own hacks. This recalled for me a point made by Ethan Zuckerman about Occupy Sandy as an interesting example of collective action that had a very real impact.

At one point Ingrid Erickson mentioned that she had been talking with Rafi about “do it together” technologies–making the hacking process more social. But part of me is much more interested in infrastructure for creativity–in conditions that force people to work together. No one would wish Sandy on any group, but that particular pressure, and the vacuum of institutional support, led to a Temporary Autonomous Government of sorts that stepped in and did stuff because it needed to be done. I also recalled danah boyd mentioning earlier something that anyone who has ever taught in a grad program knows full well: placing a group in a difficult or impossible situation is a good way to quickly build an esprit de corps and bring together those who would otherwise not necessarily choose to collaborate. With all of these ideas mixing around, I wonder if we need a new aesthetic of “undoing it yourself.”

Yes, I suppose that could be what jailbreaking a phone is about, or you might associate this with frame-breaking or other forms of sabotage. But I am thinking of something a bit more pre-constructive.

I went to a lot of schools as a kid; more than one built on one or another piece of the Montessori model. At one, there was a pile of wood, a hammer, and some nails. It wasn’t in a classroom, as I recall, it was down at the end of a hall. If I asked, they would let me go mess with it. It was dangerous: I managed to hammer my thumb with some consistency. And I would be very surprised if they had an outcome in mind; or even if I did. I think I made a model boat. I don’t think anyone would have guessed it was a model boat unless I had told them.

In a more structured setting, piles of Lego bricks might want to look like what is on the cover of the box. And I am sure there are kids who manage–at least once–to achieve the vehicles or castles shown there. But that’s not why you play with Lego. Some part of me really rebels against the new Lego world, with its huge proliferation of specialized pieces. But the truth is that as a kid the specialized pieces were the interesting bits, not the bare blocks. The core 8×2 bricks were there almost as a glue to keep the fun bits together.

Especially in the postmodern world, we celebrate the bricoleur and recognize hybridized work and kludges as interesting and useful, but far less thought is put into where that stuff comes from. Disassembly precedes assembly. I’m interested in what it means to be an effective disassembler, to unmake environments. There is space for scaffolding only once you’ve actually torn down the walls.

I think we need an Undo-it-Yourself movement. People who individually loosen bolts and disconnect wires. Who destroy mindfully. Those who leave junk in your way, knowing that you might see yourself in it. Our world is ripe for decomposition. New ideas about how we shape our built environment and our society are not born out of the ashes of the past, but out of the bits and pieces that are no longer attached the way the Designer intended.

I am not advocating chaos. I’m not suggesting that we should start an evil organization that turns every screw we encounter twice anti-clockwise. Perhaps what I am suggesting is something somewhere between the kit and the junkyard. Something with possibilities we know and we don’t know. Disassemblies of things for playing with.

The Coming Gaming Machine, 1975-1985
https://alex.halavais.net/the-coming-gaming-machine-1975-1985/
Fri, 13 Jul 2012 20:35:28 +0000

I was going through old backups in the hope of finding some stuff I’ve lost and I ran into this, a draft I was working on around the turn of the millennium. It never went anywhere, and was never published. I was just pressing delete when I realized it might actually be of interest to someone. It was a bit of an attempt at a history of the future of gaming, during the heyday of the console. Please excuse any stupidity–I haven’t even looked at it, just copied it over “as is.”

The Coming Gaming Machine, 1975-1985

Abstract

The early 1980s are sometimes referred to as the ‘Golden Age’ of computer games. The explosion of video games–in arcades, as home consoles, and eventually on home computers–led many to question when the fad would end. In fact, rather than an aberration, the decade from 1975 to 1985 shaped our view of what a computer is and could be. In gaming, we saw the convergence of media appliances, the rise of professional software development, and the first ‘killer app’ for networking. During this period, the computer moved from being a ‘giant brain’ to a home appliance, in large part because of the success of computer gaming.

Introduction

Sony’s offering in the game console arena, the Playstation 2, was among the most anticipated new products for the 2000 Christmas season. Although rumors and reviews added to the demand, much of this eagerness was fueled by an expensive international advertising campaign. One of the prominent television spots in the US listed some of the features of a new gaming console, including the ability to ‘tap straight into your adrenal gland’ and play ‘telepathic personal music.’ The product advertised was not the Playstation 2, but the hypothetical Playstation 9, ‘new for 2078.’ The commercial ends with an image of the Playstation 2 and a two-word tag line: ‘The Beginning’ [1].

The beginning, however, came over twenty-five years earlier with the introduction of home gaming consoles. For the first time, the computer became an intimate object within the home, and became the vehicle for collective hopes and fears about the future. In 1975 there were hundreds of thousands of gaming consoles sold, and there were dozens of arcade games to choose from. By 1985, the year the gaming console industry was (prematurely) declared dead, estimates put the number of Atari 2600 consoles alone at over 20 million world-wide [2].

The natural assumption would be that gaming consoles paved the way for home computers, that the simple graphics and computing power of the Atari 2600 were an intermediate evolutionary step toward a ‘real’ computer. Such a view would obscure both the changes in home computers that made them more like gaming consoles, and the fact that many bought these home computers almost exclusively for gaming. But during the decade following 1975, the view of what gaming was and could be changed significantly. Since gaming was the greatest point of contact between American society and computing machinery, gaming influenced the way the public viewed and adopted the new technology, and how that technology was shaped to meet these expectations.

The Place of Gaming

When the University of California at Irvine recently announced that it may offer an undergraduate minor in computer gaming, many scoffed at the idea. The lead of an article in the Toronto Star quipped, ‘certainly, it sounds like the punchline to a joke’ [3]. As with any academic study of popular culture, many suggested the material was inappropriate for the university. In fact, despite the relatively brief history of computer gaming, it has had an enormous impact on the development of computing technology, how computers are seen and used by a wide public, and the degree to which society has adapted to the technology. Games help define how society imagines and relates to computers, and how it imagines future computers will look and be used. The shift in the public view of computers from ‘giant brains’ to domestic playthings occurred on a broad scale during the ten years between 1975 and 1985, the period coincident with the most explosive growth of computer gaming.

Games have also played a role in both driving and demonstrating the cutting edge of computing. While they are rarely the sole purpose for advances in computing, they are often the first to exploit new technology and provide a good way for designers and promoters to easily learn and demonstrate the capabilities of new equipment. Programmers have used games as a vehicle for developing more sophisticated machine intelligence [4], as well as graphics techniques. Despite being seen as an amusement, and therefore not of import, ‘the future of “serious” computer software—educational products, artistic and reference titles, and even productivity applications—first becomes apparent in the design of computer games’ [5]. Tracing a history of games then provides some indication of where technology and desire meet. Indeed, while Spacewar might not have been the best use of the PDP-1’s capabilities, it (along with adventure games created at Stanford and the early massively multiplayer games available on the PLATO network) foreshadowed the future of computer entertainment surprisingly well. Moreover, while mainstream prognostications of the future of computing are often notoriously misguided, many had better luck when they looked at the future of computing technology through the lens of computer games.

Computer Gaming to 1975

The groundwork of computer gaming was laid well before computer games were ever implemented. Generally, video games grew out of earlier models for gaming: board and card games, war games, and sports, for example. William Higinbotham’s implementation of a Pong-like game (‘Tennis for Two’) in 1958, using an oscilloscope as a display device, deserves some recognition as being the first prototype of what would come to be a popular arcade game. Generally, though, the first computer game is credited to Steve Russell, who with the help of a group of programmers wrote the first version of the Spacewar game at MIT in 1961. The game quickly spread to other campuses, and was modified by enterprising players. Although Spacewar remained ensconced within the milieu of early hackers, it demonstrated a surprisingly wide range of innovations during the decade following 1961. The earliest versions were quite simple: two ships that could be steered in real time on a CRT and could shoot torpedoes at one another. Over time, elaborations and variations were added: gravity, differing versions of hyperspace, dual monitors, and electric shocks for the losing player, among others. As Alan Kay noted: ‘The game of Spacewar blossoms spontaneously wherever there is a graphics display connected to a computer’ [6].

In many ways, Spacewar typified the computer game until the early 1970s. It was played on an enormously expensive computer, generally within a research university, often after hours. Certainly, there was little thought to this being the sole, or even a ‘legitimate,’ use of the computer. While time was spent playing the game, equally important was the process of creating the game. The differentiation between player and game author had yet to be drawn, and though a recreational activity—and not the intended use of the system—this game playing took place in a research environment. There was no clear relationship between computer gaming and the more prosaic pinball machine.

However, after a ten-year diffusion, Spacewar marked a new kind of computing: a move from the ‘giant brain’ of the forties to a more popular device in the 1970s. Stewart Brand wrote an article in Rolling Stone in 1972 that clearly hooked the popular diffusion of computing to ‘low-rent’ development in computer gaming. Brand begins his article by claiming that ‘ready or not, computers are coming to the people.’ It was within the realm of gaming that the general public first began to see computers as personal machines.

Perhaps more importantly, by taking games seriously, Brand was able to put a new face on the future of computing. At a time when Douglas Engelbart’s graphical user interfaces were being left aside for more traditional approaches to large-scale scientific computing, Brand offered the following:

… Spacewar, if anyone cared to notice, was a flawless crystal ball of things to come in computer science and computer use:
1. It was intensely interactive in real time with the computer.
2. It encouraged new programming by the user.
3. It bonded human and machine through a responsive broadhand (sic) interface of live graphics display.
4. It served primarily as a communication device between humans.
5. It was a game.
6. It functioned best on stand-alone equipment (and disrupted multiple-user equipment).
7. It served human interest, not machine. (Spacewar is trivial to a computer.)
8. It was delightful. (p. 58.)

Brand’s focus was on how people could get hold of a computer, or how they could build one for themselves. The article ends with a listing of the code for the Spacewar game, the first and only time computer code appeared in Rolling Stone. He mentions off-handedly that an arcade version of Spacewar was appearing on university campuses. Brand missed the significance of this. Gaming would indeed spread the use of computing technology, but it would do so without the diffusion of programmable computers. Nonetheless, this early view of the future would be echoed in later predictions over the next 15 years.

On the arcade front, Nolan Bushnell (who would later found Atari) made a first foray into the arcade game market with a commercial version of Spacewar entitled Computer Space in 1971. The game was relatively unsuccessful, in large part, according to Bushnell, because of the complicated game play. His next arcade game was much easier to understand: a game called Pong that had its roots both in a popular television gaming console and in earlier experimentation in electronic gaming. Pong’s simple game play (with instructions easily comprehended by inebriated customers: ‘Avoid missing ball for high score’) drove its success and encouraged the development of a video gaming industry.

Equally important were the tentative television and portable gaming technologies that began to sprout up during the period. Though Magnavox’s Odyssey system enjoyed some popularity with its introduction in 1972, the expense of the television gaming devices and their relatively primitive game play restricted early diffusion. It would take the combination of microprocessor-controlled gaming with the television gaming platform to drive the enormous success of the Atari 2600 and its successors. At the same time, the miniaturization of electronics generally allowed for a new wave of hand-held toys and games. These portable devices remained at the periphery of gaming technology, though these early hand-held games would be forerunners to the Lynx, Game Boy, and PDA-based games that would come later.

By 1975, it was clear that computer gaming, at least in the form of arcade games and home gaming systems, was more than an isolated trend. In the previous year, Pong arcade games and clones numbered over 100,000. In 1975, Sears pre-sold 100,000 units of Atari’s Pong home game, selling out before it had shipped [7]. It had not yet reached its greatest heights (the introduction of Space Invaders several years later would set off a new boom in arcade games, and drive sales of the Atari 2600), but the success of Pong in arcades and at home had secured a place for gaming.

The personal computer market, on the other hand, was still dominated by hobbyists. These would be hallmark years for personal computing, with the Altair system soon joined by the Commodore PET, Atari’s 400 and 800, and Apple’s computers. Despite Atari’s presence and the focus on better graphics and sound, the computer hobbyists remained somewhat distinct from the console gaming and arcade gaming worlds. Byte magazine, first published in 1975, made infrequent mention of computer gaming, and focused more heavily on programming issues.

Brand was both the first and among the most emphatic in using gaming as a guide to the future of computing and society. In the decade between 1975 and 1985, a number of predictions were made about the future of gaming, but most of these were off-handed comments of a dismissive nature. It is still possible to draw out a general picture of what was held as the future of gaming—and with it the future of computing—by examining contemporaneous accounts and predictions [8].

Many of these elements are already present in Brand’s prescient view from 1972. One that he seemed to have missed is the temporary bifurcation of computer gaming into machines built for gaming specifically, and more general computing devices. (At the end of the article, it is clear that Alan Kay—who was at Xerox PARC at the time and would later become chief scientist for Atari—has suggested that Spacewar can be programmed on a computer or created on a dedicated machine, a distinction that Brand appears to have missed.) That split, and its continuing re-combinations, have driven the identity of the PC as both a computer and a communications device. As a corollary, there are periods in which the future seems to be dominated by eager young programmers creating their own games, followed by a long period in which computer game design is increasingly thought of as an ‘art,’ dominated by a new class of pop stars. Finally, over time there evolves an understanding of the future as a vast network, and how this will affect gaming and computer use generally.

Convergence

1975 marks an interesting starting point, because it is in this year that the microprocessor emerges as a unifying element between personal computers and video games. Although early visions of the home gaming console suggested the ability to play a variety of games, most of the early examples, like their arcade counterparts, were limited to a single sort of game, and tended to be multi-player rather than relying upon complex computer-controlled opponents. Moreover, until this time console games were more closely related to television, and arcade video games to earlier forms of arcade games. Early gaming systems, even those that made extensive use of microprocessors, were not, at least initially, computers ‘in the true sense’ [9]. They lacked the basic structure that allowed them to be flexible, programmable machines. The emerging popularity of home computers, meanwhile, was generally limited to those with an electronics and programming background, as well as a significant disposable income.

As consoles, arcade games, and personal computers became increasingly similar in design, their futures also appeared to be more closely enmeshed. At the high point of this convergence, home computers were increasingly able to emulate gaming systems—an adaptor for the VIC-20 home computer allowed it to play Atari 2600 console game cartridges, for example. On the other side, gaming consoles were increasingly capable of doing more ‘computer-like’ operations. As an advertisement in Electronic Gaming for Spectravideo’s ‘Compumate’ add-on to the Atari 2600 asks: ‘Why just play video games? … For less than $80, you can have your own personal computer.’ The suggestion is that rather than ‘just play games,’ you can use your gaming console to learn to program and ‘break into the exciting world of computing.’ Many early computer enthusiasts were gamers who tinkered with the hardware in order to create better gaming systems [10]. This led some to reason that video game consoles might be a ‘possible ancestor of tomorrow’s PC’ [11]. As early as 1979, one commentator noted that the distinction between home computers and gaming consoles seemed to have ‘disappeared’ [12]. An important part of this world was learning to program and using the system to create images and compose music. Just before console sales began to lose momentum in the early 1980s, and home computer sales began to take off, it became increasingly difficult to differentiate the two platforms.

Those who had gaming consoles often saw personal computers as ultimate gaming machines, and ‘graduated’ to these more complex machines. Despite being termed ‘home computers,’ most were installed in offices and schools [13]. Just as now, there were attempts to define the home computer and the gaming console in terms of previous and future technologies, particularly those that had a firm domestic footing. While electronic games (and eventually computer games) looked initially like automated versions of traditional games, eventually they came to be more closely identified with television and broadcasting. With this association came a wedding of their futures. It seemed natural that games would be delivered by cable companies and that videodisks with ‘live’ content would replace the blocky graphics of the current systems. This shift influenced not only the gaming console but the home computer itself. Now associated with this familiar technology, it seemed clear that the future of gaming lay in the elaborations of Hollywood productions. This similarity played itself out in the authoring of games and in attempts to network them, but also in the hardware and software available for the machines.

Many argued that the use of cartridges (‘carts’) for the Atari 2600, along with the use of new microprocessors and the availability of popular arcade games like Space Invaders, catapulted the product to success. Indeed, the lack of permanent storage for early home computers severely limited their flexibility. A program (often in the BASIC programming language) would have to be painstakingly typed into the computer, then lost when the computer was turned off. As a result, this was only appealing to the hard-core hobbyist, and kept less expert users away [14]. Early on, these computers began using audio cassette recorders to record programs, but the process of loading a program into memory was a tedious one. More importantly, perhaps, this process of loading a program into the computer made copy-protection very difficult. By the end of the period, floppy disk drives were in wide use. This remained an expensive technology in the early days, and could easily exceed the cost of the computer itself. Taking a cue from the gaming consoles, many of these new home computers accepted cartridges, and most of these cartridges were games.

The effort to unite the computer with entertainment occurred on an organizational level as well. Bushnell’s ‘Pizza Time Theaters’ drew together food and arcade gaming and were phenomenally successful, at one point opening a new location every five days. Not surprisingly, the traditional entertainment industry saw electronic gaming as an opportunity for growth. Since the earliest days of gaming, the film industry served as an effective ‘back story’ for many of the games. It was no coincidence that 1975’s Shark Jaws (with the word ‘shark’ in very small type), for example, was released very soon after Jaws hit the theaters. The link eventually went the other direction as well, from video games and home computer gaming back into motion pictures, with such films as Tron (1982), WarGames (1983) and The Last Starfighter (1984).

In the early 1980s the tie between films and gaming was well established, with a partnership between Atari and Lucasfilm yielding a popular series of Star Wars-based games, and the creation of the E.T. game (often considered the worst mass-marketed game ever produced for the 2600). Warner Communications acquired Atari—the most successful of the home gaming producers, and eventually a significant player in home computing—in 1976. By 1982, after some significant work in other areas (including the ultimately unsuccessful Qube project, which was abandoned in 1984), Atari accounted for 70% of the group’s total profits. Despite these clear precedents, it is impossible to find any predictions that future ties between popular film and gaming would continue to grow as they have over the intervening fifteen years.

This new association did lead to one of the most widespread misjudgments about the future of gaming: the rise of the laserdisc and interactive video. Dragon’s Lair was the first popular game to make use of this technology. Many predicted that this (or furtive attempts at holography [15]) would save arcade and home games from the dive in sales suffered after 1983, and that just as the video game market rapidly introduced computers to the home, it would also bring expensive laserdisc players into the home. The use of animated or live action video, combined with decision-based narrative games or shooting games, provided a limited number of possible outcomes. Despite the increased attractiveness of the graphics, the lack of interactivity made the playability of these games fairly limited, and it was not long before the Dragon’s Lair machines were collecting dust. Because each machine required (at the time) very expensive laserdisc technology, and because the production costs of games for the system rivaled those of film and television, it eventually became clear that arcade games based on laserdisc video were not profitable, and that home-based laserdisc systems were impractical.

The prediction that laserdiscs would make up a significant part of the future of gaming is not as misguided as it at first seems. The diffusion of writable CD-ROM drives, DVD drives, and MP3 as domestic technologies owes a great deal to gaming—both computer and console-based. At present, few applications make extensive use of the storage capacities of CD-ROMs in the way that games do, and without the large new computer games, there would be little or no market for DVD-RAM and other new storage technologies in the home. Unfortunately, neither the software nor the hardware of the mid-1980s could make good use of the video capability of laserdiscs, and the technology remained too costly to be effective for gaming. A few saw the ultimate potential of optical storage. Arnie Katz, in his column in Electronic Games in 1984, for example, suggests that new raster graphics techniques would continue to be important, and that ‘ultimately, many machines will blend laserdisc and computer input to take advantage of the strengths of both systems’ [16] (this despite the fact that eight months earlier he had predicted that laserdisc gaming would reach the home market by the end of 1983). Douglas Carlston, the president of Broderbund, saw a near future in which Aldous Huxley’s ‘feelies’ were achieved and a user ‘not only sees and hears what the characters in the films might have seen and heard, but also feels what they touch and smells what they smell’ [17]. Overall, it is instructive to note the degree to which television, gaming systems, and home computers each heavily influenced the design of the others. The process continues today, with newer gaming consoles like the Playstation 2 and Microsoft’s new Xbox being internally virtually indistinguishable from the PC. Yet where, in the forecasting of industry analysts and the work of social scientists, is the video game?

A Whole New Game

Throughout the 1970s and 1980s, arcade games and console games were heavily linked. New games were released first as dedicated arcade games, and later as console games. The constraints of designing games for the arcade—those which would encourage continual interest and payment—often guided the design of games that also appeared on console systems. In large part because of this commercial constraint, many saw video games (as opposed to computer games) as a relatively limited genre. Even the more flexible PC-based games, though, were rarely seen as anything but an extension of traditional games in a new modality. Guides throughout the period suggested choosing games using the same criteria one would apply to choosing traditional games. Just as importantly, it was not yet clear how wide the appeal of computerized versions of games would be in the long run. As one board game designer suggested, while video games would continue to become more strategic and sophisticated, they would never capture the same kind of audience enjoyed by the traditional games [18].

Throughout the rapid rise and fall of gaming during the early 1980s, two changes came about in the way people began to think about the future of gaming. On the one hand, there emerged a new view of games not merely as direct translations of traditional models (board games, etc.), but as an artistic pursuit. The media and meta-discourse surrounding the gaming world gave rise to a cult of personality. At the same time, it became increasingly difficult for a single gaming author to create a game in its entirety. The demand cycle for new games, and for increasingly complex and intricate games, not only excluded the novice programmer, but made the creation of a game a team effort by necessity. As such, the industrial scale of gaming increased, leaving smaller companies and individuals unable to compete in the maturing market.

This revolution began with home computers that were capable of more involved and long-term gaming. As one sardonic newspaper column in 1981 noted:

The last barriers are crumbling between television and life. On the Apple II you can get a game called Soft Porn Adventure. The Atari 400 and 800 home computers already can bring you games on the order of Energy Czar or SCRAM, which is a nuclear power plant simulation. This is fun? These are games? [19]

The capabilities of new home computers were rapidly exploited by the new superstars of game design. An article in Popular Computing in 1982 noted that game reviewers had gone so far overboard in praising Chris Crawford’s Eastern Front that they recommended buying an Atari home computer, if you didn’t have one, just to be able to play the game [20]. Crawford was among the most visible group of programmers who were pushing game design beyond the limits of traditional games:

Crawford hopes games like Eastern Front and Camelot will usher in a renaissance in personal computer games, producing games designed for adults rather than teenagers. He looks forward to elaborate games that require thought and stimulate the mind and even multiplayer games that will be played cross-country by many players at the same time, with each player’s computer displaying only a part of the game and using networks linked by telephone lines, satellites, and cable TV.

Crawford extended his views in a book entitled, naturally, The Art of Computer Game Design (1984), in which he provided a taxonomy of computer games and discussed the process of creating a video game. He also devoted a chapter to the future of the computer game. Crawford noted that changes in technology were unlikely to define the world of gaming. Instead, he hoped for new diversity in gaming genres:

I see a future in which computer games are a major recreational activity. I see a mass market of computer games not too different from what we now have, complete with blockbuster games, spin-off games, remake games, and tired complaints that computer games constitute a vast wasteland. I even have a term for such games—cyberschlock. I also see a much more exciting literature of computer games, reaching into almost all spheres of human fantasy. Collectively, these baby market games will probably be more important as a social force than the homogenized clones of the mass market, but individual games in this arena will never have the economic success of the big time games. [21]

In an interview fifteen years later, Crawford laments that such hopes were well off base. Though such hopes were modest—that in addition to the ‘shoot the monsters!’ formula, as he called it, there would be a ‘flowering of heterogeneity’ that would allow for ‘country-western games, gothic romance games, soap-opera games, comedy games, X-rated games, wargames, accountant games, and snob games’ and eventually games would be recognized as ‘a serious art form’—he suggests that over fifteen years they proved to be misguided [22]. In fact, there were some interesting developments in the interim years: everything from SimCity and Lemmings to Myst and Alice. A new taxonomy would have to include the wide range of ‘god games’ in addition to the more familiar first-person shooters. In suggesting the diversification of what games could be, Crawford was marking out a new territory, and reflecting the new-found respectability of an industry that was at the peak of its influence. The view that ‘programmer/artists are moving toward creating an art form ranging from slapstick to profundity’ appeared throughout the next few years [23].

During the same period, there was a short window during which the future of gaming was all about the computer owner programming games rather than purchasing them. Indeed, it seemed that the ability to create your own arcade-quality games would make home computers irresistible [24]. Listings in the BASIC programming language could be found in magazines and books into the early 1980s. It seemed clear that in the future, everyone would know how to program. Ralph Baer noted in a 1977 interview that students ‘should be able to speak one or two computer languages by the age of 18, those who are interested. We’re developing a whole new generation of kids who won’t be afraid to generate software’ [25]. By the time computers began to gain a foothold in the home, they increasingly came with a slot for gaming cartridges, much like the consoles that were available. In part, this was dictated by economic concerns—many of the new manufacturers of home computers recognized that software was both a selling point for the hardware and a long-term source of income [26]—but part of it came with a new view of the computer as an appliance, and not the sole purview of the enthusiast. Computer games during the 1980s outgrew the ability of any single programmer to create them, and it became clear that, in the future, games would be designed more often by teams [27].

Connected Gaming

By the 1980s, there was little question that networking would be a part of the future of gaming. The forerunners of current networked games were already in place. The question, instead, was what form these games would take and how important they would be. The predictions regarding networking tended to shift from the highly interactive experiments in networked computing to the experiments in cable-television and telephone distribution of games in the 1980s. A view from 1981 typifies the importance given to communications and interfaces for the future of gaming. It suggests that in five years’ time:

Players will be able to engage in intergalactic warfare against opponents in other cities, using computers connected by telephone lines. With two-way cable television, viewers on one side of town might compete against viewers on the other side. And parents who think their children are already too attached to the video games might ponder this: Children in the future might be physically attached to the games by wires, as in a lie detector [28].

A 1977 article suggests the creation of persistent on-line worlds that ‘could go on forever,’ and that your place in the game might even be something you list in a will [29]. Others saw these multi-player simulations as clearly a more ‘adult’ form of gaming, one that began to erase the ‘educational/entertainment dichotomy’ [30]. The short-term reality of large-scale on-line gaming remained in many ways a dream during this period, at least for the general public. But the ability to collect a subscription fee led many to believe that multiplayer games were ‘too lucrative for companies to ignore’ [31]. Indeed, multiplayer games like MegaWars could cost up to $100 a week to play, and provided a significant base of subscribers for CompuServe [32].

The software industry had far less ambitious plans in mind, including a number of abortive attempts to use cable and telephone networks to distribute gaming software for specialized consoles. Despite failures in cable and modem delivery, this was still seen as a viable future into the mid-1980s. Even with early successes in large-scale on-line gaming, it would be nearly a decade before the mainstream gaming industry would become involved in a significant way.

Retelling the Future

The above discussion suggests that when predictions are made about the future of gaming, they are often not only good predictors of the future of computing technology, but also indicators of general contemporaneous attitudes toward the technology. Given this, it would seem to make sense that we should turn to current games to achieve some kind of grasp on the future of the technology. It is not uncommon to end a small piece of history with a view to the future, but here I will call for just the opposite: we should look more closely at the evolution of gaming and its social consequences at present.

Despite a recognition that games have been important in the past, we seem eager to move ‘beyond’ games to something more serious. Games seem, by definition, to be trivial. Ken Uston, in a 1983 article in Creative Computing on the future of video games, expressed the feeling:

Home computers, in many areas, are still a solution in search of a problem. It is still basically games, games, games. How can they seriously expect us to process words on the low-end computers? The educational stuff will find a niche soon enough. But home finance and the filing of recipes and cataloguing of our stamp collections has a long way to go.

A similar contempt of gaming was suggested by a New York Times article two years later: ‘The first generation of video games swept into American homes, if ever so briefly. And that was about as far as the home-computer revolution appeared ever destined to go’ [33]. More succinctly, in the issue in which Time named the personal computer its ‘Machine of the Year,’ the magazine notes that the ‘most visible aspect of the computer revolution, the video game, is its least significant’ [34]. Though later the article goes on to suggest that entertainment and gaming will continue to be driving forces over the next decade, the idea of games (at least in their primitive state) is treated disdainfully.

This contempt of gaming, of the audience, and of popular computing, neglects what has been an extremely influential means by which society and culture have come to terms with the new technology. Increasingly, much of the work with computers is seen from the perspective of game-playing [35]. Games are also central to our social life. Certainly, such a view is central to many of the post-modern theorists that have become closely tied to new technologies, who view all discourse as gaming [36]. Within the more traditional sociological and anthropological literature, games have been seen as a way of acculturating our young and ourselves. We dismiss this valuable window on society at our own peril.

A recognition of gaming’s central role in computer technology, as a driving force and early vanguard, should also turn our attention to today’s gamers. Given recent advances in gaming, from involved social simulations like The Sims, to ‘first-person shooters’ like Quake that have evolved new communal forms around them, to what have come to be called ‘massively multiplayer on-line role playing games’ (MMORPGs) like EverQuest and Ultima Online, the games of today are hard to ignore. They have the potential not only to tell us about our relation to technology in the future, but also about the values of our society today. Researchers lost out on this opportunity in the early days of popular computing; we should not make the same mistake.

Notes

1. A copy of this advertisement is available at ‘AdCritic.com’: http://www.adcritic.com/content/sony-playstation2-the-beginning.html (accessed 1 April 2001).
2. Donald A. Thomas, Jr., ‘I.C. When,’ http://www.icwhen.com (accessed 1 April 2001).
3. David Kronke, ‘Program Promises Video Fun N’ Games’, Toronto Star, Entertainment section, 19 March 2000.
4. Ivars Peterson, ‘Silicon Champions of the Game,’ Science News Online, 2 August 1997, http://www.sciencenews.org/sn_arc97/8_2_97/bob1.htm (accessed 1 April 2000).
5. Ralph Lombreglia, ‘In Games Begin Responsibilities,’ The Atlantic Unbound, 21 December 1996, http://www.theatlantic.com/unbound/digicult/dc9612/dc9612.htm (accessed 1 April 2001).
6. Stewart Brand, ‘Spacewar: Fanatic Life and Symbolic Death Among the Computer Bums,’ Rolling Stone, 7 December 1972, p. 58.
7. Thomas.
8. While there is easy access to many of the popular magazines of the period, it remains difficult to obtain some of the gaming magazines and books, and much of the ephemera. The reasons are two-fold: First, academic and public libraries often did not subscribe to the gaming monthlies. Often these were strong advertising vehicles for the gaming industry, and as already suggested, the subject matter is not ‘serious,’ and is often very time-sensitive. More importantly, there has been a strong resurgence of nostalgia for gaming during the period, and this has led to the theft of many periodical collections from libraries. It is now far easier to find early copies of Electronic Games magazine on eBay than it is to locate them in libraries.
9. Martin Campbell-Kelly and William Aspray, Computer: A History of the Information Machine (New York: BasicBooks, 1996), p. 228.
10. Jake Roamer, ‘Toys or Tools,’ Personal Computing, Nov/Dec, 1977, pp. 83-84.
11. Jack M. Nilles, Exploring the World of the Personal Computer (Englewood Cliffs, NJ: Prentice-Hall, 1982), p. 21.
12. Peter Schuyten, ‘Worry Mars Electronics Show,’ New York Times, 7 June 1979, sec. 4, p2, col. 1.
13. Richard Schaffer, ‘Business Bulletin: A Special Background Report,’ Wall Street Journal, 14 September 1978, p.1, col. 5.
14. Mitchell C. Lynch, ‘Coming Home,’ Wall Street Journal, 14 May 1979, p. 1, col. 4.
15. Stephen Rudosh, Personal Computing, July 1981, pp.42-51, 128.
16. Arnie Katz, ‘Switch On! The Future of Coin-Op Video Games,’ Electronic Games, September 1984. Also available on-line at http://cvmm.vintagegaming.com/egsep84.htm (accessed 1 April 2001).
17. Douglas G. Carlston, Software People: An Insider’s Look at the Personal Computer Industry (New York: Simon & Schuster, 1985), p. 269.
18. William Smart, ‘Games: The Scramble to Get On Board,’ Washington Post, 8 December 1982, pg. C5.
19. Henry Allen, ‘Blip! The Light Fantastic,’ Washington Post, 23 December 1981, C1.
20. A. Richard Immel, ‘Chris Crawford: Artist as a Game Designer,’ Popular Computing 1(8), June 1982, pp. 56-64.
21. Chris Crawford, The Art of Computer Game Design (New York: Osborne/McGraw-Hill, 1984). Also available at http://www.vancouver.wsu.edu/fac/peabody/game-book/ and at http://members.nbci.com/kalid/art/art.html (accessed 1 April 2001).
22. Sue Peabody, ‘Interview With Chris Crawford: Fifteen Years After Excalibur and the Art of Computer Game Design,’ 1997, http://www.vancouver.wsu.edu/fac/peabody/game-book/Chris-talk.html (accessed 1 April 2001).
23. Lee The, ‘Giving Games? Go with the Classics,’ Personal Computing, Dec. 1984, pp. 84-93.
24. ‘Do it yourself,’ Personal Computing, Nov/Dec 1977, p. 87.
25. Ralph Baer, ‘Getting Into Games’ (Interview), Personal Computing, Nov/Dec 1977.
26. Carlston, p. 30.
27. Ken Uston, ‘Whither the Video Games Industry?’ Creative Computing 9(9), September 1983, pp. 232-246.
28. Andrew Pollack, ‘Game Playing: A Big Future,’ New York Times, 31 December 1981, sec. 4, pg. 2, col. 1.
29. Rick Loomis, ‘Future Computing Games,’ Personal Computing, May/June 1977, pp. 104-106.
30. H. D. Lechner, The Computer Chronicles (Belmont, CA: Wadsworth Publishing, 1984).
31. Richard Wrege, ‘Across Space & Time: Multiplayer Games are the Wave of the Future,’ Popular Computing 2(9), July 1983, pp. 83-86.
32. Jim Bartimo, ‘Games Executives Play,’ Personal Computing, July, 1985, pp. 95-99.
33. Erik Sandberg, ‘A Future for Home Computers,’ New York Times, 22 September 1985, sec. 6, part 2, pg. 77, col. 5.
34. Otto Friedrich, ‘Machine of the Year: The Computer Moves In,’ Time, 3 January 1983.
35. Richard Thieme, ‘Games Engineers Play,’ CMC Magazine 3(12), 1 December 1996, http://www.december.com/cmc/mag/ (accessed 1 April 2001).
36. For overview, see Ronald E. Day, ‘The Virtual Game: Objects, Groups, and Games in the Works of Pierre Levy,’ Information Society 15(4), 1999, pp. 265-271.

iPad for $250
https://alex.halavais.net/ipad-for-250/
Wed, 05 Jan 2011 23:02:38 +0000

It is the season for resolutions and predictions. Instead I offer questions and metrics. Let’s start with the questions. What do you do with a $250 iPad? Just before Christmas, Amazon cut its basic Kindle to under $100, but I don’t see the first-generation iPad being offered by Apple for $250 any time soon. I do wonder what will happen with all those first-gens when the second comes out sometime this year.

I suspect there will be some generational spread: with serious Apple fanatics needing the new-new, they may end up giving their current devices to kids, parents, and significant others. I am not one of those who goes out to get the new thing right away; I prefer to let others beta test. I wish I had gotten the iPad sooner–it’s a great device–but I suspect that the “marketable features” required to push many to upgrade won’t be enough for me. That said, if I did upgrade, my current iPad would stay in the family.

But what about all those single Apple fanboys out there? Will we see a flood of second-hand iPads on eBay? A lot of these devices have been sold, and given that they are solid-state devices without a lot of moving parts, I can see how they could have a long useful life beyond their first owners.

Part of this thinking is a result of the sea of iPads I saw installed at the Delta terminal at JFK last week. One use of iPads seems to be obvious: micro-kiosks. With a bit of tweaking, these seem to be the obvious replacement for the public telephone, or for use anywhere you see interactive kiosks now. Museum displays, floor directories, employment forms at Target–you get the idea.

The downside is the nastiness of the screen after being touched frequently. It’s bad enough when it’s my own hands that have caused it. Ick.

And they also seem like the sort of blank control panel for all manner of interesting devices. Yes, hackers love the Eee PCs for the same reason: they are cheap and easily interfaced and programmed.

But the Eee and similar devices, besides being much cheaper than the iPad, are also much more easily hacked and tweaked. Sure, you can jailbreak your iPad, but–as a couple of students asked me recently–why bother? There isn’t much you can do with it once you do. Many nettop devices already have a Linux variant installed, and loading the iPad with something else is not trivial.

That said, there is a great deal that can be done with HTML, JavaScript, and a good back end. So, if there is a prediction in here, I would say expect to find iPads in unexpected places, and affixed to other stuff.
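
As a rough illustration of how low that bar is, here is a toy micro-kiosk back end sketched in Python. It is only a sketch: the floor directory is invented for the example, and the only iPad-specific bit is Apple’s web-app meta tag; everything else is a plain web page served over HTTP to a browser you have pointed at the server.

# Toy "micro-kiosk" back end: serves one full-screen directory page to an iPad.
# The floor listing below is invented for the example.
from http.server import BaseHTTPRequestHandler, HTTPServer

FLOORS = {"1": "Lobby and information desk", "2": "Galleries", "3": "Offices"}

PAGE = """<!DOCTYPE html>
<html><head>
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="apple-mobile-web-app-capable" content="yes">
<title>Floor directory</title>
</head><body style="font-family: sans-serif; font-size: 2em;">
<h1>Floor directory</h1>
<ul>%s</ul>
</body></html>"""

class KioskHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Build the list items from the (made-up) directory and send the page.
        items = "".join("<li>%s: %s</li>" % (f, d) for f, d in sorted(FLOORS.items()))
        body = (PAGE % items).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Point the kiosk iPad's browser at http://<server>:8080/
    HTTPServer(("", 8080), KioskHandler).serve_forever()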

Blogging in the plural
https://alex.halavais.net/blogging-in-the-plural/
Sat, 29 Oct 2005 04:07:41 +0000

Most scholarly treatments of blogging begin with a reference to Rebecca Blood’s (2000) history of the idea of blogging, or draw on a standardized definition like the one offered by Jill Walker (2003), which suggests that a weblog is “a frequently updated website consisting of dated entries arranged in reverse chronological order,” and goes on to note the tendency of weblogs to be unedited, made up of brief entries, authored by an individual, and making extensive use of hyperlinks. The focus in such definitions is on the epiphenomenal product of the practice of blogging. As Richard Feynman (1968) notes, “You can know the name of a bird in all the languages of the world, but when you’re finished, you’ll know absolutely nothing whatever about the bird.” We learn about the bird by watching what it is doing.

So varied are the behaviors of bloggers that it is a bit surprising that the same term is used to cover them all. When journalists refer to bloggers, they generally are referring to a group of widely-read, politically-motivated editorialists. Others identify bloggers by a representative average, suggesting, for example, that “the typical blog is written by a teenage girl who uses it twice a month to update her friends and classmates on happenings in her life” (Perseus, 2003). But few weblogs focus on day-to-day politics, and talking about an average blogger is as meaningless as talking about an average book author (Lawley, 2004).

If defining blogging in terms of its artifacts (the software and the web document) or the characteristics of its average participant limits us, where may we turn for a definition? There are four themes that seem to form a core set of practices and beliefs among bloggers: the networked nature of communication, the opportunity for engaging in ongoing conversation, easily produced microcontent, and transparency.

There are some weblogs that have audiences akin to a major newspaper or magazine, but most weblogs have few consistent readers. Traditionally, media that is designed for and reaches such a small audience is referred to as “narrowcasting,” but narrowcasting–cable channels that run only game shows or magazines for Smurf collectors–targets particular, established niche audiences, and often makes content available exclusively to these small audiences. Weblogs instead provide content to as narrow or as broad an audience as might encounter and enjoy the site. These audiences may share little in common except for being regular or irregular readers of a particular site. While other media may act to collect audiences and aggregate opinion and attention, weblogs encourage individualized views of the informational world. Nearly a century ago, Simmel (1964) described the tendency of metropolitans to opt to become part of a number of various social circles that may not fully intersect. Weblogs represent the alternative to broadcasting that allows communication networks to more accurately represent and support these dispersed social networks.

The second hallmark of blogging is that it encourages reciprocal communication. Commentators have often focused on so-called “A-list” weblogs, those that attract the largest number of readers and links, and this has reduced the emphasis on conversation. On the other end, a large number of bloggers might be classified as “mumblers.” The structural equivalent of “lurkers” in other forms of group communication online, mumblers seem to post weblogs to a void, without obvious comments or readers. Even in this case, though, it is clear that one of the motivations for blogging is feedback through comments, links, and other channels. Trackbacks, blogrolls, Technorati tags, and other ways of detecting, measuring, and displaying links help to fulfill this conversational desire.

That blogging content is often accumulated in small segments, and with little commitment of time, represents a third theme. Particularly with the wide availability of free blogging software and hosts, the barriers to entry for blogging are extraordinarily low. While many bloggers invest a significant amount of time in reading and writing within the blogosphere, it is possible to engage this process as little or as much as desired. There is no minimum investment required, and even during a busy day, many bloggers may find the fifteen minutes required to type out a paragraph of commentary.

Finally, weblogs represent a relatively open and unfiltered view of thinking-in-progress. As with each of these themes, it is possible to identify exceptions, but most weblogs are marked by the absence of clear gatekeepers beyond the authors themselves. In one sense, this makes weblogs–even those that are maintained by a group–fairly personal. When companies have attempted to create weblogs written by brand characters or public relations specialists, they have been pilloried by many bloggers. This dedication to transparency has affinities with the open source and free culture movements, and this open process provides others with a model to emulate when they decide to start blogging. This dedication to openness in some cases collides with existing institutionalized business practices that put a premium on secrecy.

These four themes are not unique to blogging. They apply more broadly to systems that support social interaction, including user-editable sites (wikis) and tag-driven sites like del.icio.us and Flickr. The community that makes use of weblogs tends to be among the first to take up other social technologies as well. Though it will almost certainly change over time–and the word “blog” may disappear from the vocabulary–these larger themes seem to have taken hold socially and are likely to continue to be influential.

It is not difficult to find antecedents to these overall themes in both the culture of hacking and that of scholarship–two cultures that share significant common ground (Himanen, 2001). A decade ago Harrison and Stephen (1996) explained why computer networking was of such interest to academics. It played to long-held ideals among scholars that had yet to be realized: “unending and inclusive scholarly conversation; collaborative inquiry limited only by mutual interests; unrestrained access to scholarly resources; independent, decentralized learning; and a timely and universally accessible system for representing, distributing, and archiving knowledge” (p. 32). Weblogs, while not addressing all of these ideals, have already shown themselves to be effective in ways that other, centrally-organized efforts at scholarly networking have not.

Works Cited

Blood, Rebecca. “Weblogs: a History and Perspective.” Rebecca’s Pocket, Sept. 7, 2000.

Feynman, Richard. “What is Science?” The Physics Teacher, 7(6), 1968.

Harrison, Teresa M., and Timothy Stephen. “Computer Networking, Communication, and Scholarship.” Computer Networking and Scholarly Communication in the Twenty-First-Century University. Albany: State University of New York Press, 1996.

Himanen, Pekka. The Hacker Ethic. New York: Random House, 2001.

Lawley, Elizabeth L. Comments during discussion at the Media Ecology Association Annual Conference, Rochester, New York, 2004.

Perseus Development Corp. “The Blogging Iceberg.” 2003.

Simmel, Georg. Conflict and the Web of Group-Affiliations. Trans. Kurt H. Wolff and Reinhard Bendix. New York: Free Press, 1964.

Walker, Jill. “Final Version of Weblog Definition.” jill/txt (June 28, 2003).

[ This is a chunk of stuff that ended up on the “cutting room floor”; part of a chapter for the coming Uses of Blogs book that the editors asked to be excised–or at least substantially reduced. So it ends up in my blog, of course :) ]
