The Coming Gaming Machine, 1975-1985

Was going through old backups in the hope of finding some stuff I’ve lost and ran into this, a draft I was working on around the turn of the millennium. It never went anywhere and was never published. I was just pressing delete when I realized it might actually be of interest to someone. It was a bit of an attempt at a history of the future of gaming, written during the heyday of the console. Please excuse any stupidity–I haven’t even looked at it, just copied it over “as is.”

The Coming Gaming Machine, 1975-1985

Abstract

The early 1980s are sometimes referred to as the ‘Golden Age’ of computer games. The explosion of video games–in arcades, as home consoles, and eventually on home computers–led many to question when the fad would end. In fact, rather than an aberration, the decade from 1975 to 1985 shaped our view of what a computer is and could be. In gaming, we saw the convergence of media appliances, the rise of professional software development, and the first ‘killer app’ for networking. During this period, the computer moved from being a ‘giant brain’ to a home appliance, in large part because of the success of computer gaming.

Introduction

Sony’s offering in the game console arena, the Playstation 2, was among the most anticipated new products of the 2000 Christmas season. Although rumors and reviews added to the demand, much of this eagerness was fueled by an expensive international advertising campaign. One of the prominent television spots in the US listed some of the features of a new gaming console, including the ability to ‘tap straight into your adrenal gland’ and play ‘telepathic personal music.’ The product advertised was not the Playstation 2, but the hypothetical Playstation 9, ‘new for 2078.’ The commercial ended with an image of the Playstation 2 and a two-word tag line: ‘The Beginning’1.

The beginning, however, came over twenty-five years earlier with the introduction of home gaming consoles. For the first time, the computer became an intimate object within the home, and a vehicle for collective hopes and fears about the future. In 1975, hundreds of thousands of gaming consoles were sold, and there were dozens of arcade games to choose from. By 1985, the year the gaming console industry was (prematurely) declared dead, estimates put the number of Atari 2600 consoles alone at over 20 million world-wide2.

The natural assumption would be that gaming consoles paved the way for home computers, that the simple graphics and computing power of the Atari 2600 were an intermediate evolutionary step toward a ‘real’ computer. Such a view would obscure both the changes in home computers that made them more like gaming consoles, and the fact that many bought these home computers almost exclusively for gaming. But during the decade following 1975, the view of what gaming was and could be changed significantly. Since gaming was the greatest point of contact between American society and computing machinery, it influenced the way the public viewed and adopted the new technology, and how that technology was shaped to meet these expectations.

The Place of Gaming

When the University of California at Irvine recently announced that it might offer an undergraduate minor in computer gaming, many scoffed at the idea. The lead of an article in the Toronto Star quipped, ‘certainly, it sounds like the punchline to a joke’3. As with any academic study of popular culture, many suggested the material was inappropriate for the university. In fact, despite the relatively brief history of computer gaming, it has had an enormous impact on the development of computing technology, on how computers are seen and used by a wide public, and on the degree to which society has adapted to the technology. Games help define how society imagines and relates to computers, and how it imagines future computers will look and be used. The shift in the public view of computers from ‘giant brains’ to domestic playthings occurred on a broad scale during the ten years between 1975 and 1985, the period coincident with the most explosive growth of computer gaming.

Games have also played a role in both driving and demonstrating the cutting edge of computing. While they are rarely the sole reason for advances in computing, they are often the first to exploit new technology, and they provide an easy way for designers and promoters to learn and demonstrate the capabilities of new equipment. Programmers have used games as a vehicle for developing more sophisticated machine intelligence4, as well as graphic techniques. Despite being seen as an amusement, and therefore not of import, ‘the future of “serious” computer software—educational products, artistic and reference titles, and even productivity applications—first becomes apparent in the design of computer games’5. Tracing a history of games then provides some indication of where technology and desire meet. Indeed, while Spacewar might not have been the best use of the PDP-1’s capabilities, it (along with adventure games created at Stanford and the early massively multiplayer games available on the PLATO network) foreshadowed the future of computer entertainment surprisingly well. Moreover, while mainstream prognostications about the future of computing are often notoriously misguided, many observers had better luck when they looked at that future through the lens of computer games.

Computer Gaming to 1975

The groundwork of computer gaming was laid well before computer games were ever implemented. Generally, video games grew out of earlier models for gaming: board and card games, war games, and sports, for example. William Higinbotham’s implementation of a Pong-like game (‘Tennis for Two’) in 1958, using an oscilloscope as a display device, deserves some recognition as the first prototype of what would come to be a popular arcade game. Generally, though, the first computer game is credited to Steve Russell, who with the help of a group of programmers wrote the first version of the Spacewar game at MIT in 1961. The game quickly spread to other campuses, and was modified by enterprising players. Although Spacewar remained ensconced within the milieu of early hackers, it demonstrated a surprisingly wide range of innovations during the decade following 1961. The earliest versions were quite simple: two ships that could be steered in real time on a CRT and could shoot torpedoes at one another. Over time, elaborations and variations were added: gravity, differing versions of hyperspace, dual monitors, and electric shocks for the losing player, among others. As Alan Kay noted: ‘The game of Spacewar blossoms spontaneously wherever there is a graphics display connected to a computer’6.

In many ways, Spacewar typified the computer game until the early 1970s. It was played on an enormously expensive computer, generally within a research university, often after hours. Certainly, there was little thought of this being the sole, or even a ‘legitimate,’ use of the computer. While time was spent playing the game, equally important was the process of creating it. The differentiation between player and game author had yet to be drawn, and though a recreational activity—and not the intended use of the system—this game playing took place in a research environment. There was no clear relationship between computer gaming and the more prosaic pinball machine.

However, after a ten-year diffusion, Spacewar marked a new kind of computing: a move from the ‘giant brain’ of the forties to a more popular device in the 1970s. Stewart Brand wrote an article in Rolling Stone in 1972 that clearly hooked the popular diffusion of computing to ‘low-rent’ development in computer gaming. Brand begins his article by claiming that ‘ready or not, computers are coming to the people.’ It was within the realm of gaming that the general public first began to see computers as personal machines.

Perhaps more importantly, by taking games seriously, Brand was able to put a new face on the future of computing. At a time when Douglas Engelbart’s graphical user interfaces were being left aside for more traditional approaches to large-scale scientific computing, Brand offered the following:

… Spacewar, if anyone cared to notice, was a flawless crystal ball of things to come in computer science and computer use:
1. It was intensely interactive in real time with the computer.
2. It encouraged new programming by the user.
3. It bonded human and machine through a responsive broadhand (sic) interface of live graphics display.
4. It served primarily as a communication device between humans.
5. It was a game.
6. It functioned best on stand-alone equipment (and disrupted multiple-user equipment).
7. It served human interest, not machine. (Spacewar is trivial to a computer.)
8. It was delightful. (p. 58.)

Brand’s focus was on how people could get hold of a computer, or how they could build one for themselves. The article ends with a listing of the code for the Spacewar game, the first and only time computer code appeared in Rolling Stone. He mentions off-handedly that an arcade version of Spacewar was appearing on university campuses. Brand missed the significance of this. Gaming would indeed spread the use of computing technology, but it would do so without the diffusion of programmable computers. Nonetheless, this early view of the future would be echoed in later predictions over the next 15 years.

On the arcade front, Nolan Bushnell (who would later found Atari) made a first foray into the arcade game market with a commercial version of Spacewar entitled Computer Space in 1971. The game was relatively unsuccessful, in large part, according to Bushnell, because of the complicated game play. His next arcade game was much easier to understand: a game called Pong that had its roots both in a popular television gaming console and in earlier experimentation in electronic gaming. Pong’s simple game play (with instructions easily comprehended by inebriated customers: ‘Avoid missing ball for high score’) drove its success and encouraged the development of a video gaming industry.

Equally important were the tentative television and portable gaming technologies that began to sprout up during the period. Though Magnavox’s Odyssey system enjoyed some popularity with its introduction in 1972, the expense of the television gaming devices and their relatively primitive game play restricted early diffusion. It would take the combination of microprocessor-controlled gaming with the television gaming platform to drive the enormous success of the Atari 2600 and its successors. At the same time, the miniaturization of electronics generally allowed for a new wave of hand-held toys and games. These portable devices remained at the periphery of gaming technology, though these early hand-held games would be forerunners of the Lynx, Game Boy, and PDA-based games that came later.

By 1975, it was clear that computer gaming, at least in the form of arcade games and home gaming systems, was more than an isolated trend. In the previous year, Pong arcade games and clones numbered over 100,000. In 1975, Sears pre-sold 100,000 units of Atari’s Pong home game, selling out before it had shipped7. Gaming had not yet reached its greatest heights (the introduction of Space Invaders several years later would set off a new boom in arcade games, and drive sales of the Atari 2600), but the success of Pong in arcades and at home had secured a place for gaming.

The personal computer market, on the other hand, was still dominated by hobbyists. This was a hallmark year for personal computing: the Altair system would soon be joined by the Commodore PET, Atari’s 400 and 800, and Apple’s computers. Despite Atari’s presence and the focus on better graphics and sound, the computer hobbyists remained somewhat distinct from the console gaming and arcade gaming worlds. Byte magazine, first published in 1975, made infrequent mention of computer gaming, and focused more heavily on programming issues.

Brand was both the first and among the most emphatic to use gaming as a guide to the future of computing and society. In the decade between 1975 and 1985, a number of predictions were made about the future of gaming, but most of these were off-hand comments of a dismissive nature. It is still possible to draw out a general picture of what was held to be the future of gaming—and with it the future of computing—by examining contemporaneous accounts and predictions8.

Many of these elements are already present in Brand’s prescient view from 1972. One that he seemed to have missed is the temporary bifurcation of computer gaming into machines built specifically for gaming and more general computing devices. (At the end of the article, Alan Kay—who was at Xerox PARC at the time and would later become chief scientist for Atari—suggests that Spacewar can either be programmed on a computer or built as a dedicated machine, a distinction Brand appears to have missed.) That split, and its continuing re-combinations, have driven the identity of the PC as both a computer and a communications device. As a corollary, there are periods in which the future seems to be dominated by eager young programmers creating their own games, followed by a long period in which computer game design is increasingly thought of as an ‘art,’ dominated by a new class of pop stars. Finally, over time there evolves an understanding of the future as a vast network, and of how this will affect gaming and computer use generally.

Convergence

1975 marks an interesting starting point, because it is in this year that the microprocessor emerges as a unifying element between personal computers and video games. Although early visions of the home gaming console suggested the ability to play a variety of games, most of the early examples, like their arcade counterparts, were limited to a single sort of game, and tended to be multi-player rather than relying upon complex computer-controlled opponents. Moreover, until this time console games were more closely related to television, and arcade video games to earlier forms of arcade games. Early gaming systems, even those that made extensive use of microprocessors, were not, at least initially, computers ‘in the true sense’9. They lacked the basic structure that allowed them to be flexible, programmable machines. The emerging popularity of home computers, meanwhile, was generally limited to those with an electronics and programming background, as well as significant disposable income.

As consoles, arcade games, and personal computers became increasingly similar in design, their futures also appeared to be more closely enmeshed. At the high point of this convergence, home computers were increasingly able to emulate gaming systems—an adaptor for the Vic-20 home computer allowed it to play Atari 2600 console game cartridges, for example. On the other side, gaming consoles were increasingly capable of doing more ‘computer-like’ operations. An advertisement in Electronic Games for Spectravideo’s ‘Compumate’ add-on to the Atari 2600 asks, ‘Why just play video games? … For less than $80, you can have your own personal computer.’ The suggestion is that rather than ‘just play games,’ you can use your gaming console to learn to program and ‘break into the exciting world of computing.’ An important part of this imagined world was learning to program and using the system to create images and compose music. Many early computer enthusiasts were gamers who tinkered with the hardware in order to create better gaming systems10. This led some to reason that video game consoles might be a ‘possible ancestor of tomorrow’s PC’11. As early as 1979, one commentator noted that the distinction between home computers and gaming consoles seemed to have ‘disappeared’12. Just before console sales began to lose momentum in the early 1980s, and home computer sales began to take off, it became increasingly difficult to differentiate the two platforms.

Those who had gaming consoles often saw personal computers as the ultimate gaming machines, and ‘graduated’ to these more complex devices. Despite being termed ‘home computers,’ most were installed in offices and schools13. Just as now, there were attempts to define the home computer and the gaming console in terms of previous and future technologies, particularly those that had a firm domestic footing. While electronic games (and eventually computer games) initially looked like automated versions of traditional games, they eventually came to be more closely identified with television and broadcasting. With this association came a wedding of their futures. It seemed natural that games would be delivered by cable companies and that videodisks with ‘live’ content would replace the blocky graphics of the current systems. This shift influenced not only the gaming console but the home computer itself. Once gaming was associated with this familiar technology, it seemed clear that its future lay in the elaborations of Hollywood productions. This similarity played itself out in the authoring of games and in attempts to network them, but also in the hardware and software available for the machines.

Many argued that the use of cartridges (‘carts’) for the Atari 2600, along with the use of new microprocessors and the availability of popular arcade games like Space Invaders, catapulted the product to success. Indeed, the lack of permanent storage for early home computers severely limited their flexibility. A program (often in the BASIC programming language) would have to be painstakingly typed into the computer, then lost when the computer was turned off. As a result, these machines appealed only to the hard-core hobbyist, and kept less expert users away14. Early on, these computers began using audio cassette recorders to store programs, but loading a program into memory remained a slow and cumbersome process. More importantly, perhaps, this way of loading programs made copy-protection very difficult. By the end of the period, floppy disk drives were in wide use, though in the early days the technology was expensive and could easily exceed the cost of the computer itself. Taking a cue from the gaming consoles, many of these new home computers accepted cartridges, and most of these cartridges were games.

The effort to unite the computer with entertainment occurred on an organizational level as well. Bushnell’s ‘Pizza Time Theaters’ drew together food and arcade gaming and were phenomenally successful, at one point opening a new location every five days. Not surprisingly, the traditional entertainment industry saw electronic gaming as an opportunity for growth. Since the earliest days of gaming, the film industry served as an effective ‘back story’ for many of the games. It was no coincidence that 1975’s Shark Jaws (with the word ‘shark’ in very small type), for example, was released very soon after Jaws hit the theaters. The link eventually went the other direction as well, from video games and home computer gaming back into motion pictures, with such films as Tron (1982), WarGames (1983) and The Last Starfighter (1984).

In the early 1980s the tie between films and gaming was well established, with a partnership between Atari and Lucasfilm yielding a popular series of Star Wars-based games, and the creation of the E.T. game (often considered the worst mass-marketed game ever produced for the 2600). Warner Communications acquired Atari—the most successful of the home gaming producers, and eventually a significant player in home computing—in 1976. By 1982, after some significant work in other areas (including the ultimately unsuccessful Qube project, which was abandoned in 1984), Atari accounted for 70% of the group’s total profits. Despite these clear precedents, it is impossible to find any predictions that the ties between popular film and gaming would continue to grow as they have over the intervening fifteen years.

This new association did lead to one of the most widespread misjudgments about the future of gaming: the rise of the laserdisc and interactive video. Dragon’s Lair was the first popular game to make use of this technology. Many predicted that this (or furtive attempts at holography15) would save arcade and home games from the dive in sales suffered after 1983, and that just as the video game market had rapidly introduced computers to the home, it would also bring expensive laserdisc players into the home. The use of animated or live-action video, combined with decision-based narrative games or shooting games, provided a limited number of possible outcomes. Despite the increased attractiveness of the graphics, the lack of interactivity made the playability of these games fairly limited, and it was not long before the Dragon’s Lair machines were collecting dust. Because each machine required (at the time) very expensive laserdisc technology, and because the production costs of games for the system rivaled those of film and television, it eventually became clear that arcade games based on laserdisc video were not profitable, and that home-based laserdisc systems were impractical.

The prediction that laserdiscs would make up a significant part of the future of gaming is not as misguided as it at first seems. The diffusion of writable CD-ROM drives, DVD drives, and MP3 as domestic technologies owes a great deal to gaming—both computer and console-based. At present, few applications make extensive use of the storage capacities of CD-ROMs in the way that games do, and without the large new computer games, there would be little or no market for DVD-RAM and other new storage technologies in the home. Unfortunately, neither the software nor the hardware of the mid-1980s could make good use of the video capability of laserdiscs, and the technology remained too costly to be effective for gaming. A few saw the ultimate potential of optical storage. Arnie Katz, in his column in Electronic Games in 1984, for example, suggests that new raster graphics techniques would continue to be important, and that ‘ultimately, many machines will blend laserdisc and computer input to take advantage of the strengths of both systems’ 16 (this despite the fact that eight months earlier he had predicted that laserdisc gaming would reach the home market by the end of 1983). Douglas Carlston, the president of Broderbund, saw a near future in which Aldous Huxley’s ‘feelies’ were achieved and a user ‘not only sees and hears what the characters in the films might have seen and heard, but also feels what they touch and smells what they smell’17. Overall, it is instructive to note the degree to which television, gaming systems, and home computers each heavily influenced the design of the other. The process continues today, with newer gaming consoles like the Playstation 2 and Microsoft’s new Xbox being internally virtually indistinguishable from the PC. Yet where, in the forecasting of industry analysts and work of social scientists, is the video game?

A Whole New Game

Throughout the 1970s and 1980s, arcade games and console games were heavily linked. New games were released first as dedicated arcade games, and later as console games. The constraints of designing games for the arcade—those which would encourage continual interest and payment—often guided the design of games that also appeared on console systems. In large part because of this commercial constraint, many saw video games (as opposed to computer games) as a relatively limited genre. Even the more flexible PC-based games, though, were rarely seen as anything but an extension of traditional games in a new modality. Guides throughout the period suggested choosing computer games using the same criteria one would apply to traditional games. Just as importantly, it was not yet clear how wide the appeal of computerized versions of games would be in the long run. As one board game designer suggested, while video games would continue to become more strategic and sophisticated, they would never capture the same kind of audience enjoyed by traditional games18.

Throughout the rapid rise and fall of gaming during the early 1980s, two changes came about in the way people began to think about the future of gaming. On the one hand, there emerged a new view of games not merely as direct translations of traditional models (board games, etc.), but as an artistic pursuit. The media and meta-discourse surrounding the gaming world gave rise to a cult of personality. At the same time, it became increasingly difficult for a single author to create a game in its entirety. The demand cycle for new, increasingly complex and intricate games not only excluded the novice programmer but made the creation of a game a team effort by necessity. As such, the industrial scale of gaming increased, leaving smaller companies and individuals unable to compete in the maturing market.
This revolution began with home computers that were capable of more involved and long-term gaming. As one sardonic newspaper column in 1981 noted:

The last barriers are crumbling between television and life. On the Apple II you can get a game called Soft Porn Adventure. The Atari 400 and 800 home computers already can bring you games on the order of Energy Czar or SCRAM, which is a nuclear power plant simulation. This is fun? These are games? 19

The capabilities of new home computers were rapidly exploited by the new superstars of game design. An article in Popular Computing in 1982 noted that game reviewers had gone so far overboard in praising Chris Crawford’s Eastern Front that they recommended buying an Atari home computer, if you didn’t already have one, just to be able to play the game20. Crawford was among the most visible group of programmers who were pushing game design beyond the limits of traditional games:

Crawford hopes games like Eastern Front and Camelot will usher in a renaissance in personal computer games, producing games designed for adults rather than teenagers. He looks forward to elaborate games that require thought and stimulate the mind and even multiplayer games that will be played cross-country by many players at the same time, with each player’s computer displaying only a part of the game and using networks linked by telephone lines, satellites, and cable TV.

Crawford extended his views in a book entitled, naturally, The Art of Computer Game Design (1984), in which he provided a taxonomy of computer games and discussed the process of creating a video game. He also devoted a chapter to the future of the computer game, noting that changes in technology were unlikely to define the world of gaming. Instead, he hoped for a new diversity of gaming genres:

I see a future in which computer games are a major recreational activity. I see a mass market of computer games not too different from what we now have, complete with blockbuster games, spin-off games, remake games, and tired complaints that computer games constitute a vast wasteland. I even have a term for such games—cyberschlock. I also see a much more exciting literature of computer games, reaching into almost all spheres of human fantasy. Collectively, these baby market games will probably be more important as a social force than the homogenized clones of the mass market, but individual games in this arena will never have the economic success of the big time games.21

In an interview fifteen years later, Crawford lamented that such hopes were well off base. Though the hopes were modest—that in addition to the ‘shoot the monsters!’ formula, as he called it, there would be a ‘flowering of heterogeneity’ that would allow for ‘country-western games, gothic romance games, soap-opera games, comedy games, X-rated games, wargames, accountant games, and snob games,’ and that eventually games would be recognized as ‘a serious art form’—he suggested that the intervening years had proved them misguided22. In fact, there were some interesting developments in the interim: everything from Sim City and Lemmings to Myst and Alice. A new taxonomy would have to include the wide range of ‘god games’ in addition to the more familiar first-person shooters. In suggesting the diversification of what games could be, Crawford was marking out a new territory, and reflecting the new-found respectability of an industry at the peak of its influence. The view that ‘programmer/artists are moving toward creating an art form ranging from slapstick to profundity’ appeared throughout the next few years23.

During the same period, there was a short window in which the future of gaming seemed to belong to computer owners programming games rather than purchasing them. Indeed, it seemed that the ability to create your own arcade-quality games would make home computers irresistible24. Listings in the BASIC programming language could be found in magazines and books into the early 1980s. It seemed clear that in the future, everyone would know how to program. Ralph Baer noted in a 1977 interview that students ‘should be able to speak one or two computer languages by the age of 18, those who are interested. We’re developing a whole new generation of kids who won’t be afraid to generate software’25. By the time computers began to gain a foothold in the home, they increasingly came with a slot for gaming cartridges, much like the consoles that were available. In part, this was dictated by economic concerns—many of the new manufacturers of home computers recognized that software was both a selling point for the hardware and a long-term source of income26—but part of it came with a new view of the computer as an appliance, and not the sole purview of the enthusiast. Computer games during the 1980s grew beyond what any single programmer could create, and it became clear that, in the future, games would more often be designed by teams27.

Connected Gaming

By the 1980s, there was little question that networking would be a part of the future of gaming. The forerunners of current networked games were already in place. The question, instead, was what form these games would take and how important they would be. Predictions regarding networking tended to shift from the highly interactive experiments in networked computing to the experiments in cable-television and telephone distribution of games in the 1980s. A view from 1981 typifies the importance given to communications and interfaces for the future of gaming. It suggests that in five years’ time:

Players will be able to engage in intergalactic warfare against opponents in other cities, using computers connected by telephone lines. With two-way cable television, viewers on one side of town might compete against viewers on the other side. And parents who think their children are already too attached to the video games might ponder this: Children in the future might be physically attached to the games by wires, as in a lie detector28.

A 1977 article suggests the creation of persistent on-line worlds that ‘could go on forever,’ and that your place in the game might even be something you list in a will29. Others saw these multi-player simulations as a clearly more ‘adult’ form of gaming, one that began to erase the ‘educational/entertainment dichotomy’30. The short-term reality of large-scale on-line gaming remained in many ways a dream during this period, at least for the general public. But the ability to collect a subscription fee led many to believe that multiplayer games were ‘too lucrative for companies to ignore’31. Indeed, multiplayer games like Mega Wars could cost up to $100 a week to play, and provided a significant base of subscribers for Compuserve32.

The software industry had far less ambitious plans in mind, including a number of abortive attempts to use cable and telephone networks to distribute gaming software for specialized consoles. Despite failures in cable and modem delivery, this was still seen as a viable future into the mid-1980s. Even with early successes in large-scale on-line gaming, it would be nearly a decade before the mainstream gaming industry would become involved in a significant way.

Retelling the Future

The discussion above suggests that predictions about the future of gaming are often not only good predictors of the future of computing technology, but also indicators of general contemporaneous attitudes toward the technology. Given this, it would seem to make sense to turn to current games for some grasp of the technology’s future. It is not uncommon to end a small piece of history with a view to the future, but here I will call for just the opposite: we should look more closely at the evolution of gaming and its social consequences in the present.

Despite a recognition that games have been important in the past, we seem eager to move ‘beyond’ games to something more serious. Games seem, by definition, to be trivial. Ken Uston, in a 1983 article in Creative Computing on the future of video games, expressed the feeling:

Home computers, in many areas, are still a solution in search of a problem. It is still basically games, games, games. How can they seriously expect us to process words on the low-end computers? The educational stuff will find a niche soon enough. But home finance and the filing of recipes and cataloguing of our stamp collections has a long way to go.

A similar contempt for gaming was suggested by a New York Times article two years later: ‘The first generation of video games swept into American homes, if ever so briefly. And that was about as far as the home-computer revolution appeared ever destined to go’33. More succinctly, in the issue in which Time named the personal computer its ‘Machine of the Year,’ the magazine noted that the ‘most visible aspect of the computer revolution, the video game, is its least significant’34. Though the article goes on to suggest that entertainment and gaming will continue to be driving forces over the next decade, the idea of games (at least in their primitive state) is treated disdainfully.

This contempt for gaming, for its audience, and for popular computing neglects what has been an extremely influential means by which society and culture have come to terms with the new technology. Increasingly, much of the work we do with computers is seen from the perspective of game-playing35. Games are also central to our social life. Certainly, such a view is central to many of the post-modern theorists who have become closely tied to new technologies, and who view all discourse as gaming36. Within the more traditional sociological and anthropological literature, games have been seen as a way of acculturating our young and ourselves. We dismiss this valuable window on society at our own peril.

A recognition of gaming’s central role in computer technology, as a driving force and early vanguard, should also turn our attention to today’s gamers. From involved social simulations like The Sims, to ‘first-person shooters’ like Quake that have evolved new communal forms around them, to what have come to be called ‘massively multiplayer on-line role-playing games’ (MMORPGs) like Everquest and Ultima Online, the games of today are hard to ignore. They have the potential to tell us not only about our relation to technology in the future, but also about the values of our society today. Researchers missed this opportunity in the early days of popular computing; we should not make the same mistake.

Notes

1. A copy of this advertisement is available at ‘AdCritic.com’: http://www.adcritic.com/content/sony-playstation2-the-beginning.html (accessed 1 April 2001).
2. Donald A. Thomas, Jr., ‘I.C. When,’ http://www.icwhen.com (accessed 1 April 2001).
3. David Kronke, ‘Program Promises Video Fun N’ Games’, Toronto Star, Entertainment section, 19 March 2000.
4. Ivars Peterson, ‘Silicon Champions of the Game,’ Science News Online, 2 August 1997, http://www.sciencenews.org/sn_arc97/8_2_97/bob1.htm (accessed 1 April 2000).
5. Ralph Lombreglia, ‘In Games Begin Responsibilities,’ The Atlantic Unbound, 21 December 1996, http://www.theatlantic.com/unbound/digicult/dc9612/dc9612.htm (accessed 1 April 2001).
6. Stewart Brand, ‘Spacewar: Fanatic Life and Symbolic Death Among the Computer Bums,’ Rolling Stone, 7 December 1972, p 58.
7. Thomas.
8. While there is easy access to many of the popular magazines of the period, it remains difficult to obtain some of the gaming magazines and books, and much of the ephemera. The reasons are two-fold. First, academic and public libraries often did not subscribe to the gaming monthlies: these were often strong advertising vehicles for the gaming industry, and as already suggested, the subject matter is not ‘serious,’ and is often very time-sensitive. More importantly, there has been a strong resurgence of nostalgia for the gaming of this period, and this has led to the theft of many periodical collections from libraries. It is now far easier to find early copies of Electronic Games magazine on eBay than it is to locate them in libraries.
9. Martin Campbell-Kelly and William Aspray, Computer: A History of the Information Machine (New York: BasicBooks, 1996), p. 228.
10. Jake Roamer, ‘Toys or Tools,’ Personal Computing, Nov/Dec, 1977, pp. 83-84.
11. Jack M. Nilles, Exploring the World of the Personal Computer (Englewood Cliffs, NJ: Prentice-Hall, 1982), p. 21.
12. Peter Schuyten, ‘Worry Mars Electronics Show,’ New York Times, 7 June 1979, sec. 4, p. 2, col. 1.
13. Richard Schaffer, ‘Business Bulletin: A Special Background Report,’ Wall Street Journal, 14 September 1978, p. 1, col. 5.
14. Mitchell C. Lynch, ‘Coming Home,’ Wall Street Journal, 14 May 1979, p. 1, col. 4.
15. Stephen Rudosh, Personal Computing, July 1981, pp. 42-51, 128.
16. Arnie Katz, ‘Switch On! The Future of Coin-Op Video Games,’ Electronic Games, September 1984. Also available on-line at http://cvmm.vintagegaming.com/egsep84.htm (accessed 1 April 2001).
17. Douglas G. Carlston, Software People: An Insider’s Look at the Personal Computer Industry (New York: Simon & Schuster, 1985), p. 269.
18. William Smart, ‘Games: The Scramble to Get On Board,’ Washington Post, 8 December 1982, p. C5.
19. Henry Allen, ‘Blip! The Light Fantastic,’ Washington Post, 23 December 1981, C1.
20. A. Richard Immel, ‘Chris Crawford: Artist as a Game Designer,’ Popular Computing 1(8), June 1982, pp. 56-64.
21. Chris Crawford, The Art of Computer Game Design (New York: Osborne/McGraw-Hill, 1984). Also available at http://www.vancouver.wsu.edu/fac/peabody/game-book/ and at http://members.nbci.com/kalid/art/art.html (accessed 1 April 2001).
22. Sue Peabody, ‘Interview With Chris Crawford: Fifteen Years After Excalibur and the Art of Computer Game Design,’ 1997, http://www.vancouver.wsu.edu/fac/peabody/game-book/Chris-talk.html (accessed 1 April 2001).
23. Lee The, ‘Giving Games? Go with the Classics,’ Personal Computing, Dec. 1984, pp. 84-93.
24. ‘Do it yourself,’ Personal Computing, Nov/Dec 1977, p. 87.
25. Ralph Baer, ‘Getting Into Games’ (Interview), Personal Computing, Nov/Dec 1977.
26. Carlston, p. 30.
27. Ken Uston, ‘Whither the Video Games Industry?’ Creative Computing 9(9), September 1983, pp. 232-246.
28. Andrew Pollack, ‘Game Playing: A Big Future,’ New York Times, 31 December 1981, sec. 4, p. 2, col. 1.
29. Rick Loomis, ‘Future Computing Games,’ Personal Computing, May/June 1977, pp. 104-106.
30. H. D. Lechner, The Computer Chronicles (Belmont, CA: Wadsworth Publishing, 1984).
31. Richard Wrege, ‘Across Space & Time: Multiplayer Games are the Wave of the Future,’ Popular Computing 2(9), July 1983, pp. 83-86.
32. Jim Bartimo, ‘Games Executives Play,’ Personal Computing, July, 1985, pp. 95-99.
33. Erik Sandberg, ‘A Future for Home Computers,’ New York Times, 22 September 1985, sec. 6, part 2, p. 77, col. 5.
34. Otto Friedrich, ‘Machine of the Year: The Computer Moves In,’ Time, 3 January 1983.
35. Richard Thieme, ‘Games Engineers Play,’ CMC Magazine 3(12), 1 December 1996, http://www.december.com/cmc/mag/ (accessed 1 April 2001).
36. For overview, see Ronald E. Day, ‘The Virtual Game: Objects, Groups, and Games in the Works of Pierre Levy,’ Information Society 15(4), 1999, pp. 265-271.


Mind the MOOC?

Siva Vaidhyanathan has a new post up on the Chronicle blog that takes on the hype cycle around MOOCs. Which is a good thing. Experimenting with new ways of learning online and off, particularly in higher ed, is more than a worthwhile venture. I think it probably does have a lot to do with the future of the university.

But maybe not in the way University of Virginia Rector Helen Dragas and others seem to think. For those not playing at home, UVa recently went through a very public and destructive firing and rehiring of its president. The reason, it turned out, was that its Board of Visitors seemed to think the university should be engaging in creative destruction more quickly. Or something similar to that. They wanted more motion, faster. And MOOCs seem to be the current darling of what elite institutions can do to… well, to forestall the inevitable.

To be clear, I agree with the economic doom-casters. I think we are in for a cataclysmic and rapid change in what universities do in the US. I think it will feel a bit like an echo of the newspaper collapse, and in particular, we will see a large number of universities and colleges not make it through the process. Part of that is that there will be challengers outside of traditional universities, and part of it will be that traditional universities will find ways of reaching new students. A big part will be rapid changes in how universities–particularly private universities–are funded.

But I think Siva has MOOCs wrong, in part by assuming that there is a thing called a MOOC and that it is a stable sort of thing. In particular, he notes:

Let me pause to say that I enjoy MOOCs. I watch course videos and online instruction like those from the Khan Academy … well, obsessively. I have learned a lot about a lot of things beyond my expertise from them. My life is richer because of them. MOOCs inform me. But they do not educate me. There is a difference.

So, there is a question of terminology. Are Khan courses MOOCs? Let’s assume they hold together into courses and curricula; even then, are they MOOCs? Are MIT’s Open Courses MOOCs? I think calling these MOOCs makes about as much sense as calling a BOOK a MOOC. These are the open resources that make up an important part of a scalable online open course (a SOOC! I can wordify too!).

The main issue here is, I think, his insistence on this idea of “education.” I don’t think I believe in education any more. I’m not sure I believe teaching is much more than setting the stage for the important bit: learning. But he is suggesting that there is more here. That education consists of more than just learning.

But I also think it is way too early to guess at what “MOOCs” do well, when they are a moving target. The idea that calculus or chemistry instruction scales well but history or philosophy does not has, I think, a lot more to do with institutional structures and university politics than with the nature of learning these things.

I think one of the major problems for universities–both the elite institutions Siva is talking about and the “less elite” universities and colleges–is that they are the wrong tool for the problem they face. They face students coming to college not well prepared by high schools. The first two years are remedial work, often outsourced to adjunct labor. And since the university wants to put its resources into the “meat” of education, the cool stuff students don’t get to until senior year, it is screwing up what happens before that point.

The result is Bio 101 and English 101, courses that best reflect the worst in college education. They are either 30-student courses taught by first-year grad students and/or adjuncts, or 1,200-student courses that involve showing up to class, memorizing key terms, and regurgitating them into the appropriate bubble on a Scantron form. It’s not the 20-person senior seminar on Kierkegaard’s lesser-known knitting patterns that is the target of MOOCs; it is the Bio 101s.

Now, part of the problem is that many large state schools (and small private colleges) only have Bio 101s. I regularly had students at the senior level at SUNY Buffalo who had never written a term paper. At Quinnipiac (which boasts very few giant lecture courses), I heard something similar. As bad as Bio 101 is, it’s a cash cow for the university. If you are able to can that cash cow, all the better.

But here’s the trick: if you are able to can it and make it available to all for free, it’s not a cash cow, it’s an open service to society. It is not the best solution to the problem (reminder: the problem is failing public secondary and primary education in the US), but it is a stop-gap that doesn’t soak the student.

At present, scaled courses follow the trajectory of the giant lecture-hall courses of the last two decades: lecture and multiple choice. The real innovation in MOOCs is the potential for creating networked learning communities within these massive courses. I think it’s possible we can do that. I also think it’s going to take a lot of work, and a lot of time. Which means money.

So, if administrators are excited about MOOCs, I say: good. If they don’t understand the monetization of open education resources, I say: join the crowd.


Research Universities and the Future of America

In case you haven’t seen this yet…

This is one of those cases where fostering the elite is a good thing. Nothing wrong with funding community colleges or making tuition at four-year institutions more reasonable, but we are systematically undermining our country both economically and culturally by undercutting our large research universities. All hail the MOOC, and for goodness sake, make higher education function more effectively, but don’t use it as an excuse to take the “research” out of research university…


MaKey MaKey

This makes physical computing dead simple. I’m in for one.
