BadgePost Progress Report: peer assessment

Over the last four semesters, beginning in the spring of 2011, I have been using a badge system that allows for peer review and the awarding of badges that can then be shared on the open badge infrastructure. As with many of my experiments with educational technologies, I figured the best way to learn what works is just to dive in and muddle through. I initially intended to start without any specific infrastructure, just running through the process via a wiki, but instead I coded a simple system for managing the badge process, and have tweaked it over time.

It doesn’t work perfectly, but it works well enough, and thanks to some patient and very helpful students, I now know a great deal more about how badges can work in higher education. I make no claim that my successes represent best practices, but I at least know more now than when I started, and figured I would share some of this experience.

Why did you do that?

More than a decade ago, I coded my first blog system for a course, though the term was not widely used then. I did it because there were particular kinds of interactions I wanted to encourage, and existing applications didn’t do quite what I wanted them to. I created my BadgePost system for the same reason. I am not really a coder (I dabble), but what I wanted did not exist, and so I took a shot at prototyping something that might work. (As an aside, I also hope that what happened with blogs happens with badges, and that I can soon download the equivalent of WordPress instead of having to roll my own.) I knew I wanted:

Peer assessment. I wanted to get out of the role of sole reviewer. In many cases peers can give better advice than I can. One of the main difficulties of teaching is rewinding to the perspective of the student, and that can be easier, in some cases, for those who have just learned something. I wanted to enable that kind of open peer review in both hybrid courses and those taught entirely online.

Mastery. I also wanted desperately to get away from letter grades, as they seemed like a plague, not just for undergrad courses, but for grad as well. Students seemed far more interested in the grade than they were in learning something, a refrain I’ve heard frequently from a lot of my colleagues. I wanted to move the focus off of the grade.

Peers as cases. Students often ask me for models of good work, and because I change assignments so frequently, I rarely have a “model.” The advantage to open assessment that travels beyond a single course is that there are exemplars to look at, and (hopefully) they are diverse enough not to stifle creative interpretations by new students.

Unbundling the credential from the course. I had a number of problems that seemed to swirl around the equation of course time to learning objectives. For one, in the required technical courses, some people came in with nothing and others with extensive knowledge, and I wanted to try to address the issue of not all students moving through a program in lock-step. I wanted a back door to reduce redundancy and have instructors know that their students were coming into a course with certain skills. Finally, I wanted to give students a range of choices so that they could pursue the areas they were most interested in.

I also wanted non-paying non-Quinnipiac students participating in my courses to have a portable credential to show for it. And I wanted paying, matriculating students to have an easier way of communicating the kinds of things they had learned in the program.

I won’t cover all of these in detail, but will expound a bit more on the assessment piece…

Peer Assessment

There have been suggestions that the credentialing aspect of badges is separate from the process of assessment that leads to the badge, but in practice I think it’s both likely that they get rolled together, and beneficial when they are. Frankly, students don’t see the distinction, and they can reinforce each other in interesting ways. So, while I have done peer critique in the past, from the outset here, I wanted to get students involved in the process of granting badges via peer critique.

A lot of this was influenced by discussions with Philipp Schmidt and the application of badges in Peer2Peer University. I have long stated the goal of “disappearing” as an instructor in a course, and the place where the instructor’s presence remains most obvious is grading. (And assessment, which is not the same thing, but the two are bound together.) From the outset, I saw the authority of a badge as vested in the material presented as evidence of learning, and the open endorsement/assessment of that work by peers.

There were lots of reasons for this, but part of it was as a demotivator. That is, my least favorite question on the first day of classes is “how do I get an A?” I am always tempted to tell the truth: “I don’t care, and I wish you didn’t either.” So, I wanted badges to provide a way of getting away from that linear grading scale. I went so far as to basically throw grades out, saying that if you showed up on something approaching a regular basis, you’d get an A.

I should say that this was a failure. If anything, students paid more attention to grades, because the unfamiliar system made them have to think about it. It wasn’t onerous, but a lot more of the course became about the assessment process. And it’s funny: my desire to escape grading as a focus and a process has done a 180, and I am now all about assessment. I should explain…

I hate giving traditional tests (I don’t think they show anything), and hate empty work. And while I have since come to like ideas around authentic assessment, from the outside these approaches seemed a lot like more of the same. Now, not only do I think formative assessment is the key element of learning, but that the skill of assessing work in any field is what essentially defines expertise. Being able to tell what constitutes good work allows you to improve the work of others, and importantly, of yourself. At the core of teaching is figuring out what in a piece of work is good, what needs improvement, and how the creator can improve her work.

Beyond Binary

I had expected students to do the work, apply for a badge, and then either get it or not. A lot of other people new to badges seem to have a similar expectation. Just the opposite occurred, and a lot of the changes to my badge system have been to accommodate this.

First, a lot of work that really was not ready for a badge was submitted. I kind of expected students to be very sure of the work that they submitted for a badge, in part because of my experience with blogging in classes, and seeing that students were more careful about their writing when it was for a peer audience. Instead, students often presented work that was not enough for a badge, or barely enough for a badge. I was pleasantly surprised by how much feedback, and in what detail, students gave to their peers.

One of the more concrete changes I made to the system was to move from a binary endorsement (qualified or not, on a number of factors), to a sliding scale, with the center point being passing, and the ability of reviewers to come back and revise their “vote.” As a result, you can see from the evidence of a badge not just what the student has done, but whether their peers thought this was acceptable or awesome.
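To make that concrete, here is a minimal sketch of what a revisable, sliding-scale endorsement might look like, assuming a scale from -2 to +2 with 0 as the passing midpoint. This is an illustration, not the actual BadgePost implementation; the class and method names, and the scale itself, are my inventions.

```python
from dataclasses import dataclass, field

# Illustrative scale: -2 (not acceptable) .. 0 (passing) .. +2 (awesome).
SCALE_MIN, SCALE_MAX = -2, 2

@dataclass
class BadgeApplication:
    student: str
    badge: str
    # One revisable review per reviewer: revising overwrites the earlier vote.
    reviews: dict = field(default_factory=dict)

    def record_review(self, reviewer: str, score: int, comment: str) -> None:
        if not SCALE_MIN <= score <= SCALE_MAX:
            raise ValueError(f"score must be in [{SCALE_MIN}, {SCALE_MAX}]")
        self.reviews[reviewer] = (score, comment)

    def consensus(self) -> float | None:
        """Mean score across reviewers; None if nobody has reviewed yet."""
        if not self.reviews:
            return None
        return sum(s for s, _ in self.reviews.values()) / len(self.reviews)

    def passing(self) -> bool:
        c = self.consensus()
        return c is not None and c >= 0  # 0 is the passing midpoint
```

On a model like this, “acceptable versus awesome” is simply where the mean lands on the scale, and a reviewer who comes back to revise a vote replaces the old one rather than casting a second.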

I’ve also been surprised by how many students nominated themselves for “aspirational” badges. When a user selects a badge, it is moved into their “pending” category, and I was confused by so many pending badges that had no evidence uploaded. But students seem to click on these as a kind of note to themselves that this is what they are pursuing. This, incidentally, creates a problem for reviewers, who look at a pending badge before it is ready and find the process frustrating; communicating that kind of progress is one of the things that needs to improve in the system. I hadn’t planned for that, since I saw badges as an end point rather than a process.
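One sketch of how that progress might be communicated is an explicit lifecycle rather than a single “pending” bucket. The states and names below are my invention for illustration, not how the system actually models it.

```python
from enum import Enum, auto

class BadgeState(Enum):
    ASPIRATIONAL = auto()        # selected as a goal; no evidence yet
    EVIDENCE_SUBMITTED = auto()  # work uploaded, ready for review
    UNDER_REVIEW = auto()        # peers actively assessing
    AWARDED = auto()

def ready_for_review(state: BadgeState) -> bool:
    """Lets reviewers filter out aspirational badges, so they stop
    opening 'pending' badges that have nothing to assess yet."""
    return state in (BadgeState.EVIDENCE_SUBMITTED, BadgeState.UNDER_REVIEW)
```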

The Reappearing Teacher

The other surprise was just how interested students were in getting my imprimatur. But the reason, in this case, was not the grade–they had that. They actually valued my response as an expert a bit more, I think. This was a refreshing change from students turning to the back page of a graded paper to see the grade, and then throwing it out before reading any of my comments. No doubt, some of this comes from a lack of confidence in their peers as well, and I’ve found that in some cases this lack is reasonable.

In some ways, I’m trying to encourage the sempai/kohai relationship, of those who have “gone before” and therefore have more to say about a particular badge. I’ve been reluctant to limit approval to only those who actually have the badge (in part for reasons I’ll note below regarding encouraging reviews), but I may do more of that. There are some kinds of assessment, though, that don’t require having the badge. I don’t need to know how to create a magic trick to be amazed by it, for example. So I don’t want to rule out this kind of “audience assessment.” There is also space for automated assessment. For example, for some badges you need to show a minimum number of tweets, or comments, or responses to comments, or (e.g.) valid HTML. There is no reason to have a human do these pieces of the assessment, though I would hate to see badges that did not involve human assessment at all, in large part because, again, I think building the capacity to do assessments is an important part of the system.
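As a sketch of what those automated pieces might look like: the checks below simply count contributions against minimums. None of this is BadgePost’s actual code; the thresholds are invented, and is_valid_html is a stand-in for a real validator (in practice this would call an external service such as the W3C validator rather than being implemented in-process).

```python
def is_valid_html(document: str) -> bool:
    """Stand-in only: real validation would go through an external
    HTML validator, not an in-process check."""
    raise NotImplementedError

def check_activity_requirements(tweets: list[str], comments: list[str],
                                replies: list[str]) -> dict[str, bool]:
    """Machine-checkable badge requirements; quality still needs humans."""
    return {
        "enough_tweets": len(tweets) >= 20,      # hypothetical minimum
        "enough_comments": len(comments) >= 10,  # hypothetical minimum
        "enough_replies": len(replies) >= 5,     # hypothetical minimum
    }
```

The point of the split is exactly the one above: let the machine verify the countable requirements so human reviewers can spend their attention on the judgment calls.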

The Other Motivation

I began by hoping students would ignore the grading process, and have evolved to think that they should pay a lot of attention to assessment. In some courses, students have jumped into peer assessment. In others–and particularly the undergraduate course I’m teaching this semester–they were slow to get started. I want to think about why people assess, and how to motivate them to be involved.

When I did peer assessments in the pre-badge world, I assigned a grade for the quality of the assessment provided. I want to do something similar here, and a lot of this comes out of a discussion with Philipp Schmidt in Chicago last year. The meta-project here is getting students to be able to analytically assess work and communicate that assessment. Yes, you could do an “expert assessor” badge, or something similar, but the skill itself is more essential to the overall project than any single badge.

One way to do this is inter-coder reliability. If I am considered an expert in the area (and in the current system, this is defined as having badges at a higher level than the one in question, within the same “vertical”), those with less experience should be able to spot the same kinds of things I do, and arrive at a similar quantitative result on the assessments.

So, for example, suppose someone submits the write-up of a content analysis, and two of her peers score the methods section against a particular rubric. Alice may say that it is outstanding, 90/100. Frank might disagree, putting it at 25/100. Of course, both would provide some textual explanation for why they reached these conclusions. Then I come along and give it a 30/100, along with my own critique. In this case, Frank should receive some sort of indication within the system that he has done a good job of performing the assessment. (The dynamics of getting students to do peer assessment at all, which has varied a great deal from course to course, and the question of my own involvement in the assessment, remain an interesting piece of this for me.)

I’m still working out a way to do this that isn’t unnecessarily complex. Right now there is a karma system that gives users karma for performing assessments, with multipliers for agreeing with more experienced assessors, but this is complicated to “tune” and non-intuitive.

There is also the issue of when various levels perform the assessment. For the above process to work, Alice and Frank both need to get their assessments in before I do, and shouldn’t get the same kind of kudos for “me too” assessments after the fact.
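Here is a minimal sketch of how that might be scored, assuming a flat base award, a linear distance-based agreement bonus, and a timestamp check so that assessments filed after the expert’s earn no agreement credit. None of this is the actual karma formula in my system; the constants, the multiplier shape, and the function names are all invented for illustration.

```python
from datetime import datetime

BASE_KARMA = 10.0           # flat award for doing an assessment at all
MAX_MULTIPLIER_BONUS = 1.0  # extra multiplier at perfect agreement

def is_expert(assessor_badges: set[tuple[str, int]],
              vertical: str, level: int) -> bool:
    """Per the current system: an expert holds a badge at a higher
    level than the one in question, within the same vertical."""
    return any(v == vertical and l > level for v, l in assessor_badges)

def karma_for_assessment(peer_score: float, peer_time: datetime,
                         expert_score: float, expert_time: datetime,
                         scale: float = 100.0) -> float:
    """Karma for one peer assessment, judged against a later expert one.

    Agreement is 1.0 for a perfect match, falling linearly to 0.0 at
    the far end of the rubric scale. 'Me too' assessments filed after
    the expert's get the base award only.
    """
    if peer_time >= expert_time:
        return BASE_KARMA
    agreement = 1.0 - abs(peer_score - expert_score) / scale
    return BASE_KARMA * (1.0 + MAX_MULTIPLIER_BONUS * agreement)

# With the example above (my score at 30/100, both peers filing first):
#   Frank at 25/100 -> 10 * (1 + 0.95) = 19.5 karma
#   Alice at 90/100 -> 10 * (1 + 0.40) = 14.0 karma
```

Even in this toy version, the base award and the shape of the bonus interact in non-obvious ways, which is exactly the tuning problem noted above.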

Badges

None of this is necessarily about badges, but it leaves a trail of evidence, conversation, and assessment behind. One of the big questions is whether badge records should be formative or summative. As I said, the degree to which students have engaged in badges as a process rather than an outcome came as a bit of a surprise to me. Right now, much of that process happens pretty openly, but I can fully understand how someone well along in their career may not want to fully expose their learning process. (“May” is operative here–I think doing so is valuable for the learning community!)

On the other hand, I think badges that appeal to authority undermine the whole reason badges are not evil. Badges that make an authoritative appeal (“Yale gave me this badge so it must be good.”) simply reinforce many of the bad structures of learning and credentialing that currently exist. Far better is a record of the work done to show that you understand something or can do something, along with the peers that helped you get there, pointed to and easily found via a digital badge.

Balancing the privacy needs with the need to authentically vest the badge with some authority will be an interesting feat. I suspect I may provide ways of hiding “the work” and only displaying the final version (and final critiques) to the outside world, while preserving the sausage-making process for the learning community itself. But this remains a tricky balance.


Buffet Evals

“Leon Rothberg, Ph.D., a 58-year-old professor of English Literature at Ohio State University, was shocked and saddened Monday after receiving a sub-par mid-semester evaluation from freshman student Chad Berner. The circles labeled 4 and 5 on the Scan-Tron form were predominantly filled in, placing Rothberg’s teaching skill in the ‘below average’ to ‘poor’ range.”

So begins an article in what has become one of the truthiest sources of news on the web. But it is no longer time for mid-semester evals. In most of the US, classes are wrapping up, and professors are chest-deep in grading. And the students–the students are also grading.

Few faculty are great fans of student evaluations, and I think with good reason. Even the best designed instruments–and few are well designed–treat the course like a marketing survey. How did you feel about the textbook that was chosen? Were the tests too hard? And tell us, were you entertained?

Were the student evals used for marketing, that would probably be OK. At a couple of the universities where I taught, evals were made publicly available, allowing students a glimpse of what to expect from a course or a professor. While that has its own problems, it’s not a bad use of the practice. It can also be helpful for a professor who is student-centered (and that should be all of us) and wants to consider this response when redesigning the course. I certainly have benefited from evaluations in that way.

Their primary importance on the university campus, however, is as a measure of teaching effectiveness; often they are used as the main measure of such effectiveness. This is especially true for tenure, and, as many universities incorporate more rigorous post-tenure evaluation, there as well.

Teaching to the Test

A former colleague, who shall remain nameless, noted that priming the student evals was actually pretty easily done, and started with the syllabus. You note why your text choice is appropriate, how you are making sure grading is fair, indicate the methods you use to be well organized and speak clearly, etc. Throughout the semester, you keep using the terms used on the evals to make clear how outstanding a professor you really are. While not all the students may fall for this, a good proportion would, he surmised.

(Yes, this faculty member had ridiculously good teaching evaluations. But from what I knew, he was also an outstanding teacher.)

Or you could just change your wardrobe. Or do one of a dozen other things the literature suggests improves student evaluations.

Or you could do what my car dealership does and prominently note that you are going to be surveyed and if you can’t answer “Excellent” to any item, to please bring it to their attention so they can get to excellent. This verges on slimy, and I can imagine, in the final third of the semester, that if I said this it might even cross over into unethical. Of course, if I do the same for students–give them an opportunity to get to the A–it is called mastery learning, and can actually be a pretty effective use of formative assessment.

Or you could do what an Amazon seller recently did for me, and offer students $10 to remove any negative evaluations. But I think that clearly crosses the line, both in Amazon’s case and in the classroom. (That said, I have on one occasion had students fill out evals in a bar after buying them a pitcher of beer.)

It is perhaps a testament to the general character of the professoriate that in an environment where student evaluations have come to be disproportionately influential on our careers, such manipulation–if it occurs at all–is extremely rare.

It’s the nature of the beast, though: we focus on what is measured. If what is being measured is student attitudes toward the course and the professor, we will naturally focus on those attitudes. While such attitudes are related to the ability to learn new material, they are not equivalent.

Doctor Feelgood

Imagine a hospital that promoted doctors (or dismissed them) based largely on patient reviews. Some of you may be saying “that would be awesome.” Given the way many doctors relate to patients, I am right there with you. My current doctor, Ernest Young, actually takes time to talk to me, listens to me, and seems to care about my health, which makes me want to care about my health too. So, good. And frankly, I do think that student (and patient) evaluation serves an important role.

But–and mind you I really have no idea how hospitals evaluate their staff–I suspect there are other metrics involved. Probably some metrics we would prefer were not (how many patients the doctor sees in an hour) and some that we are happy about (how many patients manage to stay alive). As I type this, I strongly suspect that hospitals are not making use of these outcome measures, but I would be pleased to hear otherwise.

A hospital that promoted only doctors who made patients think they were doing better, and who made important medical decisions for them, and who fed them drugs on demand would be a not-so-great place to go to get well. Likewise, a university that promotes faculty who inflate grades, reduce workload to nil, and focus on entertainment to the exclusion of learning would also be a pretty bad place to spend four years.

If we are talking about teaching effectiveness, we should measure outcomes: do students walk out of the classroom knowing much more than they did when they walked in? And we may also want to measure performance: are professors following practices that we know promote learning? The worst people to determine these things: the legislature. The second worst: the students. The third worst: fellow faculty.

Faculty should have their students evaluated by someone else. They should have their teaching performance peer reviewed–and not just by their departmental colleagues. And yes, well designed student evaluations could remain a part of this picture, but they shouldn’t be the whole thing.

Buffet Evals

I would guess that 95% of my courses are in the top half on average evals, and that a slightly smaller percentage are in the top quarter. (At SUNY Buffalo, our means were reported against department, school, and university means, as well as weighted against our average grade in the course. Not the case at Quinnipiac.) So, my student evals tend not to suck, but there are also faculty who much more consistently get top marks. In some cases, this is because they are young, charming, and cool–three things I emphatically am not. But in many cases it is because they really care about teaching.

These are the people who need to lead reform of the use of teaching evaluations in tenure and promotion. It’s true, a lot of them probably like reading their own reviews, and probably agree with their students that they do, indeed, rock. But a fair number I’ve talked to recognize that these evals are given far more weight than they deserve. Right now, the most vocal opponents of student evaluations are those who are–both fairly and unfairly–consistently savaged by their students at the end of the semester.

We need those who have heart-stoppingly perfect evaluations to stand up and say that we need to not pay so much attention to evaluations. I’m not going to hold my breath on that one.

Short of this, we need to create systems of evaluating teaching that are at least reasonably easy and can begin to crowd out the student eval as the sole quantitative measure of teaching effectiveness.


Review: Planned Obsolescence


It is rare that how a book is made is as important as its content. Robert Rodriguez’s El Mariachi stands on its own as an outstanding action film, yet it is a rare review that does not mention the tiny budget with which it was accomplished. And here it is difficult to resist the urge to note that I, like many others, read Kathleen Fitzpatrick’s new book, Planned Obsolescence, before ink ever met paper. Fitzpatrick opened up the work at various stages of its creation, inviting criticism openly from the public. But in this case, the making of the book–the process of authorship and the community that came together around it–also has direct bearing on the content of the work.

Fitzpatrick’s book is a clear and well-thought out response to what is widely accepted as a deeply dysfunctional form of scholarly dissemination: the monograph. In the introduction, Fitzpatrick suggests that modern academic publishing in many ways operates via zombie logic, reanimating dead forms, feeding off of the living. As a result, it is tempting to conclude that the easiest way to deal with academic publishing is similar to the best cure for zombies: a quick death.

Planned Obsolescence does not take this easy path, and instead seeks to understand what animates the undead book. For Fitzpatrick, this begins with questioning the place and process of peer review, and this in turn forces us to peel back the skin of what lies beneath: authorship, texts, preservation, and the university. The essential question is whether the cure to the zombification of scholarly communication may be found in a new set of digital tools for dissemination, and what the side effects of that cure may be.

In what constitutes the linchpin of her argument, the first chapter takes a bite out of one of the sacred cows of modern academia: peer review as it is currently practiced. Unlike those who have argued–perhaps tactically–that open access and online journals will keep sacrosanct peer review in its current form, Fitzpatrick suggests that new bottles need new wine, and draws on a wide-reaching review of the history and problems of the present system of peer review, a system driven more by credentialing authors than promoting good ideas.

Fitzpatrick does not offer an alternative as much as suggests some existing patterns that may work, including successful community-filtered websites. She acknowledges that these sites tend to promote an idiosyncratic view of “quality,” and that problems like that of the male-dominated discourse on Slashdot would need to be addressed if we do not want to replace one calcified system of groupthink with another. The argument would be strengthened here, I think, with a clearer set of requirements for a proposed alternative system. She presents MediaCommons, an effort she has been involved in that provides a prototype for “peer-to-peer review,” as itself a work in progress. It is not clear that the dysfunctional ranking and rating function of the current peer review system is avoided in many of the alternative popular models she suggests, in which “karma whoring” is often endemic. As such, the discussion of what is needed, and how it might be effectively achieved could have been expanded; meta-moderation of texts is important, but it is not clear whether this is a solution or a temporary salve.

If we move from peer review to “peer-to-peer review,” it will have a significant effect on what we think of as “the author.” In her discussion of the changing nature of authorship, Fitzpatrick risks either ignoring a rich theoretic discussion of the “death of the author” or becoming so embroiled in that discussion that she misses the practical relationship to authors working in new environments of scholarly discourse. She does neither, masterfully weaving together a history of print culture, questions of authorship, and ways in which digital technologies enable and encourage the cutting up and remixing of work, and complicate the question of authorship.

The following two chapters discuss texts and their preservation. As the process of authorship changes, we should expect this to be revealed in the texts produced. Naturally, this includes the potential for hypermedia, but Fitzpatrick suggests a range of potential changes, not least those that make the process of scholarly review and conversation more transparent. This discussion of the potential edges of digital scholarship provides some helpful examples of the variety of scholarly discourse that is afforded by new media forms–a set of potentialities that is richer than the future that is sometimes presented by academic publishers whose visions are clouded by models and technologies that require profitability. The following chapter on the processes of disseminating and preserving this work I found to be particularly enlightening. As in earlier chapters, Fitzpatrick manages to draw together a surprisingly broad set of experiments and analyses into an intriguing and concise synthesis.

The penultimate chapter of the book discusses the question of how to support and sustain the creation of these new texts. The chapter argues that university presses should not try to beat commercial presses at their own game, but should instead invent a new game. It presents a number of models and strategies through which this might be achieved, and suggests those that show the most promise: notably, providing for open access and drawing the university press more directly into the work of the university and its library.

Planned Obsolescence itself was born of the realization that Fitzpatrick’s previous book was rejected due not to the quality of its thought but to the potential for press profit. That her next book is now in a bound physical volume, published by New York University Press, and that this review will itself appear in a bound journal, published by Sage, seems to suggest that in some ways this born-digital scholarly conversation has itself succumbed to the slow-moving process of traditional scholarly publication, and as such might appear as something of a rebuttal to the argument that the only good zombie is a dead one. On the other hand, any criticism I provide above represents a form of slow, printed conversation that is largely outmoded by digital scholarly communication.

In fact, this neatly reflects the complexity of the new structures of scholarly publishing, and the promise for its future: a future in which we stop hiding from zombie books and invite them to a more convivial scholarly conversation. Anyone who is serious about understanding the future of scholarly publishing–and anyone who cares about knowledge and society should share this concern–will find Fitzpatrick’s book an essential, thought-provoking, and highly approachable introduction to the conversation.

Kathleen Fitzpatrick, Planned Obsolescence: Publishing, Technology, and the Future of the Academy. New York: NYU Press, 2011, vii+245 pp. ISBN 0-8147-2788-1, $23 (pbk)

A version of this review is to appear in New Media & Society, which provided me with a review copy.


What does the university offer?

The answer is obvious: courses. But you can get courses anywhere. I’ve written about this before (Dealing Out the Uni), but Jim Groom’s effort to get a new server for his course via Kickstarter has me thinking again.

Earlier this week, while discussing with my students what the traditional university provides that crowdsourced and open options do not, I got an interesting mix of the usual suspects and some answers that I hadn’t heard framed in exactly that way before. (And yes, I am always impressed when students are thoughtful about complex issues.) Here are some of the reasons to go to college despite the increased availability of alternatives:

Credentialing

This is no surprise, of course. One of the reasons to go to an accredited university is for the transcript and the diploma. Long after other structures do learning better (if, indeed, such structures or institutions emerge), the university will maintain a stranglehold on students because of its ability to print educational currency in the form of a transcript. For me, this serves as a good reason to loosen that grip.

When MIT jumps on the badge bandwagon and people start talking about Thrun credits, we might argue that this imperative has already been diminished. But does a MOOC that is not a Stanford course hold the same kind of value? I doubt it. Personal brands do matter: that’s why Howard Rheingold and Edward Tufte, among others, can draw paying students to their seminars. But I wonder how far their letters of completion or endorsement carry.

For the time being, if you want to brand yourself as a graduate, you have to go to a university. And completely regardless of the quality of the instruction at that university, the name must be recognized and valued.

Structure

A university tells you what you should be doing, and not just in the classroom. One of my students was open about the fact that if she didn’t have to go to class she wouldn’t: the university in some sense provides a structure of discipline. (Another student disagreed, saying she would get bored without being able to go to class, and was motivated to attend on her own.) This extended beyond the classroom, though, to “life skills.” For many people this is when they are becoming independent, both living on their own and becoming their own thinkers.

Now, I worry that in some places, universities do a poor job of this, extending adolescence well beyond what might be ideal. Many of our students are too scaffolded, and unwilling or unable to put themselves in the driver’s seat of their own learning career. But it was an interesting suggestion: the university provides a needed structure for learning, and frees the student from some of the “meta”: what’s important? when should I study? what are we doing in class today?

Expertise Curation

I thought one of the more interesting answers was that it was hard to find the right people to teach the right things. Yes, there were a lot of self-styled experts out there, but when you don’t know anything about a field, the university provides a faculty that is presumably made up of people who know what they are doing. Perhaps because I’m not as confident in the ways in which universities filter people, this was never one of my top picks for the reason universities exist, but it is an interesting one. For open alternatives to thrive, they need to present a compelling case that they are providing access to experts, and that can be a difficult thing to do.

In some ways, I’m interested in the model of the European Graduate School, which boasts a star-studded faculty. Where else do you find Derrida and John Waters in the same list? Or Peter Greenaway and Donna Haraway? But it is an interesting question: how do you distinguish expertise when you don’t know anything about a field? You don’t. You leave it up to an institution that can act as a filter, and you trust that they hire the cream of the crop.

Guaranteed Skepticism

Skepticism is built into the university as an institution. One of the students noted that faculty are willing to expand the conversation by taking positions they may not agree with, by raising questions, by placing skepticism and inquiry in an exalted position. Now, I am sure there are other institutions that do this, but it was heartening to hear this from students: one of the values of the university is a professoriate that is not married to the status quo, that takes nothing for granted, and that encourages a community of inquiry.

Virtuous Community

Because you are–in many cases–sharing the same physical space and bound together in a community, there is some feeling that you are expected to serve other students. Showing up to class unprepared isn’t just a personal failure; it in some way lets down others in the community. Of course, you can get virtual communities where similar social capital is built, but it is much harder to achieve in the one-off networked class, where dropping the ball (or the course) might have very little effect on other parts of your life. The investment of time in an undergraduate degree means that you are all in the same boat.

Alt-U

I’m a fan of efforts like P2PU. But I also am not quite ready to give up on the university. I don’t think our only choices are the university as it is today or no universities at all. In fact, those two may be exactly the same thing: the university that does not rapidly change to fit the new environment is likely to be buried by the forces of history. As I’ve said before, we are about to go through a sea change in the way universities work that will make the newspaper shakeup seem tame by comparison. The mountain of student loan debt (some of which I continue to carry) constitutes an educational bubble. When universities find themselves having to contract, the outmoded tenure system will make that difficult.

But I also think that this will force some universities to rapidly innovate their way away from failure. It will be a painful process, but part of that process is figuring out what the real value and strengths of a university are. I think relying on the current hold on the credential is a very short-sighted approach.

In the medium term, one of the best solutions is the liberal arts college / research university hybrid. I also suspect that a successful model exists in universities and university towns merging. The walls of the university are coming down, and with them the distinction between student and faculty and worker. I suspect one model of the future university feels a bit more like a small town with a really good library and really good schools, and the four-year program leading to a slip of paper will slowly fade away.
