Teacher – A Thaumaturgical Compendium
https://alex.halavais.net
Things that interest me.

Massifying Higher Learning
Mon, 18 Nov 2013
https://alex.halavais.net/massifying-higher-learning/

Mooc?
Unless you’ve been under a rock for the last few years, you know that there has been a massive change in education recently. Sure, some of the hyperbole has abated, but a lot of people are still thinking about how a single person might teach far more than a single classroom of students. In some cases, they have extended their voice to such a degree that it reaches thousands or tens of thousands of people.

Of course, there are issues here. For one thing, this kind of education is mostly one-way. Yes, there are ways of feeding back, and a lucky student might be able to grab a sliver of the teacher’s attention, but generally it’s about the passing of knowledge from one to many. Students do, in many cases, get together in groups and discuss the work, building on one another’s knowledge. But this isn’t the same as the circle around the sage, the guided conversation that has gone on as long as we have had schools.

Transforming teaching from the kind of conversational learning community that is–at least ideally–found in the university classroom to this sort of massive version is more than just switching media. It means that the teacher has to shift the structure and format of her work. In many cases, this means moving to text, but it also means framing the work more didactically, shaping ideas into units and subunits that can be consumed in little bites.

And this can be amazingly advantageous to many students: students who can’t necessarily get into a classroom each week because of the expense of tuition, or because they have full-time jobs and maybe families. Yes, this is potentially a less rich form of learning than a classroom, but it can reach people who might otherwise never get access to that education. And it can be provided relatively cheaply, often for free.

Now, some of these are absolute crap. No, check that: I would argue that most are absolute crap. And some have called for their elimination because of that, and for a return to more traditional forms of face-to-face learning. The response is natural, and especially when presented with some of these weak examples, it’s clear why critics might want to do away with these courses altogether.

But here’s the key. Many of those who engage in this massified version of learning find their way into the more traditional classroom, or maybe even into new forms of learning communities we haven’t yet thought of. In other words, although this new form may replace some traditional classroom learning, every indication is that there are opportunities for synergy between traditional forms of learning and new media. There have been calls to move away from the massification of learning, and as someone interested in networked approaches, I am sympathetic to them. But I think a much more reasonable approach is to ask how we can use these largely “broadcast” models to enhance what we do in the traditional face-to-face classroom.

So, I say, let’s keep up with this book-writing thing. It might turn out to actually be worthwhile.

Do online classes suck?
Sat, 08 Dec 2012
https://alex.halavais.net/do-online-classes-suck/

Before arriving at my current posting, I would have thought the idea that online classes compared poorly to their offline counterparts was one that was slowly and inevitably fading away. But a recent suggestion by a colleague that we might tell incoming freshmen that real students take traditional meatspace courses and those just interested in a diploma go for the online classes caught me a bit off-guard.

I want to be able to argue that online courses are as good as their offline counterparts, but it’s difficult, because we don’t really know that. And this is for a lot of reasons.

The UoP Effect

First, if traditional and elite universities had been the originators of successful online courses and degrees, or if they had promoted those successes better (since I suspect you can find some pretty substantial successes reaching back at least three decades), we wouldn’t have the stigma of the University of Phoenix and its kin. For many, UoP is synonymous with online education, particularly in these parts (i.e., Phoenix).

Is UoP that bad? I don’t know. All I have to judge them by is the people I’ve met with UoP degrees (I was not at all impressed) and what I’ve heard from students. What I do know is that they spend a lot of money on advertising and recruiting, and not very much money on faculty, which to me suggests that it is a bad deal.

Many faculty see what UoP and even worse for-profit start-ups are doing and rightly perceive it as a pretty impoverished model for higher education. They rightly worry that if their own university becomes known for online education, it will carry the same stigma a University of Phoenix degree does.

The Adjuncts

At ASU, as with many other research universities, the online courses are far more likely to be taught by contingent faculty rather than core tenure-track faculty, and as a result the students are more likely to end up with the second string. I’ll apologize in advance for demeaning adjuncts: I know full well that if you stack up the best teachers in any department there is a good chance that adjuncts will be among them, or even predominate. But on average, I suspect that a class taught by an adjunct instructor is simply not as good as one taught by full-time research faculty. There are a lot of reasons for this, but perhaps the most important one is that they do not have the level of support from the university that regular faculty do.

I’ve been told by a colleague here that they wanted to teach in the online program but were told that they were “too expensive” to be employed in that capacity. And there is a model emerging that separates out course design, “delivery” (ugh!) or “facilitation,” and evaluation. But I suspect the main reason more full-time faculty don’t teach online is more complicated.

Online is for training, not complex topics

This used to be “Would you trust a brain surgeon with an online degree?” which is actually a pretty odd question. Brain surgeons in some ways have more in common with auto mechanics than they do with engineers, but the point was to test whether you would put yourself in mortal danger if you were claiming online education was good. Given how much surgery is now done using computer-controlled tools, I think some of that question is moot now, but there remains this idea that you can learn how to use Excel online, but you certainly cannot learn about social theory without the give-and-take of a seminar.

It’s a position that is hard for me to argue against, in large part because it’s how almost all of us in academia learned about these things. I too was taught in that environment, and for the most part, my teaching happens in that environment. As one colleague noted, teaching in a physical classroom is something they have been taught how to do and have honed as a craft; they do it really well. Why, then, should they be forced to compete for students with online courses when they know they would not be as effective a teacher in that environment?

But in many ways this is a self-fulfilling prophecy. Few schools require “traditional” faculty to teach online, though they may allow or even encourage it. As a result the best teachers are not necessarily trying to figure out how to make online learning great. We are left with the poor substitute of models coming from industry (modules teaching employees why they should wear a hair net) and the cult of the instructional designer.

Instructional Designers

Since I’ve already insulted adjuncts, I’ll extend the insult to instructional designers. I know a lot of brilliant ones, but their “best practices” turn online education into the spoon-feeding, idiot-proof nonsense that many faculty think it is. It is as if the worst of college education has been simmered down to a fine paste, and this paste can then be flavored with “subject expertise.” Many are Blackboard personified.

When you receive a call–as I recently did–for proposals to change your course so that it can be graded automatically, using multiple guess exams and the like, it makes you wonder what the administration thinks good teaching is.

I am a systematizer. I love the idea of learning objectives aligned with assessments and all that jazz. But in sitting through a seminar on Quality Matters recently, we found ourselves critiquing a course that encouraged participation on a discussion board. How did discussion align with the learning objectives? It didn’t. OK, let’s reverse engineer it. How can you come up with a learning objective, other than “can discuss matters cogently in an online forum,” that encourages the use of discussion-based learning? Frankly, one of the outcomes of discussion is a personalized form of learning, a learning outcome that really comes out as “Please put your own learning outcome here, decided either before or after the class.” Naturally, such a learning outcome won’t sit well with those who follow the traditional mantra of instructional design.

QM has its heart in the right place: it provides a nice guideline for making online courses more usable, and that’s important. But what is vital is making online spaces worthy of big ideas, and not just training exercises.

The Numbers

I like the idea of the MOOC, and frankly, it makes a lot of sense for a lot of courses. It’s funny when people claim their 100-student in-person class is more engaging than a 1,000-student online course. In most cases, this is balderdash. Perhaps it is a different experience for the 10 people who sit up front and talk, but generally, big classes online are better for more students than big classes offline.

Now, if you are a good teacher, chances are you do more than lecture-and-test. You get students into small groups, and they work together on meaningful projects, and the like. Guess what: that’s true of the good online instructors as well.

I think you can create courses that scale without reducing them to delivery-and-test. ASU is known for doing large-scale adaptive learning for our basic math courses, for example, and I think there are models for large-scale conversation that can be applied to scalable models for teaching. It requires decentering the instructor–something many of my colleagues are far from comfortable with–but I am convinced highly scalable models for interaction can be developed further. But scalable courses aren’t the only alternative.

I think the Semester Online project, which allows students from a consortium of universities to take specialized small classes online, is a great way to start to break the “online = big” perception. Moreover, you can make small online course materials and interactions open, leading to a kind of TOOC (Tiny Open Online Course) or a Course as a Fishbowl.

Assessment as Essential

I’ll admit, I’m not really a big part of the institutionalized assessment process. But it strikes me as odd that tenure, and our continued employment as professors, is largely based on an assessment of the quality of our research, not just how many papers we put out–though of course, volume isn’t ignored. On the other hand, in almost every department in the US, budgeting and success are based on FTEs: how can you produce more student hours with fewer faculty hours? Yes, there is recognition for effective and innovative teaching. But when the rubber hits the road, it’s the FTEs that count.

Critics of online education could be at least quieted a bit if there were strong structures of course and program assessment. Not just something that gets thrown out there when accreditation comes up, but something that allowed for the ongoing open assessment of what students were learning in each class. This would change the value proposition, and make us rethink a lot of our decisions. It would also provide a much better basis for deciding on teachers’ effectiveness (although the teacher is only one part of what leads to learning in a course) than student evals alone.

This wouldn’t fix everything. It may very well be that people learn better in small, in-person classrooms, but that it costs too much to do that for every student or for every course. The more likely outcome, it seems to me, is that some people learn some things better online than they do offline. If that’s the case, it would take the air out of the idea that large institutions are pursuing online education just because it is better for their bottom line.

In any case, the idea that we are making serious, long-term investments and decisions in the absence of these kinds of data strikes me as careless. Assessment doesn’t come for free, and there will be people who resist the process, but it seems like a far better metric of success than does butts in seats.

Buffet Evals
Thu, 03 May 2012
https://alex.halavais.net/buffet-evals/

“Leon Rothberg, Ph.D., a 58-year-old professor of English Literature at Ohio State University, was shocked and saddened Monday after receiving a sub-par mid-semester evaluation from freshman student Chad Berner. The circles labeled 4 and 5 on the Scan-Tron form were predominantly filled in, placing Rothberg’s teaching skill in the ‘below average’ to ‘poor’ range.”

So begins an article in what has become one of the truthiest sources of news on the web. But it is no longer time for mid-semester evals. In most of the US, classes are wrapping up, and professors are chest-deep in grading. And the students–the students are also grading.

Few faculty are great fans of student evaluations, and I think with good reason. Even the best designed instruments–and few are well designed–treat the course like a marketing survey. How did you feel about the textbook that was chosen? Were the tests too hard? And tell us, were you entertained?

Were the student evals used for marketing, that would probably be OK. At a couple of the universities where I taught, evals were made publicly available, allowing students a glimpse of what to expect from a course or a professor. While that has its own problems, it’s not a bad use of the practice. It can also be helpful for a professor who is student-centered (and that should be all of us) and wants to consider this response when redesigning the course. I certainly have benefited from evaluations in that way.

Their primary importance on the university campus, however, is as a measure of teaching effectiveness. Often, they are used as the main measure of such effectiveness, especially for tenure, and, as many universities incorporate more rigorous post-tenure evaluation, after tenure as well.

Teaching to the Test

A former colleague, who shall remain nameless, noted that priming the student evals was actually pretty easily done, and that it started with the syllabus. You note why your text choice is appropriate, how you are making sure grading is fair, indicate the methods you use to be well organized and speak clearly, etc. Throughout the semester, you keep using the terms used on the evals to make clear how outstanding a professor you really are. While not all the students may fall for this, a good proportion would, he surmised.

(Yes, this faculty member had ridiculously good teaching evaluations. But from what I knew, he was also an outstanding teacher.)

Or you could just change your wardrobe. Or do one of a dozen other things the literature suggests improves student evaluations.

Or you could do what my car dealership does and prominently note that you are going to be surveyed and if you can’t answer “Excellent” to any item, to please bring it to their attention so they can get to excellent. This verges on slimy, and I can imagine, in the final third of the semester, that if I said this it might even cross over into unethical. Of course, if I do the same for students–give them an opportunity to get to the A–it is called mastery learning, and can actually be a pretty effective use of formative assessment.

Or you could do what an Amazon seller has recently done for me, and offer students $10 to remove any negative evaluations. But I think this clearly crosses the line, both in Amazon’s case and in the classroom. (That said, I have on one occasion had students fill out evals in a bar after buying them a pitcher of beer.)

It is perhaps a testament to the general character of the professoriate that in an environment where student evaluations have come to be disproportionately influential on our careers, such manipulation–if it occurs at all–is extremely rare.

It’s the nature of the beast, though: we focus on what is measured. If what is being measured is student attitudes toward the course and the professor, we will naturally focus on those attitudes. While such attitudes are related to the ability to learn new material, they are not equivalent.

Doctor Feelgood

Imagine a hospital that promoted doctors (or dismissed them) based largely on patient reviews. Some of you may be saying “that would be awesome.” Given the way many doctors relate to patients, I am right there with you. My current doctor, Ernest Young, actually takes time to talk to me, listens to me, and seems to care about my health, which makes me want to care about my health too. So, good. And frankly, I do think that student (and patient) evaluation serves an important role.

But–and mind you I really have no idea how hospitals evaluate their staff–I suspect there are other metrics involved. Probably some metrics we would prefer were not (how many patients the doctor sees in an hour) and some that we are happy about (how many patients manage to stay alive). As I type this, I strongly suspect that hospitals are not making use of these outcome measures, but I would be pleased to hear otherwise.

A hospital that promoted only doctors who made patients think they were doing better, who made important medical decisions for them, and who fed them drugs on demand would be a not-so-great place to go to get well. Likewise, a university that promotes faculty who inflate grades, reduce workload to nil, and focus on entertainment to the exclusion of learning would also be a pretty bad place to spend four years.

If we are talking about teaching effectiveness, we should measure outcomes: do students walk out of the classroom knowing much more than they did when they walked in? And we may also want to measure performance: are professors following practices that we know promote learning? The worst people to determine these things: the legislature. The second worst: the students. The third worst: fellow faculty.

Faculty should have their students evaluated by someone else. They should have their teaching performance peer reviewed–and not just by their departmental colleagues. And yes, well-designed student evaluations could remain a part of this picture, but they shouldn’t be the whole thing.

Buffet Evals

I would guess that 95% of my courses are in the top half on average evals, and that a slightly smaller percentage are in the top quarter. (At SUNY Buffalo, our means were reported against department, school, and university means, as well as weighted against our average grade in the course. Not the case at Quinnipiac.) So, my student evals tend not to suck, but there are also faculty who much more consistently get top marks. In some cases, this is because they are young, charming, and cool–three things I emphatically am not. But in many cases it is because they really care about teaching.

These are the people who need to lead reform of how teaching evaluations are used in tenure and promotion. It’s true, a lot of them probably like reading their own reviews, and probably agree with their students that they do, indeed, rock. But a fair number I’ve talked to recognize that these evals are given far more weight than they deserve. Right now, the most vocal opponents of student evaluations are those who are–both fairly and unfairly–consistently savaged by their students at the end of the semester.

We need those who have heart-stoppingly perfect evaluations to stand up and say that we need to not pay so much attention to evaluations. I’m not going to hold my breath on that one.

Short of this, we need to create systems of evaluating teaching that are at least reasonably easy and can begin to crowd out the student eval as the sole quantitative measure of teaching effectiveness.

Rank Teacher Ranking
Fri, 24 Feb 2012
https://alex.halavais.net/rank-teacher-ranking/

There has been a little discussion on an informal email list at my university about the Op-Ed by Bill Gates in the New York Times that argues against public rankings of teachers. It’s a position that in some ways constrains the Gates Foundation’s seeming interest in quantifying teaching performance. It led to questions we have tried to face about deciding merit in teaching, and encouraging teaching excellence at our own institution. I obviously won’t post the stream, but here’s my response to some of the discussion:

The problem with ranking is that it suggests that excellence in teaching is a uni-dimensional construct, which I think even a cursory “gut-check” says is dead wrong. When I think back to my greatest teachers, they have little in common. One was cold, condescending, and frankly not a very nice human, but he was exacting in asking us to clearly express ourselves, and his approach led to a room full of students who could clearly state an argument, lead a discussion, and understand the effects of style on philosophical argument. Another was a little scattered, but brought us into his home and family, was passionate about the field, and taught us how important it was to care about our research subjects. Another had a bit of the trickster in him, and would challenge our assumptions by setting absurd situations. And I could name another half-dozen who were excellent teachers–but one of the things that made them excellent was the unique way in which they approached the process of learning.

And frankly, if you asked a number of my undergraduate peers who the “best” teachers in our program were, there would certainly be some overlap, but it would be far from perfect. An essential question is “best for whom”? And just as our students are each unique, and we should approach them as whole people (the unfortunate fact is that we *do* rank them by grading them, but that doesn’t make the process right), we should approach faculty as… perhaps a box of chocolates. The diversity of backgrounds, styles, and approaches to teaching and learning is a strength, not a weakness. We shouldn’t all be striving to fit the golden standard of the best among us.

Now, this is not an argument for absolute relativism: there are better and worse ways of fostering student learning. It is also not an argument against quantification or assessment: I think an essential tool for improving our teaching is operationalizing some of the abstruse concepts of “good teaching” into something measurable, and using qualitative AND quantitative assessments to help us develop as a group. But the problem with ranking faculty is that there isn’t a single scale for teaching effectiveness, nor even the three (or four, if you count “hotness”) that RateMyProfessors suggests, but dozens of different scales that we might be ranked on. And while some of us may be near the top of many of those scales, I doubt any of us are at the top of all of them.
