Grand Central gridlock

Improv Everywhere strikes again, freezing for several minutes in Grand Central Station. They rule.


Social Graph API

If you are like me, you probably think that XFN and FOAF are really cool ideas that just never seem to end up being very useful. Part of the reason is that it has been a little difficult for people to understand what those links are for, since there aren’t many tools that make cool use of them. Applications exist, but they are fairly sparse.

Enter the Google Social Graph API, a way of getting easier access to all this linkage data. It still doesn’t answer the question of what cool things you can do with the social network data. (Can I tell you how much I dislike the term “social graph,” by the way?) I totally want to teach a “Leverage a Web 2.0 API” course, where people build mash-ups and API-based applications to wow the world. If I end up teaching the basic LAMP course in the fall, that may provide enough of a basis for doing a “cool APIs” course after it. Or it might even make a good final project for that class. Something like Stanford’s Facebook App class.
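For a sense of what that linkage data looks like at the source, here is a minimal sketch (not the Social Graph API itself, whose endpoints I won’t guess at) of pulling XFN-style rel values out of a page’s links with Python’s standard-library HTML parser. The example URL and HTML are made up for illustration.

```python
from html.parser import HTMLParser

class XFNLinkParser(HTMLParser):
    # A partial list of rel values from the XFN profile.
    XFN_RELS = {"friend", "acquaintance", "contact", "met",
                "colleague", "co-worker", "me"}

    def __init__(self):
        super().__init__()
        self.edges = []  # (href, sorted list of XFN rel values)

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        # rel is a space-separated list; keep only XFN terms.
        rels = set(attrs.get("rel", "").split()) & self.XFN_RELS
        if rels and "href" in attrs:
            self.edges.append((attrs["href"], sorted(rels)))

html = '<a href="http://example.com/alice" rel="friend met">Alice</a>'
p = XFNLinkParser()
p.feed(html)
print(p.edges)  # [('http://example.com/alice', ['friend', 'met'])]
```

Crawl enough pages this way and you have exactly the kind of who-links-to-whom data the API aggregates.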

(Thanks to Sam for the heads up.)


WebMynd and distributed archiving

TechCrunch has a writeup on the startup WebMynd, a Firefox add-on that records each website you visit to your hard-drive. The idea is that you end up with something of a personal archive. It’s free to archive for a week, and $20 to archive for a year.

This is a great idea, of course–after all, I’ve advocated it for years. I understand the commercial motivation here, but this is crying out for an open alternative. Why?

With a small addition, it solves some major problems with archiving the web:

1. Selection. Obviously, the pages visited most frequently will be archived most frequently. It’s not that only popular material is worth archiving or caching, but because this requires no hand-selection–or rather, the selection is invisible–it is a good alternative.

2. Bandwidth. No crawlers but humans. Bandwidth isn’t a huge concern for the web hosts (though a large part of my traffic is bots), but it certainly is for the archiver, who has to try to suck the web down a single pipe.

3. Storage. Yes, storage has come down in price, but it’s still expensive. To paraphrase The Streets, a zettabyte don’t come for free.

What this means is that you would need to set up an infrastructure for sharing individual indexes, and then allow for P2P archive searches. It’s not a simple problem, but it is definitely a tractable one. The result would be a kind of holographic memory of the web: knock out half the internet, and it would still exist on the hard-drives of the remaining accessible client computers on the web.
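The index-sharing idea above can be sketched in a few lines. This is a hypothetical design, not WebMynd’s: each client builds an inverted index over the pages it has archived, and a federated search unions the answers from whichever peers are reachable (the actual P2P transport is omitted).

```python
import re
from collections import defaultdict

class LocalArchiveIndex:
    """One client's index over its own archived pages."""

    def __init__(self):
        self.index = defaultdict(set)  # term -> set of archived URLs

    def add_page(self, url, text):
        for term in re.findall(r"[a-z0-9]+", text.lower()):
            self.index[term].add(url)

    def search(self, query):
        # Return URLs whose archived copy contains every query term.
        terms = query.lower().split()
        if not terms:
            return set()
        hits = self.index[terms[0]].copy()
        for term in terms[1:]:
            hits &= self.index[term]
        return hits

def federated_search(peers, query):
    # Union answers from every reachable peer: losing half the peers
    # loses copies, not coverage -- the "holographic" property.
    results = set()
    for peer in peers:
        results |= peer.search(query)
    return results

# Two clients, each archiving different pages (URLs are made up).
a, b = LocalArchiveIndex(), LocalArchiveIndex()
a.add_page("http://example.com/web", "archiving the open web")
b.add_page("http://example.net/p2p", "p2p archive search on the web")
print(sorted(federated_search([a, b], "web")))
```

The hard parts this skips–peer discovery, ranking, and deduplicating the many copies of popular pages–are exactly where the real engineering would go.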


Professors strike back

I got a kick out of some of these faculty responses to their RateMyProfessors comments. Some are pretty amusing. They share what seems to be the prevailing view: don’t trust what you see on RMP, because it is clearly biased, and if you have a complaint, talk to your instructor–you might actually have an effect on your experience in the classroom.

I am very into being a good teacher, but I’m not sure how well student evals reflect this. I may just be stinging from a recent set. Like most faculty, I zero in on the negative evaluations (best way to improve the course? “new curiculum [sic]. new instructor”), but the ideal seems to me to be what a colleague at Buffalo recommended: do reviews five years after graduation. If a student says that a course taught them something then–rather than just before final exams–I think it probably carries a bit more weight.
