Web content course

Here is the skeleton of a syllabus for my web analysis course for this spring. I already have a number of readings picked out, but I would be very grateful for any suggestions on that front, and for the course generally. We'll be putting together the resource page, including some literature reviews and the like, on a wiki.


3 Comments

  1. Posted 10/31/2003 at 9:31 am | Permalink

    You might want to point to CSpider, a spidering class for Mozilla, in the wiki. Combined with simple DOM-based extraction code (e.g. `var numlinks = document.links.length`), it could easily generate interesting statistics on a collection of pages.
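
    As a rough sketch of the kind of per-page statistics this could produce (assuming the spider hands back raw HTML as strings; the regex here is a crude stand-in for real DOM extraction like `document.links.length`, which would be more robust inside Mozilla itself):

```javascript
// Tally simple link statistics over a collection of fetched pages.
// The regex is a simplification standing in for DOM-based extraction.
function linkStats(pages) {
  var counts = pages.map(function (html) {
    var matches = html.match(/<a\s[^>]*href=/gi);
    return matches ? matches.length : 0;
  });
  var total = counts.reduce(function (a, b) { return a + b; }, 0);
  return {
    perPage: counts,                               // link count for each page
    total: total,                                  // links across the collection
    mean: pages.length ? total / pages.length : 0  // average links per page
  };
}

// Example over three small pages:
var stats = linkStats([
  '<html><body><a href="/a">a</a> <a href="/b">b</a></body></html>',
  '<html><body><a href="/c">c</a></body></html>',
  '<html><body>no links here</body></html>'
]);
// stats.perPage is [2, 1, 0], stats.total is 3, stats.mean is 1
```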

    A very interesting new level of analysis, presented at HT'03, is the mapping of link structure to directory structure (deepening, shallowing, broadening), which was shown to be predictive of site type in the "Connectivity Sonar" paper. The PDF is free to access, or it is available via the ACM.

  2. Posted 10/31/2003 at 10:59 am | Permalink

    You may want to spend a class, or part of a class, talking about the ethics of this form of research. A good place to center such a discussion is the AoIR ethics of internet research document; it's linked from the front page of aoir.org.

  3. Posted 10/31/2003 at 11:33 am | Permalink

    Thanks for both of these excellent suggestions.
