Why I’m not blogging

I’m not blogging, primarily, because I haven’t had two minutes free in the last week. I owe about 20,000 words to various editors, I have two papers that need to be refereed, I have a dissertation proposal and two dissertation chapters that need to be read, I have to help assemble a small grant proposal, and I am trying to get spam under control on the school blogging server (I’ve taken down my own wiki on this site to mitigate the 1.5 gigs of transfer from spammers). All of this is just the stuff that needs to be done *now*, and it sits on top of the regular load: preps for two new courses, managing the process for the graduate program, the redesign for the school website, and working with my advisees on various projects. Since some of these people read my blog: please accept my apologies, and know that as soon as I finish this entry, I’ll be working on (or toward) your project.

So, I haven’t had time to breathe, let alone blog. What blogging time budget I do have is dedicated to the blog for the porn class. Also, things I might otherwise have blogged have increasingly ended up in del.icio.us (the social bookmarking site), where I’ve put some 500 bookmarks over the last month or so. A lot of these have to do with teaching, and some with research, and as I use the service, I am seeing opportunities for using it more. Yes, I considered including the feed here, but really, I’m trying to keep this blog sparse and have as little junk as possible. It’s easy enough for an interested reader to skip on over to the del.icio.us page.

Instead, when I have a spare moment, I hope to explore the Python wrapper for the del.icio.us API, to see how to extend things a bit. For example, I’d like a local cache of my bookmarks. Someone has already put together a script that sends caches to Gmail, but I actually want to keep a local copy. I want to know, five years down the road, that I can get at a page I was interested in but that, for whatever reason, didn’t make the Internet Archive’s crawl. I considered something that linked del.icio.us with Furl, but I want more than that combination can offer. What I am thinking of is a script that, when run periodically (e.g., daily), would (a rough sketch follows the list):
* Pull down the link list from del.icio.us
* Spider each of these pages, and update a current cache
* Highlight for me any changes (or 404s)
* Maintain a history of cached pages, including images
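
Just to make the shape of this concrete, here is a rough sketch in (modern) Python of what I have in mind. The endpoint URL, the basic-auth login, the `USERNAME`/`PASSWORD` placeholders, and the XML layout of the response are all assumptions on my part rather than tested code, and it only keeps dated snapshots and flags changes or unreachable pages; the image-grabbing and the fuller history would come later.

```python
import hashlib
import os
import urllib.request
import xml.etree.ElementTree as ET
from datetime import date

# Assumed historical v1 endpoint, returning <posts><post href="..."/>...</posts>.
API_URL = "https://api.del.icio.us/v1/posts/all"
CACHE_DIR = "bookmark_cache"
HASH_FILE = os.path.join(CACHE_DIR, "hashes.txt")


def fetch_bookmarks(user, password):
    """Pull down the full link list from the (assumed) del.icio.us API."""
    mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    mgr.add_password(None, API_URL, user, password)
    opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(mgr))
    with opener.open(API_URL) as resp:
        tree = ET.parse(resp)
    return [post.get("href") for post in tree.findall("post")]


def cache_page(url):
    """Spider one page into a dated cache directory; return its content hash, or None on failure."""
    try:
        with urllib.request.urlopen(url, timeout=30) as resp:
            body = resp.read()
    except Exception as exc:  # 404s, timeouts, dead hosts
        print("UNREACHABLE", url, exc)
        return None
    day_dir = os.path.join(CACHE_DIR, date.today().isoformat())
    os.makedirs(day_dir, exist_ok=True)
    name = hashlib.md5(url.encode()).hexdigest() + ".html"
    with open(os.path.join(day_dir, name), "wb") as f:
        f.write(body)
    return hashlib.sha1(body).hexdigest()


def main():
    os.makedirs(CACHE_DIR, exist_ok=True)
    seen = {}  # url -> content hash from the last run
    if os.path.exists(HASH_FILE):
        with open(HASH_FILE) as f:
            seen = dict(line.split("\t") for line in f.read().splitlines())
    for url in fetch_bookmarks("USERNAME", "PASSWORD"):
        digest = cache_page(url)
        if digest is None:
            continue
        if url in seen and seen[url] != digest:
            print("CHANGED", url)
        seen[url] = digest
    with open(HASH_FILE, "w") as f:
        for url, digest in seen.items():
            f.write(url + "\t" + digest + "\n")


if __name__ == "__main__":
    main()
```

Run from cron once a day and it quietly accumulates dated snapshots, printing anything that has changed or vanished since the last run.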

It’s actually not that hard a project. All of the pieces are easily scavenged from existing software; I am just a slow programmer, and don’t have the time. And, for a number of reasons (related as much to teaching and research as to entertainment), I want to build a PVR first.

One Comment

  1. lily
    Posted 1/30/2005 at 10:38 pm

    maybe you’re doing too much.
