Intelligence – A Thaumaturgical Compendium https://alex.halavais.net Things that interest me. Mon, 22 Mar 2010 16:46:51 +0000 The 1-1-1 map https://alex.halavais.net/the-1-1-1-map/ https://alex.halavais.net/the-1-1-1-map/#respond Thu, 18 Dec 2008 20:41:00 +0000 http://alex.halavais.net/?p=2166 I live in a part of New York that is sometimes called Manhattan Valley. You wouldn’t know why until you see this snapshot from Google Earth. Though almost all of lower Manhattan now has skinned buildings, my area does not. It’s ironic, since the building across the street (at 100th and Broadway), which shows as a construction site on Google Earth, is now one of the tallest in more than 30 blocks. I’m not complaining, really, just surprised to see this depression–it looks like a giant footprint, right in my neighborhood.

More broadly, I’ve found myself in the strange position of visiting old houses and neighborhoods in Google Earth, and on the web. I stroll down Motomachi and note a new Gap–they show up everywhere these days. As do, it seems, the Street View vans. It made sense that they would start with the large cities, but I somehow didn’t expect them to start covering my old neighborhood in Buffalo (ah, so there’s the neighbor’s new playhouse–they told us about that) or my Mom’s place in California that I’ve never visited, but sits out in a spot that is pretty remote. What happens when the Google car covers the globe? Well, they turn around and do it again, of course. That way we can also roll the clock back.

Google Earth already integrates some photos, those marked on Panoramio, but wouldn’t it be nice to integrate video from YouTube, or images from Picasa (or heck, be more open and include Flickr and Revver). As more of our multimedia becomes time stamped and geotagged, I think we can look forward to records that come close to approximating what was happening at a given time or place. Now, of course, if you are out in the middle of nowhere, the nearest tagged photo may be beyond the horizon and five years old. But in Times Square, you can see photos from last week. Is it that hard to believe that, as more and more phones and cameras include instant uploading, that images from an hour ago, or from five minutes ago, are that far off?
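Filtering geotagged, timestamped media down to “near here, recently” is mostly arithmetic: a great-circle distance check plus a timestamp window. A minimal sketch of the idea (the photo records and coordinates are invented for illustration):

```python
import math
from datetime import datetime, timedelta

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby_recent(photos, lat, lon, radius_km, max_age, now):
    """Photos within radius_km of (lat, lon), taken within max_age of now."""
    return [p for p in photos
            if haversine_km(p["lat"], p["lon"], lat, lon) <= radius_km
            and now - p["taken"] <= max_age]

# Hypothetical geotagged snapshots around Times Square.
photos = [
    {"id": "a", "lat": 40.7580, "lon": -73.9855, "taken": datetime(2008, 12, 17)},
    {"id": "b", "lat": 40.7589, "lon": -73.9851, "taken": datetime(2003, 6, 1)},   # nearby, but old
    {"id": "c", "lat": 40.6413, "lon": -73.7781, "taken": datetime(2008, 12, 18)},  # recent, but at JFK
]
hits = nearby_recent(photos, 40.7580, -73.9855, radius_km=1.0,
                     max_age=timedelta(days=30), now=datetime(2008, 12, 18))
print([p["id"] for p in hits])  # → ['a']
```

Everything else–ranking, upload latency, tag quality–is the hard part; the geometry is the easy part.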

This isn’t some huge leap in technology, this is just charting the current trajectory. What happens when you want to know where your friend is standing and can pull up five views of 82nd and Broadway from your mobile?

[OSI] Beyond Simple Search https://alex.halavais.net/osi-beyond-simple-search/ https://alex.halavais.net/osi-beyond-simple-search/#respond Mon, 23 Jul 2007 17:30:53 +0000 http://alex.halavais.net/osi-beyond-simple-search/ Sorry, this last set of notes on a breakout session at the open source conference somehow ended up stuck on my laptop. Moderator Eric Haseltine started us out with the premise: “Search is great, but sometimes it sucks.” As always, this is a highly filtered, non-transcript-level set of reflections.

Presenters:
* John Howard, Deputy Associate Director of Enterprise Solutions, Office of the Chief Information Officer, Office of the Director of National Intelligence
* Dr. James Mulvenon, Director, Center for Intelligence Research and Analysis, Defense Group Inc.
* Francis Kubala, Scientist, BBN Technologies
* Jason Hines, Principal Search Engineer, Google
* Moderator: Dr. Eric Haseltine, Associate Director of National Intelligence for Science and Technology, Office of the Director of National Intelligence

Presentation
Howard begins by talking about projects within the community to provide search that is sensitive to the user’s attributes, particularly access permissions. Beyond this, should everything be discoverable? To what extent do you actually reach in and discover things? Is it OK if I find your half-finished document? Can you “half-discover” something: e.g., finding the owner of a document, but not revealing the text, and telling the searcher to seek out the author?

Kubala works on getting data from speech, retrieving information from a stream. There are lots of problems with this. One is simply that the text comes out as a true wall of text, with no boundaries. The first thing that can be done is separating out the speakers. You can also do some highlighting of named entities. Adding structure to the transcription is a non-trivial piece of the puzzle–speech-to-text is not enough, on its own.

Add translation to this and you get real-time translation of broadcast materials. This allows for federated search across multiple languages. You can have a persistent watch list that catches keywords (in all languages) for incoming streams. [How cool would it be for this to be an open search engine! Doubt there is enough advertising to support it.] Works really well for news broadcasts–showing the stream with transcriptions in the original language as well as machine-translated text, all clickable for random access to the original video stream. But current tech does not work quite as well for “found speech” (e.g., YouTube). Currently exploring example-based queries, and iterated queries that work with the searcher toward a convergent set of responses. “One of the futures for beyond simple searching is to get beyond the ad hoc query.”
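At its core, the persistent watch list described here is keyword matching over a stream of transcripts, each tagged with a language. A minimal sketch (the languages, keywords, and transcript lines are all invented; real systems would match against the machine translation too):

```python
# Per-language watch list: keywords to flag in incoming transcripts.
watchlist = {
    "en": {"pipeline", "embargo"},
    "es": {"oleoducto", "embargo"},
}

def scan(stream):
    """Yield (line_no, language, keyword) for every watch-list hit."""
    for line_no, (lang, text) in enumerate(stream):
        tokens = set(text.lower().split())
        for keyword in watchlist.get(lang, ()):
            if keyword in tokens:
                yield (line_no, lang, keyword)

# A toy incoming stream of (language, transcript-line) pairs.
stream = [
    ("en", "Officials discussed the new pipeline today"),
    ("es", "El embargo sigue en vigor"),
    ("en", "Weather and sports follow"),
]
print(list(scan(stream)))
# → [(0, 'en', 'pipeline'), (1, 'es', 'embargo')]
```

The hard parts–speaker segmentation, translation quality, random access back into the video–all sit upstream of this trivially simple matching step.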

Mulvenon works mainly on what appears on the Chinese web. He notes that the biggest way that they use open source is as a way to cue targeted collection of classified materials (an oft-heard refrain). Chinese is “China’s first layer of encryption.” Finding linguists and getting them cleared is extremely difficult: 3.5 to 4 years to get even naturalized Chinese and Taiwanese cleared. (This issue of the time it takes to clear personnel came up in several of the sessions.) That said, it is a great source; the number of Chinese-language pages will soon outstrip English-language pages. Must do searches “from within” China. Google.cn is not the best search engine. Baidu is also not particularly great, despite the “Chinese-centric” advertising. Lots of blogging content as well, which is both difficult to track and sometimes valuable.

They move beyond search to look for text in the under- or un-linked dark web, as well as seeking out structural information (other hosts on shared subnets, open ports, etc.) to help leverage more complete collections. They have also been working with open source geospatial data to combine with their data and do geolocation of various materials in China.

Even military targets are fairly transparent: “It’s not transparent in English, as if they’re supposed to make it easy for us.” Some items are pretty opaque, but there are some areas (logistics, etc.) that end up being made available through the mountain of online and off-line posting that goes on in China. Finally, a lot of it is beyond the ability of machine translation, layered beneath cultural allusion that simply won’t be picked up.

Hines reiterated Google’s mission of “organizing the world’s information.” Google Enterprise is the fastest-growing unit of Google, with 100% growth each year. They move products to enterprise when they are mature and ready to be secured for enterprise deployment. “Universal search” is the aim: collapsing vertical search and machine translation to do one search on the Google search box that draws in lots of different materials. He didn’t talk in much detail about the kinds of projects they are currently working on.

Q & A

Q: What’s wrong with search tools?

Too much “flotsam and jetsam.”

Q: What would be the ideal user experience?

“Smarter search would be helpful.” Wenlin, with a customized dictionary, is vital to Mulvenon’s work. Need to be able to tag content. “Every time I start to do search, it’s like I’m starting from scratch.”

What is Google doing about this? You should use Google Toolbar for, e.g., on-the-fly translation, as well as personalized search. [But I think there is a disconnect there; clearly Toolbar is too superficial for the kind of work we’re talking about.]

Kubala: Deep semantics still a deep problem. On the shallow end, there is duplicate and near-duplicate handling.

Q: Is search the right paradigm?

Haseltine: “We can only ask what we know to ask.” [In other words, we want to find, not search.]

Hines makes the comment, and this comes up a number of times, that there is a need to re-introduce the social to the search process. Sometimes it’s more important to know who has the tacit expertise, rather than where it is on the web.

Q. What is Google doing beyond not being evil? (Doesn’t the acquisition of DoubleClick’s records provide an extremely valuable & dangerous source of traffic data?)

Hines: “We take user privacy very seriously. We wouldn’t do anything that would challenge our credibility to our users.” He goes on to note that there’s nothing to stop you from leaving Google at any time. However, he doesn’t touch the issue of whether Google will then let you take your data with you (and not have it remain in their control), the way Ask.com has recently allowed for users to request non-collection.

Q. What about Wales positioning himself as a way of taking on the search paradigm?

Hines suggests that others have tried something similar, but perhaps without the weight that Wales can put behind it. It will be interesting to see what he makes of it.

Q. What have you learned about the behavior of humans at Google?

Hines: People want simple search. They don’t want to move beyond it, and when they experiment with ways of providing alternatives, users don’t like it. They want a line to type stuff into, and an answer, and they want that to work with some consistency. They don’t want to know what’s happening under the hood.

[OSI] Knowledge Management https://alex.halavais.net/osi-knowledge-management/ https://alex.halavais.net/osi-knowledge-management/#respond Tue, 17 Jul 2007 20:28:46 +0000 http://alex.halavais.net/osi-knowledge-management/ Presenters:
* Dr. Mike Wertheimer, Chief Technology Officer, Office of the Director of National Intelligence
* G. Clayton Grigg, Chief Knowledge Officer, FBI
* Jeffrey R. Cooper, Chief Innovation Officer, SAIC
* Ed Waltz, Chief Scientist, BAE Systems
* Moderator: Thomas Sanderson, Deputy Director of the Transnational Threats Initiative, Center for Strategic and International Studies

Presentations

The session asks how we make use of the expertise of the web, especially blogs, wikis, and collaborative environments.

Early in Wertheimer’s tenure at DNI, he’d get the same story from lots of people. There would be some hard problem, and so they would facilitate getting a diverse group into a room. No one solved the problem, and when you asked why not, they said they didn’t have enough data. Why?

They were mostly introverted, and most people had pretty strongly held opinions. They rushed to consensus, that consensus being that there was no good solution. That said, he would ask, is it possible that the answer is in someone’s desk drawer? Yes, they say, they think that is possible. Would they have taken a different path if they had known it was in someone’s drawer? Surprising answer: no. They like the way they do it. And what is that strategy? Task more collection. “The strategy for finding a needle in the haystack is to put more hay on the pile.”

Intellipedia is an effort to expose ideas before they are fully baked. The effort has had some success, but is facing new challenges. The first wave of Intellipedians are zealots, and have a clear idea of how the wiki should and shouldn’t be used. The second wave of people are trying new things, but running into similar problems when their efforts run headlong into policy issues. What do you do when contractors set up pages? [That’s actually a problem with Wikipedia as well.] How should pages for debating whether global warming is “real” be handled? What can you put on your home page? That you like to surf? That you are a Christian? What about a case where someone defaced another person’s home page with a racial slur as a joke? How do you stop this happening, while not encumbering the network with rules?

How do you “manage the gray area” of a system that is designed to be without rules, in a community that has a long culture of rules? They are trying to let a thousand flowers bloom, but it’s difficult because of the compliance and security issues. How do you keep it a good place to be and useful to the process, without sinking the mission?

Waltz talked about the exchange of knowledge artifacts at various levels: a chat over the phone, exchange of data, and finally, at the highest and most important level, the exchange of mental models; that is, some understanding of tacit models of reasoning.

Grigg gave a pretty broad overview of knowledge management, looking at managing human resources; issues with knowledge architecture in the organization, and how do you migrate previous knowledge; how do organizations memorialize lessons learned, how do you capture the knowledge of the experienced folks; how to ensure that records are captured and placed into operation “just-in-time.” Finally, he moved on to the issue of collaborative platforms, peer-to-peer sharing.

Cooper made an argument in support of analysis as a human and social endeavor, and there is a need to not just give lip service to that, but create systems that support the cognitive work of analysts. Building such systems is not easy. He says that the reason analysts’ tools have failed is that they failed to understand how analysts actually work. He says the panel should not be about “knowledge management” but rather “knowledge creation.” Not moving and storing information, but helping individuals or groups create (or co-create) new knowledge.

He argues that systems are missing the social element, the temporal element (timeline), and the spatial, physical nature of information artifacts. Knowledge management has only changed in terms of volume. We need to understand the social networks that already exist in order to understand the information problem. Most of this remains tacit, and needs to be understood through the social filter. We need to help people to represent the thinking, not just the answer. “Analysis” covers a lot of material: from the immediate and narrow to very broad, general problems. We cannot expect the same tools to handle this diverse set of tasks. We need to understand how this is done socially now, and draw this into the design of support tools.

Sanderson talked about a “trusted information network” program (using Groove), a globally distributed forum for discussing jihadists. A question would be put up, and then it would be discussed by this group. The most important incentive was that it had impact, both in terms of real action and their own work. It was important that by entering discussion they would gain knowledge as well. Need to have moderation to maintain a flow. Need a group willing to challenge one another–who are competitive.

Q & A

If a moderator is necessary, how do Intellipedia and similar non-moderated sites work? Even Wikipedia needs gardeners to keep things together and working. (The panelists kept torturing this metaphor so long, it started to feel a bit like Being There.)

It’s not just KM: are there gaps in policy at a higher level? Yes, for a lot of reasons. One of them may be that the DNI doesn’t have a big enough stick to lead change in the community. It’s also a cultural issue: according to a survey, those with the highest job satisfaction see the least need to collaborate across agencies.

Cooper argues that we have very little peripheral intelligence: most of it is tasked and focused. I was actually surprised by this, and I’m curious as to how much is actually spent on keeping watch for unexpected threats.

How do you work practices like blogging and wikis into evaluation of performance, and such? Wertheimer says it’s actually dangerous to contribute. Not everyone needs to be an Intellipedian. It’s fine if the people who want to write write, and others just read. It’s an entrepreneurial time, and entrepreneurs fail. That said, Intellipedia has led to big wins, breakthroughs. They haven’t figured out how to minimize the risk, “but the ones who take the risk are the ones who are exciting to be around.” Cooper notes that the problem of evaluating freely published work is one that academia has been trying to work on for at least a century when evaluating the work of faculty.

[OSI] Academic Outreach https://alex.halavais.net/osi-academic-outreach/ https://alex.halavais.net/osi-academic-outreach/#comments Tue, 17 Jul 2007 16:21:49 +0000 http://alex.halavais.net/osi-academic-outreach/ I’m continuing to only blog the breakout sessions. At least one of the cameras at the plenaries (I counted eight at the last one) belongs to C-SPAN (they’ve been broadcasting the sessions over the last couple of days), and it’s possible to stream a few of those recordings from their website. They are recording some of the breakout sessions, as well, I think, and that is a good thing because especially today, there are a number that I am going to end up having to forgo. There is a session this afternoon on the “Libraries in the Future,” for example, and another on trends in business intelligence. Luckily, they have a crack team of pro bloggers doing a nice job covering things. This morning, I missed a discussion on employing open source in defense of civil rights that I probably should have gone to, since it is one of the areas I am particularly interested in.

The breakout session I did attend this morning was on academic outreach. Several panelists talked a bit about the professionalization of analyst training in the academy. The focus was on the work at Mercyhurst and Johns Hopkins, but there is an effort, both through the DNI and the International Association for Intelligence Education, to establish a baseline description of required capabilities for the analyst, to aid in educating the next wave of intelligence analysts.

I had hoped that there would be more on research articulation. While there was a bit of talk about accessing expertise outside the intelligence community, there was a lot less on the meta-level of studying how intelligence analysts–and intelligence organizations–do their job. I suspect that there is already a lot out there in this regard, however, and I’m just not plugged into that literature very directly.
It was an interesting session, and I am looking forward to tracking down some of the unclassified documents related to training–particularly the mentioned competency inventory–and maybe some syllabi and other materials from the new programs. I do not see Quinnipiac getting into the intelligence training area any time soon, but many of the analytical frameworks and skills are broadly of interest to business intelligence, user analysis, and market analysis–things our current students need to be able to do better.

[OSI] Media as the Open Source https://alex.halavais.net/osi-media-as-the-open-source/ https://alex.halavais.net/osi-media-as-the-open-source/#comments Tue, 17 Jul 2007 03:21:00 +0000 http://alex.halavais.net/osi-media-as-the-open-source/ Had really wanted to attend the Open Source 101 training they had available–would have made the conference much more valuable to me–and so I requested it when I sent them my information. I was denied, and I tried to wrangle my way in, but to no avail. Instead, I went to this session, with the following presenters:
* Mark Mansfield, Director of Public Affairs, Central Intelligence Agency
* David E. Kaplan, President, Kaplan and Associates
* Dr. Chris Westcott, Director, BBC-Monitoring
* Arnaud de Borchgrave, Director, Transnational Threats, Center for Strategic and International Studies

Just a very quick overview on this one. De Borchgrave gave a stirring condemnation of modern journalism and the disappearance of foreign correspondents from the landscape. He claimed that we had entered a new era of “journalism of assertion” rather than verification, brought about by new technology. An argument that would be stronger, I suspect, if he didn’t write for NewsMax. Maybe he wants to be ahead of the curve.

Westcott provided a nice, structured overview of BBC Monitoring, which employs about 450 people to gather news from local sources. Indeed, this seems to be an interesting answer to the fall off in foreign correspondents: hire locals. He notes that it is important that the people monitoring are close to the news they are reporting on–both physically and in terms of cultural literacy–and that they monitor over time rather than just helicoptering in.

Kaplan begins by arguing that journalists and intelligence analysts are not really that different in terms of what they do and how they do it. He says the major difference is that journalists work in an open system [debatable…] while the intelligence community is wholly insular. A few years ago, analysts didn’t have email or access to the web–some of them still don’t. They don’t talk to anyone outside their narrow community. This is bad, he says: “working in a closed system in the 21st century will kill you.” As it stands, new analysts, weaned on the web, are told to “leave your connectedness, your network worthiness, at the door.”

Was hoping for a clear refutation of this from Mansfield. His claim was that “a lot has changed” over the last few years, and that analysts could talk to outsiders, but had to forward press queries to the press office. This is common in most large organizations–let alone the CIA–but it seems that this forestalls any discussion outside the Agency. Mansfield also noted that the Agency hosted conferences and did other things to encourage opening up the discussion, but did not elaborate much in that direction.

At the end, there was a discussion about the fall off of research offices among news organizations, and the moderator noted that the DNI believes more librarians are desperately needed in the intelligence community.

OK, I’ve not blogged the two plenaries. They had a whole gaggle of cameras filming those–will be interesting to see if they release the video over the web…

[OSI] Technology: Improving the Use of Open Sources https://alex.halavais.net/osi-technology-improving-the-use-of-open-sources/ https://alex.halavais.net/osi-technology-improving-the-use-of-open-sources/#respond Tue, 17 Jul 2007 02:50:04 +0000 http://alex.halavais.net/osi-technology-improving-the-use-of-open-sources/ Presenters (from the Session blurb):
* Steve Selwyn, Deputy Associate Director of Transformation, Office of the Chief Information Officer, Office of the Director of National Intelligence
* Joe Markowitz, Independent Scholar
* Brian Kettler, ISX Lab Chief Scientist and Principal Research Engineer, Lockheed Martin Advanced Technology Labs
* Troy M. Pearsall, Executive Vice President of Technology Transfer, In-Q-Tel

I’ll just try to wrap up some of the presentations briefly. Selwyn talked a bit about the process of integrating the various systems available to various agencies, and ways of reducing redundancy.

Kettler spoke on the application of Web 2.0 (take a drink) to open source intelligence issues. He made a strong push for Tapscott’s Wikinomics (which I have yet to read, but someone recently told me was “Benkler Light”). He talked a bit about Technorati, blogosphere analysis, splogs, emotional machine coding, and some of the projects in the TREC conferences.

Markowitz talked a little bit about integrating data and research at different levels. Beginning at the lowest level (open), and allowing queries and results (and machine translation) to propagate to more secure levels in which the analyst can be a bit more detailed in her queries, etc. This “one-way transfer” allows for insulation against information backwash (my term, not his!). He talks a bit about how it might be possible to link backwards, taking a piece of cleared information and match it with information you can distribute at the open level, to get feedback from non-cleared sources.

The current system allows for a kind of “watched update,” where the analyst can indicate items that are of interest, and a static copy of the page can be moved upward (presumably with the analyst’s notes), as can a “watch request” that will note changes in those pages. Have to admit, I’m not following much of this as it is outside of my domain.

Q&A

A couple of questions related to how to get the government to pay for licenses; there was the beginning of an interesting discussion here on how to value intelligence data, but it didn’t get very far.

Someone asked: What specific technologies are coming down the pike now to deal with the phenomenal internet growth problem? Pearsall noted that new search engines are about assigning themes or facets to search results, moving from keyword-only to more contextualized search. There is a subsidiary problem. Search engines tend not to be enterprise software [but what about Google Enterprise?], but rather ad-driven, making it harder to acquire the technology.

Markowitz says this is a challenge to those who aim for unfettered access. The underlying assumption when you share is that the signal will increase more than the noise, and the internet is a counterexample of this. He went on to suggest it may not be worth mining that data [!].

Question from the audience: Is there room for the amateur analyst? E.g., an open Intellipedia? [Isn’t that called Wikipedia ;)] Markowitz notes, to the amusement of the group, that “every policymaker is his own analyst… as we’ve learned.” Early Bird represents both the promise and problem here. On the one hand, you can get the raw data out there fast, but then what’s the value added?

(In a later talk, Mr. Naquin, Director of the Open Source Center, noted that the Center articulates with outside experts, including professors and folks like IntelCenter. It seems silly to me to ignore the possibility that good analysis–especially open source analysis–is already happening outside the government intelligence community. )

Is collection actually needed, when the internet is already collecting? Can you do analysis in place? Tools are getting more sophisticated, and so it will bubble up, maybe. Also, remember that you’ve got to do work to get at the deep web. Markowitz suggests that real analysis usually follows a pattern where you have a hypothesis and then look to confirm it. Google doesn’t do that, but it might be more helpful to have a hypothesis-testing search engine.

Someone asked about what happens when people get, e.g., an IP redirect (e.g., al Jazeera looks different for someone coming from a US IP address than it does for someone coming from the Middle East.) I think what he was actually asking was the degree to which the process of collecting open source material might lead to a traffic footprint that suggests the current interests of the US intelligence community, but I may be reading my own question into the question. Not a lot of useful discussion here, but–if indeed that was the intended question!–I would have liked to have heard an answer.

Someone from Booz Allen asks whether there is any intention to have something like Yahoo! Pipes on the secure networks so that analysts can build, deploy, and share their own software tools? Great question! The panelists talked a bit about sandboxes for new product, and the difficulty of trusting code of unknown origin.

Someone from California Dept. of Homeland Security has logins for seven or eight secure sources of “sensitive but unclassified” material. Are they seeking one login to rule them all? Yes, moving toward a single-sign-on, but it’s not easy. You have both identities and personae (based on where the user is physically, organizationally, etc.). There are reasons for compartmentalization.

A question from someone from “UK Law Enforcement.” How do you find the videos you should be looking at? What about Google’s tagging game, for example? Anything like that happening with video? Tagging only works well when there is a large group who can coalesce around a tag. What is really needed is software that can parse video, but there is enough need that it will be developed. [Aaargh. The ideal shoots and kills the workable!] Publisher tagging is the way forward on this: publishers are trying to get a message across.

[OSI] Hidden in plain view https://alex.halavais.net/osi-hidden-in-plain-view/ https://alex.halavais.net/osi-hidden-in-plain-view/#comments Mon, 16 Jul 2007 15:49:12 +0000 http://alex.halavais.net/osi-hidden-in-plain-view/ I am down in Washington for a couple of days for a conference on open source intelligence organized by the Office of the Director of National Intelligence. I suppose I should mention why I’m here, before moving on to some conference blogging.

I’ve been interested in intelligence, and particularly open source intelligence, since I was an undergraduate political science major. “Open source” was a phrase I considered to be related to intelligence well before I heard it applied to computer programming, and I considered a dissertation on open source intelligence on the web, before turning to my project on Slashdot (which, as it may be clear, was a bit related).

I think that good policy happens only in information rich environments. Reasonable people can disagree over particular policy decisions, but there can be little doubt that good decisions are impossible when there is not a clear view of what the current situation is, and how that compares to the environment historically. The intelligence community focuses on strategic intelligence for the defense of the U.S., but the principles are not that different from the application of information gathering and analysis in other policy contexts, or indeed, competitive intelligence in the business setting. Covert intelligence is important, but one of the greatest challenges in both clandestine and open source intelligence is that there is simply too much of it to make sense of in a timely fashion. My interest in open source intelligence fits quite neatly into my overall interest in transparency, self-governance, and learning. It also relates more specifically to my ongoing interest in grokking the global conversation that occurs on the web.

The current Director of National Intelligence, Mike McConnell, has written an overview of his vision for US Intelligence for Foreign Affairs, in which he argues, among other things, for new transparency among agencies, along with a focus on how information technology can help to make sense of the tsunami of open and closed source intelligence. This conference engages that transparency even further, inviting open collaboration with the academic world. That’s part of my interest in the conference, and so I’m here because–despite the fact (or maybe because of the fact) that I don’t know any of the other hundreds of attendees–I’m hoping that there is a space for building bridges and sharing perspectives.

As an aside, it’s an open conference. I haven’t seen anything that says no liveblogging, and there is a concentration of journalists here, so, time permitting, I’ll blog some of the sessions. The standard caveat applies: I don’t make any claims as a stenographer, so you shouldn’t expect a great deal of coordination between what I write and what is actually being said. It’s more a game of “that makes me think of…”

09 F9 11 02 9D 74 E3 5B D8 41 56 C5 63 56 88 C0 https://alex.halavais.net/09-f9-11-02-9d-74-e3-5b-d8-41-56-c5-63-56-88-c0/ https://alex.halavais.net/09-f9-11-02-9d-74-e3-5b-d8-41-56-c5-63-56-88-c0/#comments Wed, 02 May 2007 04:13:18 +0000 http://alex.halavais.net/09-f9-11-02-9d-74-e3-5b-d8-41-56-c5-63-56-88-c0/ So it seems that the key for the HD-DVD encryption scheme has been broken. Unfortunately, distributing the method for decrypting a device is illegal according to the DMCA. But surely a key, as such, cannot be. This is a number all alone; it can’t be considered a machine. It is human-readable (as a number) but is not executable by itself.

For those new to the party, this is a recap of 2600's posting of the DeCSS code. A court found that publishing the code (or even links to the code) was a violation of the DMCA. Of course, this makes my dissertation (pdf, p. 268) illegal, along with this blog. I don't know who has deep enough pockets to see this through to the Supreme Court, but someone needs to try.

(The image to the right, of course, is the number in RGB colors–with leading zeros.)
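The byte-to-pixel mapping in the original image isn't specified beyond "leading zeros," but one plausible reading is: pad the 16-byte key at the front with zero bytes until its length is a multiple of three, then read each three-byte group as one RGB color. A minimal sketch under that assumption:

```python
KEY = "09 F9 11 02 9D 74 E3 5B D8 41 56 C5 63 56 88 C0"

def key_to_rgb(hex_key):
    """Turn a hex key into a list of (R, G, B) triples."""
    data = bytes.fromhex(hex_key.replace(" ", ""))
    # Pad the front with zero bytes so the length is a multiple of 3
    # (the "leading zeros" mentioned above: 16 bytes become 18).
    pad = (-len(data)) % 3
    data = bytes(pad) + data
    return [tuple(data[i:i + 3]) for i in range(0, len(data), 3)]

colors = key_to_rgb(KEY)
# colors[1] == (0xF9, 0x11, 0x02), a nearly pure red
```

Sixteen bytes yield six pixels this way, which is why the padding matters: without it, one byte would be left dangling.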

Update (5/2): Copycats abound :)

]]>
https://alex.halavais.net/09-f9-11-02-9d-74-e3-5b-d8-41-56-c5-63-56-88-c0/feed/ 3 1744
Pong Democracy https://alex.halavais.net/pong-democracy/ https://alex.halavais.net/pong-democracy/#comments Mon, 30 Oct 2006 17:49:04 +0000 http://alex.halavais.net/pong-democracy/ Just what you need: another game of Pong. But there is something less fun than fascinating about this Massively Multiplayer Pong. Each side is played by a team of several people who, through constant feedback, "vote" where the paddle should be.

One way to play is to place your vote where you think the paddle should be. But if, for example, one of your teammates is asleep, you have to compensate, voting, say, for a position lower than the ball will actually hit. Often there is a mismatch, with four people playing against one, and this provides a small experiment in control and politics. Who wins at Pong, the dictator or the direct democracy?
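The game's actual aggregation rule isn't documented here, but assuming the paddle sits at the mean of the team's votes, the compensation strategy above falls out of simple arithmetic: an awake player must overshoot in the opposite direction to drag the average onto the target. A sketch:

```python
def paddle_position(votes):
    # Direct democracy: the paddle sits at the mean of all votes.
    return sum(votes) / len(votes)

def compensating_vote(target, other_votes):
    # What one player must vote so the team average lands on `target`,
    # given the (possibly stale) votes of the other teammates.
    n = len(other_votes) + 1
    return n * target - sum(other_votes)

# A teammate asleep at position 200 while the ball will hit at 100:
# the awake player has to vote 0 to pull the average down.
vote = compensating_vote(100, [200])  # → 0
```

Note that the required overshoot grows with each sleeping teammate, which is part of what makes a large direct democracy slower to react than a lone dictator.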

]]>
https://alex.halavais.net/pong-democracy/feed/ 1 1602
Is the new https://alex.halavais.net/is-the-new/ https://alex.halavais.net/is-the-new/#respond Sat, 12 Aug 2006 23:54:02 +0000 http://alex.halavais.net/is-the-new/ What's "the new black"? More importantly, what's the new X that is the new black? Check out the cool diagram of such claims from 2005.

Wouldn’t be all that hard to automate the creation of that diagram, I think. A little script to automate a Google or Technorati search for “is the new,” a bit of scraping, and shipping it off to a grapher to create a map. Would be fun if I had the time. Maybe someone else does?

1st page of Google:

* Small is the new big.
* Awake is the new asleep.
* Random is the new order.
* ‘Force protection’ is the new buzzword.
* Pink is the new blog (2x).
* Funny is the new sexy.

(Technorati won’t let me search the phrase. Probably considers “is” and “the” as stop words. That’s kinda dumb.)

]]>
https://alex.halavais.net/is-the-new/feed/ 0 1543
AOL Data https://alex.halavais.net/aol-data/ https://alex.halavais.net/aol-data/#comments Mon, 07 Aug 2006 15:01:07 +0000 http://alex.halavais.net/aol-data/ As you may have heard, AOL recently released a data set that includes about 20 million searches from about 500 thousand users. This is a bit of a treasure trove for researchers, as it provides an example of what people search for and how their searches change over time. While users are identified only by a number, as you might guess, a history of searches can provide some interesting and revealing patterns: patterns the users probably did not intend to be revealed publicly. Perhaps not surprisingly, AOL quickly pulled down the data set, but the page is still viewable in the Google cache. Moreover, while you cannot download the file from AOL, it is available as a torrent.

What an extraordinarily tempting piece of research data. And so, the ethical question comes quickly to the fore:

Clearly, no Institutional Review Board would ever allow such a collection. Users had a reasonable expectation that their searches would not be recorded and openly distributed. Moreover, the ability to link searches of a given user makes this a potentially very revealing data set. See, for example, the user looking to kill his wife. I don’t think that the anonymization of user names is enough to make this usable.

And yet, there it is. It's already out there, and as I said, very tempting. Is it ethical to make use of this already-collected data if your use substantially masks the private matters of these users? Any use I would make of the data would make it extremely unlikely that any private information would be revealed–though the mere existence of the public data set in some ways makes this moot.

The obvious parallel (Godwin’s law notwithstanding) is the controversy over using Nazi experimental data in medical research. But it seems to me that there are some shades of grey here. AOL Search is not a Nazi concentration camp, and it is worth noting that an article based on the data has already appeared in peer-reviewed conference proceedings (pdf). While I think that the distribution of their search data without the clear permission of its users, either to the public or to the government, is pretty clearly unethical, I don’t know that it makes this data poison fruit. Tainted, yes; poison, I don’t think so.

Finally, I wonder what AOL’s move is now. They’ve pulled the plug on the page, but lots of people presumably have and will share the data. If AOL now revokes permission to use the data, what does that mean? Do they own the data at this point? Providing and then pulling back data would set a terrible precedent.

Update: TechCrunch has a report on this as well.

]]>
https://alex.halavais.net/aol-data/feed/ 7 1531
Ignorance is strength https://alex.halavais.net/ignorance-is-strength/ https://alex.halavais.net/ignorance-is-strength/#comments Sun, 25 Jun 2006 21:31:04 +0000 http://alex.halavais.net/?p=1470 These two links have been sitting in my publishing queue since this morning, but Kevin has now posted one of them, forcing me to break my self-imposed "1 a day" publishing pattern.

First, it would be nice if the new head of the NSA knew the Constitution that he has sworn to uphold. Unfortunately, not only doesn't he know it, he believes fervently that he does know it. Sure, as with the case below, you can make an argument that he has his own spin on the spirit of the Fourth Amendment, but the fact remains: he doesn't know the letter.

Then there was the whole Battlefield 2 incident, with folks from SAIC claiming that terrorists had extended Battlefield 2 to be used as a recruitment tool. But wait, unlike propaganda games like America’s Army, BF2 allows you to shoot at American soldiers. It’s not a terrorist recruiting tool, it’s a feature!

Again, they claim that they may have been wrong in the details, but they caught the spirit of the thing: terrorists are using video games to recruit. Well, perhaps when collecting intelligence–especially open source intelligence–it might be a good idea to get the facts right. Especially when the facts don't take long or cost much to get right. Folks are saying "computer game experts disagree," but the truth is, anyone who is even moderately aware of the gaming world or popular culture (the "training video" included a voiceover taken from Team America: World Police) would have been able to give the group a reality check. That's not to say video games are never used to recruit insurgents, just that they had it wrong in this case. And this was one of the cases they decided was especially worth showing Congress as demonstrative of their abilities.

While ignorance may not be strength, those with a great deal of power in the “intelligence” community seem to be short on some basic sense.

]]>
https://alex.halavais.net/ignorance-is-strength/feed/ 1 1470