Security vs. theater: the importance of understanding the difference

CNN recently published a commentary by Bruce Schneier that calls into question many of the “security” measures being put into place, in the name of stopping terrorism.

This quote sets the tone for the entire piece, and I think it is something a lot of people are quick to forget:

Terrorism is rare, far rarer than many people think. It’s rare because very few people want to commit acts of terrorism, and executing a terrorist plot is much harder than television makes it appear.

I have to wonder if we just have too much of this kind of fantasy crime and terrorism on TV, and if we’re at the point where it is distorting people’s perception of reality. To put another big wrinkle into things, there’s a whole genre called “reality television” which, to be honest, is badly named; I would even say deceptively misnamed, given some of the things that are tagged with that label.

Anyway, Bruce goes on to discuss “movie-plot threats” and “security theater” at length. I won’t quote most of it (don’t want to step outside the boundaries of “fair use”). But he does decry the photo ID checks, the stationing of National Guard troops after the September 11th attacks, and yes, even harassment of photographers as “security theater.”

The last of these is the most egregious example of “security theater”: the last thing a potential terrorist would do is draw attention to himself by sporting a DSLR with, say, a 70-300mm zoom lens. A point-and-shoot of the type commonly available in the US for under $150 is a far more likely choice for a terrorist doing clandestine reconnaissance, since a tourist is much more likely to carry that type of camera. Not that it should even matter, of course.

Bruce touches on a great point here:

If we spend billions defending our rail systems, and the terrorists bomb a shopping mall instead, we’ve wasted our money. If we concentrate airport security on screening shoes and confiscating liquids, and the terrorists hide explosives in their brassieres and use solids, we’ve wasted our money. Terrorists don’t care what they blow up and it shouldn’t be our goal merely to force the terrorists to make a minor change in their tactics or targets.

While the post-September 11th security measures are understandable as a way to quash the fear of the masses, I have to wonder what, in the end, they really accomplished. The terrorists are unlikely to attack civilian air travel twice in such a fashion.

Bruce doesn’t go into detail on this, so I’ll say it here: the goal of terrorism is fear and the disruption of normal everyday life. The terrorists, strictly speaking, don’t even have to blow something up to accomplish that; sometimes an obviously planted hoax bomb will do the trick just as well. Throw some wires together with a cheap timer (or alarm clock) and something that looks like it might be some kind of explosive, and put it in an obvious location that’s still somewhat concealed.

Most damning is Bruce’s blistering attack on the military tribunals:

We should treat terrorists like common criminals and give them all the benefits of true and open justice — not merely because it demonstrates our indomitability, but because it makes us all safer.

Once a society starts circumventing its own laws, the risks to its future stability are much greater than terrorism.

And this is something we should do today. We, as a society, should stick to our own laws, and give those charged with a crime the same rights, whether accused of “terrorism” or petty theft: the right to an attorney, the right not to incriminate oneself, etc.

Finally, this last quote from Bruce echoes my thoughts on the matter almost word for word:

Despite fearful rhetoric to the contrary, terrorism is not a transcendent threat. A terrorist attack cannot possibly destroy a country’s way of life; it’s only our reaction to that attack that can do that kind of damage. The more we undermine our own laws, the more we convert our buildings into fortresses, the more we reduce the freedoms and liberties at the foundation of our societies, the more we’re doing the terrorists’ job for them.

The anti-terrorism measures are more disruptive to our daily lives than any terrorist attack has ever been. It’s time we start lowering the curtain on “security theater” once and for all.

China and censorship: the Green Dam fiasco

Maybe it’s just me, but the first thing I think of now when I hear “China” is “censorship.”

Two recent articles on Freedom to Tinker address the new “mandatory” Green Dam software. The first article by Dan Wallach exposes just how powerful censorship software becomes when installed on the end user’s PC. Since I doubt that Green Dam will be released under a free software license (this is China we’re talking about here) it also highlights just how dystopian things can get when one trusts proprietary, non-free software. This is either the bottom of the slippery slope or very far down it.

The second article by Ed Felten describes just how insecure Green Dam is. In essence, it is a security breach waiting to happen. I’m not surprised. A quote from a University of Michigan report cited in the article sums it up nicely:

Correcting these problems will require extensive changes to the software and careful retesting. In the meantime, we recommend that users protect themselves by uninstalling Green Dam immediately.

I honestly am quite surprised that the software would even allow for uninstallation, given what it is designed to do (censorship) and where it is designed to do it (on PCs in China). If Green Dam does allow uninstallation, that is the first thing any responsible PC owner in China who gives a damn about his or her freedom will do. I personally build my own PCs when I can, and start with a clean hard drive when I can’t. It would honestly surprise me if neither were an option in China.

Apple’s sneaky iTunes personal information leak

As (re-)discovered in a recent TechBlog article, Apple is embedding personal information in downloads from its iTunes music store. Presumably this is a way to catch the “low-hanging fruit” among those who partake in unauthorized copying. Casting aside the ethical issues, if that is Apple’s reasoning, it is horribly misguided.

Consider the following situation: Alice hosts a party attended by Bob, Charlie, and a few other close friends of hers. Mallory crashes the party (or even attends as a friend of one of the other guests; it’s really immaterial) and snarfs some of the music files from Alice’s collection, with Alice’s name and e-mail address embedded in them. The next morning, they wind up on a Web server with a Tor hidden service address, run by Mallory.

Now, nobody downloading these files will know anything about Mallory. Well, obviously they’ll know some Tor user put the files up on a hidden service. But all they will see in the files is Alice’s e-mail address, and they will probably assume she’s the one who shared them.

This can happen any number of ways: stolen storage media strikes me as one of the more likely ones (in fact, Mallory may well have sticky fingers when it comes to USB flash drives in the above example). But I think it’s a great reason why this kind of information should not be in downloaded media files.

Not to mention that Dwight does a great job of showing how easy this is to circumvent (by converting to MP3). I would not even be surprised if there’s a way to rewrite the file with the exact same encoded audio sans most of the tags.
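To show how weak this “watermark” really is as evidence, here is a minimal sketch of how anyone could scan a downloaded file’s raw bytes for an embedded e-mail address. The file contents and the e-mail pattern are illustrative assumptions, not Apple’s actual tag layout:

```python
import re

# Simple e-mail pattern; real addresses can be more complex.
EMAIL_RE = re.compile(rb"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def find_embedded_emails(data: bytes) -> list[str]:
    """Scan raw file bytes for anything that looks like an e-mail address."""
    return [m.decode("ascii", "replace") for m in EMAIL_RE.findall(data)]

# Illustrative stand-in for a purchased media file with a purchaser tag
# buried in its metadata (not the real iTunes file layout).
fake_file = b"\x00\x00ftypM4A \x00apID\x00alice@example.com\x00mdat"
print(find_embedded_emails(fake_file))  # → ['alice@example.com']
```

The point is that anything this easy to find is just as easy to strip or, worse, to attribute to the wrong person, which is exactly the Alice/Mallory problem described above.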

Autorun, autoworm

It’s a bit old, but just today I read an entry in Ed Truitt’s blog about how the Pentagon got infected with (what I would guess is) a Windows worm.

To quote the quoted message:

Someone infected thumb drives with the WORM then dropped them around the Pentagon parking lot. The employees, picked them up, took them into their offices and plugged them into their office computers to determine the owner of the drive. (emphasis mine)

To me, it seems the real risk here is not the act of plugging unknown devices into a computer. Rather, this whole incident is a very damning indictment of Windows’ infamous autorun feature and the risks thereof. Merely accessing a device should never automatically run any executable that may be on it, at least not without prompting the user.
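For those unfamiliar with the mechanism being abused here: Windows autorun is driven by a plain-text autorun.inf file at the root of the drive. A minimal illustrative example (the executable name is hypothetical) might look like this:

```ini
[autorun]
; Launched automatically on insertion when AutoRun is enabled
open=setup.exe
icon=setup.exe,0
; Text shown in the AutoPlay dialog -- easily spoofed to look benign
action=Open folder to view files
```

The `action` line is what makes this so dangerous: the attacker controls the text the user sees, so the malicious option can masquerade as the normal “open folder” choice.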

This is a security hole big enough to drive a tank through, and inexcusable negligence on the part of Microsoft. This is not something a user should have to explicitly disable (whether permanently or with an obscure trick like holding down Shift while plugging/inserting media).

OpenBSD uses the slogan “secure by default.” Here’s hoping that Windows 7 will be the first version that “insecure by default” doesn’t apply to.

The roots of Internet Explorer’s security problems

About a day ago Zack Whittaker posed the question: Has Internet Explorer ever been safe?

Overall I think this is a pretty good write-up on the history of Internet Explorer for those who don’t understand its faults and/or are actually still using IE for serious Web browsing.

I think on a greater scale, it’s a great example of Microsoft’s utter failure in terms of security, and quite possibly a testament to the problems facing non-free software.

Non-free software is defined here as software licensed under terms which do not grant at least one of the four freedoms in the FSF’s Free Software Definition. This includes most of the shrink-wrapped boxes on the shelf at your local computer/electronics retailer.

This class of software, particularly software made available without human-comprehensible source code (like just about all of Microsoft’s products), starts at a significant security disadvantage. The users are stuck waiting on the maintainer’s patch and, in the case of some remotely exploitable holes, are “sitting ducks” until one is available.

The FSD’s freedoms 1 and 3 are particularly important for getting security fixes out on the users’ timetable instead of the maintainer’s, with freedom 2 playing a strong supporting role when the maintainer refuses to even acknowledge the problem. This is how the fix for the teardrop vulnerability in the Linux kernel made it out in a matter of hours, instead of days or weeks like the corresponding patch for Windows. Unfortunately for the Windows users of 1997, Microsoft’s stance on security had much more room for improvement then than it does today. Even if a fix had come from a user or group of users, it could not have been legally distributed due to Microsoft’s end-user license agreement (EULA).

Note that this is only an example. The issues are still just as relevant in 2008 (or soon 2009) as they were in 1997. They apply to the recent zero-day IE exploit the same as they do to the teardrop vulnerability.

It is possible Microsoft’s programming staff may one day, finally, consistently match the speed at which Firefox’s development team (which includes users capable of fixing security holes in Firefox) patches vulnerabilities. In fact, I would like to see that happen in the near future.

However, I’ll be honest here and say I’d also like to win a multi-million dollar lottery jackpot in the near future. Casting wishful thinking aside and sticking to strict realism, I don’t see either happening soon.