Blockheaded blocking, GitHub style

From WPTavern.com comes this surprising development about the block functionality on GitHub. From the article:

Users who are blocked will no longer be able to open or comment on issues or pull requests, nor will they be able to add or edit any of the project’s wiki pages.

…both of these sound reasonable so far, but then there’s this:

Blocked users are also prevented from forking any of the organization’s repositories.

It is this last restriction that I take huge issue with. While rare, forking is generally considered the best way to resolve differences of opinion on a project. There are also other valid reasons for forking, such as recoding substantial parts of a project for different hardware (anything from a completely different architecture to a newer or older generation of CPU), satisfying different jurisdictions’ legal requirements (example: a GPLed arcade game needs recoding for Texas’s somewhat oddball redemption game limits), or even rewriting the original code in a new language or a new variant of that language (Inkscape in C++ versus Sodipodi in C).

This problem is compounded by GitHub’s terms of service, section A.7, which disallows one person or legal entity from holding multiple free accounts. Otherwise, the easiest workaround would be to make a one-off account just for forking a project one has been blocked from, then transfer the fork to the “real” account. As it stands, someone who wished to fork a project after being blocked would either need to form a new legal entity such as an LLC (!) or have someone fork the project on their behalf, and I’m not even sure that would provide full functionality.
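As a technical aside, the block only governs actions performed through github.com; git itself has no concept of a blocked user, so the full history of a public repository can still be cloned and pushed anywhere one controls. Here is a minimal local sketch of that manual “fork”, with bare local repositories standing in for the GitHub remotes (all repository names are illustrative):

```shell
# Manual "fork" entirely outside GitHub. upstream.git and myfork.git are
# hypothetical stand-ins: bare local repositories in place of GitHub remotes.
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Stand-in for the public project; even a blocked user can still read
# (clone) a public repository over git.
git init --bare upstream.git
git clone upstream.git work
cd work
git -c user.name="Example" -c user.email="example@example.com" \
    commit --allow-empty -m "initial commit"
git push origin HEAD

# The manual fork: mirror the complete history to any host you control.
cd ..
git init --bare myfork.git
cd work
git remote add myfork "$tmp/myfork.git"
git push --mirror myfork
```

Of course, whether pushing such a mirror back up to GitHub under a second free account runs afoul of the terms-of-service clause above is exactly the problem.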

I am, however, pretty sure that the vast majority of ghost accounts on online services, GitHub or otherwise, go undetected for years. Outside of legal action, there’s no real way to stop them. (Example: I could buy a cheap voice/text-only “burner” phone plus $10 worth of airtime, pay for it in cash, go down to an Internet-connected cafe such as a Starbucks like the one I am in now, and set up a new Gmail account within minutes. If there were more than five people in that Starbucks when the account was created, and/or there’s no reason to suspect monkey business until after Starbucks’s video surveillance system erases or overwrites the footage from that time, it’s going to be damn hard to figure out who really made that account. Personally, I’ve never done this, but I’m sure some people have. Bonus points for using Tails/Tor Browser on top of all this.)

Too many people who use platforms like Facebook and Twitter have no idea what blocking actually does. Too many use it as an “easy way out” to avoid differences resolvable with minimal effort. (As an aside, Twitter’s blocking specifically has changed over the years. At one time one could still easily read the public tweets of a user who had blocked them. Now, there’s just a screen that says “You have been blocked from following @skquinn and reading @skquinn’s Tweets. Learn more”. One huge problem I have with this setup is that it takes the user’s own authentication information and uses it against them; simply logging out shows the same public timeline. Twitter does not even restrict me from making a second account just for the purpose of evading certain aspects of blocking, though things like the prohibitions against harassment and stalking are still in play.)

Also, blocking is an all-or-nothing proposition in most cases. There’s no way to line-item block certain posts from certain people who are no longer friends. The closest one can come, on Facebook, is a custom filter that amounts to “all friends except Joe Dummy” or similar. Likewise, GitHub’s block function appears to offer no easy way to permit just forking; it was made with the assumption that someone would usually want to prohibit a diverse and far-reaching set of actions with one button.

So again we have a service which has made the block function far more powerful than it really should be. There is still time for GitHub to fix this unfortunate mistake, and I hope that they do.

Wireless phone companies tossing customer privacy with long retention periods

A recent story on rawstory.com highlights the rather disturbing and frightening data retention policies of two major phone carriers. Verizon and Virgin Mobile both keep the content of text messages after they are sent: the former for a mere “3 to 5 days”, but Virgin Mobile retains text message content for a staggering 90 days (though it thankfully requires a search warrant before law enforcement agencies can get copies).

The story links this chart from the Department of Justice obtained by the ACLU. The numbers that texts are sent to and received from are one thing, but even those shouldn’t be kept longer than is necessary to resolve billing disputes.

Also quite horrifying is the length of time cell tower information is kept by certain carriers. Perhaps the worst offender here is AT&T, which merely states “from July 2008” and gives no upper bound (one year? two? five?) on how long it will keep the information. Ideally, this information should not normally be kept beyond, say, a week to a month, perhaps longer when absolutely necessary for the express purpose of troubleshooting (such as while repairing a tower that drops significantly more calls than average), and should be securely deleted as soon as it’s no longer needed.

Perhaps the worst part of this story is that each company appears to have at least one area in which it keeps certain records far too long, undermining most attempts to preserve privacy by switching companies. AT&T hangs on to store surveillance video for two months, clearly not necessary if T-Mobile keeps it for only two weeks (and then there’s Sprint, which doesn’t reassure me at all with its “depends” response, which could mean they’re buying hard drives every year to archive surveillance video indefinitely). Sprint (including Nextel and Virgin Mobile) keeps call detail information the longest, and has no upper end on subscriber information retention (scary, as I was once a Sprint customer). Verizon keeps IP session information for a whole year, and IP destination information for 90 days, while Sprint keeps both for 60 days; yet it’s clearly not necessary to keep either if AT&T, T-Mobile, and Virgin Mobile don’t keep that information at all.

In response to receiving this document, ACLU affiliates in 32 states filed requests for information with local law enforcement agencies seeking to uncover exactly how they are using this information to track Americans. Unfortunately, Texas is not one of those states, and I am trying to find out why.

If there are legal minimum requirements for keeping information, that’s one thing. However, companies need to be held accountable when they make record retention decisions that have a potentially deleterious effect on customer privacy. Judging by the diverse range of retention times, there appear to be no legal minimums for many categories. If anything, in an age where landline use is seen as antiquated, the laws should be revised to protect the privacy of wireless phone subscribers.

I will likely be following up with the most interesting parts of what the ACLU and ACLU affiliates find out regarding their requests for information, as well as what I find out, if anything, regarding Texas. It may not be for several months, though I will endeavor to post incremental followups if I uncover something particularly important or interesting.

An in-app purchase snafu: Apple sets sail on the failboat

It’s been a while since I’ve noticed Apple doing something really dumb. But this was almost shoved in my face, so it was difficult to just pass it by without writing a brief commentary on it.

Jacob Gorban recently wrote and published a short piece about Apple’s new in-app purchases feature that left users annoyed, and left him and his company looking like thieves. From his post:

We started to receive support requests from customers that purchased the “unlock” feature [in an app called Cashculator] but the application was still acting as “locked”. All they saw was a message that the transaction failed, with a very descriptive “Unknown Error” message, and nothing more. The really appalling aspect of this was that they were charged for the purchase ($20 to $30 in our case) but the transaction was marked as failed, the reason being “unknown error”.

Needless to say, such behavior doesn’t make the customers happy about using your app, not at all. Some of them originally thought that they failed to purchase. Imagine how surprised they were to receive a receipt from Apple a day or two later for their “purchase” which didn’t work.

Jacob goes on to state that just a few days ago, Apple finally fixed this bug, and the in-app purchases that had failed mysteriously started working at last. You’d think that’s the end of the story, right? Wrong.

Apple never really acknowledged that there was an issue with this, didn’t close my bug report, didn’t delete all the 1-star reviews that angry customers left and didn’t compensate the affected developers for their financial loss. Nothing.
[…]
I’m really not happy with the opaque way in which Apple handled this. […] Having this issue for more than a month and keeping it secret, while developers and customers suffer the consequences is plain wrong.

In other words, Apple quietly fixed this, and never even acknowledged there was a problem. This is not the way any decent company operates. This is one reason (of many) why I do not trust Apple and do not buy or use their products. (I once had to edit a blog post on my mom’s iPad for Quinn’s Big City. It took a lot of willpower not to start screaming profanity in the middle of a coffee shop.)

In my opinion, the right way to handle this was to immediately investigate the incident, issue a press release stating the problem, and Do The Right Thing for the customers (both the developers and the end users). I am reminded of Microsoft’s absolutely abysmal attitude toward security around the turn of the century, when it took days or weeks to even acknowledge an issue, an attitude which (thankfully) Microsoft has since learned cannot be sustained. I can only hope Apple learns the same lesson with regard to communicating with its customers and acknowledging known bugs, especially when they relate to payment handling.

Shame on you, Apple. This is not fixed; the damage done to the reputations of developers who trusted you to do the right thing still needs fixing. And that’s not as simple as tweaking a few lines of code and recompiling.

Misguided “Operation Wardrive” set to happen in Austin today

If this one seems a bit rushed, it is. I just now came across a mention of these two articles in an IRC channel I’m in, and noticed that this was scheduled to start happening today. Spread the word if you are in Austin.

According to both EFF Austin and KVUE, the Austin Police Department is sweeping the city. Not for pot plants, speeders, reckless drivers, or even jaywalkers. They are sweeping the city for open wi-fi access points.

From the KVUE article (quoted in EFF Austin’s article):

Leaving your wireless network open invites a number of problems:

  • You may exceed the number of connections permitted by your Internet service provider.
  • Users piggy-backing on your internet connection might use up your bandwidth and slow your connection.
  • Users piggy-backing on your internet connection might engage in illegal activity that will be traced to you.
  • Malicious users may be able to monitor your Internet activity and steal passwords and other sensitive information.
  • Malicious users may be able to access files on your computer, install spyware and other malicious programs, or take control of your computer.

Before even getting into EFF Austin’s side of the story, I’d like to analyze these points, which at first glance appear to have been ripped straight out of APD’s press release without any vetting whatsoever (I really hope KVUE is better than that, but this is the state of news in 2011). Most of them are incorrect or at least contain errors of fact. In order:

  • “You may exceed the number of connections permitted by your Internet service provider.”: Most wi-fi routers assign private IP addresses behind NAT, so the ISP sees a single connection whether there are one, two, five, ten, twenty, or fifty devices on the network (some routers may run out of addresses at a certain point, but this can be remedied). It is rare to have a wi-fi access point connected directly to the outside Internet, so for the majority of users this doesn’t apply. If your Internet provider does hard-limit the number of devices you can run on your connection, it’s time to switch. (This may eventually become an issue again with IPv6, but even then, with most users expected to get a block of over 65,000 subnets, it is doubtful.)
  • “Users piggy-backing on your internet connection might use up your bandwidth and slow your connection.”: Most users using an open wi-fi access point will not download excessive amounts of data. The benefit here of invited guests and friendly neighbors being able to borrow your connection usually outweighs the risks.
  • “Users piggy-backing on your internet connection might engage in illegal activity that will be traced to you.”: This is the same argument used to dissuade people from running Tor exit nodes and I would expect most of the same legal advice given to Tor exit node operators would apply here. In summary, an IP address does not uniquely identify an individual Internet user. It is simply routing information.
  • “Malicious users may be able to monitor your Internet activity and steal passwords and other sensitive information.”: This is about the only one that may be true with any regularity, and even then this would only apply to connections in plaintext, not to encrypted connections.
  • “Malicious users may be able to access files on your computer, install spyware and other malicious programs, or take control of your computer.”: Only if your computer is misconfigured, and in the case of malware, only likely if you’re running Windows or possibly MacOS. This doesn’t happen very often in the wild, if at all.

From EFF Austin’s post on the topic:

The EFF Austin Board of Directors finds nothing wrong with this analysis of the potential risks Internet users undertake when intentionally or unintentionally leaving their wireless access points open for shared use. In fact, we could cite a few more. However, these are much the same risks that Internet users undertake when using ANY shared wireless access point, such as those provided by cafés, public parks, or the Austin Public Library.

Missing from the cited analysis is any recognition of potential benefits to be gained from publicly sharing one’s wireless access point. Lately, the virtues of contributing to any shared commons tends to be overshadowed by fears of bad actors (both real and imagined). For some facts, it’s worth reviewing cryptographer and computer security specialist Bruce Schneier’s discussion on the virtues and risks of running an open wireless network.

I agree in principle with EFF Austin’s argument, and I think it is unfortunate that APD has chosen to go through with this with the misguided belief they are helping keep their citizens safe. (The rest of the article mentions EFF’s Texas Public Information Act request and their concern about exactly what is being collected and why.)

We have maybe a couple of hours before APD’s officers start knocking on doors to contact computer network owners sharing their Internet intentionally or unintentionally. So I think it’s a good time to remind everyone, especially those in Austin, that it is a bad idea from both a privacy and a legal standpoint to let the police inside your residence or business unless they have a warrant, or you called them and they need access to do their job. For more information, review this FAQ entry at flexyourrights.org.

I suggest either not talking to APD or saying as little as possible if they want to discuss the security settings of your wireless network. Frankly, I think there are better uses of taxpayer money, and I encourage Austin residents who agree to communicate this to their elected officials.

UPDATE: Per entersection’s comment below, this was actually canceled/disapproved by APD. I will be making a followup post about this in the near future (probably by tomorrow night at the latest).

Caught in a landslide

Joe McGinniss recently blogged about a most unusual story out of Wasilla, Alaska, also reported by the Mat-Su Valley Frontiersman. The principal, Dwight Probasco, told the school’s symphonic jazz choir they could not sing the song “Bohemian Rhapsody” because the songwriter, Freddie Mercury, was gay.

(Eventually, Mr. Probasco relented and allowed the song to be sung with lyrics edited to remove a section about killing a man.) From the article:

[Senior Rachael] Clark said it didn’t make sense for the school district to tout tolerance for all and then turn around and allow homophobia to dictate something such as graduation music.

“We were joking about singing Elton John’s ‘Candle in the Wind,’ instead,” Clark said. “I guess no matter what you do, someone’s feathers are going to get ruffled.”

(For those that don’t get Rachael’s joke: Elton John is gay. Also of note is that Freddie Mercury was actually bisexual, not gay.)

Of all the reasons to censor a piece of music, this probably has to be the lamest. Especially from a school that promotes tolerance for all. I question just how much Wasilla High’s administration really promotes tolerance in light of this. Gay, lesbian, white, black, yellow, red, brown, male, female, dark hair, light hair, low IQ, high IQ… we’re all people.

I’m no fan of either violence or censorship. This was some of Freddie Mercury’s best-known songwriting, and I think it’s unfortunate they felt editing was necessary at all. I hope that, going forward, those at Wasilla High and at schools with similarly intolerant administrations get a chance to learn what true tolerance and diversity are about.

This is 2011. Let’s make sure everyone gets the memo.