Intel’s silicon shenanigans

As reported by Engadget, Intel is experimenting with a somewhat novel CPU upgrade scheme. They want to charge you to unlock features of your CPU that are already there.

Now, it’s not unheard of for CPUs to have cores or cache memory disabled at the factory. It’s acceptable, perhaps even expected, that a chip manufacturer would disable a defective portion of a chip before shipping it out. This is in fact how maximum clock speeds are determined: a chip that cannot run reliably at, say, 2.0 GHz is tested again at 1.9 GHz, then 1.8 GHz, and so on down to the minimum acceptable speed for that class of CPU, until the highest speed at which that particular chip runs reliably is found. It’s similar with cache and cores: a quad-core chip with two defective cores will have those cores disabled and be sold as a dual-core chip instead, and a chip with a defect in part of its L2 cache will have that portion disabled.
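
For illustration, the speed-binning process described above looks roughly like this (the stress test and the speed grades here are hypothetical stand-ins, not Intel’s actual procedure):

```python
# Rough sketch of speed binning: step down from the target clock until the
# chip passes its stress test, then sell it at the highest passing grade.
# The stress test and the speed grades are hypothetical illustrations.

SPEED_GRADES_GHZ = [2.0, 1.9, 1.8, 1.7, 1.6]  # highest to lowest acceptable grade

def bin_chip(chip, stress_test):
    """Return the highest clock speed at which `chip` passes `stress_test`,
    or None if it fails even the minimum acceptable grade (i.e., scrap it)."""
    for clock_ghz in SPEED_GRADES_GHZ:
        if stress_test(chip, clock_ghz):
            return clock_ghz
    return None

# e.g. bin_chip(chip, stress_test) returning 1.8 means the part ships as 1.8 GHz
```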

The difference is that Intel is shipping out fully working CPUs and using a DRM (Digital Restrictions Management) scheme to lock them down, holding the full functionality for ransom. This is not how responsible companies operate. A few of the comments on the Engadget blog entry already indicate that Intel has lost goodwill with this rather cowardly move.

What to do? I personally recommend avoiding the purchase of the DRM-crippled CPU chips in question. It may not be practical to buy your next PC without a single Intel chip in it, but I certainly won’t blame you if you try. Intel is “just testing it out… in a few select markets for now.” Let’s all grade this test a big fat F.

Games book publishers still play

Josh Catone, writing for Mashable, reports on the not-too-surprising pitfalls of digital textbooks and why they are not ready for prime time for many students. The article focuses primarily on college students, who have to purchase their textbooks. (If I have any readers still in high school out there, yes, it’s true: senior year of high school is the last time you’ll get to borrow your textbooks for free.)

Indeed, it is entirely predictable that the third reason (of three) is “questions of ownership.” Cited is DRM (digital restrictions management) limiting use to 180 days in one example, after which the book is automatically deleted. The example cited is a biology textbook available both in hard copy and through electronic textbook distributor CourseSmart. (The article refers to CourseSmart as a publisher, but it appears this is technically incorrect.) The hard copy version is available for US$50 used or US$80 new; CourseSmart charges US$70 for what is in effect a 180-day rental. Given the cost, and the fact that expiration is never a concern with printed textbooks, this is simply unacceptable. US$70 for a non-DRM copy is more in line with what I’d consider fair. If Pearson (the publishing company) insisted upon a silly, odious, and obnoxious 180-day time limit, I honestly think US$20 is more realistic. Yes, one-fourth the cost of the new print version.
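
For the sake of the arithmetic, here is the comparison spelled out (prices as cited in the article):

```python
# Price comparison for the biology textbook example (US dollars, from the article).
print_used, print_new = 50, 80
rental_price, rental_days = 70, 180   # CourseSmart's expiring electronic copy

print(f"Rental works out to ${rental_price / rental_days:.2f} per day of access")
print(f"Rental price as a share of the new print price: {rental_price / print_new:.0%}")
print(f"Suggested 180-day price (one-fourth of new print): ${print_new / 4:.0f}")
```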

The lack of standardization doesn’t help either, which in turn highlights just how bad an idea DRM really is, since DRM is a large part of the reason for that lack of standardization. It’s similar to the reason Microsoft’s “PlaysForSure” campaign looked pretty dumb when Microsoft then came out with the Zune, in essence telling hardware manufacturers: “Thanks for supporting our patented Windows Media format and making it easy for PCs running Windows to use your players. We like you so much that we’re going to say, here’s our Zune, and here’s our middle finger.”

Most digital audio players prior to Apple’s iPod, Microsoft’s Zune, and the like used a standard, if patent-encumbered, format: MPEG-1 Audio Layer III, better known as MP3. Most also understood Windows Media (WMA/WMV) files alongside MP3, but MP3 was still a fairly reliable “lowest common denominator” format.

In the world of digital print publishing, the clear winner is Adobe’s PDF format (which, as far as I know, is not patent-encumbered, or at least any patents covering it have been made available under a royalty-free license). Despite this, many e-book readers do not support plain PDF, or support it in a manner that’s obnoxious and clumsy compared to grabbing the DRM-infested version.

It seems like print’s slow transition to digital may be the last frontier for DRM elimination. College textbooks are just the tip of the iceberg, though I think students who can no longer sell their books back at the end of a semester will be quite annoyed. Or they may just shell out the money again for what is in reality an expensive rental. Hopefully, the kids smart enough to get into college will be smart enough to see the shell game being played before them.

Silly statistical shenanigans in the drive-thru

As a close friend and former roommate of a QSR (quick service restaurant, or “fast food”) crew member and manager, I find this one strikes a special chord with me.

Consumerist.com reports on a really stupid pet trick being pulled by some QSR drive-thru workers: they are asking customers to back up and pull forward to restart the speed-of-service timer, a very low-tech and suspicious way of gaming the system.

The article does mention the prospect of in turn gaming the drive-thru jockeys out of free fries or similar such things. I find it difficult to take a real stance on the ethics of such a maneuver. Hopefully, it will not matter soon; I am aware that at least Taco Bell, and possibly all other Yum! Brands QSRs (KFC, Pizza Hut, Long John Silver’s, A&W), have an amount display set up below the drive-thru window, which presumably cycles through to the next customer when “cheated” in such a fashion. I’m wondering why Burger King et al. don’t adopt similar technology to squash this kind of statistical shenanigans.
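
To make the loophole concrete: if the speed-of-service clock is keyed to a vehicle-presence sensor at the window, backing up and pulling forward simply starts a fresh timing window, whereas a clock keyed to the order itself (which is presumably how the per-order displays work) has nothing to reset. A minimal sketch of the difference, with entirely hypothetical class and method names:

```python
import time

class SensorBasedTimer:
    """Speed-of-service clock keyed to a vehicle-presence sensor at the window.
    Backing up and pulling forward re-triggers the sensor and resets the clock,
    which is exactly the trick described above. (Hypothetical model, not any
    vendor's actual system.)"""
    def __init__(self):
        self.start = None

    def vehicle_detected(self):
        self.start = time.monotonic()      # every re-trigger restarts the timing window

    def elapsed(self):
        return time.monotonic() - self.start


class OrderBasedTimer:
    """Clock keyed to the order taken at the speaker box, the way the per-order
    displays presumably work; re-triggering the window sensor changes nothing."""
    def __init__(self):
        self.started = {}                  # order_id -> start timestamp

    def order_taken(self, order_id):
        self.started.setdefault(order_id, time.monotonic())

    def order_delivered(self, order_id):
        return time.monotonic() - self.started.pop(order_id)
```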

If the numbers are to matter, if the management of a QSR actually gives a damn about real speed of service and not just making the numbers look good to the next higher manager, this type of cheating needs to be dealt with by termination, first time, no exceptions.

And to the workers resorting to this in a vain attempt to save their jobs: If you can’t stay up to speed, stay out of the kitchen.

[Edit 2024-03-23: Dead link replaced with archive.org copy]

The end of blogging as we know it in the UK?

In perhaps the most daft attack on blogging as free speech, the High Court in London (UK) has ruled bloggers have no right to anonymity, as reported by Yahoo! News UK.

The basis of the ruling is the assertion that “blogging is essentially a public rather than a private activity.” I am horrified at the implication made here: there are many things one does that would nominally qualify as public activities, yet for which one would still expect some degree of anonymity.

Granted, the case here involves a public official and is far from an ideal test case. But it has a chilling effect, and sadly, I would expect no better from certain US courts. (This is par for the course in, e.g., China and maybe even Iran under the current administration there.)

There are, and will always be, peer-to-peer anonymity-friendly networks like Freenet, though the chilling effect is still present because moving content such as a blog-like journal to such a network reduces the audience substantially. However, it is my stance now, and has been for some time, that true free speech comes only with anonymity, since most censorship comes after the fact. Thomas Paine originally published the pamphlet “Common Sense” anonymously during the American Revolution, and for good reason (as shown in this Wikipedia illustration).

Today, Paine would probably write a blog and/or post to a Web-based forum. In much the same way that “crimes of the high seas” has been reinterpreted to include air travel, freedom of the press and freedom of speech include publishing via the Internet and similar electronic media.

In summary, the authoring of a pamphlet such as Paine’s is no more a public activity than writing a blog accessible via the Internet, and the latter is in fact the modern day equivalent of the former. I think it is unfortunate that the High Court in London has found nearly the exact opposite to be true.

Unite behind Opera? Read the fine print

NPR reports on Opera Unite, the latest attempt by the alternative Web browser maker to implement what is at first touted as a peer-to-peer (P2P) network, primarily among users of its browser. What Unite actually is, however, is a centralized network run by Opera, where all user content must go through Opera’s proxy servers and users are subject to an odious terms of service (TOS) agreement. A portion of the TOS reads as follows:

By using the Services, you warrant that you will not upload, transfer or otherwise make available files, images, code, materials, or other information or content (“Content”) that is obscene, vulgar, sexually-oriented, hateful, threatening, or that violates any laws or third-party rights, hereunder, but not limited to, third-party intellectual property rights.

And further down, the rather ominous but predictable:

Opera has the right, in its sole discretion, to remove any content or prevent access to and/or use of any or all of the services, for any reason, including as a result of your violation, or alleged violation, of these Terms of service.

I think Opera really misses the entire idea of a P2P service here. The whole advantage of P2P is not having to depend on a centralized server and not having to deal with yet another set of restrictions on top of any imposed by one’s ISP.
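
As a point of contrast, the “serve it from your own machine” half of the idea requires no central relay at all; even Python’s standard library can share a folder directly with anyone who can reach your address (a bare-bones sketch, not a claim about how Unite itself is implemented):

```python
# Bare-bones sketch: share a local folder over HTTP directly from your own
# machine, with no central proxy between you and whoever fetches the files.
# (Contrast with Unite, where traffic is routed through Opera's servers.)
from http.server import HTTPServer, SimpleHTTPRequestHandler

PORT = 8080  # arbitrary example port

if __name__ == "__main__":
    # Serves the current working directory to anyone who can reach this address.
    HTTPServer(("0.0.0.0", PORT), SimpleHTTPRequestHandler).serve_forever()
```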

This whole clause of the TOS smacks of censorship. Whose judgment applies to what is considered “obscene” or “vulgar” anyway? Where does Opera get off telling users what they can and can’t share?

Even Flickr lets you share such content willingly, as long as you label it appropriately. Of course, there are some things that even Flickr will not allow at all; that’s where Freenet and Tor hidden services come into play.

If Opera had just said, “Why trust Facebook, Flickr, and YouTube when you can just trust us instead?” it would at least be transparent about what its service is.

[edit 2023-07-03: fix formatting]