I'm going to try to keep this rant focused less on government and more on a generic shortsightedness that usually accompanies innovation. More often than not, when true innovation happens, it's met with everything from ignorance -- "I don't understand this, so I'm going to stick with this inefficient, bloated, and probably corrupt process that works" -- to Luddism -- "This is going to destroy our livelihoods, so we better get to killing it."
But true innovation almost always wins out over both of those types of direct attacks. Ignorance is usually swept aside by the adoption curve: enough people adopt the new way that it becomes comfortable. Like smartphones -- even your grandmother can check her email from the Talbots now.
And as for Luddism, yeah, when that wins, it sucks, but the smart money usually finds a workaround.
On a personal note, we used to get Luddism a lot at Automated Insights from journalists who were convinced we were going to create writing robots that would take their jobs. Then we did the Yahoo Fantasy Football recaps and they were all like "Ohhhhhh" and went back to covering Justin Bieber and snowstorms, which, now that they're named, give them about 100 puns per storm.
Anyhow, the worst thing that can happen to true innovation is for it to get misconstrued, with tons of energy wasted trying to figure out how it should be applied to the real world. When you're trying to get machines to pretend to think, this is quite prevalent, thanks to dystopian fantasies like The Terminator and The Matrix.
But it happens with things like electric cars too. Instead of mandating that a specific number be on the road by an unrealistic date and sinking billions and billions of wasted dollars into adoption, we need to get the battery thing figured out first.
In other words, we need to figure out how to use what we've got, not paint overly optimistic or pessimistic scenarios and throw the farm and/or John Connor at it.
It's happening again with Google Glass.
If you don't fully understand what Google Glass is... you're not alone. There may not even be a practical application for a full-time wearable video camera and always-connected heads-up display, although hundreds of seemingly winning scenarios are being postulated. None, I repeat, none have made it to market yet.
It's that early.
But that doesn't stop the technology from being innovative. Google has created a what-if that didn't exist before: "What if there was a mass market for a computer you wear on your face?" The early adoption curve is giddy over this question, and despite the pop-culture jokes and tweeted one-offs, just about everybody thinks this technology is a winner.
Except those who want to make sure we don't drive while wearing one. The West Virginia legislature is tacking an anti-wearable-computer-with-head-mounted-display rider to a no-texting law they're trying to pass.
At least they had the sense not to just write "No Google Glass while driving."
First. Holy cow, do I get this. Yes, you probably shouldn't be careening down a major interstate with a computer strapped to your face. Even though there's a part of me that thinks that natural selection would pretty much take care of things, I have to use these roads too.
But really, shouldn't this issue be tackled from the technology outward? Or at least shouldn't we wait until we know what's possible with this technology before we start spending time and money debating in what circumstances said technology should be legal?
I can already start counting the loopholes.
What Google Glass -- or, for that matter, any wearable heads-up display -- will ultimately look like, feel like, and how it will interact with the user is still pretty much up for grabs. On the flip side, there hasn't been a whole lot of innovation behind the cheeseburger either, and in my mind that's just as lethal in the hands of someone piloting a two-ton projectile through a crowd of commuters.
The cheeseburger. Some hundreds of years after its launch. Still perfectly legal.
Again, I'm not trying to argue the libertarian anti-nanny-state side of this debate. Although I most certainly could, I don't like to talk about politics.
But I go back to the fear that our Automated Insights writing robots would put millions of qualified journalists out on the street. It just isn't what we're about, and all that time spent having to explain that was time not being used to focus on what our technology can do.
And ultimately, it's less time spent innovating a solution that could possibly make that concern obsolete in the first place.