In the past few years, the public has been confronted with hitherto unimaginable levels of personal privacy invasions.
We've learned that the NSA has been aggressively scooping up phone metadata and even building a giant data center in Utah called "Bumblehive" to record every phone call and much more. The facility is so massive that it's reportedly designed to store up to a yottabyte of surveillance data. (A yottabyte is 1 trillion terabytes -- a number so large you may never have heard of it before.)
And we learned this month that a flaw called Heartbleed in OpenSSL, the encryption library used by roughly two-thirds of all encrypted websites, potentially puts all of our information at risk of being exposed -- including our passwords and even the private encryption keys of the sites themselves.
And we just learned this week that the FBI's face-recognition database is on an incredible growth spurt. The database contained 13.6 million images last summer and is on track to contain 52 million by next year. The database links faces to names, addresses, phone numbers and other personal data. Law enforcement agencies will be able to take pictures from store security systems or ATMs -- or any photo -- and run it through the database and know exactly who you are. The FBI's own documents show that millions of these images have nothing to do with crimes or criminals.
So the people who are worried about threats to their privacy are justified in their anxiety.
The trouble is, far too many people are freaked out about the wrong things.
I'm going to give you examples of three technologies that are widely regarded as threats to our privacy -- technologies a great many people seem vexed about, but only because of muddled or misinformed thinking. Then I'll spell out why each is nothing to worry about.
Here they are:
1. Apple iBeacon
Apple's iBeacon, also known as "indoor GPS," is designed to provide very accurate information about a user's location, indoors or outdoors, for a variety of purposes. Apple, for example, uses iBeacon with its Apple Store app. If you're standing in the iPad section, it can pop up information and promotions for iPads.
Museums are using iBeacon for guided tours via smartphone. Stadiums are using iBeacon to tell people stuck in long beer lines where they can find a shorter line. And, of course, department stores are using it to promote products and provide customer information and customer service.
Beacon technology in general, and Apple's iBeacon in particular, doesn't get the attention and mindshare it deserves. It's a transformative idea that could change retail, navigation and customer service. But to the extent that people do pay attention, they tend to oppose it as yet another encroachment upon our personal privacy.
The unexamined myth about iBeacon is that it senses the presence of your smartphone, from which it learns your identity, then records or transmits this information to who knows where.
But that's not how iBeacon, or any beacon technology, works. The systems work with low-cost, low-power beacons that are placed at specific locations and use Bluetooth LE to broadcast identifiers over short distances.
Here's why it's not the privacy invasion people think it is: Beacons can't receive data; they can only send data.
If the user has voluntarily downloaded and installed an iBeacon-supporting app (on either iOS or Android, by the way), and has granted permission for the app to interact with beacons, then the phone will receive the beacon data and the app can do things with that information.
As an oversimplified example, a beacon at Macy's department store might sit there and transmit data that essentially says: "Hi, this is Macy's beacon No. 13." If the Macy's app receives that broadcast, it learns the beacon's identity and can estimate the phone's distance from the beacon based on signal strength. The creators of the app know where Macy's beacon No. 13 sits in the store, so the app can infer the phone's location -- and could, of course, relay that location to a remote server.
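To make the one-way nature of the broadcast concrete, here is a minimal sketch of what the app side does with a beacon's advertisement. The payload layout (Apple's company ID 0x004C, beacon type 0x02, a 16-byte UUID, major/minor numbers and a signed transmit-power byte) follows the publicly documented iBeacon format; the distance formula is a standard log-distance path-loss estimate, not Apple's proprietary ranging code.

```python
import struct

def parse_ibeacon(payload: bytes):
    """Parse the manufacturer-specific data of an iBeacon advertisement.

    Returns None if the payload isn't an iBeacon frame. Note the beacon
    only *sends* this data; it never receives anything from the phone.
    """
    company, btype, blen = struct.unpack_from("<HBB", payload, 0)
    if company != 0x004C or btype != 0x02 or blen != 0x15:
        return None  # not an Apple iBeacon frame
    uuid = payload[4:20].hex()                          # identifies the deployment (e.g. the store chain)
    major, minor = struct.unpack_from(">HH", payload, 20)  # e.g. store No. and beacon No.
    (tx_power,) = struct.unpack_from("b", payload, 24)  # calibrated RSSI at 1 m, signed
    return {"uuid": uuid, "major": major, "minor": minor, "tx_power": tx_power}

def estimate_distance(rssi: int, tx_power: int, n: float = 2.0) -> float:
    """Rough distance in meters via a log-distance path-loss model."""
    return 10 ** ((tx_power - rssi) / (10 * n))

# Hypothetical "beacon No. 13" frame: all-zero UUID, major=0, minor=13,
# tx_power=-59 dBm (0xC5 as a signed byte).
frame = b"\x4c\x00\x02\x15" + bytes(16) + b"\x00\x00\x00\x0d\xc5"
info = parse_ibeacon(frame)
print(info["minor"], estimate_distance(rssi=-59, tx_power=info["tx_power"]))
```

Everything in that exchange happens on the phone: the beacon transmits blindly, and only an app the user installed and authorized ever decodes the frame or acts on the distance estimate.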
It's an important distinction to understand that the smartphone is perceiving the beacon, not the other way around. It's the smartphone connecting with the outside world, not the beacon tracking the user's phone. It's the user who controls this activity, not the beacon or the store that installed the beacon.
A real privacy violation is a situation where you're not in control. But with iBeacon systems, you are in control of your own participation with the application.
Besides, the knowledge that you're in the shoe department at Macy's isn't significantly more of a privacy violation than the knowledge that you're at Macy's generally -- information that is already being collected by, at a minimum, your wireless carrier.
2. Gmail scanning
Google this week updated its terms of service to clarify the longstanding practice of scanning Gmail messages in order to provide customized advertising. The new document is Google's attempt to satisfy critics -- and Judge Lucy H. Koh, who told Google that its terms of service and privacy policies weren't explicit enough.
The idea that specific messages in Gmail may be accompanied by ads that reflect the content of those messages can freak people out.
Say you're planning a camping trip with a friend via email, and right next to your message is an ad for tents and sleeping bags. You might say, "Google is reading my email!"
Microsoft capitalized on this hysteria in its "Scroogled" marketing campaign. "Don't Get Scroogled by Gmail," Microsoft declared, boasting that its Outlook email doesn't scan messages for the purpose of delivering custom ads. (The company may have canceled the "Scroogled" campaign this week.)
But Google's email scanning isn't a privacy violation. Why? Because the content of nearly all email is routinely scanned, usually as part of an effort to combat spam.
And in any event, an algorithm isn't a person. Google co-founders Larry Page and Sergey Brin are not sitting in some conference room reading your emails.
Algorithms are doing simple logic, reasoning that if email = "camping" then display "sleeping bag" ad. Nobody is reading your email.
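That "if camping, then sleeping bag" logic can be sketched in a few lines. This is a deliberately toy model with a made-up keyword table -- Google's real ad system is vastly more sophisticated -- but the principle is the same: rules match text to ads, and no human ever sees the message.

```python
# Hypothetical keyword -> ad table; illustrative only.
AD_TABLE = {
    "camping": "sleeping bag ad",
    "wedding": "photographer ad",
    "flight": "travel insurance ad",
}

def pick_ads(message: str) -> list:
    """Return ads whose trigger keyword appears in the message text.

    The message is tokenized and discarded; nothing here 'reads' the
    email in any human sense, and nothing is stored.
    """
    words = set(message.lower().split())
    return [ad for keyword, ad in AD_TABLE.items() if keyword in words]

print(pick_ads("let's plan our camping trip"))  # -> ['sleeping bag ad']
```

The output of a function like this is an ad placement, nothing more; the message itself goes in one end and never comes out the other.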
3. Google Glass
The biggest unfounded privacy panic has got to be the hysteria surrounding Google Glass.
Google Glass eyewear is equipped with a camera, and you wear the device on your face. From that combination of facts, a vocal segment of the public has concluded that Google Glass wearers are entitled jerks who don't care about the privacy of others. "Glasshole" has become the label of choice for wearers.
It gets ugly out there. Especially in San Francisco, a city that's roughly 36 miles from Google's Mountain View headquarters. A woman wearing Glass was attacked in a bar in San Francisco in February. A male Glass user was attacked near that city's Mission District this week.
To the anti-Glass crowd, such incidents are evidence of just how much of a socially unacceptable privacy invasion Google Glass is. But these attacks have nothing to do with Google Glass and everything to do with San Francisco. A loud minority of people in that city are assailing the technology industry in general, and Google Glass is a symbol of that industry.
A group calling itself the Anarchist Collective is protesting IT professionals, and even harassing Kevin Rose, who is a co-founder of Digg, Revision3, Pownce and Milk and who currently works as a partner at Google Ventures. The group claims that IT professionals are overpaid and that, by living in the city, they are driving up the cost of living. Its demand: $3 billion from Google so its members can live wherever they want, including in the woods, in a capitalist-free society of their own making.
Most people who are slamming Google Glass aren't anarchists or VC harassers. But the widespread myth that Glass represents a special class of privacy violation adds a level of social acceptability, even righteousness, to their cause and may even convince them that it's OK to criticize, confront and even attack wearers of Glass.
Google Glass is not a violation of privacy that anyone should freak out about, and here's why: It's harder to take sneaky pictures of people with Glass than it is with a smartphone.
If I'm wearing Glass at Starbucks and turn my head in your direction, I'm briefly pointing a camera at you. But so is that teenage girl over there checking her messages. And so is that guy standing in line and holding his phone. And so is the woman in the corner taking a selfie -- you're in the background.
In other words, for every one time a Google Glass camera has been pointed at you, maybe a thousand smartphone cameras have been pointed at you. (I'm making these numbers up, but you get the point.)
When I'm wearing Glass, I'm not taking a picture of you because I have no use for a picture of you -- and for that same reason I wouldn't take a sneaky picture of you with my smartphone.
And if I did use Glass to take a picture of you, chances are you'd know about it. I would have to either speak a voice command out loud, reach up and press the shutter button on top of Glass, or wink in an exaggerated, awkward fashion. Then Glass would light up, and the picture on its little screen would be visible to everyone in the room.
There's nothing sneaky about Google Glass photography. It's far more obvious than smartphone photography.
The bottom line is that there are major, genuinely colossal invasions of our privacy that we should all be up in arms about. But iBeacons, Gmail scanning and Google Glass are not among them.
This article, "3 Privacy Violations You Shouldn't Worry About," was originally published on Computerworld.com.
Mike Elgan writes about technology and tech culture. You can contact Mike and learn more about him at http://Google.me/+MikeElgan. You can also see more articles by Mike Elgan on Computerworld.com.