Another company is in ‘hot water’ over the privacy implications of its products: this time it’s Amazon, with its Alexa device. Alexa is a voice-activated microphone and speaker that responds to your voice and helps out by setting timers, searching, playing podcasts, and the like. A judge asked for data that Amazon might have from the device, and Amazon declined.
You shouldn’t care about this.
You can avoid buying an Alexa and instead buy a Google Home, or use Siri, or use old-fashioned Google, and have your searches recorded a different way. You can switch from Google Mail to Fastmail and keep your email off Google’s servers - until you email anyone who uses Google Mail, which is almost everyone. You can delete your Facebook account, so Facebook can’t track you while you use their site - but they’ll continue to track you everywhere else.
To sum up:
How much do you know about Edward Snowden? Do you think he’s a patriot or a traitor? Do you remember his stint in Hong Kong, or his unusual relationship with Russia?
Now weigh that knowledge against what you’ve read of the documents that he released. Do you know more about the relatively normal person, or about the earth-shattering information, and why?
How much do you know about Facebook’s surveillance apparatus? How many articles have you read about how Facebook can build a 20-facet analysis of every user and sell it to advertisers? Remember when the CEO called users ‘dumb fucks’ for giving him their information?
Now think of every other service you use: is it better, or more respectful of your privacy? Even Target has a robust profile of you.
Every company has lots of reasons to collect user data and make inferences about it, both for personalization and for two-sided markets. And they have few reasons not to: the status quo is data collection and retention. Once you have a two-sided market, where you’re selling collected data or aggregates, you can compete effectively with your more traditional competitors who are selling goods and services for money (how quaint), and your users will mostly just care about price and quality - not privacy implications.
So the system is set up so that ‘surveillance capitalism’ is a winning business model, one that more and more companies will flock to. But our conversation about privacy keeps slipping into the trap of painting Mark Zuckerberg as evil. We shouldn’t approach this problem by questioning whether Zuckerberg or Cook or Bezos are good people or not: there might be an interesting answer and great character development to be had, but it’s ultimately irrelevant to the fact that spying on users is a pretty good business move with few drawbacks.
So the privacy conversation becomes a game of whack-a-mole, where you rail against Apple one month, Google the next, Amazon the next, and so on - shifting the focus of ire as new crimes against privacy are revealed, but never noting why the rate and severity of those crimes keep increasing.
Boycotts, and their companion, the concept of ‘educated consumerism’, rest on the dream that consumer choices can both protect people from wrongdoing, by buying ethical products, and influence what is produced, by not buying bad things.
On the internet, you might opt out by using Linux, avoiding Google, locking down your browser, and so on. Google and Facebook lose a user, but keep tracking you indirectly, and you gain the ability to tell your friends the secrets of staying safe on the internet.
But at some point, you’ve got to wonder: why isn’t this the default? You don’t have to read insidery safety blogs to buy a car with airbags: they’re required by law. You don’t have to sort through dud pills at the pharmacy, or test your gasoline to make sure it’s pure. You don’t have to shop around for a doctor who will keep your information private - they all have to, under HIPAA.
So why, on the internet, is knowing the way around privacy traps such a mark of distinction amongst nerds? The educated consumer of the internet who uses extensions to block cookies and spyware wears that knowledge as a badge of honor, when really it’s just a workaround.
Systemic problems deserve systemic solutions.
Regulations and standards are the slowest-moving but most important parts of this fight. The growth of fundamental encryption standards like HTTPS and public key cryptography mean that companies that want to elevate security can just ‘turn it on.’ Google deserves credit for aggressively targeting higher default levels of security, using its leverage with the Chrome browser for good.
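To make ‘turn it on’ concrete: for a site operator, upgrading users to HTTPS can be as small as a redirect plus an HSTS header (Strict-Transport-Security, standardized in RFC 6797) that tells browsers to remember the preference. Here’s a minimal nginx sketch - the domain and certificate paths are placeholders:

```nginx
# Redirect all plain-HTTP traffic to HTTPS.
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}

# Serve HTTPS, and ask browsers to remember it for a year (HSTS).
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/ssl/example.com.pem;  # placeholder path
    ssl_certificate_key /etc/ssl/example.com.key;  # placeholder path
    add_header Strict-Transport-Security "max-age=31536000" always;
}
```

That’s the whole point of good defaults: the hard cryptography lives in the standard and the browser, and the individual site just opts in.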
What technical standards can’t cover, and what few companies publish yet, is data retention and storage. Uber, for instance, can retain location information indefinitely, and so can Lyft. Lots of attention is given to the types of information collected, but the reality is that even a few kinds of data, collected over a long period of time, are enough to build a complex profile. And how do we know how data is stored? How likely is a security crisis at one company versus another?
The obscurity of Terms of Service should no longer be used as a cloak. Regulation could be used to surface specific, important parts of each site’s terms. One case where this was tried, albeit with mixed results, was the Cookie Law in the EU.
So, yep - there are companies that are bad actors, or, more likely, companies pushing the limits of surveillance in a way their lawyers think is relatively safe. But demonizing individual companies or people has had the effect of taking focus off of the system, which is as unregulated and destructive as it has ever been.
As technologists, we need to rally for better transparency about privacy and higher basic levels of security - and then we need to rally for ways to make those into requirements, expectations, and basic assumptions. Privacy should be the default: we shouldn’t and can’t expect individual companies to invest in privacy when it isn’t in their financial best interest.