Just a question I've been pondering lately. I'm working on a side hustle called Engauge Analytics, something like Google Analytics with a machine learning component, and I tagged it "Analytics Without the Evil".
It was right around the time that the Facebook Cambridge Analytica scandal happened, and I think I coined it on a whim. It also comes at a time when Facebook, Twitter, and Google (well, not Google) are all sitting before the US Congress to discuss foreign influence in elections and other data issues.
So, I guess, my question is....
Are Google, Facebook, and the like, who use our personal data for their own gain, evil? Are their actions in these circumstances wrong?
Top comments (36)
Some objections to your statement:
It's difficult or impossible to know what data you're "handing" them.
Even if you try to stop handing them data, they still collect it. See: Google collecting location data.
They use fingerprinting techniques to track you even when you aren't logged in or submitting any data to them. Nowhere in that scenario are you handing them data; they are harvesting it.
But yes, on the face of it if someone clicks "I agree that you collect data" they're on their own. But reality is more complex than you let on.
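To make the fingerprinting objection concrete, here is a minimal sketch (in Python, with made-up attribute names) of how attributes a browser sends with every ordinary request can be hashed into a stable identifier, with no login or form submission involved:

```python
import hashlib

def fingerprint(attributes: dict) -> str:
    """Combine passively observable browser attributes into a stable ID.

    Every value here arrives automatically with a page request or is
    readable via script; the user never explicitly "hands over" anything.
    """
    # Sort keys so the same attributes always produce the same hash.
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

# Two visits from the same browser yield the same identifier.
visit = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/115.0",
    "screen": "1920x1080x24",
    "timezone": "America/New_York",
    "language": "en-US",
}
print(fingerprint(visit) == fingerprint(dict(visit)))  # True
```

Real fingerprinting uses many more signals (fonts, canvas rendering, audio stack), but the principle is the same: enough low-entropy attributes combined become a unique identifier.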
Evil may not be the correct term. I don't believe they mean any malicious intent with the data. Negligent? Likely. Should something be done to regulate the Big Data industry? Absolutely.
But at the same time, it would be naive to think we can have all these services (Facebook, Google, Twitter, etc.) for free. The tradeoff is our data being sold to parties beyond our control.
Google probably earned the term "evil" by using it themselves so much in the early days. Not that they ever wanted "don't be evil" to be a public slogan in the first place; I feel like that's why they get called evil more often these days.
Good call to both of you. I agree, Google used "evil" and now it's kinda stuck.
First and foremost, something should be done about the "regulating things" industry. It's the uncontested biggest evil on Earth.
I find this question is really hard to answer.
On the surface it seems a very clear question. Read superficially, it boils down to something like:
»Are others evil if they use our data for their own good?«
Something that makes your head nod. Of course, if others take something of ours and make money from it, that is evil; it is ours, and we should get some form of compensation.
Or you could read it in another way:
»Is someone evil when he or she tells other people things I do not want shared?«
This question opens a dark abyss where even darker questions are hiding.
Speaking of "our data":
Who owns "data"?
Suppose you tell me that you are going to buy a parrot. Do you own this information? Do you still own it after you have told me? Do I own it?
What is "evil"?
And what if I tell somebody else that you are buying a parrot, and he offers you parrot food because he heard from me that you are a big fan of parrots? Is that evil? Did I "steal" the information? What if I had told you upfront that I have a good friend I often talk to, and that I would of course tell him you are going to buy a parrot? Would that have been "evil"?
Google and Facebook are similar in that respect: both try to sell ads in one way or another, which is, from my point of view, not "evil", at least not per se.
That doesn't mean I am a strong advocate for - but also not against - those business models and platforms.
Honestly, I currently do not know exactly what to make of it. I have to think more about it. The only thing I can say is that it is hard to judge, because clear terms are currently missing. We humans are used to thinking in terms of physical things. Transposing concepts like "ownership" from the physical world into the virtual one is not as easy as it first seems.
Disclaimer: No parrots were harmed in the writing of this post.
What if instead of you telling me you're buying a parrot, I as the owner of the mall use the security cameras to track the fact that you went into the pet store and came out of there with a parrot?
I also have facial recognition software running so I can identify you.
Is it OK if I sell this information to a third party for ad purposes? Maybe someone pays top dollar to direct market bird seed to you.
Shady? Not shady? Do you feel comfortable or uncomfortable in this scenario?
Yes. Indeed a good question.
Is there a difference between "telling" people the fact that I bought a parrot and actually showing them the surveillance videos? Or, as a middle ground, telling other people that you have first-class information? People know that you are able to collect and classify the information, so it would be best to pay you for showing the right ads to the right kind of people; after all, who knows better than you do?
Who owns the information in the surveillance video? Is it yours as the mall owner? Obviously it is your camera. But it is not obvious who owns "the fact". Is it me, because as the agent I produced the fact? Is the fact shared between you and me? Or do we each own "different" facts?
And philosophically even more interesting:
How much power over people do you gain by knowledge about those people?
Does your knowledge about my recent visit to the pet shop give you any kind of power over me?
Say you could trick me into buying some kind of seed because you took advantage of the surveillance. Does the trick still work if I know how the trick works?
I think there are more questions at the moment than proper answers.
I am not going to continue the parrot analogy, because it is starting to fall apart; I will stick to real-world examples and information instead.
The short answer to your comment is this: No, there aren't more questions than proper answers.
If a user selects the "track everything about me" option when signing up/in to a service or device then all bets are off, obviously, and we aren't talking about those scenarios at all. We are talking about the scenarios where users aren't informed they are being tracked, aren't given the option to opt out, try to opt out but aren't able to, are tracked despite their best efforts not to be, or are misled about the amount of information tracked.
When a user decides to turn off location tracking, you shouldn't keep tracking their location. Providing a false opt-out mechanic is disingenuous at best or evil at worst.
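For what it's worth, honoring an opt-out is trivial to implement. The sketch below (hypothetical names, Python) shows what a genuine opt-out looks like, as opposed to a "false" one that logs the location regardless of the user's preference:

```python
from dataclasses import dataclass

@dataclass
class Preferences:
    location_tracking: bool = False  # tracking is off unless the user opts in

def record_location(prefs: Preferences, coords: tuple):
    """Store coordinates only when the user has opted in.

    A "false opt-out" would ignore prefs and log coords anyway --
    exactly the behavior described above as disingenuous at best.
    """
    if not prefs.location_tracking:
        return None  # honor the opt-out: nothing is stored
    return {"coords": coords}

prefs = Preferences()
print(record_location(prefs, (40.7, -74.0)))  # None: user never opted in
```

The point is that the honest version costs one `if` statement; continuing to track after an opt-out is a choice, not a technical accident.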
If I sign in to a service I am well aware that my signing in and my login information is stored and tracked for whatever purpose, but I can be tracked simply by visiting the website (this is the analogy I made with the pet store and the mall surveillance camera). At no point in this scenario are you given information about being tracked or a choice to opt out.
There are absolutely ways for an adversary to use this information nefariously. At the end of the day it's not that big of a deal if ads are targeted to me based on my internet habits, but there are people living with death threats hanging over them for which tracking is literally a matter of life and death. The path from Google tracking my location to the wrong person being able to find out where I live is not a long and twisty path.
To add, the GDPR recently made clear that people can expect privacy on the Internet and to be fully informed about the information tracked and stored about them. And Article 12 of the UN Universal Declaration of Human Rights states that no one shall be subjected to arbitrary interference with his privacy, family, home or correspondence.
Proper answers exist unless you are a Big Data/Big Internet apologist.
The good thing about artificial examples is, that they are sandboxed and easy to reason about. As I wrote in my first answer: real life is complicated and hard to reason about.
Mentioning the GDPR does not make your "real world examples" any easier to reason about.
What am I to make of that statement? Does questioning automatically put me on one side or the other? Does thinking the arguments against Big Data are perhaps inconclusive make me in some way an "apologist"? I think not.
And slippery-slope arguments like that do not help.
In principle, your argument goes like this:
There is a law which says x; therefore x is right.
This might be the case. But it is not by necessity so.
When we are speaking of "tracking users", why should a user be asked to give consent at all? What exactly is the good that the law is protecting here?
If we speak of privacy: in what way is visiting a public website private? Why should it be seen as more private than visiting a physical shop instead of a webshop?
Take the analogy of the secrecy of correspondence: the reasoning behind it is that confidential information is exchanged between a sender and a recipient. If you send me a secret, no postal law has been violated.
In what way does telling advertisers about your interactions on my site violate your "privacy"? Why should you treat that as confidential?
Yes. They support laws that help entrench their monopolies and infringe on individual rights, a free market, and free speech.
They definitely mean to be doing this, it's not an accident, it's not just business as usual.
no - they are not evil
yes - they are dangerous
yes - there are some who use the platform with malevolent intent
as a programmer, I find Google's algorithm useful - it knows I want a particular bias in my search - but I use that with informed intent
but a very good question
I'm not going to shut out my grandparents because they use GMail, and I'm not going to try to convince them to switch off GMail after already being a jerk about not having a Facebook account.
Besides, GMail's limited algorithmic sorting is pretty innocuous; it's Search and Facebook, with their pervasive filter bubbling, that are actually the problem.
As for messengers... I've never had a problem convincing people to SMS me.
Yeah it's tough. It's the same reason I still use Facebook: Without it, I would be cut off from a large number of family members that only use FB and don't communicate in other means.
Evil is commonly defined as morally or ethically bad. Google and Facebook do not hide the fact that they use your data for monetary gain; I think that is well known. They are, however, not always transparent about how much data they are collecting on you. I believe that in using these services we pay with our data, and once traded, your data is no longer yours. It's theirs. So I don't think they are evil; I think the medium of payment has changed, and people in general were ill equipped to understand the change. In short, we should better educate everyone on how these services make money, and let the market decide. If privacy becomes more valuable to people than convenience, it will emerge as a successful market solution.
We might call a human evil if they consistently made moral choices to do evil instead of good. I don't think it necessarily makes sense to label people in this way, but we'll go with that.
Corporations don't make moral choices, they make economic ones. They might emulate human morality by engaging the morality of their human employees, but this is mostly camouflage. At best, a corporation's culture might have a vestigial morality left over from when the corporation was a small business, but it will likely be turned into a marketing tool.
Interesting, because this statement has certainly been on my mind recently, but I would not necessarily call them "evil". That is the situation they ultimately got themselves into. For a better perspective on this (the Facebook case) I recommend the Medium article by Nat Eliason.
I feel like users should have a bit more control over their data. However, knowing I would do exactly the same in their position, I can't really mark them as evil.