Several weeks have passed since the Facebook and Cambridge Analytica scandal came to the fore, a matter considered the biggest violation of privacy in the history of the social network.
The debate that followed touched on several issues, from the non-consensual use of users’ information to the psychographic profiles built from it.
Those questions demanded answers, and they led Facebook’s CEO, Mark Zuckerberg, to appear before the US Senate to explain what had happened.
The explanations he gave didn’t convince users, senators, or anyone else on Facebook, but they served to calm tempers and turn the page.
In any case, users had finally opened their eyes to what accepting Facebook’s terms and conditions actually meant… or had they?
Apparently, the scandal marked a turning point in how the social network is perceived. But I think that may not be the case.
When it was revealed how our information had been interpreted to infer our political preferences and social concerns, the immediate response was a general feeling of rejection towards the platform, its behavior, and its betrayal of the trust we had placed in it without hesitation.
And then we had the excuse of ignorance, pointing to the service’s unclear terms and conditions to justify ourselves.
It was then that a reality everyone already knew, but no one was willing to admit openly, became publicly apparent: nobody reads the terms and conditions.
Not even Facebook read the conditions of the application that Cambridge Analytica used to harvest users’ data. This makes us wonder where the line lies between companies’ obligation to inform users about the use of their data and the need to do so in a way that is understandable and accessible to the average user.
The situation showed that the consent users gave to Facebook was anything but informed. For consent to be informed, three requirements must be met: the information must be clear, it must be understood, and acceptance must be voluntary.
It’s evident that the information can sometimes take inscrutable and unintelligible forms, which leads us to the second requirement: understanding. It can be sidestepped in order to obtain automatic consent from users, who don’t want to get lost in a tangle of technicalities and legal concepts.
Regarding voluntariness, I prefer to speak of forced voluntariness: the platform has become so widespread that using it is practically obligatory for anyone who doesn’t want to become a kind of digital pariah.
Moreover, the contract’s clauses offer no alternatives and aren’t negotiable. There doesn’t seem to be any kind of voluntariness, not even in its most metaphorical sense.
This dose of reality showed us how far the Internet giants can go in exploiting our information, so it is time to consider where we stand in this game.
According to an Ipsos survey published last Sunday by Reuters, one in four users has reduced their activity or even suspended their Facebook account since the scandal. However, the remaining three quarters are as active as, or more active than, before. What excuse will we use when the next data leak occurs?
Beyond broken commitments and uninformed consent, the average user’s reaction never ceases to surprise. There is likely some psychology behind this behavior: we seem to forget quickly because we have already created the need to live connected, and we find it hard to give that up.
How do we cut back on a service that connects us with others, makes us feel part of the group, and satisfies that sense of belonging so deeply rooted in human beings since their origins?
The justifications are running out. Possibly, the question we have to answer now is not whether we know what the Internet giants do with our data, but whether we are willing to keep paying for their services with our information. What price are we putting on our privacy?
The key may be that data is “only” data: ethereal, neither palpable nor tangible, at least until there is a major educational campaign explaining what the giants do with all the information we provide.
In the meantime, this data will continue to seem innocuous, and we will not realize that the intelligence and insight derived from user data is anything but harmless.
After all, paradoxical as it may seem, we may be doomed to “like” the Facebook scandal.
Guest author on the Open Data Security blog