Data privacy, digital surveillance and the issue of consent
Sitting in a coffee shop in south-west London, reading ‘Mindf*ck: Inside Cambridge Analytica’s Plot to Break the World’ by Christopher Wylie, I’m troubled.
This is no surprise. I watched Netflix’s documentary The Great Hack last year and have been waist-deep in ‘big data’ and internet of things (IoT) research ever since.
It’s an extraordinary but uncomfortable book, illuminating the power of psychographic data profiling and how easy personal data is to acquire.
Data privacy dominates global media headlines with alarming frequency, from the infamous Cambridge Analytica/Facebook scandal to a constant stream of online data breaches.
The largest of these to date is Yahoo!’s, which came to light in 2016, when an estimated 3 billion accounts were found to have been compromised.
We want your data. Do you consent?
When it comes to digital data and our online identities, the question that keeps coming up in my conversations is: who owns our data? Close behind it is the issue of consumer consent or, perhaps controversially, forced consent.
It’s a thorny subject. New data protection laws, including the ePrivacy Directive (ePD) and the General Data Protection Regulation (GDPR) in the European Union and, as of January 1, 2020, the California Consumer Privacy Act (CCPA), require businesses to publish their data privacy policies and ask customers to agree to them. Under GDPR, consent must be freely given.
But most websites appear to use social engineering to entice people to click the ‘agree/consent’ button by making it harder to say no. The consent is therefore not freely given; it is nudged out of people.
Those that do offer a decline option often force the user to navigate complicated privacy settings or, sometimes, deny them the use of the service altogether.
Most businesses know that the majority of us have neither the time nor the patience to do this for every website we visit. Most of us are accessing websites on small-screened mobile devices while on the move, too. The legal small print is complicated, dense and time-consuming to read.
Society today is always in a rush. So we impatiently click ‘yes’ to cookies, web trackers and other data-sharing agreements that we don’t really understand and that are, in the main, unregulated. These trackers follow us around, creating a digital catalogue of our activities that is monitored, mined, analysed, bought, sold… and, in the reported case of Cambridge Analytica, manipulated.
In one click, our data, our digital identities, seemingly belong to someone else: a free resource that can be monetised and, should it fall into the hands of bad actors, used against us.
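To make that mechanism concrete, here is a minimal sketch of how a third-party tracker assembles such a catalogue. It is an illustration only, assuming Python and Flask; the endpoint, cookie name and in-memory profile store are hypothetical stand-ins.

```python
# Minimal sketch (assumes Flask) of a third-party tracking pixel.
# Any page that embeds this 1x1 image lets the tracker recognise
# the visitor and append the page they were on to a profile.
import uuid
from flask import Flask, request, make_response

app = Flask(__name__)
profiles = {}  # stand-in for a real profile database

@app.route('/pixel.gif')
def tracking_pixel():
    # Reuse the visitor's existing ID, or mint one on first sight.
    visitor_id = request.cookies.get('visitor_id') or str(uuid.uuid4())
    # Record which page embedded the pixel: the 'digital catalogue'.
    profiles.setdefault(visitor_id, []).append(request.referrer)
    resp = make_response(b'GIF89a')  # placeholder image payload
    # A year-long cookie means the catalogue follows us around.
    resp.set_cookie('visitor_id', visitor_id, max_age=60 * 60 * 24 * 365)
    return resp

if __name__ == '__main__':
    app.run()
```

Third-party cookies of exactly this kind are what consent banners nominally ask permission for; declining them is what the nudging described above makes difficult.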
If it’s free, am I the product?
In her book “The Age of Surveillance Capitalism”, Professor Shoshana Zuboff summarises this situation beautifully. She writes:
“Surveillance capitalism unilaterally claims human experience as free raw material for translation into behavioural data.
Although some of these data are applied to service improvement, the rest are declared as a proprietary behavioural surplus, fed into advanced manufacturing processes known as ‘machine intelligence’ and fabricated into prediction products that anticipate what you will do now, soon and later.
Finally, these prediction products are traded in a new kind of marketplace that I call behavioural futures markets. Surveillance capitalists have grown immensely wealthy from these trading operations, for many companies are willing to lay bets on our future behaviour.”
And in exchange for surveillance we, the public, get convenience, efficiency and social connection.
Can I trust businesses to protect me from cyber criminals?
Putting online surveillance to one side for a moment, the other issue, as touched on at the start, is the risk of online data breaches.
As criminals become more sophisticated and consumers move to a more digital world, protecting our identities online is increasingly vital.
This is not just about the internet, of course. It applies just as much to the surge in artificial intelligence (AI) “smart tech” products: wearable tech, energy meters, home speakers with AI assistants, video doorbells, even toys. All of these store personal data which, if compromised, can reveal our identities and increase our vulnerability to criminal activity.
We each have to be mindful of where personally identifiable information about us is stored, online and on our smart tech devices.
We also have to trust our banks, our governments and the retailers we shop with to apply the same, if not substantially higher, levels of data and identity protection.
Every business needs to take cyber risk extremely seriously because, as these breaches prove, our personal data is valuable. Like diamonds and gold, it needs to be secured in an impenetrable digital vault with an ultra-sophisticated digital alarm system.
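As a gesture towards what that vault might look like in practice, here is a minimal sketch of encrypting personal data at rest using the Python cryptography library’s Fernet recipe (authenticated symmetric encryption); the record is invented, and in a real system the key would live in a managed key store rather than in the process.

```python
# Minimal sketch of encrypting personal data at rest with Fernet.
# The hard part in practice is key management, not the cipher:
# generating the key in-process here is purely for illustration.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in production: a managed key vault / HSM
vault = Fernet(key)

record = b'name=Jane Doe; email=jane@example.com'  # hypothetical PII
ciphertext = vault.encrypt(record)     # safe to store in a database
plaintext = vault.decrypt(ciphertext)  # recoverable only with the key
assert plaintext == record
```

Encryption at rest is only the vault; monitoring, access controls and breach detection are the alarm system that sits on top of it.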
Working together for a better internet
Everyone should feel safe online. And I believe this includes having the confidence, choice and clarity around how our personal data is collected and used by businesses.
I echo the comments made by Microsoft CEO Satya Nadella at the World Economic Forum in Davos last month, where he argued that data privacy, at an individual level, needs to be thought of as a human right, referring to it as ‘data dignity’.
This does not mean prohibiting data collection and analysis. Far from it. It simply means recognising that, as a human right, it needs to be a choice.
Most will agree that data analysis is a powerful tool for good, and we need to give innovators the freedom to keep innovating and drive our world forward, developing new solutions that create a healthier, wealthier and happier global society.
But, as is always the case in life, there are good people and there are bad people.
And the internet is mostly unregulated and unpoliced.
This has to change. Our safety depends on it.