By Giovanni Buttarelli for The Washington Post
First came the scaremongering. Then came the strong-arming. After being contested in arguably the biggest lobbying exercise in the history of the European Union, the General Data Protection Regulation became fully applicable at the end of May.
Since its passage, there have been great efforts at compliance, which regulators recognize. At the same time, unfortunately, consumers have felt nudged or bullied by companies into agreeing to business as usual. This would appear to violate the spirit, if not the letter, of the new law.
The GDPR aims to redress the startling imbalance of power between big tech and the consumer, giving people more control over their data and making big companies accountable for what they do with it. It replaces the 1995 Data Protection Directive, which required national legislation in each of the 28 E.U. countries in order to be implemented. And it offers people and businesses a single rulebook for the biggest data privacy questions. Tech titans now have a single point of contact instead of 28.
The new regulation, like the old directive, requires all personal data processing to be “lawful and fair.” To process data lawfully, companies need to identify the most appropriate basis for doing so. The most common method is to obtain the freely given and informed consent of the person to whom the data relates. A business can also have a “legitimate interest” to use data in the service of its aims as a business, as long as it doesn’t unduly impinge on the rights and interests of the individual. Take, for example, a pizza shop that processes your personal information, such as your home address, in order to deliver your order. It may be considered to have a legitimate interest to maintain your details for a reasonable period of time afterward in order to send you information about its services. It isn’t violating your rights, just pursuing its business interests. What the pizza shop cannot do is then offer its clients’ data to the juice shop next door without going back and requesting consent.
A third aspect of lawfully processing data pertains to contracts between a company and a client. When you purchase an item online, for example, you enter into a contract. But in order for the business to fulfill that contract and send you your goods, you must provide credit card details and a delivery address. In this scenario, the business may also legitimately store your data, depending on the terms of that limited business-client relationship.
But under the GDPR, a contract cannot be used to obtain consent. Some major companies seem to be relying on take-it-or-leave-it contracts to justify their sweeping data practices. Witness the hundreds of messages telling us we cannot continue to use a service unless we agree to the data use policy. We’ve all faced the pop-up window that gives us the option of clicking a brightly colored button to simply accept the terms, with the “manage settings” or “read more” section often greyed-out. One of the big questions is the extent to which a company can justify collecting and using massive amounts of information in order to offer a “free” service.
Under E.U. law, a contractual term may be unfair if it “causes a significant imbalance in the parties’ rights and obligations arising under the contract that are to the detriment of the consumer.” The E.U. is seeking to prevent people from being cajoled into “consenting” to unfair contracts and accepting surveillance in exchange for a service. What’s more, a company is generally prohibited from processing, without the “explicit consent” of the individual, sensitive categories of information such as racial or ethnic origin, political opinions, religious beliefs, and genetic and biometric data.
Indeed, regulators are being asked to determine whether disclosing so much data is even necessary for the provision of services — whether it is e-commerce, search or social media. One key principle to remember is that asking for an individual’s consent should be regarded as an unusual request, given that asking for consent often signals that a party wants to do something with personal data that the individual may not be comfortable with or might not reasonably expect. Thus, it should be a duty of customer care for a company to check back with users or patrons honestly, transparently and respectfully. As the Facebook/Cambridge Analytica scandal revealed, allowing an outside company to collect personal data was not the type of service that users would have reasonably expected. Clearly, abuse has become the norm. The aim of the E.U. data protection agency that I lead is to stop it.
Independent E.U. enforcement authorities — at least one in each E.U. member state — are already investigating 30 cases of such alleged violations, including those lodged by the activist group NOYB (“none of your business”). The public will see the first results before the end of the year. Regulators will use the full range of their enforcement powers to address abuses, including issuing fines.
The GDPR is not perfect, but it passed into law with an extraordinary consensus across the political spectrum, belying the increasingly fractious politics of our times. As of June, there were 126 countries around the world with modern data protection laws broadly modeled on the European approach. This month, Brazil is next, and it will be the biggest country to date to adopt such laws. It is likely to be followed by Pakistan and India, both of which recently published draft laws.
But if the latest effort is a reliable precedent, data protection reform comes around only every two decades or so — several lifetimes in terms of the pace of technological change. We still need to finish the job with the ePrivacy Regulation, now under negotiation, which would stop companies from snooping on private communications and require — again — genuine consent to use metadata about whom you talk to, as well as when and where.
I am nevertheless already thinking about the post-GDPR future: a manifesto for the effective de-bureaucratizing and safeguarding of people’s digital selves. It would include a consensus among developers, companies and governments on the ethics of the underlying decisions in the application of digital technology. Devices and programming would be geared by default to safeguard people’s privacy and freedom. Today’s overcentralized Internet would be de-concentrated, as advocated by Tim Berners-Lee, the inventor of the World Wide Web, with a fairer allocation of the digital dividend and with the control of information handed back to individuals from big tech and the state.
This is a long-term project. But nothing could be more urgent as the digital world develops ever more rapidly.