Tag: privacy

Source: NBC

Google was sued on Tuesday in a proposed class action accusing the internet search company of illegally invading the privacy of millions of users by pervasively tracking their internet use through browsers set in “private” mode.

The lawsuit seeks at least $5-billion, accusing the Alphabet Inc unit of surreptitiously collecting information about what people view online and where they browse, despite their using what Google calls Incognito mode.

According to the complaint filed in the federal court in San Jose, California, Google gathers data through Google Analytics, Google Ad Manager and other applications and website plug-ins, including smartphone apps, regardless of whether users click on Google-supported ads.

This helps Google learn about users’ friends, hobbies, favourite foods, shopping habits, and even the “most intimate and potentially embarrassing things” they search for online, the complaint said.

Google “cannot continue to engage in the covert and unauthorised data collection from virtually every American with a computer or phone,” the complaint said.

Jose Castaneda, a Google spokesman, said the Mountain View, California-based company will defend itself vigorously against the claims.

“As we clearly state each time you open a new incognito tab, websites might be able to collect information about your browsing activity,” he said.

While users may view private browsing as a safe haven from watchful eyes, computer security researchers have long raised concern that Google and rivals might augment user profiles by tracking people’s identities across different browsing modes, combining data from private and ordinary internet surfing.

The complaint said the proposed class likely includes “millions” of Google users who since June 1, 2016 browsed the internet in “private” mode.

It seeks at least $5,000 of damages per user for violations of federal wiretapping and California privacy laws.

Boies Schiller & Flexner represents the plaintiffs Chasom Brown, Maria Nguyen and William Byatt.

The case is Brown et al v Google LLC et al, U.S. District Court, Northern District of California, No. 20-03664.

By Riccardo Spagni for Fin24

Privacy is widely regarded as a fundamental human right and is recognised in the UN Declaration of Human Rights, the International Covenant on Civil and Political Rights and the constitutions of nearly every country in the world.

Privacy is becoming a growing concern as the world continues its mass digitisation. As we move more of our day-to-day business and personal communications and interactions online, the trail of personal data breadcrumbs we leave behind grows.

Take something as simple as an online transaction: when the average consumer pays a merchant in Europe via their PayPal account, their data goes to as many as 600 different companies. The consumer has zero visibility into any of the companies involved. The amount of metadata about our lives is staggering – and we have no control over any of it.

Financial privacy and its malcontents

Regulators have tried to resolve some of the issues around data privacy and the use of personal information by businesses. The European Union’s General Data Protection Regulation is a far-reaching piece of legislation that aims to protect EU citizens from unwanted or unauthorised personal data use. Although the upper limits of its sanctions still need to be tested, GDPR promises fines of up to €20-million – or 4% of annual global turnover, whichever is higher – for organisations that compromise the personal data of any EU citizen.

But for most transactions, consumers and businesses remain at the mercy of a vast network of interlinked companies that process and distribute our personal metadata across the globe. A lot of this is driven by convenience: when cash was still the preferred payment method, people enjoyed a fair amount of privacy as cash transactions can be concluded away from any prying eyes.

With the introduction of electronic payment methods such as wire transfers, SWIFT, credit cards and mobile payments, privacy has been sacrificed for convenience. The web of Know-Your-Customer (KYC) and Anti-Money Laundering (AML) processes in place means consumers have little in the way of financial privacy, as financial services firms are bound by law to constantly analyse transactions for irregularities and report them to authorities where appropriate.

Shining a light on criminality

Financial crime is a massive problem. A 2018 Thomson Reuters survey of 2,373 respondents in 19 countries – including South Africa – found that the aggregate turnover lost to financial crime amounted to $1.45-trillion, or 3.5% of respondents’ total global turnover. In Europe, on average one in every 200 transactions reviewed by bank compliance officers leads to a criminal investigation, yet only 1% of criminal proceeds generated in the EU are confiscated by authorities.

But financial privacy is not only important to criminals; it is a critical safety measure for every digital citizen. Without financial privacy, personal and financial safety can be compromised by criminals who could, for example, see the value of a purchase someone made – as well as their personal details – and use that information to target them. For a business, financial privacy keeps intimate details such as salary information, profit margins and revenue away from unwanted eyes.

Cryptocurrencies often come into the firing line for their anonymity and lack of regulatory oversight. High-profile examples of illicit purchases on the dark web using cryptocurrencies have made regulators wary of their potential for driving criminal activity.

Not all cryptocurrencies are made equal

A large part of the appeal of cryptocurrencies is that they are more discreet than mainstream payment methods. And while this is partly what makes them attractive to criminals, it is unfair to assume all discreet transactions are criminal. We all make purchases we would rather other people not know about, for fear of embarrassment or judgement. Anonymity also has its benefits: who hasn’t suddenly seen a spike in advertisements related to something they once searched for online, or seen products similar to one they’ve just bought advertised on sites they visit?

Privacy enhancing cryptocurrencies are built on five pillars, namely:

  • Unlinkability, which conceals where transactions are going to;
  • Untraceability, which conceals the origins of transactions;
  • Cryptographically valueless, which hides the value of a transaction (see the sketch after this list);
  • Passively hidden, which conceals the transaction from other internet users; and
  • Optionality, which maximises the privacy set while still enabling you to reveal information should you need to.
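
The third of these pillars is usually achieved with cryptographic commitments. The sketch below is a minimal Pedersen-style commitment in Python, a toy over modular arithmetic with made-up parameters (not Monero’s actual elliptic-curve construction, RingCT), showing how a network could verify that a transaction’s inputs equal its outputs without ever seeing the amounts.

    # Toy Pedersen-style commitment; all parameters are illustrative only.
    import secrets

    P = 2**127 - 1   # a Mersenne prime; arithmetic is modulo P
    G, H = 5, 7      # generators whose discrete-log relation is assumed unknown

    def commit(value: int, blinding: int) -> int:
        """Commit to `value` without revealing it: C = G^v * H^r mod P."""
        return (pow(G, value, P) * pow(H, blinding, P)) % P

    # A sender hides the amount 42 behind a random blinding factor.
    r = secrets.randbelow(P - 1)
    c = commit(42, r)  # c reveals nothing about the amount

    # Homomorphic property: the network can check that the inputs (30 + 12)
    # balance the output (42) without learning any of the three values.
    r1, r2 = secrets.randbelow(P - 1), secrets.randbelow(P - 1)
    lhs = commit(30, r1) * commit(12, r2) % P
    rhs = commit(42, (r1 + r2) % (P - 1))
    assert lhs == rhs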

But not all cryptocurrencies are created equal. And not all have the privacy of their users as a primary concern. Cryptocurrencies such as Monero were built to provide users with the optimum amount of privacy. That’s why I’d add a sixth pillar to the above, namely Ideology. Since cryptocurrencies involve thousands – even millions – of people, it is critical that the cryptocurrency is managed according to a strict set of privacy-enhancing guidelines.

Every contributor to Monero, for example, understands they are responsible for other people’s money, privacy and, by extension, safety. Contributors could, through reckless actions, compromise someone’s financial security or even their lives. Any privacy project that treats that responsibility with less care is indistinguishable from a scam and can put people’s lives at risk.

There’s a popular argument that honest people don’t need privacy since they have nothing to hide. But that’s a fallacy. As Edward Snowden put it, “Arguing that you don’t care about the right to privacy because you have nothing to hide is no different to saying you don’t care about free speech because you have nothing to say.”

Financial privacy is a fundamental human right. Technology can be the greatest inhibitor or the greatest promoter of privacy. The responsibility rests on all of us who participate in the new world of cryptocurrencies to ensure we protect the privacy of our users.

By Cheyenne MacDonald for DailyMail

Google’s private browsing options may not be as incognito as you’d expect.

New research into Google’s ‘filter bubbles,’ in which search results are personalised based on the data it’s collected about you, has found that logging out or switching to Incognito Mode does almost nothing to shield you from targeted results.

By comparing search results for controversial topics, including gun control, immigration, and vaccinations, the study (notably conducted by rival search engine DuckDuckGo) uncovered significant variations in what different users were shown.

Despite the common assumption that logging out or going Incognito provides anonymity, DuckDuckGo points out that this isn’t really the case.

Websites use several other identifying factors to keep tabs on users’ activity, including IP addresses.
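
To see why, consider that every request a browser makes arrives at the server with the client’s IP address and a set of headers attached, Incognito or not. The standalone Python sketch below is a hypothetical, minimal server that logs exactly that:

    # Minimal sketch: a web server sees identifying details on every
    # request, regardless of the browser's privacy mode.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class LoggingHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            ip = self.client_address[0]                     # the connection itself reveals the IP
            ua = self.headers.get("User-Agent", "unknown")  # the browser volunteers more
            print(f"visit from {ip} using {ua}")
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"logged")

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), LoggingHandler).serve_forever()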

To highlight the issue, DuckDuckGo recruited volunteers in the US to perform a series of searches for the terms ‘gun control,’ ‘immigration,’ and ‘vaccinations.’

All were tasked to do this at the same time, at 9pm ET on Sunday, June 24: first in Incognito mode while logged out, and then again after logging back in.

The study also controlled for location, DuckDuckGo notes.

This made for 87 sets of results in total, with 76 desktop users and 11 mobile users.

Despite the anonymised conditions, which would be expected to produce the same results across the board, most of the participants still appeared to see personalised results.

Private searches for gun control, for example, yielded 62 different sets of results for the 76 participants.

Similar trends were seen in searches for the other two terms, with 57 variations in ‘immigration’ results, and 73 variations in ‘vaccinations’ results.

Users were shown links in different orders, and some were shown links that were not displayed to others.
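
DuckDuckGo hasn’t published its tallying code, but the measure is easy to picture: treat each participant’s ordered list of result links as one observation, and count the distinct observations. A minimal sketch with hypothetical data:

    # Counting "variations": each distinct ordered list of result links
    # counts once. The user names and domains below are made up.
    results = {
        "user_a": ["site1.com", "site2.com", "site3.com"],
        "user_b": ["site2.com", "site1.com", "site3.com"],  # same links, reordered
        "user_c": ["site1.com", "site2.com", "site4.com"],  # one different link
    }

    variations = {tuple(links) for links in results.values()}
    print(f"{len(variations)} variations across {len(results)} users")  # 3 across 3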

News and Video infoboxes, in particular, demonstrated ‘significant variation.’

A search for ‘immigration,’ for example, pulled up six variations from six different sources in the Videos infobox, while ‘gun control’ led to 12 variations from seven sources.

According to DuckDuckGo, the findings indicate that ‘it’s simply not possible to use Google search and avoid its filter bubble.’

While the study’s sponsor is hardly a disinterested party, the findings still stand as a reminder that true anonymity on the internet isn’t as straightforward as it might seem.

By Giovanni Buttarelli for The Washington Post 

First came the scaremongering. Then came the strong-arming. After being contested in arguably the biggest lobbying exercise in the history of the European Union, the General Data Protection Regulation became fully applicable at the end of May.

Since its passage, there have been great efforts at compliance, which regulators recognize. At the same time, unfortunately, consumers have felt nudged or bullied by companies into agreeing to business as usual. This would appear to violate the spirit, if not the letter, of the new law.

The GDPR aims to redress the startling imbalance of power between big tech and the consumer, giving people more control over their data and making big companies accountable for what they do with it. It replaces the 1995 Data Protection Directive, which required national legislation in each of the 28 E.U. countries in order to be implemented. And it offers people and businesses a single rulebook for the biggest data privacy questions. Tech titans now have a single point of contact instead of 28.

The new regulation, like the old directive, requires all personal data processing to be “lawful and fair.” To process data lawfully, companies need to identify the most appropriate basis for doing so. The most common method is to obtain the freely given and informed consent of the person to whom the data relates. A business can also have a “legitimate interest” to use data in the service of its aims as a business, as long as it doesn’t unduly impinge on the rights and interests of the individual. Take, for example, a pizza shop that processes your personal information, such as your home address, in order to deliver your order. It may be considered to have a legitimate interest to maintain your details for a reasonable period of time afterward in order to send you information about its services. It isn’t violating your rights, just pursuing its business interests. What the pizza shop cannot do is then offer its clients’ data to the juice shop next door without going back and requesting consent.

A third aspect of lawfully processing data pertains to contracts between a company and client. When you purchase an item online, for example, you enter into a contract. But in order for the business to fulfill that contract and send you your goods, you must offer credit card details and a delivery address. In this scenario, the business may also legitimately store your data, depending on the terms of that limited business-client relationship.

But under the GDPR, a contract cannot be used to obtain consent. Some major companies seem to be relying on take-it-or-leave-it contracts to justify their sweeping data practices. Witness the hundreds of messages telling us we cannot continue to use a service unless we agree to the data use policy. We’ve all faced the pop-up window that gives us the option of clicking a brightly colored button to simply accept the terms, with the “manage settings” or “read more” section often greyed-out. One of the big questions is the extent to which a company can justify collecting and using massive amounts of information in order to offer a “free” service.

Under E.U. law, a contractual term may be unfair if it “causes a significant imbalance in the parties’ rights and obligations arising under the contract that are to the detriment of the consumer.” The E.U. is seeking to prevent people from being cajoled into “consenting” to unfair contracts and accepting surveillance in exchange for a service. What’s more, a company is generally prohibited from processing, without the “explicit consent” of the individual, sensitive types of information that may reveal race or political, religious, genetic and biometric data.

Indeed, regulators are being asked to determine whether disclosing so much data is even necessary for the provision of services — whether it is e-commerce, search or social media. One key principle to remember is that asking for an individual’s consent should be regarded as an unusual request, given that asking for consent often signals that a party wants to do something with personal data that the individual may not be comfortable with or might not reasonably expect. Thus, it should be a duty of customer care for a company to check back with users or patrons honestly, transparently and respectfully. As the Facebook/Cambridge Analytica scandal revealed, allowing an outside company to collect personal data was not the type of service that users would have reasonably expected. Clearly, abuse has become the norm. The aim of the E.U. data protection agency that I lead is to stop it.

Independent E.U. enforcement authorities — at least one in each E.U. member state — are already investigating 30 cases of such alleged violations, including those lodged by the activist group NOYB (“none of your business”). The public will see the first results before the end of the year. Regulators will use the full range of their enforcement powers to address abuses, including issuing fines.

The GDPR is not perfect, but it passed into law with an extraordinary consensus across the political spectrum, belying the increasingly fractious politics of our times. As of June, there were 126 countries around the world with modern data protection laws broadly modeled on the European approach. This month, Brazil is next, and it will be the biggest country to date to adopt such laws. It is likely to be followed by Pakistan and India, both of which recently published draft laws.

But if the latest effort is a reliable precedent, data protection reform comes around every two decades or so — several lifetimes in terms of the pace of technological change. We still need to finish the job: the ePrivacy Regulation, still under negotiation, would stop companies snooping on private communications and require — again — genuine consent to use metadata about who you talk to, as well as when and where.

I am nevertheless already thinking about the post-GDPR future: a manifesto for effectively de-bureaucratizing and safeguarding people’s digital selves. It would include a consensus among developers, companies and governments on the ethics of the underlying decisions in the application of digital technology. Devices and programming would be geared by default to safeguard people’s privacy and freedom. Today’s overcentralized Internet would be de-concentrated, as advocated by Tim Berners-Lee, the inventor of the World Wide Web, with a fairer allocation of the digital dividend and with the control of information handed back to individuals from big tech and the state.

This is a long-term project. But nothing could be more urgent as the digital world develops ever more rapidly.

By Scott Duke Kominers for Bloomberg 

How much is your privacy on Facebook worth?

This question has seen renewed attention following the revelation that political analysis firm Cambridge Analytica, hired by the Trump election campaign, gained access to the private information of more than 50 million users. One of the possible responses that’s generated some discussion is the creation of a paid tier that’s free of ads and data sharing. Such an option would likely be socially beneficial and have considerable public appeal. But my guess is that it would be pretty expensive, too.

Let’s start with some rough calculations. Facebook’s annual ad revenue was about $40 billion in 2017, with 2.13 billion monthly active users. That means the average user is worth roughly $20 in ads to Facebook a year. That’s probably already a lot more than many users would pay for privacy on the social network.

But the price also depends on who would choose to pay for greater privacy. And it’s likely that many of the users who would opt for more protection could be worth more than $20 each to the company.

Why’s that? First, the value of keeping your data private increases with the amount of data you provide on the platform; by the same token, the more data you give Facebook, the better it can advertise to you. Similarly, you might find privacy especially valuable if there’s something unusual or unique about you that makes you especially easy to target.

The people who can afford a paid tier are on average wealthier; that too makes them more valuable to advertisers. And some of them already have browser ad blockers, so it’s hard to reach them via other channels.

To make up for those sorts of customers opting out of data sharing, Facebook would have to charge a lot more than the average of $20 just to break even. A back-of-the-envelope estimate based on the Pareto principle — 80 percent of the ad revenue coming from 20 percent of users — suggests that if mostly high-value users purchase privacy, then Facebook would need to charge closer to $80 a year.

That’s much more than even high estimates of the value most people attach to having access to Facebook. And it’s still a substantial underestimate of the likely price. According to Facebook’s annual report, the company’s 239 million North American users are responsible for a bit less than half of ad revenue; applying the Pareto principle to them would suggest annual privacy prices in the range of $325 a person.
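
The back-of-the-envelope arithmetic behind all three estimates is easy to reproduce. The Python sketch below simply restates the article’s own figures and the Pareto assumption; these are rough estimates, not Facebook’s internal accounting:

    # 2017 figures cited in the article.
    ad_revenue = 40e9  # annual ad revenue, about $40 billion
    users = 2.13e9     # monthly active users

    print(ad_revenue / users)                        # ~19  -> "roughly $20" per user

    # Pareto assumption: 80% of revenue comes from 20% of users.
    print(0.8 * ad_revenue / (0.2 * users))          # ~75  -> "closer to $80"

    # North America: 239 million users, a bit less than half of revenue.
    print(0.8 * (0.5 * ad_revenue) / (0.2 * 239e6))  # ~335 -> "in the range of $325"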

If price alone were the question, Facebook might indeed want to charge huge amounts for enhanced privacy. The users who buy their way out of data sharing won’t all be the most valuable users, and it would be pretty lucrative if the company could sustainably charge some customers much more for privacy than the annual ad revenue they generate. But that’s unlikely to work out in the long run.

Putting a high price on privacy would make it clear just how much Facebook’s user data is worth. We’d probably see increased calls to share that value by giving users a portion of revenues. The consumer-led drive for increased privacy would likely accelerate, too, prompting a growing number of users to leave the platform (assuming they can’t afford or are unwilling to pay for greater privacy).

A user exodus plus enhanced scrutiny of data practices would quickly eat away at the profits from offering the paid tier, making the whole thing a losing proposition.

Facebook must have run the numbers on this already, using much better information than we have here. The idea of a paid tier isn’t new; if Facebook hasn’t offered such an option, the company probably thinks it would be a money-loser. So if we want Facebook users to have control over how their data is shared, we may need outside pressure. The company isn’t likely to provide the option on its own.

It’s also worth noting that advertising and data sharing don’t have to be completely coupled. Facebook could enhance privacy directly by adopting data protection strategies based on privacy science, as Apple, Google, and the U.S. Census Bureau have in some of their applications.
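
One concrete example of such a strategy, assuming the reference is to differential privacy (which Apple, Google and the U.S. Census Bureau have all deployed in some form), is randomized response: any single report is noisy enough to be deniable, yet the aggregate remains measurable. A minimal sketch:

    # Randomized response: answer truthfully half the time, flip a coin
    # otherwise. P(report True) = 0.5 * p_true + 0.25, so the true rate
    # can be recovered from the observed rate without trusting any answer.
    import random

    def randomized_response(truth: bool) -> bool:
        if random.random() < 0.5:
            return truth                  # honest answer
        return random.random() < 0.5      # random answer: plausible deniability

    population = [True] * 300 + [False] * 700  # 30% hold the sensitive attribute
    reports = [randomized_response(x) for x in population]
    p_obs = sum(reports) / len(reports)
    print((p_obs - 0.25) / 0.5)                # estimate lands close to 0.30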

Twitter has just finished a weeks-long process of updating its rules to curb abuse on the platform — but now the platform is refuting several undercover videos by Project Veritas that try to point fingers at the network.

On January 16, Twitter shared a statement on the latest video that suggests Twitter engineers access private direct messages, calling the project “deceptive”.

The video in question appears to be an undercover project where Project Veritas members recorded Twitter engineers — without their knowledge — while in a bar. In the video, the Twitter employees mention a machine learning system that goes through both Tweets and direct messages, while according to the video, some staff members go through the messages flagged by the machines.

The video was the third recent dig from the organization directed at Twitter, and the platform called the videos “deceptive” and “selectively edited to fit a pre-determined narrative.” In a statement on the direct message video, Twitter said, “We do not proactively review DMs. Period. A limited number of employees have access to such information, for legitimate work purposes, and we enforce strict access protocols for those employees.”

Twitter says the employees in the video were not speaking on behalf of Twitter at the time. Twitter’s Privacy Policy says that for direct messages, “we will store and process your communications, and information related to them.”

The video comes after another report on Twitter’s shadow-banning, and another undercover video where a Twitter engineer says they’d happily hand over President Donald Trump’s data for an investigation. Twitter also refuted both earlier videos.

While a number of individuals are using the recent videos against the platform, others are looking deeper into Project Veritas — an organization run by conservative James O’Keefe that also tried to get the Washington Post to publish fake news about a political candidate. As Twitter’s new rules result in more users getting banned, some groups aren’t happy with the shift away from a previously more open platform, saying the changes introduce more bias.

Twitter, however, isn’t the only one calling the organization’s tactics deceptive. Wired suggests that the videos are part of the inevitable backlash against the new rules designed to combat abuse and eliminate hate groups and hate speech from the platform, noting that the rules have “alt-right” groups angry over the removal of some accounts. The video also comes after a handful of lawsuits filed against Twitter, including a complaint from one user who lost Twitter access after a post threatening to “take out” a civil rights activist. While the lawsuit is recent, the account ban happened three years ago.

The videos factor into a larger discussion as Twitter strengthens its policies against abuse and as multiple social media networks struggle with fake news and, now, with removing extremist content. No matter what side of the conversation you fall on, the “legitimate work purposes” access is a nice reminder that the internet isn’t the best place for the most private conversations.

By Hillary Grigonis for Digital Trends
