
Privacy Zealots Say “Cookies for Me, None for Thee”
This op-ed originally appeared in ComputerWorld.
Last week, Brendan Eich, the chief technology officer and senior vice president of engineering at Mozilla, announced that the organization is planning to block third-party cookies in future versions of the Firefox Web browser. In addition, the Center for Internet and Society (CIS) at Stanford Law School announced that it has created a new organization called the “Cookie Clearinghouse,” which will begin publishing blacklists of websites based on whether it believes a particular website’s use of cookies “makes logical sense.” Mozilla will use these blacklists to decide which cookies to accept or deny.
Large-scale blocking of third-party cookies may have profound negative consequences for the future of the Internet. There are three main concerns. First, this practice will reduce online advertising revenue for many websites, and thus lead to less free content for consumers. Second, it will cut off many legitimate business models for companies that collect and aggregate user data across the Internet to understand user behavior and design better websites, content, and features. Third, it will limit the functionality of websites, both today and in the future, by blocking the use of third-party cookies in some cases and making them harder to use in others.
Mozilla’s position is both ironic and hypocritical. Not that long ago, Mozilla vociferously opposed the proposal in the Stop Online Piracy Act (SOPA) to blacklist websites that promote piracy and counterfeit goods, yet now it is surprisingly optimistic about using third-party blacklists to effectively block parts of websites. When describing SOPA, Mozilla CEO Gary Kovacs stated, “More alarming is the reality that sites could be taken down based solely on the mere suspicion of illegal content, not based on the ruling of a body of legal authority. The lack of due process is a serious flaw, a threat to each of our rights as citizens, and simply should not be accepted.” In other words, Mozilla opposed SOPA because it did not think anyone, including the government, should be trusted to monitor websites and determine what is and is not acceptable. Yet Mozilla now supports giving a week-old organization, which appears to be little more than a group of six people with no accountability, the authority to make or break any website on the Internet based on whether its members like how that website works. Put differently, Mozilla believes it has a responsibility to block targeted advertising on illegal websites, but not the illegal websites themselves.
I suppose neither Mozilla nor its leaders are under any ethical obligation to be consistent in their views. It may simply be a marketing ploy to regain market share for Firefox. After all, global usage of the Firefox browser has been on a downward trend for the past few years, dropping from 28% of the market two years ago to 19% as of last month. And let me be clear: there is nothing inherently wrong with Mozilla marketing its browser to privacy-conscious people or using technology to blacklist websites. As a private organization, it should be free to do what it wants, and users can decide for themselves whether to download the browser. But that does not mean others in the Internet community should not speak out against the negative consequences its decisions may have for everyone else.
Perhaps the worst hypocrisy is coming from the privacy zealots who are endorsing the Cookie Clearinghouse. These advocates have routinely claimed to support the principle of user choice, but that claim is now proving to be little more than empty rhetoric. As soon as they had the ear of a friendly Web browser developer, they did not argue for changes that would let users better specify their preferences; instead, they decided to create a system that imposes their rigid privacy standard on everyone else. As I have argued before, these privacy advocates fundamentally oppose the idea of corporations having personal information, not just about themselves, but about anyone. This paternalistic view of Internet users is at the heart of their arguments in favor of government regulation to protect consumers from themselves. They do not want to give users choice; they want to make the choice for users.
A better approach to privacy is to foster a system of choices and consequences. On the business side, companies can choose the privacy policies that make the most sense for them and their customers. If they choose badly, the consequence is that consumers will not use their products and services. And if they violate their stated policies, the consequence is that the FTC can take action against them for unfair or deceptive trade practices. On the consumer side, users can choose which products and services to use. If users do not trust Facebook, they should not use it. If users (mistakenly) insist that Google is reading their email, they can find another email provider. And if users do not like targeted ads on a particular website, they can point their browsers somewhere else. The consequence for users is that they miss out on the benefits of those websites. More generally, if users choose to opt out of sharing their data with certain businesses, the consequence is that those businesses can decide whether to continue providing access to their products and services.
Balancing choices and consequences will foster online innovation and economic growth while still protecting the principle of user choice that privacy advocates claim to cherish.