As the European Union readies for its upcoming elections, accelerating the fight against fake news has become a top priority. Many policymakers are concerned about attempts to covertly insert propaganda and incendiary messages into public discourse in European democracies by targeting particular groups on online platforms with disinformation campaigns. One powerful tool in this fight is artificial intelligence (AI), which can automatically detect and respond to such content and empower users to verify the veracity of claims. A number of leading tech companies, including Facebook, Google, and Twitter, have committed to self-regulatory standards and developed a code of practice on disinformation, and the European Commission is evaluating the extent to which automated tools and self-regulation can counter fake news without introducing new distortions to public discourse or stifling freedom of speech.
On February 20, 2019, the Center for Data Innovation held a conversation about how the public and private sectors can work together to accelerate the use of AI to combat fake news. Eline Chivot, senior policy analyst at the Center, moderated the discussion.
To begin, Chivot asked the panelists to comment on what online disinformation means in the context of the upcoming elections. Paolo Cesarini, head of the media convergence and social media unit at the European Commission’s DG Connect, stated that the Commission acted quickly once it became aware of disinformation strategies polluting the public space. However, the problem is complex and affects everybody, so it is difficult to design solutions and a plan without a wide, public conversation, which will take time.
Milan Zubicek, public policy & government relations manager at Google, elaborated on Google’s commitment to the code of practice on disinformation. Google, unlike several other organizations, agreed to keep all five parts of the code. As such, it is focusing on scrutinizing ads, increasing transparency, ensuring that no foreign actors create targeted political ads, protecting the integrity of its services, empowering consumers, and cooperating with researchers.
Clara Hanot, advocacy and fundraising officer at the fact-checking organization EU DisinfoLab, believes that disinformation strategies are rapidly evolving. It is difficult to monitor and debunk false ideas spreading through WhatsApp or Facebook groups. As such, she emphasized that the focus should particularly be on underground networks—such as Reddit and the Russian social network VK—where disinformation often sprouts before spreading to more mainstream outlets.
Jens-Henrik Jeppesen, director of European Affairs at the Center for Democracy and Technology, believes that the response to disinformation has improved, since there is vigilance and awareness from all sides—the private sector, public authorities, and citizens. He noted that there is an expectation and demand for quick fixes and results, but the Commission has realized that it is a multifaceted problem and that this will likely be an issue for years to come.
He emphasized that the term “fake news” is often used to denigrate points that one does not agree with, which can derail the disinformation conversation. Fortunately, the Commission developed a definition of disinformation as “verifiably false or misleading information that is created, presented, and disseminated for economic gain or to intentionally deceive the public, and may cause public harm.” According to him, the key is to stay focused on that narrow definition, in order to not veer into any contentious political debates—and particularly because disinformation online is hard to measure.
Chivot next asked about the issue of free speech, particularly whether controlling disinformation could limit others’ ability to express themselves. Jeppesen thinks this is a top concern but noted that the Commission’s definition is a good tool to stay focused, since it will restrict governments or organizations from censoring news that they simply disagree with. As the Commission moves ahead, it should be careful not to veer into the territory of suppressing legitimate political debate.
AI can be a powerful tool to automatically detect and respond to intentionally false content. According to Zubicek, Google is doing a lot in this area: each year, it runs hundreds of thousands of experiments on its algorithms, resulting in thousands of changes, each of which is tested against human reviewers. He noted that AI is not a “silver bullet,” and that there needs to be human oversight to contextualize information. In addition, Google has started to progressively balance relevance with source credibility, ensuring that reliable sources are the first ones to appear.
Overall, there was a consensus that a combination of human verification and technology is necessary. Zubicek, Jeppesen and Hanot agreed that AI is a valuable tool, but that algorithms need to be overseen by humans. Hanot believes that AI can be a useful tool for the fact-checking community in particular. Chivot pointed out that debunking disinformation is increasingly difficult, especially with the rise of deep fakes. Government funding for R&D of new technology could help counter such propaganda.
However, the question remains: Can industry self-regulate its efforts to combat disinformation? According to Cesarini, the Commission’s choice in this phase is clearly in favor of the self-regulatory approach, since regulating disinformation is extremely challenging. However, the Commission will keep an open mind, and if the current model isn’t effective, it will investigate co-regulation as an option. He stated that all signatories are doing their work to provide users with better content, with the exception of YouTube, which still hosts a myriad of conspiracy theories. Zubicek responded by pointing out that YouTube is hard to reform, since its content is primarily user-generated. However, Google is working on limiting disinformation shared on this platform by building up the authoritative content it lacks.
Both Cesarini and Hanot believe that it is important to include NGOs in the fight against disinformation, since these actors understand what is happening in their region, language, country, or policy realm. In addition, Hanot emphasized the need for all stakeholders to work together. For example, fact-checkers need technologies and support from platforms, and researchers need access to data from platforms. Cesarini, however, does not believe that fact-checking alone is sufficient to combat fake news. He added that to understand where disinformation occurs, the Commission and researchers will need more access to data. Zubicek acknowledged that Google is looking into ways of publishing data in a more useful way, but that it must also protect its systems and user privacy.
A central question is who gets to determine the truth. According to Chivot, this is complicated because governments cannot be the single source of truth, and many people would rather not trust platforms. Cesarini urged platforms to examine their role as distributors of content and to particularly consider whether they should provide readers with corrections if disinformation has slipped through. Hanot shared this view, encouraging the public and policymakers to examine the liability of platforms. She also believes that users should use their own judgment—in this, media literacy can provide powerful support. Jeppesen, on the other hand, warned of the risks involved when government authorities insist that the technology industry should be the one fixing the disinformation issue, as such demands can lead to “over-censorship” and the restriction of legitimate content.
Ultimately, accelerating the fight against fake news has become a top priority which AI can support. But it will also take the commitment and coordination of multiple stakeholders—across the public and private sectors—to combat disinformation.
Video from the event is available online.