CMA Chair Falls Into the Trap of AI Fear-Mongering as He Reframes Old Risks as New

February 5, 2024

Marcus Bokkerink, Chair of the Competition and Markets Authority (CMA), delivered a keynote speech to the AI Fringe Hub on November 1, 2023. In his speech, Bokkerink discussed the balance between competition and consumer protection through the lens of AI before evaluating the broader digital context. Whilst he touched on some opportunities from AI, the speech focused heavily on AI risks, specifically calling out four examples of how “AI could supercharge the harms to consumers and to competition in digital markets.” It is disappointing to see officials yet again engage in fear-mongering disproportionate to the actual risks associated with AI, and Bokkerink repeatedly obscures beneficial uses of the technology behind issues that predate AI’s recent advances.

First, he indicates that search algorithms can be distorted to promote profit over people, such as a travel site that ranks results based on the commissions it receives rather than on user preferences. He goes on to say that AI would exacerbate this problem by purporting to offer more personalized and, therefore, seemingly more trustworthy results, even when it does not have consumers’ best interests in mind. The issue with this argument is that it looks at the problem superficially. Dishonest vendors will naturally push customers toward products offering the highest commission, and AI is, of course, a vehicle for this, but this can be, and has been, done without AI. More disclosure of potential conflicts of interest, as opposed to limitations on the use of AI, would better protect consumer interests regardless of whether AI is involved. Transparency across businesses would not only improve trust but also encourage competition by allowing informed consumers to find the best deal. Regulators should, therefore, set rules on transparency for any business that earns revenue through commissions, regardless of whether that business in fact uses AI.

For his next two examples, Bokkerink speaks of the potential harm of AI-generated fake reviews and of using AI to manipulate consumers with targeted advertising. In both cases, he downplays the fact that the CMA itself uses AI to detect fake reviews and hidden advertising in endorsements on social media. Rather than focus on actual uses that showcase AI’s benefits, he emphasizes hypothetical risks, offering no justification for why AI will supercharge consumer harms rather than supercharge consumer protection. Moreover, fake reviews are not new, nor is advertising designed to manipulate consumers, such as subliminal advertising, which is already illegal in the UK. AI doesn’t change these fundamental issues.

Finally, his case regarding misinformation is also weak. He states that “AI foundation models can get things wrong” but that “the chatbots they power sound so convincing.” Yet misinformation plagued the online space, as well as older forms of media, long before the advent of generative AI. Indeed, fake news has been around for over 500 years, emerging soon after the invention of the printing press. One of the root causes of misinformation is that people, especially younger generations, rely on social media for information without being able to distinguish between legitimate and illegitimate sources.

In 2022, Ofcom found that only 22 percent of adults were able to correctly identify the signs of a genuine post; among children, the figure was 11 percent. In 2018, MIT scholars showed that false news stories on Twitter (now X) were 70 percent more likely to be retweeted than true ones and reached people six times faster. It is wrong to frame misinformation as a problem of AI when there are more obvious culprits.

In addition to these examples, he suggests that without competitive pressure, businesses would have no incentive to reduce the error rates of foundation models. First, this hypothetical is completely divorced from reality: there is massive competition in this market among companies like Google, Microsoft, Meta, Anthropic, and OpenAI as they seek to address consumer demand for more reliable models. Second, even the hypothetical is incorrect. Progress for the sake of progress is an innately human impulse, and many of the individuals who build AI models are scientists first, seeking to develop and share ideas for the purpose of improvement. That alone will drive down error rates. Moreover, even a hypothetical firm with a monopoly on foundation models would have strong incentives to reduce errors because it would still be competing against other ways to process, generate, and present information, including well-trained human workers.

Bokkerink does not raise any new issues in his speech; he mostly echoes common fears that AI will make things worse, with little critical thought about why that is not actually likely to be the case. Unfortunately, this type of rhetoric from one of the UK’s top regulators, as well as the potentially hostile approach it suggests the CMA might take towards AI, risks stifling UK innovation in AI before its benefits fully emerge.
