
Comments to the Canadian House of Commons Standing Committee on Industry and Technology Regarding the AI and Data Act

Contents

Introduction

Flawed Premise

Language of the Bill

Recommendations

Conclusion

Endnotes

Introduction

For decades, Canada has been at the forefront of advancements in artificial intelligence (AI). However, despite Canada’s important role in the development of AI, national adoption of AI remains low. As policymakers consider how to regulate the technology, they should proceed with caution and keep the economic consequences in mind.

With a robust ecosystem comprising innovative startups, incubators, labs, and national AI research institutes, Canada has thus far punched above its weight in the development of AI. One global AI index places Canada fifth in the world for AI capacity and scale.[1] Stanford University’s 2023 AI Index Report shows that Canada has an outsized number of AI startups and researchers on the cutting edge of AI research.[2] Canada was also the first country in the world to develop a national AI strategy, one that focused on driving commercialization and adoption of AI to reap the economic benefits that come with it.[3]

These benefits, such as adding $210 billion to Canada’s economy and increasing total factor productivity by 14 percent for every 1 percent increase in AI adoption, could very well be the helping hand the economy needs to address Canada’s ailing productivity and poor GDP per capita growth.[4] However, the AI adoption rate among businesses remains low. A Statistics Canada report published in 2023 found that only 3.1 percent of Canadian businesses had adopted AI, with businesses citing perceived low returns on investment and difficulty recruiting the skilled staff needed to fully harness AI in their day-to-day operations.[5] Despite its strength in the development of AI, Canada places below the global average in the deployment of AI by businesses.[6]

Bill C-27, which includes the proposed Artificial Intelligence and Data Act (AIDA), was introduced in 2022 and proposes a broad, impact-based regime to “enable citizen trust, encourage responsible innovation, and remain interoperable with international markets.”[7] The Information Technology and Innovation Foundation (ITIF) appreciates the opportunity to comment on Bill C-27 and to raise serious concerns with both the premise of the bill and the language of the AIDA section.[8] Strengthening existing laws to address specific risks from AI would better protect Canadians from potential harms than imposing stringent rules on a broad cross-section of AI systems. However, should Canada choose to proceed down the path of horizontal legislation, the best path forward would be to narrow the scope of the bill, add exceptions for AI systems that pose little to no risk of harm, and create rules interoperable with those of its global peers.

Flawed Premise

From the outset, the government’s rationale for regulating AI has rested on flawed logic and factual errors. In the section of the AIDA companion document titled “Why now is the time for a responsible AI framework in Canada,” the government asserts that “it is difficult for consumers to trust the technology” and cites three examples of alleged “high-profile incidents of harmful or discriminatory outcomes.” None of these examples holds up. Without concrete, non-hypothetical risks and a clearly defined problem for the legislation to fix, it is hard to evaluate whether the AIDA will achieve its goal.

The first example refers to a well-known news report that Amazon experimented with a hiring tool to rate candidates for technical jobs. Amazon’s developers discovered that the tool penalized women and discontinued the project in 2017.[9] Moreover, during the experiment, Amazon’s recruiters never used the tool to evaluate actual applicants. In other words, the company did exactly what policymakers should want: It tested an AI tool, detected problems, and mitigated harm by stopping the project. A new AI law would not have improved that outcome. In addition, Canada’s gender equality laws already prohibit workplace discrimination, and those protections apply even when employers use AI in hiring.

The second example is that “An analysis of well-known facial recognition systems showed evidence of bias against women and people of color.” But the now six-year-old study cited is not about facial recognition, a technology used to match similar faces, either by searching for similar images in a database (one-to-many matching) or by confirming whether two images show the same person (one-to-one matching).[10] Instead, it is about facial analysis, a technology used to infer characteristics such as age, gender, or emotion from a photo. Specifically, the study examined whether three commercial facial analysis systems could correctly predict gender across both light- and dark-skinned individuals. The two technologies may sound similar, but they are as different as apple trees and apple sauce. Moreover, recent testing by the U.S. National Institute of Standards and Technology (NIST) shows that the best facial recognition algorithms have “undetectable” differences in accuracy across demographic groups.[11] So here again, the evidence falls flat.
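To make the distinction concrete, the sketch below (illustrative Python with toy embeddings and a hypothetical similarity threshold, not any vendor’s actual system) shows that facial recognition compares faces to other faces, while facial analysis infers attributes from a single face:

```python
from math import sqrt

def cosine_similarity(a, b):
    """Similarity between two face embeddings (toy 3-D vectors here)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
    return dot / norm

# Facial RECOGNITION: compares faces to other faces.
def verify(probe, reference, threshold=0.95):
    """One-to-one match: do these two images show the same person?"""
    return cosine_similarity(probe, reference) >= threshold

def identify(probe, database, threshold=0.95):
    """One-to-many match: who in the database resembles this image?"""
    return [name for name, emb in database.items()
            if cosine_similarity(probe, emb) >= threshold]

# Facial ANALYSIS: infers attributes of a single face, with no second
# face involved. The cited study evaluated this kind of system.
def classify_attributes(embedding):
    """Predict characteristics (e.g., age range, expression) from one image."""
    # A real system would run a trained classifier here; this stub only
    # illustrates that analysis involves no face-to-face comparison.
    return {"age_range": "25-35", "expression": "neutral"}

database = {"alice": [0.9, 0.1, 0.2], "bob": [0.1, 0.8, 0.5]}
probe = [0.88, 0.12, 0.22]
print(identify(probe, database))   # recognition: matches "alice"
print(classify_attributes(probe))  # analysis: attributes only, no matching
```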

The third and final example used to justify regulating AI is that “AI systems have been used to create ‘deepfake’ images, audio, and video that can cause harm to individuals.” The issue is legitimate, although not novel: Software has long made it possible to digitally create “fake” images, although deepfake technology makes it much easier for anyone to produce realistic fake images and videos without much technical expertise. There are concerns about deepfakes as a source of disinformation, particularly in elections and global affairs, and as an infringement of celebrities’ publicity rights, but their most visible impact is on individuals, particularly celebrities and women, who have fake pornographic images created of them. The AIDA would not address this problem, however, because the code to produce hyper-realistic images and video, whether used legitimately or harmfully, is open source and publicly available. Instead, this problem should be addressed with legislation updating Canada’s revenge porn law to prohibit the nonconsensual distribution of deepfakes as well.

The AIDA is premised on the assumption that stronger technology regulation increases consumer trust and that higher levels of consumer trust will then lead to more technology use. However, past ITIF research shows that, beyond a minimum baseline of consumer protection, there is little evidence that stronger regulations increase consumer trust and adoption.[12] In fact, additional regulation restricts the supply of digital technologies by raising costs and reducing the revenue companies have to invest in new products and services.

The European Union’s (EU’s) impact assessment of its Artificial Intelligence Act (henceforth the EU AI Act) estimated that small businesses can expect up to €400,000 ($588,000 CAD) in compliance costs for a single high-risk AI product, which ITIF has calculated would mean a 40 percent reduction in profit for a European business with €10 million ($14.7 million CAD) in turnover wanting to deploy a high-risk AI system.[13] As with almost any other product, demand for software is price elastic: Price increases result in decreases in demand. As companies developing and adopting AI face higher prices driven by burdensome compliance costs, both the production and consumption of AI in Canada will fall, lessening Canada’s ability to leverage AI to address its productivity challenge and improve the lives of Canadians.
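The arithmetic behind that estimate can be reconstructed from the cited figures, which together imply an assumed profit margin of roughly 10 percent (an assumption implied by the underlying calculation, not a figure stated in the bill):

\[
\text{profit} \approx 0.10 \times \text{€}10{,}000{,}000 = \text{€}1{,}000{,}000,
\qquad
\frac{\text{€}400{,}000\ \text{compliance cost}}{\text{€}1{,}000{,}000\ \text{profit}} = 40\%.
\]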

Furthermore, as members of the committee have seen, supporters of the bill have been building a sense of urgency to pass it. Minister Champagne has argued that “It is pivotal that we pass AIDA now” and that “the costs of delay to Canadians would be significant.”[14] Yet this sense of urgency is exaggerated, if not outright false. Canadians are not defenseless against the potential risks of AI. As the AIDA companion document notes, “Canada already possesses robust legal frameworks that apply to many of the uses of AI.” Existing laws prohibiting discrimination, such as the Canadian Human Rights Act, and protecting privacy, such as the Personal Information Protection and Electronic Documents Act, already shield Canadians from many of the harms AI could potentially cause.

The AI industry is also moving quickly to address risks by developing and implementing voluntary measures, such as risk assessments, red teaming, and ethics reviews. In the United States, the Biden administration secured voluntary commitments from leading U.S. AI companies, including Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI, to address safety and security in their AI models.[15] Likewise, the administration secured commitments from major health-care providers and payer organizations to implement voluntary safeguards to support safe, secure, and trustworthy AI within their sector, such as by informing users if content has not been reviewed by a human.[16] These voluntary measures allow policymakers to work with industry to address concerns much faster than going through the legislative or regulatory processes.

To be clear, regulation should protect consumers from the harms AI might cause. But in the absence of a clear rationale or evidence of tangible harms, Canada should strengthen existing laws, such as the Employment Equity Act and the Criminal Code, to better protect Canadians from bad actors, whether or not those actors use AI, while avoiding constraints on Canadians’ ability to leverage AI to increase their productivity.

Canada is not falling behind its peers by declining to rush the AIDA into law. Contrary to what proponents claim, nothing in this legislation would give Canadian firms a competitive edge in the development or adoption of AI. Rushed, fear-based rhetoric about dangerous AI will squander the early-mover advantage Canada has cultivated in the global AI economy and result in buyer’s remorse.

Language of the Bill

As one of the stated goals of the AIDA is interoperability with international markets, it is puzzling to see the government shy away from the EU AI Act’s focus on regulating “high-risk” AI systems in favour of “high-impact” AI systems. One could imagine a system that is high-impact but low-risk, such as a conversational search engine, which the EU AI Act would not govern but which would face steep regulatory requirements under the AIDA despite presenting little risk of harm to Canadians.

Article 2, 44e of the EU AI Act covers only general-purpose systems that are based on general-purpose AI models capable of serving a variety of purposes.[17] The AIDA, by contrast, defines general-purpose systems far more broadly, sweeping in many AI systems that are not derived from general-purpose AI models, such as limited-purpose machine learning models.[18] Including in the AIDA any AI system designed for use in many fields and purposes misses the point of having separate clauses governing general-purpose systems, since multipurpose AI systems not derived from general-purpose AI models can easily be covered by the rest of the proposed regulation. For instance, an AI system that can be used in multiple activities across multiple fields, such as a weather-forecasting system used in both aviation and agriculture, may be regulated as a general-purpose system despite not truly being general-purpose AI.[19] This broad inclusion of multipurpose AI systems will place far steeper compliance requirements on an AI system simply because it is useful in multiple ways, rather than because it poses any particular risk or impact to Canadians.

The classes of use found in Schedule 2 of the AIDA are also far broader than the corresponding provisions of the EU AI Act in their regulation of services. For instance, the proposed classes of use in the AIDA would treat all moderation and prioritization on online communication platforms, and all determination or prioritization of services to individuals, as high-impact, whereas the EU AI Act does not regulate the former and explicitly covers only AI deployed for essential services. As noted above, discrimination is already prohibited under Canada’s existing laws, which undercuts most of the rationale for covering all services under the AIDA rather than just essential services. Under the proposed Canadian regime, a bustling, family-owned cell phone repair shop that uses an AI system to prioritize which customers to serve first based on the availability of replacement parts would fall under the same class as a business using an AI system to determine the pricing of life insurance.

A further difference between the AIDA and the EU AI Act is the treatment of exceptions. Under the AIDA, there are no exceptions to the classifications found in Schedule 2, whereas clause 32a of the EU AI Act lays out exceptions for systems that are technically classified as “high-risk” but do not pose significant harm, with specific criteria for determining that an AI system does not pose a significant risk of harm.[20] The aforementioned cell phone repair shop would thus have no recourse and would have to devote significant resources to implementing measures that address the risks associated with high-impact AI systems.

The government’s AIDA companion document states that open-source AI models would not be impacted by the AIDA, but this exception is not explicitly laid out in the text of the bill. Given the importance of open-source AI models (free and open access to these models creates pathways both to better models and to research on safeguards), policymakers should clarify exactly what exceptions the AIDA makes for open-source AI, especially when it is developed and released for scientific or noncommercial use. For example, Article 2(5a) of the EU AI Act exempts “AI systems and models developed and put into service for the sole purpose of scientific research and development.”[21]

Having a significantly higher number of systems fall under the AIDA than would fall under the EU AI Act will reduce interoperability between the two jurisdictions for little material benefit to Canadians. Additionally, as U.S. policymakers contemplate what AI regulation will look like, it would be worthwhile for Canada to consider how its regulations will align from a multijurisdictional perspective, particularly given how many Canadian AI firms do business in the United States. That said, Canadian regulation should be designed for Canada and should not mimic other countries’ rules purely for the sake of harmonization. But forcing firms to navigate entirely disparate regulatory regimes, absent evidence that other jurisdictions’ regulations do not go far enough, will make it harder for Canadian AI companies to succeed, increase overall compliance costs for businesses looking to use unobtrusive and safe AI systems, and place unnecessary strain on the proposed AI and Data Commissioner.

Recommendations

ITIF respectfully offers the following recommendations to this standing committee:

Start over: The AIDA is based on a faulty premise, with little evidence beyond hypothetical risks, which does not justify the extent of regulation it proposes; as drafted, it will negatively impact the development and adoption of AI in Canada. There is simply too much at stake to rush such impactful legislation, which could have chilling effects on the Canadian economy, particularly when existing laws already protect Canadians from most potential harms AI could cause.

Use narrower classifications: Due to differences in terminology and definitions, the AIDA will unnecessarily regulate many AI systems that the EU does not consider high-risk. The legislation’s use of “high-impact” instead of “high-risk,” along with a definition of general-purpose systems that sweeps in systems that are not actually general-purpose AI, will needlessly regulate AI that poses no risk of harm to citizens and limit Canadians’ ability to harness AI. Given that AI regulations will create potentially significant compliance costs, policymakers need to consider more carefully which systems regulation should affect.

Create exceptions for AI systems that do not pose a risk of harm: There will inevitably be cases where AI systems are technically classified as high-impact or high-risk but pose no threat to the health, safety, or wellbeing of Canadians. Clear and reasonable rules detailing exceptions for these edge cases will build flexibility into the regulatory regime.

Create interoperable rules: NIST is working through an open and participatory process to establish AI safety standards, such as the AI Risk Management Framework, common definitions of terminology, and best practices for activities such as transparency and accountability. Similarly, the EU will soon develop standards for risk management, data governance requirements, conformity assessments, and many of the other requirements of its legislation. If Canada seeks to impose similar obligations on businesses, it should strive to create interoperable rules so that a firm operating in multiple jurisdictions does not have to duplicate its efforts, such as by releasing one model card for Canada and another for the EU.

Conclusion

Canada should take a more balanced approach to regulating AI. Regulating AI that poses little risk of harm to Canadians will not encourage responsible innovation. On the contrary, it will decrease overall levels of innovation without enabling citizen trust, all while straying further from the government’s goal of interoperability with international markets. It is important to recognize not only that onerous regulation will be an albatross preventing Canadian businesses from innovating and adopting AI, but also that AI companies and talent are mobile. If the economics of staying in Canada no longer make sense, these companies may simply pack up and move to a country whose regulations are more conducive to innovation.

Canada is a birthplace of modern AI, and it currently enjoys an early-mover advantage in leveraging AI to boost its ailing productivity and grow its economy. PwC has estimated that the global value of AI will reach upwards of $15.7 trillion by 2030.[22] However, if the AIDA is implemented as is, Canada may not fully realize the economic benefits AI will bring. The number one AI priority should be to accelerate AI adoption in key areas where the technology can tangibly improve the lives of Canadians, let businesses work smarter, and slingshot Canada to the forefront of the global innovation economy.

Thank you for your consideration.

Endnotes

[1] Tortoise Media, “The Global AI Index,” accessed March 1, 2024, https://www.tortoisemedia.com/intelligence/global-ai/#rankings.

[2] Nestor Maslej et al., The AI Index 2023 Annual Report (Stanford, CA: AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, April 2023), 58, https://aiindex.stanford.edu/wp-content/uploads/2023/04/HAI_AI-Index-Report_2023.pdf.

[3] Ibid., 285.

[4] Public First, “Google Canada Economic Impact Report—2022,” https://googlecanadaimpact.publicfirst.co/; Xueyuan Gao and Hua Feng, “AI-Driven Productivity Gains: Artificial Intelligence and Firm Productivity,” Sustainability 15, no. 11 (2023): 8934, https://doi.org/10.3390/su15118934.

[5] Statistics Canada, “The Daily — Survey of Advanced Technology, 2022,” July 28, 2023, https://www150.statcan.gc.ca/n1/daily-quotidien/230728/dq230728b-eng.htm.

[6] IBM, “IBM Global AI Adoption Index 2022,” 4, https://www.ibm.com/downloads/cas/GVAGA3JP.

[7] Government of Canada, “The Artificial Intelligence and Data Act (AIDA) – Companion Document,” March 13, 2023, https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document.

[8] The Information Technology and Innovation Foundation (ITIF) is an independent nonprofit, nonpartisan research and educational institute that has been recognized repeatedly as the world’s leading think tank for science and technology policy. Its mission is to formulate, evaluate, and promote policy solutions that accelerate innovation and boost productivity to spur growth, opportunity, and progress. For more information, visit www.itif.org/about.

[9] James Vincent, “Amazon Reportedly Scraps Internal AI Recruiting Tool That Was Biased Against Women,” The Verge, October 10, 2018, https://www.theverge.com/2018/10/10/17958784/ai-recruiting-tool-bias-amazon-report.

[10] Joy Buolamwini and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” Proceedings of Machine Learning Research 81 (2018): 1–15, https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf.

[11] Michael McLaughlin and Daniel Castro, “The Critics Were Wrong: NIST Data Shows the Best Facial Recognition Algorithms Are Neither Racist nor Sexist” (ITIF, January 2020), https://itif.org/publications/2020/01/27/critics-were-wrong-nist-data-shows-best-facial-recognition-algorithms/.

[12] Alan McQuinn and Daniel Castro, “Why Stronger Privacy Regulations Do Not Spur Increased Internet Use” (ITIF, July 2018), https://www2.itif.org/2018-trust-privacy.pdf.

[13] European Commission Directorate-General for Communications Networks, Content and Technology, Study To Support An Impact Assessment Of Regulatory Requirements For Artificial Intelligence In Europe (Brussels: European Commission, April 2021), 138, http://dx.doi.org/10.2759/523404; Benjamin Mueller, “How Much Will the Artificial Intelligence Act Cost Europe?” (Center for Data Innovation, July 2021), https://www2.datainnovation.org/2021-aia-costs.pdf.

[14] François-Philippe Champagne, “Correspondence from the Honourable François-Philippe Champagne, Minister of Innovation, Science and Industry - Amendments to AIDA,” November 28, 2023: 3, https://www.ourcommons.ca/content/Committee/441/INDU/WebDoc/WD12751351/12751351/MinisterOfInnovationScienceAndIndustry-2023-11-28-Combined-e.pdf.

[15] The White House, “Voluntary AI Commitments,” September 2023, https://www.whitehouse.gov/wp-content/uploads/2023/09/Voluntary-AI-Commitments-September-2023.pdf.

[16] Assistant Secretary for Public Affairs (ASPA), “FACT SHEET: Biden-Harris Administration Announces Voluntary Commitments from Leading Healthcare Companies to Harness the Potential and Manage the Risks Posed by AI,” December 14, 2023, https://www.hhs.gov/about/news/2023/12/14/fact-sheet-biden-harris-administration-announces-voluntary-commitments-leading-healthcare-companies-harness-potential-manage-risks-posed-ai.html.

[17] Council of the European Union, “Proposal for a Regulation of the European Parliament and of the Council Laying down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts,” Article 2, 44e, January 26, 2024, https://data.consilium.europa.eu/doc/document/ST-5662-2024-INIT/en/pdf.

[18] Champagne, “Correspondence,” 9.

[19] Access Privacy by Osler, “Interoperability Comparison between the Proposed Artificial Intelligence and Data Act and the European Union’s Draft Artificial Intelligence Act,” accessed March 1, 2024, Comparison-of-AIDA-and-EU-AI-Act-key-provisions.pdf (accessprivacy.com).

[20] EU AI Act, op. cit., clause 32a.

[21] Ibid., Article 2(5a).

[22] PwC, “Global Artificial Intelligence Study: Exploiting the AI Revolution,” 2018, https://www.pwc.com/gx/en/issues/data-and-analytics/publications/artificial-intelligence-study.html.
