Principles to Promote Responsible Use of AI for Workforce Decisions
Given the transformative potential of AI for workforce decisions, policy should tilt toward enabling the use of this technology rather than restricting it.
The global pandemic has accelerated the digital transformation of the economy, including by forcing many businesses to move to remote work, and more employers are now turning to artificial intelligence (AI) to support decision-making about the workforce.
AI-enabled tools can support workforce decisions by helping businesses manage their existing employees, as well as recruit and hire new ones. They can boost employer productivity, for example by reducing the time needed to hire new employees, increasing retention rates, and improving communication and team dynamics among workers. In addition, these tools may help employers reduce human biases when hiring, setting compensation, and making other employment-related decisions.
To successfully deploy AI for workforce decisions, employers will need to address potential concerns. These include ensuring that the increased use of AI does not exacerbate existing biases and inequalities, that the metrics AI tools produce are fair and accurate, that increased monitoring of employees is not unduly invasive, and that the processing of biometric data does not reveal sensitive personal information about employees that they may wish to keep private, such as information about their emotions, health, or disabilities.
To address these concerns, several policymakers and advocacy groups have called for new public policies that apply the “precautionary principle” to AI, which holds that government should limit the use of a new technology until it is proven safe. In short, they favor restricting the use of AI because they believe it is better to be safe than sorry. But such policies would do more harm than good: they would make it more expensive to develop AI, limit the testing and use of AI, and even ban some applications, thereby reducing productivity, competitiveness, and innovation.
Instead, policymakers should pave the way for widespread adoption of AI in the workplace while building guardrails, where necessary, to limit harms. This report offers eight principles to guide policymakers in their approach to the use of AI for workforce decisions:
- Make government an early adopter of AI for workforce decisions and share best practices.
- Ensure data protection laws support the adoption of AI for workforce decisions.
- Ensure employment nondiscrimination laws apply regardless of whether an organization uses AI.
- Create rules to safeguard against new privacy risks in workforce data.
- Address concerns about AI systems for workforce decisions at the national level.
- Enable the global free flow of employee data.
- Do not regulate the input of AI systems used for workforce decisions.
- Focus regulation on employers, not AI vendors.