Special thanks to our articling student Andie Hoang for contributing to this update.
As artificial intelligence and its integration into business operations continue to evolve rapidly, many employers are exploring the use of AI systems in a bid to make hiring decisions more efficient and data-driven. “AI” encompasses a wide range of technologies, from simple automated resume screening tools and complex machine learning systems to forward-looking agentic AI – systems that perform tasks independently.
This rise in the use of AI tools in making employment-related decisions has spurred legislators to regulate their use. This has created a minefield of increased legal liability for employers, especially concerning privacy considerations and the potential for these tools to exhibit biased decision-making. This article provides an overview of the current state of legislative developments related to AI in hiring and recruitment in Ontario, federally, and internationally. It also highlights best practices for employers who are considering the adoption of such tools.
Legislative Developments in Ontario, the Federal Jurisdiction and Beyond
Ontario
On March 21, 2024, Bill 149 – Working for Workers Four Act received Royal Assent as part of a series of legislative initiatives that have been introduced by the Ontario government under the “Working for Workers” banner since 2021. Each piece of legislation in this series seeks to address various contemporary issues within Ontario workplaces through amendments to the Employment Standards Act, 2000 (the “ESA”). Bill 149 brings about a number of additional changes that will be relevant for employers (which are summarized in our blog post), especially relating to the use of AI in the hiring process.
Starting January 1, 2026, employers will be required to disclose in job postings whether they are using artificial intelligence in the hiring process (i.e., if AI is being used to screen, assess or select applicants for the given position). For the Ontario government, the purpose of such disclosure is “to strengthen transparency for job seekers given that there are many unanswered questions about the ethical, legal and privacy implications that these technologies introduce.”
However, the details of how this disclosure requirement is to be applied remain murky, as AI tools used in recruitment encompass a wide range of technologies – from simple keyword filtering systems to large language models that conduct predictive analyses of a candidate’s fit. As these tools and systems evolve, the scope of what is or will be captured by the new ESA amendment remains unclear.
At the end of 2024, an accompanying regulation to the ESA was published which defined “artificial intelligence” as “a machine-based system that, for explicit or implicit objectives, infers from the input it receives in order to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.” This is one of the leading definitions of AI and is virtually identical to the definition adopted by the Organisation for Economic Co-operation and Development. However, the definition does not clearly set out which systems, tools, or models are considered AI for the purposes of triggering the disclosure obligation and which are not. This uncertainty may lead employers to inadvertently omit disclosure, believing a tool or system does not fall under the definition of AI in the ESA.
This amendment to the ESA, however, still fails to address the issue of bias in hiring. Disclosure alone cannot ensure fairness, nor does it clear the “air of mystery” surrounding how AI operates, especially in the context of hiring practices.
Canada
Many national governments have decided to further regulate the use of AI, and Canada is no exception. Parliament was working towards establishing the Artificial Intelligence and Data Act (“AIDA”) as part of Bill C-27, which completed its second reading in the House of Commons in April 2023. The AIDA was set to require employers to ensure that any AI tool used would operate with human oversight and to report any serious incidents (data breaches, privacy concerns, etc.) arising from the system’s use to both the system’s developer and an Artificial Intelligence and Data Commissioner. However, given Parliament’s recent prorogation, any further development of the AIDA has been effectively stopped in its tracks.
It remains to be seen whether the federal government will continue to pursue AI regulation, but given increased regulatory efforts in international jurisdictions, AI regulation will very likely be on the docket for the next federal government.
Internationally
While Canadian regulation of AI continues to develop, the European Union’s Artificial Intelligence Act became law in 2024 and continues to come into force in phases between 2025 and 2027. The Act establishes requirements for employers using “high-risk” AI systems in areas such as recruitment and performance evaluation. Canadian-based employers who operate and conduct recruitment in the European Union should determine whether they are required to comply with the Act. Further, as the Act comes into force in the EU, it is likely to influence subsequent federal and provincial legislative developments in Canada.
Takeaways
Many employers are lured by the promise of AI to streamline the hiring process and to efficiently identify top talent. However, employers must remain vigilant about these systems’ potential to perpetuate biases and remain up to date with current and future legislative requirements.
Before deploying AI tools for use in hiring and recruitment, employers should consider:
- Conducting an AI impact assessment through evaluating how the AI tool or system makes decisions, what data it relies on, and whether it may inadvertently incorporate characteristics (such as race, religion, gender, sexual orientation, etc.) or other factors prohibited by applicable legislation into its decision-making process.
- Ensuring they fully understand the compliance risks of utilizing AI in the hiring process by familiarizing themselves with applicable legislative requirements and the possible penalties in the event of a breach.
- If AI is to be used in the hiring process, developing an AI strategy that centres human oversight in its application. AI tools are meant to aid and streamline the recruitment process, not replace human judgment.
For more information or guidance in navigating the use of AI in the workplace, contact a member of Baker McKenzie’s Labour & Employment team.