The Gaps in Current Employment Legislation Surrounding Artificial Intelligence

If your organization posted a job in the last 12 months and used any form of automated screening, you may already be operating in a regulatory grey zone. The technology moved faster than the law, and the law is still catching up. As HR practitioners and employers, understanding where the gaps are is not optional. It is foundational to responsible practice.

The rapid expansion of artificial intelligence, including large language models (LLMs) such as Claude (Anthropic), ChatGPT (OpenAI), and Grok (xAI), has fundamentally changed how organizations attract, screen, and select talent. Canada's federal government attempted to establish a unified national AI law through the Artificial Intelligence and Data Act (AIDA), part of Bill C-27. When Parliament was prorogued in January 2025, the bill died in committee. Its core concepts, including human oversight, accountability, and risk-based classification, remain influential in regulatory thinking across the country, but they carry no binding force on Canadian employers today.

What Ontario Has Done So Far

Ontario is ahead of most provinces and has taken two concurrent steps to begin addressing the risks and shortfalls in existing legislation.

The first is a joint report from the Information and Privacy Commissioner of Ontario and the Ontario Human Rights Commission, which released six key principles for responsible AI use.

The second is a set of amendments to the Employment Standards Act, 2000 (ESA). Effective January 1, 2026, employers with 25 or more employees must disclose in any publicly advertised job posting whether they use artificial intelligence to screen, assess, or select applicants for the position.

The Gaps

1. Disclosure Without Standards

Employers are required to disclose that they use AI, but there is no obligation to explain how. There is no requirement to identify which platform is being used, at what stage in the hiring process it is applied, how the algorithm weights any given factor, or what happens when a candidate is screened out. More critically, there is no requirement to disclose error rates. No AI platform can guarantee zero errors, and in large-scale screening processes, even a small error rate translates into real people being wrongly excluded.
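To see the scale problem concretely, here is a back-of-envelope sketch that multiplies hypothetical error rates against screening volumes. None of these figures come from any vendor's published numbers; the point is the arithmetic.

```python
# Illustrative only: how a small screening error rate scales with volume.
# Rates and volumes are hypothetical, not drawn from any real platform.

def wrongly_excluded(applicants: int, error_rate: float) -> int:
    """Expected number of candidates screened out in error."""
    return round(applicants * error_rate)

for volume in (1_000, 10_000, 100_000):
    for rate in (0.01, 0.03, 0.05):
        print(f"{volume:>7,} applicants at {rate:.0%} error: "
              f"~{wrongly_excluded(volume, rate):,} wrongly excluded")
```

At a 1% error rate and 100,000 applicants, that is roughly a thousand people wrongly excluded, none of whom are told why.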

Cathy O'Neil examined this issue in substantial detail in her 2016 book Weapons of Math Destruction. Her central argument was that opaque, large-scale algorithmic models disproportionately harm the most vulnerable, and that the people affected often have no recourse because they cannot see the model being used against them. That observation is considerably more relevant today than when she wrote it.

2. The Regulator Problem

AI algorithms in the recruitment process rely on historical data. That is not a design flaw; it is how these systems learn. The problem is that historical hiring data often reflects the biases of the humans who made those decisions. When biased data is fed into these systems, the algorithm does not correct for it. It learns from it, and then scales it. A candidate passed over due to a name that signals ethnicity, a career gap associated with caregiving, or a credential from a non-traditional institution will never know the reason.
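A deliberately tiny illustration of that dynamic: a "model" that is nothing more than a historical hire-rate lookup, trained on biased past decisions, faithfully reproduces the bias at screening time. The feature and numbers below are invented for this sketch.

```python
# Hypothetical sketch: a model trained on biased historical hiring
# decisions learns the bias and applies it to new candidates.

from collections import defaultdict

# Historical data: (has_career_gap, was_hired). Past decision-makers hired
# gap-free candidates 60% of the time and candidates with a gap 15% of the
# time, even though the gap says nothing about ability.
history = (
    [(False, True)] * 60 + [(False, False)] * 40
    + [(True, True)] * 15 + [(True, False)] * 85
)

# "Training": learn the historical hire rate for each feature value.
hires, totals = defaultdict(int), defaultdict(int)
for gap, hired in history:
    totals[gap] += 1
    hires[gap] += hired
score = {gap: hires[gap] / totals[gap] for gap in totals}

# "Screening": a candidate with a caregiving gap scores 0.15 and is cut by
# a 0.5 threshold. The model did not correct the bias; it scaled it.
for gap in (False, True):
    print(f"career gap={gap}: score={score[gap]:.2f}, "
          f"passes screen={score[gap] >= 0.5}")
```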

The Ontario Human Rights Tribunal was not designed to adjudicate algorithmic discrimination. Neither was any other existing body in Canada. The question of who actually regulates these systems, and with what authority, remains genuinely unanswered. Making this more acute, the federal Regulators' Capacity Fund, which was intended to help build AI oversight capability across the country, exhausted its $14.2 million budget in March 2025 with no replacement announced.

3. No Consistent National Approach

Ontario has moved furthest on AI-specific employment rules, but the country remains deeply fragmented. Alberta's Privacy Commissioner has released recommendations calling for a provincial AI law. Saskatchewan has issued guidance for public sector employees. British Columbia and Quebec have retraining programs in place. No province uses the same framework, and there is no national standard for how workers and employers should expect AI to be governed in an employment context. For multi-province employers, or workers who cross provincial lines, this patchwork creates real confusion and real risk.

Five Strategies for HR Practitioners and Employers

While the legislative landscape catches up, employers are not operating without obligation. Employment standards, provincial human rights codes, privacy legislation, and collective agreements all continue to apply. The absence of AI-specific regulation does not create a lawless space. What follows are five concrete strategies for operating responsibly within it.

1. Build an AI Inventory Before Someone Else Does It For You

The most fundamental compliance failure right now is that many organizations cannot tell you, with precision, where AI is touching their HR processes. An honest inventory goes well beyond job postings. It includes performance management platforms with automated scoring, scheduling tools that influence shift allocation, attendance monitoring systems, and any third-party vendor whose product touches a decision that affects a worker's livelihood.

Map each tool to the decision it influences. Note whether a human can override it, who that person is, whether they have been trained to do so, and what documentation exists when they act on or against the AI's output. Under Ontario's Bill 149, employers are required to retain job postings and related application materials for three years. Your potential liability window under the Human Rights Code and Employment Standards Act can extend considerably further. Document with that in mind.
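One way to make the inventory concrete is a simple internal register with one record per tool. The sketch below assumes a minimal set of fields; the tool, vendor, and field names are hypothetical, not a prescribed schema.

```python
# A minimal sketch of one AI inventory record. All names are hypothetical.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIToolRecord:
    tool: str                    # e.g., "ResumeRanker" (hypothetical)
    vendor: str
    hr_process: str              # hiring, scheduling, performance, attendance
    decision_influenced: str     # the worker-affecting decision it touches
    human_can_override: bool
    override_owner: str          # who holds override authority
    override_trained: bool       # has that person been trained on the tool?
    documentation: list[str] = field(default_factory=list)
    retain_until: date | None = None   # track your own liability window

inventory = [
    AIToolRecord(
        tool="ResumeRanker",               # hypothetical vendor product
        vendor="Acme Talent Inc.",         # hypothetical
        hr_process="hiring",
        decision_influenced="shortlisting of applicants",
        human_can_override=True,
        override_owner="Manager, Talent Acquisition",
        override_trained=False,           # a gap the inventory surfaces
        documentation=["vendor impact assessment", "posting disclosure text"],
        retain_until=date(2029, 1, 1),
    ),
]

# Surface the gaps the audit is meant to find.
for rec in inventory:
    if rec.human_can_override and not rec.override_trained:
        print(f"Gap: {rec.tool} override owner has not been trained.")
```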

2. Read Your Collective Agreement Before You Deploy Anything

This is the strategy that catches unionized employers off guard most often, and it can be the most expensive mistake. Most collective agreements restrict an employer's ability to make unilateral changes that affect working conditions, job duties, or how discipline is handled. Rolling out AI that monitors employees, evaluates performance, or automates parts of their work will likely trigger the duty to bargain. If your collective agreement does not explicitly permit the use of AI for a particular purpose, such as discipline or performance scoring, introducing it can ground a grievance alleging an expansion of management rights without proper consultation.

Before any AI tool goes live in a unionized environment, pull the collective agreement and review clauses around technological change, job security, scheduling, performance evaluation, discipline, and privacy. Silence in the agreement is not permission. Ontario's Labour Relations Act requires collective agreements to provide for final and binding arbitration of grievances, and arbitration decisions are filed publicly with the Ministry of Labour. A ruling against your organization on AI deployment sets a precedent others can use. If you are heading into a renewal negotiation, this is the time to table AI language proactively.

3. Apply the Human Rights Code as Your Floor, Not Your Ceiling

Ontario's Human Rights Code does not mention AI. It does not need to. It prohibits discrimination in employment on protected grounds including race, ancestry, sex, disability, age, and family status, and it applies to every decision in the employment relationship, including decisions influenced or made by an algorithm.

Adverse effect discrimination, where a neutral-seeming practice disproportionately harms a protected group, does not require intent. Before deploying any AI tool that touches hiring, promotion, discipline, or termination, conduct a human rights impact assessment specific to Ontario's protected grounds. Ask your vendor for disaggregated outcome data by demographic. If they cannot provide it, that is a significant due diligence concern. Best practice here is not to meet the minimum threshold the Code sets, but to treat it as the starting point and go further.
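If your vendor can supply that disaggregated data, a first-pass check might look like the sketch below. The four-fifths threshold used here is a screening heuristic borrowed from US enforcement practice, not a standard under Ontario's Human Rights Code; treat a flag as a prompt for deeper review, not a legal conclusion. All numbers are hypothetical.

```python
# Hypothetical first-pass adverse impact screen on vendor outcome data.
# The 80% threshold is a US EEOC heuristic, not an Ontario Code standard.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def adverse_impact_flags(outcomes, threshold=0.8):
    """Flag any group whose selection rate falls below the threshold
    relative to the highest-rate group."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: rate / top < threshold for g, rate in rates.items()}

outcomes = {"group_a": (120, 400), "group_b": (45, 300)}  # hypothetical
print(selection_rates(outcomes))      # {'group_a': 0.3, 'group_b': 0.15}
print(adverse_impact_flags(outcomes)) # group_b flagged: 0.15/0.30 < 0.8
```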

4. Build a Real Human Override Protocol

The most common defence employers reach for when an AI-assisted decision is challenged is that a human made the final call. That defence is increasingly fragile if the human in question was simply presented with an AI recommendation, had no training to evaluate it critically, and no documented basis for departing from it.

Research published in 2025 found that people tend to mirror AI systems' hiring biases, meaning a person who rubber-stamps an AI recommendation has not exercised meaningful independent judgment. Courts in the United States have begun treating AI tools as agents of the employer, which signals that "the algorithm decided" will not function as a defence in Canadian proceedings either.

A meaningful human override protocol means the HR professional reviewing an AI recommendation has access to the candidate's full profile independent of the AI's ranking, has received training on how the tool works and where it has known limitations, and is documented as having independently reviewed the file. Agreements with and departures from AI recommendations should both be recorded with a rationale. This is not only legal protection. It is how AI and humans actually work well together. The technology handles volume and initial pattern recognition; the human brings contextual judgment, equity awareness, and accountability.
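In practice, that can be as simple as one mandatory log entry per review. A minimal sketch, with hypothetical field names and identifiers:

```python
# Hypothetical override log entry: one record per human review of an AI
# recommendation, whether the reviewer agreed or departed.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class OverrideLogEntry:
    candidate_id: str
    ai_recommendation: str        # e.g., "screen out"
    human_decision: str           # e.g., "advance to interview"
    rationale: str                # mandatory: why agree or depart
    reviewer: str
    reviewer_trained_on_tool: bool
    reviewed_full_profile: bool   # independent of the AI's ranking
    timestamp: datetime

entry = OverrideLogEntry(
    candidate_id="C-10482",       # hypothetical
    ai_recommendation="screen out",
    human_decision="advance to interview",
    rationale="Career gap explained by caregiving; skills match the posting.",
    reviewer="jsmith",
    reviewer_trained_on_tool=True,
    reviewed_full_profile=True,
    timestamp=datetime.now(timezone.utc),
)
```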

5. Adopt a Governance Framework Now, Not When Legislation Forces You To

Canada's federal AI legislation is stalled, but the direction is clear. In May 2025, the federal government appointed Canada's first Minister responsible for Artificial Intelligence and Digital Innovation. A national AI Strategy Task Force is currently consulting. Regulation is coming. The only question is when.

Organizations that wait for binding law to force compliance will spend far more, far faster, than those who build governance infrastructure now. ISO/IEC 42001:2023, the first international AI governance standard, provides a practical framework for building an AI management system that can be applied by any organization regardless of size or sector. It is voluntary today. It will likely become a reference standard for courts, tribunals, and regulators tomorrow.
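ISO/IEC 42001 follows the harmonized clause structure common to ISO management-system standards: context, leadership, planning, support, operation, performance evaluation, and improvement. As a starting point, a governance map might pair those clause areas with the controls discussed above. The pairings below are our illustrative suggestions, not text from the standard.

```python
# Illustrative mapping from the harmonized ISO management-system clause
# areas to HR-specific AI controls. Pairings are suggestions, not the
# standard's own text.

governance_map = {
    "Context of the organization": "AI inventory covering every HR process",
    "Leadership": "a named executive owner for AI-assisted decisions",
    "Planning": "human rights impact assessment before each deployment",
    "Support": "reviewer training on each tool's known limitations",
    "Operation": "human override protocol with logged rationale",
    "Performance evaluation": "disaggregated outcome audits by demographic",
    "Improvement": "corrective action whenever an audit flags disparity",
}

for clause, control in governance_map.items():
    print(f"{clause:<28} -> {control}")
```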

Adopting a framework now accomplishes several things at once. It gives your HR team a defensible process. It gives your vendor relationships a set of standards to enforce contractually. It gives employees and unions evidence of good faith. And it positions your organization ahead of what is almost certainly coming as a baseline legal obligation within the next few years.

Where This Leaves HR

The legislative gap in Canadian AI governance is real, and it is not going to close quickly. For HR practitioners, the temptation is to wait for clear rules before acting. That is the wrong instinct. The employers getting this right are not waiting for Ottawa or Queen's Park. They are treating AI governance the same way good employers have always treated employment standards: as a floor to build on, not a target to hit exactly.

The floor will rise. The question worth asking now is whether your practices, your vendor contracts, your collective agreement language, and your internal documentation are ready for when it does. The organizations that will be in the best position are those that made deliberate choices during this period, when the rules were still being written.

AI will not replace the need for sound HR judgment. If anything, it makes that judgment more consequential. Someone has to be accountable for the decisions these tools produce, and in an employment context, that accountability sits with the employer. That is not a burden unique to the age of AI. It is the job.

Sources and Further Reading