Planning to Use ChatGPT? Legal Considerations for Employers
April 20, 2023 | By Spencer Knibutat
Labour | Privacy | Human Rights | Employment
Overview
On November 30, 2022, OpenAI publicly launched ChatGPT, a Large Language Model (“LLM”) based Chatbot. With its remarkable ability to replicate human speech, ChatGPT shifted the public’s perception of AI and its potential uses. After just two months, ChatGPT had reached 100 million users, making it the fastest-growing consumer internet application in history. Of course, ChatGPT is only one example of the newest generation of AI developed over the past few years, which also includes other LLM-based Chatbots (e.g., Bing Chat and Bard), art generators, and medical diagnostic AI.
However, it is ChatGPT’s recent success that has most clearly highlighted AI’s varied use cases and its disruptive potential across industries, including its capacity to enhance workers’ productivity or replace some workers altogether. For example, some employers have already implemented LLM-based Chatbots to write computer code, provide customer support, and draft job descriptions, blog posts, and marketing copy. As AI becomes more sophisticated and accurate, its possible applications will broaden correspondingly.
Employers keen to use this new generation of AI should, however, be mindful of applicable and developing legal considerations. As outlined below, the implementation of AI may trigger numerous issues relating to employment, labour, human rights, privacy, and AI-specific laws. Organizations using LLM-based Chatbots and other AI may also face intellectual property law issues, although these are outside the scope of this article.
What Possible Privacy, Cybersecurity, and Confidentiality Considerations Should Employers Keep in Mind when Using AI?
Canadian employers are subject to a variety of privacy and confidentiality obligations under statute, contract, or the common law. Complying with these obligations can be a challenge where an employer relies on AI, given the possible privacy risks that the technology may create.
One such risk is that user inputs may be used as training data for future iterations of AI. In other words, if employees input confidential, personal, or sensitive information into AI like ChatGPT, this information may be used in future outputs, such as those provided to the public. For this reason, several large employers across North America have prohibited their employees from using ChatGPT in the workplace. The privacy implications of ChatGPT have also resulted in regulatory investigations in Canada and several other jurisdictions.
Before implementing LLM-based Chatbots or other AI in the workplace, employers should consider identifying privacy-based risks and taking steps to mitigate and address them. In particular, employers should consider conducting a privacy impact assessment. Even where this is not statutorily required, a privacy impact assessment can help determine the extent to which AI may present a risk of breaching privacy laws or inadvertently disclosing confidential information, trade secrets, or other sensitive material.
Safeguarding Against Malicious Uses of AI
Employers should also be aware of the cybersecurity and other risks associated with the deceptive use of this new generation of AI, both by employees and by malicious third-party threat actors. Given the technology’s increased sophistication, threat actors can use LLM-based Chatbots to generate more convincing cyberattacks, such as well-crafted phishing messages.
These concerns are exacerbated by the anticipated proliferation of “deepfake” technologies, which can be used to create convincing digital imitations of a person’s likeness. Deepfake technology can lead to a variety of cybersecurity and other risks for employers, especially in the era of remote work, as malicious actors can impersonate employees, clients, regulators, and other individuals via audio calls, voicemails, and even video-call technology. Employers should consider adopting policies and protocols requiring employees to verify the identity of those with whom they are speaking virtually, and providing employee training on cybersecurity controls.
What Human Rights Issues Should Employers Consider when Using AI?
AI has a propensity to produce outputs that differentiate on the basis of grounds protected under human rights legislation (known as “biased outputs”).
OpenAI and other AI developers have taken steps to mitigate the risk of biased outputs, such as through content moderation policies. At the time of writing, these steps are not required in most jurisdictions around the world. The regulatory landscape in Canada, however, is expected to change in the coming years.
In the interim, however, biased outputs remain a risk that employers should consider and seek to address. If an employer uses an LLM-based Chatbot that produces biased outputs, any adverse effects of those outputs may give rise to allegations that the employer has breached applicable human rights laws. This risk may arise where, for example, an AI drafts a job description that includes duties that would unjustifiably exclude certain protected groups, or where a customer service Chatbot makes comments based on discriminatory premises or assumptions in conversations with the public. To guard against these risks, employers may wish to review public-facing AI outputs or limit the use of AI to lower-risk use cases. In all cases, employers should carefully review the impacts of AI implementation for potential human rights concerns.
What Are the Possible Consequences of Implementing AI to Fulfill Employees’ Job Duties?
LLM-based Chatbots and other AI are unlikely to entirely replace employees. Instead, in the coming years, we expect to see AI technology eliminating tasks traditionally performed by employees, with employees and employers maintaining oversight over the AI performing those tasks. The integration of AI technology in this fashion may be gradual and may pose little risk to employers. In other cases, integration may be more sudden, which can give rise to legal risks if employees’ positions are impacted. Below, we outline some of the potential consequences of such changes to job duties and the steps that employers may take to mitigate the associated legal risks.
Possible Consequences for Employers with Non-Unionized Employees
Where an employer seeks to use AI to entirely replace employees, the usual set of termination-related legal considerations will apply. In addition to providing appropriate statutory, contractual, or common law entitlements upon termination of employment, employers should ensure that their actions are carried out in good faith and in a manner that is neither arbitrary nor discriminatory.
If an employer’s use of AI does not completely eliminate an employee’s position, there may still be a risk that the employee claims they have been “constructively dismissed”. Constructive dismissal arises where an employer, through its actions, effectively terminates a worker’s employment, typically by unilaterally altering one or more fundamental terms or conditions of the employee’s employment (e.g., remuneration, duties and responsibilities, or hours of work). If an employer’s adoption of AI technology fundamentally alters or reduces an employee’s job duties and responsibilities, this may inadvertently prompt claims of constructive dismissal.
To proactively address the risk of constructive dismissal, prudent employers should consider expressly reserving the right to unilaterally change an employee’s duties and responsibilities from time to time. This reservation of rights should be carefully drafted into the employee’s employment agreement. Where effective contractual language does not exist, employers should consider options for limiting the magnitude of any changes to job duties, such as providing reasonable advance notice of pending changes or obtaining the employee’s consent to them.
Possible Consequences for Employers with Unionized Employees
Generally, subject to the language of the collective agreement or applicable statutes, unionized employers may be in a position to exercise their management rights to alter the duties of bargaining unit employees. Employers may therefore consider amending job descriptions to reflect the implementation of AI in the workplace.
However, collective agreements often include articles that impose requirements on employers in the event they implement technological changes, mechanization, or automation that will impact the working conditions or employment security of the bargaining unit. For example, such language may require the employer to consult with and provide adequate notice to the union before any technological changes are implemented, or to pay a premium on termination payments to employees who lose their jobs as a result of the change.
Some jurisdictions’ labour statutes impose similar requirements on unionized employers in the absence of such collective agreement language. For example, the federal Canada Labour Code requires employers to provide notice of a “technological change” that is likely to “affect the terms and conditions or security of employment of a significant number of the employer’s employees to whom the collective agreement applies”. Whether these obligations apply where an employer implements AI will depend on a variety of factors, such as the applicable definition of “technological change”, the impact of the AI technology on employees’ individual working conditions and employment security, and the employer’s motivations for implementing the technology.
Key Takeaways for Employers
LLM-based Chatbots and other AI offer myriad potential use cases for employers to improve their overall efficiency and the productivity of their workforces. These use cases will only increase as AI systems become more sophisticated and tailored to specific industries and job tasks.
However, employers should develop a careful strategy attuned to the applicable legal risks when considering how and when to adopt LLM-based Chatbots and other AI in their workplaces, or before permitting employees to use these technologies in their roles. By providing employee training, conducting assessments, and having the requisite policies and procedures in place at the front end, employers can avoid unforeseen legal issues and reduce the cost of any associated litigation.
Employers who do not intend to implement AI should similarly stay abreast of the development of these technologies, given the nature of the legal risks that AI can create for their workforces.
Need More Information?
For more information or assistance with legal considerations involving the workplace implementation of ChatGPT or other AI, contact Spencer Knibutat at sknibutat@filion.on.ca or your regular lawyer at the firm.