Corporate Responsibility in the Deployment of Facial Recognition Technology: Prioritizing Transparency and Accountability

Authors: Moushmi Mehta and Yonida Koukio

In 2020, the Canadian Parliament held hearings on the use of facial recognition technology (FRT) by law enforcement agencies.[1] The hearings were prompted by concerns that the technology could be used for racial profiling and could violate individual privacy rights, and they resulted in calls for greater transparency and accountability in its use.

FRT has become increasingly popular among businesses for various applications, including security, marketing, and customer service. However, with the widespread adoption of this technology, there are growing concerns over its potential impact on privacy, human rights, and data protection.

Corporate responsibility in the deployment of FRT is vital to protect individual rights and maintain public trust. Businesses must prioritize transparency and accountability in their use of this technology to mitigate the risks associated with its deployment.

Privacy Implications of the Use of FRT

The use of FRT in public spaces has significant implications for privacy and human rights. One of the biggest concerns is the potential for mass surveillance, which could enable the tracking and monitoring of individuals without their knowledge or consent. The technology also raises questions about accuracy, bias, and discrimination, as it has been shown to have higher error rates for people with darker skin tones and for women.[2]

There are also concerns about the potential for the technology to be used for profiling, tracking political or religious affiliations, and targeting vulnerable groups.[3] Given these concerns, companies must ensure that their facial recognition systems are regularly audited and monitored to reduce bias and errors. Notably, several tech companies have pledged to use the technology ethically and responsibly. Microsoft, for example, has called for federal regulation of the technology and has stated that it will not sell FRT to law enforcement agencies until proper safeguards are in place.[4]

Notable Examples of Misuse of FRT

At the other end of the spectrum, however, there are notable examples of corporations misusing FRT:

  • Dior is an example of a corporation that has used FRT to collect and retain biometric data. In 2020, Dior launched a virtual try-on feature for its sunglasses that used FRT to detect a customer’s face and superimpose the sunglasses on their image. The feature collected and retained the customer’s biometric data, including their facial features and measurements, and became the subject of litigation in the United States.[5]
  • Clearview AI is a facial recognition company that scraped billions of images from social media platforms and other websites to create a massive facial recognition database. The company sold access to this database to law enforcement agencies and other businesses. However, Clearview AI did not obtain consent from individuals whose images were used in the database, raising serious concerns about privacy and data protection.[6]
  • Uber has also faced legal action over its use of FRT to verify the identity of its drivers. The claim alleged that Uber deployed the technology without obtaining proper consent and without informing drivers of how their biometric data would be used.[7]

The Office of the Privacy Commissioner of Canada’s Position

At this stage, it is worth noting the Office of the Privacy Commissioner of Canada’s (OPC) decision in the Cadillac Fairview case,[8] which set a precedent for the use of FRT in public spaces in Canada. The OPC launched an investigation in 2018 after it was revealed that Cadillac Fairview, a major Canadian property management company, had been using FRT without the knowledge or consent of individuals in and around 12 of its shopping centers across Canada. The technology captured images of shoppers’ faces, analyzed their age, gender, and emotions, and used this information for security and marketing purposes.

In its findings, the OPC concluded that Cadillac Fairview had collected and used the personal information of shoppers without their knowledge or consent, which violated the federal Personal Information Protection and Electronic Documents Act (PIPEDA). The OPC also found that Cadillac Fairview had inadequate safeguards in place to protect the privacy of individuals and that the company had not been transparent enough about its use of FRT.

More recently, in its May 2022 guidance, “Privacy guidance on facial recognition for police agencies,” the OPC stressed the importance of designing initiatives with privacy protections built in from the outset: “[t]o be most effective, such protections must be incorporated during initial conception and planning, following through to execution, deployment, and beyond.” It went on to clarify that privacy protections ought to cover all personal information involved in an initiative, including “training data, faceprints, source images, face databases, and intelligence inferred from FR searches, in addition to any other personal information that may be collected, used, disclosed, or retained.”

Compliance Steps by Entities Using FRT

The OPC has made it clear that companies must obtain meaningful consent from individuals before collecting their personal information, and that they must have adequate safeguards in place to protect this information. The decision also highlights the need for transparency and accountability when it comes to the use of FRT, as well as the importance of conducting privacy impact assessments before deploying the technology in public spaces.

Another critical consideration for businesses using FRT to elevate their products is ensuring full legal compliance. Dior’s use of FRT in its virtual sunglasses try-on shows how corporations can face legal liability when using this technology for marketing purposes: while the campaign was innovative, it raised concerns about the collection and use of individuals’ personal information. Ethical use of such technology includes providing clear information about how the technology is used and its implications for individuals. Companies should also conduct privacy impact assessments to evaluate the technology’s impact on privacy and human rights.

For businesses considering usage of FRT, the following non-exhaustive list of steps should be kept in mind:

  • Obtain meaningful consent.
  • Conduct privacy impact assessments.
  • Implement an FRT policy.
  • Ensure you have the required technological infrastructure and trained personnel to implement and enforce FRT policies.
  • Invest in secure data storage and processing systems.
  • Implement regular security audits.
  • Provide regular training for employees who use or have access to FRT systems.
  • Engage with stakeholders and be transparent about FRT usage.

Directors’ and Officers’ Personal Liability Under the Proposed Consumer Privacy Protection Act

With the introduction of Bill C-27, which would enact the Consumer Privacy Protection Act (the Bill), corporations should note that the Bill proposes several mechanisms for holding companies responsible for the misuse of FRT. Among them, the Bill would grant individuals a private right of action against organizations that contravene its provisions.

This would mean that individuals may be able to sue companies that misuse FRT for damages, including for any loss or injury suffered as a result of the misuse. Under section 52, the Bill also proposes to hold directors, officers, and agents of organizations personally liable for contraventions of its provisions if they participated in, or engaged in, the commission of the breach, which could apply to the use of FRT.

FRT is poised to become a ubiquitous part of modern life and business operations, with applications ranging from security and surveillance to advertising and social media. While this technology offers many benefits, it also raises significant concerns about privacy, civil liberties, and potential abuse. To address these concerns, corporations must ensure that they use FRT in a responsible and compliant manner, with clear policies in place to protect user data and prevent misuse. Notably, under the Bill, representatives of corporations will not be held personally liable if they can show that they exercised due diligence to prevent the breach.

Key Takeaways

While policies are an essential component of responsible FRT usage, they are not enough on their own. Corporations must also ensure that they have the necessary technological infrastructure and trained personnel to implement and enforce these policies effectively. This includes investing in secure data storage and processing systems, implementing regular security audits, and providing regular training for employees who use or have access to FRT systems. Additionally, corporations must be transparent about their FRT usage and make efforts to engage with stakeholders, including customers, privacy advocates, and regulatory agencies. By doing so, they can not only minimize the risk of legal and reputational damage but also foster trust with their customers and stakeholders. Ultimately, responsible use of FRT will be critical in ensuring that this technology benefits society without infringing on individual privacy rights.

If you have any questions about this article or wish to learn more, please contact the authors. Oziel Law communications and legal articles are intended for informational purposes only and do not constitute legal advice or an opinion on any issue. To obtain additional details or advice about a specific matter, please contact our lawyers.


[1] On February 21, 2020, the Office of the Privacy Commissioner of Canada launched a joint investigation with the provincial privacy authorities of Quebec, Alberta, and British Columbia into Clearview AI’s collection of facial images in its database and their subsequent disclosure to its customers.

[2] Alex Najibi, “Racial Discrimination in Face Recognition Technology” (24 October 2020), Harvard SITN Blog, online: <https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-technology/>.

[3] Michael Gentzel, “Biased Face Recognition Technology Used by Government: A Problem for Liberal Democracy” (25 September 2021), PubMed Central, National Library of Medicine, online: <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8475322/>.

[4] Ry Crist, “Microsoft says it won’t sell its facial recognition tech to police” (11 June 2020), CNET, online: <https://www.cnet.com/tech/tech-industry/microsoft-bans-police-from-using-its-facial-recognition-software/>.

[5] Delma Warmack-Stillwell v. Christian Dior, Inc., No. 22-cv-04633 (U.S. District Court for the Northern District of Illinois, Eastern Division).

[6] Clearview AI (“Clearview”) is a US-based company that created and maintains a large database of images containing faces (along with associated hyperlinks to the location on the Internet where the image was found). Clearview account holders can search this database for matching faces using facial recognition technology.

[7] R. Booth, “Ex-Uber driver takes legal action over ‘racist’ face-recognition software” (5 October 2021), The Guardian.

[8] Office of the Privacy Commissioner of Canada, Joint investigation of The Cadillac Fairview Corporation Limited, PIPEDA Findings #2020-004 (October 28, 2020).