When Big Brother starts watching in real time

Live facial recognition is a new breed of artificial intelligence. It enables faces to be scanned from CCTV footage or a video stream and compared against an existing database of photos to identify an individual in real time.

Live facial recognition technology is proving controversial.

In October last year, I took part in the Global Privacy Assembly’s annual gathering of data protection and privacy regulators from across the globe. At the top of our list of priorities for discussion was facial recognition technology.

The assembly’s resulting resolution on facial recognition technology demonstrates international awareness of the dangers that the technology poses to our human rights and the character of democratic societies.

It recognises that live facial recognition is not like other technology. Its potential intrusiveness and widespread impacts mean it should be treated with special concern.

The Boston City Council voted unanimously in January to ban the use of facial recognition technology by city government. Credit: AP

It should make policy makers sit up and take notice. My fellow regulators and I are certainly paying very close attention.

Canadian privacy commissioners recently published the report of their investigation into the privacy practices of the facial recognition tool offered by Clearview AI Inc. They found that Clearview’s actions amounted to unlawful mass surveillance and the Privacy Commissioner of Canada noted that “it is completely unacceptable for millions of people who will never be implicated in any crime to find themselves continually in a police lineup”.

Law enforcement agencies around the world are beginning to use the technology to scan crowds in public places to search for known or suspected criminals in real time. Its supporters argue that this can lead to more effective policing and lower crime rates.

However, the use of live facial recognition has also drawn concerns about the dangers it could pose to citizens' privacy and other human rights. Critics point to the chilling effect it could have on citizens who wish to carry out lawful activities such as protests. And reports of higher error rates in identifying and matching members of particular groups, such as people of colour and women, raise inherent issues of bias and fairness.

The recent resolution from the Global Privacy Assembly is a timely reminder from global regulators that facial recognition technology in all its forms raises grave concerns.

By its very nature, it can enable widespread surveillance and is more invasive and open to abuse than other forms of technology. Its very existence has the capacity to erode privacy, data protection and human rights.

Licence photos and other identifying information of West Australians will be part of a national facial recognition database. Credit: Getty Images

The debate around the technology gives us pause to reflect on whether it should be used at all.

In the UK, the Metropolitan Police has already begun using the technology by placing live facial recognition cameras in public locations. When people pass through these areas, their images are streamed in real time to a matching system.

This system is loaded with a "watchlist" of known and wanted offenders, and the real-time images are cross-referenced against it. When the system reports a match, officers are alerted and can approach the individual in question to verify whether the match is accurate.
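In outline, such a system compares a numerical "embedding" of each captured face against the embeddings of everyone on the watchlist, raising an alert only when the similarity exceeds a set threshold. The sketch below illustrates that matching step only; the names, vectors and threshold are invented for illustration, and a real system would derive embeddings from a face-recognition model rather than hand-written numbers.

```python
import math

# Hypothetical watchlist: each entry maps a person to a face embedding.
# In practice these vectors would come from a face-recognition model.
WATCHLIST = {
    "suspect_a": [0.9, 0.1, 0.3],
    "suspect_b": [0.2, 0.8, 0.5],
}

# Similarity above which the system would alert an officer.
# Where this threshold is set governs the trade-off between
# false positives and missed matches.
MATCH_THRESHOLD = 0.95


def cosine_similarity(u, v):
    """Cosine of the angle between two vectors (1.0 = identical direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm


def check_against_watchlist(embedding):
    """Return (name, score) for the best watchlist match above the
    threshold, or None if no entry is similar enough."""
    name, ref = max(
        WATCHLIST.items(),
        key=lambda item: cosine_similarity(embedding, item[1]),
    )
    score = cosine_similarity(embedding, ref)
    return (name, score) if score >= MATCH_THRESHOLD else None


# A face very close to suspect_a's embedding triggers an alert;
# an unrelated face does not.
print(check_against_watchlist([0.9, 0.1, 0.3]))
print(check_against_watchlist([0.0, 0.0, 1.0]))
```

The threshold is the crux: set it low and innocent passers-by are flagged (the false positives the Essex report documented); set it high and the system misses its targets. Either way, every face captured is processed, which is why regulators treat the technology as mass surveillance regardless of who ends up on an alert.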

Demonstrators last year in front of a mobile police facial recognition facility outside a shopping centre in London. Credit: AP

The initial trials of this system certainly attracted criticism. A 2019 report on the trial by the Human Rights, Big Data and Technology Project at the University of Essex showed an overwhelming number of false positive matches. The report also found indications of inherent bias in decision-making involving the technology.

An investigation by the UK Information Commissioner's Office also identified missed opportunities for higher standards of compliance, for consistent implementation by law enforcement, and for building public confidence in the technology's protection of citizens' data.

A decision by the UK Court of Appeal in August 2020 found that the use of live facial recognition technology by some British police authorities to date had violated human rights and data protection laws. While the court did not find that the technology can never be used, it made clear that authorities have an obligation to take great care in how it is used and when it can be relied upon.

Given the growing use of this technology, we must ask ourselves: should individuals, going about their lives, be subjected by default to surveillance when in public places? If so, will they alter their behaviour? Is this desirable? If the risks of bias and errors have been documented, what can be done to protect minority groups against misidentification and discrimination?

A demonstration of facial recognition technology in China. Credit: Bloomberg

The dangers posed by live facial recognition require that states across the globe collaborate to urgently build new and specific legal frameworks to regulate its development and use. But even in the absence of such frameworks, it is worth remembering that any introduction of facial recognition technology must comply with existing privacy and other human rights laws, such as those in place in Victoria.

I am not aware of live facial recognition currently being used by Victorian government agencies. Any agencies considering using live facial recognition for law enforcement or other purposes must seriously consider whether this use is lawful, necessary and proportionate.

Arguments that law-abiding citizens have nothing to fear from such technology ignore the human desire for dignity, intimacy and self-respect, and the ability to develop one's full potential free from the spectre of widespread surveillance.

The coronavirus pandemic has highlighted the importance of public trust in government. We are all better off when governments and public institutions behave in a way that earns our trust, and we are more likely to repay that trust by engaging in behaviour that contributes to positive societal outcomes.

Rushing headlong into the deployment of live facial recognition systems is not consistent with building a society based on mutual trust and respect.

Sven Bluemmel is the Victorian Information Commissioner. The office of the commissioner oversees the state government’s collection, use and disclosure of information.
