Today's Editorial - 24 December 2021

Controversial facial recognition tech

Source: Shruti Dhapola, The Indian Express

Facebook is phasing out its ‘facial recognition tool’, the company announced in a blog post written by Jerome Pesenti, VP of Artificial Intelligence. Facebook said that while over one-third of its daily active users had the feature turned on and found it useful, it was moving away from the technology given regulatory uncertainty. But Facebook’s step back comes at a time of growing scrutiny of facial recognition technology, especially its use by the police in many countries.

We take a look at why facial recognition technology is viewed as controversial and where other tech companies and policy makers stand on it.

What is facial recognition technology? How does it work?

Facial recognition technology, as the name suggests, can identify a person by capturing their face from a photo or video. The technology can also work in real time, and relies on machine learning algorithms powered by deep neural networks to detect faces and map them to an existing database.

For example, in Google Photos or Apple Photos, the app will try to group photos of the same person and ask users to identify the face. All of this is possible because of a form of facial recognition technology used by these services.
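
For illustration only, here is a minimal sketch of the matching step such services rely on: a deep neural network converts each face into a numeric "embedding", and a new face is identified by finding the closest embedding in an existing database. The vectors, names and threshold below are made-up placeholders, not any company's actual model or pipeline.

```python
# Minimal sketch of face matching via embeddings (illustrative only).
# In a real system, a deep neural network would convert each detected
# face into a vector; here we use made-up vectors so the example runs
# without any model or image data.
import numpy as np

# Hypothetical database: person name -> face embedding computed earlier.
known_faces = {
    "alice": np.array([0.11, 0.83, 0.42, 0.31]),
    "bob":   np.array([0.74, 0.12, 0.55, 0.09]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two embeddings (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(query: np.ndarray, threshold: float = 0.9):
    """Return the best-matching name, or None if nothing is close enough."""
    best_name, best_score = None, -1.0
    for name, embedding in known_faces.items():
        score = cosine_similarity(query, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# A new photo is processed into an embedding (faked here) and matched.
new_face = np.array([0.12, 0.80, 0.45, 0.30])
print(identify(new_face))  # -> "alice" if the similarity clears the threshold
```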

On Facebook too, it was possible to turn on the feature and have the service automatically identify users if they appeared in photos or videos uploaded by friends or family. But companies such as Amazon and Microsoft have made it possible to use the technology at a much bigger scale, and to analyse more than just the images in a phone’s library. The technology is also sold to governments and law enforcement agencies, which has sparked concerns over its use.

Why has Facebook removed it?

While Facebook’s facial recognition tool was used only on its own platform, the company is stopping its use given the technology’s controversial nature. There are serious privacy concerns around deploying such tools, especially on a social network with billions of users uploading vast numbers of photos and videos.

In the post, Facebook said it needs to “weigh the positive use cases for facial recognition against growing societal concerns, especially as regulators have yet to provide clear rules,” and it has taken this decision after “careful consideration”. The company has already settled a lawsuit in the state of Illinois in the US, where it paid nearly $550 million to a group of users who had argued that the facial recognition tool violated the state’s privacy laws.

What about Amazon?

The other, more controversial name here is Amazon, which offers its Rekognition software as a service (SaaS) as part of its cloud offerings. But Rekognition has faced criticism because Amazon has offered the tool to law enforcement agencies as well.

Law enforcement agencies prefer Rekognition because it can track and analyse people in real time and can identify up to 100 people in a single image. But the technology is not entirely accurate, as the American Civil Liberties Union (ACLU) has demonstrated in the past.
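
As a hedged illustration of how such a cloud service is consumed, the sketch below calls Rekognition’s DetectFaces operation through the boto3 SDK to count and locate faces in an image stored in S3. The bucket name, object key and region are placeholders, the call requires valid AWS credentials, and this is not a depiction of any law enforcement workflow.

```python
# Illustrative sketch: counting faces in an S3 image with Amazon Rekognition.
# Requires AWS credentials; the bucket, key, and region are placeholders.
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

response = client.detect_faces(
    Image={"S3Object": {"Bucket": "example-bucket", "Name": "crowd.jpg"}},
    Attributes=["DEFAULT"],  # bounding box, pose, landmarks, confidence
)

# DetectFaces returns up to 100 faces per image, each with a bounding box.
for i, face in enumerate(response["FaceDetails"], start=1):
    box = face["BoundingBox"]  # expressed as ratios of image width/height
    print(f"Face {i}: confidence={face['Confidence']:.1f}%, "
          f"left={box['Left']:.2f}, top={box['Top']:.2f}")
```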

However, in a statement last year, Amazon said it was “implementing a one-year moratorium on police use of Amazon’s facial recognition technology.” But it will continue to offer the tool to organisations such as “Thorn, the International Center for Missing and Exploited Children, and Marinus Analytics” in order to help “rescue human trafficking victims and reunite missing children with their families.”

The statement also called for governments to “put in place stronger regulations to govern the ethical use of facial recognition technology,” and expressed hope that the US Congress would take a stand on the issue and put in place “appropriate rules” around the use of the technology.

A major criticism of Amazon’s Rekognition tool in the past has been its accuracy, especially when identifying people of colour, and African-Americans in particular. Experts have warned that use of the technology by law enforcement could lead to wrongful arrests and more discrimination. Responding to criticism from rights groups about the software’s inaccuracy, Amazon said the researchers had used an outdated version.

What about other tech companies?

In June 2020, Microsoft also joined Amazon in saying it would not sell the technology to law enforcement until there was a federal law regulating this in the US.

Microsoft President Brad Smith had told the Washington Post that the company had not sold its technology called Face API, part of its Azure Cloud services, to police departments in the US. Smith was quoted as saying, “We will not sell facial-recognition technology to police departments in the United States until we have a national law in place, grounded in human rights, that will govern this technology.”

In the past, Microsoft’s Azure cloud services, which included “facial recognition and identification”, were offered to US Immigration and Customs Enforcement (ICE), for which the company faced criticism.

IBM, on the other hand, announced it was exiting the business of facial recognition entirely in June 2020. IBM CEO Arvind Krishna wrote a letter to the US Congress calling for regulations on the use of the technology. “IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency,” he wrote, according to CNBC.

What has Google said on the use of facial recognition?

In January 2020, Alphabet and Google CEO Sundar Pichai hailed the European Union’s proposed temporary ban on the use of the technology. Google has been outspoken about the problematic nature of the technology for a while.

For instance, in a 2018 blog post, Google SVP of Global Affairs Kent Walker explained why Google Cloud does not offer “general-purpose facial recognition APIs”, saying the company was still “working through important technology and policy questions” around its use.

As part of its declaration on AI responsibilities, Google has also raised questions about the use of facial recognition, saying the technology’s deployment needs to be “fair, so it doesn’t reinforce or amplify existing biases, especially where this might impact underrepresented groups.” It has also said that the technology should not be used in “surveillance that violates internationally accepted norms” and that it needs to “protect people’s privacy, providing the right level of transparency and control.”

What have governments said about the use of facial recognition?

The big problem with facial recognition is that, as the technology gets faster and more accurate, there are worries it will be used for mass surveillance. There are also worries that the technology could become good enough to deduce intent and read expressions, enabling real-time surveillance.

In China, the government has used the technology to track Uighurs, the Muslim minority in the country. It was also used in the UK to monitor football fans arriving for a match in 2020. In India too there have been concerns over the use of facial recognition technology by police, especially during protests.

In the US, some members of Congress have proposed the Facial Recognition and Biometric Technology Moratorium Act, which would ban the use of the technology by federal entities. It would also bar federal entities from using other biometrics, such as voice recognition, gait recognition, and recognition of other immutable physical characteristics. The bill has been sent to House Committees for further consideration.

Meanwhile, the European Union has passed a non-binding resolution calling for a ban on the use of facial recognition technology by the police. The use of the technology is also a major concern of the EU’s upcoming AI Act, which will be debated and voted upon by the EU Parliament. The bill classifies AI systems meant for real-time and post remote identification of people as high-risk systems, which would require compliance before they can be placed on the EU market. It also restricts how “real-time” remote biometric identification systems can be used in public spaces for law enforcement purposes.
