By AI Trends Staff 

The United Nations Human Rights Office of the High Commissioner this week called for a moratorium on the sale and use of AI technology that poses human rights risks—including the use of facial recognition software—until adequate safeguards are in place.

Michelle Bachelet, UN High Commissioner for Human Rights

“Artificial intelligence can be a force for good, helping societies overcome some of the great challenges of our times. But AI technologies can have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights,” stated Michelle Bachelet, the UN High Commissioner for Human Rights, in a press release.

Bachelet’s warnings accompany a report released by the UN Human Rights Office analyzing how AI systems affect people’s right to privacy—as well as rights to health, education, freedom of movement and more. The full report, entitled “The right to privacy in the digital age,” can be found here.

“Artificial intelligence now reaches into almost every corner of our physical and mental lives and even emotional states,” Bachelet stated. “AI systems are used to determine who gets public services, decide who has a chance to be recruited for a job, and of course they affect what information people see and can share online.”  

Digital rights advocacy groups welcomed the recommendations from the international body. Evan Greer, the director of the nonprofit advocacy group Fight for the Future, stated that the report further proves the “existential threat” posed by this emerging technology, according to an account from ABC News. 

“This report echoes the growing consensus among technology and human rights experts around the world: artificial intelligence powered surveillance systems like facial recognition pose an existential threat to the future [of] human liberty,” Greer stated. “Like nuclear or biological weapons, technology like this has such an enormous potential for harm that it cannot be effectively regulated, it must be banned.”  

While the report did not cite specific software, it called for countries to ban any AI applications that “cannot be operated in compliance with international human rights law.” More specifically, the report called for a moratorium on the use of remote biometric recognition technologies in public spaces, at least until authorities can demonstrate compliance with privacy and data protection standards and the absence of significant accuracy problems and discriminatory impacts.

The report was also critical of the lack of transparency around the implementation of many AI systems, noting that their reliance on large datasets can mean people’s data is collected and analyzed in opaque ways, sometimes producing faulty or discriminatory decisions, according to the ABC account. The long-term storage of data, and how it might be used in the future, is also unknown and a cause for concern, according to the report.

“Given the rapid and continuous growth of AI, filling the immense accountability gap in how data is collected, stored, shared and used is one of the most urgent human rights questions we face,” Bachelet stated. “We cannot afford to continue playing catch-up regarding AI—allowing its use with limited or no boundaries or oversight, and dealing with the almost inevitable human rights consequences after the fact.” Bachelet called for immediate action to put “human rights guardrails on the use of AI.”  

Report Announced in Geneva  

Peggy Hicks, Director of Thematic Engagement, UN rights office

Journalists were present at the announcement of the report in Geneva. “This is not about not having AI,” stated Peggy Hicks, director of thematic engagement for the UN rights office, in an account in Time. “It’s about recognizing that if AI is going to be used in these human rights—very critical—function areas, that it’s got to be done the right way. And we simply haven’t yet put in place a framework that ensures that happens.”  

The report also expresses caution about tools that try to deduce people’s emotional and mental states by analyzing their facial expressions or body movements, saying such technology is susceptible to bias and misinterpretation and lacks a scientific basis.

“The use of emotion recognition systems by public authorities, for instance for singling out individuals for police stops or arrests or to assess the veracity of statements during interrogations, risks undermining human rights, such as the rights to privacy, to liberty, and to a fair trial,” the report states.  

The report’s recommendations are consistent with concerns raised by many political leaders in Western democracies; European regulators have already taken steps to rein in the riskiest AI applications. Proposed regulations outlined by European Union officials this year would ban some uses of AI, such as real-time scanning of facial features, and tightly control others that could threaten people’s safety or rights.  

Western countries have been at the forefront of expressing concerns about the discriminatory use of AI. “If you think about the ways that AI could be used in a discriminatory fashion, or to further strengthen discriminatory tendencies, it is pretty scary,” stated US Commerce Secretary Gina Raimondo during a virtual conference in June, quoted in the Time account. “We have to make sure we don’t let that happen.”  

At the same conference, Margrethe Vestager, the European Commission’s executive vice president for the digital age, suggested some AI uses should be off-limits completely in “democracies like ours.” She cited social scoring, which can close off someone’s privileges in society, and the “broad, blanket use of remote biometric identification in public space.”  

Consistency in Cautions Issued Around the World  

The report did not single out any countries by name, but AI technologies deployed in some places around the world have raised human rights alarms in recent years, according to an account in The Washington Post.

The government of China, for example, has been criticized for conducting mass surveillance that uses AI technology in the Xinjiang region, where the Chinese Communist Party has sought to assimilate the mainly Muslim Uyghur ethnic minority group.  

The Chinese tech giant Huawei tested AI systems using facial recognition technology that would send automated “Uyghur alarms” to police once a camera detected a member of the minority group, The Washington Post reported last year. Huawei responded that the language used to describe the capability was “completely unacceptable,” even though the company had advertised ethnicity-tracking capabilities.

Bachelet of the UN was critical of technology that can enable authorities to systematically identify and track individuals in public spaces, affecting their rights to freedom of expression, peaceful assembly and movement.

In Myanmar this year, Human Rights Watch criticized the military junta’s use of a public camera system, provided by Huawei, that used facial and license plate recognition to alert the government to individuals on a “wanted list.”

In the US, facial recognition has attracted some local regulation. The city of Portland, Ore., last September passed a broad ban on facial recognition technology, including uses by local police. Amnesty International this spring launched the “Ban the Scan” initiative to prohibit the use of facial recognition by New York City government agencies. 

Read the source articles and information in the press release from the UN Human Rights Office; read the report, “The right to privacy in the digital age,” here; and see the accounts from ABC News, in Time and in The Washington Post.
