Monday, March 27, 2023

What Artificial Intelligence Gone Awry Really Looks Like



Techno-Authoritarianism

To the uninformed, the above camera in a Chinese classroom appears to be a useful tool for evaluating teacher performance and gauging whether students are truly engaged. But add progressively intrusive AI analytics and you can link each individual student’s reaction to history lessons, political statements and any mention of a controversial topic. By combining massive information-gathering with humongous server storage capacity, the government has turned China into a mega-repressive regime with a long-term memory. Kids whose facial expressions suggest apathy, skepticism or disdain for politically “correct” thought can expect those AI-determined reactions to be stored on their permanent “social credit” profile.
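To make the mechanics concrete, here is a brief, hypothetical Python sketch of the kind of pipeline described above: an emotion-classification model tags each student’s reaction to the current lesson topic, and the result is appended to a permanent per-student record. Every name and structure in it (ProfileEntry, StudentProfile, process_frame) is an illustrative assumption, not a description of any actual deployed system.

```python
# Hypothetical sketch only: what "linking each student's reaction to a lesson
# topic and storing it on a permanent profile" could look like as a data
# pipeline. All names and structures are illustrative assumptions, not a
# description of any real classroom surveillance system.
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class ProfileEntry:
    timestamp: datetime
    lesson_topic: str          # e.g. "history lesson", "political statement"
    inferred_reaction: str     # e.g. "engaged", "apathetic", "skeptical"


@dataclass
class StudentProfile:
    student_id: str
    entries: list[ProfileEntry] = field(default_factory=list)

    def record(self, topic: str, reaction: str) -> None:
        # Append the AI-inferred reaction to a long-term, searchable record.
        self.entries.append(ProfileEntry(datetime.now(), topic, reaction))


def process_frame(profiles: dict[str, StudentProfile],
                  detections: list[tuple[str, str]],
                  lesson_topic: str) -> None:
    """For each (student_id, inferred_reaction) pair produced by some
    face/emotion model, write the reaction into that student's profile."""
    for student_id, reaction in detections:
        profile = profiles.setdefault(student_id, StudentProfile(student_id))
        profile.record(lesson_topic, reaction)
```

The point of the sketch is how little machinery is needed once the cameras and the classifier exist: the “permanent record” is just an ever-growing list keyed to a student ID.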

“[Thus, t]he most audacious form of surveillance is China's ‘social credit’ system. The ruling party dishes out penalties and rewards based on a system created with the intention of incentivizing good behavior. Chinese citizens are given various penalties for a range of offenses -- including not paying taxes, jaywalking, walking a dog without a leash and how long someone plays video games, among others.

“Millions of travelers have also been barred from buying plane and train tickets and denied access to education. Unlike the United States, where companies use data to assess a person's creditworthiness for a loan or credit card, China's credit system applies to daily life… In 2018, individuals were blocked 290,000 times from taking senior management jobs or acting as a company's legal representative. For ethnic minorities, like Muslims, the surveillance can take a more intrusive turn… [as detained Uighurs have learned the hard way].” Fox News (5/1/20). 

Governments the world over are increasingly enamored of these Chinese surveillance systems, complete with highly intrusive AI analytics. Some simply amplify the “need to fight crime” mantra… others do not stop there. Writing for the February 26th Los Angeles Times, Paul Scharre, vice president and director of studies at the Center for a New American Security, notes: “If democracies don’t promote an alternative, Beijing’s repressive style will become a global norm….

“China is forging a new model of digital authoritarianism at home and is actively exporting it abroad. It has launched a national-level AI development plan with the intent to be the global leader by 2030. And it is spending billions on AI deployment, training more AI scientists and aggressively courting experts from Silicon Valley.

“The United States and other democracies must counter this rising tide of techno-authoritarianism by presenting an alternative vision for how AI should be used that is consistent with democratic values. But China’s authoritarian government has an advantage. It can move faster than democratic governments in establishing rules for AI governance, since it can simply dictate which uses are allowed or banned.

“One risk is that China’s model for AI use will be adopted in other countries while democracies are still developing an approach more protective of human rights… The Chinese Communist Party, for example, is integrating AI into surveillance cameras, security checkpoints and police cloud computing centers. As it does so, it can rely on world-class technology companies that work closely with the government. Lin Ji, vice president of iFlytek, one of China’s AI ‘national team’ companies, told me that 50% of its $1 billion in annual revenue came from the Chinese government.

“The government is pouring billions of dollars into projects such as the Skynet and Sharp Eyes surveillance networks and a ‘social credit system,’ giving it a much larger role in China’s AI industry than the role the U.S. government has in the industry here… China is building a burgeoning panopticon, with more than 500 million surveillance cameras deployed nationwide by 2021 — accounting for more than half of the world’s surveillance cameras. Even more significant than government cash buoying the AI industry is the data collected, which AI companies can use to further train and refine their algorithms.

“Facial recognition is being widely deployed in China, while a grassroots backlash in the U.S. has slowed deployment. Several U.S. cities and states have banned facial recognition for use by law enforcement. In 2020, Amazon and Microsoft placed a moratorium on selling facial-recognition technology to law enforcement, and IBM canceled its work in the field. These national differences are likely to give Chinese firms a major edge in development of facial-recognition technology.

“China’s use of AI in human rights abuses is evident in the repression and persecution of ethnic Uighurs in Xinjiang [Province in western China], through tools such as face, voice and gait recognition. Under the Strike Hard Campaign, the Chinese Communist Party has built thousands of police checkpoints across Xinjiang and deployed 160,000 cameras in the capital, Urumqi. Facial-recognition scanners are deployed at hotels, banks, shopping malls and gas stations. Movement is tightly controlled through ID checkpoints that include face, iris and body scanners. Police match this data against a massive biometric database consisting of fingerprints, blood samples, voice prints, iris scans, facial images and DNA…

“The U.S. government needs to be more proactive in international standard-setting, working with domestic companies to ensure that international AI and data standards protect human rights and individual liberty. International standard-setting — through organizations such as the International Organization for Standardization, the International Electrotechnical Commission and the United Nations International Telecommunication Union — is one of the lower-profile but essential battlegrounds for global tech governance.” 

But there are already plenty of uses of AI-driven surveillance technology right here in the United States, by both state and federal agencies. It’s not even about “you” anymore. “Surveillance technology has long been able to identify you. Now, with help from artificial intelligence, it’s trying to figure out who your friends are… With a few clicks, this ‘co-appearance’ or ‘correlation analysis’ software can find anyone who has appeared on surveillance frames within a few minutes of the gray-haired male over the last month, strip out those who may have been near him a time or two, and zero in on a man who has appeared 14 times. The software can instantaneously mark potential interactions between the two men, now deemed likely associates, on a searchable calendar.

“Vintra, the San Jose-based company that showed off the technology in an industry video presentation last year, sells the co-appearance feature as part of an array of video analysis tools. The firm boasts on its website about relationships with the San Francisco 49ers and a Florida police department. The Internal Revenue Service and additional police departments across the country have paid for Vintra’s services, according to a government contracting database… But the firm is one of many testing new AI and surveillance applications with little public scrutiny and few formal safeguards against invasions of privacy.” Noah Bierman, LA Times, March 3rd.
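The “co-appearance” analysis Bierman describes is, at its core, a counting problem over timestamped sightings: tally how often each person shows up near a target within a short time window, then keep only the repeat associates. Below is a minimal, hypothetical Python sketch of that idea. The data model, the same-camera assumption, the five-minute window and the three-appearance cutoff are all illustrative guesses on my part; Vintra’s actual system is proprietary and not described in the source.

```python
# A minimal sketch of the "co-appearance" idea quoted above: given timestamped
# face sightings, count how often each person is seen within a few minutes of
# a target and keep only repeat associates. The data model, the same-camera
# assumption, the window and the cutoff are illustrative guesses; the actual
# commercial implementation is not described in the source.
from collections import Counter
from datetime import datetime, timedelta

# One sighting: (person_id, camera_id, timestamp)
Sighting = tuple[str, str, datetime]


def co_appearances(sightings: list[Sighting],
                   target_id: str,
                   window: timedelta = timedelta(minutes=5),
                   min_count: int = 3) -> list[tuple[str, int]]:
    """Return (person_id, count) for people seen on the same camera within
    `window` of the target at least `min_count` times, most frequent first.
    Filtering by min_count drops people seen nearby only once or twice."""
    target_hits = [(cam, ts) for pid, cam, ts in sightings if pid == target_id]
    counts: Counter = Counter()
    for pid, cam, ts in sightings:
        if pid == target_id:
            continue
        if any(cam == t_cam and abs(ts - t_ts) <= window
               for t_cam, t_ts in target_hits):
            counts[pid] += 1
    return [(pid, n) for pid, n in counts.most_common() if n >= min_count]
```

A real system would presumably run this kind of query over face embeddings and camera metadata at vastly larger scale, which is exactly why the lack of public scrutiny and formal safeguards matters.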

There are far too many buyers of such systems around the world. They are the great enabling tools of autocracy. These sales must stop, and the use of these intrusive systems must be severely curtailed.

I’m Peter Dekom, and I have to wonder what “culture warriors” like Florida Governor and de facto presidential candidate Ron DeSantis would do with surveillance tools like this freely used against… all of us.
