Friday, March 5, 2021

Worse than You Can Imagine, More Dangerous than You Can Believe


The fierce battle to unleash versus contain artificial intelligence


Whether we like it or not, artificial intelligence is all around us. Its underlying power is a combination of massive data storage, lightning-fast microprocessing and the uncanny ability to take its basic programming and expand that programming on its own… without further human intervention. By way of example, provide a basic visual analytic for detecting emerging breast cancer in X-rays, then feed in tens of thousands of X-rays showing nascent breast cancer, and the AI will both significantly refine its detection criteria and become a vastly faster, more accurate and more sensitive diagnostic tool, probably more accurate than a review by even the most competent radiologist. Machines can actually perform soup-to-nuts surgical procedures and even draft legal briefs. That’s the good news.
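To make that learning loop concrete, here is a minimal sketch in Python, using scikit-learn on synthetic stand-in data rather than real mammograms (production diagnostic systems use deep convolutional networks, but the principle is the same): the identical classifier, fed progressively more labeled examples, becomes measurably more accurate without a single change to its code.

```python
# Toy illustration: accuracy improves as more labeled "scans" are fed in.
# Synthetic feature vectors stand in for X-ray images.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# 20 numeric features per "scan", labeled 1 (nascent tumor) or 0 (clear).
X, y = make_classification(n_samples=20000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

for n in (100, 1_000, 10_000):   # grow the training set tenfold each step
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {n:>6} scans -> test accuracy {acc:.3f}")
```

The point of the exercise: nothing about the algorithm changed between runs; only the data did.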


The bad news begins with the cultural biases and assumptions baked into the original programming and initial data feeds, and what happens when computers internalize those biases – as illustrated by the difficulty facial recognition (worrisome on its merits) has differentiating darker-skinned individuals. It gets worse when these artificially intelligent software programs are created for socially questionable purposes: for military advantage, for surveillance and espionage, in self-target-selecting weapon systems, to manipulate and simulate human communications for malignant purposes, or to mislead and misdirect consumer behavior… and when those malignancies and biases are embedded in the programming, with implementation and expansion performed directly by the computer under little or no human control. In short, human beings have delegated their ability to choose and act to a machine.
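That facial recognition disparity is exactly the kind of defect a simple audit surfaces. A hedged sketch, with all the numbers invented purely for illustration: compare a matcher’s error rate across skin-tone groups, and the embedded bias falls out as a single statistic.

```python
# Simulated audit of a biased face matcher: measure error rate per group.
# All data is synthetic; the 12% vs 3% error rates are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
groups = rng.choice(["lighter", "darker"], size=n)
truth = rng.integers(0, 2, size=n)               # 1 = same person, 0 = not

# Pretend the matcher errs four times more often on darker-skinned faces.
error_prob = np.where(groups == "darker", 0.12, 0.03)
flip = rng.random(n) < error_prob
pred = np.where(flip, 1 - truth, truth)

for g in ("lighter", "darker"):
    mask = groups == g
    print(f"{g:>7}: error rate {np.mean(pred[mask] != truth[mask]):.1%}")
```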


University of Massachusetts academics Nir Eisikovitz (associate professor) and Dan Feldman (senior research associate), writing in the February 24th theConversation.com, explain a bit more: “AI is being used for wide and rapidly expanding purposes. It is being used to predict which television shows or movies individuals will want to watch based on past preferences and to make decisions about who can borrow money based on past performance and other proxies for the likelihood of repayment. It’s being used to detect fraudulent commercial transactions and identify malignant tumors. It’s being used for hiring and firing decisions in large chain stores and public school districts. And it’s being used in law enforcement – from assessing the chances of recidivism, to police force allocation, to the facial identification of criminal suspects.

“Many of these applications present relatively obvious risks. If the algorithms used for loan approval, facial recognition and hiring are trained on biased data, thereby building biased models, they tend to perpetuate existing prejudices and inequalities. But researchers believe that cleaned-up data and more rigorous modeling would reduce and potentially eliminate algorithmic bias. It’s even possible that AI could make predictions that are fairer and less biased than those made by humans… Where algorithmic bias is a technical issue that can be solved, at least in theory, the question of how AI alters the abilities that define human beings is more fundamental. We have been studying this question for the last few years as part of the Artificial Intelligence and Experience project at UMass Boston’s Applied Ethics Center.” Isn’t that precisely the problem? Making recommendations based on a consumer’s online history is one such intrusion, but take it up one level and it is the computer that makes and implements the decisions.
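One concrete form the “cleaned-up data and more rigorous modeling” remedy takes is reweighing the training set so that group membership and outcome look statistically independent to the learner, in the style of Kamiran and Calders. A minimal sketch on synthetic loan data; every column and number here is invented for illustration:

```python
# Reweighing sketch: shrink the approval-rate gap between two groups.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000
group = rng.integers(0, 2, size=n)                 # protected attribute
income = rng.normal(50 + 10 * group, 15, size=n)   # historical disparity
approved = (income + rng.normal(0, 10, n) > 55).astype(int)
X = np.column_stack([income, group])

def approval_gap(model):
    pred = model.predict(X)
    return abs(pred[group == 1].mean() - pred[group == 0].mean())

plain = LogisticRegression(max_iter=1000).fit(X, approved)

# Kamiran-Calders weights: w(g, y) = P(g) * P(y) / P(g, y),
# up-weighting the combinations that historical bias made rare.
w = np.ones(n)
for g in (0, 1):
    for yv in (0, 1):
        m = (group == g) & (approved == yv)
        if m.any():
            w[m] = (group == g).mean() * (approved == yv).mean() / m.mean()
fair = LogisticRegression(max_iter=1000).fit(X, approved, sample_weight=w)

print(f"approval-rate gap, unweighted: {approval_gap(plain):.3f}")
print(f"approval-rate gap, reweighed:  {approval_gap(fair):.3f}")
```

As the researchers note, that part is solvable “at least in theory”; the deeper question of which decisions we hand over at all is not.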

The ethical ramifications are horrific. Just look at how Russians routinely use AI to spread disinformation, automatically sensing individuals’ biases and vulnerabilities by tracking their online activity… with no human supervision or intervention. The use and misuse of AI is a deep concern among privacy advocates and ethicists, who believe that these forces must be contained and that enforceable global standards must be developed to deal with the obvious threats AI poses. But the tech companies, seeking the largest possible global markets, are fighting such containment tooth and nail.

The big brouhaha in Silicon Valley concerns the December firing of Google’s AI ethicist, Timnit Gebru. Katherine Schwab, writing for the February 26th FastCompany.com, fills in the details:
“Gebru had been fighting with the company over a research paper that she’d coauthored, which explored the risks of the AI models that the search giant uses to power its core products—the models are involved in almost every English query on Google, for instance. The paper called out the potential biases (racial, gender, Western, and more) of these language models, as well as the outsize carbon emissions required to compute them. Google wanted the paper retracted, or any Google-affiliated authors’ names taken off; Gebru said she would do so if Google would engage in a conversation about the decision. Instead, her team was told that she had resigned. After the company abruptly announced Gebru’s departure, Google AI chief Jeff Dean insinuated that her work was not up to snuff—despite Gebru’s credentials and history of groundbreaking research.

“The backlash was immediate. Thousands of Googlers and outside researchers leaped to her defense and charged Google with attempting to marginalize its critics, particularly those from underrepresented backgrounds. A champion of diversity and equity in the AI field, Gebru is a Black woman and was one of the few in Google’s research organization… ‘It wasn’t enough that they created a hostile work environment for people like me [and are building] products that are explicitly harmful to people in our community. It’s not enough that they don’t listen when you say something,’ Gebru says. ‘Then they try to silence your scientific voice.’…

“At stake is the equitable development of a technology that already underpins many of our most important automated systems. From credit scoring and criminal sentencing to healthcare access and even whether you get a job interview or not, AI algorithms are making life-altering decisions with no oversight or transparency. The harms these models cause when deployed in the world are increasingly apparent: discriminatory hiring systems, racial profiling platforms targeting minority ethnic groups, racist predictive-policing dashboards. At least three Black men have been falsely convicted of a crime based on biased facial recognition technology.

“For AI to work in the best interest of all members of society, the power dynamics across the industry must change. The people most likely to be harmed by algorithms—those in marginalized communities—need a say in AI’s development. ‘If the right people are not at the table, it’s not going to work,’ Gebru says. ‘And in order for the right people to be at the table, they have to have power.’” 

Today, most of our cruise missiles fly with preprogrammed targets. But if those targets cannot be found within the strike radius, the missile has a host of alternative targets that it can easily and autonomously substitute. Imagine launching a barrage of smart weapons guided by nothing more than a prioritized target list. While this falls short of the man-versus-machine wars popularized in films like the Terminator series, machines that can make decisions and teach themselves, building their knowledge and program bases without human intervention, are ethical nightmares any way you look at them. Their masters’ shortcomings could easily determine the fate of the world.
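Strip away the hardware and the fallback behavior just described reduces to a machine choosing from a ranked list with no human in the loop. A deliberately abstract sketch; every name and structure here is invented for illustration, not drawn from any real weapon system:

```python
# Autonomous fallback target selection, reduced to its unsettling essence.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Target:
    name: str
    priority: int        # lower number = preferred
    distance_km: float
    located: bool        # did the seeker actually find it?

def select_target(primary: Target, alternates: List[Target],
                  strike_radius_km: float) -> Optional[Target]:
    """Return the primary if found in range; otherwise the best-ranked
    alternate in range. No step here consults a human being."""
    if primary.located and primary.distance_km <= strike_radius_km:
        return primary
    in_range = [t for t in alternates
                if t.located and t.distance_km <= strike_radius_km]
    return min(in_range, key=lambda t: t.priority, default=None)
```

A few lines of logic, and every one of them executes after launch, beyond recall.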

I’m Peter Dekom, and this is one of those mega-issues that gets buried by a society scrambling to deal with a pandemic amidst an acceleration of horribles from climate change.

