Monday, September 4, 2017

The Letter, the Threat and the Ethics

In January of 2015, one hundred and fifty major scientists, academics and entrepreneurs – from top universities around the world and including such notables as Elon Musk and Stephen Hawking – signed a four-paragraph open letter effectively telling the world that we are not prepared – for the good, the bad and the potentially very ugly – for the impact of artificial intelligence. While the obvious possible benefits to society were touted – “the eradication of disease and poverty are not unfathomable” – the letter also warned of the dangers of autonomous weapons no longer under the control of human beings.

The authors noted that “we could one day lose control of AI systems via the rise of superintelligences that do not act in accordance with human wishes – and that such powerful systems would threaten humanity. Are such dystopic outcomes possible? If so, how might these situations arise? ...What kind of investments in research should be made to better understand and to address the possibility of the rise of a dangerous superintelligence or the occurrence of an ‘intelligence explosion’?”

A Labor Day tweet from Elon Musk amplified his earlier statements: “China, Russia, soon all countries w strong computer science… Competition for AI superiority at national level most likely cause of [World War III in my opinion].” If the Kim/Donald rhetoric doesn’t get us there sooner.
At a lecture at Cambridge University in the fall of 2016, Professor Hawking added: “The rise of powerful AI will be either the best or the worst thing ever to happen to humanity. We do not know which… It will bring great disruption to our economy, and in the future AI could develop a will of its own that is in conflict with ours.” Indeed, the recent job displacement of under-educated Americans (or those with now-obsolete skills) has been less a function of global competition than of the transfer of too many jobs to ever “smarter” automated machines. Reshoring manufacturing to the United States has often benefited the wealthy owners of those sophisticated machines rather than the workers the machines are intended to replace. An accelerant of income inequality.
But even as he expounds upon the potential of AI to find cures for major diseases and the opportunities the field offers, Microsoft founder and mega-philanthropist Bill Gates echoes this concern about machines that can operate without human supervision… and about the failure of governments to prepare for the economic impact of the obvious job displacements and transitions AI will bring. “I am in the camp that is concerned about super intelligence… I agree with Elon Musk and some others on this and don't understand why some people are not concerned,” Gates has said. There are AI optimists, like Facebook CEO Mark Zuckerberg, who had this to say this past July: “I think that people who are naysayers and kind of try to drum up these doomsday scenarios — I just, I don't understand it… I think it's really negative and in some ways I actually think it is pretty irresponsible.”
What is artificial intelligence (AI) anyway? “In computer science, the field of AI research defines itself as the study of ‘intelligent agents’: any device that perceives its environment and takes actions that maximize its chance of success at some goal. Colloquially, the term ‘artificial intelligence’ is applied when a machine mimics ‘cognitive’ functions that humans associate with other human minds, such as ‘learning’ and ‘problem solving.’” Wikipedia.
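To make that textbook definition concrete, here is a minimal, purely illustrative sketch of an “intelligent agent” loop – perceive the environment, score candidate actions against a goal, act. The environment, actions and scoring function are hypothetical placeholders for the sake of example, not a description of any real AI system.

```python
# Minimal illustrative "intelligent agent" loop (hypothetical example):
# the agent perceives its environment and chooses the action that
# maximizes its estimated chance of success at some goal.

def perceive(environment):
    """Return the agent's current observation of its environment (placeholder)."""
    return environment["state"]

def estimate_success(state, action, goal):
    """Score how well 'action' moves 'state' toward 'goal' (placeholder heuristic)."""
    return -abs((state + action) - goal)

def choose_action(state, actions, goal):
    """Pick the action with the highest estimated chance of success."""
    return max(actions, key=lambda a: estimate_success(state, a, goal))

# Toy usage: a numeric "world" where the goal is to reach the value 10.
environment = {"state": 3}
goal = 10
for _ in range(5):
    state = perceive(environment)
    action = choose_action(state, actions=[-1, 0, 1, 2], goal=goal)
    environment["state"] = state + action   # act on the environment
    print(environment["state"])
```

Real systems replace that hand-written scoring heuristic with learned models – which is exactly where “learning” and “problem solving” enter the picture.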
Machines that can think on their own, learn, communicate with other AI machines without human direction, self-correct, self-direct and even self-maintain. They might even design and build other such smart machines. Academic and corporate institutions the world over – including the military side of too many countries – are investing tens of billions of dollars to explore, enhance and implement machines driven by artificial intelligence. Including weapons.
Will AI eventually be able to mimic the abilities of a human brain? Obviously yes. That computing power exists even now – hardly as small and efficient as that lump of fat between your ears – but we do not remotely have the software to create the missing functionality. Just remember that the 1970 Apollo 13 manned mission to the moon flew with less computing power than a modern smartphone. Give it time. AI might just write its own software and design that much smaller, super-efficient computer.
We currently have cruise missiles that can select and prioritize targets – embedded into their computer-based memories – without any specific human direction. Even government-designed cyber-viruses can be introduced into an enemy's guidance and military operational systems, separate from their creators and operate autonomously to destroy what they can. Like the Stuxnet malware that took down over a thousand uranium-enrichment centrifuges in 2010? These are still only primitive and nascent developments in a world that is anxious to build so much more. We are clearly in an arms race in autonomous, AI-driven weapon systems, and there are absolutely no international accords to contain such efforts, despite the very obvious and terrifying consequences. No one is even trying to negotiate standards.
So where should the basic ethics of AI begin? Oren Etzioni, the chief executive of the Allen Institute for Artificial Intelligence, presented three basics in his September 1st OpEd in the New York Times: “I propose three rules for artificial intelligence systems that are inspired by, yet develop further, the ‘three laws of robotics’ that the writer Isaac Asimov introduced in 1942: A robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given it by human beings, except when such orders would conflict with the previous law; and a robot must protect its own existence as long as such protection does not conflict with the previous two laws…
“First, an A.I. system must be subject to the full gamut of laws that apply to its human operator. This rule would cover private, corporate and government systems. We don’t want A.I. to engage in cyberbullying, stock manipulation or terrorist threats; we don’t want the F.B.I. to release A.I. systems that entrap people into committing crimes. We don’t want autonomous vehicles that drive through red lights, or worse, A.I. weapons that violate international treaties…
“My second rule is that an A.I. system must clearly disclose that it is not human. As we have seen in the case of bots — computer programs that can engage in increasingly sophisticated dialogue with real people — society needs assurances that A.I. systems are clearly labeled as such…
“My third rule is that an A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information. Because of their exceptional ability to automatically elicit, record and analyze information, A.I. systems are in a prime position to acquire confidential information. Think of all the conversations that Amazon Echo — a “smart speaker” present in an increasing number of homes — is privy to, or the information that your child may inadvertently divulge to a toy such as an A.I. Barbie. Even seemingly innocuous housecleaning robots create maps of your home. That is information you want to make sure you control.” Simply put, this new level of automation is not only disruptive – as such paradigm-shifting technology usually is – it is downright dangerous without strong and enforceable rules.
I’m Peter Dekom, and we have denigrated ethics and morality – often hiding behind false echoes of such values to justify bigotry and greed – while ignoring how essential, even existential, ethics and morality are to preserving humanity itself.
