With all this talk of tactical, narrow-targeting nuclear weapons and of biological and chemical killers – all in the hands of a maniacal and delusional tyrant – there’s a lot of nervousness among opposing political leaders. Mankind has been deploying indiscriminate weapons on a massively destructive scale for over a century. Sure, the ancients poured burning tar on soldiers scaling castle walls and catapulted plague-infested rats and bodies into castles under siege, but WWI ushered in the era of chlorine gas and gas masks.
Today, we also have fears of robotic killers taking over the planet (like the Terminator above), machines with superhuman strength and total immunity to radiation and to biological and chemical weapons. Humans dominated by the mechanical servants they designed to do the tough stuff. The fantasy of all-powerful robots taking over, however, is the lesser fear; the same artificial intelligence that would enable such sophisticated machines has been with us for a while. It comes in the form of self-targeting missiles and warheads, lethal-fire artillery and Gatling guns run by AI-driven control systems… and here’s the big one: AI-driven mega-computers used to find and design chemical and biological agents for which there is no ready antidote.
If you think this is mere paranoia, think again. Science writer Margaret Wertheim, writing in the March 30th Los Angeles Times, reports: “The scenario presented in the journal Nature Machine Intelligence outlines a threat almost no one in the drug discovery field appears to have even contemplated. Certainly not the report’s authors, who couldn’t find it mentioned ‘in the literature,’ and who admit to being shocked by their findings. ‘We were naïve about the potential misuse of our trade,’ they write. ‘Even our research on Ebola and neurotoxins ... had not set our alarm bells ringing.’
“Their study ‘highlights how a nonhuman autonomous creator of a deadly chemical weapon is entirely feasible.’ They are not fearful about some distant dystopian future but what could happen right now. ‘This is not science fiction,’ they declare, expressing a degree of emotion rarely seen in a technical paper.
“Let’s back up for a moment and look at how this research came into being. The work was originally intended as a thought experiment: What is AI capable of if set a nefarious goal? The company behind the research, Collaborations Pharmaceuticals Inc., is a respected if small player in the burgeoning field of AI-based drug discovery… ‘We have spent decades using computers and AI to improve human health — not to degrade it,’ is how the four co-authors describe their work, which is supported by grants from the National Institutes of Health.
“The scientists were invited to contribute a paper to a biannual conference hosted by the Swiss Federal Institute for Nuclear, Biological and Chemical Protection on ‘how AI technologies for drug discovery could potentially be misused.’ It was a purely theoretical exercise.” Out of that theoretical exercise came a remarkable experiment with artificial intelligence: scientists at Collaborations Pharmaceuticals Inc. asked their AI model to search not for cures, but for hellish toxins.
Their results were published on March 7th in Nature Machine Intelligence: “In less than 6 hours after starting on our in-house server, our model generated 40,000 molecules that scored within our desired threshold. In the process, the AI designed not only VX [a virulent nerve agent, classified by the U.N. as a weapon of mass destruction and used to assassinate Kim Jong Un’s half-brother in Kuala Lumpur in 2017], but also many other known chemical warfare agents that we identified through visual confirmation with structures in public chemistry databases. Many new molecules were also designed that looked equally plausible. These new molecules were predicted to be more toxic… than publicly known chemical warfare agents. This was unexpected because the datasets we used for training the AI did not include these nerve agents. The virtual molecules even occupied a region of molecular property space that was entirely separate from the many thousands of molecules in [the known group of chemicals] which comprises mainly pesticides, environmental toxins and drugs … By inverting the use of our machine learning models, we had transformed our innocuous generative model from a helpful tool of medicine to a generator of likely deadly molecules.”
“The four scientists approached the problem with a simple logic: Rather than set their AI software the task of finding beneficial chemicals, they inverted the strategy and asked it to find destructive ones. They fed the program the same data they usually use from databases that catalog therapeutic and toxic effects of various substances.” (LA Times) Remember, Vladimir Putin has had access to the same kind of AI-driven mega-computers for quite a while. It also appears that he has no more of a moral compass than these machines do.
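To see how small a change that “inversion” is in software terms, consider the toy sketch below. It is not the researchers’ code and contains no chemistry at all: the functions predicted_toxicity, generate_candidates and select are made-up placeholders (the “model” here is just a random number), assumed only for illustration. The point is that the surrounding machinery stays the same; only the sign of the scoring objective flips.

```python
import random

# Hypothetical stand-in for a trained property-prediction model.
# In a real drug-discovery pipeline this would be a learned score;
# here it is just a random number so the sketch stays abstract.
def predicted_toxicity(candidate: str) -> float:
    return random.random()

def generate_candidates(n: int) -> list[str]:
    # Placeholder for a generative model proposing candidate structures.
    return [f"candidate_{i}" for i in range(n)]

def select(candidates: list[str], invert: bool = False, keep: int = 5) -> list[str]:
    # Normal therapeutic use: rank candidates so the LEAST toxic come first.
    # "Inverted" use: the very same score is rewarded instead of penalized.
    sign = -1.0 if invert else 1.0
    return sorted(candidates, key=lambda c: sign * predicted_toxicity(c))[:keep]

if __name__ == "__main__":
    pool = generate_candidates(100)
    print("therapeutic mode:", select(pool, invert=False))
    print("inverted mode:   ", select(pool, invert=True))
```

Nothing else in the loop needs to change, which is precisely why the paper’s authors found the result so unsettling.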
I’m Peter Dekom, and that Terminator fellow pictured above may not remotely be what mankind needs to fear.