Tuesday, December 15, 2015

Debunking Artificial Intelligence

We use the term “smart” for inanimate objects like mobile phones with cool bells and whistles. We make movies about killer robots challenging humanity… or beings that are part human and part robot (mind control and enhanced physical abilities are popular themes). We have tech leaders, from Tesla’s Elon Musk to Microsoft’s Bill Gates, telling us that artificial intelligence (AI) is one of mankind’s greatest threats, perhaps worse than ISIS and on par with global climate change. Are we evolving to become less necessary within our own society? Are machines more trustworthy than human decision-makers… and where do computers cross that real threat line?
On the economic front, it seems pretty clear that the range of jobs likely to be lost to AI will move beyond the robotic assembly line and massive record-keeping into the highest reaches of white-collar work: medical doctors (yup, including surgeons), lawyers, geriatric caregivers, banking-services professionals, business analysts, marketing managers and various forms of design and engineering, to name just a few.
Want to see an example from left field that you would never expect? “Computers have already been trained how to write up quick summaries of sporting events after studying the box score, putting the jobs of sports journalists at risk. Now comes the next step: live commentary of professional sporting events as they actually happen, thanks to a combination of machine learning and computer vision.
“In India, for example, computers are able to provide text-based commentary of cricket matches with 90 percent accuracy. In a recently published paper (“Fine-Grain Annotation of Cricket Videos”), a group of three Indian researchers – Rahul Anand Sharma and C.V. Jawahar, scientists at IIIT Hyderabad, together with Pramod Sankar K, of Xerox Research Center India — showed that weakly-supervised computers could reliably distinguish what’s happening during videos of cricket matches and then provide text-based commentary.
“To make that possible, computers analyzed hundreds of hours of cricket videos from the YouTube channel for the Indian Premier League, breaking them into categories based on text descriptions of them that were already available. The next step was breaking down these longer videos into smaller scenes in order to classify each video shot. Then a computer algorithm had to find the right commentary that matched up with what was being shown in each video shot. For that, the researchers used commentary for about 300 matches that already existed in the Cricinfo database.
“As a result, computer algorithms were able to accurately label a batsman’s cricketing shot by using visual-recognition techniques for an action that sometimes lasted no more than 1.2 seconds.” Washington Post, December 15th. Add a charming voice. Literally from left field. Who’s next? MLB? NFL? NHL? NBA? College sports? Watch out! Commander Data, report!
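The pipeline the quoted passage describes (segment long match videos into shots, classify each shot visually, then retrieve matching commentary) can be sketched roughly in Python. Everything here, from the `Shot` class to the toy commentary table, is a hypothetical illustration of the data flow, not the researchers' actual system, which relies on trained visual-recognition models rather than canned labels.

```python
from dataclasses import dataclass

@dataclass
class Shot:
    start: float    # seconds into the match video
    end: float
    features: str   # stands in for extracted visual features

# Toy "commentary archive" keyed by shot category (cf. the Cricinfo database).
COMMENTARY_DB = {
    "cover_drive": "Elegant cover drive, raced away for four!",
    "bowled": "Cleaned him up! The stumps are shattered.",
}

def classify_shot(shot: Shot) -> str:
    """Stand-in for the trained visual classifier that labels
    a cricketing action from the shot's visual features."""
    return shot.features  # a real model would infer this from pixels

def annotate(shots: list[Shot]) -> list[tuple[float, str]]:
    """Attach the best-matching archived commentary to each shot."""
    timeline = []
    for shot in shots:
        label = classify_shot(shot)
        text = COMMENTARY_DB.get(label, "Play continues.")
        timeline.append((shot.start, text))
    return timeline

shots = [Shot(12.0, 13.2, "cover_drive"), Shot(45.5, 46.4, "bowled")]
for t, line in annotate(shots):
    print(f"{t:>6.1f}s  {line}")
```

The interesting engineering is hidden inside `classify_shot`: the paper's contribution is doing that labeling reliably with only weak supervision, on actions lasting barely a second.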
And if there aren’t remotely enough jobs but lots of people, does wealth shift entirely to those who own the machines to the exclusion of everyone else – a trend that appears to be happening already – or do political and economic systems change to adapt to that new reality? Is capitalism even sustainable where labor cannot work?
It’s not as if these economic disruptions will be issues only for future generations. It’s happening right now. In my November 18th blog, My Robot Can Out-Earn Your Robot, I cited this quote from FastCompany.com (November 13th): “Over the last few years, we've heard a lot about how artificial intelligence could put large numbers of people out of work. An often-cited study from Oxford University found that 47% of jobs in America are at ‘high risk of computerization’ in the next 20 years. And more recent research from Forrester predicts a net loss of 9.1 million jobs in the next decade.”
But there is so much emphasis on building bigger, faster, more “intuitive” computers, from multi-petaflop supercomputers (one petaflop = a thousand trillion floating-point operations per second) to an entirely new generation of so-called “quantum computers” that use the quantum states of subatomic particles to store information, packing ever more computing power into the tiniest spaces imaginable. We are not pulling back: today our entire electrical grid, the Internet, our telecommunications networks, our entire financial system, countless complex and necessary operating systems and our very military security depend on the proper functioning of these exceptionally complex machines. China has more, and faster, supercomputers than the United States.
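To give that petaflop figure some scale, a quick back-of-the-envelope calculation (the workload size below is an arbitrary illustration, not a benchmark):

```python
# One petaflop = 10**15 floating-point operations per second.
PETAFLOP = 10 ** 15

ops = 10 ** 18                    # a hypothetical quintillion-operation workload
machine_seconds = ops / PETAFLOP  # a 1-petaflop machine: about 17 minutes
human_years = ops / (365.25 * 24 * 3600)  # one operation per second, nonstop
print(f"machine: {machine_seconds:.0f} s  |  human: {human_years:.2e} years")
```

A task the machine finishes before lunch would take a person tens of billions of years, which is the whole point of the paragraph above: at this scale, humans are not in the loop.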
But what exactly is “artificial intelligence”? “Real AI isn't about building a know-it-all computer, but rather one that's a good learner, able to sort overwhelming amounts of data, and diligently catalog recurring patterns.” FastCompany.com, December 15th. Simply put, human beings cannot “learn” fast enough when massive data is the source of the lessons. But when a computer “learns,” do its rapidly evolving, increasingly sophisticated decisions become a clearly better replacement for human choices, which simply cannot be made quickly enough to keep pace as the computer learns and moves on to the next level of issues?
Effectively, we have placed computational machines in direct contact over the Web (or hardwire, Bluetooth, WiFi, etc.) with other computational or electronic-command-responsive machines to effect pre-programmed (with self-learning) desired abstract goals without intervening human input. We call that kind of machine-to-machine interface the Internet of Things (IoT), and it is not just defining business and governmental operations but even basic appliances in the home.
The mystique of proprietary computational capacity, as government and university researchers and the top tech companies vie to create the next greatest and best “thinking machines,” floats well above the understanding of most of us. And these are battles royal, believe me.
Elon Musk, who has warned us about AI, has also taken steps to create a new open-source platform where the world can share and build this interconnected future in clear public view. “The new OpenAI organization [is] backed by … Musk, Peter Thiel, and other tech luminaries. AI is neither a synonym for killer robots nor a technology of the future, but one that is already finding new signals in the vast noise of collected data, ranging from weather reports to social media chatter to temperature sensor readings. Today IBM has opened up new access to its AI system, called Watson, with a set of application programming interfaces (APIs) that allow other companies and organizations to feed their data into IBM's big brain for analysis.” FastCompany.com, December 15th. In short, through these doors (the APIs) into Watson, IBM has invited developers to join an open-source effort to grow, understand and perhaps contain AI.
But the scope of AI is going to reach deep into every facet of our lives, and these social leaps and bounds have to be explored from top to bottom. Computers are also moving from clear commands to understanding phrases and concepts with less clearly defined meanings. “IBM is going beyond industrial devices with Watson, though, opening it up to other ‘things’ such as videos, people's voices, or text from Twitter. Like an isolated sound reading from a turbine, the meaning of a phrase such as ‘pedal is soft’ (an example IBM gave me) isn't immediately clear to a computer. It's ‘unstructured data’ that requires sorting out to understand. But after reading enough tweets and other text, AI can figure out that that particular phrase means the brakes aren't working well.
“The opening of Watson's interface exposes to the world what IBM has already been doing within a few pilot programs. The company has been combining unstructured data with straight-up traditional measurements in a project with the Beijing Environmental Protection Bureau (EPB), to track and forecast air pollution conditions for the city. ‘Using not just very structured stuff but videos that people are taking, call-center transcripts . . . and blogs, all this unstructured data . . . we've been able to identify very accurately exactly where the pollutants are coming from [and] how they are moving,’ says Harriet Green, IBM's general manager for Watson IoT and Education. IBM claims that its efforts have led to a 20% reduction of one pollutant, ultra-fine particulate matter, although Beijing has a long way to go, given recent reports of its most dangerous air pollution ever.” FastCompany.com, December 15th.
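The “pedal is soft” example shows the shape of the task: free-form text in, a structured label out. Here is a toy Python sketch using hand-written keyword rules where Watson would use a model learned from masses of data; the categories and phrases are made up purely for illustration.

```python
# Toy mapping from free-text maintenance reports to structured issue labels.
# A learned system discovers these associations from data; keyword rules
# are used here only to show the input/output shape of the problem.
ISSUE_PATTERNS = {
    "brakes_degraded": ["pedal is soft", "spongy brake", "pedal sinks"],
    "engine_overheat": ["smoke from hood", "temperature light"],
}

def label_report(text: str) -> str:
    """Map an unstructured report to an issue category."""
    lowered = text.lower()
    for label, phrases in ISSUE_PATTERNS.items():
        if any(phrase in lowered for phrase in phrases):
            return label
    return "unclassified"

print(label_report("Driver says the pedal is soft after long descents"))
# prints "brakes_degraded"
```

The hard part, of course, is everything this sketch skips: a real system has to learn that “pedal is soft” and “brakes feel mushy” mean the same thing without anyone writing the rule down.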
When you think about the impact of this technology, reaching across every spectrum of our lives and able to change our economic and political systems rather completely, it is fascinating how little dialogue over these realities is actually taking place in our national conversation. As a practical matter, these technological realities will effect more change in the way we live than almost anything else out there. ISIS is bad, but you aren’t likely to lose your job over its horrific machinations or face a completely new political-economic order because of its threats.
Perhaps because of the complexities involved, perhaps because the resulting social shifts are so massive that we are simply stymied about what to do, or perhaps because of our distraction and preoccupation with more immediate threats, these issues are all but missing from the political debates aimed at electing the next President of the United States of America. And as our grappling with major issues in our immediate past illustrates, we seem to be a reactive political nation, unable to plan for a tumultuous future… until it swallows us up in devastating and uncontrolled change.
I’m Peter Dekom, and that “habit of reacting versus anticipating” is the underlying issue that may do us in entirely.
