Tuesday, March 25, 2014

An Imperfect Crystal Ball



Big Data… the macrotrend of tracking consumer behavior and feeding prediction models for political and marketing campaigns, to name a few uses… has redefined privacy, trending, media and marketing. I wrote a book about it (see the upper right-hand corner of this blog). But as scientists comb through human digital expressions, along with a large dollop of other generally-available public information, a new possible use for Big Data is bubbling to the surface: predicting where insurrection and genocide are likely to occur.
“Australian researchers say they have developed a mathematical model to predict genocide. A Swiss sociologist has sifted through a century of news articles to predict when war will break out — both between and within countries. A Duke University lab builds software that it says can be used to forecast insurgencies. A team assembled by the Holocaust Museum is mining hate speech on Twitter as a way to anticipate outbreaks of political violence: It will be rolled out next year for the elections in Nigeria, which have frequently been marred by violence.
“What makes these efforts so striking is that they rely on computing techniques — and sometimes huge amounts of computing power — to mash up all kinds of data, ranging from a country’s defense budget and infant mortality rate to the kinds of words used in news articles and Twitter posts.” Somini Sengupta writing for New York Times, Sunday Review, March 22nd.
Anyone who believes that this produces digitally certain models with unambiguous, totally accurate forecasts will be disappointed. These digital crystal balls remain flawed, but what they can tell us about ourselves and our world at the very least signals when trouble is brewing, when there are intervention opportunities to prevent “bad and worse.” And even where the predictions are more than clear, the ability to intervene often slams into the wall of sovereign integrity, of malevolent dictators with powerful police and military forces at their disposal, or of well-funded zealots deeply dedicated to misguided murder and mayhem in the name of God who are not remotely ready to de-escalate.
How do these systems work? “Predicting mass violence is yet another frontier. Among these efforts is a 2012 project funded partly by the Australian government in which a team from the University of Sydney looked at more than a dozen variables that could point to the likelihood of mass atrocities: Had there been political assassinations or coups; were there conflicts in neighboring states; is there a high rate of infant mortality? (Infant mortality turns out to be a powerful predictor of unrest, a signal that state institutions aren’t working.)
“Using machine-learning tools to draw inferences about the effects of each piece of information they analyzed, the researchers compiled a list of 15 countries facing the highest risk of genocide between 2011 and 2015. Central African Republic, which had been on no one’s radar at the time, came out at the top, followed by the Democratic Republic of Congo and Chad. Also on the list were some obvious contenders with continuing strife: Somalia, Afghanistan and Syria. They didn’t get everything right: Sri Lanka was on the list, but has witnessed no outbreaks of mass violence since 2011 — not yet, anyway…
“Kalev H. Leetaru, a computer scientist based at George Washington University, has constructed a huge trove called the Global Database of Events, Language and Tone. It scours the Internet to catalog news coverage of major events from 1979 to the present. It can be used to study what might happen in the future — or to produce a snapshot of what’s happening now, as in the case of a map that Mr. Leetaru produced to show outbreaks of violence in Nigeria.
“Whether any of this fortunetelling will actually help stave off violence is another matter. That ultimately has less to do with mathematical models than with the calculus of power.
“A handful of projects are trying to deploy predictive tools in real time. Michael Best, a professor at the Georgia Institute of Technology, helped develop a tool for Kenyan elections last year that mined reports of political violence on Twitter and Facebook. Nigeria has agreed to let the researchers sit in the election security headquarters when its voters go to the polls next year: They will mine social media for hate speech, using automated tools, and combine the results with the findings of election monitors on the ground.
“Social media speech cannot pinpoint violent outbreaks, Professor Best cautions, nor would it be ethical to censor what Nigerians post online. But words are like smoke signals, he argues, and he hopes they can help the authorities get to the right place at the right time.” NY Times.
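To make the quoted approach a bit more concrete, here is a minimal, hypothetical sketch in Python of the kind of pipeline the Sydney team’s work suggests: fit a simple classifier on historical country-level indicators (coups or assassinations, conflict next door, infant mortality), then rank current countries by estimated risk. Every country name, feature value and label below is an invented placeholder for illustration; this is not the researchers’ actual model or data, and it assumes the scikit-learn library is available.

# A minimal, illustrative sketch -- not the Sydney team's actual model.
# All names, feature values and labels below are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical training data, one row per country-year.
# Columns: [recent coup/assassination (0/1),
#           conflict in a neighboring state (0/1),
#           infant mortality per 1,000 live births]
X_train = np.array([
    [1, 1, 95.0],
    [0, 1, 60.0],
    [0, 0, 12.0],
    [1, 0, 80.0],
    [0, 0, 8.0],
])
# Label: did a mass-atrocity episode follow within the forecast window?
y_train = np.array([1, 1, 0, 1, 0])

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Hypothetical current indicators for the countries we want to rank.
current = {
    "Country A": [1, 1, 102.0],
    "Country B": [0, 1, 45.0],
    "Country C": [0, 0, 10.0],
}
scores = {name: model.predict_proba(np.array([feats]))[0, 1]
          for name, feats in current.items()}

# Print countries from highest to lowest estimated risk.
for name, prob in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: estimated risk {prob:.2f}")

Real systems obviously use far more indicators, far more history and careful validation; the point is only that the “crystal ball” is a statistical ranking with margins of error, not a prophecy.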
Even as the predictions of violence grow more certain, what do we do with the information? Is it strong enough to engage international concern? Is the data a sufficient basis to justify tangible action, even military intervention? Or are we merely bound to know where terrible things are about to happen so that we can send our journalists in to watch the death and destruction unfold? We truly need to learn how to use our new predictive models to create strategies for defusing violence that can win broader global acceptance.
I’m Peter Dekom, and solutions do indeed start with education and information.
