Saturday, January 16, 2016

Human Brain vs. Artificial Intelligence (AI)

The notions of cognitive capacity that delineate what artificial intelligence really means have changed substantially since the term was coined in 1955 by Caltech- and Princeton-educated computer scientist and mathematician John McCarthy. McCarthy died in 2011 after a distinguished career as a Stanford professor, and we have all been grappling with the impact of AI ever since. As preeminent tech leaders like Elon Musk and Bill Gates warn of the threat that AI poses, and as more and more skilled jobs move into the world of automated, machine-based efficiencies, how we adapt to this technological tsunami is mission critical to what our world will look like in the future. Hopefully not like that Star Trek nasty above, the “Borg.”
First, it is interesting to note that current supercomputers use multiple processors working in parallel to achieve the absurdly fast rates of calculation they generate. The processors can be packed into a single tightly coupled system (a cluster) or spread across separate computers linked together (a grid); 100,000 or more processors working together is not unusual. Their speed is measured in the number of floating point operations they can perform in a second (FLOPS). The fastest computer in the world is in China and currently processes at almost 34 petaFLOPS; a petaFLOPS is one quadrillion (a thousand trillion) floating point operations per second. The U.S. is now in the process of building a new computer system capable of reaching 300 petaFLOPS. Whew!
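For readers who want to see the arithmetic, here is a minimal sketch in Python of what those units mean in practice; the machine figures are the ones quoted above, and the sample workload is purely illustrative:

    # A petaFLOPS is one quadrillion (10**15) floating point operations per second.
    PETA = 10 ** 15

    tianhe2 = 33.86 * PETA   # China's current leader, ~34 petaFLOPS
    planned_us = 300 * PETA  # the planned U.S. system, ~300 petaFLOPS

    workload = 10 ** 18      # a quintillion operations, an arbitrary example task
    print(workload / tianhe2)     # ~29.5 seconds on today's fastest machine
    print(workload / planned_us)  # ~3.3 seconds on the planned U.S. system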
Simply put, computers are just about able to compute at a level equivalent to (or better than) the human brain, but while the brain is small and consumes about 20 watts, these computing systems are comparatively enormous and dissipate massive amounts of heat, consuming more than a million times more energy to operate than that fat between our ears. Still, you’d think that these new machines could outthink us because of their rigid accuracy, and in terms of numerical computation, you’d be right. But the hardware is only part of the computational/analytical structure behind AI. There is the data, which for humans means learning and life experience amid lots of random information, and of course the software that actually tells the mind or the computer what to do with all that information.
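That “million times more energy” claim is easy to sanity-check with a rough sketch, assuming a power draw of about 18 megawatts for a top-end supercomputer (a commonly reported figure for machines in this class, not one stated above):

    BRAIN_WATTS = 20                  # the brain's ~20 W, as noted above
    SUPERCOMPUTER_WATTS = 18 * 10**6  # assumed ~18 MW for a top-end machine

    print(SUPERCOMPUTER_WATTS / BRAIN_WATTS)  # ~900,000 -- on the order of a million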
Our brains are trained to juggle that massive, seemingly unconnected maze of information by means of a set of thinking and reactive processes (our software, if you will) developed, or rather evolved, over millions of years. The software applied to supercomputers, even where self-learning is programmed, is much less flexible (and much less messy!), which places practical limitations on what can and cannot be done. What if you could enlist groups of supercomputers as well as the top human thinkers to solve complex problems by working together? A new crowd-sourcing model with the best of the best.
Enter the Fairfax, Virginia-based Human Computation Institute (HCI), a worldwide network of human computation researchers who are "dedicated to the betterment of society through novel methods leveraging the complementary strengths of networked humans and machines." It is headed by Cornell University’s Pietro Michelucci, who holds a doctorate in Cognitive Science and Mathematical Psychology. And what if that accumulation of academic and mechanical excellence were to focus, issue by issue, on solving mankind’s most difficult problems?
“Michelucci, with Janis L. Dickinson, Cornell professor of Natural Resources and director of Citizen Science at the Cornell Lab of Ornithology, recently co-authored an article published in this month’s Science magazine headlined ‘The power of crowds: Combining humans and machines can help tackle increasingly hard problems.’ The article is a perspective on how human computation, crowdsourcing, and so-called ‘Citizen Science’ intersect with computing power to create new opportunities for problem-solving at unprecedented scales.
“One of the problems Michelucci’s HCI has started to address is Alzheimer’s disease, through an initiative called WeCureALZ. Dickinson points to an interesting human-computation-oriented initiative at Cornell called YardMap that has set out, she says, ‘to learn a tremendous amount about how to improve the way we manage living and working landscapes, not only for species conservation, but for human health as well.’…
“WeCureALZ was recently funded by the BrightFocus Foundation, a nonprofit that supports research to end Alzheimer's disease, macular degeneration, and glaucoma. Michelucci explains that this initiative got its start when he was introduced to research currently being conducted by Harvard-trained physicists Chris B. Schaffer and Nozomi Nishimura, both now professors in Cornell’s Department of Biomedical Engineering, where they run the Schaffer-Nishimura Lab, in which a team of biomedical students and scholars use ‘advanced optical techniques to observe and manipulate in vivo [living organisms] biological systems, with the goal of developing a microscopic-scale understanding of normal and disease-state physiological processes in the brain.’
“In short, the Schaffer-Nishimura Lab, among many other projects, studies plaques, known as beta-amyloid plaques, that have been shown to destroy or disrupt cell-to-cell communication in the brain, and there is good reason to believe that such plaques are a major cause of Alzheimer’s.
“These plaques are analyzed by inserting Alzheimer’s-causing genes into mice that are then studied in the laboratory via imaging techniques in which lab technicians take real-time video of blood flowing through the microvessels of the mice’s brains. The data from these videos are used to build 3-D images of where the blood is flowing and how the vessels are attached to each other. Through this process ‘they have been able to make an important but not-yet published discovery,’ Michelucci explains. ‘Two percent of the brain’s blood vessels in the Alzheimer’s mice were actually stalled, meaning that no blood was flowing through them, compared to only half a percent in the normal mice.’ That 2% of stalled vessels causes an overall 30% reduction in brain blood flow.
“How do human computation and crowdsourcing come into play in this scenario? ‘It took almost two years of painstaking data analysis by laboratory technicians to get the result that led to this discovery,’ Michelucci says. ‘The remaining data analysis that needs to be done to get to a treatment target can literally take decades.’ By offloading this data analysis to a crowd of thousands of online citizen scientists and the general public, ‘we expect to be able to do it in just a few years.’” FastCompany.com, January 9th.
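To make the crowdsourcing step concrete, here is a minimal, hypothetical sketch of the kind of aggregation such a project might use: several volunteers label each short vessel video as “stalled” or “flowing,” a simple majority vote decides each vessel, and the stalled fraction falls out of the consensus. The vessel names, votes, and voting rule are illustrative assumptions, not WeCureALZ’s actual pipeline:

    from collections import Counter

    # Hypothetical crowd labels: vessel id -> votes from citizen scientists.
    votes = {
        "vessel_001": ["flowing", "flowing", "stalled"],
        "vessel_002": ["stalled", "stalled", "stalled"],
        "vessel_003": ["flowing", "flowing", "flowing"],
    }

    def majority_label(labels):
        # Return the most common label among the crowd's votes.
        return Counter(labels).most_common(1)[0][0]

    consensus = {vessel: majority_label(v) for vessel, v in votes.items()}
    stalled = sum(1 for label in consensus.values() if label == "stalled")
    print(f"Stalled fraction: {stalled / len(consensus):.1%}")
    # The lab's finding: ~2% stalled vessels in Alzheimer's mice vs. ~0.5% in normal mice.

Each such judgment takes a volunteer only seconds, which is why a crowd of thousands can compress years of technician work into a far shorter span.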
Whatever the results, the crowd-sourced solutions of the future will be very different from those of today. We will also need new funding models to pay for research, like this, that does not generate proprietary patents. Interesting challenges with potentially massive payoffs for us all.
I’m Peter Dekom, and how we solve problems in the future may well produce a very different socio-political world just to manage those solutions.
