April 6, 2016

Brains versus AI

Taneli Tikka

Head of Innovation Incubation, Data Driven Businesses, Tieto

If we aim for AI that is smarter than humans, it is implied that the AI will have to surpass the human brain in at least one capacity or another. The brain is also the best benchmark we have available, one that computer scientists have modelled, at least in principle, using modern recurrent neural networks and other techniques.

There are, however, all sorts of problems with creating Super Artificial Intelligence. I hope we press on despite the obstacles, because the benefits are massive: Super AI could solve all the wicked problems known to man.

For example, simulating the real world (the objective reality around us) down to exactly correct precision cannot really be done with computation. The math becomes impossible.

The Planck length in physics is 1.616199(97) × 10^−35 metres, and a simulation would have to be precise to this scale to achieve the “100% correct” goal; otherwise, even basic physics and geometry will not work with exact precision. This leads to numbers so vast that we don’t yet know, even in theory, how the computation could be done.
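
To get a sense of the scale, here is a quick back-of-the-envelope sketch in Python. It is pure arithmetic on the figure above; the Planck length is the only physical input:

```python
# Back-of-the-envelope: how many Planck-length cells fit in a single
# cubic metre of space?
PLANCK_LENGTH_M = 1.616199e-35  # metres

cells_per_metre = 1 / PLANCK_LENGTH_M         # ~6.2e34 cells along one axis
cells_per_cubic_metre = cells_per_metre ** 3  # ~2.4e104 cells

print(f"{cells_per_metre:.2e} cells per metre")
print(f"{cells_per_cubic_metre:.2e} cells per cubic metre")
# The observable universe is estimated to contain roughly 1e80 atoms,
# so tracking even one cubic metre at Planck resolution already dwarfs
# that number.
```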

At the moment, according to experts in theoretical physics, even the computation of simple systems (say, point-to-point Newtonian gravity between dust particles in space) is hopelessly beyond our modern computational reach. There is no general closed-form solution to the gravitational N-body problem, so we can only approximate it numerically, step by step. If you want exact precision, not just approximation, the task is immense.
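
For illustration, here is a minimal sketch of how such a system is normally computed: pairwise Newtonian forces plus a small time step. The particle values are made up, and the point is that this is inherently an approximation, never an exact answer:

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def step(pos, vel, mass, dt):
    """Advance an N-body system by one Euler step: O(N^2) pairwise forces."""
    acc = np.zeros_like(pos)
    n = len(mass)
    for i in range(n):
        for j in range(n):
            if i != j:
                r = pos[j] - pos[i]  # vector from particle i to j
                acc[i] += G * mass[j] * r / np.linalg.norm(r) ** 3
    # Every value here is rounded to ~16 significant digits; in a chaotic
    # system those tiny errors compound over time, so this approximates
    # reality but can never reproduce it exactly.
    return pos + vel * dt, vel + acc * dt

# Three dust particles (positions in metres, masses in kilograms)
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
vel = np.zeros_like(pos)
mass = np.array([1e-6, 1e-6, 1e-6])
pos, vel = step(pos, vel, mass, dt=1.0)
print(pos)
```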

Simulating the world with a quantum level of precision involves just too many interactions at a level that’s too precise for computation—it plausibly cannot be done with anything we know of yet. And life isn’t computation; it’s real physics. Perhaps due to this challenge, the best humanity can ever accomplish is to build software that’s “accurate enough”, without ever achieving that fabled 100% correctness level of simulation, modelling and prediction of our reality. After all, to model reality precisely (not just approximately) we would probably have to create an entire second reality to use as the model. Not practical.

Another problem with AI is that consciousness, values, attitudes and intelligence in general are things you learn, absorb, observe and adapt as you grow. The process of paced growth is an integral part of learning itself. How would you grow an AI like that? We simply don’t know yet, and in practice it is nearly impossible to test and simulate: AI-related hypotheses are next to impossible to verify, especially in isolation or via factor-by-factor scientific experiments.

Computing power also isn’t anywhere close to the levels that would be required for true AI. Borrowing inspiration from the human brain: the neurons in our heads dynamically regulate their genes, like software quickly rewriting itself on the fly. Neurons also learn and reconfigure themselves. Synapses between neurons aren’t just cables but part of the brain’s natural “computational system”. And the brain is vastly more complex still. There are many hundreds of different types of neurons, for example. There are also dendrites, tree-like structures with synapses on them; your average adult has about 300 billion of those alone.

The biological process the brain uses to “compute” is superbly complex: inductive, relativistic, self-writing and self-configuring on the fly. All of this makes the human brain totally badass when it comes to computational power and the number of operations per second it can perform.

Current conservative and careful estimates place the brain’s calculation speed at at least 1.075 × 10^21 FLOPS (Zetta-scale computing), and plausibly several orders of magnitude higher (Yotta-scale or more). The complexity involved is truly dauntingly immense. The fastest computer on this planet, Tianhe-2, is pulling off 3.386 × 10^16 FLOPS (33.86 petaflops; Peta-scale computing), which is so slow it’s not even hitting 0.0032% of the brain’s power under that conservative estimate. The world’s fastest supercomputer also consumes $65k to $100k worth of electricity in a single day, adding up to roughly $25M–$36M a year. Not quite on the same level as humans getting their energy from normal food.
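
The comparison itself is simple arithmetic on the figures quoted above:

```python
brain_flops = 1.075e21          # conservative Zetta-scale estimate above
supercomputer_flops = 3.386e16  # Tianhe-2, 33.86 petaflops (2016)

ratio = supercomputer_flops / brain_flops
print(f"Fastest supercomputer: {ratio:.4%} of the conservative brain estimate")
# -> about 0.0031%; the brain estimate is roughly 30,000 times larger
```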

A 2009 article in H+ Magazine estimated the brain’s computational power to be in the range of 3.68 × 10^15 FLOPS, a figure that today’s fastest supercomputer already exceeds roughly ninefold. However, this 2009 estimate has since been shown to be fundamentally flawed, despite the fact that, at the time of writing this post, Wikipedia still uses it as a baseline reference for the brain’s computing power.

For example, dendritic information processing in the human brain only became possible to verify with techniques first available in 2013 (see Smith et al., 2013, in the references below). This new information significantly upgrades the current best estimate of the human brain’s computational power. Also, as previously stated, we have not fully researched the brain yet; there might be additional layers of complexity in it, and surprises in computational power, that we don’t yet understand.

The estimate that the brain operates at the Zetta-scale of computing is conservative because it leaves out a long list of the brain’s known features that further increase its computational complexity; besides the known ones, we still have the unknown ones to discover in the future. As listed by Tim Dettmers, the estimate excludes, among other things: multi-neurotransmitter vesicles (which can be thought of as multiple output channels or filters, just as an image has multiple colors); glial cells; non-axodendritic synapses (axon–axon and axon–soma connections); electrical synapses; neurotransmitter-induced protein activation and signaling; neurotransmitter-induced gene regulation; backpropagation, i.e. signals that travel from the soma to the dendrites, with the action potential reflected within the axon and travelling backwards (these two alone may almost double the complexity); axon terminal information processing; voltage-induced gene regulation (from dendritic spikes and backpropagating signals); voltage-induced protein activation and signaling; and the geometrical shape of dendritic trees together with dendritic spine information processing.

Glial cells deserve a special mention. It has been estimated that, in addition to neurons, the brain contains an equal mass of glial cells, such as astrocytes, microglia and oligodendrocytes; because of their small size, they number up to 10 times more than neurons. The latest research suggests that astrocytes in particular play an active role in the brain’s neuroplasticity and communication, further increasing the estimate of the brain’s computing power. (Besides having an extremely abnormal brain, about one in a billion, Einstein also had an abnormally high number of glial cells.)

All this complexity and these additional capabilities for computation probably push a normal brain to Yotta-scale computing, or even more. The brain is a miraculous product of evolution with countless iterations, resulting in complexity, power and an energy-efficiency ratio far beyond those of any modern computer.

For the time being, the human brain isn’t even in a contest against the fastest computers on the planet. Puny computers have no chance. How quickly is computing power growing, then? Moore’s law is probably too optimistic considering the severe obstacles computing needs to overcome in the next 100 years: the limits of physics, power, cost, memory scaling, I/O capacity, the speed of light, reliability and architecture.

Once you factor in all the current estimates, it is plausible that it will take until somewhere between 2075 and 2100 before we have a computer on par with the brain’s computing capabilities. And that is with the conservative estimate of how complex and powerful the brain really is. All of this makes me a realist about what it really takes to build a true AI for our benefit: it might not happen in our lifetime after all. With high optimism, it just might be possible by 2060!
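
To make that timeline concrete, here is a rough extrapolation sketch. The doubling periods (two and three years) are assumptions for illustration, not predictions, and the targets are the brain estimates discussed above:

```python
import math

START_YEAR = 2016
START_FLOPS = 3.386e16  # fastest supercomputer, as quoted above

def year_of_parity(target_flops, doubling_years):
    """Year when computing reaches target_flops, given a doubling period."""
    doublings = math.log2(target_flops / START_FLOPS)
    return START_YEAR + doublings * doubling_years

for label, target in [("Zetta-scale brain estimate (1.075e21)", 1.075e21),
                      ("Yotta-scale brain estimate (1e24)", 1e24)]:
    for period in (2.0, 3.0):  # assumed doubling periods, in years
        print(f"{label}, doubling every {period:.0f} years: "
              f"~{year_of_parity(target, period):.0f}")
# Zetta-scale parity lands around 2046-2061 and Yotta-scale around
# 2066-2091, broadly consistent with the 2060-2100 window above.
```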

Meanwhile, we keep on improving the world and solving real problems with technology: developing Cognitive Automation and the data-driven services around it, conserving precious resources, and making life better for us all. It’s about making the world a better place by giving purpose to the world’s information. Narrower intelligence, short of true AI, can still go a long way. And it will. I myself am eagerly waiting for self-driving cars and all the great things quite close on the horizon!

Terms used

Artificial intelligence (AI) is the intelligence exhibited by machines or software. It is also the name of the academic field of study that researches and develops computers and computer software that are capable of intelligent behaviour. Major AI researchers and textbooks define this field as "the study and design of intelligent agents", in which an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1955, defines it as "the science and engineering of making intelligent machines".
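
As a toy illustration of that “intelligent agent” definition, the sketch below perceives a single numeric input and picks whichever action scores highest under its utility estimates. The environment, actions and utilities are made up; a real agent would learn them rather than hard-code them:

```python
# A minimal "intelligent agent" in the textbook sense: perceive the
# environment, then choose the action that maximizes expected success.
# (Hypothetical toy example; actions and utilities are invented.)

def agent(percept: float) -> str:
    """Pick the action with the highest estimated utility for this percept."""
    actions = {
        "move_left": -percept,  # utility estimates: stand-ins for a
        "move_right": percept,  # learned or hand-crafted world model
        "stay": 0.0,
    }
    return max(actions, key=actions.get)

print(agent(0.7))   # -> "move_right"
print(agent(-0.3))  # -> "move_left"
```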

Machine learning is a subfield of computer science that evolved from the study of pattern recognition and computational learning theory in artificial intelligence. In 1959, Arthur Samuel defined machine learning as a "Field of study that gives computers the ability to learn without being explicitly programmed". Machine learning explores the study and construction of algorithms that can learn from and make predictions using data. Such algorithms operate by building a model from example inputs in order to make data-driven predictions or decisions, rather than following strictly static program instructions.
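
As a minimal illustration of “building a model from example inputs”, the sketch below fits a straight line to made-up training data and then makes a data-driven prediction, instead of following a hand-written rule:

```python
import numpy as np

# Example inputs: hours studied -> exam score (made-up training data)
hours = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
score = np.array([52.0, 55.0, 61.0, 64.0, 70.0])

# "Learn" a model: least-squares fit of score = a * hours + b
a, b = np.polyfit(hours, score, deg=1)

# Make a data-driven prediction for an input the model never saw
print(f"Predicted score for 6 hours: {a * 6 + b:.1f}")
```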

Cognitive Automation is a specific field of application of advanced automation that aims to automate cognitively challenging tasks and entire processes conducted by humans. This is also described as “Cognitive Technology” in some cases. Cognitive Automation is used for tasks such as accounting, financial risk assessment, technical documentation and inspection, supply chain optimization, etc. This technology represents the convergence of robotic process automation, AI, machine learning, machine vision, natural language processing, cognitive computing, and advanced analytics.

This series has just barely scratched the surface of the exciting topic of AI. For more information on the topics discussed in this series of blog posts, please see the following resources.

Books:

Ray Kurzweil – The Singularity is Near

Nick Bostrom – Superintelligence: Paths, Dangers, Strategies

James Barrat – Our Final Invention

Articles and papers:

Stuart Armstrong and Kaj Sotala, MIRI – How We’re Predicting AI—or Failing To

Susan Schneider – Alien Minds

Stuart Russell and Peter Norvig – Artificial Intelligence: A Modern Approach

Theodore Modis – The Singularity Myth

Gary Marcus – Hyping Artificial Intelligence, Yet Again

Nils J. Nilsson – The Quest for Artificial Intelligence: A History of Ideas and Achievements

Steven Pinker – How the Mind Works

Vernor Vinge – The Coming Technological Singularity: How to Survive in the Post-Human Era

Nick Bostrom – Ethical Guidelines for A Superintelligence

Nick Bostrom – How Long Before Superintelligence?

Vincent C. Müller and Nick Bostrom – Future Progress in Artificial Intelligence: A Survey of Expert Opinion

Moshe Y. Vardi – Artificial Intelligence: Past and Future

Russ Roberts, EconTalk – Bostrom Interview and Bostrom Follow-Up

Jaron Lanier – One Half a Manifesto

Bill Joy – Why the Future Doesn’t Need Us

Kevin Kelly – Thinkism

Paul Allen – The Singularity Isn’t Near (and Kurzweil’s response)

Stephen Hawking – Transcending Complacency on Superintelligent Machines

Kurt Andersen – Enthusiasts and Skeptics Debate Artificial Intelligence

Terms of Ray Kurzweil and Mitch Kapor’s bet about the AI timeline

Ben Goertzel – Ten Years To The Singularity If We Really Really Try

Steven Pinker – Could a Computer Ever Be Conscious?

Carl Shulman – Omohundro’s “Basic AI Drives” and Catastrophic Risks

World Economic Forum – Global Risks 2015

John R. Searle – What Your Computer Can’t Know

Stuart Armstrong – Smarter Than Us: The Rise of Machine Intelligence

Ted Greenwald – X Prize Founder Peter Diamandis Has His Eyes on the Future

Kaj Sotala and Roman V. Yampolskiy – Responses to Catastrophic AGI Risk: A Survey

Arthur C. Clarke – Sir Arthur C. Clarke’s Predictions

Hubert L. Dreyfus – What Computers Still Can’t Do: A Critique of Artificial Reason

Jeremy Howard TED Talk – The wonderful and terrifying implications of computers that can learn

Academic neuroscience articles:

Ji, D., & Wilson, M. A. (2007). Coordinated memory replay in the visual cortex and hippocampus during sleep. Nature Neuroscience, 10(1), 100-107.

Liaw, J. S., & Berger, T. W. (1999). Dynamic synapse: Harnessing the computing power of synaptic dynamics. Neurocomputing, 26, 199-206.

Ramsden, S., Richardson, F. M., Josse, G., Thomas, M. S., Ellis, C., Shakeshaft, C., … & Price, C. J. (2011). Verbal and non-verbal intelligence changes in the teenage brain. Nature, 479(7371), 113-116.

Smith, S. L., Smith, I. T., Branco, T., & Häusser, M. (2013). Dendritic spikes enhance stimulus selectivity in cortical neurons in vivo. Nature, 503(7474), 115-120.

Stoodley, C. J., & Schmahmann, J. D. (2009). Functional topography in the human cerebellum: a meta-analysis of neuroimaging studies. NeuroImage, 44(2), 489-501.

Brunel, N., Hakim, V., & Richardson, M. J. (2014). Single neuron dynamics and computation. Current Opinion in Neurobiology, 25, 149-155.

Chadderton, P., Margrie, T. W., & Häusser, M. (2004). Integration of quanta in cerebellar granule cells during sensory processing. Nature, 428(6985), 856-860.

De Gennaro, L., & Ferrara, M. (2003). Sleep spindles: an overview. Sleep Medicine Reviews, 7(5), 423-440.

Academic high performance computing articles:

Dongarra, J., & Heroux, M. A. (2013). Toward a new metric for ranking high performance computing systems. Sandia Report, SAND2013-4744, 312.

PDF: HPCG Specification

Interview: Why there will be no exascale computing before 2020

Slides: Why there will be no exascale computing before 2020

Interview: Challenges of exascale computing

Fusion.net – Microsoft’s Tay Tweetbot turned racist by 4chan and 8chan: http://fusion.net/story/284617/8chan-microsoft-chatbot-tay-racist/

The Human Memory: Neurons & Synapses
