Electronic Imaging & Signal Processing

A new model reveals how to build brain-like computers

Recent developments in neurology, computer science, and robotics shed light on how the human cortex computes and represents knowledge.
4 September 2008, SPIE Newsroom. DOI: 10.1117/2.1200808.1230

The human brain has many capabilities that current computer systems attempt to emulate. For instance, the brain can perceive the world as a dynamic, colorful, 3D environment where objects, events, and patterns are segmented into entities that can be linked to situations that have meaning and emotional value. The human brain can also use knowledge to set priorities, generate plans, and achieve goals in accordance with personal values and social customs.

Over the years, artificial intelligence researchers from different disciplines have attempted to build computer programs that reason and think based on formal logic. Neurologists have provided a wealth of knowledge about the biochemical, physiological, and anatomical properties of the brain. Neural net researchers have focused on the brain's ability to learn, recognize, and generalize. Finally, robotics scientists have attempted to build intelligent control systems that pursue goals and react to stimuli with responsive actions.

Knowledge from these various disciplines has never been integrated into a general computational systems model that emulates how the brain is able to reliably recognize objects and events, assign meaning and worth to situations and episodes, make decisions, generate plans, and control behavior in real-world environments.

The phenomenon of the human mind emerges in a computing architecture that is quite different from that of modern computers. The brain is a massively parallel processor consisting of approximately 10^11 neurons (each an individual computer) and 10^14 synapses (each a programmable gate), all working simultaneously with their own input and output transfer functions. Neurons are arranged in clusters and arrays that are capable of complex mathematical and logical transformations. These clusters and arrays are arranged in hierarchies, with a massive flow of information moving up, down, within, and between them in separate parts of the brain. Neural arrays and clusters communicate over data channels containing nested loops that are used to perform spatial and temporal analysis of sensory data and to plan and control behavioral activity.
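The hierarchical organization described above can be sketched in a few lines of code. In this illustrative sketch, sensory data flows up the hierarchy and control signals flow back down; the node names and the two message types are assumptions chosen for demonstration, not part of the model itself.

```python
# Minimal sketch of hierarchically arranged neural arrays: data moves
# bottom-up through the hierarchy, control commands move top-down.

class Node:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.children = name, parent, []
        if parent:
            parent.children.append(self)

    def send_up(self, data, log):
        # bottom-up sensory flow: forward data to the parent level
        log.append(f"{self.name} -> up: {data}")
        if self.parent:
            self.parent.send_up(data, log)

    def send_down(self, command, log):
        # top-down control flow: broadcast a command to all child levels
        log.append(f"{self.name} -> down: {command}")
        for child in self.children:
            child.send_down(command, log)

log = []
cortex = Node("cortex")
array = Node("array", cortex)
cluster = Node("cluster", array)

cluster.send_up("edge detected", log)   # sensory data climbs the hierarchy
cortex.send_down("track object", log)   # control signal descends it
```

The nested up/down loops that result from chaining such levels are the kind of channel structure the text describes for spatial and temporal analysis.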

Our proposed model postulates that the human neocortex is composed of an array of approximately one million cortical columns that, combined with related subcortical nuclei, form cortical computational units (CCUs), as shown in Figure 1.

Figure 1. In the posterior cortex, a cortical hypercolumn and its underlying thalamic nucleus comprise a CCU, which consists of a set of processors that execute procedures based upon sensory inputs to establish and maintain the attributes, state, membership criteria, and relational pointers in a CCU frame.

Each CCU contains a data frame plus a set of processes that acquire and maintain the information stored in the frame slots. Each CCU frame contains slots for attribute values, state representations, and pointers that link the self-CCU to other CCUs in a variety of relationships. In the sensory processing regions of the brain, CCUs represent entities or events that are linked together in networks that group signals and pixels into situations and episodes. In the behavior-generating regions of the brain, CCUs represent tasks and goals that are linked in networks. Goals and priorities are decomposed into plans and behaviors that can be transformed into signals to muscle fibers to effect action.1
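A CCU frame as described above can be sketched as a simple data structure. The slot names and the relationship label below are hypothetical, invented for illustration; the published model specifies only that frames hold attribute values, state representations, and relational pointers to other CCUs.

```python
from dataclasses import dataclass, field

# Illustrative sketch of a CCU data frame: slots for attribute values,
# state representations, and pointers linking this CCU to others.
@dataclass
class CCUFrame:
    attributes: dict = field(default_factory=dict)  # attribute-value slots
    state: dict = field(default_factory=dict)       # current state representation
    links: dict = field(default_factory=dict)       # relational pointers to other CCUs

# Example: a sensory CCU representing an entity, linked into a network
# that groups it with a larger situation (names are assumptions).
edge = CCUFrame(attributes={"orientation_deg": 45.0}, state={"active": True})
corner = CCUFrame(attributes={"angle_deg": 90.0})
corner.links["part_of"] = edge  # pointer from the self-CCU to a related CCU
```

Networks of such frames, connected through their pointer slots, are what link entities into situations in the sensory regions and tasks into plans in the behavior-generating regions.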

Many of the theoretical elements underlying the CCU model have been implemented in intelligent control systems. Examples include the control of automated factories and postal facilities, numerical control machining centers, water-jet cutting machines, autonomous undersea vehicles, and unmanned ground vehicles during off-road and on-road autonomous driving experiments.2,3

Our model integrates experience from the fields of robotics and automation, new information from neuroscience, and existing knowledge from computer science and systems engineering. A novel feature is the hypothesis that cortical columns, together with their underlying subcortical nuclei, form computational modules with the power of general-purpose computing machines. They can therefore perform arithmetic and logical operations, memory storage and retrieval, indirect addressing, and sequential processing.
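The four capabilities just listed can be made concrete with a toy register-machine sketch. The memory layout and instruction set here are hypothetical, serving only to show what each capability means computationally.

```python
# Toy illustration of the four general-purpose capabilities attributed
# to a CCU: arithmetic/logic, memory storage and retrieval, indirect
# addressing, and sequential processing.

memory = [0] * 8

# memory storage and retrieval
memory[0] = 7
memory[1] = 3

# arithmetic and logical operations
memory[2] = memory[0] + memory[1]       # 7 + 3 = 10
memory[3] = int(memory[0] > memory[1])  # logical comparison -> 1

# indirect addressing: slot 4 holds the *address* of an operand
memory[4] = 2
memory[5] = memory[memory[4]]           # fetches memory[2], i.e. 10

# sequential processing: a stored program executed step by step
program = [("store", 6, 1), ("add", 6, 5)]
for op, dst, src in program:
    if op == "store":
        memory[dst] = memory[src]
    elif op == "add":
        memory[dst] += memory[src]
# memory[6] ends as 3 + 10 = 13
```

Any module that can do these four things is, in principle, a general-purpose computer, which is the force of the hypothesis.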

Our model suggests that the human neocortex might be emulated by a million CCUs. Estimates of the computational power required to run a typical CCU in real time suggest that modern supercomputers already possess this capacity. If this is correct, it might be possible within two decades to emulate the remarkable powers of the human brain on laptop machines.
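A back-of-envelope calculation shows the shape of this estimate. The per-CCU cost assumed below (1 GFLOPS) is an illustrative figure, not a value from the article; only the count of one million CCUs comes from the text.

```python
# Rough feasibility estimate for emulating the neocortex as CCUs.
# flops_per_ccu is an assumed figure for illustration only.

num_ccus = 1_000_000       # ~one million CCUs (from the text)
flops_per_ccu = 1e9        # assumed real-time cost per CCU: 1 GFLOPS

required = num_ccus * flops_per_ccu   # 1e15 FLOPS = 1 petaFLOPS

# Petaflop-class supercomputers first appeared around 2008,
# the year this article was published.
supercomputer_2008 = 1e15
feasible = required <= supercomputer_2008
```

Under these assumptions the total lands at roughly a petaFLOPS, which is why the estimate places the task within reach of the supercomputers of the day.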

There are, of course, many steps to take before a theoretical model can be implemented in practice. This theory has been formulated from an extensive survey of the literature. It is among the most comprehensive theories of brain computation yet developed, but it still needs to be experimentally tested. Any early theory of this scope is bound to be oversimplified in some details, and verification is likely to require a research program costing many tens of millions of dollars over several decades.