AI Realities, Myths, and Misconceptions: Comparison to Human Intelligence
Written by Bruce R. Copeland on March 31, 2023
Tags: artificial intelligence, information, learning, neural network, human brain
The recent release of several powerful artificial intelligence technologies has unleashed all kinds of responses among humans: shock at the sophistication of what these technologies can do, shock at their occasional stupidity, and in some quarters terror at the potential implications for human civilization. Before we can really understand this kind of artificial intelligence, we need to understand human intelligence.
This article is the third in a series of three articles about artificial intelligence (AI), how it works, and how it compares to human intelligence. The first article surveyed the different available types of AI, whereas the second article examined computational neural networks in some detail.
What Does the Human Brain Do (and how does it do it)?
As humans, we like to believe our intelligence is conscious and rational. The reality is FAR different. Everything in animal decision making is ultimately based on some form of neuronal network. A single neuron or set of neurons converts sensory inputs into some kind of output decision or action. The overwhelming majority of this processing is done unconsciously. Human unconscious learning is predominantly emotional: the rate at which, and extent to which, we learn something are dictated by neurotransmitters (glutamate, GABA, dopamine, etc.) whose levels and activity are heavily influenced by emotion. Thus we learn some things very rapidly and others much more slowly.
Humans, and likely many other animals, also have a certain degree of self-awareness and consciousness. This is an important alternate system for decision making that operates substantially in parallel with our unconscious — but only when we make use of it.
Using our conscious awareness we can carry out templated procedures (step A, then step B, then...). Some of us have the capability to perform complex logical analysis and higher mathematical reasoning, apparently as advanced templated procedures. But very often those of us with these capabilities find ourselves performing the steps unconsciously. It isn't clear which actually came first, the conscious process or the unconscious process.
So what are the key elements that we associate with human intelligence?
- Memory (long-term, short-term, working, procedural).
- Highly robust metadata and indexing associated with memory.
- Pattern recognition.
- Learning — similar to regression training, but heavily weighted toward emotional learning.
- Social skills.
- Conscious self-awareness.
- Executive control.
Executive control (or executive function) refers to any system(s) that organizes and controls thoughts, tasks, priorities, interruptions, signaling, maintenance, and so on. In humans this is both conscious and unconscious. It enables a significant number of additional qualities related to intelligence:
- Goals and goal setting.
- Task switching and multi-tasking.
- Templated procedures.
- Absurdity filtering — the ability to recognize and discard absurd thoughts or conclusions.
- Memory pruning and reorganization (partially involving sleep).
- Reconciling inconsistencies in facts and reasoning. The human brain has an extensive reconciliation system (also partially involving sleep).
As you can imagine, we do not necessarily have a good idea of how all these different parts of human intelligence work.
How Do Current AI (artificial neural net) Systems Compare?
Computational artificial neural networks (ANNs) make up the overwhelming majority of current AI. ANNs compare to human intelligence in the following ways:
- Computational systems have very cheap, extensive data-storage memory (typically, but not exclusively, organized as databases). These far outstrip the storage capacity of the human brain but are not nearly as efficient.
- Metadata and indexing exist in computational systems, but are vastly inferior to the human brain.
- ANNs have become quite effective at large-scale image processing but still lag the human brain in many basic capabilities.
- Everything in ANN AI is based on learned pattern recognition.
- AI learning derives heavily from non-emotional regression of training set data but is limited by training set bias.
- An ANN can learn very limited (human) emotion, and it is possible to apply emotional learning externally. (This is a choice that can only be made by ANN developers.)
- Certain social nuances can be learned during ANN training, but those are different from conscious social skills.
- ANNs have no consciousness or self-awareness.
- Executive control is completely external to ANN AI.
Executive control is widely used in computer software. ALL executive control in ANN AI is externally programmed by humans. This includes ANN training, breadth and extent of learning, and filtering of input/output for ANN prediction. Much of what we associate with current ANN AI capabilities is actually managed by human controlled software that is separate from the AI itself. Goals and goal setting, prioritization, bootstrapping, abstract rules-based logic, absurdity filtering, and most organization are all imposed on ANN AI through human-managed executive control. A limited amount of "this follows that" can be learned by ANNs during training. Also some organization is built into training patterns, so certain basic levels of numerical and visual abstraction can be learned by ANNs.
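The division of labor described above can be sketched in code. The following is a hypothetical illustration (the function names, policy list, and threshold are all made up): the ANN itself only maps an input to an output, while every other behavior we attribute to "the AI" — input filtering, absurdity/confidence checks, refusal policies — is ordinary human-written software wrapped around it.

```python
# Hypothetical sketch: "executive control" around an ANN is ordinary,
# human-written software. The model itself only maps inputs to outputs;
# filtering and policy decisions live entirely outside it.

def model_predict(text):
    # Stand-in for the trained ANN: a dummy that returns an answer
    # and a fixed confidence score.
    return {"answer": text.upper(), "confidence": 0.4}

BLOCKED_TOPICS = {"forbidden"}   # input filter (a human-chosen policy)
MIN_CONFIDENCE = 0.6             # output filter (a human-chosen threshold)

def answer(query):
    # Input filtering: imposed before the ANN ever sees the query.
    if any(topic in query.lower() for topic in BLOCKED_TOPICS):
        return "Request declined by policy."
    result = model_predict(query)   # the only step the ANN performs
    # Output filtering / absurdity check: again external to the ANN.
    if result["confidence"] < MIN_CONFIDENCE:
        return "No confident answer."
    return result["answer"]
```

Everything in this sketch except `model_predict` is executive control supplied by humans, which is the point of the paragraph above.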
Memory pruning of ANNs is instigated externally by humans and is not frequently done. There is currently no meaningful system within ANNs for reconciling inconsistencies in facts and reasoning.
What are the operational comparisons between human and artificial (ANN) intelligence? ANNs have much more computational power and database access than human intelligence, but much less efficiency and data interconnectivity. ANNs learn with significantly better objectivity because they rely on true regression instead of emotional learning. This also makes ANN learning of new data and patterns significantly slower than human learning. Both types of intelligence are substantially at risk from bias in training data. However, bias in artificial intelligence training sets is more readily (but never completely) avoided through sound data science practices (see, e.g., AI Realities, Myths, and Misconceptions: Computational Neural Networks). Human intelligence has some potential capacity for overcoming bias and for filtering out absurdity through the additional use of conscious reasoning.
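The "true regression" learning contrasted with emotional learning above can be sketched minimally. This toy example (the data and learning rate are invented for illustration) fits a single weight by gradient descent: the weight update is driven purely by prediction error over many small steps, which is why this kind of learning is objective but slow compared to one-shot emotional learning.

```python
# Minimal sketch of learning by regression: a one-weight model fitted by
# gradient descent on toy data roughly following y = 2x. Real ANNs do the
# same thing at vastly larger scale; updates are driven by error alone.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # toy (x, y) training pairs

w = 0.0        # single weight to learn
lr = 0.05      # learning rate
for _ in range(500):          # many small, incremental updates
    for x, y in data:
        error = w * x - y
        w -= lr * error * x   # gradient step on squared error
# w ends up near 2, the slope implied by the data
```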
Pattern recognition learning is adversely affected by falsehood(s) in training data. This is true for both human and artificial neural networks. The effect of falsehood can be more severe in human intelligence because of emotional learning, but the parallel existence of conscious human learning/reasoning and human brain reconciliation systems can potentially offset this problem. Conscious human reasoning is only useful as a means for offsetting falsehood if it is developed to the point of logic and if there is sufficient data for reconciliation analysis. This is also why conspiracy theories can be so damaging in human intelligence, because the normal conscious reasoning and reconciliation systems are often unable to overcome systematized or hierarchical falsehood. Lies can also be particularly problematic in human neural network learning because they frequently are associated with highly emotional learning.
Much has recently been made of the sophistication of artificial ANN intelligence as well as its occasional, surprising stupidity. These features really should not be surprising, since they also characterize human neural network intelligence. Humans are actually quite good at (unconsciously) interpolating between similar/related training patterns, but we are horrible at extrapolation. We should expect the same from AI systems. Recent ANN systems are a lot like the newcomer at a party who converses so charmingly that only the next morning are you left wondering whether they ever actually said anything profound.
Humans learn continually. It is possible to set up ANN learning so that it is continual, but this is seldom done. Most ANN learning is done in batch mode, which means ANN intelligence can become out-of-date in some settings. However, it is very difficult to use continual training of ANNs while also maintaining good data science oversight of the training data.
Risk that Artificial Intelligence Might Take Over the World
For most of human history we have feared higher intelligence. There is probably some justification for such fear, but we need to be objective.
It is theoretically possible that some artificial intelligence could become advanced enough to take over, but it is unlikely to happen any time soon. Such an advanced AI would need to have its own goal setting and (not-human-regulated) executive control. Current AI development does not cede either of those capabilities to the AI itself (i.e., current AI is not HAL).
As with all areas of risk, some vigilance is required.
Ethical Concerns Associated with Powerful Artificial Intelligence
There is an extensive set of ethical concerns with respect to artificial neural net intelligence.
First and foremost it is not appropriate to apply ANN artificial intelligence in mission critical settings without considerable human supervision. This is because ANNs are not conscious and cannot give detailed logical explanations for decisions that they make. [Current AI chat systems can answer 'why' type questions, but such answers are learned responses rather than logical enumerations of the steps used by the AI to render some previous answer.] Somewhat similar problems exist with unconscious human intelligence in mission critical settings, but humans sometimes skirt those problems by having important decisions evaluated by other (non-identical) humans. In a related vein, there is a problem with the tendency to make decisions in a yes/no mode rather than a yes/no/maybe or yes/no/uncertain mode. This applies to both ANN and human decisions.
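The yes/no versus yes/no/uncertain distinction above is easy to make concrete. This hedged sketch (the score function and band edges are invented for illustration) maps a model confidence score to a three-way decision, reserving a middle band for "uncertain" so the case can be deferred to human supervision rather than forced into a binary answer.

```python
# Sketch of a yes/no/uncertain decision mode: instead of thresholding a
# confidence score into a binary answer, reserve a middle band for
# "uncertain" and defer those cases to a human. Band edges are arbitrary.

def decide(score, lo=0.3, hi=0.7):
    """Map a model confidence score in [0, 1] to a three-way decision."""
    if score >= hi:
        return "yes"
    if score <= lo:
        return "no"
    return "uncertain"   # deferred to human supervision
```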
There is a massive difficulty with privacy and protection of intellectual property in ANN training data. These problems are not readily solved with conventional data science, and no meaningful legal frameworks have been established to deal with such problems. For example, a previous-generation, commercial AI application was trained with human facial images scraped from the web. Even though ALL those images were protected by copyright, next to nothing has ever been done about that legal breach. This is ultimately a societal problem.
Recent AI chat technologies make clear that AI has the potential to replace some kinds of human labor in the relatively near future. This is deeply troubling, but perhaps some highly repetitive jobs may be better suited to automation. At the same time we humans really need to up our game. Educators have been telling us for decades that students are not getting either the breadth OR depth of education needed, and our standards are too low. The fact that we can be so easily impressed with AI chat technology that is fluent in human language and simple logic tells us what we humans need to do better. Apart from language, we all need to pay far less attention to lies, tropes, memes, and conspiracy theories, and far more attention to meaningful learning. We should not be parroting either our facts or our reasoning from media personalities who have no deep background in the complex subjects they are discussing.
There have already been many calls for regulation of AI. It is difficult to see how this would work. Any such comprehensive regulation requires input from top software engineers, AI developers, brain scientists, and social scientists. We are not going to get the involvement of this kind of talent on civil servant salaries. At the same time it is important to recognize that many of the most serious potential problems with artificial intelligence occur at the human/machine interface, NOT internal to the AI itself. It may be tractable to have basic regulations for executive control and for the nature of input to and output from ANNs, as well as basic legal rules about the types and sources of data used in training sets. If we go this route, then consistent enforcement (policy) is essential. It will not work if we have one set of rules for almost everyone and a different set of rules for billionaires, politicians, or the National Institutes of Health.
AI can be scary, but AI also offers great potential for solving human problems. We can also understand a great deal about human learning and thinking from AI. Navigating all this requires careful engagement among humans and between humans and AI.