From HAL 9000 to Self-Driving Fleets, Breaking Through the Wall: A Strategy for Progressing from Narrow to General AI

Guest author, Fritz Barth

Since its inception, the development of Artificial Intelligence (AI) has been marked by a series of technological plateaus stemming from technical limitations and the demands placed on results. At each stage, the creation of genuine AI, as popularly imagined, seemed imminent; then reality’s limitations quickly became apparent. Take, for example, the vast gap between ELIZA, the very real program that, through IBM Selectric terminals at Lawrence Livermore, seemed to enable conversation with the mainframe, and the cinematic aspirations of Forbidden Planet’s Robby the Robot. Each advance deepened our understanding of the complexity of creating something that can match a two-year-old’s common sense and learning ability, and each was disappointingly followed by a long hiatus.

We’re currently experiencing the “Second Wave” of AI development, and it appears we may be on the verge of another plateau. I say this because we are approaching a point of diminishing marginal returns, at which no amount of computing power or training data can generate common sense or deal with out-of-scope inputs. Intelligence turns out to be more than an algorithm.

Nonetheless, a great deal of progress has been made, particularly in machine learning (ML), deep learning, computer vision, and related areas in robotics. Applications of these technologies and techniques will continue to grow across industry, society and government in unforeseeable ways. Advances in computing power, networking, cloud computing, data science and data storage would likely have happened in any case, but they have made today’s AI both a necessity and a promise for managing and exploiting these unfathomably complex systems.

Our current plateau differs from previous ones in that it has resulted in many capabilities, often for solving problems technology itself created (e.g., spam filters) or for solving problems we didn’t know we had (e.g., sentiment analysis of social media). However, these capabilities generally lack the reliability, robustness and traceability most critical military applications require. An 85% effective spam filter is usable, while an 85% accurate landmine locator is suitable only for emergencies, especially if it only works for anti-tank mines and there’s no way to discover why it finds some and misses others.

"Good enough" is not a strong enough driver to push us past this plateau, and settling for it has produced what is commonly known as “narrow AI.” The private sector will continue to focus largely on narrow AI because there are numerous opportunities where the cost of error is acceptable or borne by someone else. Although expensive and often accompanied by unintended consequences, the process for creating narrow AI is well understood and accepted.

Where private efforts leave good enough behind is in the push to develop AI that supports robotics and autonomous vehicles. These efforts have more in common with potential Department of Defense (DoD) military combat-related applications because they require similar levels of reliability, robustness and traceability. However, progress on driverless vehicles has been slow in the auto and trucking industries despite their enormous investments and commitment, illustrating that this challenge is not unique to DoD.

Where is the challenge? Self-driving is fairly successful in the relatively closed system of open highways, while AI has a much harder time with statistically rare situations that a human would immediately understand even on a first encounter. Jaywalking, for example, occurs occasionally in highway travel and frequently in city driving. Due to differences in handling and braking, autonomous trucks need longer-range computer vision systems and different algorithms than passenger cars; the two problems are not nearly as similar as one might originally assume.

The bottom line is that, although there has been significant progress towards militarily useful AI, DoD has much further to go than it has come, leaving us stuck on this plateau. What is clear is that the DoD needs to lead that journey rather than follow: the private sector is consumed with current opportunities and military applications are sufficiently unique and demanding that they will not be developed fortuitously in the course of private-sector Research and Development in a useful timeframe.

How do we move to Third Wave AI?

The book Rebooting AI: Building Artificial Intelligence We Can Trust provides a good overview of the current state of AI, discusses where it falls short, and offers prescriptions for creating something recognizably intelligent. Based on my experience and study, and on the authors’ recommendations, we can move forward with military-specific AI development built on the current foundation:

Foundational Second Wave successes so far:

  • Narrow AI
    • Computer vision, image classification and recognition
    • Machine learning
    • Deep learning
    • Search
    • Fusion and situation awareness – sensor data processing and integration
  • Robotics
    • Broad-scale and precise localization (GPS and non-GPS based)
    • Motor control (feedback)
    • Situational awareness of relationship to objects and geography, based on stored and collected sensor data
    • Onboard and offboard processing
    • Man-machine interface
    • Swarming

Building From the First Wave:

    • Functional ontologies
    • Object-oriented internal representation
    • Expert systems, decision support
    • Lightweight, efficient computing

To get to Third Wave AI, or something that behaves more like what is generally considered intelligence, we need additional conceptually (and sometimes physically) integrated capacities and specializations that track with our understanding of how the human brain works. Individually, each capacity will increase functionality in ways that may differ from what we anticipate, but collectively they will be useful:

  1. Specialized functions
    • Internal representation (carried forward from the First Wave), i.e., structured (guided) learning
    • Course of action development, evaluation and selection (carried forward from the First Wave)
    • Time perception and prediction
    • Causality – simulation and non-simulation based
    • Cognition based on internal representation
    • Ethics: designed in; hardcoded; updateable; learned

  2. Shared “cortex” functions
    • Data interface, translation, exchange, standards
    • Shared learning – learning exchange, compatibility and trust. One AI absorbs understanding from another without the need for training.
  3. Possible foundational developments
    • ODE nets (ordinary differential equation networks) – A method of applying deep learning to continuous processes instead of arbitrarily segmenting data into discrete digital elements; it accommodates many natural and biological processes. In other words, a synthetic analog process.
    • Quantum computing – Beyond simply increasing computing power, the ability of qubits to occupy superposed states (yes, no, and both at once) may offer a natural advantage over binary AI, in that uncertainty (or indeterminacy) is an essential feature of the natural world and of natural intelligence.
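The ODE-net idea above can be illustrated with a toy sketch. This is a minimal illustration in plain NumPy, not any published implementation: the random weight matrix `W` and the simple Euler integrator are illustrative assumptions standing in for learned dynamics and a proper ODE solver.

```python
import numpy as np

# Toy sketch of the ODE-net concept: instead of a fixed stack of discrete
# layers, a hidden state h(t) evolves continuously according to learned
# dynamics dh/dt = f(h, t). Here f is a stand-in (random weights, tanh),
# and we approximate the continuous evolution with Euler integration.

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4)) * 0.1  # stand-in for trained weights

def dynamics(h, t):
    """dh/dt = f(h, t): the vector field a real neural ODE would learn."""
    return np.tanh(W @ h)

def odenet_forward(h0, t0=0.0, t1=1.0, steps=100):
    """Integrate the hidden state from t0 to t1 with the Euler method."""
    h = h0.astype(float)
    dt = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        h = h + dt * dynamics(h, t)  # continuous analogue of a residual layer
        t += dt
    return h

h0 = np.ones(4)
h1 = odenet_forward(h0)
# Refining the step count barely changes the result: "depth" becomes a
# numerical-integration tolerance rather than a fixed layer count.
h1_fine = odenet_forward(h0, steps=200)
print(np.max(np.abs(h1 - h1_fine)))
```

The point of the sketch is the design choice the bullet describes: the network is defined by continuous dynamics, and how finely those dynamics are discretized is an accuracy knob, not part of the architecture.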

Each of these functions (and others) lends itself to individualized development, followed by one-on-one pairing and integration. Multiple configurations should then be created, with the understanding that very complex and unpredictable behaviors must be anticipated.

DoD’s focal point for AI, the Joint Artificial Intelligence Center (JAIC), can take four very specific actions to gain and maintain DoD’s military lead in AI:

  1. Apply or leverage commercial narrow-AI solutions where appropriate while gaining expert knowledge of their limitations and the ability to assess those limitations.
  2. Guide or support experimentation and development of more robust DoD and private AI capabilities with military applications such as autonomous vehicles that have to make adaptive decisions with high reliability.
  3. Guide the development and evolution of ethical AI “common sense” towards interactive, responsive, anticipatory and associative behaviors that meet military requirements with military reliability. Provide specific target objectives for research and experimentation based on the previous list of desired attributes to DARPA’s AI Next efforts.
  4. Support the development of Technologies, Techniques and Procedures to counter adversary AI and autonomous systems, with insight gained through a thorough understanding of the vulnerabilities and limitations of AI and autonomy in general.

AI’s Third Wave is in our future, but reaching it will require a shift from narrow “good enough” solutions to a general and more behavior-focused development process.

Marcus, Gary, & Davis, Ernest (2019). Rebooting AI: Building Artificial Intelligence We Can Trust. Pantheon Books.

Hao, Karen (2019). A radical new neural network design could overcome big challenges in AI. Retrieved from:

DoD Ethical Principles for Artificial Intelligence (2020): Retrieved from: