Is Artificial Intelligence coming of age?

The State of the Art

Most experts have settled on a description of Artificial Intelligence as the scientific endeavor of building computers that mimic the capabilities of the human brain.

To put that into perspective, human intelligence started to evolve 7–8 million years ago, when our oldest ancestors had a brain volume of about 450 cubic centimeters. Over the next 3.5 million years our ancestors’ brain volume increased to about 1,350 cubic centimeters. Modern humans, Homo sapiens (average brain volume of about 1,200 cubic centimeters), emerged during a period of dramatic climate change around 300,000 years ago. Like other early humans living at the time, they gathered and hunted food and evolved behaviors that helped them meet the challenges of survival in unstable environments. Across those 7–8 million years, the increase in brain volume was accompanied by increased neuron density in important areas of the brain. Some 200,000 years ago the brain also underwent genetic changes that influenced the development of the nervous system and opened the way for human language, the basis of an abrupt leap in human intelligence.

Human intelligence evolved when “upright position and bipedalism were significantly advantageous. The main drive of improving manual actions and tool making could be to obtain more food. Our ancestor got more meat due to more successful hunting, resulting in more caloric intake, more protein and essential fatty acid in the meal. The nervous system uses a disproportionately high level of energy, so better quality of food was a basic condition for the evolution of a huge human brain.” [Read more]. Increased intelligence during a period of rapid climate change allowed early humans to survive better through migration (bipedal mobility) and improved access to resources (hunting, gathering, tools, weapons).

In contrast to this 7–8-million-year development of the human brain, the story of Artificial Intelligence started only 65 years ago, at the 1956 Dartmouth College workshop on Artificial Intelligence. It was organised by John McCarthy, then a mathematics professor at Dartmouth. In his proposal, he stated that the workshop was “to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

Since the 1956 conference, the idea of building an intelligent computer has been met with one disappointment after another; so much so that regular setbacks led to periodic “AI winters” of despair. What has happened since 1956, though, is that many academics and businesses have developed computing techniques that now contribute to what we call robust and automated AI. Those techniques, however, require considerable computing resources (processor throughput, processing memory, data storage capacity) that have only become feasible in the last decade.

[Figure: top supercomputer speeds on a logarithmic scale over 60 years]

For example, one notable recent development has been cloud computing, in which high-capacity computing resources, including supercomputers and vast data storage arrays, can be shared. This provides a massive increase in capacity for very high-speed processing with access to enormous amounts of stored information.

As these computer hardware technologies have advanced, software technologies such as “Machine Learning” and “Knowledge Engineering”, the two broad fields of Artificial Intelligence techniques, have become feasible. It is now broadly accepted that we have entered the third phase of computing, where

  • Phase One started in the 1880s with “tabulating machines”
  • Phase Two started in the 1950s with “programmable computers”
  • Phase Three started in the 2010s with the rapid development of machine learning techniques.

Lagging slightly behind machine learning, we have now seen the re-emergence of the knowledge engineering techniques essential for enabling “artificial intelligence”, now known simply as “AI”.

The demand for AI has increased to better serve the human pursuit of information, driven by new applications including Search, Social Networks, e-Commerce and so on, all enabled by the emergence of the Internet.

The data being generated has overwhelmed the traditional Phase Two technologies of programmable computers. While programmable computers have become increasingly sophisticated and powerful, they remain limited by their decades-old technologies and by their dependence on human intelligence to instruct them.

In his book “Software Wasteland”, Dave McComb argues that today’s enterprise computing systems are wholly inadequate for the data-centric environment we now operate in. They have, he says, typically been designed and built by “consulting” organizations that profit from complexity and the high cost of systems management, where information outcomes are drowned by a data tsunami.

One reviewer quoted the American writer Upton Sinclair: “It is difficult to get a man to understand something when his salary depends on his not understanding it.”

There is another important factor to consider: the meaning of “intelligent”. Here it is important to differentiate between human and computer intelligence. Theoretically, there are three main categories of AI:

  • Artificial Narrow Intelligence (ANI) — mimics human intelligence at a comparatively low level
  • Artificial General Intelligence (AGI) — performs on par with humans
  • Artificial Super Intelligence (ASI) — surpasses human intelligence

Artificial Narrow Intelligence is emerging now and its development is likely to accelerate sharply in the next five years and endure for decades to come. While computer intelligence is no match for human intelligence, Artificial Narrow Intelligence is a massive leap forward from the traditional computing of today.

A small number of notable people and commercial enterprises exaggerate the near-term potential of a next-level Artificial General Intelligence. They are not considered credible by expert AI technologists or neuroscientists, most of whom remain in awe of the capability of the human brain as we seek to learn more about how it actually functions.

Behind the hype, the emergence of Artificial Intelligence offers a multitude of exciting new opportunities, the likes of which have never been seen in the development of human and computer intelligence discussed above. No human is old enough to have observed the biologically rapid development of the human brain, but some of us have been lucky enough to witness the rapid development of programmable computers that enabled modern telecommunications, the internet, robotics, mobile computing, and medical devices.

The young people of today will now witness the emergence of Artificial Intelligence. When Alan Turing dreamed of the potential of programmable computers in the 1940s, he never imagined a world such as ours. Many large, credible corporations forecast that Artificial Intelligence will add some fifteen trillion dollars (around the size of the US economy) to global economic activity by 2030. That is likely to be an underestimate as we increasingly use computers as intelligent assistants.

The emerging age of AI offers the opportunity to exploit useful data and discard useless data. It can distinguish fact from fiction in a fraction of the time a human can, allowing us to focus our attention on what computers may never be able to take from us: our ability to reason and think beyond the limits of logic, a limitation so perfectly portrayed by Mr Spock of the Star Trek television series.

This leads us to a final comment on current indicators of progress.

During “the tens”, from January 2010 to December 2019, we saw rapid development of Machine Learning, one of the two broad fields of technology essential for AI. The other broad field, Knowledge Engineering, developed more slowly.

A subset of Machine Learning, so-called “Deep Learning”, made substantial progress in “pattern recognition”, which underpins image recognition, search engines, advanced robotics and other computing applications based on recognising repetition and repeatability. The progress was so significant that the terms Machine Learning and Artificial Intelligence were used interchangeably until the late 2010s.
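To make “pattern recognition” concrete, here is a minimal sketch, not drawn from any product discussed in this article, of a small neural network learning to categorise handwritten digits. It uses the open-source scikit-learn library and its bundled digits dataset purely for illustration; real deep-learning systems use far larger networks and datasets.

```python
# Minimal illustration of machine-learning "pattern recognition":
# a small neural network learns to categorise 8x8 images of handwritten digits.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # ~1,800 labelled 8x8 grey-scale images of the digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# A small multi-layer perceptron: it learns the pixel patterns that distinguish
# one digit from another, rather than being explicitly programmed with rules.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print(f"Recognition accuracy on unseen digits: {model.score(X_test, y_test):.2%}")
```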

For most problems where deep learning has enabled transformationally better solutions (vision, speech), we’ve entered diminishing returns territory in 2016–2017.

François Chollet, Google [Read More]

“There’s a mismatch between what we have now with Deep Learning which is good at categorization and what we humans do which includes categorization but also a lot of reasoning and understanding and comprehension so the key point in the book is the mismatch…”

Emeritus Professor Gary Marcus, NYU [Read More] [YouTube]

Perhaps Bill Gates, the co-founder of Microsoft, said it best on Reddit in February 2018:

The most amazing thing will be when computers can read and understand the text like humans do. Today computers can do simple things like search for specific words but concepts like vacation or career or family are not understood. It has always been kind of a holy grail of software particularly now that vision and speech are largely solved.

That leads us to Knowledge Engineering progress. In the last two years, it has become more widely accepted that:

1. to display intelligence, a computer must have knowledge that it can understand itself, without reliance on humans

2. such computer knowledge must extend to understanding concepts, defined as “a general idea about a thing or group of things, derived from specific instances or occurrences”, e.g. concepts such as psychology, motor vehicles, family

3. intelligent computers must be able to engage in logical reasoning

By extension, it is desirable that computer knowledge be encoded in a language common to all intelligent computers, for automated computer interoperability.
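As an illustration of what such a common, machine-readable language can look like, here is a minimal sketch using the W3C’s RDF and RDFS standards through the open-source rdflib Python library. The example.org namespace and the terms MotorVehicle, Car and myCar are hypothetical, invented purely for this example.

```python
# A minimal sketch of encoding a concept in a standard, machine-readable form (RDF/RDFS).
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/")   # hypothetical namespace, for illustration only
g = Graph()

# The concept "MotorVehicle" is a general idea; "Car" is a more specific kind of it.
g.add((EX.MotorVehicle, RDF.type, RDFS.Class))
g.add((EX.Car, RDFS.subClassOf, EX.MotorVehicle))
g.add((EX.Car, RDFS.label, Literal("car")))

# A specific instance (occurrence) of the concept.
g.add((EX.myCar, RDF.type, EX.Car))

# Because RDF and RDFS are open W3C standards, any compliant system can read this knowledge.
print(g.serialize(format="turtle"))
```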

The field of Knowledge Engineering developed more slowly because it is the more complex and intellectually challenging field of AI. During the 2010s, a variety of knowledge solutions were proposed, based on a new and more sophisticated type of database called a “graph” database. The most sophisticated type of graph database, built on some of the most complex mathematics ever applied to a computer data model, is the so-called “semantic” graph database. The word semantic alludes to the fact that we recognize the meaning of a word from the context within which it is used. For example, the sentence “Pluto orbits the sun in 248 earth years” is a statement about astronomy, not Greek mythology.

Semantic graphs facilitate complex descriptions of data, including its context and its relationships with other data, which are needed to identify and describe concepts comprehensively. This gives computers a human-like ability to reason and infer, a measure of elementary intelligence.
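A minimal sketch of that idea, again using rdflib with a hypothetical example.org vocabulary: the relationships in the graph give “Pluto” its astronomical context, and a simple SPARQL query can derive a conclusion that was never stated as a fact in the graph.

```python
# Elementary reasoning over a semantic graph (hypothetical example.org vocabulary).
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/")
g = Graph()

# Facts: the relationships give "Pluto" its astronomical context.
g.add((EX.Sun, RDF.type, EX.Star))
g.add((EX.Pluto, EX.orbits, EX.Sun))
g.add((EX.Earth, EX.orbits, EX.Sun))

# An elementary inference expressed as a SPARQL query: anything that orbits
# a star is treated as a celestial body, although no triple states that directly.
query = """
PREFIX ex: <http://example.org/>
SELECT ?body WHERE {
    ?body ex:orbits ?centre .
    ?centre a ex:Star .
}
"""
for row in g.query(query):
    print(f"{row.body} is inferred to be a celestial body (astronomy, not mythology)")
```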

It is the adoption of semantic graphs that will be the key indicator of progress with AI, because, to quote highly respected industry analysis firm Forrester Research in 2017:

“This (semantic) technology is on a trajectory of moderate success. However this could move to significant success if the market and buyers hit an inflection point where they realize the importance of semantic technologies in making AI systems more robust and automated.”

TechRadar™: Artificial Intelligence Technologies, Q1 2017

Today the leaders in Knowledge Engineering based on semantic technologies are a handful of small companies. This is typical of substantially disruptive technology breakthroughs. When a new market segment’s growth accelerates sufficiently, the leading software companies typically acquire the small innovators in that segment, which are happy to be acquired at astonishing prices. The ability of these small companies to invent and patent novel technologies will be a key contributor to the market value they attain.

For customers, the slow and steady development of Knowledge Engineering has been accompanied by the gradual development of comprehensive international standards, a very important factor in enterprise adoption of new software techniques.

The small companies leading the adoption of AI based on semantic graphs would do well to stay close to international standards development, because that is where the enterprise market growth will be, especially if new inventions that adhere to those standards can be patented.

Author: Mark Bradley is the founder and sales chief of Cognitive Software Group, the leading cognitive computing company in Australia. www.cognitivesoftware.com.

A science graduate of Uni of NSW, I joined IBM Australia in 1981 as a trainee Systems Engineer (software programmer), then management, then AI start-up founder!
