Cognitive Software Group — Loose lips sink AI ships

Cognitive computing is not an IBM “fraud”!

A couple of weeks ago, in February, Dr Roger Schank castigated IBM. Accusing IBM of “fraud”, Dr Schank asserted that “they are not doing ‘cognitive computing’ no matter how many times they say they are”.

Dr Schank has been CEO of Socratic Arts since 2002 and is a prolific author of articles and books. He has formerly held positions including Professor of Computer Science and Psychology at Yale University and Director of the Yale Artificial Intelligence Project (1974–1989), visiting professor at the University of Paris VII, Assistant Professor of Linguistics and Computer Science at Stanford University (1968–1973), and research fellow at the Institute for Semantics and Cognition in Switzerland.

What attracted Dr Schank’s ire was a proclamation from an IBM Vice President of Marketing, Ann Rubin, that IBM’s Watson AI platform could “outthink” human brains in areas where finding insights and connections can be difficult due to the abundance of data.

“You can outthink cancer, outthink risk, outthink doubt, outthink competitors if you embrace this idea of cognitive computing,” she apparently said.

Clearly attempting to out-Musk the wildly optimistic predictions that wires embedded in a pig's brain, interacting with a “connected” microchip, put us nearly on the cusp of the great Singularity, Ms Rubin deserves to be cut at least a little slack.

Soon after the COVID-19 outbreak, we approached a university epidemiologist to offer help with our artificial intelligence know-how and our “cognitiveAI” platform. After a couple of chats with an eminent professor and epidemiologist, the challenge we set ourselves was to build a system that could read 70,000 coronavirus research papers stored and distributed by the “Semantic Scholar” medical research database and search engine.

We learned that researchers look to prior research for clues when pursuing some new hypothesis. Clearly no human researcher can read 70,000 research papers and remember everything in them; just reading them would take a year at 192 papers every day. Nor could a human remember the content of 70,000 research papers and then ‘join the dots’ between all that content to find clues to propose or support a new hypothesis.

Over the next five months we allocated around 100 person days to see what we could do with an artificial intelligence system, built from scratch.

If I can seek your forgiveness for being a little technical for a moment: “Betsy”, as we call our coronavirus AI system, was provided with 3,000 coronavirus research papers that we downloaded from Semantic Scholar. Betsy reads each sentence in each paper and uses Neural Networks, Semantic Computing and proprietary algorithms to extract what we call “facts”. The extracted facts are used to create “RDF triples” in subject-predicate-object form. For example, the part-sentence “the S protein can induce protective immunity against viral infection” was transformed into two triples: “the S protein / induce / protective immunity” and “protective immunity / against / viral infection”. Epidemiologists annotate some “facts” with a concept, thus training the neural network to associate facts with concepts and enabling a “semantic layer”.
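To make the triple idea concrete, here is a minimal sketch in plain Python. The tuples and the concept annotations are illustrative only; Betsy's actual extraction uses neural networks and proprietary algorithms, not hand-written tuples.

```python
# A "fact" stored as a subject-predicate-object triple.
# Plain tuples stand in for RDF triples in this sketch.
fact_1 = ("the S protein", "induce", "protective immunity")
fact_2 = ("protective immunity", "against", "viral infection")

# The two triples chain: the object of the first is the subject of
# the second, which is what later lets a graph "join the dots".
assert fact_1[2] == fact_2[0]

# The annotation step: an epidemiologist tags facts with concepts,
# giving the neural network a training signal (hypothetical labels).
concept_of = {
    fact_1: "immune response",
    fact_2: "immunity",
}
```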

Facts can be related to a concept by a predicate and/or to another fact by virtue of appearing in the same sentence, and so on for every sentence in the paper. A large number of triples can therefore be generated from each research paper, highlighting the key concepts and related knowledge being discussed in the paper. These triples form a sophisticated data structure called a “Semantic Graph”, which is enriched as more facts are added from more research papers.

A Semantic Graph possibly mimics the semantic memory occurring in the human brain’s neocortex, where neuroscientists suggest semantic memory is “a type of long-term memory involving the capacity to recall words, concepts, or numbers, which is essential for the use and understanding of language”.

A putative ontology, also stored in the Semantic Graph, can be generated from the triples to “join the dots” within and between papers. This process can be performed on 700,000 research papers as easily as on 700, and provides a rich description of each research paper, encoded for a computer to understand by itself. Ranking can be applied to highlight the papers of highest relevance. SPARQL queries can quickly find papers related to a researcher’s domain of interest, and the results can be represented graphically using visualizers. Betsy also supports questions expressed in natural language for querying the Semantic Graph, mimicking a human at a basic level, but with a massive amount of well-ordered data at Betsy’s immediate disposal.
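A SPARQL engine does this matching over an RDF store; the core operation, though, is matching a subject-predicate-object pattern against stored triples. As a rough plain-Python sketch (the store and function are hypothetical, not Betsy's implementation), with `None` acting as a wildcard variable:

```python
def match(triples, pattern):
    """Return the triples matching a (subject, predicate, object)
    pattern, where None is a wildcard - the basic operation behind
    a SPARQL basic graph pattern."""
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# A tiny illustrative store of triples from hypothetical papers.
store = [
    ("the S protein", "induce", "protective immunity"),
    ("protective immunity", "against", "viral infection"),
    ("the N protein", "bind", "viral RNA"),
]

# "What does the S protein induce?"
hits = match(store, ("the S protein", "induce", None))
# hits -> [("the S protein", "induce", "protective immunity")]
```

At scale, a real triple store indexes the triples so these lookups stay fast across millions of facts, which is why the same query works on 700 papers or 700,000.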

https://qbi.uq.edu.au/brain-basics/memory/where-are-memories-stored

In computer architecture, a bus is a communication system that transfers data between components inside a computer, and covers all related hardware and software components. In the human brain and nervous system, data is moved around the brain and the body by 100 billion specialized cells called “neurons”, a much more sophisticated type of “bus” than that of a computer.

In the human brain, knowledge is encoded in memory cells by a combination of cell biology, chemistry and electrical pulses, but we don’t yet know how. That encoding is evidently independent of spoken language: a multilingual person can learn something in one language and carry the knowledge into another, yet we don’t know the mechanism behind memory function in human brains. In a computer’s semantic graph, knowledge is encoded in an RDF-triple data model that is likewise independent of the data format it may be acquired from or published in. Even though the computer model is grounded in the mathematical field of topology, human brain coding is more sophisticated still and not well understood. In both cases some kind of transformation occurs.

Importantly, computer reasoning can be applied to the triples to infer relationships between concepts and facts in different research papers. Here the computer activity of the processors and RAM possibly mimics the working memory that “occurs in the prefrontal cortex of the brain and is a type of short-term memory that facilitates planning, comprehension, reasoning, and problem-solving. The prefrontal cortex is the most recent addition to the mammalian brain and has often been connected or related to intelligence and learning of humans”. Human experts can then consider, accept or ignore the computer-generated inferences in a human-machine partnership of discovery.
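One simple form such reasoning can take — offered here only as an illustrative sketch, not Betsy's actual inference engine — is a chain rule: if one paper links A to B and a different paper links B to C, propose a candidate A-to-C connection for a human expert to accept or ignore.

```python
def infer_chains(facts_by_paper):
    """Propose cross-paper connections: if paper P1 states (a, p, b)
    and paper P2 states (b, q, c), suggest a candidate link a -> c.
    Human experts then consider, accept or ignore each suggestion."""
    suggestions = []
    papers = list(facts_by_paper.items())
    for paper1, facts1 in papers:
        for paper2, facts2 in papers:
            if paper1 == paper2:
                continue  # only join dots *between* papers
            for (a, p, b) in facts1:
                for (b2, q, c) in facts2:
                    if b == b2:
                        suggestions.append((a, c, paper1, paper2))
    return suggestions

# Hypothetical facts extracted from two different papers.
facts_by_paper = {
    "paper_17": [("the S protein", "induce", "protective immunity")],
    "paper_42": [("protective immunity", "against", "viral infection")],
}

for a, c, p1, p2 in infer_chains(facts_by_paper):
    print(f"candidate link: {a} -> {c} (via {p1} + {p2})")
# prints: candidate link: the S protein -> viral infection (via paper_17 + paper_42)
```

The machine's role ends at proposing the link; judging whether it is meaningful remains the human half of the partnership.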

In plain non-technical language now, what is indisputable is that a computer can, at thousands of times the speed of a human:

1. read a digital document or a database

2. extract key phrases, words, and concepts (with some human training)

3. remember everything it reads

4. using semantic computing techniques, establish and record relationships in the extracted information

5. using computer reasoning techniques, create relationships or infer them for humans to consider.

In the above, the computer out-paces any human researcher; a human simply cannot compete with it.

But, Ms Rubin, does it “out-think” a human being? Maybe if you are in IBM’s marketing department, but not if you are a human neurologist, or any other type of neuroscience expert, or Dr Schank.

While intelligence and thinking are linked to the power of reasoning, computers are currently limited to logical reasoning, bound by mathematics. In contrast, humans have no such limitations. Our reasoning is far superior, drawing on experience, intuition, imagination, and emotion, for example. (Try telling that to a teenager.)

Today we distinguish between “Machine Learning and AI”.

The not-so-good news is that

1. Machine Learning techniques such as Neural Networks are clever rather than intelligent, limited to pattern recognition and largely dependent on the human intelligence involved in making and supervising them.

2. Artificial Intelligence techniques such as Semantic Graphs and computer reasoning are still at a very basic stage of mimicking human intelligence.

The good news is that these resource-hungry techniques are better supported by faster infrastructure solutions such as cloud computing. Understanding of Semantic Graphs and computer reasoning is increasing quickly, and the field is getting much greater attention than it has in the last twenty years. As a result, the potential for expanding the functionality of AI techniques such as Semantic Graphs and RDF is growing rapidly.

Artificial Intelligence is not a technique; it is a collection of techniques and as the functionality of each one is extended it will complement and extend the others.

So, is Artificial Intelligence equivalent to or close to Human intelligence? Definitely not.

Is the combination of Semantic Graphs and computer reasoning equivalent to or close to human thinking? Definitely not.

Can computers outperform humans in finding hidden “insights and connections where there is an abundance of data”? Absolutely.

Is Artificial Intelligence capable of substantial support to human thinking? Most definitely.

Is Watson capable of cognitive computing? It’s a matter of semantics!

Dr Schank’s article can be found here.

Cognitive Computing explained here.

Author: Mark Bradley is the founder and sales chief of Cognitive Software Group, the leading cognitive computing company in Australia. www.cognitivesoftware.com
