Let’s face it, when people talk about AI, most of the time they’re thinking about Machine Learning. In fact, ML is so pervasive that they’re often surprised to hear there are other types of AI. So let’s talk about ML…

There has been amazing progress in ML recently: we have phones we can speak to, apps that know our music tastes and suggest songs we’ll like, cars that can “almost” drive themselves, apps that recognize dog breeds or plant species. The list goes on and on.

These have all been made possible by machine learning. Given some initial programming and many examples, machines can “learn” to recognize cats, pedestrians or faces. You can ask Alexa, “What time is it in Germany?” Using ML, Alexa converts your spoken words into text, passes the text to a search engine and retrieves a list of answers along with their associated confidence scores. Because this is a simple question, the top answer has a high confidence score and is spoken back to you.
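Here is a minimal sketch of that pipeline in Python. The speech_to_text and search functions are hypothetical placeholders (with made-up answers and confidence scores), not Alexa’s actual APIs; the point is simply that the candidate with the highest confidence score is the one spoken back.

```python
def speech_to_text(audio):
    # Placeholder: a real assistant would run a speech-recognition model here.
    return "What time is it in Germany?"

def search(text):
    # Placeholder: a real assistant would query an answer engine here and get
    # back candidate answers with confidence scores (these values are made up).
    return [
        ("It is 9:41 PM in Germany.", 0.97),
        ("Germany uses Central European Time.", 0.42),
    ]

def answer_question(audio):
    text = speech_to_text(audio)
    candidates = search(text)
    # Speak back whichever candidate the system is most confident about.
    best_answer, confidence = max(candidates, key=lambda pair: pair[1])
    return best_answer

print(answer_question(audio=None))  # -> "It is 9:41 PM in Germany."
```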

“Awesome”, you say, “it understood what I asked!”

But did it understand?

In a similar vein, you can train a machine to recognize cats. To do this, you give it thousands of pictures of cats. You then show it a picture of your cat and, when it labels the picture ‘cat’, you’re impressed!

“Wow, it knows what a cat is!” And that is impressive! It had never seen a picture of your cat, yet it was able to recognize it as one.

But does it really know what a cat is?

To answer this, we need to look at what the AI actually did. Machine Learning is a statistical method that associates inputs with outputs. In effect, it associates the label ‘cat’ with the many varying patterns of pixels that make up the shape of a cat. That’s the extent of it. It does not know what a cat is, that cats are pets, that cats are animals that move, or anything else about cats. In other words, it has no actual understanding of the concept of a cat and is therefore unable to associate any other information with it.
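To make that concrete, here is a deliberately tiny sketch of this kind of association in Python. The four-pixel “images” and the nearest-neighbour rule are illustrative assumptions, not a real vision model, but they show what “recognizing a cat” reduces to: matching a new pattern of numbers against stored patterns that happened to carry the label ‘cat’.

```python
# Each "image" is just a list of pixel intensities paired with a label.
training_data = [
    ([0.9, 0.8, 0.2, 0.1], "cat"),
    ([0.8, 0.9, 0.1, 0.2], "cat"),
    ([0.1, 0.2, 0.9, 0.8], "dog"),
    ([0.2, 0.1, 0.8, 0.9], "dog"),
]

def classify(pixels):
    # Return the label of the closest training example (squared distance).
    def distance(example):
        return sum((a - b) ** 2 for a, b in zip(pixels, example[0]))
    return min(training_data, key=distance)[1]

print(classify([0.85, 0.85, 0.15, 0.15]))  # -> "cat"
# The label comes out, but nothing here "knows" that a cat is an animal or a
# pet; it is pattern matching against stored pixel values, nothing more.
```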

The AI systems that beat the world chess champion and the best Go player were built as two separate, specialized systems: one played chess, the other played Go. Neither could apply any of its logic to the other. Neither had any concept of what a “board game” even is.

For a machine to understand, it needs to represent real-world concepts explicitly and know how they relate to each other. This can be done using knowledge graphs: systems of nodes, and connections between those nodes, that let us store data or knowledge about anything you can think of. For example, a knowledge graph can describe a person and their attributes such as height, hair color, hobbies and job, and link them to their friends or colleagues. It is only once the machine has this knowledge, and potentially links it to other knowledge, that it is able to derive insights.
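As a rough illustration (the names and values below are made up, and a real system would use a graph database or triple store rather than plain Python dictionaries), a tiny knowledge graph might look like this:

```python
# Nodes carry attributes; edges are labelled connections between nodes.
nodes = {
    "alice": {"type": "Person", "height_cm": 170, "hair": "brown",
              "hobby": "climbing", "job": "engineer"},
    "bob":   {"type": "Person", "job": "teacher"},
    "acme":  {"type": "Company"},
}

edges = [
    ("alice", "friend_of", "bob"),
    ("alice", "works_at",  "acme"),
    ("bob",   "works_at",  "acme"),
]

# Because the concepts are explicit, we can ask the graph meaningful questions.
friends_of_alice = [target for source, relation, target in edges
                    if source == "alice" and relation == "friend_of"]
print(friends_of_alice)  # -> ['bob']
```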

We can apply specific and general knowledge as follows:

Specific knowledge:

    • Mary plays the trumpet
    • Mary lives in Kitsilano
    • Amy plays the trombone
    • Amy lives in Kitsilano

General knowledge:

    • Kitsilano is a suburb in Vancouver
    • Trumpets and trombones are musical instruments often played in bands
    • People who play musical instruments are musicians

We may derive the insights:

    • Mary and Amy are musicians
    • Mary and Amy live near each other
    • Mary and Amy likely know each other

This can be represented as a small graph: nodes for Mary, Amy, Kitsilano, the trumpet and the trombone, connected by labelled edges such as plays, lives in and is a.
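As a minimal sketch (hypothetical relation names, no particular graph library), here is the same knowledge as subject-relation-object triples in Python, with the insights derived by two simple rules:

```python
facts = {
    # Specific knowledge
    ("Mary", "plays",    "trumpet"),
    ("Mary", "lives_in", "Kitsilano"),
    ("Amy",  "plays",    "trombone"),
    ("Amy",  "lives_in", "Kitsilano"),
    # General knowledge
    ("Kitsilano", "suburb_of", "Vancouver"),
    ("trumpet",   "is_a",      "musical_instrument"),
    ("trombone",  "is_a",      "musical_instrument"),
}

def derive(facts):
    derived = set()
    # Rule 1: anyone who plays a musical instrument is a musician.
    for person, relation, thing in facts:
        if relation == "plays" and (thing, "is_a", "musical_instrument") in facts:
            derived.add((person, "is_a", "musician"))
    # Rule 2: people who live in the same place live near each other,
    # and likely know each other.
    residents = {(p, place) for p, relation, place in facts if relation == "lives_in"}
    for a, place_a in residents:
        for b, place_b in residents:
            if a < b and place_a == place_b:
                derived.add((a, "lives_near", b))
                derived.add((a, "likely_knows", b))
    return derived

for triple in sorted(derive(facts)):
    print(triple)
# ('Amy', 'is_a', 'musician')
# ('Amy', 'likely_knows', 'Mary')
# ('Amy', 'lives_near', 'Mary')
# ('Mary', 'is_a', 'musician')
```

Notice that each derived fact can be traced back to the specific facts and general rules that produced it.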

Unlike ML, where input/output associations are represented as vectors of weights, representing concepts explicitly enables the AI to use the same words as the user. Importantly, without explicit concepts there can be no explanation of the results; ML systems are known as “black box” systems for this reason. Black-box systems are fine for non-critical tasks such as suggesting what to buy or who to date, but if they can’t explain their results, they can’t be used in high-stakes scenarios like medical diagnoses, legal advice or any scientific endeavor.

Not only can AI that uses knowledge graphs explain its results, it can also provide advice. Just like a human expert reviewing results, this type of AI system can point out gaps in its knowledge and suggest what needs to be done to confirm any answer it presents.

Current state-of-the-art AI systems rely heavily on ML. Although ML is useful for many tasks, it has no fundamental understanding. So, to answer the question “How good is AI really?”

In the case of machine learning, the answer is: “Useful… but limited.”