Common Sense is Not Artificial

Despite being a game changer, AI cannot yet beat the human brain at common sense or lifelong real-time learning

We all know that artificial intelligence is set to be the next technological turning point. It has already taken over several human activities with commendable success and will continue to do so. It is also going to change the way computational technology perceives the world, and vice versa. But data scientists and machine learning developers admit that, despite all its promise, AI algorithms still carry inherent inadequacies. Compared to what we, as humans, consider to be “intelligent”, AI is nowhere near.

In a recent article published in Forbes, Rob Toews highlights a few areas in which artificial intelligence still lags behind humans. Let us discuss the top two.

Common sense

The facts about how the world works that humans come to understand through lived experience are, in effect, infinite. No one could ever list them all. We automatically grasp certain situations and facts only because we possess a broad body of basic background knowledge about how the world works, built up by “living in” those situations from birth. And, as AI researcher Leora Morgenstern rightly puts it, most of what a human child picks up naturally from its environment is so instinctive that it is never documented in a book or anywhere else that could formally pass it on as training content. Either one has been through it, or one hasn’t.

As a result, the “common sense” we possess is a derived consequence of the persistent mental representations we develop of the different elements and concepts that populate our world: their inherent qualities, characteristic properties, and inter-relationships. That is exactly why an algorithm cannot simply be trained on it: this knowledge is never written down as explicit data in the first place.

In his article, Toews presents an excellent example of how, even after extensive training, an algorithm can miss crucial information. He gives the following three statements narrating an event:

A man went to a restaurant. He ordered a steak. He left a big tip.

Now, if you are asked what this man ate, can you answer? Of course you can! It is evident to everyone that the man ate the steak he ordered. However, this is evident only to us humans, not to an algorithm, because nowhere in the three statements is it explicitly mentioned. We know that people go to restaurants to eat, that they order food there, and that they leave a tip only after eating. No machine can surmise all that from those three statements alone.

Language can be used in many ways to communicate the same message, and we derive such messages intuitively through experience. But deep neural networks do not form mental models as we do. They do not possess discrete, semantically grounded representations. Instead, they rely on statistical relationships in raw data to generate insights that humans find useful. For certain tasks this statistical approach is superbly efficient; for others it is not. Thus, in this example, an AI system would miss the key information altogether, because nowhere do the sentences state “this is what the man ate”!
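A quick way to probe this yourself is with an off-the-shelf extractive question-answering model. The sketch below uses the Hugging Face transformers library and its default question-answering pipeline; these choices are ours for illustration, not anything from Toews’s article. The point is that an extractive model can only select a span from the given text, so whatever “answer” it produces comes from statistical association, not from knowing that diners eat what they order.

```python
# A minimal sketch, assuming the Hugging Face transformers library is
# installed. Extractive QA models can only point at a span of the given
# text; they cannot state an inference the text never makes explicit.
from transformers import pipeline

qa = pipeline("question-answering")  # downloads a default SQuAD-trained model

context = "A man went to a restaurant. He ordered a steak. He left a big tip."
result = qa(question="What did the man eat?", context=context)

# Whatever span the model picks (quite possibly "a steak"), it is chosen
# by surface-level association between words like "ordered" and "eat",
# not because the model knows people eat what they order at restaurants.
print(result["answer"], result["score"])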

Experts believe that hybrid models, which combine neural networks with symbolic, rule-based reasoning, are still the only viable solution.

Real-time learning

Conventionally, the development process for AI algorithms involves two distinct phases: training and deployment. During training, an AI model is fed a large corpus of static, pre-existing data that gives the algorithm extensive exposure to the real-life setting in which a certain task is to be performed. Once training is complete, the model’s parameters are fixed. After deployment, the model is expected to handle novel data based solely on what it learned from the training data.
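To make the two phases concrete, here is a minimal sketch using scikit-learn as a stand-in for any batch-trained model; the dataset and model choice are illustrative assumptions, not anything from the article.

```python
# A minimal sketch of the conventional two-phase paradigm.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_new, y_train, y_new = train_test_split(X, y, random_state=0)

# Phase 1: training on a static, pre-existing dataset.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Phase 2: deployment. The parameters are now frozen; every prediction
# relies solely on what was learned during training.
print("accuracy on novel data:", model.score(X_new, y_new))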

Now, how do we refine the parameters the model has already learned? We stop it, take it off work, feed it the new and/or updated dataset, and then re-deploy it. This means every new learning event has to be done in batches, during which normal functioning is interrupted and separate cost and effort are incurred, and the whole cycle has to be repeated whenever anything new must be added to the model’s parameters.
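The stop-retrain-redeploy cycle might look like the following sketch, again with illustrative scikit-learn stand-ins: the deployed model is discarded and replaced wholesale by one retrained offline on the combined old and new data.

```python
# Sketch of the batch update cycle: to teach the model anything new, we
# retrain offline on old + new data and swap in the result. The dataset
# split and model choice here are illustrative.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)
X_old, y_old = X[:1200], y[:1200]   # data available at first training
X_new, y_new = X[1200:], y[1200:]   # data that arrived after deployment

# Deployed model, trained once on the original batch.
deployed = LogisticRegression(max_iter=1000).fit(X_old, y_old)

# To incorporate the new data: stop, retrain from scratch on everything,
# then re-deploy. The cycle repeats for every future batch of updates.
retrained = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_old, X_new]), np.concatenate([y_old, y_new])
)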

This batch-based training/deployment paradigm is so deeply ingrained in modern AI practice that we forget how much more refined human learning is. Humans keep learning throughout life, accommodating new knowledge while retaining previously learned experiences. This is known as continual, or lifelong, learning. In real life we encounter a continuous stream of incoming data; new information becomes available incrementally as circumstances evolve. We dynamically and smoothly incorporate this continuous input from our environment and adapt our behaviour accordingly, without needing to stop everything, learn, and then start again. In other words, we humans “train” and “deploy” in parallel and in real time.
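The closest everyday analogue in common tooling is incremental training. As a hedged illustration, scikit-learn estimators that expose partial_fit can absorb data batch by batch without being taken offline, though this alone is a far cry from true lifelong learning, which must also avoid forgetting.

```python
# A rough analogue of learning from a data stream: the model keeps
# serving predictions while updating its parameters on each new batch.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import SGDClassifier

X, y = load_digits(return_X_y=True)
model = SGDClassifier(random_state=0)

# Data arrives as a stream of small batches; no stop-and-retrain cycle.
for X_batch, y_batch in zip(np.array_split(X, 10), np.array_split(y, 10)):
    model.partial_fit(X_batch, y_batch, classes=np.arange(10))

print("accuracy so far:", model.score(X, y))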

Conventional deep learning methods are still nowhere near this, primarily because of “catastrophic forgetting”: new information interferes with, or altogether overwrites, earlier information. Humans, by contrast, preserve existing knowledge while effortlessly making incremental additions of new information.
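Catastrophic forgetting is easy to reproduce in a toy setting. The sketch below, our illustrative setup rather than anything from the article, trains a small neural network on digits 0-4 and then continues training only on digits 5-9; accuracy on the first task typically collapses because the new updates overwrite the weights that encoded it.

```python
# A toy demonstration of catastrophic forgetting with a small MLP.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
task_a = y < 5        # "task A": digits 0-4
task_b = ~task_a      # "task B": digits 5-9

clf = MLPClassifier(hidden_layer_sizes=(64,), random_state=0)
classes = np.arange(10)

# Learn task A first.
for _ in range(30):
    clf.partial_fit(X[task_a], y[task_a], classes=classes)
print("task A accuracy after A:", clf.score(X[task_a], y[task_a]))

# Then learn task B with no rehearsal of A: the updates overwrite the
# weights that encoded task A, so accuracy on A typically collapses.
for _ in range(30):
    clf.partial_fit(X[task_b], y[task_b], classes=classes)
print("task A accuracy after B:", clf.score(X[task_a], y[task_a]))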
