When I was at the Intelligent Content Conference a few weeks ago, I had the pleasure of meeting Pavan Arora of IBM Watson. I listened to him speak about Watson and talk about the enormous content opportunity we have in front of us. I found his talks (both private and public) intriguing and inspiring.
I’ve been in the technology industry for a long time. I started long before we had a public internet. Before cellphones. Before laptop computers. Let’s just say, I’ve seen a lot. However, for the past decade or so, I haven’t felt a tectonic shift in the way we compute. Sure, smartphones have made remarkable advancements. There are myriad applications that can do everything from making a simple phone call to turning your lights on and off.
In general, though, the way we interact with our devices has remained pretty stagnant. We type (or speak) things into a computer and it gives us the information it has. We can ask for that information to be presented in different ways. We can use keyword searches to see what other information exists on the web. But we are asking for and receiving pretty much the same information we’ve grown accustomed to since the 1990s.
Cognitive computing systems represent a true change in the way we interact with our devices and the types of insights those devices serve up. Using enormous domain-specific corpora, these incredibly powerful systems can analyze data at unimaginable speeds. The systems draw inferences, find parallels, and even score multiple, often competing results to provide information that a mere human might never glean, no matter how much time they had.
How Cognitive Computing Works
The magic of cognitive computing is based on natural language processing (NLP). In essence, everything an AI system does is dictated by its ability to understand the meaning and intent of content. These complex systems read and interpret text much like a person does: they break sentences into their components and process the semantics of the written material. Once the system understands the meaning and intent of the text, it can quickly draw correlations and inferences that point to potential answers to questions. Most cognitive systems use a broad set of linguistic models and NLP algorithms to process the data.
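To make "breaking down sentences and processing semantics" a little more concrete, here is a deliberately tiny Python sketch. It is an illustrative toy, not how Watson or any real NLP engine works: it reduces a sentence to its content-bearing terms (using a made-up stopword list) and scores how much meaning two texts share by term overlap.

```python
import re
from collections import Counter

# A toy stopword list (an assumption for illustration only).
STOPWORDS = {"the", "a", "an", "is", "of", "to", "and", "on"}

def tokenize(text):
    """Split text into lowercase word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def semantic_fingerprint(text):
    """Keep only the content-bearing terms: a crude stand-in
    for extracting the meaning of a sentence."""
    return Counter(t for t in tokenize(text) if t not in STOPWORDS)

def similarity(a, b):
    """Score shared meaning between two texts as the ratio of
    overlapping content terms to all content terms."""
    fa, fb = semantic_fingerprint(a), semantic_fingerprint(b)
    shared = sum((fa & fb).values())
    total = sum((fa | fb).values())
    return shared / total if total else 0.0
```

Real linguistic models go far beyond word overlap (parse trees, entities, word senses), but the basic move is the same: turn text into comparable features, then score.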
How Cognitive Computing Systems Learn
To illustrate how cognitive computing systems learn, let’s take a look at IBM Watson. Here is what needs to happen in order for IBM Watson to solve complex problems:
- A large body of content for a particular domain – often called a corpus – is uploaded.
- The content is curated by human experts. During curation, experts discard information that is out of date, immaterial, or otherwise unreliable.
- After curation, the content is ingested. When Watson ingests content, it pre-processes the content, creating indices and metadata. This makes working with the content more efficient in the future.
- Once ingested, human experts train Watson on how to interpret the information. This is called Machine Learning.
- During Machine Learning, a human expert uploads training data in the form of question and answer pairs. These pairs are the ground truth for the data.
- The question/answer pairs teach Watson to look for linguistic patterns of meaning in the domain. Not every question and answer about a topic needs to be uploaded. Instead, by processing the linguistics of the information, Watson learns how to read it. Watson can then apply this learning to new information as it is received.
- After training, Watson continues to learn about a domain from its ongoing interaction with users. Periodically, the interactions are reviewed by human experts and fed back into the system. This helps Watson to better interpret information in the future.
Remember, it is not the information itself that cognitive systems learn. It is linguistic pattern matching that allows these systems to locate and score information.
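The training-and-scoring loop described above can be sketched in a few lines of Python. Everything here is invented for illustration (the ground-truth pairs, the stopword list, the overlap scoring); a real system like Watson uses far richer linguistic models, but the shape is the same: score every trained pair against a new question and return the best-scoring answer along with its score.

```python
import re

def terms(text):
    """Reduce text to its content-bearing terms (toy stopword list)."""
    stop = {"the", "a", "an", "is", "are", "do", "i",
            "my", "it", "what", "how", "your"}
    return {t for t in re.findall(r"[a-z']+", text.lower()) if t not in stop}

# Hypothetical ground truth: question/answer pairs a human expert uploads.
GROUND_TRUTH = [
    ("How do I reset my password?", "Use the Reset link on the account page."),
    ("What are your support hours?", "Support is open 9am to 5pm on weekdays."),
]

def best_answer(question, pairs=GROUND_TRUTH):
    """Score each trained pair against the incoming question and
    return (score, answer) for the top-scoring pair."""
    def score(trained_q):
        a, b = terms(question), terms(trained_q)
        return len(a & b) / len(a | b) if a | b else 0.0
    ranked = sorted(((score(q), ans) for q, ans in pairs), reverse=True)
    return ranked[0]
```

Note that the system never stores "the answer to every question": it matches the linguistic pattern of a new question against patterns it learned from the ground truth, which is exactly the point made above.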
How Does This Affect Me…I Mean You?
On the one hand, the brilliance of cognitive computing lies in its ability to find a huge amount of data in a tiny amount of time. This data can be in any form: tweets, articles, blog posts, and so on. Once the machine has been trained, the content does not have to be structured in order to be found.
But therein lies the rub – in order to train the engine, content must be structured. And the structure needs to be very specific. Ground truth is always structured in question/answer pairs. In order for your company to offer services via a cognitive system (whether a chatbot or IBM Watson, or something else), the system must be loaded with content and trained using the ground truth question/answer pairs.
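As a concrete (and entirely hypothetical) picture of that structure, ground truth might be stored as simple question/answer records. The field names and example pairs below are assumptions for illustration, not any particular vendor's format:

```python
import json

# A hypothetical ground-truth payload: the one structural requirement
# is that every record pairs a question with its answer.
ground_truth_json = """
[
  {"question": "How do I reset my password?",
   "answer": "Use the Reset link on the account page."},
  {"question": "What are your support hours?",
   "answer": "Support is open 9am to 5pm on weekdays."}
]
"""

# Parse the records into (question, answer) pairs ready for training.
pairs = [(p["question"], p["answer"]) for p in json.loads(ground_truth_json)]
```

The rest of your corpus can stay unstructured; it is only this training layer that demands the rigid question/answer pairing.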
Right now, as I type this, the world’s largest and most creative companies are busy training their cognitive systems to answer your questions. If your company wants to stay ahead of the game, you should be thinking about gathering your corpus, curating it, and creating structured ground truth pairs to train your system. Otherwise, you will miss the boat and your competition will get there before you.
If you had told me 30 years ago that the work I do today in the field of content would be the key to cognitive computing in the future, I never would have believed you. Yet, here we are at the beginning stages of an entire field of computing. All based on linguistics.
When not blogging, Val can be found sitting behind her sewing machine working on her latest quilt. She also makes a mean hummus.