What is natural language processing?

Teaching computers to make sense of human language has long been a goal of computer scientists. The natural language that people use when speaking to each other is complex and deeply dependent upon context. While humans may instinctively understand that different words are appropriate at home, at work, at school, in a store or in a religious building, none of these differences is apparent to a computer algorithm.

Over decades of research, artificial intelligence (AI) scientists have created algorithms that begin to achieve some level of understanding. While the machines may not master some of the nuances and multiple layers of meaning that are common in human speech, they can grasp enough of the salient points to be practically useful.

Algorithms that fall under the label “natural language processing (NLP)” are now deployed in industry and in homes. They are reliable enough to be a regular part of customer service, maintenance and domestic routines. Devices from companies like Google or Amazon routinely listen in and answer questions when addressed with the right trigger word.

How are the algorithms designed? 

The mathematical approaches are a mixture of rigid, rule-based structure and flexible probability. The structural approaches build models of phrases and sentences that are similar to the diagrams sometimes used to teach grammar to school-aged children. They follow many of the same rules found in grammar textbooks, and they can reliably analyze the structure of large blocks of text.

These structural approaches start to fail when words have multiple meanings. The canonical example is the use of the word “flies” in the sentence: “Time flies like an arrow, but fruit flies like bananas.” AI scientists have found that statistical approaches can reliably distinguish between the different meanings. The word “flies,” for instance, might form a compound noun 95% of the time that it follows the word “fruit.”
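To see the statistical side in action, the sketch below runs NLTK's pretrained part-of-speech tagger over the canonical sentence. This is only a minimal illustration using one open-source toolkit; resource names vary slightly across NLTK versions, and whether the off-the-shelf model resolves both senses of “flies” correctly depends on the corpus it was trained on.

```python
# Minimal sketch: statistical part-of-speech tagging with NLTK.
# The tagger assigns each word the tag it judges most probable in context,
# which is how "flies" can come out as a verb in one clause and a noun in the other.
import nltk

nltk.download("punkt", quiet=True)                       # tokenizer model
nltk.download("averaged_perceptron_tagger", quiet=True)  # pretrained statistical tagger

sentence = "Time flies like an arrow, but fruit flies like bananas."
tokens = nltk.word_tokenize(sentence)
print(nltk.pos_tag(tokens))
# Prints (word, tag) pairs, e.g. ("flies", "VBZ") for the verb reading
# and ("flies", "NNS") for the noun reading after "fruit".
```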

How do AI scientists build models? 

AI scientists have analyzed large collections of text that are easy to find on the internet to create elaborate statistical models that capture how context shifts meaning. A book on farming, for instance, would be much more likely to use “flies” as a noun, while a text on airplanes would likely use it as a verb. A book on crop dusting, however, would be a challenge.

Machine learning algorithms can build complex models and detect patterns that may escape human detection. It is now common, for instance, to use the complex statistics about word choices captured in these models to identify the author. 
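As a toy illustration of that idea, and not a description of any production system, the sketch below trains a classifier on word-frequency statistics from a handful of invented passages and then guesses which hypothetical author wrote a new sentence.

```python
# Toy sketch: authorship attribution from word-choice statistics.
# The texts and author labels are placeholders; a real system needs far more data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "The aerodrome hummed as the biplane climbed into a pale sky.",
    "The pilot checked the flaps and throttled up for takeoff.",
    "The orchard was thick with fruit flies by late August.",
    "We sprayed the rows at dawn before the flies grew active.",
]
train_authors = ["aviation_writer", "aviation_writer",
                 "farming_writer", "farming_writer"]

# TF-IDF turns word choices into numeric features; the classifier
# learns which word statistics are typical of each author.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_authors)

print(model.predict(["The crop was ruined before the flies were noticed."]))
```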

Some natural language processing algorithms focus on understanding spoken words captured by a microphone. These speech recognition algorithms also rely upon similar mixtures of statistics and grammar rules to make sense of the stream of phonemes. 
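As one concrete example, the open-source SpeechRecognition package for Python wraps several recognition engines behind a single interface. The sketch below is illustrative only: the file name is a placeholder, and it uses Google's free web recognizer, which requires a network connection.

```python
# Minimal sketch: transcribing a short audio clip with the
# SpeechRecognition package (pip install SpeechRecognition).
# "clip.wav" is a placeholder file name.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("clip.wav") as source:
    audio = recognizer.record(source)  # read the whole file into memory

try:
    # Sends the audio to Google's free web speech API for recognition.
    print(recognizer.recognize_google(audio))
except sr.UnknownValueError:
    print("Speech was unintelligible to the recognizer.")
```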


How is natural language processing evolving? 

Now that algorithms can provide useful assistance and demonstrate basic competency, AI scientists are concentrating on improving understanding and on tackling sentences with greater complexity. Some of this insight comes from creating more complex collections of rules and subrules to better capture human grammar and diction. Lately, though, the emphasis is on using machine learning algorithms on large datasets to capture more statistical details about how words are used.

AI scientists hope that bigger datasets culled from digitized books, articles and comments can yield more in-depth insights. For instance, Microsoft and Nvidia recently announced that they created Megatron-Turing NLG 530B, an immense natural language model that has 530 billion parameters arranged in 105 layers. 

The training set includes a mixture of documents gathered from the open internet and some real news that’s been curated to exclude common misinformation and fake news. After deduplication and cleaning, they built a training set with 270 billion tokens made up of words and phrases. 
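To get a sense of what a “token” is, the sketch below uses the GPT-2 tokenizer from the Hugging Face transformers library, not the Megatron-Turing tokenizer itself, to break a sentence into the subword units that such models actually count.

```python
# Minimal sketch: subword tokenization with a GPT-2 tokenizer from the
# Hugging Face `transformers` library (illustrative only; this is not the
# tokenizer used to build the Megatron-Turing training set).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
text = "Time flies like an arrow, but fruit flies like bananas."

tokens = tokenizer.tokenize(text)
print(len(tokens), tokens)  # common words stay whole; rarer words split into pieces
```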

The goal now is to improve reading comprehension, word sense disambiguation and inference. The models are also beginning to display what humans call “common sense,” and this improves as they capture more basic details about the world.
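One simple way to see reading comprehension at work is an off-the-shelf extractive question-answering pipeline, as in the sketch below. The pipeline downloads a small default model on first use; this is just an illustration, not a claim about how any particular large model is built.

```python
# Minimal sketch: extractive reading comprehension with a Hugging Face
# `transformers` question-answering pipeline.
from transformers import pipeline

qa = pipeline("question-answering")  # downloads a small pretrained model on first use

context = (
    "Microsoft and Nvidia announced Megatron-Turing NLG 530B, a natural "
    "language model with 530 billion parameters arranged in 105 layers."
)
result = qa(question="How many parameters does the model have?", context=context)
print(result["answer"], round(result["score"], 3))
```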

In many ways, the models and human language are beginning to co-evolve and even converge. As humans use more natural language products, they begin to intuitively predict what the AI may or may not understand and choose the best words. The AIs can adjust, and the language shifts. 

What are the established players creating? 

Google offers an elaborate suite of APIs for decoding websites, spoken words and printed documents. Some tools are built to translate spoken or printed words into digital form, and others focus on finding some understanding of the digitized text. One cloud API, for instance, will perform optical character recognition while another will convert speech to text. Some, like the basic natural language API, are general tools with plenty of room for experimentation, while others are narrowly focused on common tasks like form processing or medical knowledge. The Document AI tool, for instance, is available in versions customized for the banking industry or the procurement team.
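A minimal sketch of calling the basic natural language API through Google's Python client library is shown below. It assumes the google-cloud-language package is installed and that Google Cloud credentials are already configured in the environment; the sample text is made up.

```python
# Minimal sketch: sentiment analysis with the Google Cloud Natural Language API.
# Assumes `pip install google-cloud-language` and configured GCP credentials.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()
document = language_v1.Document(
    content="The new checkout flow is fast and painless.",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)

response = client.analyze_sentiment(request={"document": document})
sentiment = response.document_sentiment
print(sentiment.score, sentiment.magnitude)  # score runs from -1 (negative) to +1 (positive)
```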

Amazon also offers a wide range of APIs as cloud services for finding salient information in text files, spoken word or scanned documents. The core is Comprehend, a tool that will identify important phrases, people and sentiment in text files. One version, Comprehend Medical, is focused on understanding medical information in doctors’ notes, clinical trial reports and other medical records. They also offer pre-trained machine learning models for translation and transcription. For some common use cases like running a chatbot for customer service, AWS offers tools like Lex to simplify adding an AI-based chatbot to a company’s web presence. 
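The sketch below shows what calling Comprehend looks like through boto3, the AWS SDK for Python. It assumes AWS credentials and a region are already configured locally; the sample text is invented.

```python
# Minimal sketch: key phrases, entities and sentiment with Amazon Comprehend via boto3.
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")
text = "The delivery was late, but the support team resolved it quickly."

sentiment = comprehend.detect_sentiment(Text=text, LanguageCode="en")
phrases = comprehend.detect_key_phrases(Text=text, LanguageCode="en")
entities = comprehend.detect_entities(Text=text, LanguageCode="en")

print(sentiment["Sentiment"])
print([p["Text"] for p in phrases["KeyPhrases"]])
print([(e["Text"], e["Type"]) for e in entities["Entities"]])
```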

Microsoft also offers a wide range of tools as part of Azure Cognitive Services for making sense of all forms of language. Its Language Studio begins with basic models and lets you train new versions to be deployed with the Bot Framework. Some APIs, like Azure Cognitive Search, integrate these models with other functions to simplify website curation. Some tools are more applied, such as Content Moderator for detecting inappropriate language or Personalizer for finding good recommendations.
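As a minimal sketch, the language side of Azure Cognitive Services can be reached through the azure-ai-textanalytics client library; the endpoint and key below are placeholders for your own resource, and the sample text is made up.

```python
# Minimal sketch: sentiment and key-phrase extraction with the Azure Text Analytics
# client (pip install azure-ai-textanalytics). Endpoint and key are placeholders.
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

docs = ["The onboarding flow was confusing, but support was excellent."]
sentiment = client.analyze_sentiment(docs)[0]
phrases = client.extract_key_phrases(docs)[0]

print(sentiment.sentiment)   # positive / neutral / negative / mixed
print(phrases.key_phrases)
```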

What are the startups doing? 

Many of the startups are applying natural language processing to concrete problems with obvious revenue streams. Grammarly, for instance, makes a tool that proofreads text documents to flag grammatical problems caused by issues like verb tense. The free version detects basic errors, while the $12 premium subscription offers more sophisticated checks, like identifying plagiarism or helping users adopt a more confident and polite tone. The company is more than 11 years old, and its tool is integrated with most online environments where text might be edited.

SoundHound offers a “voice AI platform” that other manufacturers can add so their product might respond to voice commands triggered by a “wake word.” It offers “speech-to-meaning” abilities that parse the requests into data structures for integration with other software routines. 

Shield wants to support managers who must police the text inside their office spaces. Its “communications compliance” software deploys models built with multiple languages for “behavioral communications surveillance” to spot infractions like insider trading or harassment.

Nori Health intends to help sick people manage chronic conditions with chatbots trained to counsel them on the behaviors that best mitigate their disease. It is beginning with “digital therapies” for inflammatory conditions like Crohn’s disease and colitis.

Smartling is adapting natural language algorithms to automate translation, so companies can better deliver software to people who speak different languages. It provides a managed pipeline to simplify the process of creating multilingual documentation and sales literature at a large, multinational scale.

Is there anything that natural language processing can’t do? 

The standard algorithms are often successful at answering basic questions, but they rely heavily on connecting keywords with stock answers. Users of tools like Apple’s Siri or Amazon’s Alexa quickly learn which types of sentences will register correctly. They often fail, though, to grasp nuances or to detect when a word is used with a secondary or tertiary meaning. Basic sentence structures work, but more elaborate or ornate ones with subordinate clauses often do not.
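The limitation is easy to reproduce with a toy keyword matcher like the one below. This is a deliberately simplified sketch, not how any shipping assistant actually works, but it shows why phrasing that avoids the expected keywords, or reuses them in another sense, goes wrong.

```python
# Toy sketch of keyword-driven question answering and where it breaks.
# Stock answers are matched on keywords, so phrasing that avoids the
# keywords, or uses them in another sense, falls through or misfires.
STOCK_ANSWERS = {
    "weather": "Here is today's forecast.",
    "timer": "Starting a timer.",
    "music": "Playing your playlist.",
}

def answer(utterance: str) -> str:
    words = utterance.lower().split()
    for keyword, reply in STOCK_ANSWERS.items():
        if keyword in words:
            return reply
    return "Sorry, I didn't understand that."

print(answer("What's the weather like?"))               # matches "weather"
print(answer("Will I need an umbrella later?"))         # same intent, no keyword: fails
print(answer("Play some music about stormy weather"))   # keyword collision: wrong intent
```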

The search engines have become adept at predicting or understanding whether the user wants a product, a definition, or a pointer into a document. This classification, though, is largely probabilistic, and the algorithms fail the user when the request doesn’t follow the standard statistical pattern. 

Some algorithms are tackling the reverse problem of turning computerized information into human-readable language. Some common news jobs, like reporting on the movement of the stock market or describing the outcome of a game, can be largely automated. The algorithms can even deploy useful nuance, especially in areas with great statistical depth like baseball. The algorithms can search a box score, find unusual patterns like a no-hitter and add them to the article. The texts, though, tend to have a mechanical tone, and readers quickly begin to anticipate word choices that fall into predictable patterns and form clichés.
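A much-simplified sketch of that kind of template-driven recap is shown below; the box-score fields, names and the no-hitter check are all made up for illustration.

```python
# Toy sketch: template-based natural language generation from a box score.
# All fields and names are invented for illustration.
box_score = {
    "home": "Hawks", "away": "Comets",
    "home_runs": 4, "away_runs": 0,
    "away_hits": 0,  # hits allowed by the home team's pitcher
    "winning_pitcher": "J. Rivera",
}

def recap(game: dict) -> str:
    if game["home_runs"] > game["away_runs"]:
        winner, loser = game["home"], game["away"]
    else:
        winner, loser = game["away"], game["home"]
    high = max(game["home_runs"], game["away_runs"])
    low = min(game["home_runs"], game["away_runs"])
    text = f"The {winner} beat the {loser} {high}-{low}."
    # Scan for an unusual pattern worth an extra sentence, such as a no-hitter.
    if game["away_hits"] == 0:
        text += f" {game['winning_pitcher']} threw a no-hitter."
    return text

print(recap(box_score))  # "The Hawks beat the Comets 4-0. J. Rivera threw a no-hitter."
```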
