
The expanding ease and utility of text analytics and natural language processing

The strategic gains of text analytics are myriad and, quite possibly, greater today than ever before. The influx of advanced machine learning approaches impacting natural language technologies makes textual analysis more accessible to the enterprise than it was even five years ago.

Statistical modeling techniques also accelerate traditional natural language processing (NLP) methods, reducing their time-to-value. Conversely, pairing these conventional methods with their newer statistical counterparts heightens the accuracy of text analytics, which, in turn, increases the use cases for the full spectrum of natural language technologies.

Time-honored applications of sentiment analysis and contractual reviews are as prevalent as they ever were. There’s also an array of more modern deployments, including the automation of regulatory reports, spoken interfaces with front- and back-end IT systems, and generative text summaries of documents and visualizations.

Consequently, natural language technologies—including natural language understanding (NLU), natural language generation (NLG), and conversational AI—are embedded in everything from business intelligence (BI) solutions to the now ubiquitous remote conferencing options.

The expanding number of choices in this space means there’s also a burgeoning assortment of technological approaches to account for when selecting the right one for the enterprise. According to Franz CEO Jans Aasman, “Any technology having to do with text and unstructured text, that’s text analytics. Parsing a text is text analytics. But, doing entity extraction or computing a word embedding is text analytics, too.”
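To make Aasman's examples concrete, here is a minimal sketch of entity extraction and word embeddings using spaCy (assuming the en_core_web_md model is installed; the sample sentence is invented):

```python
# A minimal sketch of two basic text analytics tasks with spaCy.
# Assumes: pip install spacy && python -m spacy download en_core_web_md
import spacy

nlp = spacy.load("en_core_web_md")
doc = nlp("Franz released a new knowledge graph platform in Oakland last quarter.")

# Entity extraction: print each detected entity span with its label (ORG, GPE, DATE, ...)
for ent in doc.ents:
    print(ent.text, ent.label_)

# Word embeddings: every token carries a dense vector representation
print(doc[0].text, doc[0].vector[:5])  # first five dimensions of the first token's embedding
```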

Understanding the implications of these different methods is critical for obtaining the accuracy, traceability, explainability, and time-to-insight needed by organizations to achieve any desired objective from text analytics.

Structured data

The applicability of natural language technologies to text analytics spans structured data, semi-structured data, and unstructured data. Nonetheless, depending on how it’s configured, not every form of text analytics is suitable for each of these data types. The most immediately available form of NLP is often for structured data. Numerous BI vendors augment their relational analytics with natural language interfaces, enabling users to have conversational interactions with them. In these cases, the systems understand questions (via natural language querying) about relational data to deliver answers and “embellish them with more information that prompts a new question,” explained Josh Good, Qlik VP of global product marketing.
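The toy sketch below illustrates the general idea of natural language querying over relational data, not any vendor's implementation: keywords in a question are mapped to measures and dimensions of a hypothetical "sales" table to build a SQL query.

```python
# A toy illustration of natural language querying over structured data:
# map keywords in a question to columns of a hypothetical "sales" table.
import re

MEASURES = {"sales": "SUM(amount)", "orders": "COUNT(*)"}          # hypothetical
DIMENSIONS = {"region": "region", "product": "product", "quarter": "quarter"}

def question_to_sql(question: str) -> str:
    q = question.lower()
    measure = next((sql for word, sql in MEASURES.items() if word in q), "SUM(amount)")
    groups = [col for word, col in DIMENSIONS.items() if re.search(rf"\b{word}\b", q)]
    sql = f"SELECT {', '.join(groups + [measure])} FROM sales"
    if groups:
        sql += f" GROUP BY {', '.join(groups)}"
    return sql

print(question_to_sql("What were sales by region last quarter?"))
# SELECT region, quarter, SUM(amount) FROM sales GROUP BY region, quarter
```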

The resulting text analytics applies to the questions asked (a form of unstructured data), not to the underlying source data, which requires additional analytics. Those analytics may be facilitated by BI vendors or natural language technology vendors partnering with them. When you see your data and note that sales are moving up, you’re likely to wonder about the cause. According to Emmanuel Walckenaer, Yseop CEO, “We can automatically analyze all the data and actually take the contributors and get some intelligence.” BI solutions imbued with NLG also produce textual explanations of visualizations.
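A simplified, template-based sketch of that kind of NLG over structured data follows; the figures are invented, and real products apply far richer analysis and language generation than this:

```python
# Analyze which contributor drove a change in a toy sales table,
# then generate a sentence explaining it (all figures are hypothetical, in $M).
sales = {
    "Q1": {"EMEA": 4.0, "Americas": 5.5, "APAC": 2.5},
    "Q2": {"EMEA": 4.2, "Americas": 7.1, "APAC": 2.6},
}

total_q1, total_q2 = sum(sales["Q1"].values()), sum(sales["Q2"].values())
deltas = {region: sales["Q2"][region] - sales["Q1"][region] for region in sales["Q1"]}
top_region, top_delta = max(deltas.items(), key=lambda kv: kv[1])

print(
    f"Total sales rose from ${total_q1:.1f}M in Q1 to ${total_q2:.1f}M in Q2; "
    f"{top_region} was the largest contributor, adding ${top_delta:.1f}M."
)
```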

Unstructured text

The more mature form of text analytics is performed on unstructured data sources such as documents, emails, and webpages. Advanced machine learning techniques have become popular for the rapidity with which they not only can understand this information, but also produce pertinent analyses and summaries of its import to business objectives. Transformer-based methods, such as Bidirectional Encoder Representations from Transformers (BERT) and, more recently, Generative Pre-trained Transformer 3 (GPT-3), are lauded for these capabilities, especially for their NLG propensities. They can provide “excellent exercises of summarization of conversations, sentiment analysis, keyword extraction, and things of that nature,” maintained Ignacio Segovia, Altimetrik head of product engineering.
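As a minimal sketch of those capabilities, the snippet below runs summarization and sentiment analysis with Hugging Face pipelines; the conversation text is invented and the model names are common public checkpoints, not a recommendation:

```python
# Summarization and sentiment analysis over a short, invented conversation.
# Assumes: pip install transformers
from transformers import pipeline

conversation = (
    "Customer: I was double-charged on my March invoice and support never called back. "
    "Agent: I'm sorry about that, I'll escalate the refund today."
)

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
sentiment = pipeline("sentiment-analysis")

print(summarizer(conversation, max_length=40, min_length=10)[0]["summary_text"])
print(sentiment(conversation)[0])   # e.g., {'label': 'NEGATIVE', 'score': ...}
```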

The cardinal advantage of these purely statistical deep learning methods is that, via techniques such as representation learning and sophisticated word embeddings, organizations can implement them quickly without significant amounts of upfront work. This boon is redoubled via transfer learning approaches that minimize training data quantities; a brief fine-tuning sketch follows the list below. Nevertheless, there are three shortcomings of pure deep learning approaches to text analytics:

♦ Dearth of knowledge: Although GPT-3 can sequence words together in the context of a particular domain, “it doesn’t know anything,” Aasman pointed out. “It doesn’t have a mental model, or memory, or sense of meaning.” Specifically, it lacks domain knowledge.

♦ Omissions: According to Walckenaer, the analysis of text or the generation of natural language isn’t as thorough with this approach as it is with others. “If you feed these models that Q1 revenue is one billion, it may create a sentence saying the revenue is one billion, without stating it’s Q1,” Walckenaer cautioned.

♦ Unsubstantiated results: Because they’re limited to what they’ve been trained on, purely statistical methods are prone to “create some sentences that invent stuff,” Walckenaer revealed. This issue can be acute for NLG deployments. “While working with banks, while working with pharma companies, while automating the writing of regulatory reports, you cannot afford these mistakes,” Walckenaer added. “It has to be perfect, auditable, and traceable.”
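Returning to the transfer learning point raised before this list, the sketch below adapts a pretrained checkpoint with only a handful of labeled examples; the data, labels, and model choice are all hypothetical, and a real deployment would use far more examples and an evaluation set:

```python
# Transfer learning sketch: fine-tune a pretrained model on a tiny, invented dataset.
# Assumes: pip install transformers datasets torch
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

examples = Dataset.from_dict({
    "text": ["Great quarterly results", "Revenue guidance was cut again",
             "Margins improved year over year", "The audit found serious issues"],
    "label": [1, 0, 1, 0],   # 1 = positive, 0 = negative (hypothetical labels)
})

tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
examples = examples.map(lambda row: tok(row["text"], truncation=True,
                                        padding="max_length", max_length=32))

model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased",
                                                           num_labels=2)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=examples,
)
trainer.train()   # the pretrained representations do most of the work
```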

Rules-based systems

Granted, there are numerous use cases in which the imperfections of purely statistical techniques don’t compromise business value. Segovia referenced their utility for “translation mechanisms in English that deliver that to another language.” Customer-facing chatbots are another example, along with forms of intelligent document processing (IDP) that render what Automation Anywhere CTO Prince Kohli called a “layout vocabulary that says for a tax form, it has these fields, this is what they look like, and with a high degree of confidence, we can extract what the person who wrote in it is trying to say.”
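The toy sketch below illustrates the “layout vocabulary” idea in the loosest sense: declare the fields a form is expected to contain, extract them with rules, and attach a rough confidence. The field names, patterns, and confidence heuristic are all hypothetical and bear no relation to any vendor's actual IDP implementation.

```python
# A toy "layout vocabulary": expected fields of a form plus extraction rules.
import re

TAX_FORM_LAYOUT = {
    "taxpayer_name": r"Name:\s*(.+)",
    "tax_year":      r"Tax Year:\s*(\d{4})",
    "total_income":  r"Total Income:\s*\$?([\d,]+\.?\d*)",
}

def extract_fields(text: str) -> dict:
    results = {}
    for field, pattern in TAX_FORM_LAYOUT.items():
        match = re.search(pattern, text)
        results[field] = {
            "value": match.group(1).strip() if match else None,
            "confidence": 0.95 if match else 0.0,   # placeholder confidence score
        }
    return results

sample = "Name: Jane Doe\nTax Year: 2023\nTotal Income: $84,250.00"
print(extract_fields(sample))
```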

However, for mission-critical applications involving cognitive search, healthcare billing, and regulatory compliance, “You need a rule-based approach if you want to have the correct answer,” Aasman commented. “You cannot have 80% right in a contract. That would be a legal fest for lawyers.” Because it’s predicated on rules, this non-statistical form of AI is inherently explainable. Consequently, multiple NLG vendors use this approach for crafting narratives “so that we can be absolutely sure that what we write is traceable and can be used with confidence in highly regulated industries,” Walckenaer remarked.

Symbolic AI

Rules-based techniques are integral to symbolic reasoning, which is also known as symbolic AI. This methodology “is more traditional NLP,” Kohli said. “It requires you to build a vocabulary and to build a domain model for that vocabulary.” More importantly, perhaps, it involves the use of taxonomies, whose hierarchies of definitions become the basis (or symbols) upon which systems reason when analyzing text. Such constructs are pivotal for enabling machines to understand and “find words by their synonyms and by their categories,” Aasman mentioned. These capabilities power use cases such as intelligent document search. Taxonomies typify the human-curated knowledge that’s core to symbolic AI and knowledge management. However, organizations can expedite the curation of that knowledge with statistical AI techniques involving these components:

♦ BERT: BioBERT is a variation of BERT specifically adapted to the biomedical field. For text analytics applications in this domain, BioBERT is useful “to extract the important words out of the text,” Aasman observed. This information can springboard use cases for inputting accurate billing codes based on descriptions of diagnoses or treatments patients undergo.

♦ GPT-3: This statistical AI approach can significantly hasten the time required to write rules pertaining to a specific text analytics application. “Rules-writing is time-consuming, but with GPT-3, you can do it 10 times as fast as before,” Aasman said.

♦ Additional deep learning techniques: A variety of deep learning approaches, including contrastive learning and manifold layout techniques, can analyze a corpus to populate a knowledge graph used for NLU or NLG with entities, terms, and concepts. “We are actually mixing machine learning for what we call knowledge-based creation to learn a subject or a segment,” Walckenaer said.
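As a minimal sketch of the last item, the snippet below runs a statistical extractor over a tiny, invented corpus and groups the entities it finds under taxonomy-style categories; a real pipeline would add curation and richer modeling before loading anything into a knowledge graph.

```python
# Seed a taxonomy/knowledge graph with entities extracted by NER.
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
from collections import defaultdict
import spacy

corpus = [
    "Acme Bank reported Q2 revenue of $1.2 billion.",
    "The FDA approved the new oncology drug from Initech Pharma.",
]  # invented example sentences

nlp = spacy.load("en_core_web_sm")
candidates = defaultdict(set)

for text in corpus:
    for ent in nlp(text).ents:
        candidates[ent.label_].add(ent.text)   # e.g., ORG -> {"Acme Bank", ...}

# A curator (or further rules) would review these candidate terms before
# loading them into the taxonomy or knowledge graph the NLU/NLG system uses.
for category, terms in candidates.items():
    print(category, sorted(terms))
```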

Domain models

The semantic clarity of rules-based systems, particularly when they’re expedited with some of the foregoing advanced machine learning techniques, is what gives these systems higher accuracy for text analytics than approaches involving statistical methods alone. “Using taxonomies, regular expressions, excluding words, including words, and processing entire vocabularies or training models is great,” Segovia acknowledged. Of equal importance is a subject area model (which, Aasman mentioned, GPT-3 lacks) that works in tandem with taxonomies to flesh out knowledge of a particular domain such as finance, supply chain management, or even an organization’s “tribal knowledge” in those or other areas. Some of the conversational AI capabilities Good alluded to benefit from such models, too.

“You build a data model, then our analytics engine can adjust that model and we can apply AI on top of that to have conversational analytics,” Good commented. With this approach, the underlying analytics engine can make intelligent inferences about terms (in users’ questions) which users can then refine by inputting business logic, rules, and taxonomies. Time-to-value is a capital advantage of this method. “You can start using it right away and iterate on it,” Good said.

According to Aasman, the domain models—also known as ontologies—“describe objects as objects that have a set of attributes.” The nomenclature for the objects themselves is specified in the taxonomy. However, the domain model is where organizations denote what the objects actually are by “describing all the features that are important for them,” Aasman indicated. To that end, it represents a department or organization’s cosmology or “world view” of that subject.
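A minimal sketch of a domain model in that sense, an object type plus the attributes that matter for it, is shown below using rdflib and RDFS; the namespace, class, and property names are hypothetical.

```python
# A tiny ontology: one object type ("Invoice") and the attributes describing it.
# Assumes: pip install rdflib
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/finance#")   # hypothetical namespace
g = Graph()

# "Invoice" is an object in this world view...
g.add((EX.Invoice, RDF.type, RDFS.Class))
g.add((EX.Invoice, RDFS.label, Literal("Invoice")))

# ...described by the attributes the organization cares about.
for attr in ("amount", "dueDate", "customer"):
    prop = EX[attr]
    g.add((prop, RDF.type, RDF.Property))
    g.add((prop, RDFS.domain, EX.Invoice))

print(g.serialize(format="turtle"))
```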

Digital agents

Virtual agents or bots are increasingly being employed in text analytics use cases. In some instances, their appeal is the sheer scale they enable for a particular application. Kohli explored an application in which bots are deployed at scale for “looking at documents all over the web, extracting data that can be used for anti-money laundering.” In this and other IDP use cases, the digital agents are extracting the underlying semantics behind the entities or objects to fuel their analysis. In this specific use case, this includes looking “for records of that entity who owns the account, how they interact with other accounts as well, what patterns they have, and then [spotting] these money flows,” Kohli said. Bots are also employed to interact with customers as a means of rapidly informing a company’s representatives of their concerns or reasons for calling.

“In the financial services space, a customer can engage with a bot asking for specific financial advice or a specific situation or financial state,” Segovia commented. “We can summarize entire conversations with a financial advice type of bot, send it over to the financial advisor, and then the advisor has enough context to engage in a call directly with the client.” This application is particularly interesting because it requires NLU to comprehend customers’ concerns, may involve conversational AI for the bot’s conversation with the customer, and necessitates NLG for the summaries. It also involves speech recognition and perhaps even spoken-word interfaces for systems via natural language. “Text and speech is the currency of thought,” Segovia reflected. “If we use natural language to interface with machines, text to speech, that ecosystem is extremely robust. Natural language processing with transformer architecture in particular enables a lot of that.”
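A sketch of that handoff, transcribing a recorded conversation and summarizing it for the advisor, could chain two Hugging Face pipelines as below; the audio file name and model choices are assumptions, not the implementation Segovia describes.

```python
# Speech recognition followed by summarization for an advisor handoff note.
# Assumes: pip install transformers (plus ffmpeg for audio decoding)
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")
summarize = pipeline("summarization", model="facebook/bart-large-cnn")

transcript = asr("customer_call.wav")["text"]          # hypothetical recording
handoff_note = summarize(transcript, max_length=60, min_length=15)[0]["summary_text"]

print("Context for the advisor:", handoff_note)
```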

Question-answering

Quite possibly, there are as many applications of text analytics as there are ways of facilitating natural language technologies. Those use cases—and their enterprise worth—are only enlarging as the unstructured data divide continues to broaden across industries. Natural language interactions between humans and IT systems are swiftly becoming normative. However, the greater utility of text analytics, whether or not that’s preceded by speech-to-text conversions, is the capacity to analyze the reams of unstructured text in the form of documents, conversational transcriptions, social media streams, and more.

The ability to make this information searchable, in natural language, and amenable to question-answering for what Aasman termed “business insights” is, for many organizations, the upper echelon of text analytics. “It’s ultimately about the business questions you can ask of the text, not whether you can find words in the text,” Aasman specified.
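As a minimal sketch of question answering over unstructured text, the snippet below uses an extractive QA pipeline; the passage and question are invented, and this is far simpler than the knowledge-graph-backed business question answering Aasman describes.

```python
# Extractive question answering over a short, invented contract passage.
# Assumes: pip install transformers
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

passage = (
    "The supplier agreement signed in March 2022 requires quarterly security "
    "audits and caps liability at $2 million for data-related incidents."
)
result = qa(question="What is the liability cap for data incidents?", context=passage)
print(result["answer"], result["score"])
```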

Department or enterprise-wide knowledge only aids in this endeavor—especially when it becomes a matter of legal or regulatory interest.

The original article can be found at: https://www.kmworld.com/Articles/Editorial/Features/The-expanding-ease-and-utility-of-text-analytics-and-natural-language-processing-156495.aspx
