Removing the trash from your Master Data can be a real competitive advantage. [shutterstock: 559430509, Y Photo Studio]
While experts are still debating the accuracy of matching and forecasting algorithms, the quality of the underlying master data lags behind. So on what basis should business-critical forecasts be made?
According to the German consultancy Lünendonk, around 85 percent of firms have a problem with their master data. What has been common knowledge in the business intelligence world for years is now surfacing in digital transformation: firms must prepare themselves for it in a process-oriented way.
That includes ensuring they have high-quality master data. New possibilities such as machine learning can play an active role in the process and ensure that high-quality data is generated from the very outset. Such an intervention in business processes must, however, be wanted by the firms, and those who forgo it will lose out in a few years' time.
Two current examples
Imagine a situation where your matching algorithm automatically writes to a customer who is not even the intended recipient because the data is incomplete or out of date. Yet how should the algorithm know what "complete" and "up-to-date" really mean for the customer's data? The cost incurred through the unnecessary loss of time (sending emails, making follow-up telephone calls, etc.) is one thing, but the real embarrassment is writing to the wrong customer in the first place.
“The cost incurred through the unnecessary loss of time is one thing, but the real embarrassment is writing to the wrong customer in the first place.”
Imagine that on account of an automatic algorithm you decide in favor of a supplier whose data, as in the first example, are incomplete and out of date, or were even entered incorrectly (terms and conditions, delivery reliability, legal requirements, etc.). Once the order has been placed, it is difficult to withdraw. Besides the effort entailed in cancelling the order, it is above all your credibility as a customer that will suffer.
Intervention in business processes
To improve the quality of master data, it is now possible to intervene in business processes, especially in an SAP environment, using the Hana platform. The advantage is that with Hana-based master data maintenance, matching algorithms can be used to signal to the person processing the data whether an input is correct or incorrect, or correct or incorrect with a certain degree of probability.
In a Hana technology environment, what correct or incorrect means can be determined on the basis of already existing data. How accurate the master data as a whole needs to be must be evaluated, defined and, if need be, clarified in advance. As data is added, its accuracy (consistency and completeness as well as current validity and semantics) will gradually increase and thus improve the basis for further decisions.
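The idea of deriving "correct" and "incorrect" from existing data can be sketched in a few lines. The sketch below is illustrative only: the field and its values are invented, and a real Hana-based check would run against the operative database rather than an in-memory list. It simply scores a new input by how often that value already occurs in the existing records, so that a never-seen value can be flagged for review.

```python
from collections import Counter

# Hypothetical field history: city values already present in the
# master data. Purely illustrative, not taken from any real SAP table.
existing_cities = ["Berlin", "Berlin", "Munich", "Hamburg", "Berlin", "Munich"]

counts = Counter(existing_cities)
total = sum(counts.values())

def plausibility(value: str) -> float:
    """Share of existing records carrying this value -- a crude
    stand-in for 'probably correct' vs. 'probably incorrect'."""
    return counts[value] / total

# A frequent value scores high; a typo never seen before scores 0.0
# and would be signaled to the person entering the data.
print(plausibility("Berlin"))  # 0.5
print(plausibility("Brlin"))   # 0.0
```

In practice the score would of course come from richer matching algorithms, but the principle is the same: the existing data pool defines what plausible input looks like.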
“… the process also places demands on the person handling the data. He is ultimately responsible for ensuring the high quality of data!”
This process applies machine learning methodology – learning from incoming data. However, the process also places demands on the person handling the data. He is ultimately responsible for ensuring the high quality of data! But with machine learning, he is given a tool to successfully carry out the required visual check.
A decisive step towards improving the quality of master data is interactively influencing data input by offering default values and/or probability statements. The empirical values gained from the machine learning approach must be available in real time; evaluating them in "adjoining" data management systems (Hadoop, etc.) does not help with real-time processing.
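How such interactive defaults might work can be sketched as follows. Everything here is assumed for illustration: the fields (country, payment terms), the sample records, and the function name are invented, and the in-memory dictionary merely stands in for empirical values held directly in the operative database. Given the value of one field, the sketch proposes the most frequent value of another field together with its empirical probability, which a maintenance UI could show as a pre-filled suggestion.

```python
from collections import Counter, defaultdict

# Illustrative empirical values: (country, payment terms) pairs drawn
# from already-posted supplier records -- assumed data, not a real schema.
history = [
    ("DE", "30 days net"), ("DE", "30 days net"),
    ("DE", "14 days 2%"),  ("US", "45 days net"),
]

by_country = defaultdict(Counter)
for country, terms in history:
    by_country[country][terms] += 1

def suggest_default(country):
    """Return (value, probability) of the most frequent payment terms
    for this country, so the UI can pre-fill the field in real time."""
    counter = by_country.get(country)
    if not counter:
        return None, 0.0
    value, n = counter.most_common(1)[0]
    return value, n / sum(counter.values())

print(suggest_default("DE"))  # most frequent terms for DE, with probability
print(suggest_default("FR"))  # no history: no suggestion
```

Because the lookup is a simple in-memory aggregation, it is fast enough to run on every keystroke, which is exactly why the article argues the empirical values belong in the operative system's database rather than in an adjoining big-data store.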
The empirical values (not necessarily the entire data pool from which they were extracted) should thus be right where they are needed at runtime, i.e. directly in the database of the operative system. Using these possibilities within the SAP Hana platform requires meeting licensing requirements. It is an investment in any case, but one that is worth it. Even today, not only can about five to ten percent of the entire time spent rectifying master data errors be saved; the knowledge gained from modern algorithms can also be reliably transformed into a real gain, right now and well ahead of market competitors.