AI is an important tool that can help drive a company’s profitability. Organizations working to become more efficient want to leverage AI to create a system of intelligence, one that combines data-driven insights with automation and recommendations to enhance business processes and maximize efficiency. A lack of quality data can be a setback for these organizations, but it is a problem that can be solved. Building an initial AI model paves the way to capturing the right data for the successful delivery of an intelligent experience: it highlights gaps in data collection, surfaces key trends, and helps organizations better understand their business.
What If I Have Poor Data Quality?
Large corporations that depend heavily on AI often begin with data quality problems, but many of these organizations deliver brilliant systems of intelligence that become the bedrock of their growth and profitability.
How? The systems of intelligence surface insights and information that help organizations better understand their business, customers, and industry — hence they make better, more informed business decisions. This is possible because these organizations understand that data quality is not a blocker to the development of systems of intelligence. Instead, feeding the right data to an AI model is the way forward, and the next question to ask after the business problem has been determined is: How do you get the right data to solve the problem?
AI Model Moves to Capture the Right Data
The available data in an organization does not always answer the questions we ask of it. Sometimes different data is needed to solve an organization’s business needs; capturing new data in the same way the old data was captured will not solve the problem. Intentional collection of data with the business questions in mind is a good place to start. Then, by designing AI models on baseline data, it is possible to identify gaps and create a roadmap for getting the right data into the next version of the model. From that point forward, data capture is aligned with what the system of intelligence actually needs. Sit back and let’s review a case study of an organization that wants to implement AI to reduce customer churn.
Case Study: Using AI to Reduce Customer Churn
Meet Roofers, a hardware store that sells and installs roofing materials. Roofers noticed a decline in revenue, and a deeper investigation revealed that the reduction could be traced to customer attrition. To mitigate this, the company decided to leverage AI to build a system of intelligence that would predict and prevent customer drop-off using automated processes that initiate actions and activities to retain customers. Some of these include assigning an account manager or a case manager to a customer’s account to follow up on cases that might lead to attrition, creating tasks for sales representatives to reach out to identified customers, emailing a coupon, and sending a satisfaction survey email to the customer with a call-to-action button.
A deep dive into their data revealed that the company had no out-of-the-box data showing which customers were likely to leave within a given period of time. They also lacked data on the historical actions that had previously worked in resolving dissatisfied customers’ cases. This organization, like many others, perceived this absence of specific information as a data quality problem. But in reality, the data was not to blame; it was just not the right data to solve the current problem.
Capturing the Right Data to Solve Customer Churn
To capture the right data to solve Roofers’ churn problem, we built an initial attrition AI model that scores customers based on traditional features of the customer and business relationship — such as spending patterns, variety of products purchased, amount spent month to month, and rate of orders returned — to understand how customers have been interacting with the business. Training the model on month-to-month data uncovered insights about which customers were likely to attrit within a short period of time, which in turn triggered a call to action for sales representatives.
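To make the shape of such a baseline model concrete, here is a minimal sketch in Python, assuming monthly customer snapshots have already been exported to a table and that a churn label can be derived from purchasing history (for example, customers with no orders in the following quarter). The file name, column names, and the scikit-learn gradient boosting approach are illustrative assumptions, not Roofers’ actual implementation.

```python
# Minimal sketch of a baseline attrition (churn) scoring model.
# Assumes a hypothetical monthly feature table; names are illustrative only.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# Hypothetical month-to-month features per customer
features = ["monthly_spend", "product_variety", "order_count", "return_rate"]

df = pd.read_csv("monthly_customer_snapshots.csv")  # assumed export of CRM data
X, y = df[features], df["churned_next_quarter"]     # assumed label derived from order history

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("Holdout AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Attrition risk score (0-1) used to trigger a call to action for sales reps
df["attrition_score"] = model.predict_proba(X)[:, 1]
at_risk = df[df["attrition_score"] > 0.7]            # assumed risk threshold
```

Even a simple baseline like this produces the risk scores needed to start the feedback loop described next.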
Our next phase was developing a list of business case actions that could be taken against the specific attrition risks captured by the model. Let’s use frequent returns as an example. A set of insight-driven recommendations was suggested to the sales reps, such as assigning a dedicated account manager to the customer or sending a survey. The trained model then captured data on the actions taken by the sales reps, matched it against the changes in the customer’s attrition score over a short period of time, and identified what works for different types of attrition risk.
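As a rough illustration of this feedback loop, the sketch below joins logged rep actions to the customer’s attrition scores before and after each action and summarizes which action reduces risk most for each type of attrition risk. The table layout and column names are assumptions for illustration only.

```python
# Sketch: match logged sales-rep actions against changes in attrition score.
# Table layouts and column names are assumed for illustration.
import pandas as pd

actions = pd.read_csv("rep_actions.csv")      # customer_id, risk_type, action, action_date
scores = pd.read_csv("attrition_scores.csv")  # customer_id, score_date, attrition_score

def score_near(df, customer_id, date, before=True):
    """Return the customer's attrition score closest to the action date (before or after)."""
    s = df[df["customer_id"] == customer_id].copy()
    s["score_date"] = pd.to_datetime(s["score_date"])
    s = s[s["score_date"] <= date] if before else s[s["score_date"] > date]
    if s.empty:
        return None
    s = s.sort_values("score_date")
    return s.iloc[-1]["attrition_score"] if before else s.iloc[0]["attrition_score"]

rows = []
for _, a in actions.iterrows():
    date = pd.to_datetime(a["action_date"])
    before = score_near(scores, a["customer_id"], date, before=True)
    after = score_near(scores, a["customer_id"], date, before=False)
    if before is not None and after is not None:
        rows.append({"risk_type": a["risk_type"], "action": a["action"],
                     "score_change": after - before})

# Average score change per (risk type, action): the most negative change "works" best
effect = (pd.DataFrame(rows)
          .groupby(["risk_type", "action"])["score_change"]
          .mean()
          .sort_values())
print(effect)
```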
The next phase of the solution was to automate the process. Insights from attrition data and actions taken on resolved cases were leveraged to provide the best response to each type of attrition risk detected by the AI model. Actions like assigning customer accounts to an account manager, sending out surveys to customers, or opening a case on behalf of the customer are initiated by the system in an automated process. After a short period of time, a reduction in attrition risk scores and an increase in revenue were observed as a result of the prompt actions taken from the insights delivered by the system of intelligence.
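A simplified sketch of what this automation step could look like is shown below. The risk types, action names, and dispatch function are hypothetical placeholders standing in for the actual CRM or workflow integration.

```python
# Sketch: automate the best-known response to each detected attrition risk.
# The action mapping and dispatch function are illustrative placeholders.

# Best action per risk type, derived from the (risk_type, action) effect table above
best_action = {
    "frequent_returns": "assign_account_manager",
    "declining_spend": "send_coupon_email",
    "open_support_cases": "open_case_follow_up",
}

def dispatch(customer_id, action):
    # Placeholder: in practice this would call the CRM or workflow automation API
    print(f"Triggering '{action}' for customer {customer_id}")

def handle_at_risk_customer(customer_id, risk_type, attrition_score, threshold=0.7):
    """Automatically initiate the retention action matched to the detected risk."""
    if attrition_score >= threshold and risk_type in best_action:
        dispatch(customer_id, best_action[risk_type])

# Example: a customer flagged by the model for frequent returns
handle_at_risk_customer("CUST-1042", "frequent_returns", 0.82)
```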
Additionally, carefully assessing the model results with business experts can identify additional sources of data that would improve the results in future iterations. This step is key to identifying better data and subsequently improving the performance of predictive AI models.
Delivering Systems of Intelligence: Atrium to the Rescue
Building the initial AI model and capturing the right data are essential to the successful delivery of AI to solve business problems; a lack of quality data is not a barrier, because it is a problem that can be solved. This is an area where our team at Atrium makes a key difference in delivering systems of intelligence. You can learn more from our expert series on data quality — centered on common patterns of data inadequacy and how we address them to help you realize a data-driven intelligent experience.