Our customer, an industry-leading property, casualty, and auto insurance provider, works with millions of leads from sources such as internet lead providers (ILPs), website quote requests, and agent-driven events, each with very different dynamics.
To support the company’s aggressive policy growth goals, their agents needed to be able to prioritize these leads based on the “likelihood to bind.” The company’s internal data science team attempted to create predictive models for leads using their machine learning platform, but the development was slow, and they lacked a meaningful way to surface the models’ results for agents to understand and act on within their Salesforce work processes.
Building models faster and more intentionally so agents can take action
The Atrium team established a cross-functional team spanning business and technology groups within the customer organization to prove that predictive models for lead propensity could be built faster, and with more intent and purpose, using Salesforce’s Einstein Discovery tool (now Einstein Studio and Data Cloud). Starting from the business perspective, our team first worked to understand the processes agents execute, the outcomes they are driving toward, and the end-to-end agent workflow.
We reviewed how agents handle leads from different sources, as well as the data available throughout the process, identifying nuances in both. To take one example, agents pay for leads provided by ILPs, whereas website leads are assigned to them. Yet website leads tend to convert at higher rates: a customer who goes directly to the website is likely less inclined to “shop around” than one who submits to a generic ILP site. Knowing which leads need to be worked in a timely manner can therefore significantly affect conversion rates.
While walking through the agent business processes, our team explored the underlying data using Einstein Discovery stories. Through the Einstein tool, we were able to quickly understand data correlations, patterns, missingness (i.e., missing data), data quality, and the overall statistical relevance of selected and engineered features.
We first created a model across all leads to establish the initial feature set, then evolved it into models segmented by lead source, further refining the predictive features and our understanding of how agents could act on them. This continuous “translation” between the business process and the predictive features of the data ran through a series of refinements, validating what we saw in the Einstein Discovery stories with agency representatives to confirm understanding and significance.
Most importantly, once the models were trained and tested, we shifted our focus to the “last mile”: how agents consume the model insights and put them into action. The user experience must be easy and intuitive, and the model must drive the agent workflow for prioritization and action.
Our resulting agent experience design used prioritized lead lists to surface lead priority and made the models transparent with opportunity-level insights (i.e., the reasons why a score is what it is). Business change enablement for independent agents is critical as well. For example, instead of showing agents a score of 89.75% likely to bind, we can translate that model-generated score into a higher-level indicator that drives behavior (e.g., Urgent, Hot, Warm).
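The score-to-indicator translation can be as simple as a threshold mapping. This sketch uses hypothetical cutoffs, not the customer’s actual tiering rules:

```python
def tier(score: float) -> str:
    """Map a bind-propensity score (0-100) to an agent-facing tier.

    Thresholds are illustrative, not the customer's actual cutoffs.
    """
    if score >= 80.0:
        return "Urgent"
    if score >= 60.0:
        return "Hot"
    return "Warm"

# The agent sees "Urgent", never the raw 89.75.
label = tier(89.75)
```

Keeping the raw probability out of the agent view avoids debates over decimal points and focuses behavior on which leads to work first.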
The result: 2-8% increase in “quote-to-bind” by office
Having built purpose-driven CRM models at scale with Einstein Discovery for 20K+ agents, we enabled our customer to uncover deeper insight into their policy sales process and better prepared their cross-functional team to speed up building, deploying, and productionizing models that drive measurable business value.
Even more importantly, we delivered the “last mile” that lets agents drive their business processes, consume insights, and take action. Agent offices increased their quote-to-bind rate by 2 to 8%, and at least 70% of agents surveyed felt they had a better sense of where to focus their efforts.
The test and learn has been eye-opening. We thought we understood the data before, but now we know the level of understanding that is needed to create models that are useful to our agents.
Analytics Leader
After all, what good is a predictive model if the end users can’t rank their daily work and act on the insights to drive productivity and results?