The Solution
Building models faster and more intentionally for agents to take action
The Atrium team established a cross-functional group spanning the customer’s business and technology organizations to prove that predictive models for lead propensity could be built faster, and with more intent and purpose, using Salesforce’s Einstein Discovery tool (now Einstein Studio and Data Cloud). Beginning with the business process perspective, our team first worked to understand the processes agents execute and the outcomes those processes are meant to drive.
We reviewed how agents handle leads from different lead sources, as well as the data available throughout the process, identifying nuances in both. For example, agents pay for leads provided by ILPs, whereas website leads are simply assigned to them. Yet website leads tend to convert at higher rates: a customer who goes directly to the website may be less likely to be “shopping around” than one who submits to a generic ILP site. As a result, knowing which leads need to be worked promptly can significantly affect conversion rates.
While walking through the agent business processes, our team explored the underlying data using Einstein Discovery stories. The tool let us quickly understand correlations, patterns, missingness (i.e., missing data), data quality, and the overall statistical relevance of the selected and engineered features.
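The kind of exploratory profiling described above can be sketched in a few lines of pandas. This is an illustrative analogue of what Einstein Discovery automates, not the tool itself; the dataset and column names (`lead_source`, `response_minutes`, `converted`) are hypothetical.

```python
import pandas as pd

# Hypothetical lead data; the schema is illustrative, not the customer's.
leads = pd.DataFrame({
    "lead_source": ["Website", "ILP", "ILP", "Website", "ILP", None],
    "response_minutes": [12, 240, None, 5, 90, 30],
    "converted": [1, 0, 0, 1, 1, 0],
})

# Missingness: share of null values per feature.
missing_rate = leads.isna().mean()

# Correlation of a candidate numeric feature with the outcome
# (pandas drops rows with missing values pairwise).
corr_with_outcome = leads[["response_minutes", "converted"]].corr()["converted"]

print(missing_rate)
print(corr_with_outcome)
```

Even on toy data like this, the negative correlation between response time and conversion surfaces quickly, which is the sort of signal the team confirmed against the business process.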
At first, we created a model for all leads to establish the initial feature set, then evolved the model to segment by lead source, further refining the predictive features and our understanding of how to act on them. This continuous process of “translating” between the business process and the predictive features of the data continued through a series of refinements, reviewing what we saw in the Einstein Discovery stories with agency representatives to confirm understanding and significance.
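The global-model-then-segment progression can be sketched with scikit-learn logistic regression. Everything here is a stand-in: the synthetic data, the features, and the choice of model are illustrative assumptions, not the Einstein Discovery internals.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic leads: response time in minutes, a lead-source flag, binary outcome.
# The generating assumptions (faster response and website source help) are
# illustrative, not customer data.
n = 400
response_minutes = rng.exponential(scale=60, size=n)
is_website = rng.integers(0, 2, size=n)
p = 1 / (1 + np.exp(0.02 * response_minutes - 1.2 * is_website))
converted = rng.binomial(1, p)

X = np.column_stack([response_minutes, is_website])

# Step 1: one model over all leads to surface the initial feature set.
global_model = LogisticRegression().fit(X, converted)

# Step 2: segment by lead source and fit one model per segment, so each
# segment's predictive features can be refined independently.
segment_models = {}
for source in (0, 1):
    mask = is_website == source
    # Within a segment the source flag is constant, so drop it.
    segment_models[source] = LogisticRegression().fit(
        X[mask][:, [0]], converted[mask]
    )
```

Segmenting this way mirrors the refinement the team did by lead source: the global model establishes which features matter overall, and the per-segment models sharpen them where handling and economics differ.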
Most importantly, with trained and tested models in hand, we shifted our focus to the “last mile”: how agents will consume the model insights and put them into action. The user experience must be easy and intuitive, and the model must drive the agent workflow for prioritization and action.
The resulting agent experience definition included prioritized lead lists to surface lead priority, along with model transparency through opportunity-level insights (i.e., the reasons why a score is what it is). Business change enablement for independent agents is critical as well. For example, instead of showing agents that a lead is 89.75% likely to bind, we can translate that model-generated score into a higher-level indicator that drives behavior (e.g., Urgent, Hot, Warm).
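The score-to-indicator translation is simple to sketch. The thresholds below are illustrative assumptions; in practice they would be tuned with the business to balance agent workload across tiers.

```python
def lead_tier(score: float) -> str:
    """Translate a model-generated bind propensity (0-1) into an
    agent-facing tier. Thresholds are illustrative, not the customer's."""
    if score >= 0.80:
        return "Urgent"
    if score >= 0.50:
        return "Hot"
    return "Warm"

print(lead_tier(0.8975))  # the 89.75% example above -> "Urgent"
```

Surfacing “Urgent” rather than “89.75%” gives agents an immediately actionable cue while the opportunity-level insights remain available for those who want the why.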