Paul Harmon

Expert Insights: Top-Down vs. Bottom-Up Approaches in Forecasting

Machine learning, analytical data visualization tools, and advanced AI are transforming the CRM space from one traditionally focused on data storage and minimal engagement into something new: full-scale systems of intelligence. Organizations looking toward the future want to take advantage of their historical data to build precise, actionable forecasts.

There’s no shortage of out-of-the-box tools that do forecasting, but many of them are costly and require users to learn additional systems on top of their CRM infrastructure. Ideally, organizations would be able to integrate precise, actionable forecasts into their front office so that insights can be surfaced and acted on in the system they’re already using every day.

Fortunately, Salesforce provides a framework for two powerful methods of forecasting that can be used individually or combined to create an effective forecasting experience. Top-down and bottom-up forecasting can both answer important questions about the future, but they require different data and address different business questions.

Top-Down Forecasts
What are top-down forecasts? Think of it this way – the San Francisco Giants baseball team might be interested in predicting the number of games they win next year. They could use information from last year’s season totals (total home runs hit, runs scored, etc.) to predict the total number of wins next year – and with a few previous seasons worth of data, they could potentially predict their record for several years into the future.

This kind of forecast is called a ‘top-down’ approach because it makes use of aggregate-level information: both the forecast and the data used to generate it cover the entire season. This forecast isn’t particularly helpful if I make a $20 bet with a friend on tomorrow’s Giants vs. Red Sox game, but it would certainly be helpful if I wanted to determine whether the Giants were likely to make the playoffs next year.

Sales and marketing organizations may not be concerned with hitting line drives or making the playoffs next year, but they have many of the same types of forecasting needs. Being able to generate top-level market forecasts is helpful for anticipating customer demand, efficiently allocating internal resources, and proactively addressing potential future difficulties.

Bottom-Up Forecasts
Let’s return to the SF Giants example. Maybe we are interested in knowing what’s likely to happen in each game they play. In this case, knowing the total number of home runs hit over the course of the season isn’t going to be quite as helpful – to make an accurate forecast about the next game, we need to have game-level data. Who’s healthy? Who’s the starting pitcher? Is the game being played at home or away? The answers to these questions are all crucial to generating an accurate prediction of the Giants’ next game.

This type of forecast is called a ‘bottom-up’ or ‘rollup’ forecast because a prediction is made for each game: the Giants’ probability of winning each matchup. To get a forecasted record for the season, the individual game predictions need to be ‘rolled up’, or combined, over the entire season.

In the same way, organizations tracking their opportunities can use their data, along with tools like Salesforce Einstein Discovery, to estimate the probability of winning a deal, converting a lead, and so on for each record in their dataset. These insights are helpful to sales reps working at the ground level, identifying key factors and highlighting actions that may help convert a deal. They become even more powerful, however, when rolled up to generate forecasts for teams, regions, or even the entire organization.

Methods for Implementing Top-Down Forecasts
Machine learning models such as neural networks and support vector machines are all the rage in the automated AI space. For top-down forecasting, however, these methods have the potential to fall short. In fact, many traditional statistical methods can outperform high-tech AI algorithms on this kind of problem.

This is because many machine learning models do not account for a phenomenon called autocorrelation: observations made at close points in time are more closely related than those made at more distant time points. To illustrate this concept using our baseball example, let’s think about making a $20 bet on the next game the Giants play. What information would be most helpful? Often, knowing about their most recent stretch of games is important. For instance, if the team’s star slugger got hurt in the previous game, it’s much more likely to affect the outcome of the Giants’ next game than it is to affect the outcome of games played a month from now. Or perhaps the team is in the middle of a losing streak, or on a long road trip. These short-term factors all impact games that are close to each other in time, but that effect lessens over the course of the season.

It therefore makes sense to account for this autocorrelation when building forecasting models. Many out-of-the-box statistical tools assume that observations in time are independent, so using a standard model on data that exhibit this time-based correlation can generate misleading results.
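To make autocorrelation concrete, here is a small sketch (the simulated series, coefficients, and seed are illustrative assumptions, not real game data) comparing a series with “memory” to independent noise:

```python
import random

random.seed(7)

def lag1_autocorr(xs):
    """Correlation between the series and itself shifted by one step."""
    n = len(xs) - 1
    a, b = xs[:-1], xs[1:]
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b)) / n
    var_a = sum((x - mean_a) ** 2 for x in a) / n
    var_b = sum((y - mean_b) ** 2 for y in b) / n
    return cov / (var_a * var_b) ** 0.5

# A series with "memory": each value leans on the previous one,
# like a team's form carrying over from game to game.
ar = [0.0]
for _ in range(499):
    ar.append(0.8 * ar[-1] + random.gauss(0, 1))

# Independent noise: no memory between observations.
iid = [random.gauss(0, 1) for _ in range(500)]

print(round(lag1_autocorr(ar), 2))   # strongly positive
print(round(lag1_autocorr(iid), 2))  # near zero
```

A standard model that assumes independence treats both series the same; a time-series method exploits the strong lag-1 correlation in the first one.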

The good news is that many tools exist for handling these types of data. Exponential smoothing models, moving average models, and smoothing splines, to name a few, readily handle autocorrelation and can be used to generate top-down forecasts. In Einstein Analytics, the timeseries function uses a method called triple exponential smoothing to generate robust forecasts that can be easily visualized in a dashboard.
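Einstein Analytics handles this internally, but the core idea of triple exponential smoothing can be illustrated with a minimal sketch. This is a simplified additive Holt-Winters implementation, and the smoothing constants and quarterly figures below are made-up assumptions:

```python
def holt_winters_forecast(y, m, horizon, alpha=0.3, beta=0.1, gamma=0.1):
    """Additive triple exponential smoothing: level + trend + seasonality.

    y: historical series (needs at least two full seasons); m: season length.
    """
    # Initialize level and trend from the first two seasons,
    # and seasonal offsets from the first season.
    season1 = sum(y[:m]) / m
    season2 = sum(y[m:2 * m]) / m
    level, trend = season1, (season2 - season1) / m
    seasonal = [y[i] - season1 for i in range(m)]

    # One smoothing update per observation: each component is a weighted
    # blend of the new evidence and its previous estimate.
    for t in range(m, len(y)):
        prev_level = level
        level = alpha * (y[t] - seasonal[t % m]) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        seasonal[t % m] = gamma * (y[t] - level) + (1 - gamma) * seasonal[t % m]

    # Project the level/trend forward and re-apply the seasonal offsets.
    n = len(y)
    return [level + (h + 1) * trend + seasonal[(n + h) % m] for h in range(horizon)]

# Three years of made-up quarterly bookings: upward trend plus a Q4 spike.
quarters = [10, 12, 11, 15, 12, 14, 13, 17, 14, 16, 15, 19]
forecast = holt_winters_forecast(quarters, m=4, horizon=4)
```

The forecast carries both the upward trend and the Q4 seasonal spike forward, which is exactly what a naive average of past quarters would miss.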

Implementing Bottom-Up Forecasts in Einstein Discovery
To implement bottom-up forecasts in Salesforce, Einstein Discovery is the tool of choice. Discovery can train logistic regression models on historical data that estimate the probability of success for each record in the dataset. By combining this predicted propensity of success with the size of each deal, lead, or opportunity in the data, we can estimate the expected value of each record. From there, it’s easy to use Einstein Analytics to aggregate the records in any meaningful combination: by team, by region, or along any other dimension.
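Under the hood, a logistic regression model turns a record’s features into a win probability via the sigmoid function. Here is a toy sketch of that scoring step; the feature names and coefficients are invented for illustration and are not Einstein Discovery’s actual output:

```python
import math

# Hypothetical coefficients for a pre-trained logistic regression model
# (in practice, these would be fit from historical win/loss records).
INTERCEPT = -2.0
COEF = {"discount_pct": 0.25, "prior_purchases": 0.6}

def win_probability(record):
    """Propensity of success: sigmoid of the linear score."""
    z = INTERCEPT + sum(COEF[k] * record[k] for k in COEF)
    return 1 / (1 + math.exp(-z))

deal = {"discount_pct": 5, "prior_purchases": 2}
p = win_probability(deal)  # a probability strictly between 0 and 1
```

Each record gets its own score, which is what makes the subsequent rollup to teams or regions possible.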

The bottom-up method takes into account information that can help drive process efficiency and prioritization across the organization. Consider this question: is it better to prioritize Deal A, worth $1,000 with a 95% success probability, or Deal B, worth $5,000 with a 50% probability of conversion? Calculating the expected value of each deal by multiplying its success probability by its total value, we see that Deal A has an expected value of 0.95 × $1,000 = $950, while Deal B has an expected value of 0.50 × $5,000 = $2,500. Reps can use this information to focus on high-propensity deals, high-dollar deals, or (smartly) deals with the highest expected value. The entire forecasted pipeline of Deals A and B can then be rolled up into an estimate of $950 + $2,500 = $3,450. Using Einstein Analytics and Discovery, these expected values can be calculated, rolled up, and used to generate action steps surfaced in the system reps are already using as part of their workflow.
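The expected-value prioritization and rollup above can be sketched in a few lines (the record layout and region field are illustrative assumptions):

```python
deals = [
    {"id": "A", "region": "West", "amount": 1_000, "win_prob": 0.95},
    {"id": "B", "region": "West", "amount": 5_000, "win_prob": 0.50},
]

def expected_value(deal):
    """Expected value = success probability x deal size."""
    return deal["win_prob"] * deal["amount"]

# Prioritize by expected value rather than raw amount or raw probability.
ranked = sorted(deals, key=expected_value, reverse=True)

# Roll up to a pipeline forecast (here by region; team or org works the same).
pipeline = {}
for d in deals:
    pipeline[d["region"]] = pipeline.get(d["region"], 0.0) + expected_value(d)

print(ranked[0]["id"])   # Deal B leads on expected value
print(pipeline["West"])  # 950 + 2500 = 3450
```

The same grouping logic extends to any dimension in the data, which is what the Einstein Analytics rollup does at scale.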

Moreover, Einstein Discovery helps identify key factors associated with improving the forecasted outcome. Our baseball model might find that changing the batting order increases the likelihood of winning a game, which could inform or improve how the manager makes in-game decisions. Similarly, for a sales organization, Discovery might identify that a 5% discount increases the win probability substantially more than a 4% discount, so reps can maximize their forecasted amount and better drive up their win rate. Based on these factors, organizations can focus energy on increasing the conversion likelihood of lower-probability deals while still making precise forecasts of each deal’s expected value.

Supercharge your CRM with Forecasting
Virtually every large sales and marketing organization uses a CRM, but not all of them use it to its full potential. A true system of intelligence combines smart use of data analytics, machine learning models, and business expertise to answer important questions, highlight areas for improvement, drive process change, and predict what may happen in the future.

This includes using well-developed statistical tools to power your forecasting models, and in Salesforce, those tools are right at your fingertips. Forecasting won’t solve every problem, and questions of inferential statistics (informing processes, etc.) should be addressed in a slightly different framework using tools like Einstein Discovery. However, by driving transformative impact and helping organizations become more data-centric, these methods can rapidly enhance an organization’s ability to make smart decisions about the future. Implementing models, analytics, and similar tools can also highlight shortcomings in organizational data and improve data quality, fueling a virtuous cycle of continuous improvement that leads to better, more efficient outcomes.
