The rise of Large Language Models (LLMs), and the scramble by every large technology company from Microsoft to Google to Salesforce to keep pace with OpenAI, presents an influx of powerful and transformational tooling for companies. Many experts believe it will be the most significant change since the internet.
The ability to leverage AI agents (like Salesforce’s recently launched Agentforce solutions) to increase efficiency at multiple levels in an organization can seem staggering in its impact and at times even magical in its result. But as with any emerging technology, an effective implementation has layers, and those layers are rarely talked about during flashy demos or “art of the possible” ideation sessions.
As transformational as they are and will continue to be, LLMs and agents are not magic, and they require thoughtful planning and good data to maximize the impact for your company.
This blog will explore the key considerations your organization should make when planning for agent-driven, LLM-powered use cases. While there are a plethora of efficiencies to be gained from generative applications that span business areas (e.g., swift content generation), that won’t be the focus here. Instead, we’ll target ways in which you can create the most intelligent and effective agents, trained specifically to deliver in-context insights and increase the efficiency of a sales or service user.
Atrium’s approach to LLM-powered agents
At Atrium, our methodology focuses on a user-centered approach to system design. By deeply understanding different personas — representative amalgamations of users within an organization — we can more effectively design systems that feel tailor-made for those personas, in turn boosting the ROI of a company’s investment in those systems.
A crucial part of our process called “Insights and Actions” explores the “path of inquiry” a given persona will take when executing a task. What questions will they need answered? What follow-up questions might they have? What actions would they most likely take once they have those answers?
A well-trained agent helps to answer those questions, but ideally, it also provides some leeway even if you don’t think of every second- and third-level question a user might ask. The agent (and the underlying model) should still be able to deliver an answer — provided you have the data to support that answer.
4 categories of insights
Providing accurate and complete training data to the underlying model is critical to that success. To help us think through what’s required to deliver next-level agent output, I’ll break down insights into four types: simple descriptive analytics, complex descriptive analytics, predictive analytics, and prescriptive analytics.
1. Simple descriptive analytics
For simple descriptive analytics, most LLM-powered agents can effectively return the answers a user wants in a timely manner. These are questions whose answers can be found in base data on your CRM platform, like “How many open opportunities do I have?”, “How many are in Stage 3 or greater?”, or “Which service agent resolved the most cases last week?”
After connecting your LLM to your CRM by grounding prompts with CRM data and configuring the agent, you should have an effective delivery mechanism for those simple descriptive analytics.
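Grounding a prompt with CRM data can be as simple as injecting the relevant records into the prompt itself. Here is a minimal sketch of that pattern; the record source (`fetch_open_opportunities`) and field names (`Name`, `StageName`, `Amount`) are hypothetical stand-ins for your real CRM query layer:

```python
def fetch_open_opportunities(owner_id):
    # Placeholder for a real CRM query (e.g., SOQL against Salesforce).
    return [
        {"Name": "Acme Renewal", "StageName": "Stage 3", "Amount": 120000},
        {"Name": "Globex Expansion", "StageName": "Stage 1", "Amount": 45000},
    ]

def build_grounded_prompt(owner_id, question):
    """Build an LLM prompt grounded in the user's own CRM records."""
    records = fetch_open_opportunities(owner_id)
    context = "\n".join(
        f"- {r['Name']} | {r['StageName']} | ${r['Amount']:,}" for r in records
    )
    return (
        "Answer using only the CRM records below.\n"
        f"Records:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = build_grounded_prompt("005XX0000012345", "How many open opportunities do I have?")
```

Because the agent answers from the records in the prompt rather than from the model’s general knowledge, simple descriptive questions stay accurate and user-specific.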
2. Complex descriptive analytics
Things get a little more nuanced when we consider “complex descriptive analytics,” a category I’m using to refer to any descriptive queries that require snapshotting of data over a period of time.
For example, let’s say I want to see the average call resolution time trended over the last 90 days. My model might be able to calculate that on the fly, but it would take a significant amount of time to process.
If I can point my LLM at an aggregated and trended dataset, suddenly, that query is quick and the user experience isn’t negatively impacted. Processing time and user experience are too often overlooked when designing an agentic experience, but they are critical to positive adoption.
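One way to build that aggregated dataset is a scheduled job that snapshots the metric daily, so the agent queries a small summary table instead of scanning raw case records. A minimal sketch, with illustrative field names:

```python
from collections import defaultdict
from datetime import date

def daily_avg_resolution(cases):
    """Snapshot average resolution time per day.

    cases: iterable of dicts with 'closed_on' (date) and 'resolution_minutes'.
    Returns {day: average_minutes}, a compact aggregate the agent can
    query quickly for a 90-day trend.
    """
    totals = defaultdict(lambda: [0, 0])  # day -> [sum_minutes, case_count]
    for c in cases:
        day = c["closed_on"]
        totals[day][0] += c["resolution_minutes"]
        totals[day][1] += 1
    return {day: s / n for day, (s, n) in totals.items()}

cases = [
    {"closed_on": date(2024, 5, 1), "resolution_minutes": 30},
    {"closed_on": date(2024, 5, 1), "resolution_minutes": 50},
    {"closed_on": date(2024, 5, 2), "resolution_minutes": 20},
]
snapshot = daily_avg_resolution(cases)
# snapshot[date(2024, 5, 1)] == 40.0
```

The trade-off is freshness for speed: the snapshot lags by one refresh cycle, but the trend query returns instantly instead of forcing the model to crunch raw data while the user waits.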
3. Predictive analytics
The power of agent-delivered insights grows exponentially when we train LLM models with predictive insights from which they can surface recommendations. “Which opportunity has the highest likelihood of closing?” is an example of a predictive question you might ask an agent.
You may have a “dumb probability” available in your CRM system — one that is calculated based on the stage or forecast category of a deal — or you may attempt to use an LLM to drive your deal prioritization. In the latter case, you might list the top 5 fields to consider in a prompt and ask the LLM to build a prioritized list from that information.
In the first case, you will get an underwhelming signal that lacks actionable information about predicted deal closure; in the second, you don’t have any guarantees that the model is going to be consistent or objective when making the decisions based on those prompts.
Instead, consider a “hybrid AI”-based solution: seeding an LLM prompt with the output of a predictive model. Combining an objective, data-driven prediction with contextual, natural language outputs from an LLM gives you a far more effective agent — one that is intelligent, informed, and consistent with its delivery of information.
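The hybrid pattern can be sketched in a few lines: a trained predictive model produces the objective score, and that score is injected into the prompt so the LLM explains and contextualizes rather than guesses. `score_opportunity` here is a placeholder for your real model’s predict method:

```python
def score_opportunity(features):
    # Placeholder for a trained classifier's predicted close probability.
    return 0.82

def hybrid_prompt(opportunity, features):
    """Seed the LLM prompt with the predictive model's output."""
    prob = score_opportunity(features)
    return (
        f"Opportunity: {opportunity['Name']}\n"
        f"Model-predicted close probability: {prob:.0%}\n"
        "Treating the probability above as ground truth, summarize why this "
        "deal is (or is not) a priority and suggest a next step."
    )

prompt_text = hybrid_prompt({"Name": "Acme Renewal"}, {})
```

The division of labor is the point: the predictive model is consistent and objective about the number, and the LLM is fluent and contextual about what the number means for the user.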
4. Prescriptive analytics
Effectiveness is compounded when we look at prescriptive insights, or those in which the agent (or the underlying model) suggests what the next best action should be in a given situation. “Which opportunity should I focus on next?” is a good example of a prescriptive question designed to help drive user efficiency by having an agent filter out noise and direct the user to the highest-value tasks.
With proper design, an agent and its underlying LLM model can understand that “what should I focus on next?” actually means something more nuanced. For example, when the agent gets that question, it should weigh which opportunity has the highest likelihood to close (based on a predictive model we have trained), which is missing key activities that would increase that likelihood, and which has the shortest window in which to complete those activities.
Again, you might list the top 5 fields to consider in a prompt, but that’s both inefficient and unrealistic for the majority of the organization. Doing the work to train the models and engineer the agents means regular business users can ask simple questions and still get valuable responses.
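To make that weighing concrete, here is an illustrative ranking that combines the three signals above: predicted close probability, count of missing key activities, and days remaining to complete them. The weights and field names are assumptions for illustration, not a production scoring formula:

```python
def focus_score(opp):
    """Combine close probability, activity gaps, and urgency into one score."""
    urgency = 1.0 / max(opp["days_left"], 1)  # shorter window -> higher urgency
    return opp["close_prob"] * (1 + opp["missing_activities"]) * urgency

def next_best_opportunity(opps):
    # The prescriptive answer to "what should I focus on next?"
    return max(opps, key=focus_score)

opps = [
    {"name": "Acme", "close_prob": 0.8, "missing_activities": 2, "days_left": 5},
    {"name": "Globex", "close_prob": 0.9, "missing_activities": 0, "days_left": 30},
]
best = next_best_opportunity(opps)
# Acme scores 0.8 * 3 * 0.2 = 0.48; Globex scores 0.9 * 1 * (1/30) = 0.03
```

Note that the highest raw probability does not win: Globex is more likely to close, but Acme has open activities and a closing window, so it is where the user’s attention moves the needle most.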
Automation for maximum efficiency
There is an additional level of insights delivery beyond prescriptive analytics: automation. This is when a model’s output is so trusted that you choose to accept the prescribed action 100% of the time, thus upgrading an agent from purely assistive — like a copilot — to one that is autonomous.
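The assistive-to-autonomous step often comes down to a confidence gate: below a threshold the agent only recommends, above it the action executes without review. A minimal sketch; the threshold value and action names are illustrative:

```python
AUTO_EXECUTE_THRESHOLD = 0.95  # illustrative; tune to your risk tolerance

def handle_prescription(action, confidence, execute, recommend):
    """Execute automatically when trusted, otherwise surface for a human."""
    if confidence >= AUTO_EXECUTE_THRESHOLD:
        return execute(action)      # autonomous: act without review
    return recommend(action)        # assistive: copilot-style suggestion

result = handle_prescription(
    "send_renewal_reminder", 0.97,
    execute=lambda a: f"executed:{a}",
    recommend=lambda a: f"recommended:{a}",
)
```

In practice the threshold is rarely a blanket 100%: teams typically automate only the action types where the model has a proven track record and keep a human in the loop everywhere else.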
Automation and autonomous agents are top-of-mind for service teams in particular. The same progression and principles apply to service use cases, with well-trained assistive agents delivering insights in context, but agents can be even more impactful when you consider them less as copilots and more as a lower-cost deflection channel for easily solvable cases.
The capabilities of a well-designed and well-trained autonomous agent far exceed those of a traditional chatbot, since the underlying LLM can more effectively handle unplanned queries or language that isn’t pre-programmed, and the cost savings when deflecting from a human call center agent are both significant and easily measurable.
Get a jump on the competition
We’re still in the early stages of one of the more dramatic changes to how we’ll do business, and because it’s early, there is tremendous opportunity for companies to leap ahead of their competitors.
While LLMs and agents are not magic, the combination of thoughtful planning and good data can maximize the effectiveness of the transformation they bring and the resulting return on your investment.
Learn more about Atrium’s data science and AI consulting services or contact our team for a more in-depth discussion of agent strategy for your business.