While there have been CRM systems tailored to specific sales methodologies, I don’t believe any of them have been commercial successes. Too many sales organizations have their own flavor-of-the-year sales methodology, or think they’ve come up with a completely new one that could not possibly be covered by one of the standard ones. So the smart CRM vendors have stayed agnostic, creating a platform that enables any sales methodology and encouraging third-party vendors to create add-on products.
No matter what sales methodology you currently subscribe to (including none at all), there are key indicators the CRM can provide that give you significant guidance about what your pipeline is really doing. Good analytics will help uncover process problems and, sometimes, really bad behavior. Some of these analytics may require significant cleverness in report writing (and variable setup), but here are some sales diagnostics I’m always looking for in clients’ systems (regardless of CRM vendor):
- Deal Size: Are the deals you win significantly smaller than the deals you lose? That’s not happy news about the past. Do the deals still in the pipeline resemble the losers more than the winners in size? That’s not happy news about the future.
- Zero $ Deals: What percentage of won deals have $0 as the amount? Ought to be zero percent, you’d think. Keep laughing. What percentage of lost deals have $0? Does the percentage of open deals with $0 amounts resemble the winners or the losers? Uh-oh.
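These first two checks boil down to a couple of group-by computations. Here’s a minimal sketch in plain Python, assuming each deal is exported as a record with a stage and an amount; the field names are hypothetical, so map them to whatever your CRM export actually calls them.

```python
# Hypothetical export: each deal is a dict with 'stage' and 'amount'.
# Field names are assumptions -- map them to your CRM's columns.

def avg_amount(deals, stage):
    """Average deal amount for a given stage (won / lost / open)."""
    amounts = [d["amount"] for d in deals if d["stage"] == stage]
    return sum(amounts) / len(amounts) if amounts else 0.0

def pct_zero(deals, stage):
    """Percentage of deals in a stage carrying a $0 amount."""
    in_stage = [d for d in deals if d["stage"] == stage]
    if not in_stage:
        return 0.0
    zeros = sum(1 for d in in_stage if d["amount"] == 0)
    return 100.0 * zeros / len(in_stage)

deals = [
    {"stage": "won",  "amount": 50000},
    {"stage": "won",  "amount": 0},       # a "won" deal worth nothing?
    {"stage": "lost", "amount": 120000},
    {"stage": "open", "amount": 110000},
]

print(avg_amount(deals, "lost"))   # losers bigger than winners: bad sign
print(pct_zero(deals, "won"))      # ought to be 0 percent; here it's 50.0
```

Run the same two functions against won, lost, and open, and the comparison the bullets describe falls out directly.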
- Stale Deals: What percentage of your open deals are past their close date? If it’s more than 10 percent, you’ve got an adoption and/or data fidelity problem that skews your pipeline. Best to just slam this door shut by auto-closing any open deal that is two weeks past its due date. And yes, it’s closed-lost. When the reps whine about this, just say “tough.”
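The auto-close rule is simple enough to express directly; most CRMs can implement it as a scheduled workflow. A sketch of the logic, with assumed field names and a two-week grace period:

```python
from datetime import date, timedelta

# Sketch of the auto-close rule: any open deal more than 14 days past its
# close date is flipped to closed-lost. Field names are assumptions.
def auto_close_stale(deals, today, grace_days=14):
    cutoff = today - timedelta(days=grace_days)
    for d in deals:
        if d["stage"] == "open" and d["close_date"] < cutoff:
            d["stage"] = "lost"          # yes, closed-lost
            d["auto_closed"] = True      # tag it so the stat stays visible
    return deals

deals = [
    {"stage": "open", "close_date": date(2024, 1, 1)},   # long stale
    {"stage": "open", "close_date": date(2024, 3, 1)},   # within the grace period
]
auto_close_stale(deals, today=date(2024, 3, 10))
print([d["stage"] for d in deals])   # ['lost', 'open']
```

Tagging the auto-closed deals (rather than silently flipping them) lets you keep measuring the adoption problem even after you’ve slammed the door shut.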
- Deals without Contacts: What percentage of your deals don’t have a Contact (or “Contact Role”) assigned? What percentage don’t have more than one Contact assigned? This little detail indicates two key things:
- The sales rep doesn’t know or care to tell management about the people actually involved with the deal
- Marketing will not be able to run its campaign-effectiveness reports to fine-tune spending. A telling detail indeed.
The right answer here is to make the Contact Role a requirement for advancing the deal beyond the first phase, but if you can’t enforce that, at least measure the problem.
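Measuring the problem is a one-pass count. A sketch, assuming each deal record carries a list of its attached contact roles (the field name is hypothetical):

```python
# Sketch: 'contacts' is the list of contact-role IDs attached to each deal
# (an assumed field name; pull it from your CRM's contact-role join table).
def contact_gap(deals):
    total = len(deals)
    none = sum(1 for d in deals if len(d["contacts"]) == 0)
    single = sum(1 for d in deals if len(d["contacts"]) == 1)
    return {"no_contact_pct": 100.0 * none / total,
            "single_contact_pct": 100.0 * single / total}

deals = [
    {"contacts": []},             # nobody attached at all
    {"contacts": ["c1"]},         # a single contact -- thin coverage
    {"contacts": ["c1", "c2"]},
    {"contacts": []},
]
print(contact_gap(deals))   # half the deals have no contact, a quarter only one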
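Measuring the contact-role gap is a one-pass count. A sketch, assuming each deal record carries a list of its attached contact roles (the field name is hypothetical):

```python
# Sketch: 'contacts' is the list of contact-role IDs attached to each deal
# (an assumed field name; pull it from your CRM's contact-role join table).
def contact_gap(deals):
    total = len(deals)
    none = sum(1 for d in deals if len(d["contacts"]) == 0)
    single = sum(1 for d in deals if len(d["contacts"]) == 1)
    return {"no_contact_pct": 100.0 * none / total,
            "single_contact_pct": 100.0 * single / total}

deals = [
    {"contacts": []},             # nobody attached at all
    {"contacts": ["c1"]},         # a single contact -- thin coverage
    {"contacts": ["c1", "c2"]},
    {"contacts": []},
]
print(contact_gap(deals))   # half the deals have no contact, a quarter only one
```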
- Deals without Campaigns: What percentage of deals have no campaign “touches” recorded against them? Are these really “deals spontaneously generated by sales,” or is there a data fidelity problem here? (One of my favorite behaviors is when sales reps deliberately delete any trace of marketing activities for the deals they win, to show how it was all their doing.) Again, compare the number of marketing touches for the historically won deals, the lost deals, and the current pipeline.
- Deals without Activities: What percentage of deals have no activities recorded against them? What is the average number of activities for won deals (in most enterprise deals, it’s going to be at least five)? What’s the average number for lost deals? In many sales organizations, the losers have more recorded activities than the winners. So much for “fail fast.”
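Both the campaign-touch and activity-count comparisons reduce to the same shape of query: average the touch count per deal by outcome. A sketch, assuming a per-deal count has already been exported (the field names are hypothetical):

```python
from statistics import mean

# Sketch: 'touches' is a per-deal count of logged activities (or campaign
# touches -- same query shape). Field names are assumptions.
def avg_touches_by_outcome(deals):
    return {stage: mean(d["touches"] for d in deals if d["stage"] == stage)
            for stage in ("won", "lost")}

deals = [
    {"stage": "won",  "touches": 8},
    {"stage": "won",  "touches": 6},
    {"stage": "lost", "touches": 12},   # more effort spent on the losers
    {"stage": "lost", "touches": 10},
]
print(avg_touches_by_outcome(deals))   # winners average 7, losers 11
```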
- Insta-Close Deals: What percentage of your deals were closed within a day or two of their creation? If it’s over a few percent, you’ve got an adoption and/or sandbagging problem on your hands. (Of course, in this analysis you need to screen out ecommerce and other system-generated deals that do not involve the sales people.)
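A sketch of the insta-close check, assuming created and closed dates are on each deal record (field names hypothetical); note the `stage` filter is also where you’d screen out system-generated deals:

```python
from datetime import date

# Sketch: flag deals that closed within two days of creation.
# Field names ('created_on', 'closed_on', 'stage') are assumptions.
def insta_close_pct(deals, window_days=2):
    closed = [d for d in deals if d["stage"] in ("won", "lost")]
    if not closed:
        return 0.0
    insta = sum(1 for d in closed
                if (d["closed_on"] - d["created_on"]).days <= window_days)
    return 100.0 * insta / len(closed)

deals = [
    {"stage": "won",  "created_on": date(2024, 5, 1), "closed_on": date(2024, 5, 2)},
    {"stage": "won",  "created_on": date(2024, 3, 1), "closed_on": date(2024, 4, 15)},
    {"stage": "open", "created_on": date(2024, 5, 1), "closed_on": None},
]
print(insta_close_pct(deals))   # 50.0 -- half the closed deals never really "opened"
```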
- Deal Velocity: Here’s where you’re going to need a pivot table or two, because what you want to do is compare how many days deals spend in each stage of the pipeline, looking for differences between the winners, the losers, and the open deals. You’re likely to find that winners go faster than losers, and you should be able to identify at which stage a deal is most likely to die. So much for “fail fast.” (Ditto the idea of screening out ecommerce and system-generated deals.)
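The velocity pivot can be sketched without a spreadsheet: group days-in-stage by (outcome, stage) and average. This assumes you can export stage-history rows with a precomputed days-in-stage value; a real export would give you entry/exit timestamps to subtract first.

```python
from collections import defaultdict
from statistics import mean

# Sketch: stage-history rows with outcome, stage, and days spent in that
# stage. The row shape is an assumption about your CRM's history export.
def velocity_pivot(rows):
    cells = defaultdict(list)
    for r in rows:
        cells[(r["outcome"], r["stage"])].append(r["days"])
    return {key: mean(vals) for key, vals in cells.items()}

rows = [
    {"outcome": "won",  "stage": "qualify", "days": 5},
    {"outcome": "won",  "stage": "propose", "days": 10},
    {"outcome": "lost", "stage": "qualify", "days": 9},
    {"outcome": "lost", "stage": "propose", "days": 30},  # deals stall and die here
]
pivot = velocity_pivot(rows)
print(pivot[("lost", "propose")])   # the stage where deals go to die
```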
- Deal Regression: What percentage of deals move out in time, backwards in stage, or down in value? If this is over a few percent, it’s an indication of bogusity in the pipeline.
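Detecting regression takes two snapshots of the pipeline rather than one report. A sketch, diffing snapshots keyed by deal ID; the record shape is an assumption (most CRMs can produce it from field history or a scheduled report):

```python
from datetime import date

# Sketch: compare two pipeline snapshots keyed by deal id. The field
# names and snapshot shape are assumptions.
def regressed_deals(prev, curr):
    flagged = []
    for deal_id, old in prev.items():
        new = curr.get(deal_id)
        if new is None:
            continue
        if (new["close_date"] > old["close_date"]     # slipped out in time
                or new["stage_no"] < old["stage_no"]  # moved backwards in stage
                or new["amount"] < old["amount"]):    # shrank in value
            flagged.append(deal_id)
    return flagged

prev = {"d1": {"close_date": date(2024, 6, 30), "stage_no": 3, "amount": 80000},
        "d2": {"close_date": date(2024, 6, 30), "stage_no": 2, "amount": 40000}}
curr = {"d1": {"close_date": date(2024, 9, 30), "stage_no": 3, "amount": 80000},
        "d2": {"close_date": date(2024, 6, 30), "stage_no": 2, "amount": 40000}}
print(regressed_deals(prev, curr))   # d1's close date slipped a full quarter
```

Divide the flagged count by the open-deal count and you have the regression percentage to watch.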
- Probability Accuracy: Most CRM systems have a “probability” percentage that defaults to a preset value for each stage. So look at the historical pipeline, evaluating deals at each stage to compute how many of them actually closed. The probability is that your probability percentages are optimistic, even at the later stages. Once you determine the average likelihood of winning from each stage, adjust those default probabilities in the system. (Note: my best advice is to avoid using the system’s standard probability field altogether if the system automatically pushes the percentage back to the default at each stage change. One way or the other, you still need to do the data analysis described here for modeling purposes.)
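The stage-by-stage win rate is: of all deals that ever reached a given stage, what fraction ended up won? A sketch, assuming you can derive the list of stages each deal passed through from stage history (field names hypothetical):

```python
# Sketch: 'stages_reached' is the list of stages a deal passed through
# (derived from stage history -- an assumed field), 'outcome' is won/lost.
def win_rate_by_stage(deals):
    passed, won = {}, {}
    for d in deals:
        for s in d["stages_reached"]:
            passed[s] = passed.get(s, 0) + 1
            if d["outcome"] == "won":
                won[s] = won.get(s, 0) + 1
    return {s: 100.0 * won.get(s, 0) / n for s, n in passed.items()}

deals = [
    {"stages_reached": ["qualify", "propose", "negotiate"], "outcome": "won"},
    {"stages_reached": ["qualify", "propose"],              "outcome": "lost"},
    {"stages_reached": ["qualify"],                         "outcome": "lost"},
]
print(win_rate_by_stage(deals))
# qualify ~33%, propose 50%, negotiate 100% -- compare with the CRM defaults
```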
- Forecast Accuracy: What percentage of the final closed amount was in the committed forecast? At what week of the quarter did the committed forecast come within 10 percent of the actuals (without ever falling back out of bounds)? When did it come within 5 percent? Two percent? Conduct this analysis at each level of management that applies adjustments or amendments to the underlying reps’ pipeline data.
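The “without ever falling back out of bounds” condition is the subtle part: you want the first week the forecast entered the tolerance band and then stayed there. A sketch, assuming you have the committed forecast captured week by week:

```python
# Sketch: weekly committed-forecast values versus the final actual. Finds
# the first week the forecast entered the tolerance band and never left.
def convergence_week(weekly_forecast, actual, tolerance=0.10):
    lo, hi = actual * (1 - tolerance), actual * (1 + tolerance)
    in_band = [lo <= f <= hi for f in weekly_forecast]
    for i in range(len(in_band) - 1, -1, -1):   # scan backwards for the last miss
        if not in_band[i]:
            # out of band in the final week means it never converged
            return None if i == len(in_band) - 1 else i + 2  # weeks are 1-based
    return 1   # in band from the very first week

weekly = [600, 800, 950, 1050, 975]               # committed forecast by week
print(convergence_week(weekly, actual=1000))              # within 10% from week 3 on
print(convergence_week(weekly, actual=1000, tolerance=0.02))  # never got within 2%
```

Run it per management layer with that layer’s adjusted numbers, and you can see exactly where the adjustments help or hurt.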
If you’ve got a substantial sales organization with different segments (e.g., US vs UK, commercial vs federal, telecom vs finance, enterprise vs SMB, channel vs direct), you’re going to need to break all these stats out by segment. The aggregate numbers will mask important issues that will jump off the page the instant you drill down into the segments. If you’re using a stats package for analytics, you’ll be able to flag where these breakouts are needed by looking for bi-modality or wide standard deviations.
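The segment breakout itself is a group-by plus spread statistics. A sketch, assuming each deal is tagged with a segment field (the name is hypothetical); a wide standard deviation within a segment, or an aggregate hiding two very different segment means, is the flag to drill further.

```python
from statistics import mean, pstdev

# Sketch: break a metric out by segment. A wide stdev (or a bimodal
# histogram) flags where the aggregate number is hiding two populations.
def segment_stats(deals, metric="amount"):
    by_seg = {}
    for d in deals:
        by_seg.setdefault(d["segment"], []).append(d[metric])
    return {seg: {"mean": mean(vals), "stdev": pstdev(vals)}
            for seg, vals in by_seg.items()}

deals = [
    {"segment": "enterprise", "amount": 100000},
    {"segment": "enterprise", "amount": 500000},   # huge spread within segment
    {"segment": "smb",        "amount": 5000},
    {"segment": "smb",        "amount": 6000},
]
stats = segment_stats(deals)
print(stats["enterprise"])   # mean 300000 with stdev 200000: worth drilling into
```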
This is a process, not an event
These diagnostic indicators are interesting when you first run them, but they become more important over time, particularly as you evaluate “improvements” to the sales process, reorganizations, messaging, and the like. This is a game of continuous improvement and tuning.
But the indicators by themselves won’t point you to specific fixes, and they mustn’t be interpreted simplistically because the problems can be as much a result of bad policy (like, “our product quality stinks”) as weak sales execution. The metrics simply say “lookee here.” What’s amazing is how few organizations routinely do that looking, or evaluate their financial model against the realities of their sales performance.