Lead scoring today: A candid conversation about its current limitations and opportunities for optimization

Jun 13, 2023

Co-Founder and CEO, Ortto

Lead scoring has long been a staple in the arsenal of marketers, providing a structured approach to identifying and prioritizing potential customers. However, traditional lead-scoring models have limitations. The linear marketing funnel that once guided our strategies no longer accurately reflects the complexity of the buyer journey.

Mike Sharkey, CEO and co-founder of Ortto, and Charlie Windschill, Director of Growth Marketing, had a candid conversation about the flaws of traditional lead scoring and alternative approaches. They shared their experiences, struggles, and the crucial need for marketers to become experts in their own products and embrace event-based data modeling.

How the buyer journey has changed

Charlie Windschill: A good starting place for this conversation is some context about how traditional lead-scoring models were designed, which was with the linear marketing funnel in mind. We've looked to them as a really useful tool to help marketers forecast revenue as it moves through the funnel and, importantly, to facilitate handoff points between marketing and sales teams. But the issue is—and I love a linear funnel—that it's just not accurate. The buyer journey isn't that neat and tidy anymore.

So in a traditional lead scoring model, you have a set of attributes—a combination of demographic and behavioral factors, such as marketing engagement—which are assigned points. One of the problems is that these points are assigned essentially at random, but an even bigger problem is that many of the attributes being scored have little to no correlation with intent to buy.

You probably have an MQL threshold you’re trying to get prospects to cross—maybe a hundred points, maybe a thousand. Let's take two different scenarios: person one is showing some really great content engagement—they're registering for a webinar, maybe attending it, downloading an ebook—and ultimately all those little touch points add up to exceed our threshold of one hundred points and this person becomes an MQL.

Person two, however, is showing what I would consider actual intent to buy. They're viewing product and pricing pages, and watching an on-demand demo before actually submitting a demo request. But in this scenario, both of our MQLs carry equal weight in the traditional lead scoring model, even though we know intuitively that's just not true. But they’re highly likely to be handled similarly by sales. And that’s just the behavioral factors, without even getting into the complexities of demographics, where things can get messier.
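
To make the gap concrete, here is a minimal sketch of the additive scoring Charlie describes, written in Python with entirely hypothetical event names and point values: both personas clear the same 100-point MQL threshold even though only one of them shows real buying intent.

```python
# A minimal sketch of a traditional additive lead-scoring model.
# Point values and event names are hypothetical, for illustration only.

POINTS = {
    "webinar_registered": 30,
    "webinar_attended": 40,
    "ebook_downloaded": 35,
    "pricing_page_viewed": 25,
    "demo_video_watched": 35,
    "demo_requested": 50,
}

MQL_THRESHOLD = 100  # the hundred-point threshold from the conversation

def score(events):
    """Sum points for every touchpoint, regardless of what it actually signals."""
    return sum(POINTS.get(event, 0) for event in events)

# Person one: heavy content engagement, no clear buying intent.
person_one = ["webinar_registered", "webinar_attended", "ebook_downloaded"]

# Person two: behaviors that actually look like intent to buy.
person_two = ["pricing_page_viewed", "demo_video_watched", "demo_requested"]

for name, events in [("person one", person_one), ("person two", person_two)]:
    total = score(events)
    print(f"{name}: {total} points -> MQL: {total >= MQL_THRESHOLD}")
# Both cross the threshold, so both look identical when handed to sales.
```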

Why salespeople and marketers no longer trust this system

Mike Sharkey: I think this is the approach that many of us marketers have been led to adopt over the years—and for good reason. It's fairly structured and simple to identify the different touchpoints we think might convey interest from someone, whether that's on our website or in their interactions with our marketing in general. But I've never talked to a salesperson or even a marketer in my life who believes in this traditional method. Yet we all still do it. I think that's largely because of the software we use today and the ready availability of lead-scoring mechanisms in those tools.

To your point, Charlie, another example of how it gets messy when you start adding in demographic-based scores: say you're targeting CMOs and you give them a hundred points, but you also give ten points to anyone who visits your homepage. After someone visits your homepage ten times, a person you're not even interested in is now the equivalent of a CMO. That's how stupid this kind of traditional scoring can be. We're not trying to make a mockery of this approach, because it does have a purpose and I know many people use it quite well, but I think we can do a lot better here and advance beyond it.

Charlie: I agree. Part of the reason we keep doing this is that something is better than nothing. But this something needs a bit of a facelift. Mike, you brought up a point about the trust people have in this system and we've got a few stats I want to share. Plenty of brands besides us have wised up to some of the challenges with a traditional approach to scoring. I really love an experiment they ran at Zendesk to test the efficacy of their own scoring mechanism.

Over a quarter-long experiment, they looked at about eight hundred leads: half were sales-ready MQLs and the other half were totally random, unqualified leads. They treated both groups the same and reached out to all of them, with the goal of understanding whether the scored leads showed a higher propensity to buy than the unscored ones. Ultimately, they found absolutely no difference.

Forrester also conducted a study that found 98% of MQLs never result in closed-won business. This broke my heart, but I believed it. I think this is where you realize it's not working as it's supposed to. It needs that facelift. But maybe it's even more than that. Maybe it's actually detrimental to us and to our businesses, because salespeople don't trust the MQLs marketing is passing to them. And I think if we're honest with ourselves, marketers really don't trust it either.

Identifying the touchpoints that actually correlate with intent

Mike: I think this also calls out how wrong we all are in terms of which touchpoints indicate intent to buy. We think through that marketing frame of reference—they engaged with our content and therefore they must be interested in buying our product—but that's not the right framing for how we should be thinking about scoring leads. And we also rarely look at the correlation with revenue. For the most part, I think this is because marketers just don't have access to a lot of this data, but what we do have access to are the marketing touchpoints, so that's what we rely on.

Charlie: Something I want to come back to is the idea of a linear funnel. Gartner did a study that investigated what the actual buyer journey looks like today and it's not that neat and tidy, which has huge implications for lead scoring in today's world.

Mike: Gartner’s study is refreshing. The linear funnel is much more fun to look at and easier to think through, but that’s not the reality of what’s going on, and this study probably didn’t even capture all of the touch points and emotions we go through before we buy something. We have no idea what influences the buyer's journey. We think we do, but we don't. There's obviously just too much for our brains to comprehend. We know it's not linear, so why do we keep measuring it in a linear way?

I think what we can do today is get more thoughtful about how we can measure demographics and event-based behavior scores that degrade over time. But in the longer term, I think this is where these large language models can really help us make sense of the buyer journey. We're not there yet and there are definitely some ways that we can start to structure our data to make us ready for that AI story moving forward.
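
One way to make behavior scores degrade over time, as Mike suggests, is to weight each event by its recency. Below is a minimal sketch assuming a simple exponential decay; the half-life and point values are illustrative assumptions, not a recommendation.

```python
# A minimal sketch of a behavior score that degrades over time.
# The half-life and point values are hypothetical assumptions.
import math
from datetime import datetime, timedelta

HALF_LIFE_DAYS = 14  # after 14 days an event is worth half its original points

def decayed_score(events, now):
    """events: list of (points, timestamp). Older events contribute less."""
    total = 0.0
    for points, ts in events:
        age_days = (now - ts).total_seconds() / 86400
        total += points * math.exp(-math.log(2) * age_days / HALF_LIFE_DAYS)
    return total

now = datetime.now()
events = [
    (40, now - timedelta(days=1)),   # viewed the pricing page yesterday
    (40, now - timedelta(days=60)),  # same behavior two months ago
]
print(round(decayed_score(events, now), 1))  # the recent event dominates the score
```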

Charlie: It's coming faster and faster. Now is the time to start putting those steps in place so we are ready when the tools and systems become ready and accessible. That's a natural way for us to progress into the next part of what we want to talk about, which is the idea of a product-qualified lead: taking that old way of scoring and thinking about how we can include attributes that are more indicative of actual product usage, product behavior, and intent to buy.

Product-led growth motions and product-qualified leads

Mike: So there are loads of software companies that run what we call a product-led growth motion. To me, this is probably the natural state for buying software, instead of making demo requests and trying to evaluate the software without actually using it. The real revolution of the last decade is that we now go and find apps that solve our problems: we sign up for them, we see if they can solve our problem, and if they can, we figure out how we're going to buy that software. That's really how product-led growth works.

A lot of us know this today and we know we should be tracking in-product behaviors and figuring out what the “aha” moment in our product is. What are the behaviors that will influence someone to buy or at least influence them enough that we should nudge them in a certain direction to get them to buy? It’s these in-product behaviors that really help us understand the buyer journey in a product-led growth motion.

But that's pretty hard to do, and very few companies I interact with are doing it—and I don't think that's something to be ashamed of. We, like every business, struggle with it too. What you'll find is that a lot of these Silicon Valley companies that have raised a lot of money can afford to hire data scientists and analysts and use numerous tools to grade their users when they sign up and use the product. They've developed very complex machine learning-based scoring models to help them identify who's likely to buy and who they should pass to sales, and that's a function of just being really good with data. But the rest of us don't have access to that and have been stuck applying points to demographic and firmographic attributes and to behaviors like attending a webinar or logging into our product.

I think one takeaway is to look at the simple steps you can take to understand which behaviors in your product actually show buyer intent, rather than just content consumption.
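
As a rough illustration of that takeaway, one simple first step is to compare how often converted and non-converted accounts performed each in-product behavior. The sketch below uses made-up accounts and event names purely to show the shape of the analysis.

```python
# A rough sketch: which in-product behaviors are more common among accounts
# that went on to buy? Accounts and event names are made up for illustration.

# (account_id, converted?, set of behaviors observed during the trial)
accounts = [
    ("a1", True,  {"invited_teammate", "connected_integration", "shared_photo"}),
    ("a2", True,  {"connected_integration", "shared_photo"}),
    ("a3", False, {"viewed_docs"}),
    ("a4", False, {"shared_photo"}),
]

converted = [behaviors for _, won, behaviors in accounts if won]
churned = [behaviors for _, won, behaviors in accounts if not won]

def rate(behavior, groups):
    """Share of accounts in a group that performed the behavior."""
    return sum(behavior in b for b in groups) / max(len(groups), 1)

all_behaviors = set().union(*(behaviors for _, _, behaviors in accounts))
for behavior in sorted(all_behaviors):
    lift = rate(behavior, converted) - rate(behavior, churned)
    print(f"{behavior}: converted {rate(behavior, converted):.0%}, "
          f"not converted {rate(behavior, churned):.0%}, lift {lift:+.0%}")
```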

Evolving lead scoring frameworks to include in-product behaviors

Charlie: I've had personal experiences at previous companies where we moved to a hybrid go-to-market model. We were asked to evolve our scoring model to look beyond those marketing engagements to actual product engagement. One of the key challenges was accessing the data and being able to merge it in a meaningful way with those marketing touchpoints to score the leads. One of the first things we had to do was build a relationship with the teams that actually owned the data and think about how we could combine it. I don't want to say that was a quick and easy job; it certainly took a lot of time, but once we cracked that code, everyone was on the same page.

This is also where we were able to start designing the programs that would help all sides of the table: not only does marketing care about acquiring leads and converting them to revenue, but the product team cares that customers are using the product the way they intended. Are they driving the appropriate behaviors, and are we driving efficiency through the full funnel? It is an evolution, but it is possible. Step one is figuring out how you find and access that data.
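
As a concrete illustration of that first step, the sketch below joins product usage events to marketing contacts by email address so that both can feed a single score. Field names and records are hypothetical.

```python
# A minimal sketch of merging product usage data with marketing contact data
# by email address. Field names and records are hypothetical.

marketing_contacts = [
    {"email": "pat@example.com", "attended_webinar": True, "title": "CMO"},
    {"email": "sam@example.com", "attended_webinar": False, "title": "Analyst"},
]

product_events = [
    {"email": "pat@example.com", "event": "created_project"},
    {"email": "pat@example.com", "event": "invited_teammate"},
]

# Index product events by email so each contact carries its in-product behavior.
events_by_email = {}
for e in product_events:
    events_by_email.setdefault(e["email"], []).append(e["event"])

merged = [
    {**contact, "product_events": events_by_email.get(contact["email"], [])}
    for contact in marketing_contacts
]
for row in merged:
    print(row["email"], row["product_events"])
```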

Mike: The meat and bones of the behaviors come once they get into the product, and everything should be focused on getting them into the product. If you start measuring from the moment they get in, tracking the behaviors that influence whether they're going to buy, whether they're just kicking tires, or whether you should nudge them along through onboarding all the way to the sale, then it can be really effective for understanding your pipeline at any given time, versus making the assumptions we talked about earlier.

Charlie: And we’re not suggesting that you throw scoring out the window. It's still valuable and useful for understanding your revenue and how leads engage with your brand. Mike, you mentioned that this approach is relegated to the tech elite at the moment. What do you think the turning point will be for the broad masses to adopt this approach?

Mike: The most important thing for a product-led growth motion is to understand and measure the behaviors in your product. For example, if you have a photo-sharing app, the point of success you want to get someone to might be sharing a photo. Typically, because this is quite technical, marketers have never really had the ability to think through what activities they might want to measure, so the first step is to sit down and figure out all the behaviors someone could perform in your product. You don't have to be right about it; it doesn't really matter at first. But once you start tracking and measuring that data, you can look at what behaviors lead to other behaviors and start building a framework of measurement around in-app product behaviors.
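
Following that suggestion, a lightweight starting point is simply to list the behaviors a user could perform and count which behavior tends to follow which. The sketch below assumes a hypothetical photo-sharing app and made-up event streams.

```python
# A small sketch of listing trackable behaviors and counting which behavior
# tends to follow which. Event names assume a hypothetical photo-sharing app.
from collections import Counter

# Step one: simply list every behavior someone could perform in the product.
BEHAVIORS = ["signed_up", "uploaded_photo", "shared_photo",
             "invited_friend", "upgraded_plan"]

# One ordered event stream per user (made-up data for illustration).
user_streams = [
    ["signed_up", "uploaded_photo", "shared_photo", "upgraded_plan"],
    ["signed_up", "uploaded_photo", "shared_photo", "invited_friend"],
    ["signed_up", "uploaded_photo"],
]

# Step two: once events are tracked, count which behavior follows which.
transitions = Counter()
for stream in user_streams:
    for current, nxt in zip(stream, stream[1:]):
        transitions[(current, nxt)] += 1

for (current, nxt), count in transitions.most_common():
    print(f"{current} -> {nxt}: {count}")
# In this toy data, sharing a photo is the step that precedes upgrading or inviting.
```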

I don't think that negates understanding demographic and firmographic data—whether they fit your ideal customer profile—alongside whether they're exhibiting the behaviors of someone who's likely to buy your product. By tying those scores together, you can start to make sense of where someone is at, beyond the traditional scoring. I think a lot of people simply aren't doing this, or aren't doing it very well, and that matters because as we move into that AI world we need that data. If you're not measuring it, you're not going to be able to do anything with AI, and you're going to fall behind.
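
A minimal sketch of tying those scores together might look like the following, assuming a hypothetical fit score built from firmographics and a behavior score built from product events; the weights, field names, and thresholds are illustrative only.

```python
# A minimal sketch of combining an ICP fit score with an in-product behavior
# score into one product-qualified-lead check. Weights, field names, and
# thresholds are hypothetical assumptions.

def fit_score(contact):
    """Score how well the contact matches the ideal customer profile."""
    score = 0
    if contact.get("title") in {"CMO", "VP Marketing"}:
        score += 50
    if contact.get("company_size", 0) >= 50:
        score += 30
    return score

def behavior_score(events):
    """Score in-product behaviors that suggest intent to buy."""
    weights = {"invited_teammate": 40, "connected_integration": 40,
               "viewed_billing_page": 30}
    return sum(weights.get(event, 0) for event in events)

def product_qualified(contact, events, fit_min=50, behavior_min=60):
    """Qualify a lead only when both dimensions clear their own bar."""
    return fit_score(contact) >= fit_min and behavior_score(events) >= behavior_min

contact = {"title": "CMO", "company_size": 120}
events = ["invited_teammate", "viewed_billing_page"]
print(product_qualified(contact, events))  # True: good fit and real product intent
```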

Why product-led scoring models are a growth opportunity for marketers

Charlie: I actually think this is a pretty cool opportunity for marketers to evolve their skill set. First, it's about becoming an expert in your own product; second, it's about starting to understand the event-based data model. That means going beyond sessions, click-through rates, and those traditional marketing KPIs to understand event-based data infrastructure and modeling, and being able to tie it all together. That's an awesome growth opportunity for the marketers out there who aren't doing this today.
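
For marketers used to session-level metrics, the shift Charlie describes is toward storing each interaction as a discrete event tied to a person. The sketch below shows what one such event record might contain; the field names are assumptions, not any particular vendor's schema.

```python
# A minimal sketch of an event-based data model: each record is one action
# performed by one person, with arbitrary context attached. Field names are
# illustrative, not any specific product's schema.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Event:
    contact_id: str       # who did it
    name: str             # what they did, e.g. "shared_photo"
    timestamp: datetime   # when they did it
    properties: dict = field(default_factory=dict)  # context, e.g. plan or device

events = [
    Event("c_123", "shared_photo", datetime(2023, 6, 1, 9, 30),
          {"album": "holiday", "device": "ios"}),
    Event("c_123", "upgraded_plan", datetime(2023, 6, 2, 14, 5),
          {"plan": "pro"}),
]

# Unlike a session count or a click-through rate, each event preserves who,
# what, when, and in what context, so scores and journeys can be rebuilt later.
print(len(events), events[0].name)
```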

Mike: That's just so important for people to grasp. It's not necessarily the marketer's fault if they're not doing it right now; a lot of this stuff has been overly complicated, and we just haven't had access to this data in a single place to process it and make sense of it. Getting that information into a single place is a really important starting point before you can do any of this. So while it's not the marketer's fault, once you do have it all together, then yes, it's your responsibility to understand that information. If you work for a company that sells its product online and you're not in the weeds with a deep understanding of the product, you're not doing your job.

Charlie: My parting words of wisdom for those who want to start taking this approach: if a particular area feels especially challenging—whether that's because you're not tracking product usage behavior, or it is being tracked but you don't have access to it, or you have all the data but not the tools in place to combine it—whatever that sticky area might be, that's where you should start.

This conversation has been edited and condensed for clarity. To hear the full conversation, watch the replay.
