Lead Scoring 2.0: How to Stop Handing Cold Leads to Sales
Picture the scene. A salesperson calls a lead who downloaded the whitepaper three weeks ago. Once. At 2:37 PM. They browsed the homepage for five minutes and then moved on. No return calls. No email opens. No activity on the pricing landing page.
The salesperson calls. The lead is surprised that anyone even remembers their details. The conversation lasts two minutes. Zero impact, time wasted on both sides.
This isn’t an exceptional story. It’s a daily occurrence in dozens of companies that have lead scoring—but they use it in its 1.0 version. That is, someone clicked on something, received points, crossed a threshold, and landed on the sales list.
The problem isn’t with the scoring concept itself. The problem is that the first version of the idea was far too simple for what we know today about customer behavior.
Why Version 1.0 Is No Longer Enough
Lead scoring, as originally conceived, addressed a simple problem: how to separate leads worth pursuing from those not yet ready. The logic was simple: assign points for behaviors, set a threshold, and then pass it on to the sales force.
This worked quite well in a world where the purchase path was linear. A customer would visit a website, download a resource, and fill out a form; the lead would land in the CRM. Marketing would qualify the lead, and sales would close it.
Only this world no longer exists.
Today, a customer (especially a B2B customer) spends weeks researching before filling out a form. They return to a website multiple times from different devices. They read blog articles at 2 a.m. They check a price list for 30 seconds to see the order of magnitude, then disappear for a week. They watch a YouTube demo because they don’t want to talk to sales. And they compare offers from four competitors simultaneously.
In this world, simply adding up points for clicks becomes a misleading heuristic. A lead who visited a website 20 times in a single day might be a journalist writing about the industry. A lead who only visited a pricing and plan comparison page might be a CFO ready for a sales conversation later that week.
The total points won’t tell you which is which.
And that’s why we need to talk about Lead Scoring 2.0.
What is the difference between 2.0 and 1.0
To avoid creating an artificial divide, Lead Scoring 2.0 isn’t a separate technology or a new product category. It’s a shift in approach to the data you already collect.
The difference between the first and second versions lies mainly in three areas.
First difference: context instead of sum.
In version 1.0, you counted points. In version 2.0, you ask for context. Thirty visits to a website in one day sends a different signal than thirty visits spread over three weeks. Visiting the price page immediately after opening a case study sends a different signal than accidentally clicking a link in a newsletter. Same action, completely different meaning.
Modern scoring doesn’t ask, “How many points did the lead collect?” It asks, “In what order did these behaviors occur and what does that say about their intentions?”
Second difference: negative scoring taken seriously.
Most systems support negative points, but few use them consistently. This is one of the most important changes in the 2.0 approach.
A lead who signed up for a newsletter a year ago and hasn’t opened a single message since shouldn’t keep their 50 points. Their score should drop, because their engagement has evaporated. Time since last activity is a signal that scoring 1.0 often ignores.
The same applies to signals that exclude. Someone only visits the “Careers” section? Negative points. Someone downloads content from a segment you don’t target? Zero value, not a plus.
Third difference: firmographic data as a filter, not as decoration.
In B2B, company size, industry, and contact person’s position aren’t profile trivia. They’re primary filters. If you’re selling software to companies with more than fifty employees, a lead from a sole proprietorship has zero commercial value—regardless of how many times they visit your website.
Demographic and firmographic scoring should work like a gateway: first we check whether the lead fits the ICP at all, and only then do we assess their engagement.
The fundamental problem: when should a lead go to sales?
Before we move on to the mechanisms, it is worth pausing for a moment on the question itself.
In a typical organization, marketing generates leads and sales closes them. The problem arises at the point of handoff—that is, in the definition of when a lead is “ready.”
Often this definition is either informal or based solely on the number of points. “Let’s give sales everything above 100 points.”
But 100 points can mean very different things: someone who was very active for a week and completely inactive for the past month; someone who visited the site multiple times but never opened a single email; someone who downloaded files from the partner section, not the customer section.
In the Lead Scoring 2.0 model, the handoff threshold should be based on a combination of signals, not a single number.
A good definition of “Sales Qualified Lead” answers several questions:
- Does this contact fit your ideal customer profile (company, position, industry, size)?
- Was their last activity recent enough for the topic to still be relevant to them?
- Is the interest focused on decision-making content (prices, comparisons, demos) and not just educational content?
- Was there any signal of purchasing intent – not just general curiosity?
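To make the checklist concrete, here is a minimal sketch of an SQL gate that combines the four questions. The field names (`icp_match`, `last_activity`, `decision_page_visits`, `intent_signal`) and the 14-day recency window are illustrative assumptions, not any platform’s actual schema:

```python
from datetime import date, timedelta

def is_sql(lead: dict, today: date) -> bool:
    """Combine the four SQL questions into a single gate.

    Field names and the 14-day recency window are illustrative
    assumptions, not a real CRM schema.
    """
    fits_icp = bool(lead.get("icp_match", False))              # company, position, industry, size
    recent = (today - lead["last_activity"]) <= timedelta(days=14)
    decision_focus = lead.get("decision_page_visits", 0) >= 2  # pricing, comparisons, demos
    intent = bool(lead.get("intent_signal", False))            # e.g. a demo request
    return fits_icp and recent and decision_focus and intent
```

The point of the sketch is the `and`: a lead passes only when all four signals line up, not when any single number crosses a line.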
What does behavioral scoring look like in practice?
Behavioral scoring assigns points for contact actions. In version 1.0, it looked something like this: website visit = 5 points, email opening = 3 points, file download = 10 points.
Simple. Clear. And quite primitive.
In the 2.0 approach, we begin to differentiate not only the type of activity, but also its context and its weight within the funnel.
Some examples of what this might look like:
Visiting a general page (e.g., a blog) is of low value, and anyone can do it. Visiting a pricing page or plan comparison page is of high value, and it’s a decision-making signal. Visiting the “About Us” and “Our Team” pages after visiting the pricing page is of very high value, and it’s a signal that someone is verifying credibility before engaging in a conversation.
Opening a newsletter – low value. Clicking a link in the newsletter leading to a case study – higher value. Clicking a link to a demo page or contact form – very high value.
Inactivity for 30 days after reaching the scoring threshold results in negative points or a reset. Purchasing intentions have an expiration date.
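The examples above can be sketched as context-aware scoring, where the same event earns a sequence bonus depending on what preceded it. Event names, base points, and bonus values here are illustrative assumptions:

```python
# Base points per event type; names and values are illustrative.
BASE_POINTS = {
    "blog_visit": 2,        # general page: low value, anyone can do it
    "newsletter_open": 1,
    "case_study_click": 8,
    "pricing_visit": 15,    # decision-making signal
    "about_visit": 3,
    "demo_click": 25,
}

def score_events(events: list[str]) -> int:
    """Score a chronological event list; sequence changes the meaning."""
    total, prev = 0, None
    for event in events:
        points = BASE_POINTS.get(event, 0)
        # Pricing right after a case study reads as buying intent.
        if event == "pricing_visit" and prev == "case_study_click":
            points += 10
        # "About Us" right after pricing reads as credibility-checking.
        if event == "about_visit" and prev == "pricing_visit":
            points += 10
        total += points
        prev = event
    return total
```

The same two events in a different order would score differently, which is exactly the point: 2.0 scores the sequence, not just the sum.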
This is the level that distinguishes scoring that actually qualifies from scoring that just sorts.
Demographic and Firmographic Scoring: The Underrated Half of the Puzzle
There is one situation that repeats itself regularly in companies working with B2B leads.
Marketing delivers leads. Sales checks the first few and finds that half are students, freelancers, or competitors. They go back to marketing with a grudge. Marketing takes offense, pointing out that the leads have high scores.
And they’re both right. And they’re both talking about something different.
Marketing measures engagement. Sales measures sales potential. They’re not the same thing.
Demographic and firmographic scoring solves this problem, provided it’s applied consistently as an input filter.
Before any lead can earn points for their behavior, they must pass a basic filter: does this person fit a profile that historically generates conversions? If not, their behavior becomes irrelevant.
In B2B, typical criteria include industry, company size, role (whether decision-maker, influencer, or end-user), geographic location, and sometimes specific attributes such as the technologies used or the company’s stage of development.
The fact that this data is often incomplete poses a challenge. But there are several ways to fill the gaps: data from forms, enrichment from external databases, behavior (e.g., the segment of content someone is viewing often indicates an industry), and in many cases, simply asking.
A form collecting name and email address is one level of data. A form collecting name, email address, job title, and company size is a completely different level of qualification.
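The gateway logic itself fits in a few lines. The criteria below (fifty-plus employees, target industries, decision-making roles) follow the examples from this section; the field names and the exact sets are assumptions for illustration:

```python
# Assumed ICP definition -- replace with your own criteria.
TARGET_INDUSTRIES = {"finance", "saas", "retail"}
QUALIFYING_ROLES = {"decision_maker", "influencer"}

def passes_icp_gate(contact: dict) -> bool:
    """Input filter: firmographics first, behavior second."""
    return (contact.get("company_size", 0) >= 50
            and contact.get("industry") in TARGET_INDUSTRIES
            and contact.get("role") in QUALIFYING_ROLES)

def qualified_score(contact: dict, behavioral_points: int) -> int:
    # Behavioral points only count once the contact passes the gate;
    # a sole proprietorship scores zero no matter how active it is.
    return behavioral_points if passes_icp_gate(contact) else 0
```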
Decay scoring: scoring that takes time into account
One of the most overlooked elements of Lead Scoring 2.0 is the so-called scoring decay – a mechanism that gradually reduces the score of a contact if it is no longer active.
The logic is simple: purchase interest isn’t a permanent state. Someone who researched the market intensively three months ago may have already made a decision (either in favor of the competition or “not now”), or they may simply be in a temporary project lull.
If you don’t subtract points for time, your lead database gradually turns into a collection of historical interest signals that look like a current pipeline. This is where salespeople’s calls to “clickers” come from.
Implementing decay is technically straightforward. Two approaches are most commonly used: automatic deduction of points after a specified period of inactivity (e.g., -10% after 30 days, -30% after 60 days) or resetting scoring after a defined period of complete inactivity (e.g., zero points after 90 days without any interaction).
Both approaches have their advantages and disadvantages. The important thing is to use one at all.
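The step-wise version described above (-10% after 30 days, -30% after 60, a full reset after 90) can be sketched as a single function. The bands are the examples from the text, not a standard:

```python
def decayed_score(score: float, days_inactive: int) -> float:
    """Step-wise decay: -10% after 30 days of inactivity,
    -30% after 60, full reset after 90 days without interaction."""
    if days_inactive >= 90:
        return 0.0
    if days_inactive >= 60:
        return round(score * 0.70, 1)
    if days_inactive >= 30:
        return round(score * 0.90, 1)
    return score
```

Run nightly over the database, this single rule is what keeps historical interest from looking like a current pipeline.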
Separate Paths for Separate Stages: How Not to Mix TOFU with BOFU
One of the most common mistakes in older scoring implementations is treating the entire customer journey equally. A lead visiting your blog for the first time and a lead returning to the pricing page after three weeks of nurturing are completely different people with completely different needs.
In Lead Scoring 2.0, it is worth thinking about separate scoring plans for different stages of the funnel.
At the top of the funnel (TOFU) stage, points are earned for exploration: reading articles, downloading e-books, and watching webinars. The goal here isn’t sales qualification yet, but rather identifying interest and building relationships.
In the mid-funnel stage (MOFU), points are earned for deepened engagement: repeat visits, case study openings, and comparing options. Here, scoring is an important signal, but the lead is more likely to be sent for further nurturing than directly to sales.
At the bottom of the funnel (BOFU) stage, points are earned for decision signals: visiting the price list, clicking the “schedule a demo” button, or inquiring via the contact form. This is where forwarding to sales makes sense.
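The stage separation can be sketched as one point table per funnel stage, so TOFU exploration never masquerades as a BOFU decision signal. Event names and point values are illustrative assumptions:

```python
# One point table per funnel stage; names and values are illustrative.
STAGE_PLANS = {
    "tofu": {"article_read": 3, "ebook_download": 5, "webinar_view": 5},
    "mofu": {"repeat_visit": 5, "case_study_open": 8, "comparison_view": 8},
    "bofu": {"pricing_visit": 15, "demo_click": 25, "contact_form": 30},
}

def stage_score(stage: str, events: list[str]) -> int:
    """Only events that belong to the given stage's plan earn points."""
    plan = STAGE_PLANS[stage]
    return sum(plan.get(event, 0) for event in events)
```

Note that a pricing visit scores nothing under the TOFU plan: each stage only counts the signals that matter at that stage.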
Lead Nurturing as a Complement, Not an Alternative
At this point, it is worth mentioning one issue directly, which is often lost in the discussion about scoring.
Lead Scoring 2.0 isn’t about passing fewer leads to sales. It’s about passing better leads—while also creating a system that takes care of those that aren’t yet ready.
The latter is lead nurturing. And it’s an inseparable partner of good scoring.
When a lead doesn’t meet the qualification threshold, it doesn’t go to waste. It’s placed in an appropriate communication scenario that delivers valuable content tailored to its current stage.
Someone reads articles about the basics? They receive an educational series. Someone finished the educational series and started visiting product pages? They receive case studies and comparisons. Someone repeatedly returns to pricing but doesn’t complete the form? Maybe it’s time for a proactive invitation to a demo or a short chat.
Each of these moments is both a nurturing activity and an opportunity to change scoring – because the activity of leads during nurturing is in itself a signal that should influence their assessment.
How iPresso supports Lead Scoring 2.0 in practice
We have reached the point where theory and tool should meet.
The contact scoring module in iPresso lets you build advanced scoring plans that go far beyond simple point arithmetic.
First, iPresso supports both demographic and behavioral scoring in a single system. You can award points for contact characteristics (industry, job title, location) and for specific actions (website visit, email click, file download) simultaneously, within a single scoring plan.
Secondly, the platform allows you to build conditional rules. You can define that points for visiting a price page are awarded only once every 24 hours (capping option), or that scoring is suspended during specific hours (quiet hours option). These are minor details that have a real impact on the quality of qualifications.
Thirdly—and this is crucial—iPresso allows you to create separate scoring plans for different scenarios. You can have a separate scoring plan for an acquisition campaign, a separate one for nurturing a specific segment, and yet another for reactivating inactive contacts. Each can follow its own logic.
Fourth, scoring results are visible directly in the contact manager. A salesperson who receives a notification about a new hot lead sees not only the total points but also the activity history—what the person did, when, and in what sequence. This changes the quality of the sales conversation. Instead of “I see you downloaded our report a while ago,” the salesperson can say, “I see you recently reviewed our enterprise plans and case studies from the financial industry—am I right in thinking you’re looking for a solution for a larger implementation?”
That second conversation sounds completely different from a call to someone who landed on the list with one click a month ago.
How to Combine Scoring with Automation: Scenarios That Work
Scoring itself is passive. Points accumulate, but nothing happens until someone manually reviews the list and decides what to do next.
In Lead Scoring 2.0, scoring is a trigger. Crossing a threshold or achieving a specific combination of signals triggers automatic action—without human intervention.
Some practical examples:
Scenario: Hot sales lead. A contact reaches a threshold of 80 points with the following combination: ICP match + visited the pricing page + was active in the last 7 days. The system automatically creates a task for the salesperson in CRM and sends an email notification.
Scenario: Returning after a long period of inactivity. A contact who has been inactive in the database for six months suddenly returns and visits several websites. Their score increases. The system sends an automated message asking if anything has changed in the project, or triggers a dedicated reactivation sequence.
Scenario: decision signal without firmographic qualification. The contact demonstrates strong behavioral interest, but their company profile is incomplete or doesn’t match the target audience. The system sends a short follow-up form or triggers a communication that collects the missing data before forwarding it to the sales team.
Scenario: low scoring with high educational interest. The contact regularly opens newsletters and browses the blog, but doesn’t visit decision-making pages. The system launches a nurturing sequence with content that redirects towards the middle of the funnel—case studies, webinars, and demo invitations.
All these scenarios can be designed in iPresso’s automation scenario creator, which works in a drag-and-drop model and lets you combine scoring conditions with communication activities without involving the IT department.
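The dispatch logic behind these four scenarios can be sketched as a simple rule chain. The field names, thresholds, and scenario labels below are illustrative assumptions, not iPresso’s actual API:

```python
def pick_scenario(lead: dict) -> str:
    """Map a lead's current signals to one of the four scenarios above."""
    score = lead.get("score", 0)
    # Hot sales lead: high score + ICP match + pricing + recent activity.
    if (score >= 80 and lead.get("icp_match")
            and lead.get("visited_pricing")
            and lead.get("days_inactive", 999) <= 7):
        return "notify_sales"
    # Returning after a long silence.
    if lead.get("returned_after_days", 0) >= 180:
        return "reactivation_sequence"
    # Strong behavior, weak or missing firmographics.
    if score >= 80 and not lead.get("icp_match"):
        return "collect_firmographics"
    # Default: educational interest, keep nurturing.
    return "nurturing_sequence"
```

The order of the rules matters: the most valuable outcome (a sales handoff) is checked first, and nurturing is the fallback rather than the exception.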
Marketing-Sales Alignment: Scoring as a Common Language
There’s another dimension of Lead Scoring 2.0 that rarely comes up in technical discussions about algorithms and rules: the organizational dimension.
Scoring only works when marketing and sales speak the same language.
In most companies, these two departments have different definitions of a good lead. Marketing measures the quantity and cost of acquisition. Sales measures the quality of conversations and time to close. These goals aren’t contradictory—but they become contradictory when no one has agreed on what exactly “I’m handing you a lead ready to talk” means.
Scoring 2.0 works best in companies that have built it as a common tool for both departments – not as an internal marketing metric.
How to do it in practice? A few steps.
First, jointly establish the definitions of MQL (Marketing Qualified Lead) and SQL (Sales Qualified Lead). What exactly must be true for a lead to be transferred? What characteristics? What activity? Over what timeframe?
Second: regular scoring reviews with sales. Monthly or quarterly, check whether high-scoring leads are actually converting. If salespeople say that half of the leads handed to them are unworkable, the problem lies with the thresholds and rules, not the people.
Third: the CRM feedback loop. If a lead was handed to sales but didn’t convert, why not? If this information is fed back into the scoring system, the model can be adjusted. This iteration makes scoring more accurate with each cycle.
Scoring in B2C: Different Data, Same Logic
So far I have written mainly in the B2B context, but scoring in B2C works on the same principles – with a different set of data.
In e-commerce and B2C, the equivalents of firmographic data are demographic data (age, location, preferences) and historical purchasing behavior. The equivalents of decision signals include: repeated visits to a product page, adding to a wish list, opening an abandoned cart message, and high purchase frequency (RFM indicator).
Scoring in B2C more often translates not into qualification for a sales conversation, but into qualification for a more aggressive offer, a personalized recommendation, or priority in communication.
However, the logic is exactly the same: not all contacts in the database are equally valuable, not all are equally ready to buy, and not all should receive the same message.
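A minimal sketch of the RFM idea mentioned above: recency, frequency, and monetary value are each scored 1 to 3 and summed. The band boundaries are illustrative assumptions, not a standard:

```python
from datetime import date

def rfm_score(last_purchase: date, orders: int, spend: float,
              today: date) -> int:
    """Score recency, frequency, and monetary value 1-3 each, then sum.
    Band boundaries are illustrative, not a standard."""
    days = (today - last_purchase).days
    recency = 3 if days <= 30 else 2 if days <= 90 else 1
    frequency = 3 if orders >= 10 else 2 if orders >= 3 else 1
    monetary = 3 if spend >= 500 else 2 if spend >= 100 else 1
    return recency + frequency + monetary
```

A score of 8 or 9 would mark the customers who recently bought, buy often, and spend the most: the B2C equivalent of a hot lead.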
Mistakes that ruin even good scoring
Now that you know what it should look like, it’s worth explaining what goes wrong most often.
MQL threshold set too low. If almost every lead meets the criteria for being transferred to sales, scoring doesn’t do any work. Check what percentage of your leads exceed the threshold – if it’s above 40%, the threshold is likely too low or the rules too loose.
No negative points. If your scoring only adds but never subtracts, over time your database becomes full of leads with historically high scores that are no longer relevant.
Ignoring firmographic data in B2B. If scoring is 100% behavioral and does not filter the company profile, you will receive leads that are highly engaged but completely outside your target audience.
Scoring without nurturing. Leads that don’t meet the qualification threshold have to go somewhere. If there’s no scenario for under-qualified leads, they drop out of the system and, at best, resurface next year in a different campaign, this time without any context.
No sales feedback. A scoring model built once and never updated is a model that loses accuracy month after month. The market changes, user behavior changes, and the product evolves. Scoring must keep pace.
How to Get Started with Lead Scoring 2.0: Step by Step
Changing the approach to scoring doesn’t have to mean a technological revolution. In most cases, the platform already exists; the question is how to configure it.
Step 1: Audit of current scoring. Start with a diagnosis: What scoring rules are currently in effect? When were they last updated? What percentage of leads transferred to sales actually convert? The answer to this last question will reveal a lot about the quality of your current model.
Step 2: Define ICP and customer profile. Before you even touch on system configuration, sit down with your sales team and answer one question: Who was your best customer in the last twelve months? What characteristics did this company have? What behaviors were characteristic along the purchase path? This is the starting point for building demographic rules.
Step 3: Map of activities and their importance. List all the touchpoints a lead goes through: websites, emails, forms, downloads, events, demos. Now ask yourself: which of these actions are most likely to lead to conversion? These should be weighted higher. Actions that were popular with leads who never converted should be weighted lower or given conditional points.
Step 4: Establish MQL and SQL thresholds. Based on conversion history, determine the score at which leads typically started talking to sales and the score at which they typically closed. This gives you a range of values for MQL (ready for further nurturing) and SQL (ready for sales contact).
Step 5: Configure decay and negative scoring. Add rules that lower scoring after a specified period of inactivity. Add negative points for exclusionary behavior (e.g., visiting the “Careers” section).
Step 6: Connecting scoring with automation scenarios. Each scoring threshold should trigger some action. MQL? Starts a nurturing sequence. SQL? Notification to the salesperson and a task in CRM. Inactivity for 60 days? Reactivation scenario.
Step 7: Establish a review and update rhythm. Review conversion data for leads across various scoring ranges quarterly. Update rules based on observations. Involve sales in the process.
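Step 4’s idea of deriving thresholds from conversion history can be sketched very simply, taking the median score at each milestone as a robust starting point (the choice of median is an assumption, not a prescribed method):

```python
from statistics import median

def derive_thresholds(scores_at_first_sales_talk: list[int],
                      scores_at_close: list[int]) -> tuple[int, int]:
    """Return (MQL, SQL) thresholds from historical score distributions:
    the typical score when leads started talking to sales, and the
    typical score when they closed."""
    return (int(median(scores_at_first_sales_talk)),
            int(median(scores_at_close)))
```

Feeding the function the scores of past leads at each milestone gives you a data-backed starting range, to be refined by the quarterly reviews in Step 7.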
Scoring and personalization of communication: close proximity
There is one more dimension worth considering when talking about Lead Scoring 2.0, which is often treated as a separate topic.
Scoring and communication personalization are not separate processes. The scoring result should directly impact what a lead receives in their inbox.
A low-scoring contact who is new to the database receives educational and general content. A rising-scoring contact who has just passed the educational stage receives case studies and invitations to in-depth materials. A high-scoring contact who is close to the SQL threshold receives a direct invitation to a conversation or a personalized offer.
This logic allows for communication that is relevant at every stage of the funnel – not an annoying series of messages that ignore what you already know about the contact.
iPresso combines scoring with content personalization in this way – a contact’s score can directly determine which dynamic content is displayed to them on the website, what messages they receive, and when.
This is the moment when scoring ceases to be a qualification tool and becomes the engine of all communication with the contact.
Measuring Scoring Effectiveness: What to Track
If you don’t measure, you don’t know if it’s working. A few key metrics.
MQL → SQL conversion rate. What percentage of leads qualified by marketing actually make it to sales as SQLs? If it’s low (below 30%), the MQL rules are too loose or don’t adequately incorporate decision signals.
SQL → client conversion rate. What percentage of leads transferred to sales actually generate revenue? If it’s low, the problem lies with the SQL definition or the quality of the firmographic data.
Time from first contact to conversion. Does scoring help shorten sales time? Effective lead nurturing should place leads closer to a sales decision—and shorten the time from initial conversation to closing.
Sales rejection rate. What percentage of leads submitted to salespeople were rejected as unsuitable? This is the most direct feedback on qualification quality.
Nurturing coverage for non-SQL. What percentage of leads that don’t meet the SQL threshold are in an active nurturing scenario? If the percentage is low, you’re missing out on potential leads that could have matured.
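The conversion and rejection metrics above are simple ratios over pipeline counts; a minimal sketch (function and field names are illustrative):

```python
def funnel_metrics(mqls: int, sqls: int, customers: int,
                   rejected: int) -> dict:
    """Express the pipeline counts as the ratios discussed above.
    Guards against division by zero on an empty pipeline."""
    return {
        "mql_to_sql": sqls / mqls if mqls else 0.0,
        "sql_to_customer": customers / sqls if sqls else 0.0,
        "sales_rejection_rate": rejected / sqls if sqls else 0.0,
    }
```

For example, 200 MQLs yielding 50 SQLs gives an MQL → SQL rate of 25%, which by the benchmark above would suggest the MQL rules are too loose.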
Scoring in 2026: Where Are We Going?
It’s also worth looking at where lead scoring as a practice is heading in a broader context.
A few trends worth keeping on your radar.
AI-based predictive scoring. A growing number of platforms are using machine learning models to detect patterns in historical conversion data and automatically adjust scoring weights. Instead of manually setting rules, the system learns which combinations of behaviors and characteristics actually correlate with purchases.
Intent data from external sources. In B2B, more and more companies are supplementing their internal scoring with data from external platforms that track what companies are searching for online. If a company in your target group is actively searching for information about solutions similar to yours, this is a signal you can incorporate into your scoring before they even land on your website.
Real-time scoring. A change in context should impact scoring immediately. If a lead has just opened a demo page and has been on it for five minutes, the salesperson should get a notification right then and there, not overnight.
Scoring based on relationships, not transactions. In a longer sales cycle (especially enterprise B2B), scoring increasingly takes into account not only purchasing behavior, but also relational signals – participation in events, engagement with content, responses to surveys, activity in communities.
Summary: Scoring that actually qualifies
Lead Scoring 2.0 isn’t a technological revolution. It’s a change of philosophy.
Version 1.0 asked, “How many points has the lead accumulated?” Version 2.0 asks, “What do we know about this person that suggests they’re ready to talk to a salesperson?”
It’s a subtle but fundamental difference.
Scoring, which qualifies, takes into account context, sequence of actions, company profile, time since last activity, and sales funnel logic. It’s not a single number—it’s an interpretation of a set of signals.
And when it’s built well, it stops salespeople from wasting time on “cold” contacts. They start talking to people who are truly looking for a solution. And they start talking to them at the right moment.
That’s the goal. Not higher scoring. Higher conversion rate.
Do it with iPresso
If you want to see how Lead Scoring 2.0 can work in practice in your organization – how to configure scoring plans, connect them with automated nurturing scenarios, and hand sales only the leads worth talking to – iPresso offers a free demo of the platform.
Simply fill out a short brief describing your company and needs. Based on it, we’ll show you how to configure iPresso specifically for your needs – not a general demo, but a demo that discusses your funnel and your leads.
👉 Fill out the brief and schedule an iPresso demo
You don’t need a finished project or technical specifications. Just a few minutes to answer basic questions about your business is enough. We’ll take care of the rest.
