Most teams running idea campaigns never agree on scoring criteria before they start reviewing. Everyone applies their own gut feeling. The result is inconsistent scores, disagreements that feel personal, and evaluation sessions that run far over their allotted time.
The three scoring models in this guide solve that. Each is designed for a different situation. Choose the one that fits your context, agree on it as a team before you begin, and apply it consistently to all submissions in the same evaluation cycle.
When to Use Each Model
Model A is for teams that need to move quickly and want a simple, defensible way to distinguish stronger ideas from weaker ones. Good for a first evaluation pass or when your review team is short on time.
Model B is for situations where the stakes are higher, multiple stakeholders need to weigh in, and the criteria for a good idea need to be explicitly agreed and documented. Good for strategic innovation programs or when leadership wants to see how decisions were made.
Model C is for teams that think visually, prefer discussion over spreadsheets, and want to use the evaluation session to also build a shared understanding of where each idea sits relative to the others.
Model A: The Three-Question Shortcut
Best for: fast evaluation rounds, small review teams, time-pressured situations.
For each idea, score three dimensions on a scale of 1 to 5:
Impact potential (1 to 5)
- 1 = minimal, affects very few people or processes in a small way
- 5 = significant, could meaningfully improve outcomes for a large group or a core process

Feasibility (1 to 5)
- 1 = extremely difficult, requires major resources, approvals, or infrastructure changes
- 5 = very doable, can be tested quickly with available resources

Strategic fit (1 to 5)
- 1 = not connected to current priorities
- 5 = directly aligned with a stated organisational or team goal
Average the three scores. Ideas scoring 3.5 or higher advance to the next stage. Ideas below 2.5 are declined. Ideas between 2.5 and 3.5 go into the Interesting pile for a second look.
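If you run this triage in a spreadsheet or script rather than on paper, the rule reduces to a few lines. Here is a minimal sketch in Python; the function and bucket names are illustrative, and only the 3.5 and 2.5 thresholds come from the model itself:

```python
def triage(impact: int, feasibility: int, strategic_fit: int) -> str:
    """Average the three 1-to-5 scores and map them to a triage bucket."""
    average = (impact + feasibility + strategic_fit) / 3
    if average >= 3.5:
        return "advance"       # moves to the next stage
    if average < 2.5:
        return "decline"
    return "interesting"       # 2.5 to 3.5: flagged for a second look

# Example: impact 4, feasibility 3, strategic fit 5 -> average 4.0 -> advance
print(triage(4, 3, 5))
```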
This model is fast and consistent. It does not account for nuance, which is an advantage when doing initial evaluation and a limitation when making final decisions. Use it to sort, not to decide.
Model B: The Weighted Criteria Matrix
Best for: strategic innovation programs, higher-stakes evaluation, situations where leadership wants documented decision-making.
Step 1: Before looking at a single idea, your review team agrees on 4 to 6 evaluation criteria. These should reflect what actually matters for this specific campaign, not generic innovation criteria. Examples: cost reduction potential, implementation speed, cross-functional applicability, risk level (inverted score, higher score for lower risk), customer impact, alignment with this year's priorities.
Step 2: Your team assigns a weight to each criterion, expressed as a percentage summing to 100. This is the step most people skip, and it is the most important one. Deciding that implementation speed is worth 30% of the total score and cost reduction is worth 20% forces your team to be explicit about what is actually driving decisions. That conversation is more valuable than the scoring itself.
Step 3: Each reviewer scores each idea on a scale of 1 to 5 for each criterion. Multiply each score by the criterion's weight and sum the results for a weighted total out of 5.
Step 4: Rank ideas by weighted total. Ideas in the top third advance to the next stage. The rest receive the standard triage treatment.
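If you want to sanity-check the arithmetic, the whole matrix fits in a short script. A minimal sketch in Python, assuming example criteria, weights, and scores (all three are illustrative, not prescriptions from this guide):

```python
# Step 2: weights per criterion, expressed as fractions that must sum to 100%.
weights = {
    "implementation_speed": 0.30,
    "cost_reduction":       0.20,
    "customer_impact":      0.30,
    "risk_inverted":        0.20,  # higher score = lower risk
}
assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 100%"

def weighted_total(scores: dict[str, int]) -> float:
    """Step 3: multiply each 1-to-5 score by its weight and sum; result is out of 5."""
    return sum(scores[criterion] * w for criterion, w in weights.items())

ideas = {
    "idea_a": {"implementation_speed": 4, "cost_reduction": 2,
               "customer_impact": 5, "risk_inverted": 3},
    "idea_b": {"implementation_speed": 2, "cost_reduction": 5,
               "customer_impact": 3, "risk_inverted": 4},
}

# Step 4: rank by weighted total, highest first.
for name, scores in sorted(ideas.items(), key=lambda kv: -weighted_total(kv[1])):
    print(name, round(weighted_total(scores), 2))   # idea_a 3.7, idea_b 3.3
```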
This model takes longer to set up but produces more defensible, consistent results. It is especially useful when you need to explain your decisions to people who were not in the room.
Model C: The Effort-vs-Impact Matrix
Best for: teams that prefer visual thinking, want a discussion-based evaluation session, or need to quickly communicate prioritisation decisions to a broader audience.
Draw a simple 2x2 matrix. The horizontal axis runs from Low Effort on the left to High Effort on the right. The vertical axis runs from Low Impact at the bottom to High Impact at the top. Place each idea as a dot somewhere in the matrix based on the team's collective assessment.
What each quadrant actually means, and what you do with ideas that land there:
High Impact, Low Effort (upper left): Do these first. These are your quick wins. They have disproportionate value relative to what they cost to implement. Most programs should be able to act on at least one of these within 30 days of a campaign closing.
High Impact, High Effort (upper right): Plan these carefully. These are your strategic investments. They are worth pursuing but require proper resourcing, a business case, and a realistic timeline. Do not let them stall in the pipeline just because they are complex. Assign an owner and a next step.
Low Impact, Low Effort (lower left): Do these opportunistically. These will not move the needle much, but they are easy. If someone is motivated to implement one of these, let them. Small wins build momentum. Just do not prioritise them over high-impact ideas.
Low Impact, High Effort (lower right): Decline these honestly. These cost more than they are worth. Be direct with contributors: the idea addresses a real issue, but the investment required does not match the return we expect. That is a legitimate reason to decline, and contributors will respect it.
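If you track placements in a tool rather than on a whiteboard, the quadrant assignment reduces to two comparisons. A minimal sketch, assuming a 1-to-10 scale with a midpoint of 5 (the guide defines the quadrants, not the scale):

```python
def quadrant(effort: float, impact: float, midpoint: float = 5.0) -> str:
    """Classify an idea into one of the four effort-vs-impact quadrants."""
    high_impact = impact > midpoint
    high_effort = effort > midpoint
    if high_impact and not high_effort:
        return "do first (quick win)"
    if high_impact and high_effort:
        return "plan carefully (strategic investment)"
    if not high_impact and not high_effort:
        return "do opportunistically"
    return "decline honestly"

print(quadrant(effort=3, impact=8))  # -> do first (quick win)
```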
One important note: the matrix is a starting point for conversation, not a final verdict. Two people placing an idea in different quadrants is useful data. Discuss why. The disagreement often reveals assumptions that need to be made explicit before any decision is made.
A Note on Consistency
Regardless of which model you use, apply it consistently to every idea in the same evaluation cycle. Switching models mid-review, or applying stricter criteria to ideas from certain departments, undermines the credibility of the entire process. If your criteria change, acknowledge it and restart the evaluation.
Related Guides
- How to Triage 100+ Ideas in 2 Hours
- How to Prioritize Ideas When Everything Feels Important
- The One-Page Innovation Report for Leadership
→ See our full comparison of the 10 best idea management tools


