Win Probability Scoring

AI-powered win prediction, theme mapping, and confidence scoring.

Win Probability Scoring uses AI and historical data to predict your likelihood of winning an RFP bid. Get data-driven bid/no-bid recommendations so your team focuses on the opportunities most likely to convert.

How It Works

When you run Win Probability from the Insights dropdown in the project header, Velocibid analyzes seven key factors and produces a weighted composite score:

  • Library Coverage (20%): percentage of questions with high-confidence knowledge base matches.
  • Answer Completeness (15%): how many questions have been drafted; unanswered questions lower your score.
  • Review Quality (15%): proportion of answers that have been approved by reviewers.
  • Industry Match (15%): historical win rate in the same industry as this RFP.
  • Past Client History (15%): previous wins and losses with the same client organization.
  • Deadline Feasibility (10%): remaining time vs. outstanding questions; tight deadlines lower the score.
  • Competition Level (10%): number of competitors detected in the RFP; more competition means a lower score.
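
Conceptually, the composite is a weighted sum of the seven factor scores. The sketch below is illustrative only: the weights mirror the list above, but the type and function names are assumptions, not part of Velocibid's API, and the factor scoring itself is handled by Velocibid's AI.

```typescript
// Illustrative sketch: weights mirror the documented list; factor scoring is internal to Velocibid.
type FactorScores = {
  libraryCoverage: number;      // 0-100: questions with high-confidence library matches
  answerCompleteness: number;   // 0-100: questions that already have a draft
  reviewQuality: number;        // 0-100: answers approved by reviewers
  industryMatch: number;        // 0-100: historical win rate in this industry
  pastClientHistory: number;    // 0-100: prior record with this client
  deadlineFeasibility: number;  // 0-100: remaining time vs. outstanding questions
  competitionLevel: number;     // 0-100: fewer detected competitors scores higher
};

const WEIGHTS: Record<keyof FactorScores, number> = {
  libraryCoverage: 0.20,
  answerCompleteness: 0.15,
  reviewQuality: 0.15,
  industryMatch: 0.15,
  pastClientHistory: 0.15,
  deadlineFeasibility: 0.10,
  competitionLevel: 0.10,
};

// Weighted composite: each 0-100 factor score is scaled by its weight,
// so the result is also on a 0-100 scale.
function compositeScore(factors: FactorScores): number {
  return (Object.keys(WEIGHTS) as (keyof FactorScores)[]).reduce(
    (sum, key) => sum + factors[key] * WEIGHTS[key],
    0
  );
}
```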

Score Interpretation

Based on the composite score, you receive a clear recommendation:

  • Strong Bid: 75-100%
  • Bid: 55-74%
  • Cautious: 35-54%
  • No Bid: 0-34%
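
The bands translate directly into a threshold lookup. A minimal sketch, assuming the 0-100 composite from the previous section (the function name is illustrative):

```typescript
type Recommendation = 'Strong Bid' | 'Bid' | 'Cautious' | 'No Bid';

// Thresholds match the bands above; `score` is the 0-100 composite.
function recommend(score: number): Recommendation {
  if (score >= 75) return 'Strong Bid';
  if (score >= 55) return 'Bid';
  if (score >= 35) return 'Cautious';
  return 'No Bid';
}
```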

The Circular Gauge

The Insights dropdown displays your current win score. Click the Details pill to open the full side panel with a circular gauge and complete breakdown:

  • Score & Recommendation — the overall percentage and bid recommendation
  • AI Summary — a 2-3 sentence analysis of key takeaways and next steps
  • Factor Breakdown — each factor with its individual score, impact indicator (positive/negative/neutral), and progress bar
  • Recalculate — re-run the analysis after making changes to your responses
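
If it helps to picture what the panel contains, the breakdown can be thought of as a structure like the one below. This shape is an assumption for illustration only; it is not a documented Velocibid export or API format.

```typescript
type Impact = 'positive' | 'negative' | 'neutral';

interface FactorResult {
  name: string;    // e.g. "Library Coverage"
  weight: number;  // e.g. 0.20
  score: number;   // 0-100, drives the progress bar
  impact: Impact;  // impact indicator shown beside the factor
}

interface WinProbabilityResult {
  score: number;           // overall percentage shown on the circular gauge
  recommendation: string;  // e.g. "Strong Bid"
  aiSummary: string;       // the 2-3 sentence analysis
  factors: FactorResult[]; // the full factor breakdown
}
```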

Theme Mapping

Alongside win prediction, the RFP Themes analysis (also in the Insights dropdown) uses AI to extract key themes from the uploaded RFP documents:

  • Pain Points — problems the client wants solved
  • Priorities — must-have requirements and preferences
  • Evaluation Criteria — how proposals will be scored
  • Key Phrases — recurring terminology to mirror in your responses

Click the Details pill on the Themes row to open the full side panel. The Alignment Check tab then scores each of your answers against these themes, showing which themes you've addressed and which you've missed. This helps you tailor every response to what the evaluator is looking for.
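
Conceptually, the Alignment Check is a coverage pass: every extracted theme is compared against every answer, and the panel reports which themes are addressed. The sketch below uses a naive keyword match purely to illustrate the idea; Velocibid's actual check is AI-scored, and the types and names here are hypothetical.

```typescript
type ThemeCategory = 'Pain Point' | 'Priority' | 'Evaluation Criterion' | 'Key Phrase';

interface Theme {
  category: ThemeCategory;
  text: string;        // e.g. "reduce onboarding time"
  keywords: string[];  // recurring terminology pulled from the RFP
}

// A theme counts as addressed if any answer mentions one of its keywords.
// This is a stand-in for the AI scoring the real feature performs.
function alignmentCheck(themes: Theme[], answers: string[]) {
  return themes.map((theme) => ({
    theme: theme.text,
    category: theme.category,
    addressed: answers.some((answer) =>
      theme.keywords.some((kw) => answer.toLowerCase().includes(kw.toLowerCase()))
    ),
  }));
}
```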

Confidence Scores

Each question card displays a badge reflecting the AI's confidence in the generated answer:

  • High Confidence: 85%+
  • Medium Confidence: 60-84%
  • Low Confidence: below 60%

Hover over the badge to see the exact score and whether it came from a library match (higher confidence) or a document match. Low-confidence answers are great candidates for manual review.
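
To make the bands concrete, here is a hypothetical helper that maps a confidence score and its source to the badge and tooltip text described above (the names are illustrative, not Velocibid's API):

```typescript
type Badge = 'High Confidence' | 'Medium Confidence' | 'Low Confidence';
type MatchSource = 'library' | 'document'; // library matches generally score higher

// Bands match the list above; the tooltip mirrors what hovering the badge shows.
function confidenceBadge(score: number, source: MatchSource): { badge: Badge; tooltip: string } {
  const badge: Badge =
    score >= 85 ? 'High Confidence' : score >= 60 ? 'Medium Confidence' : 'Low Confidence';
  return { badge, tooltip: `${score}% (${source} match)` };
}
```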

Best Practices

  • Run early, run often — calculate win probability as soon as you upload documents. Re-run after major edits to track improvement.
  • Use themes to guide writing — extract themes before starting responses. Use the alignment check to ensure coverage.
  • Focus on low-confidence answers — red-badged questions need the most human attention.
  • Log outcomes — set project outcomes (won/lost) after bid decisions to improve future industry and client history scoring.