Advanced

Scoring Rubrics

A practical guide to scoring rubrics for AI procurement and vendor-risk practitioners.

What This Lesson Covers

A scoring rubric is a predefined scale that converts questionnaire answers into comparable, defensible scores, and it is a key topic within DD Questionnaire Design. In this lesson you will learn the underlying procurement and vendor-risk discipline, the contractual or operational lever that gives the buyer control, how to apply the procedures to real AI vendor relationships, and the open questions practitioners are actively working through. By the end you will be able to use scoring rubrics in real AI procurement and vendor-risk work with confidence.

This lesson belongs to the Vendor Due Diligence category of the AI Procurement & Vendor Risk track. AI procurement and vendor-risk management sits at the intersection of procurement, third-party risk, security, privacy, legal, and AI engineering. Understanding the vendor due-diligence discipline that separates well-run AI vendors from the others is what lets you build a vendor relationship that delivers value while limiting downside.

Why It Matters

This lesson sits within the broader discipline of designing DD questionnaires that surface real risk rather than performative answers. That discipline covers question taxonomy (security, privacy, AI-specific, operational, financial), evidence-anchored questions (every answer must be backed by an attestation, policy, or screenshot), scoring rubrics, conditional branches that go deeper based on initial answers, and knowing when to retire a question that no longer differentiates vendors.
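One way to make "evidence-anchored" and "conditional branch" concrete is to model each questionnaire item as data. Below is a minimal sketch in Python; the field names, example questions, and the branching rule are illustrative assumptions, not a standard questionnaire schema:

```python
from dataclasses import dataclass, field

@dataclass
class DDQuestion:
    """One due-diligence questionnaire item (illustrative schema)."""
    qid: str
    domain: str                      # security / privacy / ai / operational / financial
    text: str
    evidence_required: bool = True   # answer must cite an attestation, policy, or screenshot
    follow_ups: list["DDQuestion"] = field(default_factory=list)  # conditional branch

    def branch(self, answer: str) -> list["DDQuestion"]:
        # Go deeper only when the initial answer signals risk (illustrative rule).
        return self.follow_ups if answer.lower() in {"no", "partial"} else []

# Example: a 'no' on sub-processor disclosure triggers a deeper follow-up.
q = DDQuestion(
    "AI-07", "ai", "Do you disclose all model sub-processors?",
    follow_ups=[DDQuestion(
        "AI-07a", "ai",
        "List undisclosed sub-processors and the data shared with each.",
    )],
)
```

The same structure lets you retire a question cleanly: drop the item, and any branch hanging off it goes with it.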

The reason scoring rubrics deserve dedicated attention is that AI vendor markets are immature: vendor terms are slanted toward providers, attestation standards (ISO 42001 in particular) are only just becoming consumable, model deprecation cycles are aggressive, IP litigation is unsettled, and the EU AI Act has just begun applying flow-down obligations to upstream providers. Practitioners who reason from first principles will navigate the next vendor pitch, the next renewal, the next incident, and the next regulatory inquiry far more effectively than those who only have a checklist.

💡
Mental model: Treat every AI vendor as a chain — intake, due diligence, RFx, contract, onboarding, ongoing management, incident response, and exit. Each link must be defensible to a sophisticated reviewer (board, regulator, customer, plaintiff in litigation). Master the chain and you can run any AI vendor relationship that lands tomorrow.

How It Works in Practice

Below is a practical procurement and vendor-risk pattern for scoring rubrics. Read through it once, then think about how you would apply it to a real AI vendor in your portfolio.

# Vendor DD pattern
DD_STEPS = [
    'Issue tier-appropriate questionnaire (security, privacy, AI, financial)',
    'Collect attestations (SOC 2 Type 2, ISO 27001, ISO 42001, FedRAMP)',
    'Run independent technical evaluation on representative data',
    'Map sub-processors and AIBOM components',
    'Conduct on-site / live assessment for critical vendors',
    'Produce DD report with risk register entries',
]
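The pattern above feeds a scoring rubric: each answer is scored on a fixed anchor scale, domain scores are weighted, and the total maps to a risk band. A minimal sketch follows; the weights, anchor scale, and band thresholds are illustrative assumptions that your own procurement or TPRM policy would define:

```python
# Domain weights (illustrative; must sum to 1.0).
WEIGHTS = {"security": 0.30, "privacy": 0.25, "ai": 0.25,
           "operational": 0.10, "financial": 0.10}

# Anchor scale (illustrative): 0 = no control, 1 = documented only,
# 2 = implemented, 3 = implemented and independently attested.
MAX_SCORE = 3

def rubric_score(domain_scores: dict[str, float]) -> float:
    """Weighted rubric score, normalised to a 0-100 scale."""
    raw = sum(WEIGHTS[d] * s for d, s in domain_scores.items())
    return round(100 * raw / MAX_SCORE, 1)

def risk_band(score: float) -> str:
    # Illustrative thresholds; tie these to your risk-acceptance matrix.
    if score >= 80:
        return "acceptable"
    if score >= 60:
        return "acceptable with remediation plan"
    return "escalate / do not proceed"

# Example vendor: strong security, weak financial transparency.
scores = {"security": 3, "privacy": 2, "ai": 2, "operational": 3, "financial": 1}
total = rubric_score(scores)  # weighted, normalised score for this vendor
```

The point of the anchor descriptions is that two different reviewers scoring the same evidence should land on the same number; without them, a rubric is just intuition with decimal places.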

Step-by-Step Analytical Approach

  1. Establish the criteria — What is the policy, standard, or contractual requirement that governs this decision (procurement policy, TPRM policy, security baseline, AI governance, AI Act provider/deployer rules, sector regulation)? Document the criteria up front; vendor decisions made without explicit criteria are intuition, not governance.
  2. Tier the vendor — Map the vendor to the right tier (critical / high / medium / low) so DD depth, contract requirements, and ongoing management intensity are proportionate to risk.
  3. Plan the evidence — For DD, lay out questionnaires, attestations, independent technical evaluation, and on-site / live assessment as appropriate. For contracts, identify the AI-specific clauses and the negotiation strategy. For ongoing management, define the metric set and the operating rhythm.
  4. Collect sufficient appropriate evidence — Multiple sources, time-stamped, hash-pinned where applicable, independent of vendor self-reporting. The bar is what a sophisticated reviewer (board, regulator, customer, plaintiff) would expect.
  5. Form the decision — Compare evidence to criteria; identify residual risks; route for acceptance per matrix; document the audit trail; communicate the decision to vendor and stakeholders.
  6. Operate the relationship — Onboard, monitor, refresh attestations on cadence, run incident response, manage change, recertify annually, exit cleanly when the time comes.
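Step 4's "time-stamped, hash-pinned" evidence can be as simple as recording a cryptographic digest alongside each artefact, so a later reviewer can verify that the file you scored is the file on record. A minimal sketch using only the Python standard library (the record field names are illustrative assumptions):

```python
import hashlib
from datetime import datetime, timezone

def pin_evidence(artefact: bytes, source: str) -> dict:
    """Record an evidence artefact with a SHA-256 digest and UTC timestamp."""
    return {
        "source": source,                                # e.g. a vendor attestation report
        "sha256": hashlib.sha256(artefact).hexdigest(),  # pin the exact bytes reviewed
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }

def verify_evidence(artefact: bytes, record: dict) -> bool:
    """True only if the artefact still matches the pinned digest."""
    return hashlib.sha256(artefact).hexdigest() == record["sha256"]

# Example: pin a vendor attestation at collection time, re-verify at review time.
report = b"...attestation report bytes..."
record = pin_evidence(report, "vendor SOC 2 Type 2 report")
```

A record like this, stored independently of the vendor, is what lets you show a sophisticated reviewer that the evidence trail was not reconstructed after the fact.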

When This Topic Applies (and When It Does Not)

Scoring rubrics apply when:

  • You are evaluating, contracting with, or managing an AI vendor (foundation-model API, AI SaaS, AI infrastructure, AI consulting)
  • You are running a TPRM programme that includes AI vendors (regulated sector or otherwise)
  • You are responding to a customer or regulator question about AI vendor governance
  • You are operating under the EU AI Act and need to flow obligations through the vendor chain
  • You are exiting a vendor or planning a vendor substitution

It does not apply (or applies lightly) when:

  • The AI is genuinely first-party (built and operated entirely in-house with internal data)
  • The vendor relationship is below the procurement-policy threshold for full TPRM (usually a small-dollar exception with no sensitive data)
  • The work is research-only with no path to production

⚠️
Common pitfall: Practitioners often treat procurement as a one-time exercise — sign the contract, hand it to ops, move on. AI vendor relationships are dynamic: models get deprecated, sub-processors change, prices reset, regulatory obligations evolve, and incidents happen. Build the lifecycle muscle (annual recertification, change management, exit runbook ready before you need it) so you are not negotiating from a weak position when something goes wrong.

Practitioner Checklist

  • Are the criteria for this decision explicit, written, and tied to the procurement / TPRM policy?
  • Is the vendor risk-tiered, and is the depth of DD / contract / management proportionate to tier?
  • Does the contract include AI-specific clauses (data rights, no-train, IP indemnity, SLAs, exit, AI Act flow-down)?
  • Are residual risks documented, accepted at the right level, and tracked in the register?
  • Is performance monitored from your own instrumentation, not vendor self-reporting?
  • Are attestations refreshed on cycle, with gaps surfaced and addressed?
  • Is the exit runbook in place before you need it, with parallel-running and data-migration plans?

Disclaimer

This educational content is provided for general informational purposes only. It does not constitute procurement, legal, or professional advice; it does not create a professional engagement; and it should not be relied on for any specific vendor selection, contract negotiation, or risk-acceptance decision. AI vendor markets, contracts, and regulatory obligations vary by jurisdiction and change rapidly. Consult qualified procurement, legal, and risk professionals for advice on your specific situation.

Next Steps

The other lessons in DD Questionnaire Design build directly on this one. Once you are comfortable with scoring rubrics, the natural next step is to combine them with the patterns in the surrounding lessons — that is where doctrinal mastery turns into a working AI procurement and vendor-risk programme. Procurement and vendor-risk are most useful as an integrated discipline covering intake, DD, RFx, contracts, ongoing management, incidents, and exit.