CID Rubric v0.3.2

We score reports on how the research was done — not what it concluded. A report can be factually correct and still have poor methodology. Every score is based on specific, documented evidence.


Grade Bands

Every report receives a score from 0 to 10, and the score determines its grade band. Band labels communicate the severity of methodological problems; they do not communicate a judgment of the report's conclusions.

Score     Label            Meaning
8.0–10.0  Research-Grade   Meets the standards used in peer-reviewed academic research.
6.0–7.9   Adequate         Good enough to use, but has gaps a careful reader should know about.
4.0–5.9   Deficient        Has major problems that make it hard to trust the conclusions.
2.0–3.9   Advocacy-Grade   Reads more like an advocacy argument than independent research.
0.0–1.9   Unreliable       Does not meet basic standards. Claims cannot be verified.
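The band lookup above can be sketched as a small function. This is an illustrative sketch, not part of the rubric itself; the function and table names are ours.

```python
# Grade bands from the rubric: (inclusive lower bound, label),
# ordered from highest band to lowest.
BANDS = [
    (8.0, "Research-Grade"),
    (6.0, "Adequate"),
    (4.0, "Deficient"),
    (2.0, "Advocacy-Grade"),
    (0.0, "Unreliable"),
]

def grade(score: float) -> str:
    """Map a 0-10 score to its grade-band label."""
    for lower, label in BANDS:
        if score >= lower:
            return label
    return "Unreliable"  # scores below 0 should not occur; fail safe
```

For example, `grade(7.9)` falls in the Adequate band, while `grade(8.0)` crosses into Research-Grade.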

Eight Dimensions

Each weighted by importance

Each report is scored on eight categories. Some categories matter more than others — the weight shows how much each one counts toward the total score.

D1  Definitional Precision      12%  Are the key terms defined clearly enough that someone else could apply them the same way?
D2  Classification Rigor        18%  Would different analysts looking at the same data sort it into the same categories?
D3  Case Capture & Sampling     15%  Does the data actually represent what the report claims it represents?
D4  Coverage Symmetry           15%  Does the report cover its topic evenly, or does it only look in one direction?
D5  Source Independence         10%  Do the sources check out independently, or do they all trace back to the same place?
D6  Verification Standards      18%  Could an outsider verify the claims by checking the underlying evidence?
D7  Transparency & Governance    5%  Is it clear who funded the work, who wrote it, and whether they have conflicts of interest?
D8  Counter-Evidence             7%  Does the report address criticism and acknowledge what it can't prove?
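With all eight dimensions scored, the total is a weighted average. A minimal sketch, using the weights listed above (which sum to 100%); the function name is illustrative:

```python
# Dimension weights from the rubric (as fractions; they sum to 1.0).
WEIGHTS = {
    "D1": 0.12, "D2": 0.18, "D3": 0.15, "D4": 0.15,
    "D5": 0.10, "D6": 0.18, "D7": 0.05, "D8": 0.07,
}

def weighted_score(dimension_scores: dict[str, float]) -> float:
    """Combine per-dimension scores (each 0-10) into a raw weighted total."""
    return sum(WEIGHTS[d] * s for d, s in dimension_scores.items())
```

A report scoring 10 on every dimension gets a raw total of 10.0; lowering a heavily weighted dimension such as D2 or D6 moves the total more than lowering D7.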

Document Types

7 categories

Every report is classified by what kind of document it is. The type determines which dimensions apply and how scores are weighted.

Code    Type                  What it is
TYPE 1  Survey                Primary empirical research using systematic questionnaire administration to a defined sample.
TYPE 2  Incident Tracker      Ongoing dataset of discrete events collected through media/platform monitoring or self-reporting.
TYPE 3  Investigation Report  Structured examination of a named organization using documentary evidence and primary-source statements.
TYPE 4  Composite Index       Quantitative index that aggregates multiple indicators into a single score or categorical ranking as its primary output.
TYPE 5  Academic Study        Peer-reviewed or preprint research following disciplinary methodology standards.
TYPE 6  Advocacy Document     Document whose primary purpose is advancing a stated policy or normative position.
TYPE 7  Policy Report         Synthesizes existing research to inform policy; no original data collection.

Applicability Matrix

Which dimensions apply to which types

Not every dimension applies to every document type. When a dimension doesn't apply, its weight is spread across the ones that do. D4 and D5 always apply — no exceptions.

Dimension                     T1 Survey  T2 Tracker  T3 Invest.  T4 Index  T5 Academic  T6 Advocacy  T7 Policy
D1 Definitional Precision     Full       Full        Adapted     Full      Full         Adapted      Adapted
D2 Classification Rigor       Adapted    Full        N/A         Full      Full         N/A          N/A
D3 Case Capture & Sampling    Adapted    Full        N/A         Full      Full         N/A          N/A
D4 Coverage Symmetry          Full       Full        Full        Full      Full         Full         Full
D5 Source Independence        Full       Full        Full        Full      Full         Full         Full
D6 Verification Standards     Adapted    Full        Adapted     Full      Full         Adapted      Adapted
D7 Transparency & Governance  Full       Full        Full        Full      Full         Full         Full
D8 Counter-Evidence           Full       Full        Adapted     Full      Full         Adapted      Full

Full: the dimension applies at full weight. Adapted: the dimension applies with an adjusted rubric for that document type. N/A: the dimension does not apply; its weight is redistributed across the dimensions that do.
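One way to redistribute the weight of an N/A dimension is proportional renormalization over the remaining dimensions. The rubric does not spell out the exact redistribution formula, so treat this sketch as an assumption; the weights are the base weights from the dimension list.

```python
# Base dimension weights from the rubric (fractions summing to 1.0).
BASE_WEIGHTS = {
    "D1": 0.12, "D2": 0.18, "D3": 0.15, "D4": 0.15,
    "D5": 0.10, "D6": 0.18, "D7": 0.05, "D8": 0.07,
}

def effective_weights(not_applicable: set[str]) -> dict[str, float]:
    """Drop N/A dimensions and renormalize so the rest still sum to 1.0.

    Proportional renormalization is an assumption, not a rule stated
    in the rubric text.
    """
    kept = {d: w for d, w in BASE_WEIGHTS.items() if d not in not_applicable}
    total = sum(kept.values())
    return {d: w / total for d, w in kept.items()}

# Example: per the matrix, TYPE 3 (Investigation Report) has D2 and D3 as N/A.
type3_weights = effective_weights({"D2", "D3"})
```

Under this scheme each remaining dimension keeps its relative importance, and the effective weights always sum back to 100%.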

Non-Compensatory Rules

Ceilings that prevent a high score from masking a critical failure

Two rules prevent a strong performance in one area from hiding a critical failure in another. When a cap fires, the report's final score is reduced and the raw weighted score is preserved in the record.

Rule 1 · Sampling
Cap at 5.9 when D3 < 3.

If a report's data can't represent what it claims to measure (D3 Case Capture & Sampling below 3), the overall score is capped at 5.9 — no matter how well it scores elsewhere.

Rule 2 · Verification
Blocks Research-Grade when D6 < 7.

If outsiders can't independently check the underlying evidence (D6 Verification Standards below 7), the report cannot reach Research-Grade (8.0+).
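The two rules can be sketched as caps applied after the raw weighted score is computed. Capping at 7.9 is one way to implement "blocks Research-Grade (8.0+)"; the rubric does not state an exact cap value for Rule 2, so that number is an assumption.

```python
def apply_caps(raw: float, d3: float, d6: float) -> float:
    """Apply the two non-compensatory rules to a raw weighted score.

    raw: raw weighted score (0-10), preserved separately in the record.
    d3:  D3 Case Capture & Sampling score.
    d6:  D6 Verification Standards score.
    """
    final = raw
    if d3 < 3:
        # Rule 1: unrepresentative data caps the overall score at 5.9.
        final = min(final, 5.9)
    if d6 < 7:
        # Rule 2: unverifiable evidence blocks Research-Grade; capping
        # just below 8.0 is an assumed implementation.
        final = min(final, 7.9)
    return final
```

For example, a report with a raw score of 9.5 but D3 = 2 is held to 5.9 regardless of its other dimensions, while the 9.5 raw score stays on record.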