Quis custodiet ipsos custodes? ("Who watches the watchmen?") — Juvenal
Why This Exists
Research reports about India and Indian Americans shape laws, news coverage, and university courses. But who checks whether those reports used good methods? We do. The Citation Integrity Dashboard scores 44 published reports on how well their research was conducted — not on what they concluded.
A report can be factually correct and still have poor methodology. A widely-cited report can fail basic research standards.
These reports have been cited in congressional hearings, State Department briefings, and major newsrooms. We publish our scoring criteria, weights, and every piece of evidence openly — before any report is evaluated.
How CID Scores
A three-step, pre-registered review:

1. We read the full report. Every scored report is read cover to cover. We examine the methodology, data collection, sourcing, and analytical framework — not the conclusions.
2. We score eight dimensions. Each report is evaluated on eight aspects of research quality: how terms are defined, whether sources are independent, and whether data can be verified by outsiders.
3. Everything is published. The rubric, weights, and scoring criteria were published before any report was evaluated. If an organization disagrees, it can submit a response that we publish unedited.
Corpus at a Glance
44 reports · 15 organizations · 1999–2026

Evaluated Reports
44 reports · sort, filter, and collapse

Organizations
7 with two or more scored reports. Organizations evaluated on two or more published reports. Scores reflect the scored reports, not the organization as a whole. Click through for institutional methodology reviews where available.

Single-report organizations
Citation Loops
4 documented patterns. When Organization A cites Organization B as independent evidence — but they share a founder, funding, or staff — the appearance of corroboration is manufactured.
Rubric Preview
Eight dimensions · non-compensatory caps · five grade bands

Every report is evaluated on eight dimensions of methodological rigor. Weights are published before any report is scored — not adjusted after the fact.
| Dimension | Weight | What it measures |
|---|---|---|
| D1 Definitional Precision | 12% | Are the key terms defined clearly enough that someone else could apply them the same way? |
| D2 Classification Rigor | 18% | Would different analysts looking at the same data sort it into the same categories? |
| D3 Case Capture & Sampling | 15% | Does the data actually represent what the report claims it represents? |
| D4 Coverage Symmetry | 15% | Does the report cover its topic evenly, or does it only look in one direction? |
| D5 Source Independence | 10% | Do the sources check out independently, or do they all trace back to the same place? |
| D6 Verification Standards | 18% | Could an outsider verify the claims by checking the underlying evidence? |
| D7 Transparency & Governance | 5% | Is it clear who funded the work, who wrote it, and whether they have conflicts of interest? |
| D8 Counter-Evidence | 7% | Does the report address criticism and acknowledge what it can't prove? |
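The table above can be read as a weighted score with a non-compensatory rule layered on top. The sketch below illustrates the idea, assuming hypothetical 0–100 per-dimension scores; the weights come from the published table, but the specific cap threshold and grade-band cutoffs here are invented for illustration — the excerpt does not state CID's actual rules.

```python
# Illustrative weighted-rubric score with a non-compensatory cap.
# Weights match the published table; the cap rule and grade bands
# below are hypothetical placeholders, not CID's actual values.

WEIGHTS = {
    "D1": 0.12, "D2": 0.18, "D3": 0.15, "D4": 0.15,
    "D5": 0.10, "D6": 0.18, "D7": 0.05, "D8": 0.07,
}

def score_report(dim_scores: dict) -> float:
    """dim_scores maps dimension id (e.g. 'D1') to a 0-100 score."""
    weighted = sum(WEIGHTS[d] * dim_scores[d] for d in WEIGHTS)
    # Non-compensatory cap (hypothetical): if any single dimension
    # falls below 30, the overall score cannot exceed 59 -- strength
    # elsewhere cannot compensate for a critical methodological failure.
    if min(dim_scores[d] for d in WEIGHTS) < 30:
        weighted = min(weighted, 59.0)
    return round(weighted, 1)

def grade(score: float) -> str:
    # Five hypothetical grade bands.
    for cutoff, band in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if score >= cutoff:
            return band
    return "F"
```

The cap is what makes the scheme non-compensatory: a report scoring 100 on seven dimensions but 20 on one still lands in the bottom bands, rather than averaging its weakness away.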