How I think about source credibility — and why my answer has changed
Last Friday, news broke that all 24 members of the National Science Board had been fired. The timing was eerie: earlier that day, someone had asked me how I evaluate an information source.
Historically, my answer would have started with: Who published it? A university research center. A peer-reviewed journal. A government agency. Those names carried weight because they often represented institutional independence, expert oversight, and a firewall between political pressure and scientific findings.
That framework is still valid. But, as that recent news headline highlights, it’s getting harder to apply.
A Pattern Worth Paying Attention To
Since January 2025, federal actions have canceled research funding and defunded grants, fired or pushed out thousands of scientists and staff across the CDC, NIH, NSF, NOAA, and NASA, shuttered advisory boards, and removed data infrastructure.
According to the Silencing Science Tracker, published by Columbia Law School’s Sabin Center for Climate Change Law, there have been 519 documented efforts to silence science across the past three administrations: 19 under Biden, 351 during Trump’s first term, and 149 in the current term as of this writing.
What This Means on the Ground
For those of us in emergency management, this isn’t abstract. Our work depends on fast access to reliable, current data. When agencies are hollowed out, data is stripped, grants are defunded, and boards sit vacant, the damage lands at the very core of what we need to do our jobs.
We rely, in part, on OSINT (open-source intelligence) to evaluate, analyze, and fill information gaps. Cross-referencing data from multiple independent sources (source triangulation) and going directly to the primary source whenever possible help us develop situation reports (Sit Reps) and threat and risk assessments, and issue alerts and warnings, among other tasks.
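To make the triangulation step concrete, here is a minimal, hypothetical sketch in Python. The `SourceReport` structure, the `triangulate` function, and the example feeds are my own illustration of the idea, not standard OSINT tooling; real workflows also weigh timeliness, source track record, and proximity to the primary source.

```python
from dataclasses import dataclass

@dataclass
class SourceReport:
    """One source's version of a claim (hypothetical structure for illustration)."""
    source: str          # e.g., "county EOC", "state DOT feed"
    independent_of: set  # names of sources this one merely republishes, if any
    claim: str           # normalized statement, e.g., "Route 9 bridge closed"

def triangulate(reports, min_independent=2):
    """
    Group identical claims and count how many independent sources carry each one.

    A claim is treated as corroborated when at least `min_independent` sources
    that do not republish one another report it. This is a toy model, not a
    substitute for analyst judgment.
    """
    by_claim = {}
    for r in reports:
        by_claim.setdefault(r.claim, []).append(r)

    results = {}
    for claim, rs in by_claim.items():
        independent, seen = [], set()
        for r in rs:
            # Skip sources that merely echo a source we have already counted.
            if r.independent_of & seen:
                continue
            independent.append(r)
            seen.add(r.source)
        results[claim] = {
            "independent_sources": [r.source for r in independent],
            "corroborated": len(independent) >= min_independent,
        }
    return results

# Example: two independent reports corroborate the closure; the third is derivative.
reports = [
    SourceReport("county EOC", set(), "Route 9 bridge closed"),
    SourceReport("state DOT feed", set(), "Route 9 bridge closed"),
    SourceReport("local news aggregator", {"state DOT feed"}, "Route 9 bridge closed"),
]
print(triangulate(reports))
```

The design choice worth noting is that corroboration is counted over independent sources only; a dozen outlets repeating one press release still count as one.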
The Questions That Still Matter
At its core, evaluating a source has always come down to a few questions: Who collected this data, and do they have the expertise to do it well? Who reviewed it — and did those reviewers represent a wide range of field leaders? Who paid for the research, and could that funding have shaped the conclusions? Was the data collected properly — large sample sizes, sound methodology, free from selection bias? Do the cited sources actually say what the paper claims, or has something been taken out of context?
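For readers who keep a structured vetting log, those same questions can be carried as a simple checklist. This is a hypothetical sketch of that idea; the question wording follows the list above, but the structure and example notes are invented for illustration.

```python
# Hypothetical vetting checklist: record a short note per question and surface
# the questions you could not honestly answer yet.
SOURCE_QUESTIONS = [
    "Who collected the data, and do they have the expertise to do it well?",
    "Who reviewed it, and did the reviewers represent a wide range of field leaders?",
    "Who paid for the research, and could that funding have shaped the conclusions?",
    "Was the data collected properly (sample size, methodology, selection bias)?",
    "Do the cited sources actually say what the paper claims?",
]

def unanswered(answers):
    """Return the questions still open; `answers` maps question -> short note."""
    return [q for q in SOURCE_QUESTIONS if not answers.get(q)]

# Example: only the first two questions have been answered so far.
notes = {
    SOURCE_QUESTIONS[0]: "State climatology office; long publication record.",
    SOURCE_QUESTIONS[1]: "External peer review; reviewer list published.",
}
for q in unanswered(notes):
    print("Still open:", q)
```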
The questions we use to evaluate information haven’t changed — but the effort required to answer them honestly has. When the institutions we relied on as shortcuts to credibility are being systematically weakened, the only responsible path is to do the longer work: trace sources back to their roots, triangulate independently, and stay alert to what’s quietly disappeared. That’s always been best practice. Right now, it’s become essential.
