Physics Researcher Intelligence

Physics Researcher Pipeline


How we built this

We deployed parallel Claude AI agents to systematically evaluate every researcher across the top 284 labs at US physics research universities. Each candidate was scored on a 1–10 excellence scale across bibliometric signals, fellowship wins, paper quality, and career trajectory.
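The scoring step above can be sketched as a weighted blend of normalized signals mapped onto the 1–10 scale. This is an illustrative sketch only: the signal names, weights, and normalization are assumptions, not the production rubric.

```python
from dataclasses import dataclass

@dataclass
class Signals:
    """Hypothetical per-candidate signals, each normalized to 0-1."""
    bibliometrics: float   # citation/h-index strength
    fellowships: float     # fellowship and award wins
    paper_quality: float   # agent-assessed substance of papers
    trajectory: float      # career-trajectory signal

# Illustrative weights (assumed, not the actual calibration).
WEIGHTS = {
    "bibliometrics": 0.25,
    "fellowships": 0.20,
    "paper_quality": 0.35,
    "trajectory": 0.20,
}

def excellence_score(s: Signals) -> float:
    """Map weighted 0-1 signals onto the 1-10 excellence scale."""
    blended = (WEIGHTS["bibliometrics"] * s.bibliometrics
               + WEIGHTS["fellowships"] * s.fellowships
               + WEIGHTS["paper_quality"] * s.paper_quality
               + WEIGHTS["trajectory"] * s.trajectory)
    return round(1 + 9 * blended, 1)
```

A candidate with all signals at their maximum lands at 10.0; all at zero lands at the scale floor of 1.0.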

Agents didn't just count citations — they read actual papers, assessed the substance of each researcher's contributions, and evaluated novelty within their specific subfield. By ingesting the full publication record for every candidate in a lab, agents built a peer-relative ranking, comparing each student and postdoc against others working on similar problems. This approach surfaces researchers who are genuinely exceptional within their niche, not just those in high-citation subfields.
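The peer-relative ranking described above can be sketched as a percentile computed within each subfield rather than across the whole pool, so candidates in low-citation subfields are not penalized. The field names (`name`, `subfield`, `raw_score`) are illustrative assumptions.

```python
from collections import defaultdict

def peer_percentiles(candidates):
    """Rank each candidate against peers in the same subfield.

    candidates: list of dicts with 'name', 'subfield', 'raw_score'.
    Returns {name: percentile (0-100) within that candidate's subfield}.
    """
    by_subfield = defaultdict(list)
    for c in candidates:
        by_subfield[c["subfield"]].append(c)

    percentiles = {}
    for group in by_subfield.values():
        ranked = sorted(group, key=lambda c: c["raw_score"])
        n = len(ranked)
        for i, c in enumerate(ranked):
            # Top of a subfield scores 100 regardless of absolute citations.
            percentiles[c["name"]] = 100.0 * i / max(n - 1, 1)
    return percentiles
```

The design point is that a researcher leading a quiet subfield outranks a middling researcher in a citation-heavy one, which is exactly the effect the prose describes.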

Beyond current output, agents traced each candidate's full academic history — undergraduate publications, early-career research contributions, and participation in elite competitions like the Putnam Mathematical Competition and Physics Olympiads. These early signals of raw talent often predict future breakthroughs more reliably than graduate-level citation counts alone.

Bibliometric data was sourced from INSPIRE-HEP, Google Scholar, and arXiv. Contact information was enriched via LinkedIn and institutional directories. Scoring was calibrated so that no candidate receives a high-confidence rating without supporting evidence.

Universe
- Total researchers evaluated
- Qualified candidates
- Universities covered
- Top research labs: 284

Pipeline Quality
- Solid candidates (score ≥ 7)
- Very strong (score ≥ 8)
- Elite (score ≥ 9)

Contact Coverage

Data Quality
Score Distribution (qualified candidates)
- Below threshold (1–6)
- Solid (7)
- Very strong (8)
- Elite (9–10)
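The score bands above can be reproduced with a small bucketing helper; a minimal sketch, assuming scores on the 1–10 scale described earlier.

```python
def score_band(score: float) -> str:
    """Map a 1-10 excellence score to its distribution bucket."""
    if score >= 9:
        return "Elite (9-10)"
    if score >= 8:
        return "Very strong (8)"
    if score >= 7:
        return "Solid (7)"
    return "Below threshold (1-6)"
```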
Top Institutions by Qualified Candidates
Institution | Qualified | Score ≥ 7