We deployed parallel Claude AI agents to systematically evaluate every researcher across the top 284 labs at US physics research universities. Each candidate was scored on a 1–10 excellence scale across four signals: bibliometric strength, fellowship wins, paper quality, and career trajectory.
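The composite score described above can be sketched as a weighted combination of the four signal categories. The weights and field names below are illustrative assumptions; the actual rubric is not specified in the text.

```python
from dataclasses import dataclass

# Illustrative weights -- assumptions, not the actual rubric.
WEIGHTS = {
    "bibliometrics": 0.3,
    "fellowships": 0.2,
    "paper_quality": 0.3,
    "trajectory": 0.2,
}

@dataclass
class Candidate:
    name: str
    signals: dict[str, float]  # each signal rated 1-10 by an agent

def excellence_score(c: Candidate) -> float:
    """Weighted 1-10 composite across the four signal categories."""
    return sum(WEIGHTS[k] * c.signals[k] for k in WEIGHTS)
```

Keeping each signal on the same 1–10 scale before weighting makes the composite directly comparable across candidates.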
Agents didn't just count citations — they read actual papers, assessed the substance of each researcher's contributions, and evaluated novelty within their specific subfield. By ingesting the full publication record for every candidate in a lab, agents built a peer-relative ranking, comparing each student and postdoc against others working on similar problems. This approach surfaces researchers who are genuinely exceptional within their niche, not just those in high-citation subfields.
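The peer-relative ranking described above can be reduced to a percentile within a subfield cohort. This is a minimal sketch of that idea, assuming each candidate already has a composite score; the function name and signature are hypothetical.

```python
def peer_percentile(score: float, cohort_scores: list[float]) -> float:
    """Fraction of same-subfield peers scoring at or below `score`.

    A candidate with percentile 0.95 outscores 95% of peers working
    on similar problems, regardless of how citation-heavy the
    subfield is overall.
    """
    if not cohort_scores:
        return 0.0
    return sum(s <= score for s in cohort_scores) / len(cohort_scores)
```

Normalizing within the cohort rather than globally is what lets a standout researcher in a low-citation niche rank above a middling researcher in a high-citation one.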
Beyond current output, agents traced each candidate's full academic history — undergraduate publications, early-career research contributions, and participation in elite competitions like the Putnam Mathematical Competition and Physics Olympiads. These early signals of raw talent often predict future breakthroughs more reliably than graduate-level citation counts alone.
Bibliometric data was sourced from INSPIRE-HEP, Google Scholar, and arXiv. Contact information was enriched via LinkedIn and institutional directories. Scoring was calibrated so that no candidate receives a high-confidence rating without supporting evidence.
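The calibration rule in the last sentence amounts to an invariant check over the scored records. A minimal sketch, assuming hypothetical `confidence` and `evidence` fields on each record:

```python
def calibration_violations(candidates: list[dict]) -> list[dict]:
    """Return records that break the invariant:
    a 'high' confidence rating must be backed by at least one
    piece of supporting evidence (papers read, awards verified, etc.).
    """
    return [
        c for c in candidates
        if c["confidence"] == "high" and not c["evidence"]
    ]
```

Running this over the full candidate set and requiring an empty result is one simple way to enforce the stated guarantee.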