Library intelligence
Leaders Watch
Tracking how influential figures' positions evolve in public—through statements, interviews, governance moves, and the documents we host. This is not a directory; it is an editorial timeline you can extend entry by entry. Use theme and role filters to slice the set (e.g. governance, alignment).
How we assign tiers
Tiers reflect present-day leverage over frontier AI and the global safety debate—who can move capability, deployment, norms, or regulation at the largest scale right now, not lifetime achievement alone. Evidence must still be anchored to primary sources on each profile; a tier is not a quality score for individual quotes.
- Tier 1 (featured) — Heads of major frontier labs or hyperscaler AI divisions; founders of standalone frontier-model companies; and a small set of researchers whose public statements routinely reset institutional and media language on catastrophic or systemic risk. If their narrative or product choices shift, many others have to react.
- Tier 2 — Highly influential specialists: alignment and safety researchers, norm-shaping advocates, and leaders whose AI footprint is significant but narrower than the tier-1 bar above. Still first-class profiles with full timelines—not a "B list" for rigour.
Matching leaders
3 profiles match your filters.
Founder / builder
Dario Amodei
Anthropic leadership
Researcher-operator whose public writing and interviews often stress interpretability, misuse, and deployment discipline as Anthropic scales its frontier models.
Open timeline →
Researcher
Geoffrey Hinton
Independent research leadership
A central figure in modern neural networks whose public risk framing in 2023 helped legitimise extinction-risk and governance language well beyond specialist circles.
Open timeline →
Executive
Sam Altman
OpenAI leadership
A defining operator of the frontier-lab era whose public commitments on safety, deployment, and geopolitical context are continuously stress-tested by product velocity.
Open timeline →