Library intelligence

Leaders Watch

Tracking how influential figures' positions evolve in public—through statements, interviews, governance moves, and the documents we host. This is not a directory; it is an editorial timeline you can extend entry by entry. Use theme and role filters to slice the set (e.g. governance, alignment).

How we assign tiers

Tiers reflect present-day leverage over frontier AI and the global safety debate: who can move capability, deployment, norms, or regulation at the largest scale right now, not lifetime achievement alone. Evidence on each profile still has to be anchored to primary sources; a tier is not a quality score for individual quotes.

  • Tier 1 (featured) — Heads of major frontier labs or hyperscaler AI divisions; founders of standalone frontier-model companies; and a small set of researchers whose public statements routinely reset institutional and media language on catastrophic or systemic risk. If their narrative or product choices shift, many others have to react.
  • Tier 2 — Highly influential specialists: alignment and safety researchers, norm-shaping advocates, and leaders whose AI footprint is significant but narrower than the tier-1 bar above. Still first-class profiles with full timelines—not a "B list" for rigour.
16 profiles · Tier 1 featured · Theme + role filters

Matching leaders

6 profiles match your filters.

Founder / builder

Dario Amodei

Anthropic leadership

Tier 1

Researcher-operator whose public writing and interviews often stress interpretability, misuse, and deployment discipline as Anthropic scales models that compete at the frontier.

alignment · governance · deployment · risk

Open timeline →

Researcher

Geoffrey Hinton

Independent research leadership

Tier 1

A central figure in modern neural networks whose public risk framing in 2023 helped legitimise extinction-risk and governance language well beyond specialist circles.

risk · governance · public understanding · alignment

Open timeline →

Researcher

Yoshua Bengio

Mila / academic leadership

Tier 1

A co-founder of modern deep learning who has moved into explicit AI safety and governance work, including institutional experiments that bridge research and policy.

alignment · governance · risk · institutions

Open timeline →

Researcher

Ilya Sutskever

Safe Superintelligence leadership

A core technical architect of the deep learning boom whose later public emphasis tilted toward superintelligence risk and governance—often in tension with product tempo.

alignment · governance · risk

Open timeline →

Researcher

Paul Christiano

Alignment research leadership

A leading technical voice on scalable oversight, iterated amplification, and practical alignment paths—often bridging abstract risk to engineering agendas.

alignment · technical · governance · evals

Open timeline →

Researcher

Stuart Russell

Academic governance leadership

Longstanding public interpreter of AI risk for policymakers and general audiences—often with formal models of uncertainty and control.

governance · risk · alignment · policy

Open timeline →