Leaders Watch
Tracking how influential figures' positions evolve in public, through statements, interviews, governance moves, and the documents we host. This is not a directory but an editorial timeline you can extend entry by entry. Use the theme and role filters to slice the set (e.g., governance, alignment).
How we assign tiers
Tiers reflect present-day leverage over frontier AI and the global safety debate: who can move capability, deployment, norms, or regulation at the largest scale right now, not lifetime achievement alone. Evidence on each profile still has to be anchored to primary sources; a tier is not a quality score for individual quotes.
- Tier 1 (featured) — Heads of major frontier labs or hyperscaler AI divisions; founders of standalone frontier-model companies; and a small set of researchers whose public statements routinely reset institutional and media language on catastrophic or systemic risk. If their narrative or product choices shift, many others have to react.
- Tier 2 — Highly influential specialists: alignment and safety researchers, norm-shaping advocates, and leaders whose AI footprint is significant but narrower than the tier-1 bar above. Still first-class profiles with full timelines—not a "B list" for rigour.
Matching leaders
7 profiles match your filters.
Executive
Demis Hassabis
Google DeepMind leadership
Research leader turned lab chief whose public mix of scientific optimism and cautious governance language tracks how DeepMind interfaces with Alphabet and global regulators.
Founder / builder
Elon Musk
xAI leadership
A high-decibel operator whose bets span frontier models (xAI), autonomy, and global platforms—continuously shaping how the public and policymakers imagine both catastrophic risk and competitive acceleration.
Executive
Sam Altman
OpenAI leadership
A defining operator of the frontier-lab era whose public commitments on safety, deployment, and geopolitical context are continuously stress-tested by product velocity.
Researcher
Yoshua Bengio
Mila / academic leadership
A co-founder of modern deep learning who has moved into explicit AI safety and governance work, including institutional experiments that bridge research and policy.
Advocate / organiser
Max Tegmark
FLI leadership
A public organiser and communicator who links existential risk framing to concrete policy instruments and institutional coordination.
Researcher
Stuart Russell
Academic governance leadership
Longstanding public interpreter of AI risk for policymakers and general audiences—often with formal models of uncertainty and control.
Advocate / organiser
Tristan Harris
Center for Humane Technology leadership
Former design ethicist turned organiser focused on attention economies, platform power, and how AI accelerates psychological and civic harms.