Library intelligence
Leaders Watch
Tracking how influential figures' positions evolve in public—through statements, interviews, governance moves, and the documents we host. This is not a directory; it is an editorial timeline you can extend entry by entry. Use theme and role filters to slice the set (e.g. governance, alignment).
How we assign tiers
Tiers reflect present-day leverage over frontier AI and the global safety debate—who can move capability, deployment, norms, or regulation at the largest scale right now, not lifetime achievement alone. Evidence still has to be anchored to primary sources on each profile; tier is not a quality score for every quote.
- Tier 1 (featured) — Heads of major frontier labs or hyperscaler AI divisions; founders of standalone frontier-model companies; and a small set of researchers whose public statements routinely reset institutional and media language on catastrophic or systemic risk. If their narrative or product choices shift, many others have to react.
- Tier 2 — Highly influential specialists: alignment and safety researchers, norm-shaping advocates, and leaders whose AI footprint is significant but narrower than the tier-1 bar above. Still first-class profiles with full timelines—not a "B list" for rigour.
Matching leaders
16 profiles.
Researcher
Andrew Chi-Chih Yao
Tsinghua IIIS leadership
Founding figure in computational complexity and cryptography whose later career focuses on elite computer-science education, interdisciplinary institutes, and AI-related research leadership in China—representing the “deep bench” beneath headline-grabbing models.
Open timeline →
Founder / builder
Dario Amodei
Anthropic leadership
Researcher-operator whose public writing and interviews stress interpretability, misuse, and deployment discipline as Anthropic scales its own frontier models.
Open timeline →
Executive
Demis Hassabis
Google DeepMind leadership
Research leader turned lab chief whose public mix of scientific optimism and cautious governance language tracks how DeepMind interfaces with Alphabet and global regulators.
Open timeline →
Founder / builder
Elon Musk
xAI leadership
A high-decibel operator whose bets span frontier models (xAI), autonomy, and global platforms—continuously shaping how the public and policymakers imagine both catastrophic risk and competitive acceleration.
Open timeline →
Researcher
Geoffrey Hinton
Independent research leadership
A central figure in modern neural networks whose public risk framing in 2023 helped legitimise extinction-risk and governance language well beyond specialist circles.
Open timeline →
Founder / builder
Kai-Fu Lee
01.AI / Sinovation leadership
A bridge figure between Silicon Valley and China’s AI industry whose public writing and investing track how capability, capital, and state-market dynamics intersect in the PRC—and how that compares to US frontier labs.
Open timeline →
Executive
Mark Zuckerberg
Meta leadership
Operator of one of the world’s largest AI research and deployment footprints; the Llama family and open-weights strategy materially reset the debate over capability diffusion, misuse, and guardrails.
Open timeline →
Executive
Mustafa Suleyman
Microsoft AI leadership
Product-facing executive for Microsoft’s consumer and Copilot AI stack; public voice on containment, “frontier” safeguards, and how enterprise-scale deployment should pair with governance.
Open timeline →
Executive
Robin Li
Baidu leadership
Operator of China’s largest search-and-AI platform stack; Baidu’s ERNIE models and cloud narrative are a primary domestic counterpart to Western foundation-model labs for governance and deployment analysis.
Open timeline →
Executive
Sam Altman
OpenAI leadership
A defining operator of the frontier-lab era whose public commitments on safety, deployment, and geopolitical context are continuously stress-tested by product velocity.
Open timeline →
Researcher
Yoshua Bengio
Mila / academic leadership
A co-founder of modern deep learning who has moved into explicit AI safety and governance work, including institutional experiments that bridge research and policy.
Open timeline →
Researcher
Ilya Sutskever
Safe Superintelligence leadership
A core technical architect of the deep learning boom whose later public emphasis tilted toward superintelligence risk and governance—often in tension with product tempo.
Open timeline →
Advocate / organiser
Max Tegmark
FLI leadership
A public organiser and communicator who links existential risk framing to concrete policy instruments and institutional coordination.
Open timeline →
Researcher
Paul Christiano
Alignment research leadership
A leading technical voice on scalable oversight, iterated amplification, and practical alignment paths—often bridging abstract risk to engineering agendas.
Open timeline →
Researcher
Stuart Russell
Academic governance leadership
Longstanding public interpreter of AI risk for policymakers and general audiences—often with formal models of uncertainty and control.
Open timeline →
Advocate / organiser
Tristan Harris
Center for Humane Technology leadership
Former design ethicist turned organiser focused on attention economies, platform power, and how AI accelerates psychological and civic harms.
Open timeline →