latentbrief

The EA and AI Safety Movements Face Leadership Vacuum

LessWrong

In brief

  • The effective altruism (EA) and AI safety movements have reached a pivotal moment.
  • In 2026, these groups acknowledge no single leader or organization as their authority, a stark contrast to the past, when figures like Peter Singer and Eliezer Yudkowsky provided clear direction.
    • This shift comes after scandals involving individuals like Sam Bankman-Fried (SBF) and critiques of certain EA leaders, leading many to question traditional hierarchies.
  • Without formal leadership, informal structures have emerged.
  • Open Philanthropy and Coefficient Giving dominate funding flows, holding significant influence despite their reluctance to claim leadership roles.
  • Similarly, Anthropic's public statements and research shape the discourse on AI safety, making their timelines and decisions highly influential.
  • The Constellation network also wields quiet power through its research, even as it avoids overt claims of influence.
  • Looking ahead, the absence of clear leaders raises questions about coordination and moral guidance within these movements.
  • While some view this as a retreat from responsibility, others see it as a necessary evolution.
  • The challenge now is to find new ways to lead without relying on traditional hierarchies.

Terms in this brief

Open Philanthropy
A nonprofit organization that uses research to identify and fund effective altruism causes. It focuses on issues like AI safety, global health, and climate change, aiming to maximize the impact of its donations.
Coefficient Giving
An initiative within the effective altruism movement that evaluates and recommends charitable organizations based on their effectiveness, helping donors decide where to contribute for maximum positive impact.

Read full story at LessWrong
