What We Owe the Future

A Guide to Ethical Living for the Fate of Our Future

by William MacAskill

3.93 avg rating — 7,309 ratings

Book Edition Details

ISBN: 9781541618626
Publisher: Basic Books
Publication Date: 2022
Reading Time: 11 minutes
Language: English
ASIN: N/A

Summary

At the crossroads of human destiny, philosopher William MacAskill presents a provocative vision in "What We Owe the Future." Humanity teeters on the brink of unprecedented potential or catastrophic demise, with the choices of today sculpting the landscapes of tomorrow. MacAskill's longtermism compels us to look beyond immediate crises like climate change or pandemics, urging us to forge a resilient future where digital intellects coexist with human legacy. His narrative, woven with philosophical insights and historical reflections, challenges us to become stewards of a world brimming with promise. Will our legacy be one of foresight and compassion, securing prosperity for generations yet unborn?

Introduction

The decisions we make today may determine the fate of humanity for millions of years to come. This radical proposition challenges our conventional moral thinking, which typically focuses on immediate consequences and present-day concerns. The central argument presented here is that we have profound moral obligations to future generations—not just our children and grandchildren, but potentially trillions of people who may live in the centuries and millennia ahead. This perspective, known as longtermism, rests on three deceptively simple premises: future people matter morally, there could be an enormous number of them, and our actions today can significantly influence their lives. The implications of accepting these premises are transformative, suggesting that some of humanity's most pressing priorities may not be the problems that dominate today's headlines, but rather the long-term risks and opportunities that will shape civilization's trajectory. The analysis proceeds through careful philosophical reasoning combined with empirical investigation, examining everything from the contingency of moral progress to the mathematics of population ethics. The goal is not merely to present an abstract philosophical position, but to demonstrate how longtermist thinking can guide practical decision-making about technology, governance, and human flourishing. The journey requires grappling with profound questions about value, risk, and responsibility that extend far beyond our ordinary moral horizons.

The Moral Case for Longtermism and Future Generations

The case for caring about the long-term future begins with a fundamental moral intuition: future people count. This may seem obvious, yet its implications are revolutionary. If someone drops a glass bottle on a hiking trail, the moral obligation to clean it up doesn't depend on when a child might cut themselves on the shards—whether next week or next century. Harm is harm, whenever it occurs. This temporal neutrality extends to positive experiences as well. The joy of attending a concert or achieving a lifelong goal doesn't become less valuable simply because it happens in the future rather than today. Distance in time, like distance in space, shouldn't diminish our moral concern. People matter regardless of whether they live thousands of miles away or thousands of years hence. The significance of this principle becomes clear when we consider the sheer scale of the future. If humanity survives for even a fraction of its potential lifespan—say, as long as the typical mammalian species—there would be roughly 80 trillion people yet to come. Future generations would outnumber us ten thousand to one. Each of these individuals would have hopes, dreams, and the capacity for both suffering and flourishing. The mathematical implications are staggering. Even small changes in the probability of human survival or the quality of future civilization could affect more lives than all of recorded history combined. This doesn't mean we should ignore present suffering or sacrifice current wellbeing for speculative future benefits. Rather, it suggests that the long-term consequences of our actions deserve far more attention than they typically receive in moral deliberation and policy-making.
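
The arithmetic behind these scale claims is easy to verify. Below is a minimal sketch in Python, assuming the summary's illustrative figures (80 trillion future people, about 8 billion alive today); the probability shift at the end is a purely hypothetical number chosen for illustration.

    # Scale-of-the-future arithmetic (figures from the summary above;
    # the probability shift is a hypothetical assumption).
    future_people = 80e12       # ~80 trillion people yet to come
    alive_today = 8e9           # ~8 billion people alive now

    ratio = future_people / alive_today
    print(f"Future people per person alive today: {ratio:,.0f}")  # ~10,000

    # Even a tiny change in the odds that this future is realized
    # corresponds to an enormous number of lives in expectation.
    probability_shift = 0.0001  # hypothetical 0.01-point change in survival odds
    print(f"Expected lives at stake: {probability_shift * future_people:,.0f}")  # ~8 billion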

Trajectory Changes: Value Lock-in and Moral Progress

Throughout history, societies have experienced periods of moral plasticity followed by relative rigidity, like molten glass that can be shaped before it cools and hardens. We may currently be living through such a plastic period, where the values that guide civilization remain malleable and open to change. However, advancing technology—particularly artificial intelligence—could enable unprecedented value lock-in, permanently fixing certain moral and political arrangements. The historical precedent is illuminating. Ancient China's "Hundred Schools of Thought" represented a period of remarkable intellectual diversity, with Confucianism, Legalism, Daoism, and Mohism competing for influence. When the Han dynasty eventually locked in Confucianism as the dominant ideology, it shaped Chinese civilization for over two millennia. Similar patterns appear throughout history: periods of ideological competition followed by the entrenchment of particular worldviews. Advanced artificial intelligence could make such lock-in far more permanent and comprehensive than anything previously possible. AI systems could potentially be designed to preserve and enforce specific values indefinitely, creating immortal guardians of particular ideologies. Unlike human institutions, which evolve and decay over time, AI systems could maintain perfect fidelity to their original programming across vast timescales. This possibility creates both tremendous opportunity and existential risk. If beneficial values become locked in—those promoting human flourishing, moral progress, and the expansion of compassion—the result could be a golden age lasting millions of years. But if malevolent or misguided values become entrenched, the consequences could be equally durable. The stakes of getting this right are therefore astronomical, making the cultivation of wisdom and moral reflection in the present moment a matter of cosmic importance.

Existential Risks: Extinction, Collapse, and Stagnation

The most direct threat to humanity's long-term potential is premature extinction. While natural risks like asteroid impacts once posed the greatest danger, human-created risks now dominate the landscape. Engineered pandemics represent perhaps the most serious near-term threat, combining the destructive potential of nuclear weapons with the accessibility of biotechnology. The democratization of biotechnology is proceeding at breakneck speed. The cost of sequencing a human genome has fallen from hundreds of millions of dollars to roughly one thousand dollars in just two decades. Gene editing tools are becoming increasingly powerful and accessible. This progress promises tremendous medical benefits, but it also enables the creation of pathogens far more dangerous than anything found in nature—diseases that could combine the lethality of Ebola with the transmissibility of measles. Laboratory safety standards have proven woefully inadequate even for naturally occurring pathogens. The UK's 2001 foot-and-mouth disease outbreak cost £8 billion and led to the culling of millions of animals; in 2007 the disease escaped from a British research laboratory, causing two further outbreaks traced to the same site. Similar patterns of negligence appear throughout the historical record, from anthrax leaks in the Soviet Union to smallpox escapes in Britain. The risk is compounded by the potential for great-power conflict, which could trigger arms races in biological weapons or other destructive technologies. The current "Long Peace" between major powers, while historically unprecedented, may not be sustainable as global power dynamics shift and new technologies alter the strategic landscape. Nuclear war remains a persistent threat, with close calls during the Cold War demonstrating how easily deterrence can fail.
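
As a rough check on that pace, here is a minimal sketch assuming illustrative endpoints of roughly $100 million falling to $1,000 over twenty years (the summary gives only approximate figures):

    import math

    # Annualized decline in genome-sequencing cost, using the summary's
    # approximate endpoints (both figures are illustrative).
    start_cost = 100e6   # "hundreds of millions of dollars", early 2000s
    end_cost = 1_000     # "roughly one thousand dollars", two decades later
    years = 20

    annual_factor = (end_cost / start_cost) ** (1 / years)
    print(f"Cost retained each year: {annual_factor:.0%}")        # ~56%
    halving_time = math.log(0.5) / math.log(annual_factor)
    print(f"Cost halves roughly every {halving_time:.1f} years")  # ~1.2

On these assumptions the cost halves faster than once every fifteen months, a decline that outpaces even Moore's law.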

Population Ethics and the Value of Future Lives

The question of whether preventing future lives constitutes a moral loss strikes at the heart of population ethics, one of philosophy's most challenging domains. Common intuition suggests neutrality about creating new people—we favor making existing people happy rather than making happy people. Yet this intuition leads to paradoxical conclusions when examined closely. Consider parents choosing between having a child with migraines or waiting to have a healthy child later. If creating people is truly neutral, then having no child should be equivalent to having either the healthy or the unhealthy child. But if both options are equivalent to having no child, then by transitivity they are equivalent to each other, implying that having a child with migraines is no worse than having a healthy child, which contradicts our clear judgment that health is preferable to illness. The fragility of identity compounds this puzzle. Small changes in timing—a different route home from work, a longer line at the store—would alter which sperm fertilizes which egg, changing the identity of every future person. Major policy decisions like ending fossil fuel subsidies wouldn't improve the lives of specific future individuals; they would determine which entirely different people come to exist. If creating people were morally neutral, such policies couldn't be justified by their benefits to future generations. The total view offers a more coherent approach: one population is better than another if it contains more total wellbeing. This implies that creating additional happy lives makes the world better, even if it leads to the seemingly counterintuitive "Repugnant Conclusion" that a vast population with barely positive lives could be better than a smaller population with excellent lives. While this conclusion challenges our intuitions, it follows from premises that are difficult to reject: that making existing people better off while adding new happy lives is good, that equality isn't intrinsically bad, and that "better than" is a transitive relation.
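
The total view's comparison can be made concrete with a toy calculation. The sketch below uses hypothetical population sizes and wellbeing levels chosen only to illustrate how a vast population of barely positive lives can outscore a smaller, thriving one:

    # Toy illustration of the total view and the Repugnant Conclusion.
    # All population sizes and wellbeing levels are hypothetical.

    def total_wellbeing(population: float, avg_wellbeing: float) -> float:
        """Total view: a population's value is the sum of its members' wellbeing."""
        return population * avg_wellbeing

    world_a = total_wellbeing(10e9, 100)  # modest population, excellent lives
    world_z = total_wellbeing(2e12, 1)    # vast population, barely positive lives

    print(f"World A total: {world_a:,.0f}")  # 1,000,000,000,000
    print(f"World Z total: {world_z:,.0f}")  # 2,000,000,000,000
    print("Total view prefers:", "Z" if world_z > world_a else "A")

Because the total is simply population times average wellbeing, sheer numbers can always compensate for lower quality of life, which is exactly what makes the conclusion feel repugnant yet hard to escape.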

Summary

The fundamental insight emerging from this analysis is that our ordinary moral horizons are dramatically too narrow. By focusing primarily on immediate consequences and present-day concerns, we risk neglecting what may be the most important moral questions of our time: how to ensure that humanity's vast potential is realized rather than squandered. The argument demonstrates that taking future generations seriously as moral patients, combined with realistic assessments of both the scale of the future and our power to influence it, leads to a revolutionary reorientation of priorities. This perspective offers both tremendous hope—that we might help create a flourishing civilization lasting millions of years—and sobering responsibility, as the choices we make in the coming decades may echo across all of human history.
