
This week on Sea Change Radio, the second half of our two-part series examining the effective altruism movement and “longtermism.” We speak to philosopher Émile Torres to better understand the movement’s futurist vision and its shockingly callous view on climate change. Then we discuss how the Sam Bankman-Fried scandal and ensuing cryptocurrency collapse may end up affecting the future of philanthropy.


00:02 Narrator – This is Sea Change Radio covering the shift to sustainability. I’m Alex Wise.


00:19 Émile Torres (ET) – On the one hand, I think it’s good that we should look more carefully at the best ways to do the most good in the world. But on the other hand, it’s important for that focus not to lead one to end up ignoring all of these kinds of non-quantifiable positive effects that one can have by engaging in, well, what the effective altruists would probably describe as ineffective altruism.


00:45 Narrator – This week on Sea Change Radio, the second half of our two-part series examining the effective altruism movement and longtermism. We speak to philosopher Émile Torres to better understand the movement’s futurist vision and its shockingly callous view on climate change. Then we discuss how the Sam Bankman-Fried scandal and ensuing cryptocurrency collapse may end up affecting the future of philanthropy.


01:31 Alex Wise (AW) – I’m joined now on Sea Change Radio by Émile Torres. Émile is a doctoral candidate in philosophy. Their forthcoming book is entitled “Human Extinction: A History of the Science and Ethics of Annihilation.” Émile, welcome to Sea Change Radio.


01:47 Émile Torres (ET) – Great to be here. Thanks for having me.


01:49 Alex Wise (AW) – So Émile, we spoke about effective altruism and longtermism with Alexander Zaitchik recently. Alex spelled out the basics, gave a recap of the connection between Sam Bankman-Fried and effective altruism, and walked us through the history of it, but we didn’t really get a chance to explore as deeply as I would have wanted the connection between that movement, longtermism and effective altruism, and climate change and the environment, and also what it might mean for the future of philanthropy. So first, why don’t you explain your history with this idea? You’ve written quite extensively on it.


02:38 ET – Yeah, that’s right. You know, the term longtermism was coined in 2017, but the idea goes back two decades or so. Initially, as Will MacAskill, who coined the term, noted in a blog post about the coining of this term, before you had the word longtermism, individuals in this space were generally just described as people who focused on existential risk. The concept of existential risk is really central to the longtermist worldview, and this idea was introduced in 2002 by Nick Bostrom, a Swedish philosopher who in 2005 founded the Future of Humanity Institute, at which many of the leading longtermists like Will MacAskill and Toby Ord have held positions. So basically, the idea of existential risk is any event that would prevent us from fulfilling our long-term potential in the universe, millions, billions, trillions of years into the future. That may mean, you know, radically enhancing ourselves, radically transforming the human organism so that we become some kind of superior machines, as well as colonizing space and maximizing value within what’s called our future light cone, the region of the universe that’s in principle accessible to us. So an existential risk, once again, is any event that would prevent us from achieving these “vast and glorious ends,” to quote Toby Ord. I initially came across this idea of existential risk probably in the mid-2000s and found it very alarming, because the individuals who were discussing existential risk acknowledged that a primary source of danger facing humanity in the 21st century arises from the development of certain advanced technologies, but those same individuals argued that it’s absolutely crucial that we develop these technologies. There’s only one way forward. The reason is that you need these technologies, the very technologies that will introduce unprecedented risks to humanity, in order to fulfill our long-term potential. We need to develop advanced synthetic biology and biotechnology, molecular nanotechnology, artificial superintelligence and so on in order to spread through the universe and create literally astronomical amounts of value, perhaps to actually create this sort of techno-utopian world. So I found this to be very alarming initially, but I sort of changed my mind after reading some books about the infeasibility of preventing the development of certain technologies. It’s a kind of techno-determinist view that some of these extremely dangerous, so-called dual-use technologies are going to be developed by somebody at some point, and so if one accepts this techno-determinist view, then maybe the best thing to do is to join the team rather than try to resist, because working to prevent the development of these technologies is maybe just a losing battle. So that’s how I ended up in the existential risk space, and my views evolved; I became very sympathetic to this sort of longtermist picture of what humanity could become in the very far future, so I ended up writing a bunch of books on this and publishing a lot of papers.
It was only much more recently, really in 2018 or 2019, that certain aspects of the longtermist ideology suddenly started to appear problematic or flawed, or potentially even quite dangerous. So my views underwent a sort of 180-degree shift or pivot in terms of my ideological orientation, and that’s how I ended up being one of the more vocal critics out there.


07:04 AW – Well, you’ve been able to look at both sides of the coin at least, so I think we’re getting a clear-eyed picture. But before we dive into some of the examples that would raise eyebrows for listeners, how does the longtermist define long term? When a policymaker looks at short-term versus long-term risks, or a CFO of a company talks about short-term versus long-term risks, they’re talking, you know, one or two years versus 10 to 30 years. But this is way out there. And then also, maybe let’s define existential risks, because we have a lot of them staring us right in the face right now, but the longtermist view tends to make them more ephemeral.


07:59 ET – Yeah, absolutely. It’s true that what the longtermists are thinking of is the very long-run future of humanity, millions, billions, trillions of years into the future. In fact, the longtermist view crucially rests on a subfield of cosmology which was really founded in 1969 by an individual at the University of Cambridge named Martin Rees; he now has the title Lord Martin Rees. This field, called physical eschatology, was developed in the 70s, and it’s the first really rigorous look at the evolution of the entire cosmos, including the future of our planet, the future of our solar system, and so on. And what these physical eschatologists found is that the universe could remain habitable for an extremely long amount of time. Earth itself will be sufficiently hospitable to complex living creatures like us for maybe 800 million years or a billion years; at that point the sun is going to get more and more luminous, the temperature at the surface of the Earth will increase significantly, the oceans will boil, and so on.


09:29 AW – And this was laid out before climate change concepts were popularized.


09:34 ET – Yeah, I mean, the connection between carbon dioxide and climate change was made all the way back in at least the early 20th century, but it wasn’t until the 1990s and early 2000s that a genuine consensus emerged. Of course, you know, 98 percent, probably a higher percentage, of climate scientists agree that climate change is real, it’s anthropogenic, and it’s very dangerous. So yeah, this was several decades before that, when these physical eschatologists were figuring out how long Earth will remain habitable, how long our solar system will remain habitable, and the universe itself, which is just an enormous number: at least 10 to the 40 years into the future. That’s the point at which protons will decay, and that’s a hard limit on biological life. So basically the vision of longtermism is crucially founded on these empirical results from science. As for the longtermist idea, it’s useful, actually, first of all, to distinguish between moderate longtermism and radical longtermism.


10:55 AW – If you could define those two, let’s parse those out, please.


10:59 ET – Yeah, so moderate longtermism says that ensuring that things go well in the very long term is a key priority of our time, and radical longtermism changes that from the indefinite to the definite article, if you will, in terms of grammar: ensuring that things go well in the very long term is the key moral priority of our time. And basically the idea is, OK, if Earth is habitable for maybe another billion years, the number of future earthlings, future people on our planet, could be absolutely enormous. Then if we colonize the solar system, we could live for even longer than a billion years. If we colonize the universe, we could exist for, you know, 10 to the 40 years, or perhaps even longer. Further, that’s just the temporal dimension; of course we can also spread out in space, further increasing the human population. As a result, longtermists have made estimates, going all the way back to the early 2000s, some of the first estimates of how many future people there could be. And then the idea is, OK, if what I want to do is to positively affect the greatest number of people possible, and if most people who will ever exist exist in the very far future, maybe what I should be doing is looking at how my actions might influence them, millions, billions, trillions, or even just thousands of years from now, rather than how my actions affect people who are my contemporaries, who are, for example, suffering global poverty. So maybe the best way to maximize the goodness of your impact on the world is to focus on these very, very far-future people, simply by virtue of the fact that there are so many more of them than there are people around today.


12:57 AW – So it’s a cold, calculating concept of saying, “these trillions of people that will exist, you’re going to be helping them, so you’re actually doing more good by focusing on the trillions in the future than the billions that exist now.” It’s that kind of calculus.


13:15 ET – That’s exactly right, yeah. Some people confuse longtermism with the claim that current people don’t matter and future people do. No, on the longtermist view, everybody counts for one, everybody counts the same. The thing is that there are just so many people who could exist in the far future that they end up being much more morally salient.


13:38 AW – It’s changing the rules so that the reality is exactly what you’re talking about: we are not counting current poverty issues as much as future poverty issues. So the policy ends up effectively dismissing the current population for the sake of the future population.


13:56 ET – Yeah, that’s right. In practice, that is what is going to happen. I mean, some of the leading longtermists, like Hilary Greaves, have been explicit that if you hold this radical longtermist view especially, then alleviating global poverty almost certainly isn’t the best use of our finite resources. Everybody can agree that global poverty is very bad, but there are so many possible people in the distant future that, simply, it’s just a numbers game.
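(To make the “numbers game” concrete, here is a minimal sketch of the population arithmetic the longtermist argument turns on. All of the figures, the headcounts and the fraction of future people reached, are illustrative assumptions, not numbers from the interview.)

```python
# Illustrative sketch of the longtermist "numbers game" (all figures hypothetical).
# On this view every person affected counts equally, so the far future can
# dominate the moral ledger purely by headcount.

current_people_in_poverty = 700_000_000  # hypothetical order-of-magnitude figure
possible_future_people = 10**24          # hypothetical far-future population

# Suppose each person helped counts as one unit of good, and a far-future
# intervention reaches only a tiny fraction of all possible future people.
fraction_of_future_reached = 1e-10

good_now = current_people_in_poverty * 1.0
good_future = possible_future_people * fraction_of_future_reached * 1.0

print(f"Good done helping people today:    {good_now:.2e}")    # 7.00e+08
print(f"Good done helping the far future:  {good_future:.2e}") # 1.00e+14

# Even reaching one ten-billionth of the hypothetical future population
# outweighs helping everyone in poverty today by several orders of magnitude,
# which is exactly the "numbers game" described above.
```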


14:42 (Music Break)


15:06 AW – This is Alex Wise on Sea Change Radio and I’m speaking to philosopher Émile Torres. Their forthcoming book is “Human Extinction: A History of the Science and Ethics of Annihilation.” So Émile, speaking of annihilation, we’re looking at a pretty cataclysmic future with climate change. Why don’t you share your thoughts on the longtermist view of climate change and where it goes off the rails, in your estimation?


15:34 ET – Yeah, I’ve argued this in writing before: I think it’s impossible to read the longtermist literature and not come away with a rather rosy picture of the climate.


15:47 AW – They don’t deny it in any way, which is important to set forth, but they’re so techno-determinist, as you call it, that they’re convinced that silver bullets will come and we’ll be fine, so let’s focus on some other stuff. Is that oversimplifying things?


16:04 ET – No, I think there are three points to make. On the one hand, it’s absolutely true that longtermists are not climate denialists. But oftentimes, if you look at what they write, they seem to have a view of climate change that doesn’t really align with what genuine climate experts believe. You know, Will MacAskill has written in his most recent book on longtermism, called What We Owe the Future, that global agriculture really isn’t going to be devastated by 15 degrees Celsius of warming.


16:43 AW – Warming of 15 degrees Celsius, that’s a swing of 27 degrees Fahrenheit, just to give our Fahrenheit-loving listeners an idea. I mean, we’re talking about one or two degrees Celsius being a major game changer for life as we know it.


16:57 ET – Yep, 15. It’s really an astonishing number. I mean, I sent it to a bunch of experts and, unsurprisingly, every single one had the same response: that is outlandish and completely implausible. So there is a sense in which there’s a kind of soft denialism about the potential for climate change to really have catastrophic consequences. In addition, there’s a deep faith, which I think is really quite pervasive in the longtermist community, that technology will ultimately solve these problems. So, for example, some of them have argued that the best response to climate change isn’t to reduce the human population, it’s to increase the human population; therefore, we should be more worried about underpopulation than overpopulation. The reason is that the more people you have, the more individuals there will be working on science, for example, on research and development.


18:04 AW – Just that there will be enough monkeys to type out the perfect novel.


18:09 ET – Yeah, yeah. So basically, by increasing the human population you accelerate the rate of progress, and what’s going to solve the climate crisis, ultimately, is just, you know, more technological and scientific “progress,” quote-unquote. So they end up with this sort of counterintuitive response to the climate situation: OK, we accept that climate change is actually happening, but what we think should be done is that people should have more kids, raise their kids right, increase the number of scientists out there, and then we’ll stumble upon, you know, an efficacious technological solution to the problem of climate change. And that is also a very controversial idea. The other, at least in my opinion, quite bizarre response to the climate situation is to argue that the reason we should stop burning fossil fuels right now is that we may need these fossil fuel reserves at some future point in time. So imagine that we burn up all of the fossil fuels that are available to us, or would be available to us, and then industrial civilization collapses. If we want to rebuild industrial civilization, we may need to have oil, coal, gas, and so on to go through another industrial revolution. And the whole reason we’d want to go through another industrial revolution is, on their view, precisely because the goal is to fulfill our long-term potential millions, billions, trillions of years from now. That means colonizing space, increasing the future human population immensely, maybe creating vast computer simulations in which digital people live. And if we don’t have industrial civilization, we probably won’t be able to do that; industrial civilization is a stepping stone to that. So there’s a superficial alignment between their suggestion that, yes, we should stop burning fossil fuels immediately, and the climate scientists’ claim that we should stop doing that, but the underlying reasons are really radically different. And some would argue that their reasons are not particularly good reasons. I mean, essentially they’re arguing we should make the same mistakes again, that we should just, you know, pollute the planet once more.


20:37 AW – But let’s save some of the fossil fuels so that we can make those same mistakes again in 100 years or 1,000 years.


20:45 ET – Yeah, yeah.


20:46 AW – The longtermism and effective altruism movement has really captured the imagination of a swath of the billionaire class, and that has real effects on the way nonprofits look at the world, the way that all of us look at giving. Why don’t you connect the dots for us, if you will: how do you see the fallout from the Sam Bankman-Fried scandal and the cryptocurrency collapse affecting effective altruism and longtermism’s very long view of how we need to care about this planet and our people, and what impact is it having on today’s philanthropy space?


21:30 ET – Yeah, so I’ve been, I guess, a little concerned that the longtermist view is going to distract from cause areas, you know, charitable causes, that I think are really much more important: things like alleviating global poverty, reducing animal suffering, eliminating factory farming, and so on. There are lots of causes that are more, you know, sort of near-term. Just to tie this back to what we were discussing before, I think climate justice is a huge issue, and there are a bunch of scholars who have started talking about climate reparations. Of course, there was the recent agreement from, I can’t remember how many nations around the world, basically taking the first step towards climate reparations, which to me is a very compelling idea. It’s just completely unfair that people in the Global South are going to suffer the most as a result of climate change, even though in many cases they’ve contributed almost nothing to the problem. Within the effective altruist community, the initial focus was global poverty, and over the past 10 years many of the leading figures have shifted towards more longtermist causes, and that means neglecting, relatively speaking, causes like global poverty and so on. Sam Bankman-Fried himself was an ardent longtermist, and he explicitly said in an article that he’s just not worried personally about things like global poverty; what really matters to him is the very long-term future of humanity, millions, billions, trillions of years into the future. That’s why he set up the FTX Future Fund, which he funneled a lot of his fortune into and which was really trying to support research on longtermist issues. But with this sort of catastrophic collapse of FTX, it’s not just that funds for longtermism have dried up, in many cases at least, but also, I think, that longtermism itself has suffered some pretty significant reputational damage. As a critic of longtermism, my hope is that one of the consequences of the FTX debacle is that attention will be refocused on issues that I consider to be much more important than thinking about the long-term future of humanity and how many digital beings there could be within our future light cone in vast computer simulations. Again, things like climate justice, climate reparations, global poverty, animal welfare. So I’m tentatively hopeful that the influence of longtermism will weaken as a result, and that consequently individuals, even within the effective altruism community, will start focusing on global poverty and these more traditional issues a bit more.


25:12 AW – I also can’t help but think that the data-driven concepts the EA movement sets forth, and the longtermist view is very data-driven, can distract a lot of charitable giving from being more effective when it becomes overly data-driven, because philanthropy is not just something that should be viewed on a spreadsheet, and you can’t just balance-sheet this away. There’s an art form to it. You have to find organizations that align with your values or passions. Maybe you can speak to that?


25:48 ET – Yeah, sure. I think the idea that we should want to maximize our impact is clearly good. Nobody wants to be giving to a charity where they’re just going to end up wasting a bunch of the money, and wanting to help as many people as much as possible, that’s good. The problem is that there is a very strong quantitative emphasis within this approach, and as a result, there are all sorts of possible effects of one’s actions that just can’t really be quantified. And if something can’t be quantified, you can’t insert it into this expected value calculation, which is what a lot of the effective altruists, as well as the longtermists, rely on heavily. That’s how they end up saying that even if there’s a tiny probability of helping a huge number of people in the future, the expected value is still much greater focusing on those far-future people than on current people. So you could imagine helping an unknown person in your community; there are all sorts of consequences that are real but not quantifiable. It might foster trust. Maybe helping somebody gives them a certain kind of hope that helps them through the next day, and that just can’t be quantified; you can’t use it as a variable in some sort of calculation. So on the one hand, I think it’s good that we should look more carefully at the best ways to do the most good in the world. But on the other hand, it’s important for that focus not to lead one to end up ignoring all of these kinds of non-quantifiable positive effects that one can have by engaging in, well, what the effective altruists would probably describe as ineffective altruism.
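(The expected value calculation described here can be sketched in the same spirit as the earlier example; the probabilities and payoffs below are hypothetical, chosen only to show how a tiny chance of helping an enormous number of future people can swamp a near-certain benefit to people alive today.)

```python
# Sketch of probability-weighted "expected value" reasoning (all numbers hypothetical).

def expected_value(p_success: float, people_helped: float) -> float:
    """Expected number of people helped: probability of success times payoff."""
    return p_success * people_helped

# Helping people alive today: near-certain to work, but the payoff is
# bounded by the current population of roughly eight billion.
ev_now = expected_value(p_success=0.99, people_helped=8e9)

# A far-future longtermist bet: astronomically unlikely to succeed, but the
# hypothetical payoff is an astronomically large future population.
ev_future = expected_value(p_success=1e-15, people_helped=1e45)

print(f"Expected value, helping people now:  {ev_now:.2e}")    # ~7.92e+09
print(f"Expected value, far-future bet:      {ev_future:.2e}")  # 1.00e+30

# The far-future bet "wins" by some twenty orders of magnitude, while anything
# non-quantifiable (trust, hope, dignity) never enters the calculation at all.
```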


27:46 AW – Well, it’s a fascinating topic and I could keep talking to you for quite some time about it, and hopefully we can have you back when your book, “Human Extinction: A History of the Science and Ethics of Annihilation,” comes out. Émile Torres, thanks so much for being my guest on Sea Change Radio.


28:02 ET – Thanks a lot for having me, this is great.


28:17 Narrator – You’ve been listening to Sea Change Radio. Our intro music is by Sanford Lewis and our outro music is by Alex Wise. Additional music by DJ Vadim and the B-52s. To read a transcript of this show, go to SeaChangeRadio.com. Stream or download the show, subscribe to our podcast on our site, or visit our archives to hear from Doris Kearns Goodwin, Gavin Newsom, Stewart Brand, and many others. And tune in to Sea Change Radio next week as we continue making connections for sustainability. For Sea Change Radio, I’m Alex Wise.

