There’s no denying that technology plays — and will continue to play — a critical role in addressing the climate crisis. But could super-intelligent AI actually solve the problem for us, as several tech billionaires claim? Or is this over-reliance on speculative technology simply a way to distract us from tackling big, real-world problems? Manjula Selvarajah sits down with astrophysicist and author Adam Becker to separate the hype from reality.
Subscribe to Solve for X: Innovations to Change the World here. And below, find a transcript of “Infinity quest.”
Adam Becker: I think that if somebody asks you a question when you’re up on stage in front of an audience and your first impulse is to say, well, I don’t want to say this. Maybe don’t say it.
Narration: That’s Adam Becker, astrophysicist, science journalist and author of the book More Everything Forever: AI Overlords, Space Empires and Silicon Valley’s Crusade to Control the Fate of Humanity. It’s about the plans tech billionaires have for the future — and why they don’t quite add up.
Adam Becker: The kind of AI that they’re talking about is obviously not ChatGPT. It’s this idea of super-intelligent AI, something more intelligent than any human or even all humans combined. This is both ill-defined and not something that anybody knows how to build.
Narration: Adam’s been keeping tabs on the tech scene in Silicon Valley, and to illustrate the kind of messaging he’s seeing, he’s just shown us a clip from 2023, with Sam Altman, CEO of OpenAI, and the co-founder and former chief scientist Ilya Sutskever.
It’s a bit hard to present video in the audio realm of podcasting, but let me break it down for you.
In it, you see the two men on stage at a conference in Israel, and they’ve just been asked about solving a problem as complex as the climate crisis and what they see as the potential for AI.
Sam Altman: I hate, I don’t want to say this because it … climate change is so serious and so hard of a problem. But I think once we have a really powerful super intelligence, addressing climate change will not be particularly difficult for a system like that.
Ilya Sutskever: Yeah. We can even explain how — here’s how we solve climate change.
Narration: Altman and Sutskever describe a vision of AI where you could simply ask it to tell you how to make a lot of clean energy, how to efficiently capture carbon and then how to build a factory at a planetary scale to do all that.
Sam Altman: If you can do that, you can do a lot of other things too.
Ilya Sutskever: Yeah. With one addition: not only do you ask it to tell you, you ask it to do it.
Narration: For Becker, this is an example of the rhetoric he finds especially dangerous.
Adam Becker: The idea of super-intelligent AI is a fairytale, something that comes from science fiction, not science. There are very good scientific reasons to think it’s ill-defined and not something that we’re anywhere near doing, even if you do come up with a good definition for it. Similarly, a lot of these tech billionaires talk about other technologies that they think are coming very soon, like colonizing space.
Elon Musk talks about this a great deal, and so does Jeff Bezos. Musk has said that he wants to put a million people on Mars by 2050 to serve as a backup for humanity in the event of a disaster here on Earth. A disaster that, by the way, he never says is climate change. He talks about asteroids or nuclear war, and what he seems to miss is that Mars is such a terrible place that you could have both an asteroid impact and a nuclear war here on Earth, and Earth would still be a better place for humanity. And I am not trying to be a downer. I’m saying that we live in a really great place and we should do what we can to preserve it.
Narration: I’m your host, Manjula Selvarajah. This episode was recorded live at MaRS Climate Impact in Toronto last December, where I sat down with Adam Becker.
In our conversation, we explore the tension between tech ambition and realistic climate action, and ask whether the stories tech billionaires are selling us are getting in the way of solving big, real-world problems.
Plus, we have a question from guest Marcius Extavour, technologist and former chief scientist at XPRIZE.
You’ll want to hear it.
Manjula Selvarajah: You know, you are not anti-tech. No.
Adam Becker: No.
Manjula Selvarajah: So what do you think the role is for tech?
Adam Becker: I think that technology is part of the solution to almost any problem that humanity faces, certainly a huge part of the solution to the climate crisis. For a long time, technology was one of the blockers, right?
We needed cheap green energy. Now we have that. I guess my take is that technology’s great, but technology is never the entire solution. Right. You know, certainly there are more green technologies that it would be nice to have that we don’t have. But I would say — and I think I’m not alone in this — that at this point, the primary block to humanity handling the climate crisis in a serious way is no longer technology, but politics.
Manjula Selvarajah: So you believe the ideas already exist.
Adam Becker: I think that most of the ideas that we need, in terms of technology, are already out there. I mean, it seems like we need better carbon capture, but you know, we really need to be deploying a lot more green infrastructure than we are in many places around the world.
And that’s a question of collective action, not a question of technology.
Manjula Selvarajah: Now you define some of these ideas as fantasy, and there are things that happen when we pursue fantasy. We’re going to get to that in a minute.
Adam Becker: Yeah.
Manjula Selvarajah: But, in your book, you describe some of these ambitions of the tech billionaires as dangerous and implausible. Let’s talk through some of them. Why — you’ve talked about Mars — why is colonizing Mars not an option?
Adam Becker: So this is a weird question to be talking about here at MaRS.
Manjula Selvarajah: Yes, yes. We mean the other Mars.
Adam Becker: Yes, the other Mars, the big one. This is something I talk about at length in my book, and so I’m tempted to give another flip answer, and say I could write a whole book about that, but instead I’ll give a shorter answer and say, there are many, many, many reasons, like I just alluded to.
Mars is a really awful place. And there are a lot of reasons we can’t live there. The gravity is too low, the radiation is too high, there’s no air and the dirt is made of poison. And there’s no good way to solve all of those problems. Elon Musk keeps talking about terraforming, and the closest he’s come to giving a real plan for making Mars more Earth-like is to set off a lot of nuclear weapons over both poles of Mars to, you know, release all of the gases in the Martian polar ice caps.
That won’t work. I mean, even if it did work and released all those gases, there still wouldn’t be enough atmospheric pressure. There would still not be any oxygen. You still couldn’t have plants there. For all of these reasons, and many more besides, Mars is just a terrible place. Also, you know, he talks about having Mars as a backup for humanity, and that’s why Musk says he wants to put a million people there by 2050. And putting aside the number of rocket launches that would require, and the fraction of those rocket launches that would fail, and thus the number of people who would just die for no particular reason, a million people is not enough. If you want to have a completely independent high-tech civilization, you need a larger, you know, economic and workforce base than a million people. The best estimate is more like half a billion or a billion people.
Manjula Selvarajah: Because you have these complicated systems, and someone needs to maintain and run them, and they’d be far more complicated systems than we have here.
Adam Becker: Yeah. Or at least as complicated as the ones we have here. And I think that this points to a larger issue, which is that these tech billionaires think that they’re the smartest people in the world because they’re the richest people in the world, and therefore they think that they understand everything, and they don’t. And so that means that they consistently oversimplify absolutely everything. They just think that the entire world is simpler than it actually is.
Manjula Selvarajah: So let me move on to another idea here: the singularity. And I’ll have to get you to define what the singularity is, too. But why is the idea of the singularity so far-fetched?
Adam Becker: The singularity is this idea that there’s a point coming soon in the progression of technology where technological change will happen so quickly, mostly led by AI in some sort of self-reinforcing, improving cycle, with AI making itself smarter and smarter and smarter, that it will fundamentally transform human civilization into something possibly not even human, and will lead to technology with God-like powers of creation and destruction and transformation.
If that all sounds kind of wishy-washy, that’s because the singularity, like super-intelligent AI, is another one of these ideas that has a great deal of currency among the leaders of Silicon Valley but is nonetheless pretty poorly defined. So given that it’s poorly defined, it’s a little bit hard to pin down.
But why is it not plausible? There are a lot of reasons.
One of them is that it rests on this sort of naive tendency to take an existing trend and try to plot it out into the future. But of course, what do you mean by technological progress? The classic example that they give is Moore’s Law, which, yeah, is an exponential trend in how many transistors we can cram onto one chip.
And Ray Kurzweil, who’s talked a lot about the singularity and has maybe been its most prominent and loudest evangelist, thinks that Moore’s Law is some sort of law of nature. But it’s not a law of nature. In fact, there are limits to how many transistors you can cram onto a single chip, and we’ve reached those limits.
Moore’s Law is, depending on who you ask, either already over or about to be over, because it turns out that silicon chips are made of silicon. And silicon is made of silicon atoms, and those have a particular size, and you can’t make a transistor smaller than that. But more fundamentally, I think Kurzweil and these other people forget that something like Moore’s Law is not a law of nature. It’s a decision.
It was a business decision made by the leaders of the semiconductor industry. So even more generally than that, I think the proponents of this idea of the singularity, they see these exponential trends and think, “Oh, we’re just going to follow those out into the future.” But the one thing that we know about exponential trends is that they end. They always end — especially when you’re in a closed system like the Earth.
Manjula Selvarajah: There’s also kind of some naivety about the understanding of human intelligence.
Adam Becker: Yeah.
Manjula Selvarajah: I mean, even brain scientists will tell you that they don’t even fully understand it.
Adam Becker: Yeah. No, that’s exactly right. I mean, there is very much this idea that the brain is just like a computer, and so there’s this sort of singularity-adjacent slogan that you’ll hear in the AI industry from people like Ilya Sutskever and others that scale is all you need — that if you just take current AI systems and make them bigger and more powerful and give them larger training sets, then you will reach something as intelligent as a human and then even more intelligent. You know, this is wrong for all sorts of reasons, but one of the reasons is that the human brain is not a computer.
The computational analogy to the human brain is just that — an analogy. And it’s just the latest in a series of analogies that people have had for the human brain. Over the last few centuries, there’s been a tendency, understandable and reasonable, to compare the human brain — the most complicated thing we know of in nature — to the most complicated machines that we have at a particular time.
So now it’s a computer. Fifty years ago it was, you know, a telephone network; 50 years or 75 years before that it was a hydraulic system — an analogy that Freud made use of in his psychoanalytic theories. And before that it was like a clock and stuff like that. And you know, the brain is more like a computer than it is like a clock, but it’s not a computer.
Manjula Selvarajah: You know, these ideas are supported by some smart people you’re talking about: Sam Altman, Elon Musk, Jeff Bezos. And you know, they probably know the science, I would imagine they understand the limitations. Why is it then that they are throwing their energies, attention and a significant amount of money into these visions of the future?
Adam Becker: There’s a quote, I don’t remember who said it, but it’s old, like a hundred years old: it’s very difficult to get someone to understand something if their paycheque depends on them not understanding it.
You know, I think these guys have something even more than a paycheque at stake. These ideas provide them with a sense of purpose and a sense of, like, moral absolution, because if you’re trying to save humanity, then anybody who gets in your way is a foe of humanity, not just a foe of, you know, your particular half-baked ideas.
And I’m also not convinced that they really know this stuff. You know, not to be too dismissive of autodidacts or people who have alternative educations, but I think it’s only in Silicon Valley that you could get someone who’s a college dropout, like Sam Altman, being seen as a super genius.
And I’m just going to punch up here unabashedly. What is it exactly that Sam Altman has done? Well, he dropped out of college, had a failed startup, and then was handed control of, you know, the most famous and successful startup incubator, Y Combinator, and then took over OpenAI at a point where they were already well on the way to creating ChatGPT — a product that, I would say, there were at least good questions about whether it ever should have been released. And for all of this, we call him a genius.
And maybe this doesn’t quite answer your question, but like Jeff Bezos is a true believer, right? We know he believes, like, for real, that humanity is going to space because he’s been saying it since he was in high school, way before, you know, he had all of this money.
I think that when you have all of this money and all of this power, it makes it very hard to hear anybody telling you that you’re wrong. And it makes it very easy to believe that you’re an expert in everything. These guys have some kind of expertise. But it’s not the right kind of expertise to actually evaluate the ideas that they seem to believe in.
Manjula Selvarajah: It’s interesting that Silicon Valley is known as, or presents itself as, a place that believes in contrarian thinking. Right? And it’s interesting because we are not just talking about these three people, or the five names we’ve mentioned, having these ideas; there is a huge following.
Adam Becker: Absolutely. Yeah.
Manjula Selvarajah: So when it comes to addressing climate change specifically, what do we lose in the chase of what you’ve described as fantasies?
Adam Becker: There’s this idea that these fantasies are a get-out-of-jail-free card. Jeff Bezos doesn’t want to think about the carbon footprint of Amazon’s shipping network or the rockets at Blue Origin, his space company.
And I know I keep picking on the same few people, but that’s just because I’m on stage and I’m only thinking of a few people. But these fantasies provide a way of ignoring real problems and the real solutions to those problems by instead saying, you know, as Altman did, “Oh, we have a solution to the climate crisis. It’s to keep doing exactly what we’re doing, even though that has a huge carbon footprint.” Or saying, as Musk does, “Oh, Earth is doomed. And so we have to go to Mars, which is better.” Or, you know, Bill Gates says, “Oh, the climate crisis isn’t that big a deal. We need to pay more attention to things like AI.”
There’s nothing wrong with dreaming an impossible dream. The problem comes when you have so much power and money that you force that impossible dream on the rest of us, heedless of all evidence to the contrary. And that leads to not just the opportunity cost of putting collective efforts toward something that’s not going to happen, instead of fixing real problems with real solutions, but it also takes those real problems and exacerbates them.
What does the pursuit of, you know, space colonization or super-intelligent AI cost us? Well, those are really carbon-intensive pursuits that, you know, have already led to coal plants restarting and unpermitted natural gas plants, and they also lead to further concentration of wealth and power. And of course, those people think that this is all great. I think that not only is this bad for all of the rest of us, but it’s also bad for them, because what good is a hundred billion dollars if civilization collapses?
Manjula Selvarajah: Do you think these ideas — I mean, they capture so much attention in media and other places, right — give the rest of us a false sense of comfort?
Adam Becker: Yes.
Manjula Selvarajah: Perhaps stops us from taking the actions that we need to.
Adam Becker: Yeah. I mean, to get another dig in at these billionaires: they’re also deeply unoriginal, right? It’s not like they came up with these ideas. The idea of humanity’s future being endless colonization of space (and I’m not even going to get into the whole use of that word, colonization, and the problems there, which is a whole thing; I do talk about that in the book) is an old idea that’s been around since at least mid-20th-century science fiction, if not earlier.
Manjula Selvarajah: So many of these ideas are from science fiction. Even the idea of the brain-computer interface. I think brain-computer interfaces are interesting. The idea of using them for getting to live kind of outside of your body.
Adam Becker: Yeah.
Manjula Selvarajah: So most of these are kind of rooted in science fiction.
Adam Becker: Exactly. And so it gives us a sense of knowing what the future holds.
Oh, we are in Toronto. So I’m going to mention one of the great pieces of Toronto culture: Scott Pilgrim. At one point, somebody asks Scott, “Have you thought about the future?” Meaning, like, his own personal future. And he says, “The future? You mean like with jet packs?” There’s this very durable idea of a jet-pack, rocket-ship, robot future that mostly comes from mid-20th-century science fiction that we all sort of have in mind as, oh, that’s either what the future is or what a good future would look like. And it’s not true.
And in a lot of ways, that’s never what science fiction was even trying to depict. Science fiction is a literary genre, and it’s often used to examine questions about humanity and about how we are here and now. And if you don’t believe me about that, go watch some Star Trek and then tell me that it’s actually about space and not about, you know, problems here and now in like really heavy-handed metaphors.
And I say this as a huge Star Trek fan, but I think that there is this false hope. It’s funny, I’m not going to call out the particular podcast that did this, but I’ve been doing a lot of podcasts lately, and there was this one podcast where they had started out by going and doing, like, a bunch of, you know, person-on-the-street interviews, asking people, like, would you want to move to Mars?
And they got a lot of yeses, like an awful lot of people saying, “Yes, I’d be happy to go to Mars. That sounds like a fun adventure and, you know, the next step for humanity.” And then they came to me and said, “How could you crush all of these people’s dreams?” Which is a hell of a guilt trip.
Manjula Selvarajah: You’re like, “Happy to.”
Adam Becker: Yeah, exactly. But yeah, no, I’m happy to do it. I think it’s important to point out where people have false hope. Right. This idea that, “Oh well, if we get things wrong here, we can go somewhere else.” It’s not an option. And I think that if I had been able to do those interviews myself with the people on the street, I would’ve asked them, “OK, so how would you feel about moving to the South Pole in the Antarctic night for, you know, a full season, like, you know, seven-ish months?” Because they are always looking for people to do that, and they often have trouble finding people to do it. And it is by every single measure much, much, much, much, much easier to live there for that period of time than it is to live on Mars.
You can’t even get to Mars in that period of time. And you know, there’s lots of other places like that. The top of Mount Everest, the bottom of the ocean.
Manjula Selvarajah: It’s almost like we buy into the pitch, but we don’t really understand the reality of it. And therefore it’s easy to just go on and live our lives the way that we do.
Adam Becker: Yeah, exactly. Yeah.
Manjula Selvarajah: So I was watching that clip that you showed with Sam Altman and Ilya Sutskever, who, you know, is from the University of Toronto. Actually, he received an honorary degree and spoke at the University of Toronto this year. You know, they talk about the super intelligence that is going to come up with a solution for climate change. They present it as a saviour. Meanwhile, if you talk to most of the people in this room, pretty much everyone will say they understand the carbon footprint of AI and the impact it’s going to have on the environment. But I really don’t see it as either a saviour or an evil; it’s not black and white. Where do you think the possibilities are for AI to genuinely help without tipping into fantasy?
Adam Becker: So I think that, and I hate to sound like a broken record, but I really think that one of the problems here is a question of definitions. AI is a term that doesn’t mean what it used to, right?
It’s come to be a very, very broad term. If you went back in time 30 years and talked to me when I was a kid and said, “Hey, I’ve got this little device here and it’s got an AI on it, and I can talk to that AI,” I would be very disappointed with the reality of it, because 30 years ago, AI meant something like Commander Data from Star Trek, right? Or HAL from 2001. That’s not what we have.
Manjula Selvarajah: Not the best example.
Adam Becker: No, not the best example, but still it’s not what we have. Now we use AI to cover a huge swath of technologies. You know, everything from large language models to what we used to call machine learning.
And so I guess my cop-out answer to your very good question is: it depends on what you mean by AI. Like, are there places for deep-learning systems to identify patterns that would be hard for humans to spot without that kind of assistance, patterns that could be used to find greater energy efficiencies in various industries?
Sure. Absolutely. Is a souped-up LLM going to save the world? No.
Manjula Selvarajah: Now you said that this is the one decade that we can’t afford to waste. When it comes to climate action, what is the most important shift in mindset that you’d want innovators, investors and policy-makers to make right now?
Adam Becker: Right now, and maybe it’s because it’s so early in the morning for me, and I’m feeling a little cranky.
Manjula Selvarajah: Really? Because that’s what your book sounded like.
Adam Becker: Well, I wrote a lot of that early in the morning, too. I tried to make the book fun.
Manjula Selvarajah: Yes. No, no, it was fine. I’m picking on you.
Adam Becker: Yes, of course. I’d like to think it’s fun, but I’m biased. I think that what I’d like to get across is that the leaders of the, you know, American tech industry, the leaders of what we broadly call Silicon Valley or Big Tech, you know, the tech oligarchs are not your friends and you can’t trust them, and you have to fight them explicitly.
And it would be nice if we could not do that and just like either work with them or, you know, at least try to ignore them. But I really think that at the very least, the events of the last year have made it very clear that’s not an option. And if we’re going to actually get real climate action, and address the climate crisis in the way that we need to, then we actually have to fight them.
Manjula Selvarajah: Jokes about you being cranky aside. It’s a fabulous book. It’s actually, I would say, if not my top read, it’s one of my top reads of the year. It’s fantastic.
Adam Becker: Thank you.
Manjula Selvarajah: It is an intense read. You’ve done a lot of research here. There is this one line that you have here where you quote writing by Carl Sagan. And I’ll just say something about Carl Sagan — I know you’re a huge fan of his. Carl Sagan was at my graduation. I’m aging myself here. Queen’s University. Any Queen’s University fans here? Woo-hoo. Believe it or not, he got an honorary degree. I don’t think you got to meet Carl Sagan. Let me tell you, my parents and my brothers were more excited that he was there than that I was graduating. There are actually more photos of him than of me after I’d gone through this really difficult degree.
Adam Becker: That’s awesome.
Manjula Selvarajah: So that aside, I wanted you to read this part of the book.
Adam Becker: “Our planet is a lonely speck in the great enveloping cosmic dark. In our obscurity, in all this vastness, there is no hint that help will come from elsewhere to save us from ourselves. The Earth is the only world known so far to harbour life. There is nowhere else, at least in the near future, to which our species could migrate. Visit, yes. Settle? Not yet. Like it or not, for the moment, the Earth is where we make our stand.”
Narration: That was my first conversation with Adam Becker on the stage at MaRS Climate Impact.
A few weeks later, right before the new year, my producer Ellie and I jumped on a video call with Adam to clear up a few things and hear what’s changed since he wrote the book.
Manjula Selvarajah: Adam. Hello.
Adam Becker: Hi.
Manjula Selvarajah: It’s good to chat again.
Adam Becker: Yeah, it’s good to see you again. This is fun.
Manjula Selvarajah: How’s the book tour going?
Adam Becker: Well, I’m home finally for a while.
I’m not gonna hit the road again for a little bit, which is nice, because I was travelling. I mean, when I last saw you, that was actually my last stop for a while.
Manjula Selvarajah: Your trip to Toronto.
Adam Becker: Yeah.
Manjula Selvarajah: So, is the reaction changing? Like, is it a lot more hate mail, a little hate mail? Like, where are things standing now?
Adam Becker: I don’t actually get that much hate mail, which I’m pretty happy about.
Manjula Selvarajah: Well, let’s change that today.
Adam Becker: Yeah, exactly. Yeah. No, that’s what Canada’s known for, right? Hate mail?
Manjula Selvarajah: But we apologize after. How dare you. Of course, we apologize after.
So we have a question for you from Marcius Extavour. He’s a scientist and technologist who uses AI to uncover patterns in environmental data. He’s a strategic partner at ODE and former chief scientist at the XPRIZE Foundation. OK, so here’s the question.
Marcius Extavour: Hi, this is Marcius Extavour. My question is whether it’s valuable or even possible to separate critiques of some of these tech billionaires from critiques of the individuals actually doing the work every day — because it’s not the billionaires doing that work — and of the types of problems and technologies they’re developing. I think a lot of these individuals, scientists, engineers, programmers, designers, product developers and skilled professionals like this, would be surprised to hear that they are spending their careers working on heartless, baseless and foolish obsessions, as you’ve mentioned.
So how should we think about these folks who really do think they are trying to use technology to reduce poverty, to fight climate change, to understand space and space exploration and, yes, to build artificial intelligence? How should we think about those individuals, and how do you think about them in light of the critiques you’ve made about the tech bosses? Thank you.
Adam Becker: That’s a great question. Look, there’s a tendency to think that the future of technology is on rails, that we know what the future holds, that, you know, there’s just some technology that is next. You’ll hear a lot of the tech billionaires talk this way.
There is this idea that, well, you know, this is just where technology is going. That’s talking about technology as a vast impersonal force, when the fact is that we get to make decisions about where technology goes and what kinds of technologies are developed. And that’s something that the leaders of the tech industry have taken upon themselves and have decided that they’re going to do without, you know, input from the rest of us, or really much regard for what would be good for the world. Now, that’s not to say that these companies don’t do valuable things sometimes, right? It’s good to send up satellites into space to explore space. And “space exploration” was a particularly interesting choice of words in that question, you know, partly because I’m an astrophysicist by training, but also because most of the satellites that SpaceX sends up are not about exploring space.
They’re about building a giant, unsustainable telecommunications constellation that could very well lead to a kind of ecological disaster in low Earth orbit called Kessler syndrome. And you can build something that is in and of itself good, but if you build it as part of a larger project at a company that is devoted to social ills, that company can take what you’ve built and twist it into a form beyond what you had in mind.
So, I think that actually the rank-and-file workers at these tech companies need to take a hard look at what they’re doing, and I would really like to see them unionize, because I do know that a lot of them don’t hold the political views of the tech billionaires that they work for.
And it would be nice to see them remind their bosses that they — the bosses — would be nowhere without the hard work and efforts of, you know, workers at these companies, and the best way to do that is to unionize.
Manjula Selvarajah: I do think it can be really hard once you’re someone who’s in that system to kind of walk away from your ownership or walk away from a pretty highly compensated role that you may have to take on these other things. You know what I mean, Adam? I’m just being realistic.
Adam Becker: Totally, and yeah, it is always going to be harder to act against your own, like personal self-interest, but the fact is that they can afford it.
And so that’s, to me, like, the case for, you know…
Manjula Selvarajah: The imperative perhaps.
Adam Becker: Well not just the imperative that they have, but the imperative that the rest of us have to regulate them. You know, I live here in the U.S. And here, they have gone a long way toward just capturing the entire government apparatus of the country. Now they have a lot of political power. They also have power to shape the discourse, and some of that power comes from their ownership of social media platforms and increasingly news platforms.
Manjula Selvarajah: So they have a handle on the places where we have discourse.
Adam Becker: Right, exactly. And also what it is that we talk about. If we want to have a healthy democracy, we need to talk about, like what the future is going to be like and where we’re going to try to take our country and our civilization and our species. And they are injecting huge amounts of impossible bullshit into that conversation. And that makes it difficult to talk about the real stuff that actually matters.
We need to be very clear about the lies that they’re telling and about the things that they’re saying that are not true, and I think going after their visions of the future is an important part of draining their power.
Manjula Selvarajah: Now you spoke about this deeper embrace between these tech bosses and the current U.S. administration.
Adam Becker: Yes.
Manjula Selvarajah: What concerns you the most? Because it’s also shifted since you wrote the book.
Adam Becker: Yeah, that’s right. Because I wrote the book before the election, I felt like I was writing a warning. And now it feels like I’m, you know, just describing what’s already come to pass in a lot of ways, and warning about where they want to take it.
So what’s the most disturbing thing about it? To me, probably the flagrant contempt for democracy and democratic accountability. They lined up behind Trump because they wanted the government to leave them alone, and they saw Trump as a vehicle to just buy their way out of any possibility of regulation or democratic accountability for their actions.
That’s a recipe for, you know, a serious power struggle between the ultra wealthy and the rest of us. And, you know, that’s not something…. There are, I think, people out there who really like that idea, who are like, yeah, we have got to go after these guys. I’m like, we do have to go after them, but I don’t like that.
You know, that’s not the direction I wanted us to go in. I wanted us all to work together to try to make the world a better place. And now in order to do that, we’re going to have to go after these guys and try to wrench their power away from them because we don’t have another option.
Manjula Selvarajah: Now you and I didn’t have the time to get into this on stage. You were talking about super intelligence. You said there are very good scientific reasons to think that it is ill-defined and not something that we’re anywhere near doing.
Adam Becker: Yeah.
Manjula Selvarajah: But what are those reasons?
Adam Becker: Well, the idea of intelligence is, frankly, hopelessly ill-defined. The history of intelligence testing is tied up with eugenics and various kinds of pseudoscience that were popular around the turn of the 20th century. And if you look at the definition of intelligence that’s used by some AI researchers, like there was this famous paper called “Sparks of AGI” a couple years back, talking about behaviour that they were seeing in an early version of GPT-4.
I hesitate to use the word behaviour for a large language model. In any event, the definition of general intelligence used in that paper came straight out of eugenics, and that paper is from like two years ago.
The other thing is that, fundamentally, the definition of AGI, insofar as it has a definition, is like those robots from science fiction. It’s not going to happen.
You know, the abilities of the human brain and of humans are still way, way beyond anything we can get computers to do. If you want to see a good example of that, think about the amount of text that just GPT-2, not a particularly good large language model, one of the early ones, had to ingest. Just that model had to ingest many, many thousands, maybe even millions, of times more text than any human could read in a single lifetime. And that was to do a pretty bad job of producing text.
Compare that to a human baby. A human baby encounters large amounts of language, but nothing close to the scale that GPT-2 ingested, much less GPT-3 or 4 or 5, and yet babies go from not being able to speak any particular language to sometimes being fluent in more than one language over the course of just a few years. And so that’s already an indication that, you know, we are not actually particularly close. We are reaching the limits of GPU architecture. We are reaching the limits of LLM architecture. So what’s needed is, like, a fundamentally different approach.
Now, of course, it is possible that tomorrow someone will develop a fundamentally different approach that suddenly gets us to the doorstep of this. But I wouldn’t bet on that because it seems like the kind of architecture of these computers as a whole, like modern computers, is just too wildly different from what happens within the human brain and the human body.
You know, when you think about the breakthrough that led to LLMs, right? The transformer architecture. The main thing there was making it easier to ingest large amounts of data quickly. That’s what the breakthrough was. And so what this led to was something that was better at predicting the next word or pixel in a sequence. But that is not enough. Just predicting the next thing in a sequence like that is simply not enough to do what humans do, because it lacks so much context about the world around us. And it’s not enough to say, “OK, well then we’re just going to put it in a robot body and hook it up to cameras and stuff,” because you need it to sort of grow in concert with its interactions with the world. Otherwise, you’re not going to end up with something that has any ability to really understand what the world around it is.
This is not something that we’re anywhere close to having. And if you take a look at, say, Rodney Brooks, an AI scientist, cognitive scientist and professor emeritus at MIT, he has said, and he’s right if you just look at the history, that the history of AI is AI booms followed by AI winters. And he thinks, and he’s not alone in this, that we are on the verge of another AI winter, if not a full-blown tech winter. And he said it’s going to be cold.
Manjula Selvarajah: So where do we go from here?
Adam Becker: Yeah. Well, I keep thinking about the AI bubble.
At this point, basically, the only real justification for the valuations of the companies investing in AI and creating AI would be if these myths about a super-intelligent, godlike AI were true. And a bubble does less harm the earlier it pops.
And so, like, if the AI bubble popped right now, that would cause, you know, some real economic consequences that would be very unpleasant for a lot of people, possibly including me. But it’s only going to get worse. So where I’m hoping we go from here is that more and more people realize that it’s a bubble, and we’re starting to see signs of that. We’re also starting to see signs of, like, a public backlash against AI slop. We’re seeing signs of a real public backlash against billionaires. Uh, massive…
Manjula Selvarajah: And data centres…
Adam Becker: Yeah, data centres. Massive popular support for increasing taxes on the ultra wealthy here in the U.S. Even support for limiting, like, capping the amount of wealth that one person can amass, which is something I talk about in my book.
So it’s very nice to see that bearing out in the polling data. But the silver lining to the horrible political news in this country is people are waking up to the fact that, you know, the ultra wealthy do not have their best interest in mind and are warping reality around them to suit their needs through the power of their concentrated wealth.
So where do we go from here? I think we’ve got to take some power back from these people in the form of organized labour, in the form of organizing to win elections, even in the face of like fascist headwinds here in the U.S. And I think we have got to regulate the tech industry and I, and I’m seeing like a popular groundswell of support for all of these things.
Manjula Selvarajah: Lots to watch for. Adam, thank you. We should let you get back to the rest of your day and your book tour. Thank you so much.
Adam Becker: Thank you. This is fun. Thanks for having me.
Solve for X is brought to you by MaRS. This episode was produced by Ellen Payne Smith. Lara Torvi, Sana Maqbool and Sarah Liss are the associate producers. Mack Swain composed the theme song and all the music in this episode. Gab Harpelle is our mix engineer. Jason McBride is our senior editor. Kathryn Hayward is our executive producer. I’m your host, Manjula Selvarajah. Rate, subscribe or leave a review. We read them all.