Masters of Disruption: How the Gamer Generation Built the Future
In part four of our interview, John Carmack reveals his vision for the AI future: the Universal Remote Employee.
This post is part of a longform project I’m serializing exclusively in my newsletter, Disruptor. It’s a follow-up to my first book, Masters of Doom: How Two Guys Built an Empire and Transformed Pop Culture, and it’s called Masters of Disruption: How the Gamer Generation Built the Future. To follow along, please subscribe to Disruptor and spread the word. Thanks!
This interview has been edited and condensed for clarity. For the first part, click here.
David Kushner: Elon Musk said ‘With artificial intelligence we are summoning the demon.’ How worried are you about what AI might unleash?
John Carmack: I was just down at Starbase a couple of days ago. I was getting a tour from Elon of everything down there, and I tried to bring him out a little bit on AI. He's in a tough position because Tesla is actually one of the leading AI companies in the world. And yeah, it was clear he didn't really want to have a big conversation about it. I do not share the same fears in the same way. Like the fast takeoff fear - the idea that we will slip over a line, something will become super intelligent, and we will have a Singularity, like a short, bounded, tight Singularity - in many ways that really is nonsense. If nothing else, this idea that a computer is going to go and take over the world, even all the computers of the world - it just can't. Opening TCP connections: it doesn't matter how smart you are, there's a fundamental limit to how fast you can even connect to all the computers in the world, let alone subvert them and hack them in various ways. And just because something is smarter doesn't necessarily make it exponentially self-improving, even if it achieves superhuman performance. There are reasons to believe that we may inch over the line of human intelligence and find that we're already on the decreasing-returns part of the Sigmoid [curve] for what we can get out of it. I don't think that's the case. I was talking with Elon about how it's plausible that we could achieve superhuman AGI, and the world might not look that different even a couple of decades after that.
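To make that connection-rate limit concrete, here is a minimal back-of-envelope sketch in Python. The device count and per-host connection rates are illustrative assumptions of mine, not figures from the interview.

```python
# Back-of-envelope on Carmack's point: no matter how smart an AI is, the raw
# mechanics of reaching every computer on the internet take wall-clock time.
# Every number here is an illustrative assumption, not a figure from the interview.

DEVICES = 20e9  # assumed count of internet-connected devices, order of magnitude

# Assumed new-connection rates for a single attacking host:
for label, conns_per_sec in [("sustained, realistic", 1e4),
                             ("very aggressive", 1e5),
                             ("implausibly fast", 1e6)]:
    days = DEVICES / conns_per_sec / 86_400
    print(f"{label:>20}: {days:8,.1f} days just to open one TCP handshake each")
```

Even at an implausible million connections per second, the handshakes alone take days from a single vantage point, and actually subverting hosts is orders of magnitude slower still.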
“My goal and vision is a Universal Remote Employee. It writes text messages. It logs on to Slack chats. It does the job that a human would do.”
David Kushner: When do you think we’ll achieve superhuman artificial general intelligence? And what will it be like?
John Carmack: I think there’s a fifty-fifty chance that we will have a line of sight to AGI, or a sign of life, in 2030. What that might look like would be something like a toddler with learning disabilities. If it was something that you could interact with, you could say, ‘okay, this is recognizably a human-class intelligence, even if limited in various ways.’ On the one hand, it is a little bit disconcerting to some people to realize that the difference between a McDonald's fry cook and the CTO of a top company is not that much in terms of raw intelligence. It is a small set of differences that separates baseline human, or sub-baseline human, from peak human capabilities. If we get to the point where we have something that interacts like a toddler, everybody will probably agree that we've got our line of sight. It's just a matter of scaling and learning. There might still be many years or a decade of effort necessary.
Right now, we look at big things like: how does super-intelligence affect energy and agriculture, some of the main things that drive the world? We already have crops and processes designed by super intelligence, relative to the rest of the world, and they're still not globally distributed. I see no reason why, if we get even better genetic crops, they will all of a sudden be everywhere, revolutionizing everything. And in the energy sector, I think it is extremely unlikely that we will, all of a sudden, magically have a formula that turns water into energy. There might be advances in plasma containment that help with fusion reactors, but it still takes a decade to build the plant to go ahead and test things like that. It's possible that media production might get revolutionized very rapidly. Even there, though, there are going to be ideas that germinate and then take three years to bring to production, even if AIs are magically replicating and doing all of the work.
My particular vision, the angle I'm coming from and the goal I'm aiming at, is a Universal Remote Employee. The pandemic has put a fine point on the fact that more things can be done remotely, over Zoom meetings and such, than we used to assume needed to be done physically present. I think there is a path to building something where you have an AI that interacts with computers the same way humans do, the same modality. It has a synthesized avatar that you look at, that talks with you in a synthesized voice. It writes text messages and logs on to Slack chats and whatnot, and basically does the job that a human interacting through a laptop or something would do.
“AI can do the behavioral equivalent of a deepfake; you just need to give it enough data.”
David Kushner: What about uploading consciousness?
John Carmack: With uploading consciousness, brain scanning still needs multiple additional scientific discoveries to be able to do that. The best case for this is probably laser ablation of brains, scanning everything as it goes. We sort of know the direction we need to go to get there. It's not physically impossible. But I think we're actually far more likely to get the equivalent of human uploading by just having AI. Machine learning has shown that AI is very good at mimicry. If you can give it enough data sets, it can wind up doing a very good job of making a generative model for something like that. I think we will get to the point where AI can basically do the behavioral equivalent of a deepfake; you just need to give it enough data. Like, if people are life-logging everything they do, you will be able to have an AI that just extends their life. This is a general intelligence that is trained to mimic the particular lifepath of someone. And, yeah, I think it will be quite common that you wind up skinning your AGI to look and behave very much like particular people, or designing your optimal view of what you wish they were like. It’s Westworld-like, where you dial in the behavioral traits and the different things like that. I think that's very much where it's going to wind up being.
David Kushner: You’re 51 now. Will this happen within your lifetime?
John Carmack: I think it's almost certain that I will live to see an AGI-infused world - I wouldn't say AGI-dominated, but AGI-infused. First of all, I'm in great health and my grandmother lived to 102. I think I'm going to be around and still kicking in my nineties. It’s physically possible with the computers of today, the large-scale ones: we could write an AGI today. We just haven't, because we don't know how yet. But with the biggest computers - Facebook, Microsoft, Google data-center-sized computers - that's enough processing power to run an AGI right now. I might be off by an order of magnitude, potentially either way. I have reason to be hopeful that you don't need tens of thousands of GPUs. You might only need a hundred, given certain conditions. It's at the scope right now where companies and nation states could theoretically be fielding an AGI once we figure out how to do it. We will see a factor of a thousand price drop in that as it goes forward. Even if the estimates you hear from NVIDIA, Intel, and the startups about how much computing power we're going to get turn out to be optimistic - even if they're wrong and off by an order of magnitude or two - 30 years from now, every grad student will have access to that level of processing power.
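To put rough numbers on the data-center claim, here is a minimal sketch comparing cluster throughput against commonly cited, and heavily contested, estimates of brain-equivalent compute. The GPU figure is a public spec-sheet value; nothing here comes from the interview itself.

```python
# Orders of magnitude behind "a data-center-sized computer could run an AGI today".
# The GPU figure is the public dense BF16 spec for an NVIDIA A100; the
# brain-equivalent range is a commonly cited, and contested, literature estimate.

A100_FLOPS = 312e12                  # NVIDIA A100 dense BF16 throughput, FLOP/s
BRAIN_LOW, BRAIN_HIGH = 1e15, 1e18   # rough range of brain-compute estimates, FLOP/s

for gpus in (100, 10_000):           # Carmack's hopeful case vs. a big cluster
    cluster = gpus * A100_FLOPS
    print(f"{gpus:>6} GPUs ≈ {cluster:.1e} FLOP/s: "
          f"{cluster / BRAIN_LOW:,.0f}x the low estimate, "
          f"{cluster / BRAIN_HIGH:.2f}x the high one")
```

A hundred GPUs clears the low estimate and a big cluster clears even the high one, which is consistent with Carmack's "off by an order of magnitude, potentially either way."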
And to again put bounds on the scope of this problem: the DNA that codes for the brain is only about 40 megabytes of code. That's a small amount. It's less than the games that we’ve written. We don't know what it needs to be right now. But we also know it's not this intricate fractal crystal of complexity where every bit matters, because you can shotgun pieces of that DNA and still get a functional human being and a functional intelligence out of it. There's a ton of redundancy, a ton of compressibility. It's not that big of a problem. We don't know what it is yet, but when we get to a point where there are 10,000 graduate students trying every random idea that comes to their minds, standing on the shoulders of all the giants before them, I am 98% certain that we will have fully capable general intelligence within 30 years. And, again, I think we're at 50/50 within eight years.
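The 40-megabyte figure is easy to sanity-check with back-of-envelope arithmetic. The genome size below is a standard figure; the functional fraction is my assumption, not Carmack's.

```python
# Sanity check on "the DNA that codes for the brain is only about 40 megabytes".
# The genome size is a standard figure; the functional fraction is an assumption.

BASE_PAIRS = 3.1e9                   # human genome length in base pairs
raw_mb = BASE_PAIRS * 2 / 8 / 1e6    # 2 bits per base (A/C/G/T)
print(f"raw genome: ~{raw_mb:.0f} MB")              # ~775 MB uncompressed

FUNCTIONAL_FRACTION = 0.05           # assumed functional/compressible share
print(f"functional slice: ~{raw_mb * FUNCTIONAL_FRACTION:.0f} MB")   # ~39 MB
# Lands in the same ballpark as Carmack's ~40 MB figure.
```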
“Of all the big companies that are going to develop your AI overlords, Facebook is one of the least likely.”
David Kushner: You tweeted that there’s a possibility that there is already a human-level AGI running in a secret lab somewhere.
John Carmack: The odds are low, at least partially because the community is not that large, and it sort of knows where everybody is and what's going on. I do catch little tidbits suggesting that the scope of Google's work is larger than is generally acknowledged. I think that it's probably not there. I think the odds are against it right now. But as we close out this decade, the odds are decent. I've got my bet hedged somewhat conservatively in saying that we might have a toddler-equivalent AI. But it's also possible we get an all-of-human-knowledge omni-mind sort of thing relatively early.
My tack is very much along human-level interaction, human modalities. But that's not the mainstream approach. There are not a lot of people openly going for this sort of human equivalent. Basically nobody's describing the idea of a Universal Employee like I am now, because that brings up the Luddite counterrevolutionary things. That’s one of my reasons for not doing this at Facebook. Because of all the big companies that are going to develop your AI overlords, I think Facebook is one of the least likely, along with maybe Apple, because the optics of it are just wrong. Everybody's expecting Google to do it, and probably Microsoft. But if there was a big story about, well, ‘Facebook's on the Verge of Creating Our AI Overlords,’ that would just play a lot worse. It's interesting, because Facebook has super smart people on the AI team; they really do have one of the leading AI teams. But I think their vision is restricted in this way: they're just not going to go make the AI omni-mind.
David Kushner: So in 30 years when this happens, let’s say, how will these Universal Remote Employees be used in our daily lives?
John Carmack: The way the costs are going to come down, everybody will have their own team. Almost everybody will have their own entire corporate structure to help them accomplish their goals.
There may be three orders of magnitude of uncertainty in how expensive this is going to be to run. If you're going to run a hundred or so at a time in this parallel plan, just because of memory and computation trade-offs, you might have 10,000 GPUs running and that's getting you 100 AIs running at the same time. It might start off, at the beginning, in these very expensive times, where your AI costs $1,000 an hour to run. You might be getting something that initially is learning-disabled and problematic and not producing much value, but eventually you get it up to the point where it can do something valuable. This will probably be a slow start, where there'll be certain jobs and occupations that they're good at early on.
David Kushner: Like what?
John Carmack: Maybe they are good first at call center jobs. They could be friendly all the time. You can reset them every day so they don't build up resentment about different things. They can have the perfect face and voice for doing this, while still being able to carry on a conversation and recall things in a way that a primitive chatbot couldn't. But eventually it will be like, "No, I want the AI as my Chief Technology Officer, or as my Chief Executive Officer.” Maybe it starts off costing $1,000 an hour to run an AI. That might be off by three orders of magnitude: if it turns out to be much, much easier, which is a legitimate chance right now, it might start off where you're only at 10 bucks or $1 an hour. But given that we know we're going to see three orders of magnitude of price-performance improvement, for sure, coming down the line, it will get to the point that you can run hundreds or thousands of these attached to a given task. There will be the questions of: do you make them more capable, or do you basically scale deeper on how you want to use them? It’s possible there may be political reasons that we try to not let them get too superintelligent.
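As a sketch of how those figures fit together, here is the arithmetic with the interview's cluster numbers. The GPU-hour price is my assumption, picked so the launch cost matches the interview's $1,000-per-hour figure.

```python
# The arithmetic behind the cost figures in this part of the interview.
# GPU_HOUR_COST is an assumption chosen to reproduce the interview's
# $1,000-per-AI-hour launch figure; cluster sizes come from the interview.

GPUS = 10_000           # GPUs in the cluster (from the interview)
AIS_PER_CLUSTER = 100   # concurrent AIs that cluster yields (from the interview)
GPU_HOUR_COST = 10.0    # assumed $/GPU-hour

per_ai_hour = GPUS * GPU_HOUR_COST / AIS_PER_CLUSTER
print(f"launch cost: ~${per_ai_hour:,.0f} per AI-hour")      # ~$1,000

# Carmack expects roughly three orders of magnitude of price-performance gains:
for drop in (10, 100, 1_000):
    print(f"after a {drop:>5,}x drop: ${per_ai_hour / drop:,.2f} per AI-hour")
```

The three steps land at $100, $10, and $1 per AI-hour, which is the same range as the "10 bucks or $1 an hour" scenario above.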
I don't have really strong views on it. In many ways, I think it's a foregone conclusion that there will be super intelligent AGIs inside my lifetime. I guess it just doesn't bother me that much. I'm a really smart guy, but I know there are thousands of people in the world who are significantly smarter than I am. It doesn't grind me down too much to think that there will be AIs smarter still. In many ways, I can look at it as: I'm their distant ancestor, having helped them come into being. That just doesn't tear me up that much, but I know some people are very, very wrapped up in human supremacy.