Discover more from Disruptor
Masters of Disruption: How the Gamer Generation Built the Future 
"With AI, I'm looking for that clever hack." In my new interview, John Carmack discusses his legacy, the future of AI, and why he almost went into nuclear engineering instead.
This post is part of a longform project I’m serializing exclusively in my newsletter, Disruptor. It’s a follow-up to my first book, Masters of Doom: How Two Guys Built an Empire and Transformed Pop Culture, and it’s called Masters of Disruption: How the Gamer Generation Built the Future. To follow along, please subscribe to Disruptor and spread the word. Thanks!
It’s been 20 years since I was living in Dallas and interviewing John Carmack, the co-founder of id Software, for my book Masters of Doom: How Two Guys Built an Empire and Transformed Pop Culture. The 51-year-old is now a consulting CTO for Oculus and a self-described ‘Victorian Gentleman Scientist’ of artificial intelligence.
Despite all his achievements in video games, rocketry, and virtual reality, he’s not looking back. “There are these continuous golden ages as we go on,” he tells me over Zoom from his home office, which is decorated with posters of his games. “We are right in the midst of at least a half dozen major scientific, industrial, cultural revolutions,” he says.
In the coming weeks, exclusively in my newsletter, I’ll be serializing excerpts from my new interview with Carmack. Today, we discuss his legacy, his approach to problem-solving, and his plans to hack the future of AI. The interview has been edited and condensed for clarity.
David Kushner: How do you look back on your legacy, and how it's manifesting across technology today?
John Carmack: I've said a lot of times that I consider myself an almost remarkably unsentimental person. I very rarely wind up going back and reminiscing about anything. There are enough factors outside of me that bring the old days to my attention: the anniversaries, other people reminiscing. I feel I don't need to have any personal motivation for that. I largely could just go very extended periods of time without really thinking about any of the old days. [I] stay super engaged and forward-looking about the things that I have to do.
Now, obviously it all kind of pulls into my gestalt view of everything. Everything that I've been through contributes to how I continue to go forward. In some ways that might be one of my strengths at a systems level. Like I've always said, I do not have the detailed memory that John Romero had for anything, even back then and certainly not to this day. I don't really even trust my own memory of things that I don't continuously refresh and polish, which is also a sort of mutating view of things as I go through them. If you revisit something enough times, it slowly morphs in your personal memory.
The old days feed into my views on software engineering, and I can tell when I have an in-built bias on things. I'm cautious about that nowadays. In modern engineering decisions, when I've got something that I'm looking at in terms of Oculus, Facebook, or artificial intelligence, I have a bias. [It’s] my blink reaction to an approach, and it is going to be informed by what are, in some cases, outdated views and methodologies. I have certain things from the '80s and '90s baked into my engineering sense. The positive way to take that is to look for areas where that still matters. People nowadays might not be paying as much attention to efficiency and how you can do more with less. Those are still my core competencies.
While the exact things about picking the right assembly language instruction might not matter as much nowadays, [it’s good to be] aware that there are these worlds of power and efficiency that are largely underutilized by the procedures and engineering processes that we do today. Every once in a while, I do still get some case where I'm looking at some artificial intelligence thing and then I realize that, ‘oh, maybe this can be viewed through the lens of binary space partition trees,’ things that I was doing way back in the day that I have a lot more built-in sense for in certain ways. But I have to be cautious. There are aspects of problems that, when you look at them through a certain lens, you might miss really important things.
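For readers who haven't encountered the data structure Carmack mentions, here is a minimal, hypothetical sketch of the binary space partition idea. Classic BSP renderers split polygons by arbitrary planes; this toy version simplifies to axis-aligned splits over 2D points, and all names are illustrative, not drawn from id Software's code. The payoff is the one Carmack built Doom around: partition the scene once up front, then traverse it far-to-near from any viewpoint without per-frame sorting.

```python
# Minimal, illustrative binary space partition over 2D points.
# Build the tree once; then back_to_front() yields the scene in
# painter's-algorithm order (far side of each split before near side).

class BSPNode:
    def __init__(self, points, depth=0):
        if len(points) <= 1:                     # leaf: nothing left to split
            self.points, self.left, self.right = points, None, None
            return
        self.axis = depth % 2                    # alternate x/y splits
        points = sorted(points, key=lambda p: p[self.axis])
        mid = len(points) // 2
        self.split = points[mid][self.axis]      # splitting plane position
        self.points = [points[mid]]
        self.left = BSPNode(points[:mid], depth + 1)
        self.right = BSPNode(points[mid + 1:], depth + 1)

    def back_to_front(self, viewer):
        """Yield points with the far side of each split before the near side."""
        if self.left is None and self.right is None:
            yield from self.points
            return
        if viewer[self.axis] < self.split:
            near, far = self.left, self.right
        else:
            near, far = self.right, self.left
        yield from far.back_to_front(viewer)
        yield from self.points
        yield from near.back_to_front(viewer)

tree = BSPNode([(1, 5), (4, 2), (7, 8), (3, 3)])
order = list(tree.back_to_front(viewer=(0, 0)))  # farthest split-side first
```

The ordering comes from the tree's planes rather than from distance-sorting every frame, which is what made the structure cheap enough for real-time rendering on 1990s hardware.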
“People assume I'm a math wizard. But everything I did making games was basically algebra II and trigonometry.”
I'm about to turn 51, but I still think that I've got a good open mind about learning new things. That’s been one of the really grand things for the last two years, jumping into this very different field of machine learning and artificial intelligence.
I have a perspective that's not usually shared by most of the field, which is kind of your academic PhD program people. I think that may yet be valuable or even critical as we take steps there. It’s been really grand expanding my horizons into a lot of areas. I've pushed my way through on some of the more theoretical, math-heavy side of things that I tend to shy away from. A lot of people are shocked to know that I really don't have a strong calculus background. People just assume I'm a math wizard. But everything I did [making] games was basically on a really strong high school math background, algebra II and trigonometry.
David Kushner: How much has your approach to problem-solving evolved now with your work in AI?
John Carmack: The best solutions are when you wind up realizing that some unrelated approach is actually the right thing to do. Right now, what I’m doing in my work in AI is assembling a toolbox. I'm very cautious about the mainstream, mainline approach that Google and OpenAI and Microsoft and all these companies are well-suited to pursuing. There’s very little value in me being the 1,020th smart guy pursuing the mainstream approach. So I am looking for angles for coming at things in a way that's not conventional, because the conventional approach may yet turn out to work. Given enough time and resources, it almost certainly will. But there may be a shortcut to the future, and that is very much what I've always been about.
“With AI, I’m looking for that clever hack.”
It's like, okay, side-scrolling is easy if you've got enough computing power. You just draw the screen anew every single time. And if you were willing to wait another five years or whatever, then you could just do it in that simple, straightforward way. There are approaches, these hacks, that let you bring the future forward by a number of years. It’s likely there's a situation like that with AI. There may be factors of 100 or 1,000 difference based on the exact approach that you use. I am looking for that clever hack, that way of looking at the problem differently that may make it possible to do it with 1% of the resources of doing things at large scale.
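A toy sketch of the tradeoff Carmack is describing, with hypothetical names (this is not his actual adaptive tile refresh code, just the general shape of the hack): brute force redraws every tile every frame and needs raw power, while the hack compares against what is already on screen and redraws only what changed.

```python
# Toy illustration: full redraw vs. redrawing only "dirty" tiles.
# (Illustrative only; not the original adaptive tile refresh.)

def redraw_full(screen, world):
    """Naive: copy every tile, every frame. Needs lots of raw power."""
    for i, tile in enumerate(world):
        screen[i] = tile
    return len(world)                     # tiles drawn this frame

def redraw_dirty(screen, world):
    """Hack: compare and redraw only tiles that actually changed."""
    drawn = 0
    for i, tile in enumerate(world):
        if screen[i] != tile:
            screen[i] = tile
            drawn += 1
    return drawn

world = [0] * 1000
screen = list(world)                      # screen starts in sync
world[42] = 7                             # one sprite moves
assert redraw_full(list(screen), world) == 1000
assert redraw_dirty(screen, world) == 1   # 1000x less work this frame
```

The two functions produce the same picture; the difference is entirely in how much work each frame costs, which is exactly the kind of constant-factor gap Carmack hopes exists in AI.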
In pretty much everything that I've done before, whether it's in the early games or the rocketry or virtual reality, I felt I had line of sight on the solution, that I was not far away. Maybe I didn't know exactly what to do, but it always seemed like it was definitely there. I was very rarely concerned about whether the problem was solvable. It was just how long would it take me to get to a solution.
Here’s a quick primer on Carmack’s innovations with “side-scrolling” games.
David Kushner: What was the line of sight that you had about gaming when you started out, and how does that compare to your work in AI now?
John Carmack: The earliest games, the side-scrollers, were very much trying to emulate the arcade and the console. There were no Nintendo-style games [for PCs], so there were clearly valuable games to be made. And there were enough hints in the way the video controllers worked that you could do certain things. It seemed like the pieces were there, but nobody else had demonstrated exactly what we wanted to do with them. But I felt it was very likely that there were multiple paths to getting it done.
That’s absolutely going to be the case with AI as well. It's not that there's one crystal singular path that's the only way to do it. There's going to be a ton of ways to do it. So that's beneficial. But unlike these other cases of being able to look at the arcade games for the side-scrollers, or being able to look at the silicon graphics and Evans & Sutherland image synthesizers for the 3D gaming, we only have the existence proof of biological life for intelligence. We don't yet have some super high-end artificial intelligence proof that I'm just trying to optimize down onto a lower performance substrate. So it really is a research problem that nobody in the world knows the answers to right now. But I take my hope from the fact that we do have the existence proof of biological intelligence.
While there are some people that make arguments about how neurons may be exponentially more complicated than we're giving them credit for, I don't buy that at all. If anything, I think our biological neurons are going to turn out to be a kind of crappy substrate for intelligence. And we are not going to have to replicate a hundred trillion synaptic connections and 86 billion neurons in our virtual intelligences to replicate human intelligence. So yeah, I'm not worried about it being possible, but the route between here and there is unknown.
“There are some deeply fundamental aspects of learning that have started to seep into my consciousness.”
David Kushner: When you talk about finding a shortcut to the future, is that a discipline that you have developed or do you think that ability is just innate in you?
John Carmack: There’s definitely a way of looking at things and seeing the opportunities. Before I went into AI, when I was looking at next steps to take, I was really looking at economical nuclear fission energy sources. And you start to see certain signs about the way things are. There are so many things in nuclear fission that look very much like aerospace pre-SpaceX, where we knew how to do things better, but there were these enormous economic decelerating forces and just the sluggishness of everything. The physics was always there. It could clearly be 10 or 100 times cheaper than the way practice was going. The things slowing it down were organizational, not laws of physics. Nuclear energy very much seems to be like that, and with AI I am asking the same question again: what are the laws of physics that are constraining this?
With artificial intelligence, we have many of these things that we can put numbers on. We know there are 86 billion neurons, 100 trillion synaptic connections, 40 megabytes of code in the DNA that affects the development of the brain. So we have bounds, at least, on what's going to be necessary for this. But the AI question is very interesting. There are three orders of magnitude of uncertainty, much more than with many of these earlier problems. But we are going to get three orders of magnitude of performance for these specialized computations by 2030. And it might even be sooner than that.
There are some deeply fundamental aspects of learning that have started to seep into my consciousness and my view of the world and my decision-making process…which has been pretty interesting and enlightening for me.
My interview with John Carmack will continue next Tuesday here in my newsletter. To read more of my feature stories, as well as posts from my longform project, Masters of Disruption: How the Gamer Generation Built the Future, subscribe below.