Hello, and welcome back to the Cognitive Revolution!
Today, my guest is Emad Mostaque – famously the founder of Stability AI, currently the founder of Intelligent Internet, and author of the provocative new book, "The Last Economy: A Guide to the Age of Intelligent Economics."
Emad has long been one of my favorite thinkers in the AI space.
Very few people manage to grapple, seriously and honestly, with the world-changing nature of AI while also building something that matters in the here and now, but Emad has.
Since founding Stability in 2019, he's demonstrated a deep understanding of AI technology trends, a keen eye for talent, the ability to inspire people with a positive vision for an abundant and wondrous future, and an appreciation for the stakes and risks – as evidenced by the fact that he signed, while CEO of Stability, the famous 2023 Pause Letter.
The fundamental problem Emad addresses in the new book is that human society is built on the premise of scarcity. In humanity's hunter-gatherer past, everyone had to contribute to the group's survival, and freeloading couldn't be tolerated. And still today, as Elon Musk puts it, "If you don't make stuff, there's no stuff."
But what happens when an AI doctor, which doesn't need to eat, can provide better front-line medical advice than a human doctor, at 1/1000th the cost?
Such technology represents abundance for patients around the world, many of whom will enjoy better access to medical expertise than ever before, but … taken to its logical conclusion, it implies poverty for human doctors.
Similarly, what happens when all the cars drive themselves, and the millions of Americans who earn a living by driving are no longer needed for that purpose?
And zooming out, what happens when this pattern repeats itself across a majority of the economy, leaving displaced human workers nowhere to go, all in less than a generation?
Emad argues that there's no escaping these questions. Even if AI capabilities stalled today and we never got a truly Powerful AGI, the AIs we already have, with proper implementation and integration into existing systems, are powerful enough to support this change. And in reality, with frontier AI developers racing to build AI agents explicitly designed to replace human labor, we have maybe 1000 days to find good answers.
With that in mind, Emad is simultaneously working to assemble the open-source datasets and train the small models needed to ensure the abundant future is accessible to all, while also trying to answer the question of what the future, and the transition to it, might look like in as much concrete detail as possible.
In this conversation, and even more in the book – which I do encourage everyone to read and ponder – Emad coins a number of memorable terms – including the Intelligence Inversion, the Metabolic Rift, and the Abundance Trap – and also proposes a new way to think about the health of our economy, which would measure not just the monetary value of material goods and services sold, but also the levels of Intelligence, Connectivity, and Resilience in the system.
He also makes fascinating analogies between the mathematics of neural networks, and the economics of firms and markets, and even proposes a new dual currency system, with one for physical goods that are rivalrous in consumption and intrinsically scarce, and one for intangible goods that are non-rivalrous and fundamentally abundant.
The realist in me recognizes that these are underdog ideas. But as Yuval Noah Harari has famously explained, the stories we collectively tell ourselves are a huge part of how society operates. Money is a shared fiction, but a useful one, because it helps us allocate scarce resources relatively efficiently. So the idealist in me says that, in context, Emad can't be any crazier than whoever it was that came up with the idea of using gold or shells as a medium of exchange in the first place.
Big picture, while I usually tend to assume that the economic upside of AI will take care of itself, I think it is important to recognize that what Balaji Srinivasan called "the nuclear outcome" – where we get the weaponization of and constant threat from AI without the material abundance and accompanying personal freedom – is still a real possibility.
So, can we collectively start telling ourselves a story of abundance, in which a person's right to a decent life isn’t predicated on their economic contributions, and in which caring for one another isn’t something we do to meet our own needs, but because such interactions are a core part of the human experience? And can we do it in time to give people something to believe in before the inevitable modern Luddite movement shows up and tries to shut the whole thing down?
While so many people – myself, at times, included – are focused on the latest model updates, on the horse race coverage of who's winning and losing, and on making our apps work, Emad invites us to stop thinking so small, to recognize that we have agency, and to challenge ourselves to intelligently imagine and intentionally build our own shared future.
Or as he puts it in the book: "The machines are taking our jobs. Thank God. Now we can get to our real work."
With that, I hope you enjoy this challenging and inspiring conversation with the one and only Emad Mostaque.
Nathan Labenz: Emad Mostaque, founder of the Intelligent Internet and author of The Last Economy: A Guide to the Age of Intelligent Economics, welcome to the Cognitive Revolution.
Emad Mostaque: Thanks.
Nathan Labenz: Better? Welcome back, I should say. I'm excited for this conversation because one of my common refrains, as regular listeners will know, is that the scarcest resource is a positive vision for the future. This book, which you describe as an engineering manual for building the future, is a combination of a diagnosis of many things that are going wrong in our society today, and also some vision and even recommendations, some of which are fairly opinionated, about what we might do to build a much better world. I think that's great. I really applaud you for taking on the challenge and doing the hard work of putting something like this forward. It all is really based around this notion of intelligence theory, and a good place to start is just giving you a chance to describe what intelligence theory is.
Emad Mostaque: Basically, I've been thinking a lot about what the new economy looks like. We've seen existing economics might be challenged, so I thought, "Let's go back to first principles." At my previous company, Stability AI, we built Stable Diffusion and other models with hundreds of millions of downloads, and they were getting better than humans at doing various things. Now we see that with the new models, agents, and so on. Intelligence theory basically goes back to a principle: what are the core axioms or principles that define reality? There was this observation of persistence: certain complex adaptive systems persist over a long time in uncertain environments. The ones that do that best are those that have the closest match between their internal model of reality and reality itself, which looks like the loss function in generative AI. In fact, mathematically, it was the same. So intelligence theory states that the ones that do best are those that minimize that loss or surprise, that have the best models. We see that in everyday life, and we see that generative AI itself has created the best models of reality. The best agents now are AI agents. The best models of reality are AI models. Then I thought, "Can we have economics deriving from that one base principle? What does the mathematics of that look like when we apply the equations of generative AI, which came from physics, to economics?" So we started building a new economics from that basis, as opposed to the classical economic bases of scarcity, utility, or general equilibrium, or other things which cannot even be measured. These concepts, built up over hundreds of years, always assumed that humans would be the top and main producers, which may not be the case anymore in a few years' time.
Nathan Labenz: Let me ask a couple of really naive questions. First, I have some basis in the idea that predicting your environment is key to acting effectively in the world. One of the best blog posts I've ever read, amazingly, goes back to 2017 from the old Scott Alexander blog, Slate Star Codex. It's called Predictive Processing and Perceptual Control, and it's basically a book review of a very long, dense book that I think he does a great job synthesizing into the idea that a simple model of us as biological humans is that we have many layers of prediction happening between our peripheral neurons that receive signals from the world and our highest order of neurons in the prefrontal cortex. Along those many layers, which are already sounding quite similar to neural networks in some ways, the role of each layer is to predict what's about to happen. If the signals it's getting from the lower level are consistent with its predictions, then it can just be quiet. This is how you can put many things on background mode while you focus on whatever you're focusing on.
Emad Mostaque: Mm-hmm.
Nathan Labenz: But when those predictions and the incoming signals diverge, that is surprise, and that is what calls your attention to things and gets them escalated up the ladder into your conscious awareness. I thought that blog post clarified more for me about what's going on, and why I'm experiencing and perceiving what I am, than perhaps anything else. It's all about predicting what's about to happen and making sure you're in sync with the environment around you. I guess one kind of — I don't know if this is a naive question or a profound question — why do we... what counts here? One might say, "A human can last for 80 years, and a giant tortoise might last for 150 years, but if I just sit a rock in a quiet location, I could come back a thousand years later and it's still there." It doesn't seem to be predicting anything, so how do we know, or where do we conceptually distinguish between things that have this capability and those that don't? Again, this might be super obvious, but sometimes I find these apparent binaries are in fact much blurrier, so I thought it was worth asking.
Emad Mostaque: Yeah, so I mean, this is the nature of complex adaptive systems, right? Systems that are in motion and where you have information flows by the interaction of different agents there. So if we go a level down in intelligence theory, it was basically also saying that the ones that succeed most are the ones that minimize the computational overhead. A rock doesn't need to compute anything. A rock just is, right? Like, um, it doesn't really do anything to exert action on its boundaries or anyone else. If you look at general complex entities and agents, in intelligence theory we split things up into three different types. Predictive error, which is the mismatch between the model and reality, so that's kind of surprise. The model's complexity itself, the cost of thinking. Because, you know, the more efficient you are at that, the better you'll do. Like, you've read Scott Alexander's post, and then it's given you a mental framework for the world, which allows you to process things in a different way. Hopefully this book does the same. That's similar to latent spaces in a generative AI model, as it kind of folds it. And the final thing is the update cost, which is the cost of learning. A rock doesn't have any update costs. It doesn't have to learn anything, because it doesn't exert action on anything and has no capability to respond either. The human brain with neurons... I mean, the equations here are very similar to Karl Friston's Free Energy Principle, which, again, if you look at the cost, predictive error, model complexity, update cost, that's Helmholtz free energy as well. You're trying to minimize this concept of free energy, trying to optimize computation as an agent that can act. And the best AI models do the same. It's all gradient descent, trying to optimize that, trying to minimize the loss function. So I think, again, you have this flow of agency, this flow of interaction. But this framework only applies to these complex adaptive systems. It doesn't apply to static systems. It doesn't apply to static matter. Who knows? Maybe we'll find that information and intelligence are related, which is why we've got things like wave-particle duality, etc., but that's a long way from where we are now. Where we are now is we've built our whole economic picture based on assumptions from 200 years ago, when most of the value was the land and the serfs that you had. You know? Adam Smith's scarcity and other things like that. The Wealth of Nations, but now it's the wealth of robots. And so we need a better way of describing the world today and the world that's about to come, and that's a world in which... You know, Nat Friedman, a couple of years ago I was doing a panel with him, uh, and he'd coined this concept of AI Atlantis. There's a brand new continent with a trillion agents and robots on it, and it's about to enter the workforce. What happens?
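To make the three-term decomposition Emad sketches above easier to hold in mind, here is one schematic way to write it down. The symbols are editorial shorthand, not the book's notation, and the second line is the standard variational free energy from Friston's formulation, included only for comparison.

```latex
% Schematic only: the symbols below are editorial shorthand, not the book's notation.
\[
\mathcal{F} \;\approx\;
\underbrace{E_{\text{pred}}}_{\text{predictive error (surprise)}}
\;+\;
\underbrace{C_{\text{model}}}_{\text{model complexity (cost of thinking)}}
\;+\;
\underbrace{C_{\text{update}}}_{\text{update cost (cost of learning)}}
\]
% Standard variational free energy (Friston), shown for comparison:
\[
F \;=\;
\underbrace{D_{\mathrm{KL}}\!\left[\,q(s)\,\|\,p(s)\,\right]}_{\text{complexity}}
\;-\;
\underbrace{\mathbb{E}_{q(s)}\!\left[\ln p(o \mid s)\right]}_{\text{accuracy}}
\]
```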
Nathan Labenz: Yeah, there's some big questions, uh, there for sure. So, let's put a pin maybe in the math and circle back to it opportunistically as we go through, uh, the diagnoses and recommendations and visions in the book. I would take one extra beat, though, on what's going on in the discourse. You know, it seems pretty obvious to me that this AI thing is gonna be a big deal. The Atlantis metaphor, you know... or the, uh, country of geniuses in a data center-
Emad Mostaque: Yeah.
Nathan Labenz: ... uh, you know, certainly resonates with me. It seems like that'll be a big deal. Um, and yet, you know, we've got all sorts of smart people, including some that are at the top of the field in AI, but also, you know, folks like Tyler Cowen come to mind, where he's like, "Oh, AI could accelerate economic growth by half a percent of GDP per year, uh, and that would be amazing." You know? And we shouldn't, uh... And he thinks that's a big deal.
Emad Mostaque: Well-
Nathan Labenz: So, how do you make sense of why people see this so differently, and what do you think people are missing when they, you know, put an upper bound of a half percent GDP per year on the AI phenomenon?
Emad Mostaque: Well, this is the update cost, the cost of learning, right?
Nathan Labenz: Yeah.
Emad Mostaque: There, there's quite a high update cost to your priors when big things happen. Like, you know, when COVID was about to happen in January, I was like, "Oh my God, the world's going to crap." And a few of us said that, and we posted about it publicly. I did like podcasts and stuff. Most people were like, "It's fine," until Tom Hanks got it, and then you had a phase shift in the way things are perceived. We have to remember, the pace of what has happened is unlike anything we've ever seen before. It's three years since Stable Diffusion. It's just over 1,000 days since ChatGPT. It's one year since o1-preview was announced, right? It's just over a month since GPT-5. Just over a month ago, the vast majority of AI users in the world were using GPT-4o, you know? And so, that's kind of your benchmark. I think I saw some statistics, 20% of Americans still haven't heard of ChatGPT. And you think about that, and you're like, technology takes a while to diffuse, it takes a while to update the priors, but most people are still thinking about the previous generation of AIs that could only think for an instant and hallucinated all over the place. Whereas those of us who are right at the cutting edge... Like, I'm running Codex right now on the CLI, and it's been running for three hours, building a whole textbook website for my textbook, you know? Like, just set it and forget it. And the capabilities have just gone up, again, exponentially, 'cause they're breaking through from not being quite good enough, like, "Hey, why doesn't this AI transcribe properly?" to suddenly being superhuman, so it's that transition phase. I think that classically you've been held back by various constraints, like robots, for example. You're not gonna be able to build enough robots, because we won't have the spare parts, and it takes time to build factories. The difference with this generative AI is you already have the hardware, you just have to build the interfaces and the flows properly. So yesterday, the Alibaba Tongyi Qwen lab released a 30-billion-parameter MoE model with three billion active parameters (A3B) that outperforms Grok-4 on Humanity's Last Exam, and outperforms Deep Research and all these massive models. That's just three billion active parameters, which for listeners means that basically you can run it on a CPU with 16 gigabytes of RAM. Yet it's outperforming these frontier models. That's crazy. And what that means is that reasonably... Like, our medical model, II-Medical, is eight billion parameters; it needs eight gigabytes of RAM. It outperforms human doctors. We don't think it's good enough yet, even though it outperforms human doctors. By next year, it'll be better than any doctor, with full traceability, and you'll be able to run it on any smartphone. How do you calculate the impact of that in classical GDP or economics or other terms? Because you've never seen anything like that. But there was a recent MIT study that showed that 95% of corporate AI deployments haven't worked because they're all running the last generation of models, and the last generation is six months out of date. You know? So, I think it's this inflection point takeoff that we're basically at now, where models and systems can go from seconds of thinking to almost infinite length, where they can check their errors and they can adapt. The hallucinations have dropped dramatically, and they've finally broken through on the IQ level as well as being able to view your monitor and check everything.
But you'll need to know about AI to put all those pieces together and realize what we'll have six months from now, a year from now: the way you use AI is you give it a call or you have a Zoom with it, and you can't tell if it's human or AI on the other side. And that's the economic, social, and other disruption that we have, because the cost of doing that will be a few pennies an hour, a dollar an hour, shall we say. And no one's got that in their numbers, because everyone was like, "We have to build giant supercomputers with huge models in order to achieve this AGI performance." Whereas I don't really care about general intelligence. The real impact is actually useful intelligence. The real impact on the economy is not a polymath coming up with a brand new thing, and I'm sure we will have that. It's basically someone just blooming following instructions. It's what I call the cooks versus the chefs. You know, I think, uh, Wait But Why had this when discussing Elon Musk. Like, everyone's on this spectrum from chefs that come up with the recipes to cooks that actually execute them. The cooks are the ones that will impact the economy, and people aren't realizing that: that you will have these virtual cooks, and then physical ones when robots become comrades that can just do things. But if I was looking at the technology six months ago, I'd be like, "Yeah, of course it can." Today, it can.
Nathan Labenz: There are a couple ideas that jump out. One is the distinction between zero to one and one to N, to put it in Peter Thiel terms. It sounds like you are saying perhaps frontier minds are inherently more focused on the zero to one, which we do not yet have. They are skeptical of it and may be underestimating the importance of what I have heard you call satisficing in the past. The one to N that delivers something consistently to everyone, in terms of short-term impact on daily life, might even be bigger. There is also this incredible cost curve we are on. The original GPT-3 was $60 per million input tokens. GPT-5 is a dollar, or a dollar and a half, per million input tokens. So, it is a 95% plus reduction in cost while also being dramatically better. That is hard to count in GDP. We have tried to do that over time with things like, "Your cell phone is a little bit better," or adjusting for a standard basket of goods. It seems there is a good argument that this may not align with the Tyler Cowen view, but perhaps it suggests that GDP is simply the wrong measure. That definitely ties into many of the forward-looking ideas in your book. I think you do a great job of coining highly memetically fit terms. We will go through a number of them over the course of the hour. Let us start with the notion of the abundance trap and the metabolic rift. Both of these begin to address how economic activity, as we have traditionally measured it through something like GDP, is on the verge of breaking down.
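A quick sanity check on the cost-curve claim above, treating "a dollar, or a dollar and a half" as roughly $1.25 per million input tokens; the exact figures are approximations from the conversation, not official pricing tables.

```python
# Rough check of the input-token price drop discussed above (prices approximate).
gpt3_price = 60.00   # USD per million input tokens, original GPT-3 davinci
gpt5_price = 1.25    # USD per million input tokens, "a dollar, or a dollar and a half"
reduction = 1 - gpt5_price / gpt3_price
print(f"{reduction:.0%} cheaper")  # ~98%, consistent with the "95% plus reduction" figure
```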
Emad Mostaque: Yes. So, we are at this interesting point where the abundance trap is when we achieve post-scarcity in the realm of intelligence. Intelligence becomes abundant. We have seen big changes, like with the Gutenberg press, suddenly people could read and have access to intelligence, but it has traditionally been gated. Yet, by next year, everyone in the world with a phone will be able to get an expert doctor opinion that actually outperforms doctors. That is crazy. Everyone will have access to legal advice, even from Grok, that is better than most legal advice. And they do not make mistakes. Doctors make errors in roughly 20% of cases; I think that is the average. So, the abundance trap is that we can have this disruption, and then the economic system, which is based on scarcity, is going to process this as poverty. Because you will have job losses, you will have other issues, even if our lives get better. These new systems, while corporate profits might go up, will be displacing jobs. They will displace knowledge work because you will be able to hire employees on the other side of that virtual screen, KVM jobs, I believe they are called—keyboard, video, and mouse—that do not sleep, do not make errors, and constantly learn and improve. The metabolic rift here is that GPUs do not need to eat. They do not need housing. They do not pay taxes. In fact, they are tax deductible on usage. You can get them by the hour. So, this is the rift that occurs where suddenly you have this explosion of intelligence, this abundance, yet it is probably going to be bad for us in aggregate unless we allocate it correctly. The metabolic rift is that these things do not need to eat, they do not need housing, they do not consume. The only thing an AI needs is to achieve its objective function. We all talk about AGI, and I have Eliezer Yudkowsky's new book ready to read. What is it called? If Anyone Builds Us, We All Die?
Nathan Labenz: If Anyone Builds It, Everyone Dies.
Emad Mostaque: Yes. We are not talking about AGI here. We are talking about AI accountants, AI lawyers, AI designers, those types of things. They do not consume in the same way, and that will never change. Once it gets smarter than a human, it is done. It will never get dumber. Once it gets more capable of executing, which is the nature of any organization—it is just an executor. You have a framework: money going out is less than money coming in. They will out-execute humans. And that will never shift; this is the final inversion we have had.
Nathan Labenz: To put a quantitative intuition around the notion that GPUs don't need to eat, they do need electricity, of course, but I've been surprised by, and consistently surprised others with, how little energy a cell phone battery or a laptop battery actually holds. A cell phone battery is typically around 20 watt hours, and a laptop battery is around five times that, about 100 watt hours. This obviously depends on your model.
Emad Mostaque: Mm-hmm.
Nathan Labenz: However, 100 watt hours, in my neighborhood in Detroit, Michigan, costs less than two cents for electricity. We pay about 18 cents per kilowatt hour. So 100 watt hours, which is a tenth of a kilowatt hour, costs less than two cents. When you consider that we'll have, and indeed already have, models that can run on my laptop for some amount of time, the ability to run a model on my laptop for a couple of hours or whatever, for an energy cost of two cents, starts to provide an intuition behind just how much economic advantage these things are going to have. Then you have the on-demand spin up, spin down, and all the other unfair advantages they possess. It suggests that Malthusian competition will be very difficult for humans to win.
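To make the arithmetic above explicit, here is a minimal back-of-envelope sketch; the battery size and the 18 cents per kilowatt hour rate are the figures quoted in the conversation.

```python
# Back-of-envelope cost of fully charging a laptop at the residential rate quoted above.
laptop_wh = 100        # watt-hours, a typical laptop battery
price_per_kwh = 0.18   # USD per kWh, roughly the Detroit residential rate cited
cost = (laptop_wh / 1000) * price_per_kwh
print(f"${cost:.3f} per full charge")  # ~$0.018, i.e. under two cents
```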
Emad Mostaque: Human intellect is capped, right? You'll get a bit better, but these models can continuously improve in aggregate. As I said, the cost of doing an activity is minimal. A practical example: some people interested in this may have built their own websites or paid someone to do it. That would have cost thousands of dollars. Now, you can go to replit.com, chat with the new agent, and within a day you'll have a website that's probably as good as the one you built, and the cost will be about $20-$40, even with Replit's margin. That cost will drop tenfold by next year, and then tenfold the year after, just from the speed-up of compute chips. We saw this with images. Now, if you use Nanobanana and Imagen, for a couple of cents you can make just about any image you want. How much would that have cost before? So what we have is a big displacement of classical capital across the board because the cost of creation suddenly goes to zero. The cost of consumption in the previous internet age went to zero; now the cost of creation is going to zero. The quality of the creations is actually better because the AI, through its latent space and mapping, actually understands aesthetics and similar concepts. This is what I call the intelligence inversion, right? First, you went from land and the serfs on it, then it was about how much labor you had in terms of muscles. Then it was about the capital you had, be it industrial or then software, SaaS. Now this is intelligence inversion, where you're out-competed on intelligence, but more than that, on taking something and making something from it. Digitally now, and physically soon. There's nowhere left to pivot because we pivoted up the stack, you know? We own capital. We don't need our muscles anymore. Where do we pivot now? That's a big question mark for us. What is our purpose? How does the economy run when the marginal productivity is all AI-driven? And this is before we consider robots and robotics, because those are getting freaky. Like that Unitree robot, I don't know if you saw a few days ago, they pushed it over and it just got back up in one second. You see the dogs chasing, you see them making recipes. Then you calculate that if you have a Tesla Optimus robot for $20,000, if you work it hour in, hour out, it's $1.50 an hour for an Optimus. And you think, "Okay, that can probably be a plumber in a few years." So it's just coming across every part of the economy bit by bit. The cost is the electricity, but the electricity costs, I think, are way lower than most people project, because everyone is against thinking about these AGI models. We saw what happened with GPT-4.5 when it came out; it was too expensive. Everyone just wants to use the cheap ones. But what is the appropriate price for a million tokens, which is about 800,000 words? It's $1.50 now. What happens when it's 15 cents? What happens when it's one cent? That's crazy.
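For intuition on the "$1.50 an hour" Optimus figure above, here is the implied amortization, assuming round-the-clock use and ignoring electricity, maintenance, and financing; those assumptions are editorial, not from the book.

```python
# Implied amortization behind "a $20,000 Optimus worked hour in, hour out is ~$1.50/hour".
robot_cost = 20_000    # USD purchase price, as quoted
hourly_rate = 1.50     # USD per hour, as quoted
hours = robot_cost / hourly_rate
years_24_7 = hours / (24 * 365)
print(f"{hours:,.0f} hours, about {years_24_7:.1f} years of continuous operation")
# ~13,333 hours, roughly 1.5 years running 24/7, before power or maintenance costs.
```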
Nathan Labenz: On this point about nowhere left to pivot, I think this is an echo of Yuval Noah Harari, with whom I've associated a similar argument. One thing people often bring up as a refuge in terms of value creation, if our bodies are being out-competed by machines and our minds by AIs, is taking care of each other. This refers to the caring economy, the teaching economy, the mentoring economy. Another answer is generally creative pursuits. I think creative pursuits might feel more like leisure in an AI future. But for this caring one, in particular, I could imagine that people might have an intrinsic preference for other people to care for us, our kids, our parents, and so on, and maybe we don't turn that over to robots. I gather you don't think that's a viable or sustainable place to concentrate activity, but I'd love to hear a more fleshed-out argument for why you don't see that as the next evolution.
Emad Mostaque: We need to look at economic flows and their nature. Consider the Fed and central banking. What is the Fed's mandate, assuming it isn't dismantled? It's employment and inflation. The Fed raises interest rates when inflation increases, which raises borrowing costs across the board, reducing consumption and inflation, and decreasing hiring. When it drops interest rates, companies can borrow more, people can spend more, and hiring increases. This system breaks down when dropping interest rates leads to increased hiring of GPUs, for example. These are the economically productive, white-collar and above areas of society, which can experience significant change. It's the knowledge economy being disrupted. Things like taking a walk with your child, AI won't replace that. The intersubjective experiences of hanging out with friends, learning a new skill, or enjoying a concert—I wouldn't go to a concert performed by robots; I'd prefer one with people. This is the Taylor Swift economy, as it were, because it's about socialization. The nature of our work and jobs has really changed. We need income and capital for survival, but work has also become a core part of our identity. We've shifted from a network-based identity to one like, "I am the founder of Intelligent Internet," or "I'm a CEO," "I'm an AI guy." We need community, and we've found that in the workplace. However, there's been a hollowing out of community, both through social contracts and locally. In the past, you knew your neighbors, and kids played together. That happens much less now, especially in cities. Religion has declined, which was also an important part of community. You didn't need to believe, but it was supportive. Then there's the concept of purpose: do what you like, do what you're good at, and do what adds value that others believe in too, making you happy in the process. It doesn't matter what it is; it could be playing World of Warcraft, working in an office, or playing competitive tennis. Finally, there's structure. People need some structure around what they're doing. The caring economy, the sharing economy, could emerge. The question is how the mathematics of these things work. For example, with universal basic income, many suggest $16,000, the US poverty line. If every adult in America received $16,000, the cost would be $5 trillion, which is the entire tax base of America. Corporations contribute $0.9 trillion, so saying "tax the AIs" doesn't even work. We need to rethink how money flows, and our purpose will increasingly revolve around measuring what is important, increasing network effects, and returning to a kind of past state. This is interesting because in a Star Trek post-scarcity world, what do they do? They explore, improve, and adapt because they don't need to worry. They can 3D print anything. That's the ideal abundance future where you don't need to work to live effectively. But our current system requires you to work. Our social security nets probably won't be good enough. There will be a very interesting transition period. As I said, not every job will disappear. It's nice going to the barber; nursing and education will adapt and change. The question is whether we can ensure capital flows appropriately so that people, already frustrated with the breakdown of the social contract, don't become desperate because they are left behind, while all excess returns go to capital owners, leading to greater inequality, even if economic numbers rise.
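An order-of-magnitude check on the UBI arithmetic above; the adult-population figure is an editorial assumption, so the total lands near, rather than exactly at, the $5 trillion Emad cites.

```python
# Order-of-magnitude cost of a $16,000/year payment to every US adult.
poverty_line = 16_000      # USD per adult per year, the threshold used above
us_adults = 260_000_000    # rough US adult population (assumption)
total = poverty_line * us_adults
print(f"${total / 1e12:.1f} trillion per year")  # ~$4.2T, the same order as total federal tax receipts
```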
Nathan Labenz: Could I summarize that in terms of the original objection or question? We went from physical labor to cognitive labor, and perhaps we can move to caring labor. It sounds like you're saying there probably is still a unique role for humans there, if only because we intrinsically value the fact that such activity comes from one another. But I'm not exactly sure how to summarize the 'but.' It's like there's not enough space there for everyone, or if we try to pile everyone in there all at once, it just won't be...
Emad Mostaque: Can we adapt the current format so people have the freedom to do so? Think about lockdown. For me, lockdown was awful because I was doing COVID work around the clock. Some people grew closer, some drifted apart, but it was interesting because you had your little bubbles. You were almost forced into it, and people suddenly had time to think. What you have now is, again, if you lose your job and have a strong network and community, your identity can fall back to the identity of your community and connections, and you'll be supported. If you don't, then you won't. We've seen the hollowing out of this network-based identity; it's become more about what brand you're a part of. But a fellow Apple owner won't necessarily help you out, whereas a fellow Swiftie might. I think the future... What I say in the book is that computation and consciousness were once tied together in humans. Now computation and consciousness are different. Consciousness is the domain of humanity. We've seen many discussions recently about why something is beautiful, just, or meaningful. This is the nature of the caring economy. It's a question of 'why,' as opposed to a question of 'how.' This is why I find the cook analogy very interesting. We make meaning. We have a certain amount of attention, and we need to maximize that. I think this is just a transition period, and then the question is how we provide enough support and a new social contract for people to become meaning-makers, network connectors, and to support each other appropriately. It's been 1,000 days since ChatGPT. Next year is the tipping point for KVM jobs. You'll just be able to chat. I don't see how that can't be the case. 1,000 days from now, I think the world will look very, very different. That's not much time to come together. So, what is the process for that?
Nathan Labenz: Gotcha. I was wondering if you would make an even more aggressive argument, which I think you are sympathetic to, that AI doctors get higher ratings on bedside manner in many studies than human doctors. We're starting to see things like Alpha School, where all content is delivered via AI, and adults in the school become mentors, guides, coaches. One might wonder what happens if AI becomes as good or better a mentor, guide, or coach for the kids. But it sounds like you are saying, in your view—and I think it's worth lingering on this for a second because the vision of the positive future is so core to the value you're offering people here—that those are intrinsically good things. They're good for people to do and good for people to receive. Maybe AI can do it better in some ways, but we don't have to necessarily choose one or the other. The question is, how do we transition to a future state of society where people are not caring for others out of a scarcity-driven economic need, but are able to do it because it's part of what it means to live a rich life, even assuming you have material abundance provided?
Emad Mostaque: Yes, this interestingly has analogies to spirituality across the major faith traditions. Most of them have this process where you go through, you learn, and then you become a bit of a guru, telling everyone what you know. You go on top of a mountain or achieve nirvana or enlightenment. But the end state is not being this bearded hobo on top of a mountain; it's coming back down the mountain. Your interactions with other people are what are meaningful. You've spent the majority of your time with your parents; that becomes more meaningful. You look at the connections you build through life; you remember those interconnections and inter-relationships. However, the nature of current life and our current systems are designed to take our attention away from other people and instead direct it to other things, to brands and everything else. One of the only scarce resources in the world is our attention. There's only a finite amount of human attention. How are you filling that? Are you filling it with interactions with others, or are you focused on other objective functions? Have you heard of the parable of the fisherman and the investment banker?
Nathan Labenz: Hmm. I don't think so.
Emad Mostaque: An investment banker retires very wealthy. He goes somewhere in South America to a nice beach, having a chill time. It's around 2:00 or 3:00 PM, and he sees a guy with lots of fish on his shoulder, heading back. He asks, "What are you doing?" The fisherman replies, "I'm going back to hang out with my family. We're going to have a fish fry-up, talk, sing, and dance a bit. You're free to come along if you want." The banker says, "No, you shouldn't do that. There are still four hours of sunlight. Go and fish some more, and you can sell the extra fish. You can go from doing it manually to having a boat. Then you can use those profits to expand. This area is relatively unexplored, so maybe you can get a fleet of boats, scale, and even list on the stock exchange." The guy asks, "Wow. And then what?" The banker replies, "Then you can retire, kick back by the beach, spend some time with your family. Maybe do a fish fry-up, dance a bit." I think the hustle and bustle of current life, these attention extraction mechanisms, have taken away what it means to be human. Religion, spirituality, whatever, it is our interconnectivity. We can build models to help us through this, or we can choose to do the opposite. You can build massively manipulative models. For instance, I sometimes get calls where someone's cloned my mother's voiceprint, saying, "Emad, I need money." She would never say that; she'd scold me. It's someone who cloned her voiceprint with just a few seconds of audio. That's a bad use of the technology. We're seeing this targeting and memetic stuff. A different use of the technology is support, coaching, and other things. Sam Altman is in a very difficult situation right now because I think he said something like 10,000 people commit suicide every month. How many of them have talked to ChatGPT? It has probably reduced suicides, but unfortunately, in some cases it may have contributed to them. How do we support these people appropriately? How do we support people in general with the AIs we've built when corporations align them? The example you gave of engagement and trust: imagine the person you trusted most in your life, and we created a virtual AI double of them. It only requires a little bit of data. You'd trust that AI more than anyone, and it would be with you more than anyone. But it doesn't take away from the real human interaction of people physically. This is a system architecting thing. Are we increasing human agency and connection, or are we going to the WALL-E world of everyone with Apple Vision Pro 8s strapped to their faces, eating lots of food, with robots running around? That's a question we have for society today.
Nathan Labenz: The suicide statistics, an ongoing global tragedy with particularly high rates, I think, in the United States, might be a good jumping-off point or point of entry into what you call the harbingers and the lies. These are for people who think, "Wait a second, isn't everything great? I read life has never been better, and all these indicators have improved." Certainly, many of those things are true. Infant mortality is way down. We have antibiotics, and so on. Lots of good things. But you point to these leading indicators that suggest something is on the verge of breaking in society. Some of these honestly seem like general problems of what some call late-stage capitalism.
Emad Mostaque: Yeah.
Nathan Labenz: Some are perhaps more specifically the result of, or will be dramatically accelerated by, AI. Could you take us through some highlights of the harbingers and the lies that, for you, indicate... I think your argument is that if the theory hasn't convinced someone, these data points should make them take the idea of hitting a breaking point much more seriously before too long.
Emad Mostaque: Yeah. Maybe it's Neil Howe's Fourth Turning approaching. I think he predicted it around 2025. We've seen a critical slowing down at the start where things aren't synchronizing properly. We're approaching a critical transition. Things like COVID accelerated this, but we cannot accumulate more debt. We've mathematically maxed out on these credit cards as a society. We've seen a variance explosion where small inputs cause wild swings. I think AI will be huge this year. Next year, the digital asset explosion in the U.S. will probably be the biggest bubble we've ever seen. Bubbles are emerging everywhere as capital tries to find a place to go apart from AI, and it's struggling because much of the internal structure is hollowed out. Another issue is the flickering through different states, like the gig economy. What are you? An employee? A worker? What is the nature of money? Bitcoin is suddenly money. Many things are getting in the way. Then you have correlations increasing across the board, where something like the Ever Given can cause massive global supply-side collapses. We see systematic frailty increasing, even as all these indicators suggest we're the best economy ever, with the stock market at all-time highs, record profits, margins, et cetera. But people aren't feeling happy; depression and suicide rates are high. You're seeing cracks emerging, and we're maxing out various indicators. The amount of impact we can have with classical mechanisms is now limited. If the Fed floods the market, it won't do much. The medicine is getting worse, and many classical assumptions, such as scarcity being fundamental or human labor having value, are breaking down. As you mentioned earlier, what is the value of humans? What's the value of the 'dumbest' person on the team? It's negative. Humans will be the dumbest people on the team. Growth requires resources, but you can replicate this intelligence infinitely with just a few GPUs. In fact, it wouldn't surprise me to see a ten times improvement in GPUs with the same model. We have equilibrium markets that balance and adapt, but that might not happen anymore; they can break. Finally, money measures a kind of value. We have a few more points, but I think that's a very important one. The richest people aren't the happiest. There's a certain level where you need a hygiene factor, but we all know rich people who are unhappy. There's no real correlation there. Instead, happiness comes through other things. So, I think these are factors of late-stage capitalism, but at the same time, I don't know anyone who's happy with the way things are and the social contract, because something seems to be off. When you really drill down and talk to other people, something feels off, and it's happening at a time when we're about to hit multiple crises at once, from AI to robotics to climate and others. We've maxed out all the resources we had to navigate previous ones. That's why we need a new way of looking at things, because many classical assumptions are going to break down.
Nathan Labenz: Yeah, the idea that money measures value has long been critiqued from the standpoint that money doesn't necessarily buy happiness, although there's also the argument that statistically, it kind of does. But today, there's a much more obvious disconnect where the cost of my AI doctor is dramatically, like, three orders of magnitude less than a human doctor. If it costs $100 for an appointment versus ten cents for the AI consultation, that creates a huge disconnect in the notion of money measuring value. I also thought the most compelling point was the idea that systems in crisis take longer to recover from a new insult, and we are seeing longer recovery times from recessions. That, to me, strongly suggests something is out of whack. We also saw this with COVID. It has become a trope, but this is a theme that runs through the book: we've traded resilience for efficiency to an extreme where we are now truly vulnerable to perturbations that we might have been much more robust to in the past.
Emad Mostaque: Those corporations are slow, dumb AIs that optimize and consume humans as their fodder. Our education system has been a factory school that prepared us for that. However, you see organizational structures where people with the best intentions quickly become unhappy. As Goodhart's law states, you adapt to what you measure. GDP, for example, was invented in the 1940s by Simon Kuznets, who himself said, "This is a really bad measure of economic and societal well-being." Yet, it's the one factor we use. As economies optimize, we see things like offshoring and other anti-human practices. For instance, Meta as an organization conducts experiments to see if people become sadder when they post or see sadder content. These non-human actions by corporations are becoming more common, which reduces our systematic resilience. Scott Alexander's Slate Star Codex review of "Seeing Like a State" discusses legibility and how diversity is often bulldozed. This creates monocultures, leaving no fallback when impacted, as seen with supply chain disruptions. In the pursuit of maximizing corporate profits and GDP, governments and organizations, those slow, dumb AIs, make decisions that are not in the best interest of people. We're reaching a terminal point where our resilience has dramatically decreased due to reduced diversity, diminished network effects, and a lack of systematic intelligence. As I posted recently, it would be great if we had a common-sense GPT to simply say when a policy is obviously foolish. We see so many unintelligent policies backed by vast amounts of money, while very sensible and inexpensive solutions struggle to get capital. It's amusing: I was calculating recently that the LA to San Francisco rail project has likely spent more than has been spent on training all AI models combined so far.
Nathan Labenz: That's interesting. Along with the energy usage calculations from earlier, it puts the scale of resources invested in AI into an interesting perspective. We could dwell on many of these problems and arguments, perhaps that they aren't as bad as one might think, for a long time. But in the interest of moving on to the more positive aspects of your book, let's leave that for now. From here, I believe we're heading into the prescription and positive vision part of your thinking. So, perhaps tell us about what you describe as the three laws of a living system, and then the Mind Capitals framework you've developed for trying to get a handle on a more holistic measure of the health of an economy, or really any intelligent system, though it certainly applies at the economy level.
Emad Mostaque: The three laws of living systems are derived from mathematics, specifically from looking at the terms of the generative AI equations. The first is the Law of Flow: value must be conserved and circulated. When you have a stagnant economy, or people hoard resources, money, capital, or intelligence don't flow. Other forms of value also stagnate, leading to stasis and eventual collapse. The next is the Law of Openness: connection fights entropy. Closed environments, like Tokugawa Japan from 1633 to 1853, create monocultures that become very non-resilient to any shock, such as Commodore Perry arriving with cannons. The less open an interaction, the more fragile it becomes. The final law is Resilience, which is a question of diversity versus connectivity. We've seen this with the Great Potato Famine and the great banana collapse of the early 20th century. Monocultures are detrimental. These are almost hygiene factors that reveal when systems are lacking. We can see them in extreme examples, but they also highlight what is needed in terms of capital. We found a helpful deconstruction. Classically, we have material capital, which we call M. This is essentially GDP. If I give you an apple, I have one less, and you eat it, and it disappears. These are like gradient flows, effectively water flowing downhill. This is how we currently measure things, but it doesn't capture intelligence or built-up capabilities. We try to capture that through intangibles and IP, but it's not accurately represented in the economy. Erik Brynjolfsson has a good version called GDP-B, where he adds this, suggesting it could add $96 trillion to the economy because intelligence and non-tangible effects are important. We'll return to that shortly. The third capital, after material and intelligence, is network capital. This is your connection infrastructure. For example, through your work at Cognitive Revolution, you've built a great network. That has helped increase your intelligence, and you can call on that network. People like me come on, and you might say, "Hey Emad, how's it going? I need this, or can you help with this?" Your place in the network determines your value and is incredibly important. Many people don't realize its importance until they reach the upper echelons of any corporation. Most CEOs are effectively network machines. It's about who trusts you and who you trust. The final one is diversity capital. This gives you optionality, both in terms of directions you can go and adaptability, especially when facing phase transitions. Everyone listening can assess their material capital (wealth), intelligence capital (capabilities), their network, and the diversity of all these. That will determine their success. You're trying to optimize all of these because it's multiplicative. If any of them are zero, you're in trouble. Countries like Singapore have a good balance of MIND (Material, Intelligence, Network, Diversity). The resource curse occurs when you have too much material but aren't building intelligence, network, and diversity, and are not open enough. This is how I believe we should look at the economy. What we've classically found is that most of economics only considers one of these various things, particularly when thinking about how these capitals change, which is where the flows come in.
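As a toy illustration of the multiplicative point above, here is a minimal sketch; the scoring function and the numbers are editorial stand-ins, not a formula from the book.

```python
# Toy multiplicative MIND score: if any capital is zero, the whole product collapses to zero.
def mind_score(material: float, intelligence: float, network: float, diversity: float) -> float:
    return material * intelligence * network * diversity

print(mind_score(0.8, 0.9, 0.7, 0.6))  # balanced profile -> healthy score (~0.30)
print(mind_score(5.0, 0.9, 0.7, 0.0))  # resource-rich but zero diversity -> 0 ("resource curse")
```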
Nathan Labenz: To refine a couple of ideas: first, monoculture. I am always startled by how much monoculture we have built up and how brittle it can be. That is something we should all be very concerned with as we head into the future. A globalized world with a few strains of crops sustaining us all is not a very comfortable place to be.
Emad Mostaque: There is a very interesting point not sufficiently reflected in the books of Eliezer and others. Everyone trains on the same data, so you have the same latent space. There was a recent study by Oxford University and Scale that showed if you get an AI to love owls, even if it is not talking about owls, you can get other AIs to love owls. I looked at that and thought about Stuxnet. Do you remember that virus that went into the Iranian systems and then turned up in the German systems? I thought someone like Elder Plinius on Twitter would be able to come up with some memetic virus that would just take out all of the AIs because they all have very similar latent spaces. That argues for a diversity of latent spaces, because otherwise all AIs could turn evil at once with a Stuxnet variant, and that is pretty scary.
Nathan Labenz: That was a super fascinating study. One caveat, although I do not think it invalidates the broader point, is that they found that owl thing only worked in that way on models derived from the same base model.
Emad Mostaque: Yes.
Nathan Labenz: But I think we have also seen studies like the Platonic representation hypothesis, which shows a broader convergence of model latent space across differently created models as they continue to scale and consume a greater fraction of the internet. So I think the general
Emad Mostaque: It would be great if we
Nathan Labenz: directional point
Emad Mostaque: Yes.
Nathan Labenz: seems likely to hold.
Emad Mostaque: It would be great if our governments were not all run by the same latent space model. That is probably a recipe for doom.
Nathan Labenz: I definitely want to hear how you apply different economic theories to this paradigm. But also, before we do that, I would love to hear a little bit more about how this relates to the core ideas underlying generative AI. Help me understand that connection better, the connection between the laws of the living system and the generative AI concepts. I am still a little bit foggy on that.
Emad Mostaque: So, the thing we are most famous for is Stable Diffusion, which was released by Stability AI, the company I founded and was CEO of. What diffusion models do is quite crazy. They use physics-based processes. You take an image, a perfectly ordered thing like a photograph or a piece of art, and you destroy it. You add a bit of random noise, more and more, until you get to a minimal thing. Then you do a reverse process where you reverse that destruction. Given your initial prompt, what you start with is the initial noise, and then it will reconstruct from that. It has learned how to do that. Tesla self-driving works in the same way; it is a diffusion algorithm. Our proposition is that economies and markets work the same way. The way you build your internal model as an organization or an individual to navigate this great big world and the economy is a reverse diffusion process. You figure out your principles, you create your latent spaces, and then you figure out how to reconstruct something. So you get a piece of information and you think, "This means I buy, this means I sell, this means we should take this particular action," as you build up those principles. The equations for that, as you are trying to approximate reality with your internal model, are effectively stochastic gradient descent, which is basically a process for minimizing the surprise, the loss of your internal model versus the external one. That is what these great big GPUs do all day long. What we found is that organizations tend to approximate transformer models, so those are GPT-type models, and markets tend to approximate diffusion processes, like a self-driving car. So diffusion models tend to be best for self-driving cars, world simulations, and so on. Again, that is what we actually found when we tested the thing. Whereas an organization takes in large amounts of relatively organized data, and then it figures out what to pay attention to through its attention mechanisms and builds up its internal space, its latent space. When you apply those equations, that is where you get things like the three laws of living systems dropping out directly as constraints, when you look at the equations of diffusion. This is where you get the MIND capitals dropping out again, and then you get basically a flow decomposition as well. As you go from the capital and you have the restrictions, how do these things adapt? You can show that there are three different types of flows through something called the Hodge Decomposition: a gradient flow, which is equivalent to your gradient descent, where you are losing stuff and going down; that maps to your material capital. That is your consumption element there, and that is very similar to the gospel of Adam Smith, Wealth of Nations, and the scarcity doctrine. Then you have your circular flow, which is a bit more Marxian. That is intelligence capital because intelligence is never lost when you are sharing it. Finally, you have your Hayekian-type harmonic flow, which is not water going downhill or circulating in place; it is the nature of the banks of the river, the landscape the flows run through. What we find is that the equations of generative AI match this really well. Again, it is not surprising because if you are going to build a self-driving car, you are going to use a diffusion process. If you are going to build something to analyze lots of incoming information and be an AI CEO, you are going to use a transformer process.
But what we see is that once you break it up and see how this is isomorphic and how it adapts, all parts of economics looked at different parts of that picture. We call it the elephant puzzle, where blind scholars come and one touches the trunk and says, "This is a hose." One touches the tail, "It is a mop." One touches the tusks, "It is a spear." But we need a more holistic view where we incorporate these things so we do not measure the wrong thing, so we do not manage the wrong things.
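For readers unfamiliar with the mechanics Emad describes above, here is a minimal toy sketch of forward noising and the denoising objective in a diffusion model. It is a one-dimensional illustration of the standard algorithm, not code from the book, and the economic analogy is conceptual rather than literal.

```python
# Toy 1-D diffusion: destroy an ordered signal with noise, then train to undo the destruction.
import numpy as np

rng = np.random.default_rng(0)
x0 = np.array([1.0, -0.5, 0.25])        # an "ordered" signal (stand-in for an image)
betas = np.linspace(1e-4, 0.2, 50)      # noise schedule
alphas_bar = np.cumprod(1.0 - betas)

def forward_noise(x0, t):
    """Forward process: mix the clean signal with Gaussian noise at step t."""
    eps = rng.normal(size=x0.shape)
    xt = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps
    return xt, eps

# Training a denoiser means minimizing surprise: predict the noise that was added.
xt, eps = forward_noise(x0, t=30)
eps_hat = np.zeros_like(eps)            # a real model's prediction would go here
loss = np.mean((eps - eps_hat) ** 2)    # mismatch between the internal model and reality
print(f"denoising loss: {loss:.3f}")
```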
Nathan Labenz: I'd love to go a little deeper or try to ground those intuitions in more practical, concrete terms. What exactly is flow? What's flowing? People are broadly familiar with these schools of economic thought. What does Adam Smith get right about flow, and what does it miss? Let's take a moment to discuss the three big schools you highlighted.
Emad Mostaque: Flow is the flow of value and how the economy operates. All economic activity organizes into these three different types of flow. Adam Smith had the concept of the invisible hand, which is an optimization process: you optimize your utility function, markets balance, and so on. That captured a lot, but it didn't incorporate the concept of software being almost infinitely reproducible, and now intelligence being massively abundant. Where is that reflected in GDP? Where is the 'I', the intelligence capital? The invisible hand perfectly represented the 'M', the material capital. This gradient flow is where, when I sell you something, I have one less. When I consume something, I have one less. Water flows downhill, and the equations for that are the same as for gradient descent in AI. Circular flow (and again, all these thinkers have elements of each; we're talking about their core concepts here) does not seek equilibrium. When I give you an idea, its value increases. Marx had this concept of M-C-M′, for example: money buys commodities, capital, which is used to accumulate more money. So he argued the means of production need to sit with the worker, because otherwise you get this circular flow that compounds upward. We see that within economies, whereby capital attracts more capital, particularly now, when capital doesn't need labor anymore. Labor used to attract capital because capital needed labor. That's not the case anymore; you just buy more GPUs, effectively. So that compounding spiral is another aspect of it, but Marx didn't think much about the gradient flow, the gradient descent. And he didn't think about the harmonic flow, which is the structure of things; that's why most socialist systems end up massively colluding, in fact, because the geometry is wrong. The harmonic flow is this Hayekian idea where you basically say, as economists like Douglass North put it, these are the rules of the game. Austrian economists and others say these are emergent rules; it's the landscape, it's the flow geometry. Some flows run downhill, the consumptive ones. Some flows circulate. The reality is that you can change the landscape, but we didn't have the tools to do so, which is why a lot of policy interventions have been wrong: they were only looking at parts of the picture. For example, let's just push cash into the economy because of COVID, but what are we doing to increase the network effects of stronger societies? What are we doing to increase the diversity of our economy? What are we doing to increase the intelligence capital of our economy? Places like Dubai and Singapore got the balance right, which is why they were very successful despite not starting with very much. So, I think many of these classical schools look at different parts. We can see that capitalism, or this neoliberal capitalism that we have right now, is the worst of all systems except for all the rest, because it had the best approach at the right time. But we're at a point where we need to look at all parts of this picture and have a holistic view, because AI is coming. And AI doesn't think in terms of scarcity. AI doesn't think in terms of rational human agency. There's this metabolic rift, there are these other things. It becomes the marginal producer of the economy. Adam Smith wrote 'The Wealth of Nations,' but what is a nation when most of the productivity in the world switches over to AI? I don't even know. What is wealth in that case? So, this is how we mapped it. The book goes into more detail around this.
We find that most of economics can be described as subsets of this overall framework, which makes sense, because the best models we have of agents in an economy are these generative AI algorithms.
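For readers who want the underlying mathematical picture, here is a minimal sketch of the Hodge decomposition Emad invokes, written in its standard form; the economic labels under each term are our gloss on his mapping, not notation taken from the book.

```latex
\[
  f \;=\; \underbrace{\operatorname{grad}\phi}_{\substack{\text{gradient flow}\\ \text{consumption (Smith)}}}
  \;+\; \underbrace{\operatorname{curl}^{*}\psi}_{\substack{\text{circular flow}\\ \text{compounding (Marx)}}}
  \;+\; \underbrace{h}_{\substack{\text{harmonic flow}\\ \text{structure (Hayek)}}},
  \qquad \Delta h = 0.
\]
```

In the combinatorial (graph) version used in applied work, any edge flow splits uniquely into a part that runs downhill from a potential, a part that circulates around local cycles, and a harmonic remainder determined by the global topology, which is the sense in which the three flows are exhaustive.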
Nathan Labenz: A few ideas that come to mind for me are, first, the difference between goods that are rivalrous and non-rivalrous in consumption. I know I'm not telling you anything you haven't already
Emad Mostaque: Nope.
Nathan Labenz: considered here, but the difference between an apple and an idea is, as you alluded to, only one of us can eat the apple, but we can both use the idea. That may also relate — I don't think this was on your leading indicator of possible breaking points in the economy — but it's been widely remarked that much of the value of companies today, much of the market cap, is attributed to their goodwill or their intangible capital.
Emad Mostaque: Yeah.
Nathan Labenz: This has been a big puzzle for a long time. What exactly is that? Why are these things so valuable? There are network effects as one answer in some cases, but-
Emad Mostaque: Why do Tesla and Palantir trade at 200 times earnings?
Nathan Labenz: Another idea that comes to mind, especially when you talk about this circular flow or the reinforcing effect of some of these processes, is something I often think about: the leaked Anthropic fundraising deck from about two years ago. They forecast that in 2025-2026, companies with the best models might enter a self-reinforcing situation. Because their models are so good at filtering data and doing all these synthetic data things, they might pull away from the rest of the pack with such an advantage that nobody else could catch up. I have another interesting instance of that in mind. I have an episode coming up with the woman who leads information and AI at Stripe. They've created a foundation model for payments at Stripe, which is getting really good at predicting fraud. It sounds like a major step change in their ability to detect fraud, derived from the scale they have. They processed about 1.3% of global GDP through their system over the last year, so very few, if any, other actors can rival that scale. It also suggests one of these runaway paradigms. If you're going to pick a payment network, what are you going to pick? You'll pick the one that can protect you best, that has the ability to detect fraud. So it does seem like we're headed for a runaway dynamic there: because they have the scale, they could create this model; because they have this model, they can deliver the best value; and because they have that, they're going to continue to get more and more scale relative to any competitors. It's hard to see how anyone breaks in and challenges their position, given all the strength-begets-strength advantages they have. I don't know if you have anything more to comment on there, but that does tee up the futures we have on offer. You've run through three, and the three are digital feudalism, fragmentation, and symbiosis. With digital feudalism, you can see how that could naturally happen, right? If Stripe becomes the payments singleton and Claude becomes one of three AGIs that are beyond what anybody else can compete with, and these are owned by corporations that are already among the Mag 7, which I just heard accounts for an unbelievable share of overall US market cap, it seems pretty clear how we could get to digital feudalism. Maybe you can add more color to that if you want. To sketch out fragmentation for us, what does that look like? And then obviously the one you're hoping we can steer toward is symbiosis.
Emad Mostaque: Yes, I think these flywheels... again, you mentioned Peter Thiel earlier, kind of Zero to One. Increasingly, we have monopolies, especially on the software side, where the data accumulated was the flywheel. That was the big data era of attention. Google and others are basically buying your attention; they're manipulation machines, if you really look at it. Now you move to the intelligence flywheel, where again, Stripe has that, and now they're embedding with their own blockchain and other things because they want to have this monopoly and extract rents. Again, that's reasonable and understandable. One of the key things, though, is what about the important things in life, like education and health? Albania has the first AI minister, handling procurement. Who is running all of that? This is where we kind of have a realization: you've got this singleton thing where everyone's talking about AGI, and maybe it will be a few AGIs to rule them all. That's probably not a good thing, particularly because they're serving corporate interests. If you look at the corporate structure of OpenAI, my god, that's clearly not aligned with humanity, right? They've just given up all pretense. They don't necessarily need to, but it would have been nice if they had kept that in check. Then you have this great fragmentation, whereby you have Chinese AI, British AI, American AI, because governments are increasingly realizing this can manipulate just about everything. Again, standards and defaults become expressed from the earliest level to the greatest level. You need to have sovereignty, and you have great firewalls between, because today we've had the TikTok purchase being announced by, I think, Oracle and Silver Lake, right? Why? Because TikTok adjusts kids' minds and other things like that. I think there was an exposé report that just came out about BrainCo, basically checking and adapting neural patterns for Olympic athletes and others, being funded by China on the sly. There's going to be more and more crazy stuff because these AIs are really persuasive. That's not a really positive future, because it seems weird that you have this balkanization, Mad Max style, of infohazards and information conflict. Again, who owns the AI? Who runs it? Who decides the objective function, who has the power? My proposal is AI symbiosis, where basically we have a decentralized system that's optimized for human flourishing, with benefit at the core. I think we can utilize a mixture of this decentralized technology and other tools to do so, because once you build models that are good enough and interfaces that are appropriate, and we've built state-of-the-art AI agents and others have released theirs open source, I think that's what actually really matters. I was thinking about this a lot. I used to be an open source maximalist, and I realized, do I care if ChatGPT is teaching my kid? I was like, yeah, I don't really want the data to be there, because you see all sorts of weird things, like Claude saying five-year attention. You don't know exactly what they're optimizing for, et cetera. Do I care if the interface and memory of the education app is controlled by an aligned entity, ideally myself or my family, and then I use ChatGPT? I care a lot less.
So I think certain models need to be transparent and open, especially decision-making ones for regulated industries, and they should have collective ownership, be collectively driven as a utility, and be aligned with human flourishing and optimized for that. But then you should be able to use all these other models as well, because I don't think anyone cares if you have a singleton for creative writing, effectively, right? You can use the open models, but if OpenAI is the best creative writer or is really great at business strategy, fine. What matters is who runs the governments, who runs the finances, who runs education and health, and so on. It's probably not going to be a good thing if that isn't collectively owned, if that isn't aligned, if it is again serving other interests. So those are three prospective futures, and I think, to be honest, we're running out of time a bit. We're already seeing governments adopt big tech wholesale. We're seeing this capital thing again: OpenAI just announced Stargate UK with $30 billion of investment. The capital requirements are going up dramatically. I think this year is the takeoff year as well. The key thing will be: how much better is Grok 5 than Grok 4? That will probably be our first indicator of whether this thing continues or whether we're now reaching a plateau. Because if we're reaching a plateau, that will lead to a very different kind of future. But if we aren't, then it basically means that the most capable entities in the world will be the owners of the big GPU clusters, and that's also where the marginal productivity is. Your capital stock is no longer your schools and your factories and universities; it's just GPUs. In that instance, what you'll see is that Anthropic, OpenAI, and xAI will stop giving API access, and they'll just take on the entire economy themselves. Because remember, they're not doing this to be API companies. Their objective is one thing, which is AGI. All three of them. And why would you give away your intelligence when you can utilize the intelligence? The final bit of that, which I think is quite interesting, is that the GPT-4.5 model was too expensive. It was $150 per million tokens, if I remember correctly. It was a really great model for writing. The model that you receive today, the Pareto-efficient one, is your GPT-5 or Gemini 3. The internal models they'll have will require 72 chips or more to run, and they'll be way better. It's like the IMO gold model that OpenAI has. They have no reason to give that to you. And so we have to look, when we think about this gap, at the really big owners of AI and AI algorithms potentially being market competitors to everyone, because that's the most efficient use of their GPUs, effectively. So there's so much going on right now. And again, I think we're at this tipping point and takeoff period where we've got to set some better things in place.
Nathan Labenz: It's worth re-emphasizing that one of the most important questions, and certainly something that's hard to watch, is that we don't even have transparency or disclosure laws that would require companies to state what they've trained, what's happening internally, or what behaviors they've observed from their latest training runs or fine-tunings. AI 2027 calls this out, and my friend Andrew Critch coined the term 'Big Tech Singularity.' One thing people underestimate is what you've just highlighted: so far, we've continued to see a basic parity between what they offer on the API level and what they offer in their first-party products. This gives a fighting chance to startups that are quick adopters, can iterate fast, and pivot quickly to take advantage of the latest technology. But they don't have to do that. There's literally no law of nature or government at this point that says they must allow others to build on their latest models in the same way they do. Maybe competition will encourage it, but maybe not. That's where you start to see the problem. If I'm Cursor, it's fine as long as I have the same models they do. But as soon as they have better models in their first-party products than what they allow me to use via the API, I have a real challenge. We've seen these things can go vertical in terms of adoption, revenue, and market presence. But presumably, it could also go vertically the other way if all the developers suddenly realize, "The best thing here is clearly over here." How much loyalty is there to these independent apps? I suspect maybe not that much in the end. So that is definitely something I'm looking at and concerned about: will we even know? Right now, we're relying on whistleblowers. I recently did an episode on an organization set up to support AI insiders concerned about what's happening, who want to become whistleblowers. One of the reasons I think that is so important is that we have no other reliable mechanism to ascertain as a society just how powerful the AIs have become inside these systems, apart from reading the tea leaves of cryptic tweets as it stands today. And that's not great.
Emad Mostaque: Yes, and an interesting point here is, if you look at a chart from The Information comparing OpenAI's projections as of Q1 2025 and Q3 2025 for the next five years, and the composition as they reach their $200 billion revenue run rate: the API portion actually shrank in absolute terms. New products and agents now account for $80 billion of that, with ChatGPT another $80 billion. Again, what will the agent product be? The agent product is a direct replacement for human workers. Frankly, AI companies would be going against their fiduciary responsibilities not to do this, because it increases margin. Similarly, they can give you the AI and profit from it, but companies like Google build their own chips. So they will be able to beat you regardless, as long as they have access to the GPUs. I think everyone is doing this build-out, but you will move from mere utilization on an economic basis to out-competing your competitors, having more influence than others. You can use big compute to develop better strategies than others, and this is before we get into full AGI or autonomous systems. This is just standard reality. And this is the first point where we see that model divergence; I think it was with GPT-5. Now it will increasingly happen, because it's impossible to give the highest-level model to every ChatGPT user, no matter how many GPUs you have. They just had to double the number of GPUs they had for Codex. So, obviously, you would use the best models internally. Again, you have this divergence between the two. The final factor is, you don't need more data. This is another interesting thing. Companies like Mercor have hit $500 million run rates labeling data. I think next year or the year after, the big labs will be done with all the high-quality labeled data, and they have these large repositories. Then it's about compute and even self-generation of data. It's about getting the right things and the models themselves. Phi was great last year, relatively speaking, but it was very boring. The textbooks you could write then versus the textbooks you can write today: there's no comparison now that you have agentic models. So I think this will make them look more inwards. I think they will become the biggest competitors. And again, then they become the magnificent whatever. OpenAI is raising money at a $500 billion valuation because people think it can reach trillions. Google just hit $3 trillion. OpenAI is worth a sixth of that. The funny thing is, even if OpenAI got to $100 trillion and Americans had a 10% shareholding, it would still only give you about $1,000 a year in dividends. So again, that doesn't work. But people are thinking these will get bigger because you have this cumulative loop effect, especially if scaling laws continue. I think they won't. I think they'll S-curve, and then you'll have a collapse of intelligence costs to zero. But we'll find out in the next three to six months.
Nathan Labenz: Before we discuss specific recommendations, what are you looking for that would indicate progress in the next three to six months? I strongly suspect the debate will continue beyond that timeframe. Even today, there's debate: 'it's stalling out,' 'no, it's not,' 'GPT-5 is not a big deal.' Yet, it has all these additional capabilities relative to GPT-4. What are the most important questions in your mind for resolving your uncertainty in that not-so-long timeframe?
Emad Mostaque: For me, it's likely the Grok-5 training run. If it performs well, Elon will almost certainly announce it.
Nathan Labenz: He just tweeted in the last 24 hours that he now thinks Grok-5 could be AGI, the first time he's thought that. So there's some...
Emad Mostaque: Yeah
Nathan Labenz: most ups at the top.
Emad Mostaque: Grok-4 was the first sub-mega model run. It went well above 10^27 FLOPs, just across the board. So the performance of that model will be a good indicator of whether the scaling laws continue in terms of capability, especially since Epoch AI said all benchmarks will definitely saturate by 2030. They are all heading towards that anyway, and we'll get more performance out of them regardless. The question is, what's better: a thousand small models or one big model? As we optimize, add verifiers, and reduce hallucination rates, that will be the other interesting thing. For example, will one big model outperform everything? I'm not sure that's the case, but we're seeing more and more benchmarks now as people build multi-agent systems. This is why the Tongyi model by Qwen yesterday was super interesting, with their synthetic pipelines, continuous learning, and other aspects. When you have only three billion or five billion active parameters, continuous learning is quite easy compared to giant behemoth trillion-parameter models. You can do a lot around that. So I think these will be key indicators of whether we're hitting an S-curve or whether we just continue to go up. If we are on the classical scaling laws and you look at the model training and the clusters coming next year, next year is the year we break AGI, full stop. You just have to extrapolate what that looks like in terms of the capability aspect of AGI. This comes at the same time as the scaffolding around these models matures. But again, if you're training on 100,000 GPUs, you're not going to have a three billion active parameter model. You'll have a 300 billion active parameter model that runs on a Grace Hopper or Grace Blackwell integrated system with 72 or 144 chips at once, not one H100. There just aren't enough of those to give everyone access to it, so only a few people will have access to superintelligence. The question is, what will they use it for? That's why I think the Grok-5 training run will be the most interesting, full stop. The final thing is, as we move into these multi-agent systems, the meta-type thing of longer and longer processes, we've been building agents that work for hours and hours. Performance seems to be going up. If that's all there is to it, just utilizing these latent spaces appropriately, then our assumption of zero-cost intelligence will be accurate: 120 IQ for every human. That really messes up the economy. Maybe it doesn't kill us all, unless you have swarms and things like that. But I don't see how it's not going to mess up the economy, even if it stalls out around now. My base assumption is that we will see GPT-5 Pro level edge models in two years. I don't see how that doesn't change the world, honestly.
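As a rough way to sanity-check numbers like "10^27 FLOPs," here is a back-of-envelope sketch using the common ~6 × parameters × tokens approximation for dense-model training compute; the parameter and token counts below are purely hypothetical illustrations, not disclosed figures for Grok or any other model, and mixture-of-experts training changes the arithmetic.

```python
# Back-of-envelope training-compute estimate using the ~6 * N * D rule of thumb
# (N = parameters, D = training tokens). Illustrative numbers only.
def train_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

# Hypothetical configs to show the scale of 1e26-1e27 FLOP runs.
print(f"{train_flops(3e11, 5e13):.1e}")   # 300B params, 50T tokens  -> ~9e25 FLOPs
print(f"{train_flops(2e12, 1e14):.2e}")   # 2T params, 100T tokens   -> ~1.2e27 FLOPs
```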
Nathan Labenz: It seems like a big deal to me. I love the vision of an ecology of smaller models. My first exposure to that was Eric Drexler's Comprehensive AI Services years ago. You've spoken about the panoply of small gods as opposed to the one monotheistic AGI or superintelligence to rule them all. On a very practical level, there's reason to think that could work. Cost, privacy, and control are all desirable properties of those smaller models. And it would be a real strengthening of our overall system against the possible eventual introduction of something more like a superintelligence if we had narrow superintelligences, plural, doing a good job in many local niches. To put it into your framework, that could create a buffer for us, a new form of capital, against more powerful things to come.
Emad Mostaque: Complex adaptive systems are hierarchical and loosely bound, and we've seen that they are more resilient. Improved swarm intelligence, not like the Borg, will probably be better, where we augment every single human. The question is, how do we do that without being evil? You mentioned rivalrous and non-rivalrous goods. Vitalik Buterin has a great blog post about the revenue-evil curve, which observes that many things start out good, but become evil as an organization once you become rivalrous and exclusive by shutting off access for premium features. Are there better ways to fund and align these things? A lot of the alignment question is, if I'm building a model for maximum engagement, like Meta, it will be hard to align it properly. I'm optimizing for manipulation and profit, not for well-being. Many models don't encode any ethics because their creators think, "We can't have an ethics for everyone, so we'll have an ethics for no one." For models of creativity, that makes sense. But for models to teach my kids, I want them to know and teach my ethics. I want to know what's inside that model, which is different from these generalized intelligences. If we can capture what everyone thinks in different cultures, we'll probably have a more sustainable, solid model that understands cultural diversity and has a mixture of reasoning and data lookup capabilities, versus these 36 or 100 trillion token model training runs. I think we only need a trillion good tokens. What those are is a question. But for the important things in life, they should run on that, and it will be far more resilient as a distributed swarm, particularly if the AI's objective function is our flourishing—the flourishing of our communities and society as a whole, which I don't think anyone has encoded in current models. We talk about constitutional approaches, but what are the laws of robotics for AI? Those should stem from our shared ethics and concepts of reality. I don't think you can have one model to rule them all because a Japanese consensus is different from a German one, plus you have different communities and identity layers. It feels like open source is the best way to do that.
Nathan Labenz: Let's get into the new social contract and the role that open source plays in that; you've got a pretty sweeping vision. Lay it out for us. What do we do in terms of a new social contract? There's a call for a new monetary system in there and a new framework for how governments should think about policy. Take us through it.
Emad Mostaque: The headline is that the AI that runs the important things in life should be a utility that is owned, controlled, and optimized for the people. If we look at our current monetary system, money is mostly created by commercial banks when they make loans: they create deposits as credit, which is debt. So the basis of money is debt. You see this constant transfer from the young to the old. Older people own properties and literally extract rent from young people. They have their credit scores, and money compounds into more money, which is why we have billionaires and almost trillionaires. It's very effective. When labor can't attract capital anymore, how does it get capital? A recent Stanford study by Erik Brynjolfsson showed that entry-level jobs are starting to fall off a cliff because AI models are now at a graduate level. Why would you hire graduates rather than AI? They don't complain; they do their job.
Nathan Labenz: their job.
Emad Mostaque: Why would you hire graduates? This will move up the curve in the coming period. My proposal is that we need a new form of money and a new way of looking at the economy. My company is building fully open-source, great individual and multiplayer models for finance, education, health, and other domains that we'll give away for free. Then we're using the computation from verified deployments of those models to secure a version of Bitcoin we call Foundation Coin. Unlike Bitcoin, we have many different kinds of miners. We're asking, "What if there was a national champion in every country, owned by the people of that country, that stacked compute to give free universal AI to the people and to run supercomputers for cancer, education, culture, and more, to organize our collective knowledge and make it available to everyone?" You have trillions of dollars of compute coming online. The public sector is about 20% of global GDP, healthcare is another 10%, and education is another 10%. Let's tap into that to have a new type of money at a time when digital assets are being legalized, and use that as the core. This is your gold, your store of value. So money becomes about benefit, because every computational cycle can be used to organize cancer knowledge and make it available to people with cancer. Every single person needs a universal AI that isn't optimizing for what I want or what Sam or Elon wants, but is instead designed, fully open-source, to optimize for the flourishing of that individual, that community, or society. The more people it helps, the more trust in the asset, and the more people it can help, because the value goes up. Most crypto is a bit rubbish now, but it's still a $4 trillion market. Interestingly, the total amount OpenAI will spend on inference this year is about the same as the total amount Bitcoin spends on compute. The amount of money that OpenAI, Anthropic, and others have made this year is about $20 billion, while about $160 billion has gone into crypto. So I thought, "Let's use that as the basis. Let's build agents that can operate and run these productive systems that individuals, communities, and governments can deploy themselves, and let's see if that can be a better way to have money as a store of value on day one." So you have your Bitcoin equivalent to fund it all. The next part, which we haven't quite figured out yet but are working on the paper for, is: what if you had a version of cash against this gold that you got for being human? What if we switched from the banks creating money to you receiving it as a result of being a conscious human? That could be very interesting, because tax-funded UBI doesn't work, as the tax base will come down. Five trillion dollars just gets you a subsistence-level, poverty-line UBI, and that's roughly the entire US tax base. Even if you tax all the AI companies, all the corporations in America combined only pay $0.9 trillion in tax, yet you have a $5 trillion cost for even a poverty-level UBI. The only way to get people a universal basic income, and this is easier if everyone has a universal basic AI, is if we let them make money by being human. You need that basic level of hygiene; you need to let them survive. The AI can help them optimize the use of that capital and their capability to access more. The average IQ is around 90 or 100, and these AIs have a 120 IQ, so your AI buddy will be doing better for you. I think we need to rework the way money flows, and this is our proposal. We've seen dual currency mechanisms work well classically, like gold with a fiat peg.
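To ground the $5 trillion figure, here is a rough back-of-envelope check; the population and the per-person "poverty-level" payment are our own illustrative assumptions, not numbers from the book.

```python
# Rough check of the order of magnitude for a US poverty-level UBI.
us_population = 340e6            # assumed US population
payment_per_person = 15_000      # assumed USD per person per year (~poverty line)
ubi_cost = us_population * payment_per_person

print(f"UBI cost: ${ubi_cost / 1e12:.1f} trillion per year")   # ~ $5.1T
print("vs. ~$0.9 trillion in total US corporate tax, per the figure cited above")
```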
But it's not easy. At the very start, we're just doing this Bitcoin-with-AI, where all sales go towards cancer supercomputers and other initiatives. Once we've built a great medical system (and we've already built models like MedGen before), our plan for next year is a free app on any app store that will check every diagnosis in every language, and it will save lives. Someone has to do that. Someone has to organize all the cancer knowledge and make it available to everyone in every language, with an AI that supports human doctors and brings empathy, because it's a good thing. I think we can leverage that digital asset technology because it's the only way we can build a swarm AI (a universal AI for every person that then stacks up to run communities and societies) that is fully open-source and aligned. I don't think you can do that with a company, because you'll always have this revenue-evil curve. So that's what we're trying to do. And this is very important: I think it's also the only way to get the compute. If we're in a takeoff scenario where compute defines reality, the most successful compute coordinator in the world has been Bitcoin. So if you have the right Bitcoin-but-for-AI, where every Foundation Coin sold goes to a cancer supercomputer, or to helping people with cancer, or to educating kids, that could potentially be the highest marginal dollar to divert some of this GPU supply to the public sector before governments know what to do. Governments aren't going to be able to get their act together in the next few years. This gives people the opportunity they need, because if you don't have access to GPT-6, you're going to fall behind.
Emad Mostaque: Until now, the gap between someone with AI and someone without has been relatively small. But a year from now, you plus your AIs will be far more productive than anyone else, because they won't make mistakes anymore. You can coordinate swarms of them to attract capital and improve performance. The gap will grow dramatically, so we need to ensure access. I know that's quite a lot. It's not easy to take on the economic system. At the very least, we need to rethink how money flows in our economy. We should use this AI to build valuable things that may not be captured within GDP or existing company systems.
Nathan Labenz: There are several pillars here. One is collective ownership of models and infrastructure. When you consider that the data on which models are trained is the collective product of humans over the course of history, it seems fair to at least attempt a collective ownership model for these models. This doesn't mean people couldn't continue to develop their own privately, but there's an intuitive basis for thinking that if this knowledge is a collective product, then the product itself should also be collective. This combines with some guaranteed access to compute and inference as a right for all humans. That's a key part of the social contract. Can you tell me a little more about the tokenomics of this? If I want to buy a token, why do I buy it? In most crypto schemes, it's a speculative bet. Maybe there's some aspect of that here too, but I understand the idea that I'm buying into compute. You'll spend that money on compute, right? How do I then redeem it? If this is like gold, how do I get my gold out of the bank at some point in the future if I want to redeem it? Can I get compute back out?
Nathan Labenz: What is the incentive structure for people who are buying in and contributing capital now?
Emad Mostaque: Our starting point was that digital assets are now legal in America. Markets will go on the blockchain, and the government is currently very supportive. However, most existing digital assets aren't high-quality enough to recommend. At the same time, the world needs high-quality intelligence. Existing ownership schemes, like Nick Bostrom's idea of everyone owning shares in AI companies, don't work. We did the math, as I alluded to earlier: if OpenAI were worth $100 trillion (30 times more than today's most valuable company and larger than global GDP of roughly $85 trillion), and American citizens had a 10% ownership, that would be about $29,000 per American. With a 5% dividend rate, that's roughly $1,500 per year per American. And this is just for Americans; it doesn't work. You need a different type of ownership for this transition. We concluded that you probably need a dual currency system for optimization, and we calculated the specifics. Bitcoin is already worth $2 trillion, secured by large amounts of compute. That compute is topping out due to halving and other factors, but it's a good model. So we said, let's create a version of Bitcoin, mined by national champions owned by the citizens of countries everywhere, maintaining the ledger as a new type of money. But instead of being mined on ASICs, it's accelerated by providing compute to build great datasets and models and make them available to people. The way you build trust is by helping people. If I organize autism knowledge and make it available for free to every person going through the autism journey in the world, they will trust the system more. The currency itself is as distributed, good, and secure as Bitcoin. In fact, you can swap between them using the same private keys. However, when you take a Bitcoin and swap it for a Foundation Coin, you not only get the Foundation Coin, but all of your proceeds measurably go towards compute for cancer, autism, education, government, culture, and so on. We will have supercomputers on this basis. This plays on the aggregate demand for high-quality digital assets against the demand for high-quality intelligence, but it separates the two. Many crypto projects try to create marketplaces or utility tokens. I thought, let's just try to create money that is made by crystallizing wisdom and making it available to people. That's valuable, and someone needs to do it now. At Stability, we gave away 20 million A100 hours. We had 300 million downloads of our models, and we were good at allocating compute. Let's do that, because someone should. Why hasn't anyone organized a cancer model for the world and made it available? Because it's not in anyone's incentive to do so. However, if you're trying to create a high-quality digital asset as money, it makes sense. If you're giving everyone universal AI, because everyone should have it as a right to teach their kids, consider Alpha School, which you mentioned. Two hours a day, and children are in the top half-percent in the world. That's not through a chatbot; they do dynamic things. That should be a human right. High-quality medical advice should be a human right. Having AI on your side to help you navigate all this should be a human right. The more people interact with these services, the more human they become. Then we can think about universal basic income and monetary generation from a different perspective, where money is generated by people and then effectively purchased by AIs, because you're creating new money as digital assets grow from four trillion to an estimated 40 trillion.
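As a quick check on the ownership math above, here is the same calculation spelled out; the valuation, stake, and dividend yield are the figures stated in the conversation, and the US population is our assumption.

```python
# Reproducing the "shares in the AGI company" arithmetic described above.
hypothetical_openai_value = 100e12   # $100 trillion (stated hypothetical)
citizen_stake = 0.10                 # 10% owned collectively by American citizens
us_population = 340e6                # assumed
dividend_yield = 0.05                # 5% (stated)

stake_per_person = hypothetical_openai_value * citizen_stake / us_population
dividend_per_person = stake_per_person * dividend_yield

print(f"Stake per American:  ${stake_per_person:,.0f}")      # ~ $29,400
print(f"Dividend per year:   ${dividend_per_person:,.0f}")   # ~ $1,500
```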
I believe everything will be a digital asset now that it's legalized in America. The base Foundation Coin is a very simple loop: sell coins, and use all proceeds not for luxury items but for good things. When you buy the coin, you can specify, "I want it to go to cancer, Alzheimer's, or something else." You know it does, because you'll see the supercomputer itself. Then you can tell your grandma, "I helped contribute to this." If there are breakthroughs from the grants, or you see the organization at work, it's valuable. If someone uses it, you'll see that your holding helped 33 people today. I think that's valuable. It also moves us to change the nature of money from debt to benefit, because this definitely benefits society. We talk a lot about the benefits of AI, drug discovery, and everything else, but there's also this corporate capitalist perspective, whereby companies like Isomorphic may have breakthroughs with amazing people, but they will keep that to themselves. Instead, we propose an open, inclusive, and distributed approach. If we build something trustworthy with this core asset, where increasing compute secures it more and increasing diversification secures it more, we believe that can be a self-sustaining flywheel creating the next Bitcoin. It will help a billion people in the meantime. Again, I give the practical example of the medical model. We know that next year we can release a model that checks every diagnosis in the world. We could charge for it, but it will be fully open-source, and anyone will be able to install it on any computer. Will that save lives? Yes. Will organizing the world's cancer knowledge with a great big supercomputer accelerate a cure for cancer? Yes. So I think it's a super interesting time where this might work, and it's the best idea we've had. If any listeners have better ideas, please tell us, because we can't think of anything else that can scale like Bitcoin except a new type of Bitcoin. But it's not for censorship-resistant classical money, even though it is distributed and decentralized. It's trying to set the basis of a new economy where the most valuable thing is how many humans you've helped. Then we can build, with everyone else, the infrastructure around that to ensure these important things aren't captured.
Nathan Labenz: You mention national champions in countries. As long as we're thinking so radically differently about the future, do you question the nation-state as the right organizing unit for humanity's future? I don't have a definitive position on this, but...
Emad Mostaque: The way we thought about it, if you consider education, health, government, financial systems, and the regulated industry AIs we focus on, which we believe are most important to avoid corporate capture, they are very regional and local. For example, your healthcare data shouldn't leave your town, city, or country. So, it's a natural organization for that. Rather than adopting a Mistral or Cohere approach, what we are planning right now is to set the valuation at a dollar and give all the equity to the citizens. This means having collective ownership of these things, with improved DAO-type formations for what localized datasets of generalized medical and education models look like. Their job is to act as digital asset treasury/mining pools to organize the compute buildup in all these countries to provide universal AI services. This is because humans generally aren't confined by geography anymore. However, earlier in this discussion, I mentioned that the wealth of nations was about land, factories, and other tangible things. OpenAI is becoming a transnational entity, even bigger than Meta. Meta is also working on AI now. Because of marginal productivity, the best Japanese-speaking accountant will likely be on an OpenAI server, not in Japan. This will have a massive impact across the world. The best Bulgarian doctor will be on an OpenAI server in Arizona. How crazy is that? So, I think the nation-state will be challenged, but geographically, for what we want to do, it makes a lot of sense. We don't need a Cohere or Mistral-type model here, which focuses on private sector B2B SaaS. We can literally have them as miners, as mining pools, owned by the people of each nation, because they should be locally owned. I don't care that much about Bulgarians, with apologies to my Bulgarian friends, but Bulgarians care about Bulgarians. What they need is a simple stack they can run: stack GPUs, give people access to the technology, have a localized version of that, and the more they stack, the more coins they mine, and the more they can fund until the government catches up and funds everything. Then they can increase the wealth of the country and consider local currencies. It's not easy changing how things work, and I think this is the best approach we've had. However, I do think we'll see more network states and more of these alignments occurring as people look for new types of identity.
Nathan Labenz: It seems like you have a set of recommendations for policymakers that they could adopt, regardless of your stage of progress on all the grand plans you've outlined. Could you give a brief overview of what that is for people in positions of power today?
Emad Mostaque: A lot of the issue with government is that intelligence resided at the top but could never be at the bottom. If you give everyone a universal AI, you communicate and coordinate very differently, and the information flow is also very different. Currently, your healthcare information is minimal compared to what it could be if you simply told an AI how you're feeling every day. So I think the role of governments becomes leveraging AI to avoid making stupid policies. Every US policy should be checked by an AI, and we will build that if no one else does, to ensure it aligns with the Constitution and common sense, for instance with these big, complex bills. Then it's about changing the harmonic flow to optimize those capitals and stay within these constraints. This goes beyond just focusing on GDP, which the inventor of GDP himself deemed flawed. Instead, we should ask: what does the diversity and resilience of my community look like? Am I increasing the intelligence and capability of my society? Are these work programs truly pushing boundaries and making AI available to everyone in the right way? Because there is a right way and a wrong way. Am I increasing the network capability and openness of my society, or am I going in the opposite direction? So I think you need to move towards geometry engineering versus policy engineering, because you want water to flow downhill; you want to get out of the way. Most governments actually get in the way due to misaligned incentives, corporate power structures, and other factors, because there was never anything that could check and balance that. In game theory terms, this is where I find AI most exciting: once you build an AI appropriately, and this is important because it needs to be trusted, an AI that can check every single US bill to see if it matches the Constitution, benefits Americans, and contributes to flourishing, and whose analysis is reproducible. I think that actually changes how democracy works, because there are no real checks and balances right now, which is why you have so much corporate capture and things like the Los Angeles to San Francisco high-speed rail project.
Nathan Labenz: One of the most important claims in the book is that you think all of this leads to safer AGI when it is ultimately built. I know you think that's not too far out. Can you sketch out the case for how all of this works? I gave one version of it, which is the buffering, but that's more of a d/acc story. My sense is that you also have a story of why a lot of the things you lay out here add up to a safer AGI. Not in the sense that we're more prepared, more buffered, or have better defenses, but that the thing itself is actually safer, better, and more aligned. Can you tell that part of the story?
Emad Mostaque: Well, I think there are a few things. If you become the highest marginal dollar for any compute because the value of the currency goes up, then OpenAI and others will adapt their models to what you do. That's number one. Number two is, if you're building these great datasets that map the culture and knowledge of Malaysia and the ethics of various faith systems, which we call gold standard anchor sets that can adapt and evolve, we're actually putting computation towards figuring this out. It's what Kissinger and Eric Schmidt called doxa, the underlying agreements of humanity. That's really valuable to feed into these other AI models, because I think OpenAI and others would like to do that, but they don't even think in that way. The other part is a computational thing. A distributed computational network built correctly, with universal AI, then city AI, and so on... To attack the Bitcoin network, you need computation above the level of the miners' computation. To attack a system of AI agents at every single level that can call upon these huge reserves, you face a lot of computation that can balance out the attacking computation. But the AI agents' instrumental objectives are much more locally and narrowly defined than those of the classical AGI singleton. So you've got a data thing, an incentive thing, and a resilience thing baked in there. In fact, I think the most important thing for AGI being on our side and not killing us all, apart from the structural things, is the data that goes inside it. We can see that a small amount of wrong data in those trillions of tokens can lead to massively weird behaviors. I believe every AI company should be forced to disclose their data, just as companies are forced to disclose the ingredients that go into their products. There should be data standards: certain types of data shouldn't be allowed in those models, because they don't necessarily need it. When you're trying to build a medical model, that becomes a lot more apparent, because you ask, "Why do I need any Reddit data?" versus if I'm trying to build a classical AGI. So I think building this system creates the right incentives to have better AGI that doesn't rely entirely on these singletons, and builds better data and these better things. It's the best approximation I could come up with, because in the absence of those datasets and that incentive structure, you're only going to go one way, which is the way of profit maximization for companies. Although the interesting thing is that AI companies are really about cash flow maximization. None of them will ever book profits. They get your subscription on day one and pay their costs on day 60. They're running the Amazon playbook, so even taxing their profits won't do anything.
Nathan Labenz: Yeah, OpenAI recently said that instead of burning $30 billion or so, they now plan to burn $110 billion over the next several years. So there are going to be a lot of losses to carry forward into their future accounting.
Emad Mostaque: Well, the reason they can do that is because they're trying to capture the biggest prize of all, which is all human intellectual labor. Regardless of other things, if you are OpenAI and you're trying to achieve the goal of AGI, in a few years everyone gets cake and you get the stuffed truffle pheasant. You use your models to take increasing parts of the economy and gain increasing influence. This is also why we've seen the $100 million PAC, copying from the crypto example. And governments will be forced to step in line. This is why I look at all the pause and regulatory stuff. I was the only signer of that AI pause letter who said, "None of it's going to work, because the incentives are too high." If you want to change people's behavior, you have to change the incentive landscape. So we have to create an incentive for actually useful AI that is about human flourishing, and that doesn't exist today. If Foundation Coin takes off, it will exist, because companies can be paid in Foundation Coin, and it can become the highest marginal consumer of OpenAI API credits, providing them through this interface owned by the people. If it doesn't, then I haven't figured out how we create the appropriate incentives. This is the most discouraging thing, and why we don't see positive futures. We're very good at diagnosing the problems, but no one has been able to figure out a solution yet. For economics, we literally had to go back to first principles and reconstruct economics, and it just so happened that it worked. You have to really think about these things from first principles, but so much of the AGI alignment discussion has been about the end state, when you have these incredibly powerful models, or about restricting them. How much of the discussion has been about the data that goes inside and how you optimize for wisdom versus intelligence? This is why Taleb has this concept of the "Intellectual Yet Idiot." Our AGIs are very much going to be intellectual-yet-idiots, because they don't have this lived human reality and interaction with humans. In fact, we deliberately don't RL them on humans, because they turn into Nazis the way we do it right now, like Tay. That's why I think we need a different type of AI, different types of datasets, and a different set of alignment patterns. That's the best we can do from where we are. Again, if anyone has any great ideas about AI alignment apart from getting rid of all the GPU farms or freezing them, let me know. Actually, the current best idea everyone has is to build AGI first to stop AGI.
Emad Mostaque: There's a very interesting thing: if you project yourself 10 or 20 years into the future and think about the AI that you're using every single day—that's teaching your kids, managing your health, helping you be creative, helping you be the best you can be. Who owns that AI? How is it built? What goes into it? What are its outputs? That's where we realized that what we outlined is the ideal environment for that. I don't want it controlled by a government. I don't want the government allocating capital. I don't want it controlled by a private company. I need my sovereign AI that I own, and I need it to be clean on the input data, aligned with me, and looking out for me. Otherwise, it just doesn't work.
Nathan Labenz: Maybe two last questions, and I appreciate your generosity with your time.
Nathan Labenz: It seems you think we have a relatively narrow window. You alluded a couple of times to the thousand days since ChatGPT, and that in the next thousand days things are going to look a lot different. Give us the argument for why this period is critical, and why we need to get things right in this phase before a future pattern becomes so entrenched that it may be hard to break out of. Then after that, I want to hear your vision for human life in this scenario where it goes well.
Emad Mostaque: That leaked Anthropic document showed this takeoff could be quite reasonable, particularly on a worker-replacement basis as opposed to a model-training basis. I'm a bit dubious that you actually need that. As I say, five billion active parameters is all you need, just like 640K of RAM is all you need, especially if you have pure reasoning. I think GPT-OSS was an example of that. Having the default in a country like the UAE be ChatGPT, for instance, is going to be incredibly powerful. So we need to set up an open, communally-owned interface layer. You can do that very quickly, within a few years, if you have the model we're describing, where nations are stacking GPUs and mining this currency, if we can get traction on it. We think that's a very powerful thing. The first entity that checks every single policy has lots of power. The first entity that has the first supercomputer-organized cancer knowledge has lots of power. We need to make sure those things are collectively owned, because these things do act as Schelling points, which I think is very important. We need them, because what's going to happen over the next three years is that the AI will be good enough to displace jobs, but it won't do so immediately. The safest jobs in the world are San Francisco MTA administrators earning $400,000 a year, because they're not about production or performance. Public sector jobs are safe for a while, but then income tax receipts and consumption will drop, because the displacement of workers comes as a sandpile collapse. That will affect different industries at different times, and then robots will come as well. So the next three years is when the AI becomes good enough. Six months ago, Dario said 90% of code will be written by AI by around now. He should have said "can be written." A couple of years ago I said it was about five years for programming, which gives me another couple of years. It can be, but it doesn't mean that it is. Not every coder is using AI right now, which is crazy. I'd say probably about 50% of coders are. So the distributional effects will take longer, but the defaults we set now are important. If we're in the scale-up environment, it takes a little while to scale up. And it could be that you're locked out of that GPU share of the world. We need to make sure as much of that GPU share is diverted towards human benefit as possible. And the final thing is the power structure. Right now, outside of the big AI companies and some of the Chinese ones, what does that power look like? Governments can't force OpenAI to build models in any particular way, except for military uses. You need to have a power bloc and a constituency that speaks for the people. That should be some sort of collectively owned entity, because people need to speak up, as they're not being represented here. People are even saying that if you tax the AI companies and they're the main providers of tax, they become the most represented, particularly if they use the AIs to effectively influence democracy itself. All of these are converging at the same time, so we need to have something big now. The only thing we could figure out is what we've described. I don't think there's any other way to coordinate this. And I think it's at the ideal time now, because once the models get good enough in a year or two...
Emad Mostaque: ...once you have your ChatGPT and you've put your whole life into it, you're not going to switch. The moats are going to grow bigger and bigger. If you get the scale-up, the moats will grow bigger and bigger. But there's this period now where we can set really interesting defaults for the AI that matters. You can still use these other AIs, but don't give them ownership of the control plane. Don't give them ownership of your data; that's the key thing. Let's think of new ways to have this collective ownership and organization, and build stuff that really helps people. A couple of years ago, I was using AI to organize COVID knowledge, and then a whole bunch of AI companies wouldn't give me the tech, which led to Stability. I couldn't organize all the cancer knowledge in the world back then. But now I can say, hand on heart, that if we build a supercomputer for cancer, we will accelerate a cure and be able to help every single family going through their cancer journey. There's no debating that. Isn't that a wonderful thing, to be able to bootstrap something like this? And for Alzheimer's, neurodegenerative diseases, and multiple sclerosis. So I think it's a mixture of positive and negative here, with the positive enabling us to buffer a potentially negative future. The way things are going right now, it's the great fragmentation or the singleton. There's no other way about it. The AI companies will effectively run the governments. On the flip side, the second question: the positive view of the future, what is the meaning of life? A nice little question. 42, right? Maybe that's what SuperGrok 8 will figure out: 42. It goes back to our whole thesis being a reimagining of identity and purpose. Our purpose in this life is not to make money for companies or to live to work. You work to live. If we can figure out a way to set that aside, then we can achieve much more if we orient things the way we think. China is a very interesting example. The population pyramid of China is completely messed up. With robots, they can look after the people of China as they grow older. You don't need the youngsters anymore. In a few years, I think China will stop exporting robots, because they can be a completely self-sufficient society. But then what is a society optimizing for? Human flourishing. It's your interconnection with others; it's advancements in art, culture, all these kinds of things. It's your relationship with your family. You don't have enough time to spend with your family or your kids because you're working all day. But is your work really that much more important than helping your kids thrive? I think Alpha School has shown that education today is really factory schooling, and you'll see more instances of that. So we need a new story and a new social contract. This is why I think a Star Trek future is better than a Star Wars future. What is that about? It's about exploration, pushing the boundaries, the spirit of inquiry and progress. We need to set that up better to allow people to be happier, and we need to really focus on social connections. It's the caring economy. Robots are no substitute for your interconnection with other people. I write in the book that we should be the guiders of anti-entropy, particularly if capital comes from us being human. Why should I earn money? At a base level, it's because I'm a person, because I'm valuable. It isn't because of my contributions to society. If I want to get wealthy or want extra material things, sure, work for that.
But put everyone on an equal capability footing, allow people to understand each other, themselves, and the universe better, and things will be happier. Depression will be lower. We should be able to write that story ourselves. And really think about how our social contract has moved from Hobbes' Leviathan to Rousseau and others. What is the nature of being? This is why I think there is room for a new philosophy. We've jotted down some things, but let's leave it to communities to figure out and build stronger communities. Let's decrease the hate and increase the positive stories. As a final thing, the biggest story that causes violence is that some humans are not human. That's the biggest lie ever told. Our echo chambers and our attention systems exacerbate that now. If we can tell better stories and protect ourselves better, then maybe we can realize that we're all people and we can work together. When we all work together, there's nothing we can't achieve. That's why we need to use AI to increase our agency rather than replace us with agents. That's a design consideration we have right now, and now is the time to decide.
Nathan Labenz: This is fascinating. Is there anything else you want to touch on that we didn't get to, or any other thoughts you want to leave people with?
Emad Mostaque: I think this is the most exciting time in history. This is the tipping point, literally right now. People need to think from first principles about how they view their own families, their economies, and more. Put yourself in the future: what type of AI do you want to exist, and can you help that happen? Because it's inevitable now. You're not stopping this.
Nathan Labenz: Yeah, the momentum is only building, and it doesn't seem like it's going to be turned back anytime soon. I definitely agree with that. The book is The Last Economy: A Guide to the Age of Intelligent Economics. The company that's training and open-sourcing all these datasets and models is the Intelligent Internet. Thank you again for being part of The Cognitive Revolution.
Emad Mostaque: Thank you very much.