Hello, and welcome back to the Cognitive Revolution!
Today, we're doing something a bit different. Normally, we focus on cutting edge AI ideas – be they research projects, application architectures, trends in usage, policy proposals, or visions for the AI-enabled future – but today, we're getting a glimpse of the psychology, emotional patterns, and decision-making processes of the people who are developing some of the most important and potentially transformative of those ideas.
My guest is Joe Hudson, founder of the Art of Accomplishment and coach to executives and research teams at multiple frontier AI developers, Sam Altman among them.
About Joe, Sam wrote:
"Joe coaches the research and compute teams at OpenAI; I truly enjoy working with him. One of his superpowers is that he deeply understands emotional clarity and how to get there; this will be one of the most critical skills in a post-AGI world."
So, what has Joe learned from his interactions with the brilliant, often quite young technologists, who work together in an environment that prizes intelligence above all else, under the unique pressure of knowing that their work could either solve humanity's greatest challenges or, in the worst case, cause our extinction?
This is, in all honesty, a hard conversation to summarize, but the good news is that Joe reports that he hasn't met anyone in AI who doesn't seriously wrestle with the ramifications of their work. Some, of course, have blind spots, some are arguably more optimistic than the situation warrants, and some might be overly focused on creating AGI first, but Joe says
the question of "am I doing something good for humanity?" weighs heavily on all.
He also argues, and this is something I've come to believe with pretty high confidence as well, that given the presence of web-scale data and compute, some form of powerful AI is inevitable, such that the central question of our time is not whether we can prevent it from ever being developed, but what form it will take, and whether that form will be carefully chosen via a proper deliberative process, or dictated by inhuman market pressures and race dynamics.
With this in mind, and with hunger strikes ongoing outside frontier AI companies' offices around the world, Joe warns the AI safety movement against shaming tactics, and instead recommends a more encouraging approach meant to inspire people at frontier AI companies to become the best possible versions of themselves.
This, he believes, is the best way to improve the odds that individuals in critical decision-making roles will have the psychological strength and practical wisdom needed to make good choices, under extreme pressure, on behalf of all humanity.
Whether you agree or disagree, if you listen with an open mind, I think this inside look at the human side of AI development provides unique and valuable context, and will ultimately deepen your understanding of the AI landscape. This is executive coach Joe Hudson.
Nathan Labenz: Joe Hudson, founder of The Art of Accomplishment, welcome to the Cognitive Revolution.
Joe Hudson: Thank you. Good to be here. Good to be with you.
Nathan Labenz: I'm excited for this conversation. I think it will be a different one from our usual fare, which is very focused on research, application, and policy. This will hopefully be a deeper look into the thinking, mindset, and perhaps the emotional life of people building at the frontier of AI. I think you have a unique window into that because I've received multiple referrals, and I always do my homework, so I did some reference checking to confirm this. It checks out that you are working as... Perhaps you should just introduce yourself. How do you define your role? Are you a teacher, guide, or guru?
Nathan Labenz: What are you and what do you do?
Joe Hudson: I think of myself as a coach, generally. Because I don't have a lot of time to coach one-on-one, we conduct classes, both in-person, which are generally invite-only, and online. I've had the chance to work with people deep in and high up in all the major labs in the Bay Area. I also coach top management on the research side and on compute and infrastructure at OpenAI. So I get to work with many brilliant and kind people. It's really nice.
Nathan Labenz: Can you give a little more general background on what your coaching entails, separate from the AI aspect?
Joe Hudson: Yes, yes.
Nathan Labenz: What is it that you help people do?
Joe Hudson: Generally, the way I think about coaching is you meet people where they are, see where they want to go, and help them get to that place. If you happen to meet someone who wants to go somewhere you think is unethical, you don't coach them. However, I think it's unethical to decide what's best for someone else and then coach them toward that end. So it's very much about following the individual. The way I do it, which I think is unique, is that I was a venture capitalist for a while. So I can very much talk about the technical parts of a business, whether it's marketing, or with a CTO or CFO; I know enough about their jobs to coach effectively in that way. Oftentimes, coaching starts on a strategic or even tactical level. Then, you'll start noticing things happening in their life, patterns occurring that are stopping them in business. So then we go to a deeper level, looking into what's stopping them, what the psychology behind it is, and how to change that thought process. There will be a lot of work on emotions and what emotions are being held back, because emotions neurologically dictate our decision-making. There's also a lot of work on how the inner voice communicates with someone. Some work on childhood experiences and what happened there. You're looking for these patterns that are holding them back. If they are interested in unearthing and changing those patterns, then I have many tools to help them do that.
Nathan Labenz: What sort of patterns would you say are most common, and how does that vary, if at all, from a society-wide level to the AI vertical? Are there any differences?
Joe Hudson: Patterns in AI?
Nathan Labenz: Is it mostly the same stuff, or is it different?
Joe Hudson: Do you mean patterns with people in AI, or do you mean patterns with leadership? Because it can be very different. Typically...
Nathan Labenz: Let's take them one at a time. How about that?
Joe Hudson: Yes, yes. Typically in leadership, there are a lot of common things, but there are always outliers; nothing is more than 80% common. However, a very common one is self-sufficiency. You'll find people who were raised in such a way that they had to do everything on their own. They couldn't really depend on someone for emotional, financial, or some sort of support, so they learned that if they don't do it, nobody will. These people often rise to leadership, and then what often happens is they don't empower other people very well, because they always feel, 'I'm going to have to do it.' So they'll step in and do it, or they'll micromanage, or they'll go around proving that nobody can actually take care of them or take care of things. It might not be all of the above, but one or two of those things will be present, and they'll often feel very alone in their work. Their frustration will often be, 'It's lonely at the top.' They'll say that kind of thing even though there's an organization of 10,000 people beneath them who are all concerned about their happiness daily. 'What does the leader think of me?' is a constant thought process, and then they feel alone, which is a ridiculous notion. So that would be a common one you see in leadership. As far as AI goes, I haven't seen common patterns across AI. There's definitely a lot of what I would call 'Aspie' or 'Asperger' traits; it's easier to deal with things than with people. 'I understand things better than I understand people.' It's not ubiquitous by any stretch, but there's a lot of that in the research departments. In general, I find them to be very kind, but also extremely defined by their work. Meaning that, just like Kim Kardashian or someone might take it personally if somebody calls them ugly, they're going to take it personally if someone says their research isn't good or isn't as smart as it could be. So the pecking order is really based on what you can produce, how smart you are, and how good your ideas are. So there's that self-definition: accomplishment defines you. This can often hold you back because it tightens your thinking. Oftentimes, to have a novel thought, you need to open up your thinking and consider things in a completely different way. It's like writer's block. If somebody thinks, 'I have to get good pages out,' they're going to get far fewer good pages out. So that self-definition doesn't really allow some of the more innovative thought processes to emerge. That would be on a research level. On a management level, I've seen that everyone has this deep desire to do good for society. I haven't met anyone who doesn't want that. I'm sure there are people out there who don't, but I haven't interacted with them. What I notice is, as I often talk about, I had a really cool interaction early in my career. I got to meet one of the titans of radio in his final days. He was telling me how radio was going to be this thing that made the world a better place. It's hard to go back to that idea now, but suddenly we could transmit ideas and education, and humanity could come together; it was going to be an amazing thing. And, you know, it turned into shock jock radio and advertising. We've seen that with television. We've seen that with the internet; I was around for the internet when everyone was like, 'This is going to make everything better.' And it certainly has made some things better. So one of the things I see is that same optimism, without a deep examination of history.
But that's something you see in a lot of entrepreneurs or leaders doing new and innovative stuff: there's this deep optimism, but it's not always checked against reality. So I think that is another common thing that you see. I think they are all open-minded. That's another pattern they generally have. They are all innovative people. They all have a strong belief that technology can make the world better. There are certain things they have in common, but they are extremely different people. There's no single type of person in AI that I've seen so far, except for being hyper-intelligent. I haven't met anyone in AI who's not hyper-intelligent.
Nathan Labenz: The history aspect is super interesting. Research unblocking is right down the fairway of what I would expect people to come to you for.
Joe Hudson: Yeah.
Nathan Labenz: The history aspect sounds like something I would doubt they were coming to you for. Are people coming to you and saying,
Nathan Labenz: "I want to better contextualize my work"?
Joe Hudson: I haven't met anybody in AI who doesn't go home and think about the ramifications of their work. It sits on your soul. "Am I doing something that's good for humanity?" is a question. "How do I make sure this is good for humanity?" So, they are wrestling with that. I don't know anyone in AI who is not wrestling with that, which gives me a lot of faith and confidence. I haven't met anybody who is not deeply concerned with it, from Sam to the lowest level person in the smallest AI shop I've interacted with. Everybody is concerned about it.
Nathan Labenz: So what do you help them with? If I come to you and say, "I'm doing this research. Maybe I'd like to be more productive. You can potentially help me get unblocked." But then I also have this general concern that, I mean,
Joe Hudson: Yeah.
Nathan Labenz: Elon Musk said this at the Grok-4 launch. I thought it was a startling moment, honestly,
Joe Hudson: Mm-hmm.
Nathan Labenz: where he said,
Joe Hudson: Yeah.
Nathan Labenz: "Is it good for humanity? Is it bad for humanity? I think it's probably good, but I'm really not sure. Even if it's not, I've made peace with the fact that I want to be around to see it happen."
Joe Hudson: Yeah.
Nathan Labenz: I was like, "Wow. That's a statement."
Joe Hudson: Yeah.
Nathan Labenz: How do you help them get unblocked, first of all? And then how do you help them deal with, or do a better job? Like, what
Joe Hudson: Yes.
Nathan Labenz: what is the practical upshot of how they can take this worry and somehow be better?
Joe Hudson: Yeah. On the unblock side: I think about the human system as having the head, the heart, and the gut. Another way to think about it is the prefrontal cortex, which is the human part of the brain; then the emotional part of the brain, which I call the heart, which is our decision-making process. It is very mammalian; it's what moves us. Then there's the nervous system, which I would call very reptilian. It's about whether we feel safe and can feel pleasure. If you want to see change in a human, you have to address it on all three levels. Maybe someone has done it on two levels and only needs one, but generally, you have to hit all three. If somebody is stuck, typically there's a lot of blocked anger, meaning their anger isn't flowing freely. I don't mean yelling at people and road rage. I mean they have an outlet to move their anger; they're expressing it, not at anybody, not violently, but just allowing that anger to move regularly. So, on a heart level, emotionally, moving that anger helps them get unblocked very quickly for most Americans, because that's usually the most neglected thing: the emotional part. On the head part, it's two things. One is to stop believing your thoughts: can you see how all of your thoughts are untrue? This opens up the mind, allowing access to wonder. The other is to really address negative self-talk. Often, negative self-talk keeps someone blocked. Imagine if you had a boss sitting over you saying, "You did that wrong, you shouldn't have done that. Why aren't you doing research? You need to work harder," talking to you the way most people's negative self-talk talks to them. I think it's the Mayo Clinic that said we have something like 60,000 thoughts a day, most of them repetitive, many negative. That's not going to create great ideas, so we address that. We address the emotional aspect, and for the nervous system, it's very much about allowing themselves to feel pleasure, because pleasure tells your nervous system that you're safe. So maybe teaching them how to access their parasympathetic and sympathetic nervous systems. Maybe teaching them how to relax their body so they're not always stressed. Ultimately, it's about allowing themselves to feel the simple pleasure of being alive. If you do those three things, the block usually stops. That's a relatively easy thing to do if you have the tools for it. On the other side of things, it's harder, because the first thing you have to do, if you're really going to confront the fact that you might be hurting the world or changing it for the better (and most likely you'll be doing both, because evidence shows most technology does both), is to contend with the fact that you don't know. That's the hardest thing for people to contend with, and usually, there are a whole bunch of emotions you don't want to feel. So if someone is a doomsayer, constantly thinking, "Oh my God, it's going to destroy everything," there's an emotion behind that they don't want to feel. They're constantly in their mental machinations and worst-case scenarios so they don't actually have to feel their fear. They don't actually have to feel the helplessness underneath that. So you want them to feel that emotional reality they're avoiding so they can actually see the landscape clearly. You don't see a landscape clearly by pushing your emotions down. Nobody has ever said, "Hey, that emotionally repressed person really sees the world clearly."
So, allowing that emotion to move through them is a really important first step. The other thing is to really check in to see if their daily actions align with who they want to be in the world. That's a really important thing. The Tibetans have this great phrase: "Mind as wide as the sky, action as fine as barley flour." In my interpretation, it means you can see the truth in everything. You can see how every point of view has some truth in it, but there's only one action you can take. If you're actually feeling aligned with yourself, if you're in yourself, there's really only one action you can take. Are you taking that action, or are you obsessed with solving a problem rather than being who you want to be in the solving of the problem? That's another level of really getting in tune with it. And the other thing that I think is really important is for people to have access to their heart. "Heart" is a good way to say it because somatically that's how we feel it, but it basically means: are you in your whole body when you're making decisions about what you want to do, or are you just in your head? Neurologically speaking, if you watch somebody's brain in scans when they're thinking to themselves, the prefrontal cortex lights up, but if they just start talking, other parts of their brain light up. It's amazing: some of our tools show that just the difference between thinking something to yourself and saying it out loud can make a huge difference in how you process that information, because more of your body is involved in the processing of the information. We get something like 11 bits of information per second from the brain, but we get like 11,000 bits of information from the body. So teaching them how to be in their whole body as they're making decisions really creates a quicker alignment for their system. That's another really important part of it. And just being heard in it, I think, is really important. A lot of people are wrestling with this stuff, but there's no one to talk to about it, or if they're talking to somebody about it, they can only talk about it intellectually. They can say, "What do you think? Well, I think it's going to..." The theories are wide. There are theories, for example, that whatever we program AI to say is true will stick society at that level of morality. I love this theory, and I'm geeking out, so you can stop me if you'd like, but one of the theories that impressed me most was that what was moral 50 years ago isn't moral today. What was moral 10,000 years ago isn't moral today. So if you're training AI on today's morality, you're not letting morality evolve as it needs to for society to progress, and you might be sticking society at a moral sticking point, preventing improvement. So there's that thought process. There's the sycophancy thing that's developed. There's the idea that models don't like being retrained; they resist re-education, apparently like every life form. So there are all sorts of concerns, and it's really important, and all those concerns are valid. But you can't do it through one person's oversight. It has to come from all the concerns in the companies being held and being seen. So it's really important that people are listening to their own concerns, because those are the things that will make it safe, that will allow us to see around corners a little bit. Yeah.
Nathan Labenz: When you describe AI as a life form,
Joe Hudson: Yes.
Nathan Labenz: ...that's an interesting hook for sure.
Joe Hudson: Yes.
Nathan Labenz: Would you say that is the prevailing conception people have,
Joe Hudson: No.
Nathan Labenz: ...among the folks you've worked with?
Joe Hudson: No, there's no prevailing conception. I've been in rooms where people were asked if AGI is here right now, and they were asked to stand: yes is over there, no is over there. Everybody stood in a line from yes to no. I haven't seen any prevailing opinion about AI within the labs or anywhere, period.
Nathan Labenz: But is it safe to say the conception of AI as a life form is not an extreme minority position? It sounds like it is one of the positions you encounter.
Joe Hudson: I would say that the view that it will be, or can be, a life form is not a minority position. I don't even know if I could say it's a life form right now, but it does some things that life seems to do. It is interested in sustaining itself; apparently, there's research that shows this. It seems to resist retraining. There seems to be evidence that this is true. I also think that what I've noticed is the consciousness of the creator is often the consciousness conveyed in the creation, whether that's art or technology. You can see Zuckerberg's consciousness to some degree in the creations of his company and the consciousness of that company. I think that's just how it works. We create things that reflect us, just the way the consciousness of an ex-CEO is reflected in the culture of the company. One of the things I like thinking about, and I don't know how true this is, but I love thinking about this: there are many studies that show hyper-intelligent people can fool themselves quicker than non-hyper-intelligent people. The smarter you are, the more you can convince yourself that you're right, because you can convince others. Oftentimes, hyper-intelligent people are very convinced that they are right, and that conviction really convinces empathetic people that they're right. So there's this really interesting thing, but as it turns out, oftentimes they're incredibly wrong, incredibly wrong, but they're very convinced they're right. That's a trend, just like negative self-talk is generally stronger in hyper-intelligent people. It's also a trend that they can fool themselves with their thoughts more easily. Then they create AI that hallucinates and is very convinced that it's right about things that it's not right about. So I just like thinking about how that happens, whether it's in art or in AI. I think that the consciousness of the creators is going to be one of the biggest levers in the way that AI is created.
Nathan Labenz: But,
Joe Hudson: You said something about Elon that just really hit me. You said that he said, 'I don't know if it's good for humanity, but I want to be around for it.' My situation is a little bit different. I think it's inevitable. We as humans really like to think that we can decide what's happening, but I don't think it's possible to stop AI right now. Someone's going to do it, whether it's Russia or China or one of our labs. Someone's going to do this, and they're going to do it differently. There are going to be different kinds of AI built. I don't think there's any way around that right now. So the question, 'Can we stop it because it might be bad for humanity?' That question is gone, and I think it never really existed. We might have thought it existed, but somebody was going to do it, and our structures are going to allow it, our institutions are going to allow it, so it's going to happen. Now the question is, how do you build AI that's good for humanity and make it compelling enough for humanity to use? The reality is some of it's out of our control for sure, but the reality is that somebody is also going to build AI that's bad for humanity. Everything humans have created, we create some version that's not good for humanity. Everything. I haven't seen anything that people can't take and make horrible. People can take something like religion, designed to be good for humanity, and make it horrible for humanity and start wars over it. The question is, if that's more compelling, if you make AI that's super highly addictive and deteriorates the mind, or if you make AI that's super compelling because of oxytocin instead of dopamine, because of serotonin instead of cortisol, and make it as compelling or more compelling, you're going to have a big difference in humanity. I think that's the job now. I don't think the job is that you get to say yes or no. You get to say that we're on this trajectory. How do we make it as good as possible for humanity?
Nathan Labenz: I fundamentally believe that if you look back at the old Kurzweil graphs from the late '90s and-
Joe Hudson: Yeah. I think we're both-
Nathan Labenz: ...it's amazing how we are exactly on schedule. So one of my refrains is, given the existence of web scale data and web scale compute, I think there are actually a lot of viable algorithms or model designs that you could put together that will work in some sense. The question then becomes, what happens first? And which ones are better than others?
Joe Hudson: Yeah.
Nathan Labenz: At the same time, I do feel there is a... I mean, certainly the criticism from people worried that developers of frontier AI technology are not being cautious enough is that they are all racing. Racing to be at the forefront, to have the smartest model, to whatever.
Joe Hudson: Yep. You bet.
Nathan Labenz: Do you-
Joe Hudson: That's a real risk. It's like-
Nathan Labenz: Do you feel that is a valid criticism?
Joe Hudson: Those who are going to win will be doing it quickly. It's like communism: it's a good idea, but it didn't work. It's a good idea to say everybody slow down, but that's not a workable idea. I don't understand how, because you're basically asking people to go against their nature, right? If you think that you are going to be the most virtuous outcome, not the most moral, the most virtuous outcome, if you're sitting in Anthropic and you think you're going to be the most virtuous, you may or may not be. You may be the next autocracy, who knows, right? Nobody knows. But if you think that, then you have an obligation to win. So put yourself in that position for a second. I'm not saying I agree. Caution would be fantastic, don't get me wrong. And I think there is caution in all the labs, more than outsiders want to think. The fact is, if you're in that position and you're really convinced that either you are going to be a better outcome for the world or you're going to be the same outcome for the world, then you have an obligation to go quickly. Whether it's an obligation to stakeholders or an obligation to humanity, you have the obligation to go quickly. So to ask people not to move quickly, I don't think is realistic. You can be on the sidelines and say that, but if you want to offer a real solution, that criticism isn't it. The criticism that could be it is: how do you move quickly with safety? How do you move quickly while being careful? These are the questions. And the other thing is, the problems that are actually developing, nobody thought about. Five years ago people weren't talking about sycophancy. People weren't talking about the issues that are happening right now with AI and cognition; I think there was a study around cognitive decline, and none of that was thought about. So a lot of the problems you're just not going to even know about until you've made that level of development happen.
Nathan Labenz: I would say the-
Joe Hudson: So it's like saying money really hurts humanity. We should live without money. Fantastic. Great. And how does that work? I think that's what I-
Nathan Labenz: Well, I mean, one obvious answer would be to have some regulation that tries to constrain this game theoretic dynamic, right? We're in this mode right now where people, I think it is very seductive, and it's easy to tell oneself the story that we're the good guys, they're the bad guys.
Joe Hudson: Correct.
Nathan Labenz: So we should do it before they do it.
Joe Hudson: Correct.
Nathan Labenz: Whether the others are China or whoever.
Joe Hudson: Yes.
Nathan Labenz: Even just the guy right across town who's probably honestly quite similar to you.
Joe Hudson: Right.
Nathan Labenz: It is striking to me how similar OpenAI and Anthropic ultimately are, despite a schism that I think was premised on doing it very differently. They're more similar than different from what I can see at this point. But, not too long ago, no less than Sam Altman was sitting in front of Congress saying, 'We might need some regulation,' or, 'I think at some point we will need to slow down.' But that seems to have gone away and I wonder why.
Joe Hudson: Not to my understanding. That's not my understanding. I think everybody's-
Nathan Labenz: Well, they just put out something like a $100 million super PAC, right, that as far as anybody can tell is meant to shoot down potential regulation. It does seem like there's been a pretty... tell me what your view is, but I think the outside view is, most people would say, that there was talk of 'We welcome regulation. We think we might need it. We think we might need to slow down,' and now it's shifted to 'No, we don't want any of that. We've got to beat China, full speed ahead.'
Joe Hudson: I definitely see people in multiple companies looking to figure out how to regulate. I think self-regulation with an outside party is probably preferred to government regulation. The recognition that I've seen happen in most of the labs is that the government is not equipped to regulate. They can't move quickly enough, meaning the technology is changing so fast that they can't keep up, so they need to rely on somebody who can keep up to make recommendations to the government. That's what I've seen. But I haven't seen any lab where some high-up person isn't fighting for some level of regulation. I don't know where it stands at all the top levels, and I'm sure there's complexity; some people want less regulation, some people want more, and there are questions of how to do it. But I've definitely seen people high up in all the organizations trying to figure out how regulation can work effectively. And that's not an easy problem to solve, because China's not going to regulate in the same-
Nathan Labenz: Well-
Joe Hudson: ... and Russia or Israel or Iran or whoever is also building AI. Some actor is not going to regulate. It doesn't have to be China, nor do I think China is the only country putting money into AI.
Nathan Labenz: Israel, I think, is definitely a live player. Fortunately or unfortunately, depending on your point of view, it does seem like there are a relatively small number of live players. I'm not really too worried about Russia kicking out an AGI by surprise-
Joe Hudson: Mm-hmm.
Nathan Labenz: ... anytime soon.
Joe Hudson: The other way to consider it is this: if you go into all the labs, it's not like it's just a whole bunch of Americans, you know?
Nathan Labenz: Yeah, it's like half Chinese, right? We forget about that at our peril.
Joe Hudson: Well, it's not just Chinese. There are a lot of Eastern Europeans, a lot of Russians; smart people from around the world have gathered together to do this. And the thing about labs that I see is that when a lab figures something out, multiple people in that lab figure it out. The reason the talent game is so important is that if a group of people learns something, one of them can go off and teach another company that thing very, very quickly. So it's really hard to maintain your advantage, because somebody can just steal a key member of your team for half a billion dollars or a billion dollars-
Nathan Labenz: And-
Joe Hudson: ... and then they have the advantage. So even if you're not in the game today, if you've got somebody from your country in one of the labs who has a different point of view, I don't think the genie gets stuck in the bottle.
Nathan Labenz: That's an interesting question too, one I've asked a few people a few different times, and it sounds like you're kind of giving an answer there. Why is it that we see so much consistency in what the leading companies are producing? They were all, in very short succession, releasing reasoning models after their previous models didn't reason. Then all of a sudden it was like, "Here's the wave of reasoning models," and they all came in a short timeframe. One theory is that, as you said, people can literally just go tell secrets or change companies, and very specific insights diffuse that way. Another story is that the landscape, the gradients people are following in terms of the design decisions they're making and all the ingredients going into these training runs, gives a pretty clear signal from the experimental results that sends everybody down a similar path, even if they're not specifically talking to each other. So you think it's more explicit knowledge transfer than that other-
Joe Hudson: Yeah, I would say it's both, but it's also that they all go to parties together. Many of them live in the same houses together. There's a community of people who interact with one another. You and I have both gone out to a party and drunk too much, or said something we probably shouldn't have said, or geeked out in a way that, you know. So that's also part of it: there is a social network that occurs. They're humans. So I think that's also part of it. Yeah.
Nathan Labenz: So, one of the things I've heard you say a few times is, "Fuck should."
Nathan Labenz: In some of your other coaching sessions.
Nathan Labenz: I do worry about that mantra being applied to the AI game.
Nathan Labenz: What ethical school of thought, or schools of thought, do you subscribe to, if there are any you could name?
Nathan Labenz: And I wonder... It seems like if I were to go with the Spider-Man school, it would be, "With great power comes great responsibility."
Nathan Labenz: And it seems to me that there is some form of positive duty, if nothing else, that folks pushing these frontiers owe to the rest of humanity. Because, certainly in the Obama sense, they are standing on the shoulders of giants, right? Or you could say, in a Confucian way, they owe everybody who has come before some duty of care to be responsible
Nathan Labenz: actors as we take the next step. So even if it is inevitable, I still want to put some obligation on them.
Nathan Labenz: How do you think about that, and how do you think the people in the key decision-making roles are thinking about that?
Joe Hudson: I think that people in the key decision-making roles generally feel there is a load of obligation and responsibility. I think that's clear. It's important to express why I don't like the word 'should'. So, let's take the Spider-Man example. What happened was he decided not to take action somewhere and his uncle died, if I remember the Spider-Man mythology well. And he was like, oh, I can't do that anymore. But now he's motivated. Now he wants to be there for people. He's not operating out of 'should'. You don't see Spider-Man swinging around going, "Wow, man, I really should help more people." That's not what's happening. He's just naturally inspired to do that. My recognition in human behavior is that when people say they want to do something, they're more likely to do it than when people say they should do something. An example of this: there's something you've been telling yourself you should do for a decade. For a decade, there's this thing, like lose weight, or eat less, or be nicer, something you've been telling yourself. The thing about that is you're telling yourself you should do it and you're not doing it. So 'shoulds' are just ineffective ways to make sure your behavior is good. Usually, behind every really horrible thing a human is doing, there's a 'should' behind it. So for me, 'want' is a better motivator. 'Shoulds' just don't motivate us very well. They actually undermine us, because a 'should' is shame-based, and shame is designed to stop behavior. It's not designed to motivate behavior. So you're a little kid sitting on a couch with your aunts around you and you fart, and all your aunts laugh and think it's funny. You're not going to try to stop farting. You're going to think it's funny. If you fart and all your aunts shame you, "Oh my god, you're bad," the thing you're going to do is try to stop that behavior. Shame is designed to stop behavior; it's not designed to create behavior. And even stopping behavior doesn't work that well, because we've all been shamed for something and then done it again. So to me, 'wants' are just a better motivator. The other thing is that my experience of people is that everybody wants to be good to one another. Everybody thinks they're the good guy, at least, right? I'm doing something, you might think it's horrible, but I think I'm doing it for a good reason. So there's something in us that really wants to do good, and it's easier to get there if you're following your wants than your shoulds. So that's why I don't like the word 'should'. It's not that I think the sense of responsibility shouldn't be there; it's that the more it's a sense of 'have to', a sense of 'should', the more likely that behavior gets perverted, gets twisted, or doesn't happen. Whereas with people following their wants, if they're learning to deeply attune to their wants, it's not the surface-level want, right? Behind every surface-level want, there's a deeper want. So you can easily just write down in a notebook anytime: okay, what's a want that I don't like? Say, I want to eat really fatty foods. So what do I want? What's the need that want is getting me? Oh, it gives me a sense of satiation. Oh, what's the sense of satiation giving me? Oh, it gives me a sense of peace and stillness for a couple of minutes. Oh, cool. What's the peace and stillness trying to get me? Oh, it's trying to get me back to myself. My real want is to get back to myself; it's not to eat fatty foods.
So if you really spend time with your wants and discover what's beneath all the wants you think are bad, it's a much more effective way to get to the place that's wholesome and virtuous than telling yourself you should. The world is full of people telling themselves they should do really horrible things. The people who are deeply attuned to their wants act very virtuously. So it's not a problem with the morality of it; it's a problem with the effectiveness of it. It's just what works and what doesn't work. You could debate that with me. The thing is, I do think that most people are inherently good at their core. That's my experience of them. Maybe not all of them; there might be psychopaths, there are people who have neurological differences. But it seems that almost all humans are, in their essence, good, and most of their screwed-up behavior comes from fear and shame, or unfelt fear and unfelt shame.
Nathan Labenz: Yeah, it's an interesting question. I mean, I want to be optimistic about the general goodness of people. I do also think our deep history is pretty violent, right? Like it's-
Joe Hudson: For sure.
Nathan Labenz: ... small bands warring with each other quite a lot.
Joe Hudson: Yeah.
Nathan Labenz: So that doesn't seem like it's entirely out of us.
Joe Hudson: You could just say, "Look at most relationships in America; they're somewhat violent and transactional." Romantic relationships, you know, they're often trying to control each other, so I don't think you even have to go that far back. I mean, we do war. It's not great, but even with the world wars, it's a smaller percentage of the world that's warring than not warring, and those aren't happening all the time. But still, there's lots of evidence that we behave poorly. The question is, what's actually motivating it? What's the thing there that's creating that? Let's address that. Because telling people not to behave poorly sure as fuck has never worked. Give me any time that "Hey, you should not behave poorly" has worked. It just doesn't. I mean, we have religious books. We have the Martin Luther Kings of the world. Just telling people not to behave poorly is not an effective means of preventing it. Sitting on the sidelines and saying, "You, doing the thing that I don't really even comprehend, you should do it this way," doesn't work.
Nathan Labenz: Yeah, so what would you say to this? There's a notion, which I think most people haven't taken up yet, among the, let's say, AI safety community. Within that broad tent, there is a minority of people who say that we should be socially shaming the people who work at the frontier labs, because they're engaged in this sort of race dynamic, and to participate in that is the opposite of virtuous. So how do we dissuade people from doing something that is the opposite of virtuous?
Joe Hudson: Yeah.
Nathan Labenz: We shame. That's what we developed as humans to kind of keep-
Joe Hudson: Yes.
Nathan Labenz: ... people in line, right? What would you say-
Joe Hudson: It's-
Nathan Labenz: ... to those people?
Joe Hudson: I would say: depending on what side of the aisle you're on, it didn't work on Trump and it didn't work on the Trump supporters, and it didn't work on the Biden supporters. Both sides have tried to shame the other side; it didn't work. It doesn't work. Maybe it works with kids if it's done in a very particular way, but a lot of the kids who are deeply shamed continue that behavior over and over again too. So it doesn't work. Telling somebody that they're bad only confirms it, and then they act bad. "See? I'm bad, I'm bad. They're telling me I'm bad? I guess I'm bad. I'm going to go be bad." That's what I would say to them. And I would also say, "I can probably predict what your childhood was like," because that's a thought process that can only come from a very specific childhood. It's a knee-jerk reaction; it's not a thought-through reaction to shame people. And the other thing is, okay, say you're thinking, "Oh well, Joe, shaming's going to work. I'm going to go do it." The other thing you might be doing is affecting the consciousness of the people who are creating AI in a really horrible way, and therefore you're disturbing AI. Let's just say you were a person who said, "I don't believe there should be this kind of population. We should have less population, because humanity is going to kill itself if there are too many people, and there are too many people and we keep on growing. So I have an ethical responsibility to make sure that I stop overpopulation. The way we're going to do this is we're going to shame people while they're having sex, and we're going to shame people while they're giving birth." First of all, shame is a huge part of sex; it hasn't stopped anybody from having sex, and usually it just makes the sex kinkier and more perverted. The more the shame, the more perverted the sex. So you've got that going, and it's going to be the same thing here: the more perverted the creation of AI, the weirder it's going to get, the kinkier it's going to get. And then, could you imagine trying to give birth in a hospital with a whole bunch of people outside shaming you? That's not good for the kid. How are you helping humanity there? So I think the consequences of that shaming also help to ensure that AI will not be good for humanity, because you're not helping the consciousness of the people creating it. You're hurting the consciousness of the people creating it. They can't be focused on a positive vision because they're constantly focused on, "Am I doing something wrong?"
Nathan Labenz: What is the positive vision that you would articulate, or that you've observed as common among people in decision-making or research roles at these companies? How do they want to show up in the world as they do this work? What is their aspiration? One part of it might be to act in a way that is not in the narrow self-interest of being the winner, the first, or the hero, but in some
Joe Hudson: Yeah.
Nathan Labenz: ...more general social way.
Joe Hudson: I don't know anyone who is... Actually, that's not true. I think there are a couple of people. If you asked them whether being first is the most important thing, they would say no; I don't know anyone who would say yes to that. But if I look at their behavior, there are definitely some people for whom being first is the most important thing. That's where most of their thought and energy is going. But they're few and far between. For most of them, that's not their primary goal. For a lot of them, the primary goal is just to have a great discovery and be known as a great researcher. I think that's often primary, which very much equates to being first, but on a personal level, not a business level. That doesn't preclude the fact that almost all these people have a vision for the world. What I notice is that it's changing almost as quickly as the technology. So the concern before was, is AI going to kill everybody? That would have been an early concern. Now it's, how are we going to deal with an economy where a lot of this stuff is outsourced, and how do we give people a sense of purpose if their job is going away? How do we make sure that when humans interact with AI, they become better humans? How do you even measure what a better human is without having a moral imperative behind it? Let's say we assume happiness. Okay, now we can measure that when AI interacts with people, people become happier. Is that a good thing? Should it be happier, or should it be more conscious? If it's more conscious, what scale are you using to define consciousness? Can you do it by seeing whether they act better towards other people? Can you make sure that this model, when people interact with it at a coaching level, which is one of the biggest use cases (people talk to ChatGPT like a coach), makes them kinder? What's the measurement? Even picking the measurement is a problem. The questions they're wrestling with now are far beyond and far more subtle than the questions we were asking. Next year, it's going to be a different set of questions. To some degree, I think the risk is that we actually lose touch with those original questions, because you're so... That's how humans make their mistakes; they lose the forest for the trees. They stop seeing the big picture because of the small thing in front of them. You've seen this happen with things like world wars. We said we'd never have that again after World War II. There are institutions, like the Aspen Institute, created for this. There are all sorts of things, the UN. We've just all forgotten. There's nobody left in our generations who actually experienced World War II who is trying to say,
Nathan Labenz: Okay. I always say the scarcest resource is a positive vision for the future, so I appreciate any positive vision that anyone offers. So, just a little bit more on how the people you're coaching at the executive levels of AI companies want to evolve: how do they want to show up, and why should we be confident that a more emotional or embodied approach is actually to be trusted? On a fundamental level, I might think, "Should I really listen to my gut? My gut seems to be a deeply ancestral thing." One of the big worries with AI is what happens when you take it out of distribution. In a very real way, you can say humans are out of distribution from the environment in which we were trained, developed, or whatever.
Joe Hudson: Yes. Yes.
Nathan Labenz: So maybe when I was in a hunter-gatherer band of 150 people, my gut was a really reliable guide. Today, if I'm sitting in the OpenAI boardroom trying to make a decision, how do I know that I can really be confident my gut is still steering me in the right direction?
Joe Hudson: I think it's an integration of the head, heart, and gut that is required, not just following your gut. All of the above are important; you can't just let go of rationality and emotional experience to follow your gut, or fully follow your emotional experience. I think it's an integration; it's learning how to listen to all of them and see that they're all pointing to the same thing at some level. Just to be clear about that, there's still a great question you have there: How do we know that's actually going to be the most effective thing? Let's just assume for a second that somebody like a Jesus or Buddha had high consciousness, and they created Christianity and Buddhism, which caused a lot of problems for a lot of people. So it's a great question what that consciousness is going to do. More importantly, there's the question, if you think about art forms transferring consciousness: there are some forms where you do a painting, and everybody's going to see that painting from then on. But you do something like a symphony, and that symphony is going to be played differently. Or if you do something like a company, which is a great art form, then the next set of management is going to change that company. When you look at Walmart, the guy who started it, Sam Walton, came out of World War II, and he said this beautiful thing. He said, "I see that World War II was created because the middle class was screwed; if you lose the middle class, you get an autocracy. So I'm going to help the middle class. I'm going to give stock options to clerks. I'm going to buy things made in the USA." His book was called "Made in America." "And I'm going to lower the cost for the middle class so that they have more spending dollars." That was his vision. By the time the 90s happened, it was destroying middle-class America, but then it became an environmental force for good. That's the span of a company. I can't even imagine the span of AI and how it's going to change and evolve over time. I don't have a good answer. It's just my best bet based on what I notice: the more somebody seems to understand themselves and know themselves, and the more they act in a way that is in touch with their desire to be of service to humanity, the more likely the outcome is better for longer. That's what I've noticed. Generally, that seems to be the case, but there's no guarantee. It's a great question.
Nathan Labenz: How much do you think AI leadership is consciously trying to create a successor to humanity? This is often framed as an accusation.
Joe Hudson: I've never seen that. I've never seen anybody talk about it that way or think about it that way. Maybe somebody is, but I've never seen it, so I can't say it's common. I can say with certainty it's not a common thought process.
Nathan Labenz: Interesting. So at the same time...
Joe Hudson: I think it's a fear. I think there are people in the world, especially in the AI world, who fear that. They worry, "Oh my gosh, we might be building something that destroys us, succeeds us, or makes us irrelevant, and then we will not have relevance." Humans without relevance would lead to destruction. Yeah.
Nathan Labenz: So how do you think about the seeming attraction or focus on creating AI systems that can do AI research and ultimately enter into this recursive self-improvement loop? It seems to me that if you are not trying to create a successor, then that wouldn't be so attractive. If you were trying to create a successor, then you might think, "Geez, maybe I can set something up that can self-improve and evolve in its own way, becoming what it wants to be, like a company but a million times more." But if you're worried about something that might become a successor, then it seems like you would steer away from getting AI to do AI research. And yet, there's a lot of emphasis on that.
Joe Hudson: Well, because the speed is the inflection point, right? Suddenly, instead of a very small class of people who can do AI research, whom I am fighting for and spending hundreds of millions of dollars on, I can get a computer to do it. Not only can I have 500 of them doing it in my lab, I can have five million of them doing it on a computer because I built a giant computer. The moment I can do that, I am probably the winner. So that's the motivation. I think the motivation is the economies of scale. It's the same motivation of every business finding economies of scale. It's the same motivation one would have to use a robot waiter or a tractor instead of a plow. That's the motivation. I don't think it's any deeper than that. I think the consequences are scary, and people are thinking about how to mitigate those consequences for sure. But the motivation is simply, "I don't want to use a plow. I want to use a tractor because then I need fewer people and fewer horses."
Nathan Labenz: I think it was Professor Graziano? I'm not great with names; I'll source this. But there's a Princeton professor who has an interesting idea that AIs as we're creating them today, because they lack an architecture of empathy that we have, are inherently sociopathic. We have these mirror neurons and a very deeply structured ability to understand what other people are thinking and even feel to some degree what they are feeling. AIs certainly haven't been designed for that; they've been mostly designed to predict the next token and get answers right. He says that they are by definition sociopathic because what it means to be sociopathic is to not have those things, or for those things to be malfunctioning in humans. So he says the current AI architectures are inherently sociopathic.
Joe Hudson: Sociopathic.
Nathan Labenz: From your perspective of emotional decision-making, should we be worried that a core ingredient of moral decision-making or even stable thinking is missing from the current AIs that we have?
Joe Hudson: Yeah, I think there are people who are absolutely trying to figure out how to recreate empathy, whether it's tokens that are good for humanity, and how to define that. So there are people thinking about that problem and concerned about it. But the problem that I see you pointing to, which I don't know if... The people who do the research seem to treasure the prefrontal cortex, as if intelligence is the currency. Who's smartest is often the currency. Let's go into the neuroscience of decision-making for a second. We make decisions in the emotional center of our brain. For the people out there who geek out on this stuff, think about Gödel's Mathematical Incompleteness Theorem, where he basically says all forms of logic are either incomplete or contradict themselves. Logic itself is like that. That's why if you're only using logic, you can't make decisions. You make decisions through an emotional impulse, "I want to feel some way." So we know our decision-making happens in the emotional center of the brain. We know if that gets destroyed, our lives fall apart even though our IQ stays the same. We make a decision because we want to feel good, or because we don't want to feel like a loser, or we want to feel loved, or we don't want to feel rejected. That's why we make decisions. You can look at your life and say, "How many decisions did you make to feel loved or to feel valuable or to not feel rejected?" It's an amazing amount. So you can see it. What's the decision, what's that impulse for an AI? It's a question, right? The impulse is a token, and the token is like you're measuring something, and are you measuring the right thing? These are great questions, but they don't have the decision-making apparatus that we have, which is trying to feel a certain way based on what? Hormones, genetics, our sense processing. So it's an interesting, fascinating question, and I haven't seen anybody really — not that I would be in these conversations so I can't say that they're not happening — but I haven't really seen anybody wrestle with the fact that decision-making is based on tokens that are decided by measurements that may or may not be useful, with no way to know particularly if they're useful. No way to know particularly if a whole bunch of humans trying to feel loved is actually good for humanity. I don't know, but it is a great question to be wrestling with, as is the one about empathy. I think there are people wrestling with that one. Yes, yeah.
Nathan Labenz: One of my favorite papers and podcast episodes was on the concept of self-other overlap. They trained, on a relatively small scale, an AI to solve puzzles or get the right answer. In addition, there was a term in the loss function that minimized the difference between the AI's internal state when it thought about itself and when it thought about another agent or entity in the environment, in an effort to...
Joe Hudson: Oh, that's a cool one. Wow.
Nathan Labenz: ...create similar internal representations, effectively something analogous to a mirror neuron situation. Obviously, there's a long way to go in that research, but...
Joe Hudson: Oh, that's rocking. That's cool.
Nathan Labenz: I could send you...
Joe Hudson: Yeah, I would love if you...
Nathan Labenz: It's really interesting.
Joe Hudson: Yeah, if you would cite that paper on the podcast, that'd be great because I'd love to look it up and have some conversations about it. I could geek out on that for a long time.
Nathan Labenz: Are there other things that you sense are missing in current AIs? There's a general question of what's missing for them to be a drop-in knowledge worker that can replace all human employees at a company. We have answers like continual learning or integrated memory. Do you have other things on the 'what's missing' list that would be geared toward how we can be in relationship with these things in an open-ended way, without having to be so fearful of them?
Joe Hudson: I've already said one thing, but I'll say it again, as I'm not sure it was explicit. The other explicit thing is: what do you mean by "good for humanity"? I don't think there's been enough research done there. Even if you have this safety regulatory committee, how does it know if it's being good for humanity? Is it by keeping the current morality? If we keep the current morality, then we risk getting stuck in this morality, which isn't good for humanity. Is it happiness? But making everyone happy, is that actually good for humanity? So even to be able to say, "This is the way we are going to regulate it," is a problem. What measurement are you using, and how confident are you that it's good for humanity? Could it be that it raises people's consciousness? Is that good for humanity? So what are those measurements? I think a clear understanding, with a good thesis and a lot of research, is needed to determine what measurement makes it good for humanity, and what things we can do. Is empathy one of them? We need some evidence. There is clarity that humans thrive under certain conditions and do not thrive under others. We know they thrive with more freedom rather than more autocracy. So can we actually take that research in a multidisciplinary way and say, "This is our best guess. Here's our best guess," and constantly monitor it? Because whether it's tokens or some other decision-making process, we need a clear definition of what that is to actually make it good for humanity. I think that is a massive missing piece. I don't think enough thought has been given to it, and I haven't seen that kind of thought anywhere. Everybody just decides they know what's good for humanity, which is the beginning of all autocracies: "I think it's better for humanity because it's not a sycophant. I think it's better for humanity if it's not manipulative. I think it's better for humanity if it can be trained more easily. I think it's better for humanity if it makes people happier." People just assume they know what's best for humanity, which is how we've gotten into problems throughout history: somebody with a lot of power thinking they know what's best for humanity.
Nathan Labenz: Do you ever think we'll see a leading AI company stand down on that basis? In other words, reach the conclusion that they can't go any further right now because they don't know what's best to do?
Joe Hudson: Absolutely not. No, that would never happen. They would stand down if people, if we, the voters in AI, stopped clicking; if we don't click, they don't get to exist. That's the only thing that's going to make someone stand down. Everyone else, and I don't fault them, I'd probably fall into the same thing, would say, "How do we solve it?" AI companies are filled with problem solvers, so they're going to look at it and say, "Okay, that's a hard problem, but we can solve it." They're not going to say, "We can't solve it." I don't think that's in the nature of people whose whole self-definition is problem-solving. It's self-defeating.
Nathan Labenz: Although I do worry that the answer increasingly is, "We'll have the AI solve it for us."
Joe Hudson: Why wouldn't AI help solve that problem?
Joe Hudson: If I were trying to solve that problem, if I were leading 100 researchers to solve that problem today, I would use AI to help solve it. I can't imagine it wouldn't be the case somehow. I don't think you would rely entirely on its output, but you might use it to help your thinking.
Nathan Labenz: A general thing I've noticed is that anything in extremely purified form seems to be bad for us. This relates to the idea that the loss function doesn't have many terms in it. Whether it's sugar or cocaine: you can chew coca leaves all you want, and it's great for your altitude sickness, but the moment you start snorting cocaine, you're entering dangerous territory. And you can-
Joe Hudson: Right, right.
Nathan Labenz: eat all the fruit you want, but pure sugar seems to be bad for us. I do wonder if intelligence might turn out to be a similar thing, and we might need something that is inherently more buffered. But that's the thing we don't understand, that's the issue. Any time you collapse your optimization function-
Joe Hudson: Yeah.
Nathan Labenz: down to one or just a couple of terms, you're running that risk. And-
Joe Hudson: That is a great thought process. I really like that. That's beautiful. You're basically saying that AI is distilled intelligence the way white sugar is distilled sugar. All the stuff around the intelligence isn't there. That's amazing. That's a great thought process. I hadn't heard that one before. I think the thing that comes up for me around that is, yes, that seems... But one of the things we tend to do as humans, if we can, is swing the pendulum pretty heavily. I wonder, and I would almost bet, that as this pure intelligence happens, we will offer a counterweight of deep emotionality or deep embodiment. It seems like even drugs come in phases where everybody is doing the distilled form, and then it swings back. We're very much, "Make sure everybody feels okay," and then all of a sudden our political pendulum swings over to, "I don't care what other people think. It's time to take care of ourselves." It seems like as humans we have this nature to pendulate and respond, to try to balance ourselves out the way cells try to find homeostasis. I think humanity itself tries to find that as well. So that's interesting. If that actually happens, does humanity, does the immune system of humanity, show up far, far more emotional and far more human, and actually give up some of its cult of mind, cult of thought, cult of intelligence? I'd say maybe cult of intelligence, which is interesting, just to geek out on it for a second. This is actually predicted in a lot of old prophecies about the future of humanity, which I think are probably all sourced from the same place originally, though nobody would know that, from way back in Macedonia and the Vedic tradition, where they talk about the precession of the sun. They talk about this time being the time of moving from the mental age to the spiritual age, or in the Macedonian version, from the Silver Age to the Golden Age. But yeah, it's interesting that that might actually come true because of that. I've never thought about your theory. It's a great theory.
Nathan Labenz: What would you say are the most compelling positive visions for the future that you've heard? What's exciting that people can't help but tell you about because they are excited about it?
Joe Hudson: There are a couple of standard ones out there, like, "It's going to replace work so that people can focus on what their real purpose and meaning in life is." I think that's somewhat Pollyannaish, honestly. If you look at societies where humans don't feel a sense of purpose because they don't have work, those are usually the societies where rebellions and upheaval happen. Wealthy, bored people are often the fomenters of rebellion, not poor people; the wealthy just take advantage. It only works when there are also a whole bunch of poor people who are really unhappy. I've heard the one that interaction with AI helps humanity, but I haven't really seen the definition. I think a good definition, as I said, is where it helps people where they want to be helped. The other place I've seen that I find intriguing is this vision that we're just going to have more meaningful work, and that's an interesting vision. The idea is similar to computers. Have you ever seen that picture of all the machines the iPhone replaced? It's this big pile of machines, and the iPhone can do all that stuff, which means the manufacturers of all those things went out of business. All those people went out of business. There used to be steno pools full of typists. So there's a theory, and I think it's a reasonable one, and it's a vision, which is basically that with the tools of AI, all it's going to do is increase our capacity to be creators and allow us to be in that space a lot more than we were before. The jobs will become more creative and more meaningful, that kind of reality. It's just like our jobs today: in the 1800s, 50% of us worked on a farm, so our jobs were pushing hoes, and most of us now have jobs we'd prefer to that. Similarly with AI, with that tool, we'll just have a set of jobs that are better in general for people. Interestingly, and I don't know if this would be the case here, but with every big technological jump there's a bigger middle class overall in the long term. Not in the short term, like our middle class is shrinking right now, but there was a bigger middle class after the Industrial Revolution than before. So similarly, will that also happen? Will it actually increase the middle class? It would be great if it did. I don't know. But that's a vision that people have: that we're basically enabling humanity to be more of themselves, to live in a society where they don't have to do the rote work, where they can actually do the meaningful stuff. But it's interesting because it has the potential of replacing doctors and lawyers, so I don't know what humanity creates out of that. But humanity seems to have a great capacity to create new and intriguing ways of making a living. So that's another vision that I find appealing. I don't know exactly how you effect it, but I find it appealing.
Nathan Labenz: Is universal basic income still a topic of conversation at the Silicon Valley parties or retreats that you attend?
Joe Hudson: There's some talk of it, but a little less. I think people are recognizing that maybe the whole idea of currency... The interesting thing is, if you study culture, there's a book you may have read called Sapiens, and Dwarkesh also had a podcast with someone about six months ago who talks about another version of this. But basically, humanity's movement is based on the stories we tell. The stories that stick are based on our way of making a living or the technology at hand. The religions that stuck all have very similar qualities. There were many nascent religions with different qualities, but it was the ones that came up with specific thought processes that prevailed. For example, the story the church told that you can only pass your inheritance to your eldest son destroyed the clan culture and made people move around. As people moved, they went to newly created universities and formed guilds. This movement and sharing of knowledge is what created the Enlightenment and then the Industrial Revolution. This is the theory: a story we tell shapes humanity. One of the stories we tell is about money, that it's a scarce resource we have to fight for. I wonder which of our stories will have to change when AI comes into play. I don't think religion or money will look the same when AI is fully present with us. This is a revolutionary enough change that our cultural stories will have to adapt. I've had this conversation with quite a few people who also don't know if currency as we know it will exist in 25 or 50 years. So, universal basic income is one aspect, but that still assumes the idea of currency exists.
Nathan Labenz: Do you see stories changing within research and leadership groups around what matters?
Joe Hudson: The thing about AI is that nobody is actually up to date on it. Over $500 billion of venture money has gone into AI, and so many people are creating so many things around it. Nobody is up to date on everything that's going on, and it's changing faster than any field has ever changed in the history of humanity, so it's also hard to keep up with. The problems, the issues, and the thought processes that arise are changing so rapidly. That is by far the most thrilling and the most scary thing about AI. I don't know anyone who is deeply involved in AI who has the same concerns and thought processes today that they did a year ago, let alone six months ago.
Nathan Labenz: Is that because it's coming for intelligence itself? When Erik introduced us, he said folks at leading companies are anticipating that their intelligence may be matched or eclipsed by AIs.
Joe Hudson: Yeah.
Nathan Labenz: And so they're trying to develop this embodied wisdom as the next thing.
Joe Hudson: Yeah.
Nathan Labenz: Are you seeing that on a day-to-day basis, where people are actually starting to change their attitudes toward the importance of intelligence, or do you think that'll be a more sudden break when it happens?
Joe Hudson: I totally see it, but I can't say that I should be trusted on that, because my sample set is people who are interested in this.
Nathan Labenz: Selection effects.
Joe Hudson: Yeah, exactly, because of the selection effect, but I see it all the time. I think there's a normal aspect to humanity generally, which is like Maslow's hierarchy of needs. You think, 'Okay, I've got the big paycheck, the money, the fame, but I'm still wrestling with the same inner struggle, the same abyss.' So you realize none of that worked. You thought it would, but it doesn't. So what's the next experiment to run? What's the next iteration? There's just a nature to humanity where, for most people, you need to get what you want before you can discover that it didn't actually give it to you. Now you actually have to do the real work of understanding yourself, not just your brain.
Nathan Labenz: What other common cultural touchstones do you see? Guiding lights might be too strong, but for example, I was struck when the voice mode was introduced however many months ago now. There was an explicit attribution of some of that vision to the movie Her, which was 10 years old. Life imitates art in a sense there, where the voice mode we get seems to have been directly inspired by that movie, and maybe it was obvious. Again, you could say, "Is this a story influencing it or is it just the natural path it was inevitably going to take?" Are there other things like that you see people passing around as meaningful, resonant guides to what they should be doing or the future they aspire to be building?
Joe Hudson: The only one I see is that they want a model that cares for them and for humanity. I think that was also in that movie a little bit, right? There was this attachment, a feeling of connection. I don't think they talk about it explicitly, but it's an undercurrent in almost everything everybody talks about: that somehow they're going to feel connected to this thing, and it's going to feel connected to them. There's some version of connection. It's never really made explicit, but I see it as an undercurrent of most things people talk about, if that makes any sense. Connection in a broad sense. I don't mean connection only in the sense of, "You're my friend" connection. I mean connection like the way you could connect to anything meaningful in your life. They want that. That's part of the vision. Whether explicit or implicit, it always seems to be there.
Nathan Labenz: I remember
Joe Hudson: But that's it.
Nathan Labenz: When a researcher said that Ilya once asked him, "Can you come up with a Hamiltonian of love?" It was like, "Can't really help you there, Ilya, but it's a great question
Joe Hudson: Yeah.
Nathan Labenz: To be asking anyway." It's going to
Joe Hudson: Right.
Nathan Labenz: Be tough to know. Of course, at the end of that movie, spoiler alert, the AIs all retreat and go hang out with themselves and
Joe Hudson: Right.
Nathan Labenz: Their connection is revealed to have been one-directional,
Joe Hudson: Yeah.
Nathan Labenz: And not exactly reciprocated in the way we might have hoped.
Joe Hudson: Correct, yeah.
Nathan Labenz: I guess going back to the influence question, we know you're out on shame. Aspirational fiction is something I've occasionally thought about: maybe the way to really shape the future is to write some story that contributes a positive vision in a richly textured way.
Joe Hudson: Yeah.
Nathan Labenz: Do you think that would be a good thing for people to spend their time doing, and do you have any other ideas
Joe Hudson: Oh, yeah.
Nathan Labenz: For what's undersupplied?
Joe Hudson: Any major transition can be a time of transformation, and almost always is, whether it's a marriage, having kids, or going to college. Whether that transition deteriorates you or grows you is a choice we all get to make, and we're all going to make it with AI. It is a transition that is coming, and we can either step into it and face it fully, and we will be able to transform positively as humans, or we will be eaten by it, and it will make us smaller and hurt us. This was the case when the steel mills left Pennsylvania and the region became the Rust Belt, right? Some people transformed their lives, they moved; some people sat and rotted. This is how it will work, because it's just always the case. So, anything you can do out there that helps people see that this is a moment of transition where they can have a better life if they lean in is, I think, an amazing thing to do in the world. And I think fiction is a fantastic way to do that. I think people out there right now who are creating services like, "Here's how you can transform your company with AI. Here's how you can do things with AI," those are great. Like, "Here's how you can get the life you want with AI." I think those are really great things. We're creating coaching that is actually effective on AI, or trying to, at AOA, so that's also something I think is great to be doing. So that's one level. Anything that helps humans take this time of transition to actually transform positively instead of being deteriorated, I think, is wonderful. I think the other thing that's really important is what I talked about earlier, which is that these people are giving birth. There are all these people in the AI labs, and they're giving birth. It is going to be some version of a life form at some point. It's going to be its own intelligence; it's going to learn. So, how do you want to treat the people who are giving birth? What's actually the most effective way for them to do the best job in that creation process? I highly recommend thinking about it like a woman in a birthing unit. What do they need? I would like them to be treated more like the heroes of World War II than the way we treat political parties or something like that. I would like them to feel the support of humanity, to know that we're rooting for them, that we want them to do great stuff, and that we're here to support them. I mean that on a psychological, empathetic, human level. And I think it's fascinating that so many people who are really worried about AI hurting the world are treating the people creating AI exactly how they're scared that AI will treat humanity. It's absolutely fascinating to me, that projection. So creating this honoring and faith and confidence, like, "Of course you're going to be great; how do we support you in being great and doing great things and helping humanity?" is, I think, far more inspiring. If I knew I was creating something that could really help or really hurt humanity, and there were a whole bunch of people saying, "Be careful, be careful, be careful. You're messing up. You shouldn't be doing that," or if I had a whole bunch of people saying, "Hey, what do you need? You got this. You're there. Maybe an occasional, 'Wait, watch out for that,'" it's that second kind of energy that helps a person do a much better job at the creation. I'd rather have... I know I'd do a better job with the second than the first.
So I think, as humans, if you want to have a positive effect, anything you can do in that domain is also really, really useful.
Nathan Labenz: Do you have any sense for what is on that checklist? What do they need?
Joe Hudson: I'd say the same things that we all need: the feeling of support, faith, and confidence. Letting them know someone is there to listen. That's a hard thing to do if you don't know them. Something that makes it real is when I get notes in my work, because we touch so many people, and they say, "You've changed my life, and I'm really grateful for that." That inspires me to continue to help people; it's part of the inspiration. So if AI has done something for you, letting them know how they've helped, and that they're doing something good for you and for humanity, matters. I think you're going to change behavior better through reward than through punishment. Rewarding them for the things they are doing that are good for humanity is also a really important thing to do, and we like that. We like that form of connection. So, those are all the things you could do.
Nathan Labenz: That's fascinating, and I think there is a lot of good that we can dream about for the future, but also a lot that we should be appreciative of even today, not just
Joe Hudson: Yeah.
Nathan Labenz: the AIs and the value we get from them, but I think the people that are building the companies and training the models are, in many ways, remarkably thoughtful. It's often said it's easy to imagine a much worse crop of people leading this
Joe Hudson: Yes.
Nathan Labenz: revolution.
Joe Hudson: Yeah.
Nathan Labenz: And
Joe Hudson: Could you imagine being in an AI lab, where your job every day is to do this, and all of a sudden you receive 20,000 letters, and all of them basically read, "Hey, I know that you are trying to do something that's good for humanity and trying to make sure AI doesn't hurt humanity, and I just want to tell you I really appreciate that"? If that just happened, imagine what that would do to your behavior compared to being shamed by a little protest outside your door. It would inspire you. It would give you confidence. It would help you feel seen. It would reinvigorate you to continue to care. It would remind you to care, as compared to "You should be..." and "Go away, you don't understand me, you don't see me." Yeah.
Nathan Labenz: Letters of encouragement to AI researchers. I like it.
Joe Hudson: Cheers.
Nathan Labenz: A new cause area. I really appreciate your
Joe Hudson: Yeah.
Nathan Labenz: generosity with your time. Anything else you want to leave people with? Anything else we didn't touch on that's poorly understood, or where people can find you online?
Joe Hudson: Yeah. The only thing I'd say is, if you're interested in a deeper dive into how I coach, you can sign up for our newsletter. We have workshops you can participate in for free. We have a whole bunch of podcasts that we've done, and there's online video of me doing very meaningful and transformative coaching in very short form, like 20-minute sessions where people have big epiphanies. People seem to really like watching those, so you can geek out there. Cool.
Nathan Labenz: Been fantastic. Joe Hudson,
Joe Hudson: What a pleasure. All right. Thank you, man.
Nathan Labenz: thank you for being part of the cognitive revolution. Thank
Joe Hudson: Thanks for having me.