Introduction

Hello, and welcome back to the Cognitive Revolution!

Today – just back from The Curve in Berkeley, where I had so many amazing conversations that I ended up losing my voice – I'm pleased to share an exploratory conversation about the impact of AI on education, with Johan Falk, author, speaker, and AI analyst based in Stockholm, Sweden, who spent several years as a classroom teacher and at Sweden's National Agency for Education before pivoting to focus on AI in education full-time in the wake of ChatGPT. Today, he's making videos to help teachers use AI in their work and classrooms, which you can find on his new Substack, Graspable AI, at graspableai.substack.com.

The context for this conversation is that I was recently invited to give a keynote to an audience of 500 public school administrators in my home state of Michigan, where my kids are now in first grade and pre-school in the Montessori program of our neighborhood elementary school.

In another timeline, I could easily imagine myself being a sort of tiger dad when it comes to my kids' education, but in this timeline, with the long-term outlook for the job market and the structure of society subject to such radical uncertainty, I had for the most part been letting my kids be kids and watching how things develop before committing to a specific strategy for their education.

And so, as I've prepared for that talk – including by requesting ChatGPT research reports, trying various learning apps for kids, and talking to teachers and principals in my personal network – I've been struck by just how perplexing the challenge that AI presents to educators really is.  

AI, of course, offers unprecedented access to information and unlimited feedback & 1:1 tutoring, as we learned in our recent episode with Mackenzie Price of Alpha School, but at the same time enables cheating like never before, while also raising fundamental questions about the very purpose of education itself. 

To better understand how educators in different contexts are approaching these challenges, I invited Johan to help me explore the many facets of the relationship between AI and education, and we ended up covering a lot of ground, including:

While we end, as we started, with more questions than answers, I found Johan's frameworks extremely helpful for organizing my thinking, and I hope you enjoy this wide-ranging exploration of AI's impact on education, with Johan Falk, author of Graspable AI.


Main Episode

speaker_0: Johan Falk, author, speaker, and AI analyst, welcome to the Cognitive Revolution.

speaker_1: Thank you. Thrilled to be here.

speaker_0: I'm excited for this conversation as well. So you're coming to us from Stockholm, Sweden. And you, like me, have kind of managed to make a full-time job out of trying to keep up with AI. Which is something that not too many people have been able to do. So we're in a very privileged, I would say, small minority who have the luxury of spending so much time really thinking about, like, what matters in this space. And trying to translate that to people who do have full-time jobs that, uh, keep them from keeping up with the increasingly dizzying pace of events. For starters, you know, we're ob- obviously primarily gonna focus today on the impact of AI on education, although I'm sure there'll be some digressions. But for starters, you wanna just kinda introduce yourself? Tell us, like, who you are, what do you do, who do you do it for?

speaker_1: Yeah, sure. Uh, so I, I have a mixed background. I'm a physicist originally by training. I was, uh, also spent three years as a science journalist, web developer, then became a teacher. Finally decided to go into education, which I'd been longing to do for a decade or something. I spent a few years as a teacher and then went on to the National, uh, Agency for Education here in Sweden. Fully engaged in improving mathematics education in this country, which I managed pretty well. And then ChatGPT was launched and everything was turned upside down. I s- uh, initiated a team at the agency working with AI in education. And, uh, after 18 months with that, I decided to leave the education, uh, the agency to focus on AI more broadly because I think it's, it is such a huge question for the, well, the society and the entire world. And right now, I'm looking for a way to, to have a positive impact on AI risks or AI safety. And, uh, running my own business while looking for a way to do that.

speaker_0: Yeah, cool. Well, we're definitely very kindred spirits in that respect. Um, obviously Sweden is a quite different country, quite different context and quite different philosophy from what I'm used to here in the United States. Super big picture, I mean, you know, a couple things that kind of keep ringing in my head right now are, first of all, Sam Altman saying that his kid who was just born is going to grow up in a world where they're never smarter than AI. And-

speaker_1: Yeah

speaker_0: ... I think that's like a really, you know, kind of striking reality, right? For, for us it's, it's sort of happening in our lifetimes, but already people are born who are never gonna be smarter than AI. That's pretty wild. There's also, of course, the Dario forecast-

speaker_1: Mm-hmm

speaker_0: ... that we might see very significant labor market disruption in the not too distant future. I would say in the US it's often a tacit assumption, but the working assumption behind education for the most part is that it's about teaching you to be economically productive so that when you're done with the education, you can enter the workforce, you can make a good living for yourself and make a contribution to society.

speaker_1: Yeah, to, to the GDP.

speaker_0: Yeah. Imperfect and incomplete as that measure is, and all the more so all the time, how do people in Sweden think about it? Is it the same? Is it different? And is it starting to change in your mind and potentially-

speaker_1: Mm-hmm

speaker_0: ... in society more broadly in light of AI?

speaker_1: Yeah, I mean because AI forces us to discuss these questions, i- i- it puts most things into perspective, not least education. And I would say in Sweden we have, and basically all over the world I would say, there are more or less three different perspectives on education. We have the supplying, um, competence for the labor market that you talk about. We also have the kind of fostering citizens, that is providing them with the knowledge required to be a functional and responsible citizen, but also the values we have in society. And then we have the third part, which would be just growing as a person. Learning arts and stuff that isn't productive, isn't really necessary in any functional way, but it's good for you, good for the world because, well, when you feel good, it's good for the society, things like that. So I would say those three parts are more or less the ingredients in, in the purpose of education in every country.

speaker_0: Yeah, it is striking that the, the latter two certainly here in the US are just radically more contested than the first. You know, it seems like we have a pretty clear sense of what it means to be employable anyway, right? Like, you can either get a job or you can't-

speaker_1: Mm-hmm

speaker_0: ... so there's some sort of ground truth that emerges there. When we think about making you a good citizen, that obviously raises the question of, well, what is a good citizen? And when you think about-

speaker_1: Mm-hmm

speaker_0: ... growing as a person, it's like, well, in what ways and with what values toward, you know, toward whose measure of, you know, what a good person should be? I imagine that those ideas are probably a little less contested in Sweden, but maybe not.

speaker_1: Yeah. Yeah, yeah, I would say so. Um, well, I, I haven't lived in the United States. I've been there once or twice, but my impression is that in, in the US you have much more of a culture of competition than we have in Sweden. We have, well, not communist, but we have-

speaker_0: Mm-hmm

speaker_1: ... a strong s- social democratic culture in Sweden. We have high taxes. We have free, uh, um, healthcare. Well, to s- to a certain extent, free e- education. Even higher education is free in Sweden. So we have much more of a strong community, strong, well, common good in our society, which I guess lowers the competition part of education and just there's a bit lower competition in society overall, I guess.

speaker_0: So do you see any change if only in yourself in terms of how you think about the purpose of education or what you are trying, you know, as you advise school systems? Like, are you reframing in your own mind-

speaker_1: Yeah

speaker_0: ... what the purpose of the whole enterprise is?

speaker_1: Yeah, I am. I think this first part, the providing the labor market with competence is... Well, it should be questioned... Well, it, it depends so much. I mean, we don't know where AI is taking us. Eh, and you might think there's a 2% chance that we have a radically different future or we have a 99% chance of a radically different future. But, eh, say if we are moving towards a future with UBI, that means that, that competence for the labor market is just, or basically, irrelevant. And the, uh, schools should be focusing much more on how do I as a person, as, as a human feel healthy, well, wellbeing, stuff like that. We have science for doing that as well. But, the parts of... Well, it's not very much taught in schools nowadays. And you could also question, well, how much should we try to reorganize schools today when we don't know the future, and also how much... I mean, I have a kid, two kids, one is seven, one is 10 years old, they are gonna spend roughly 10 more years in school, and I don't have any idea how the world and society will look when they are done in school. And should... Uh, well, I know that everyone needs to learn how to read and write, how to understand themselves, to cooperate with others, things like that. But I mean, solving quadratic equations, I don't know. Learning new languages, I think that might be good, useful. But is it something that is necessary? Probably not. Yeah. So much is unknown right now, and I think the most important thing that we can learn is to increase the agility in the educational system. 'Cause that's the... One of the things I keep coming back to when I, I, I think about and analyze what A- AI is doing to education and society is the pace of change is a huge challenge. It takes years to change a curriculum. For example, five years is rather quick. And if you look at five years in the AI world, it's massive changes.
So that means if we want to tackle the changes that AI brings about, we need to be much more agile than we are today. And that's a real challenge. I don't know how to do that.

speaker_0: Yeah. That-

speaker_1: Yeah, that's a long answer to, uh, to, uh, a short question. That's starting somewhere.

speaker_0: Well, we've got time for the long answer, so don't, um, don't worry-

speaker_1: Mm-hmm

speaker_0: ... about that at all. The pace of change challenge is, is real for everyone. You know, it's even real at technology startups. You know, I've experienced multiple instances where we've been working on, you know, some new feature, some new capability, whatever, for the product. And the way we are building it becomes essentially obsolete before we're even able to launch it, and then you're faced with this weird... And that, that's usually, like, a couple of months at most, you know, from feature sort of starting, you know, development to actually getting online for users. And it creates, you know, the... Probably same challenges that are familiar everywhere, where it's like, "Well, we're kinda far along on this." You know, sunk cost-

speaker_1: Yeah

speaker_0: ... fallacy, attachment, should we rip it all out, one unknown. You know, then there's, there's always the question too of, like, well, this new way seems better but, you know, we haven't really stress tested it, whereas we've kinda gotten, you know, comfortable with the paradigm that we were building on over here, so what do we do, you know? It just... All, all too often, I think, even in technology startups, there's like a reluctance to switch paradigms because of all these different reasons, which largely kinda boil down to emotional reasons. Some... You could also say, like, risk management is in a non-emotional-

speaker_1: Mm-hmm

speaker_0: ... way is, like, part of that as well. But yeah, it's only-

speaker_1: And structural, structural slowness, just the inertia of, of big, uh, organizations or small organizations.

speaker_0: Yeah, it's an order of magnitude harder, maybe two orders of magnitude harder at, at different kinds of organizations. So I guess... Well, maybe let's go right to that. Like, is anybody doing a good job of this today? You know, I've got a, um, invitation coming up to speak to a bunch of public school administrators in my home state of Michigan, and one of the things that I've been thinking about with them is they might need to rethink their procurement methods. You know? And by the way, the same thing is happening at, like, the Pentagon, DARPA. I know that there's major anxiety there where it's like usually we, you know, go through this whole super long process, and now we don't necessarily know what's... There could even be a new entity that doesn't even exist yet that we wanna be buying from in a year's-

speaker_1: Mm-hmm

speaker_0: ... time. Like, how do we even conceptualize that?

speaker_1: Yeah.

speaker_0: Is there anybody that you've seen in the education sector, or maybe a little more broadly in the public sector, who has figured out a workable model for keeping up with the pace i- in any respect?

speaker_1: Well, uh, no basically, but I've done some research looking at where in the world we have seen some good examples of what to do. And South Korea and Singapore are doing a good job of implementing AI in education. They have strong top-down incentives and approaches to just making AI happen. I think it is Singapore that also has good standards for data in education that makes it easy or easier to apply AI to a lot of things happening in, in schools and education. There are some other... I think... Well, the US is partly a good example. You have Khanmigo from Khan Academy that is being rolled out to, to... They were saying that it is going to scale up to a million students and teachers, which is pretty good. And, uh, I think it was Microsoft giving money to make that happen or- Well, making the economical stuff happen. China is doing a pretty good job. They have been a- actually working with AI in curriculum for quite some time. I don't ev- know if it's 10, 15 or 20 years. But now they have... Well, that has started to become reality. They also have something s- kind of similar to Khanmigo. It's called Squirrel in, in China. Obviously a bit different also when it comes to values. And Estonia is a good example. It's a small country. They are agile, which is... I think it is crucial. They also have good data in- infrastructure in Estonia, so they have a good platform to build on. And they rolled out AI support for learning. But it's... I mean, everyone is, eh, wrestling with this, trying to understand what to do and how to do it, and the technology is moving so fast that we can't really know what's working or not because all the studies we get are, like, two years old when they are published. And it's also difficult be- because AI is so many different things. I'm mostly... I'm often focused on the language-based AI. I think that's the most transformative thing happening right now. That is the thinking machines.
I mean, in neural networks, machine learning, fine, you can apply it to many different things and that is important and it changes a lot of things, but language-based AI is advancing so rapidly and it's also being adopted rapidly compared to, I don't know, applying neural networks to educational data, for example. So, um... But some of the studies, well, concern using AI in more traditional ways, and that is not as interesting, I think, because it's not moving w- with the same pace. And I think the... Well, if I were to pick out s- some things that I think are important at some kind of national l- level, it would be that you have a strong strategic initiative from top down saying, "This is something we should do. Everyone should use AI," or, "Every teacher should learn AI in schools." Give resources, provide resources for actually doing that. Give some kind of clear guidelines or clear guidance saying this is okay, that is okay, and these two things we shouldn't do; for example, it's prohibited to use learning data for other purposes than education. That's something I think is done in South Korea. And then, also just being bold. Eh, non-action is a great risk right now because you miss out on so many opportunities and you risk e- ending up on the wrong side of the digital divide if you're not acting now. And many countries are, well, not acting because they don't really know what to do. That's quite understandable, but doing something and improving along the way is better than doing nothing, I would say.

speaker_0: So for one thing, we can always take inspiration from Estonia. I have been amazed on multiple different levels with the quality of... And of course, Singapore and, in many cases, South Korea as well. But Estonia, I think, is a little bit of a hidden gem when it comes to good governance, having really, in many cases, like, pioneered actually really interesting paradigms. You mentioned there, of course, the, you know, fundamental challenge of pace, and then the fact that, like, research, which obviously takes time, is often, you know, outdated when we get it. I do wanna come back though and just take a little survey, 'cause I know you've done the hard work of going through the literature that exists, and try to get a sense for like what research we do have, like what does it say?

speaker_1: Mm-hmm.

speaker_0: And then I definitely wanna unpack the guidelines that you would recommend in a little more detail as well, and that's, you know... We have all the time, uh, you need for that. B- maybe before we do those two things, like how do you organize... 'Cause what... I think another fundamental challenge that people have with something like AI, because it does touch everything... I experienced this, you know, in, in my own way where I'm like, "I'm trying to keep up with AI." Well, what does that mean? Well, increasingly it means a whole layer across the entire not just economy, but society, right? And now I'm like, "Well, geez, I really can't keep up with..." Sounds pretty ridiculous if you say, "I'm gonna try to keep up with everything going on in society." So-

speaker_1: Yeah.

speaker_0: But that's, like, kind of what trying to keep up with AI is converging to. So there's kind of no way to do that but to have some shortcuts, heuristics, you know, some sort of taxonomy of, like, what exactly does... what bucket, you know, does any given thing fall into. So I suspect that there's probably even a gap there for a lot of people. Like, when we think about AI in education, um, that can mean, i- uh, in itself, like, a lot of different things, right? It could mean the kids are using ChatGPT to cheat on their homework. It could mean, you know, personalized learning. It could mean whatever.

speaker_1: Yeah.

speaker_0: So how do you organize that in your own mind? How do you recommend people set up kind of a mental framework for breaking that down so they can then zoom in on the different categories? Or perhaps, you know, allocate them to different people in their organization if that's what they're trying to do?

speaker_1: Yeah, really good question, and for a long time it was just a big mix of things for me. But then some different themes emerged and, I mean, I'm calling it, eh, four different sides of a cube, which means, eh, we still lack tw- two of the sides, and that's... I think that's important to remember, that we don't have the full picture yet. But there are two aspects of AI in education that are often discussed, eh, and ta- take a lot of time for teachers and students and principals that I don't think are that important actually. The first one... Well, yeah, the first one is using AI for learning. That is the Khanmigo stuff, for example, but also just having an AI study buddy, things like that. That's one part. Using AI for learning. We'll get back to that, I guess. The second one is for teachers using AI just to save time with stuff they do outside the classroom, outside teaching, like using Excel but you have more powerful tools so you can get work done quicker with AI. Third thing, which is kind of... strangely silent, is teaching about AI. Having AI as part of a curriculum, helping students develop their AI competence, which I think is really important and really urgent because there are so many things that... and what... Kids have been li- uh, living in a world with chatbots now for almost three years. And, uh, in Sweden at least, an- and most countries I would say, we're still not helping them understand what this is, how you should relate to AI, a- and use it and not use it. And then the fourth aspect is the, well, system-level effects on education. How schools in themselves might change and what, uh, what education is might change, or how the role of teachers might change, how we use books or... Well, a lot of things might change, and we don't really know how, but that's one important bucket as well. So four different buckets, and two buckets we don't know what they are.

speaker_0: So which two of those four were the ones that don't matter so much?

speaker_1: The- the first two. Well, t- today, if- if looking at, uh, a- the state of AI in education today, I would say it's not that important using AI for education a- as a learning tool, which is a kind of strange thing to say, but I- I think I can back it up. And the second one is using AI as a tool for teachers outside the classroom. Well, those are the two most immediate things you come to think of when you think about AI in education. But when it comes to using AI as a learning tool, the tricky thing is that we don't really know how it works or if it's a good thing for learning. There are some studies that... well, coming more and more studies, a- and you have probably heard some of them saying that you can double the... well, half the time you need... learn twice as much in the same time when you use AI. And some of them are saying that, well, I- if you use AI, you don't... you- you learn less. And they are useful, those studies, but it's also when you dig deeper into them you realize that this study was based on 25 adults in Nigeria and it might not translate to my middle school class in Sweden, for example. Or this study showing that you learn less when you use AI was based on 18 people who were told to use a chatbot to write their essay. And, well, it's really difficult to generalize from that, and that means that we can't really use what we... I- I think, which is strange because I'm kind of into AI, but I don't think we should push for introducing AI as a learning tool for everyone. I think i- it's fine to use it as a learning tool for teachers who are interested in doing that, feel that this could work for my students in this situation. But saying that teachers should use AI as a learning tool is, I think, a mistake still. That being said, we need to keep a close eye on the research because there are quite a few promising results, and it's promising enough that we might conclude that this is really good, we really start...
should start using this for everyone that is 16 years old or 13 years old when learning languages, or when, I don't know, kids with special needs or something. But only when we have good research showing that should we do that for everyone or... categorically.

speaker_0: So at this point-

speaker_1: That's the first bucket

speaker_0: ... you would recommend a sort of classroom by classroom, which effectively means, like, teacher by teacher approach. It would be sort of a- a matter of their style, their enthusiasm, and I guess-

speaker_1: Yeah

speaker_0: ... the upshot of that would hopefully be that, like, in the diversity of approaches that will naturally emerge, collectively people will learn, students will have, you know, a variety of experiences. Um-

speaker_1: Yeah

speaker_0: ... is- is it essentially a sort of hedging our bets strategy because we don't really have a- a clear answer?

speaker_1: Yeah, I think so, but also co- considering AI as one of many tools that we should use or could use. People are different, as you say. It... Teachers have different styles. They have different AI competence. There are different situations, different things you want to teach the students, and I think it would be wrong to treat AI as some kind of silver bullet that works for everything and everyone. And teachers... Teaching is really complex. Teaching one person is difficult. Teaching a whole class is bizarrely difficult. And teachers make, like, 2,000 decisions every day when it comes to how they teach. And I think if we raise teacher AI competence, they will have a much easier time seeing when AI could be used to inspire students, to help them practice mathematics or just, I don't know, speak German, or inspire this student to actually read half an hour every day. But it might also be a great idea just to go outside. Let's have a lesson outside today. Or this- this kid should really be... just run for 10 minutes and then he might- ... be able to sit still. And AI isn't going to help with that, I guess. Uh, so- so I think AI can bring a lot of good things, a lot of good tools when it comes to teaching possibilities. Y- yeah. And the biggest possibility is the personalized learning. Helping... well, the-... Adapting learning to every single student, their level of knowledge, their interests, their pace of learning, things like that. And it's possible that that might actually work. There are some signs that it is working, but it's also quite possible that it doesn't work. We have seen this promise before, w- starting with, I don't know, cassette tapes. But now we can individualize learning for everyone. We have YouTube. We have books. It's possible today to have individualized learning, but only for the students who are actually active learners themselves. And I think that is something that is becoming more and more important in the world of AI.

speaker_0: So if I was gonna take the booster angle for a second, which is a pretty natural position for me to take, I suppose-

speaker_1: Yeah

speaker_0: ... one of the things I often say about AI is there's never been a better time to be a motivated learner. Flip side of that is it's also never been easier to cheat on your homework. We can come back to the cheating part.

speaker_1: Yeah.

speaker_0: But the... And I'm, I'm glad you touched on kind of individual tutoring because, at least from my perspective, and I'm far from an expert, but it seems to me that, f- from the conversations I have, one of the most widely cited and I think generally believed ideas in education is that individual tutoring, one-on-one tutoring is like the gold standard. And it delivers the best outcomes, and, you know, we know about the two sigma effect and the idea is like maybe we could have the two sigma effect for everyone if we can get these AI tutors to really work. For me, it does feel when I like want to learn something that AI is just an unbelievably useful way to go about it. You know, i- for me, that's often like taking a paper, could be a machine learning paper, e- even more so if it's like an AI for biology paper, there's just so much stuff that I don't know, pretty important missing pieces in my knowledge, to make sense of what I'm currently trying to make sense of. And the AIs are just so good at answering those questions. I really do feel like, man, this is an indispensable advantage, you know, in, in terms of my ability to, and my confidence, you know, that I'll be able to make sense of things like this. Where in the past, I might get two paragraphs into the abstract and be like, "I don't know. Maybe this, I may have to come back to this another time." Now I can kind of throw it into whatever, chatbot, and start asking my questions. And I usually do get pretty far. Um-

speaker_1: Yeah

speaker_0: ... w- we also... Now, interestingly, I, I did an episode, as you know, with Mackenzie Price from Alpha School not too long ago.

speaker_1: Mm-hmm.

speaker_0: And, you know, they're the two-hour learning folks and some of the stats that are most widely cited originate with them, in terms of compressing the academic day down into two hours and still making great progress over the course of a calendar year. Notably though, one thing she told me was they don't use any chatbots in their mix of AI tools. They have an in-house development team. They build some. They have a, you know, procurement team that goes out and licenses apps and whatever, and they kind of hodgepodge this whole thing together. So it's, it's striking to me that like what I'm doing, which I find really valuable is not even a part of their mix. All of that said, I do think like a certain amount of humility and like, "Let's not force this on everyone. Let's not make it a one-size-fits-all thing just yet," probably does make sense. There's, I think there's like some prudence there. At the same time, like I'm worried when I think about my own kids going through school that if we tell educators like, "Ah, this isn't really proven yet. You know, it's not for everyone, whatever," that I think you might end up with like a, a majority of people being like, "Well, if it's not for everyone, it's not for me, and then I don't have to worry about it, and so then that's great." Like I can-

speaker_1: Yeah

speaker_0: ... just keep doing what I'm doing, and that doesn't seem to be the right answer. So I guess to try to finally land on a question- ... here, um, what do you believe? You know, i- we don't have research to prove everything, but what do you believe-

speaker_1: Mm-hmm

speaker_0: ... about learning? Like how, w- what do you do when you wanna learn something new? What do you want your kids to have? You know, given that you've gotta have some choice t- ma- you know, they're only gonna be seven and 10 for a while, right?

speaker_1: Yeah.

speaker_0: So like, what do you want for them right now while they are seven and 10? You know, w- even as the research kind of final verdict remains out on a lot of these questions.

speaker_1: Yeah. Yeah, well, there are several layers to this. I think for seven and 10-year-olds, at least 10-year-olds, it's quite possible to use chatbots to learn more. And I, I agree with your experiences. Well, I, I too have this experience of being able to learn more, do more with AI. But also, when I wanna learn something, I often watch a YouTube video and I'm... Well, okay, so what do, what do I believe? I, I think there is a tremendous potential in AI when it comes to learning. I think the ability to converse with an expert on just about anything is a, well, c- could accelerate learning in a way that it, we haven't seen. Well, u- unless you're some kind of roli- royalty and you can have your own personal expert tutor, then this is something we haven't seen before. But I also think that this wil- well, this would make it really important that kids want to learn stuff because this is an enabler for someone who wants to learn. And I listened to an interview with Salman Khan from Khan Academy, and when he talked about the early effects of Khanmigo, and he said that some kids, they get it right away, and they just r- run e- a- and, and learn a lot of things. And some kids are stuck. They're, they're confused. They don't really know what they're doing. They can't get anywhere. And when he, the... When they saw this, th- they talked to the teacher and said, "We're having problems with these kids. We don't really know what, why they're not getting anywhere."And the teachers say, "Well, the, uh, it's the same thing, uh, in the classroom without Khanmigo. Uh, you, you can ask a question, what, how's it going? Do you need help with anything?" And they can't articulate what they're doing, what they need, what they want. Uh, and I think that is, that will become much more important to, to activate... well, to make students more active and self-going, self-driven learners. Maybe AI can help with that as well. Now I'm drifting from your question, sorry. 
But I think that will be a much more important part of the role of teachers: to motivate, to inspire, to help kids get started learning. And then they can learn anything they want, the subject knowledge, using AI, and perhaps AI can also keep things going. Because, I mean, I used to be a math teacher, and I know there's a wide span of math knowledge in every class. Some kids are under-stimulated, and some kids are just lost, struggling with negative numbers when they're 16 years old. If you can adapt the teaching and learning to their actual level, then I am certain it will be beneficial for their interest in learning. So my basic answer is: I think AI has tremendous potential for accelerating learning for everyone. That's true in rich countries like Sweden and the United States, but also in poor countries. If you can have an expert tutor for $20 or maybe $2 a month, that's great. I mean, that could change so many things.

speaker_0: Yeah, I think the retail price of Khan Academy in the US has been $4 a month. And yeah, with a little subsidy here or there, I mean, the free version of ChatGPT is, like, quite-

speaker_1: Yeah

speaker_0: ... generous with its limits.

speaker_1: Mm-hmm.

speaker_0: So, I do think that is very much worth keeping in mind. I often quote Biden on this: "Don't compare me to the almighty, compare me to the alternative." For folks who are in disadvantaged positions and really don't have great alternatives, the AI option becomes like a no-brainer in a lot of situations-

speaker_1: Mm-hmm

speaker_0: ... not just for education, but, you know, even medical advice.

speaker_1: Healthcare, yeah.

speaker_0: Yeah, is something I often go to. So, yeah, I mean-

speaker_1: Mm-hmm

speaker_0: ... your comment about motivation also definitely resonates. That was a huge part of the chat with Mackenzie from Alpha School. You know, they're even renaming teachers to mentors, guides, and coaches, and the job description has totally changed. They are not responsible for presenting the content anymore at all, and they are really not much responsible, I don't think, for grading homework or doing any sort of evaluation. All that stuff is done on the tablet via that sort of hodgepodge of AI-powered systems. And the adults in the room are entirely about everything else, right? Everything that's not the content. So, I think that is pretty interesting. Are there any other models like that that you are aware of? I mean, she's had a lot of good press recently. Are there any other pioneers that you would point to at the school level that are doing interesting things?

speaker_1: No, I don't think I know of any school models like that. I mean, I've seen individuals doing a lot of interesting stuff. Also parents creating AI tools for their kids for practicing stuff or learning stuff and just going on learning adventures. But I haven't seen anything organized on the school level. Now, about Alpha School: I listened to your interview and had also heard about Alpha School earlier. I think it's really inspiring to hear about schools using different concepts for learning, re-imagining what schools can be, with or without AI. But maybe the closest thing I can find is Khanmigo. And perhaps also Khan Academy without the AI stuff, where you have learning maps and a gamified environment where you collect stars and, I don't know, stuff. And spaced repetition or whatever. But it's not organized as a school.

speaker_0: Any other highlights from the... even, you know, individual parents, families that you think are remarkable enough, or sort of differentiated enough, to merit a mention?

speaker_1: Well, I have a friend, or acquaintance, who is a lot like an AI wizard. Every second night, he's awake and inventing something new: AI musicals one night, some other time AI stuff that could help people with dyslexia. And then he realized, "Well, this might be good for people who aren't native Swedish speakers," and he started tweaking. And all the code is built by AI, of course. I think he at some point also had a tool that could interpret handwritten text that was shaky because of Parkinson's disease, I think it was. No, I think we need more exploring. And that's also why I think that teachers should be free to use AI in the way they think is suitable for their classroom and their students, whether the whole class or just individuals. Because then you can see that this student needs to be challenged. He's interested in black holes, I don't know, or something. "Here's a chatbot. Go talk to Gemini or whatever and learn more about black holes. On Thursday, I want you to tell me what you know about black holes, and also write up three questions that you think are really difficult but also interesting, that your classmates would be interested in learning about." Something like that.

Or it could be... I have a friend... well, a friend's kid who was visiting and got tired of us adults talking, and he wanted to just play on his phone. Instead of doing that, I got my laptop and started an interactive story for him, adapted to a 10- or 12-year-old, whatever he was at the time. And he started reading, and at first he was kind of confused: "What's this?" "Well, okay, read. And pick your options." A scene was described, and then he picked, I don't know, option two and went on. And half an hour later, his mother said that she hadn't seen her kid reading this closely or intently for so long before.
So, I mean, if you experiment, you will find new ways of using AI. But still, I don't think it's urgent to start using AI as a learning tool. And here I could switch over to something else, because kids are using AI for learning by themselves. What you do in the classroom is one thing. You can have policies and guidelines and stuff, or just say, "No, no AI in this classroom." But of course, kids will use AI anyway at home, and they will use it for learning or for cheating. Some of them will use AI in a good way that actually enhances their learning, but some of them will fool themselves into believing that they're learning stuff while they're actually not. And I think that's a great risk. It's one of the greater risks when it comes to AI in education. Because if you use AI in a way that harms your learning too much, then you will get behind in school, and that will lead to accumulating problems, and then you might not get your degree, not get a job, or just feel outside society. And that is happening right now, and has been happening for almost three years now. So, even though we don't know ourselves how to do it, we need to teach our kids, our students, how to use AI in a good way for learning, and also how they shouldn't be using AI for learning. Because you might cheat yourself, and it can cause more harm than good.

speaker_0: Yeah, so this is, uh, in the bucket probably of like things kids need to be taught, right? About AI.

speaker_1: Yeah. Bucket number three.

speaker_0: And of course there's a, there's a challenge-

speaker_1: Yeah

speaker_0: ... there, which is that the adults don't necessarily know it either. And this is part of why the taxonomies get tricky, 'cause they all do sort of bleed together.

speaker_1: Mm-hmm.

speaker_0: The teachers have a lot of opportunity, as we all do in our white-collar work, to use the AIs to become more productive. You said that's not necessarily super urgent; they can do it maybe if they want to. But presumably getting hands-on in those sort of daily utility ways would translate into a much better understanding from which to teach kids what they need-

speaker_1: Yeah

speaker_0: ... to know about AI. So, let's take maybe a beat on that second bucket of what teachers can be doing, what they should be doing to get their own time back. We did an episode on this with respect to doctors as well. I was amazed by... I mean, I sort of knew this, but I think it's similar across medicine and teaching, where you have your day at work, and then you have your whole extra night shift. For the doctors, a lot of times it is translating notes, actually doing the paperwork that follows the interactions with patients. And for teachers, it's a lot of grading homework and processing all the stuff that got produced by the students during the day, or homework, of course, as well. What do you think teachers should be doing? I personally, again, go to AI all the time with my writing and ask for critique. And I usually find it's at least somewhat good. I don't necessarily take every suggestion that I get, but I seldom don't ask. I can say that at this point for sure. So-

speaker_1: Yeah.

speaker_0: You know-

speaker_1: Yeah.

speaker_0: What should teachers be doing, and then that obviously informs what they should be teaching the kids about AI itself.

speaker_1: Yeah. And my view on it is... well, what I tell teachers is that you should use AI in your work outside the classroom if it's useful, if it actually helps you. But what I'm thinking is that of course it will help you. Because you could use AI, for example, for writing. And I agree completely with the doctors and teachers analogy; teachers have so much to do outside the actual teaching. So: taking your notes, turning them into something actually readable, something that you can send to students, or parents, or to your boss, the headmaster. Going through a lot of information. Learning how I should work with, say, these three students who have challenges I've never met before: go into the research, summarize it, then spend some time talking to a chatbot about how I could approach this in my classroom, and get some ideas I could try out the next day. That could now take 20 minutes instead of four hours; just finding the research papers, let alone reading through them, would previously have taken a lot of time. What else? Well, planning stuff, getting ideas... Creating material is another thing. Here's an old math test; I want three different versions of it, and one of them should be on the theme of soccer. And then I get ideas from the AI that I can sometimes use straight off, but mostly it's inspiration for me, and I can adjust it and tweak it and use it after that.

speaker_0: How 'bout personalizing content to kids? I mean, this is something that Khanmigo can sort of do. The learn and... or I think it's study-

speaker_1: Yeah

speaker_0: ... and learn mode from ChatGPT can do this. Really, you can just prompt pretty much any chatbot you can imagine to do it. It doesn't even have to be education-specific in terms of its design. But I have no idea. You know, it sounds nice to be like, "Oh, you're interested in basketball, therefore we'll make all the problems about basketball." Does that really matter to kids? My sense is that sounds nice but seems like it would wear off pretty quick. If you like basketball-

speaker_1: Yeah

speaker_0: ... but you're not that into math like, how many problems are you really gonna go through because they are like framed in terms of basketball-

speaker_1: It doesn't work

speaker_0: ... but maybe there is research there that-

speaker_1: Yeah. Yeah. Yeah

speaker_0: ... that matters? I don't know.

speaker_1: I agree. It's an easy hook to use, but in the long run it doesn't work, except for very particular types of students. The biggest reward, I think, is instead the feeling of learning something. When you can get to that, you have something really good going. When it comes to individualizing, I think there's a great opportunity for using AI to individualize exercises or something, sometimes using basketball or soccer depending on which student it is. But mostly I would say: adjust to their level instead. There is a problem here, though. Either you need to trust the AI and say, "This is probably good, I'll just send it off to the students right away without reviewing everything," or you need to spend a lot of time on it, which teachers don't have. We should be open to the idea that this is 90, 95% good. Some of these exercises will be bad, or won't really work. But that's okay: I've saved so much time on this that I can instead, I don't know, talk more to my students or follow up in different ways. And the net result is positive. I think that is important to keep in mind. It has to be okay to make mistakes. And sometimes it's even okay with a net negative, because we learned something on the way. A lot of times something becomes not as good as if I had made it myself, but I get so much time to do other things that we get a real boost in learning.

speaker_0: Another dimension of personalization that comes to mind is just like modalities. Uh, and this is another thing where I feel like I'm not really sure what the research would indicate. But there's at least these concepts, you know, of sort of, "I'm a visual learner. I'm a-"

speaker_1: Yeah.

speaker_0: "... you know, audio learner." I do feel that myself. I mean, part of the reason I am in the podcast game as opposed to running a blog is that I just absorb audio content a lot better. You know, even in bed at night, I can stay engaged with audio for a long time, but the minute I try to switch over and read something, I very quickly end up going to sleep. So, it feels real to me. How real-

speaker_1: Mm

speaker_0: ... is that according to the research? And do we have... I'm thinking of NotebookLM-type products here, perhaps, that could even take you from a textbook to a conversational format. I mean, NotebookLM is even interactive these days. So-

speaker_1: Yeah

speaker_0: ... what do you see there?

speaker_1: Yeah. The science, when it comes to learning styles, is pretty clear that learning styles don't exist in that way. Sadly, because the feeling of them existing is real, it's a really difficult misconception to get rid of. That being said, I think there are good ways of using different modalities for learning. Not because someone is necessarily a visual learner, but because the different modalities can be used in different ways. If you're on the bus going somewhere, well, maybe listening to something is great. I can't come up with an example of when an image is good, but there are probably places where images are useful as well. Or text-based stuff, of course. And moving between modalities is useful as a tool. I think the science on learning styles says that we should blend between them. Don't quote me on that. But if that's true, then AI is useful there as well, of course.

speaker_0: Yeah. How about... The NotebookLM interactive mode also suggests interactivity as a kind of fundamentally new primitive in education. And the chatbots bring that as well, so you can have that interactivity across modalities. Is there any established truth about the value of...

speaker_1: I am not sure, but I would definitely think that interactivity is good. I've seen some research on AI improving learning where they explained it by interactivity. I think that was based in language learning. But I mean, the learning process is physiologically connected to dopamine and such, and interactivity increases anticipation, which means that you probably learn more when stuff is interactive. Yeah.

speaker_0: Mm.

speaker_1: So in that way, I think it's fair to say that interactivity is good, and AI increases the possibilities for interactivity tremendously. I think that is also why we think, at least I think, that listening to two people talking about something is more interesting than a monologue. When you listen to the NotebookLM podcast stuff, two people talking to each other is a good format for getting engaged.

speaker_0: Yeah. It's coming, it's coming for all of our jobs, uh, before too long, podcasters included. Uh-

speaker_1: Yeah

speaker_0: ... and truly, I'm not joking when I say that. It's still a minority of my audio listening time that goes to NotebookLM, but the fact that it can take in whatever paper, or couple of papers, that I'm immediately focused on, where there just literally is no other content out there... You know, again, it's like, compared to the alternative. For me, the alternative is reading and probably falling asleep reading. So it's-

speaker_1: Yeah

speaker_0: ... as imperfect as it is, it is definitely starting to, uh, steal some share of my time and attention. And I assume that that's only gonna go up and up and up from here.

speaker_1: Have you tried-

speaker_0: Um..

speaker_1: ... the presentation video stuff at NotebookLM?

speaker_0: No. I've done-

speaker_1: Uh

speaker_0: ... the audio only primarily, and the interactive-

speaker_1: Mm-hmm

speaker_0: ... a bit. But, so what-

speaker_1: Yeah

speaker_0: ... they make slides now too?

speaker_1: Yep. Yep, it does, and talks over the slides. And I feel... When it comes to AI in education, where I know a lot of things, I can still do better. Well, maybe I don't do better presentations, but I have better content in them; I know more about what is important to communicate. But I have seen people on stage, as a teacher and in other cases, who don't do a better job than NotebookLM. So yeah, they're coming for our jobs.

speaker_0: Yeah, no doubt about that. Yeah. Yeah.

speaker_1: Wait another six months and, and... Yep.

speaker_0: Yeah. Interesting. So okay-

speaker_1: Mm-hmm

speaker_0: ... so just finishing up on what teachers should be doing. One thing, unless you're gonna very much surprise me, that I think it's well-established they shouldn't be doing, is using AI, um...

speaker_1: For grading.

speaker_0: Detectors, basically.

speaker_1: Oh, yeah.

speaker_0: Well, yeah, grading-

speaker_1: Oh

speaker_0: ... is another interesting one.

speaker_1: Yeah, yeah. Well, just first: AI detectors, no, they don't work. Dead end. Using AI for grading is really tricky, and I would say no; if someone asks me again, I would say no again. Then I would say, well, maybe you could use it as a second opinion. When you've graded your essays or tests, you can run them through an AI and see what the AI says, and if there are any big differences, you could have a second look at those. But if you use it as your first assessment, there is a great risk that you will fall asleep at the wheel and just use the AI assessment. Which probably will mean that you disadvantage, I don't know, kids who don't have English as their first language, or some kind of atypical groups. And we don't want that to happen.

speaker_0: Are you aware... I wonder... I mean, this sort of reliability of AI grading is a huge-

speaker_1: Yeah

speaker_0: ... question in a lot of respects right now, because notably, you read the technical reports on the new models that are coming out, and a lot of the data they report about the new models is itself AI generated, right? I was even involved in a little research that worked this way, where we were trying to assess the coherence, that was one dimension, and alignment of a particular model. And the way we were doing that was just taking all the generations and feeding them into another LLM for alignment assessment.

speaker_1: Yeah.

speaker_0: And, you know, I'd say it works at the level of: major differences in aggregate scores are meaningful, right? So if we have model A and model B-

speaker_1: Mm-hmm

speaker_0: ... and model A gets a much higher alignment score than B, I would believe that that reflects something real going on, most of the time at least. There's also-

speaker_1: Yeah

speaker_0: ... like the question of, you know, just how consistent or reliable are the human raters, right? Which-

speaker_1: Yeah. Yeah

speaker_0: ... sometimes what you hear, what you see in these technical reports is like, "We validated this strategy by sitting down with some experts, looking at their assessments, and comparing our assessments to theirs." And kind of talking ourselves into the idea that the AI assessment-

speaker_1: Yeah

speaker_0: ... was like similarly good.

speaker_1: Yeah.

speaker_0: So it seems like we're all gonna be-

speaker_1: When, when we're kee-

speaker_0: ... judged by AI before too long. Is there any way that that's not gonna happen?

speaker_1: No. No, it's not. And I think we'll need the human evaluations to validate the AI assessments, to make sure that they're on the right track. And one or two generations down from that, we won't have the competence to do the human evaluations anymore. When I worked at the National Agency for Education, I was, for a few years, responsible for national tests in mathematics. At that time, part of the national tests in mathematics were oral, and we did some evaluations. And the results on how reliable the human evaluators were showed they were basically tossing dice, and sometimes worse. In those cases, it might make sense to use AI assessment instead. Maybe. The problem with AI assessment, even if we can get more consistency with it, is that we have a single LLM doing all the assessments. That means that whatever biases you have in that LLM will affect all the assessments. When you have a thousand teachers assessing instead, you at least have some noise, making it less probable that some groups of students are disadvantaged. On the other hand, you can of course find these biases in LLMs; at least, when you find them, if you find them, you can adjust for them, which is much more difficult when it comes to humans. But yeah, it's tricky. I think that as a teacher, you shouldn't, on your own, just start using a chatbot for grading student work. It requires quite a bit of a framework, and tests and stuff, to make it reliable enough.

speaker_0: Let's say I'm a teacher and I come to you and I say, "All right, I heard your warning, but I'm too busy. I'm gonna do it anyway. So now you tell me how to do it as well as I possibly can." It sounds like a couple-

speaker_1: Huh

speaker_0: ... a couple of ideas that have come to my mind just while listening to you talk about the challenges are: one, maybe use multiple different LLMs. Maybe use different prompts. This is sort of in keeping with the general trend towards scaling inference compute, right? Instead of grading the thing one way, grade it five different ways, and then maybe have some sort of resolution rule where if they all agree, you go with it. If there's, you know-

speaker_1: Yeah

speaker_0: ... a four-to-one vote, maybe you go with it. If it's three to two, maybe you have to do it yourself. And if you see any grades that are more than one grade apart, say you're on a seven-point scale and you see any two evaluations that are two points apart, you have to go in and read that one yourself as a human.
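The voting-and-spread resolution rule described here can be sketched in a few lines. This is a minimal illustration, not anything from the conversation: the function name `resolve_grades`, the five-grader setup, and the exact thresholds (a four-vote majority, a one-point maximum spread) are all assumptions, and each integer grade stands in for the output of one LLM-plus-prompt combination.

```python
from collections import Counter

def resolve_grades(grades, majority=4, max_spread=1):
    """Combine several independent AI grades for one piece of work.

    Returns (grade, None) when the graders agree strongly enough,
    or (None, reason) when a human needs to read it themselves.
    """
    spread = max(grades) - min(grades)
    if spread > max_spread:
        # Two evaluations more than one point apart: human reads it.
        return None, "grades too far apart"
    top_grade, votes = Counter(grades).most_common(1)[0]
    if votes >= majority:
        # Unanimous or four-to-one: accept the majority grade.
        return top_grade, None
    # Three-to-two or weaker: back to the teacher.
    return None, "no strong majority"

# Five graders (different models and/or prompts) on a seven-point scale:
print(resolve_grades([5, 5, 5, 5, 4]))  # -> (5, None)
print(resolve_grades([5, 4, 5, 4, 4]))  # -> (None, 'no strong majority')
print(resolve_grades([5, 3, 5, 5, 5]))  # -> (None, 'grades too far apart')
```

The design choice here is conservative: disagreement never gets averaged away; it escalates to a human.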

speaker_1: Yeah. Yeah, uh-

speaker_0: Keeping in mind, okay, I'm not gonna listen if you say no.

speaker_1: Okay.

speaker_0: So what- what else would you tell me-

speaker_1: Yeah, yeah, yeah

speaker_0: ... uh, to try to make it as fair as possible?

speaker_1: I would start by saying I definitely understand your needs. You need more time, so let's try to do this in a good way. And by the way, if you do this properly, you can start a company and make a lot of money on this, I think. I would say: break down your assessment into different scales. So if you're assessing essays on, I don't know, writing skills, make sure that you have four different scales that you're using. Typos, using a rich vocabulary, I don't know. I'm not a language teacher, so I don't know these things, but there are different aspects like that that you use when you grade or assess essays.

speaker_0: A rubric, in short.

speaker_1: Yeah. Yeah. And have these as kind of numeric scales: A to F, one to 10, something. And then have some method of composing the results into a final grade, because that means that you can check afterwards, or the student can check, or their parents can check: the assessment says this was bad in the essay, we don't think it's bad, it's a three, it should be a seven. Then you can look at that and see, yeah, you're right, we should change this. That's much easier to do than with "the AI says it's a five, in general." That's one thing. The second is that you should try to identify groups that could be disadvantaged. So you have non-native English speakers, English as a second language. You probably want to separate boys and girls to see if any of these groups are advantaged or disadvantaged, and you might have some other categories as well. And then you just look at what your assessment says for these different groups and check that the results are okay. I mean, it could be that some groups are actually performing worse than others, but it should be proportionally so when the AI assesses them. And then you should also have some way of complaining, of saying that this is wrong, so a student or their parents can say, "I want you to have a second look at this." And you should also be transparent about this being assessed by an AI. If you do all of these things, I actually also think that you have complied with the EU AI Act. And then you can start selling this in the European Union.
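The rubric-composition and group-audit idea described here can be sketched as follows. Everything in this snippet is made up for illustration: the rubric dimensions, the weights, the group labels, and the scores are assumptions, and in practice each per-dimension score would come from an AI assessment step.

```python
# Hypothetical rubric: three dimensions, each scored 1-10, with weights
# that sum to 1. Real dimensions and weights would come from the teacher.
RUBRIC_WEIGHTS = {"vocabulary": 0.3, "structure": 0.4, "mechanics": 0.3}

def final_grade(scores):
    """Compose per-dimension scores into one weighted final grade."""
    return round(sum(RUBRIC_WEIGHTS[dim] * s for dim, s in scores.items()), 1)

def group_averages(students):
    """Average final grade per group, to spot disproportionate outcomes."""
    totals = {}
    for s in students:
        totals.setdefault(s["group"], []).append(final_grade(s["scores"]))
    return {g: round(sum(v) / len(v), 2) for g, v in totals.items()}

# Invented example data, grouped by (hypothetical) first-language status:
students = [
    {"group": "native", "scores": {"vocabulary": 8, "structure": 7, "mechanics": 9}},
    {"group": "ESL", "scores": {"vocabulary": 6, "structure": 7, "mechanics": 5}},
    {"group": "ESL", "scores": {"vocabulary": 7, "structure": 8, "mechanics": 6}},
]
print(group_averages(students))
```

Because the final grade is composed from per-dimension scores, a disputed result can be challenged dimension by dimension ("it's a three, it should be a seven") rather than against a single opaque number, and the per-group averages give a first crude check for systematic disadvantage.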

speaker_0: Okay.

speaker_1: On the other hand, it might take more time than you have as a teacher. But where do you go to-

speaker_0: You could vibe code your way there, and uh-

speaker_1: Yeah, yeah

speaker_0: ... shorten her these days.

speaker_1: You go to Lovable and- and get it going. Yeah.

speaker_0: Okay, cool. Coming back then to part three, or things that schools and teachers need to be thinking about teaching their students about AI itself, what, um, what do they need to be teaching students about AI itself?

speaker_1: Yeah, all right. So I've already mentioned learning to use AI as a way of learning. That is an AI competence in itself. Even if you don't use it in the classroom, you need to teach kids how to use, or not use, AI for learning. That is one of the really urgent things, I think. Another really urgent thing is helping students understand AI friends, or AI companions. We've probably both heard of really sad cases where AI companions have caused severe harm, and there are risks. Not just suicides and things like that, but also emotional harms and social harms that we should avoid. Personally, I think that AI companions should be prohibited for anyone below 18; we don't know enough about this. Well, you don't have to go that far, but you need to help students. Just discuss with kids: what are AI friendships, how should we relate to them, what are some warning signs? When should you be worried, and when shouldn't you be? Because even though I'm against AI friends for kids, I'm pretty sure that 90, 95% of it is no worries at all. But you should be aware of what to look out for when it comes to AI friends. So those are urgent; we need them now. More skills: well, deep fakes, and critical thinking is one of the really important ones, though not as urgent as the other two I mentioned. And then we have general skills. Some of them are the same things that teachers should learn, like using AI for writing, managing information, automating stuff. Probably some more that I don't think of right now. But then there is so much more that isn't practical use of AI tools: understanding how AI is affecting democracy, how it's affecting the power balance in the world or in society, concentration of power, the effects on the labor market, how rapidly AI is evolving. And learning the basics about AI as a technology, because that helps you understand what to expect from AI and what not to expect.
And then more legal aspects, ethical aspects. I feel I could go on for quite some time. I'm writing a book about this.

speaker_0: Yeah, this could easily be an hour out of every day to cover all that.

speaker_1: Yeah, it could be. But also... Well, I think AI competence should be taught from preschool to adult education, and it affects most subjects. In many cases it's a natural part of the subject, I think. In other cases it's new content that you need to add. Writing, for example. If you teach creative writing in English, then you probably teach feedback processes and things like that. You can incorporate AI in that as well: you can use AIs to get feedback on your text, but you can also go a step further and create a panel of typical readers, and have an AI mimic them and give you feedback based on what, I don't know, middle-aged white men think about this text. Things like that. So sometimes it's just a small tweak. In other cases, if you're teaching programming, then you need to bring in AI tools and help students learn to code with AI, and that is probably more of a shift than you have in writing.

speaker_0: Going back to the AI friends thing for a second. I do worry about this for my kids. I mean, they're a little young for it now, but I'm expecting, probably this year, and if not this year then next year, that we're gonna start to get toys floating around that are kind of-

speaker_1: Mm-hmm.

speaker_0: AI friends. I mean, there are some already on the market of course, but like I haven't-

speaker_1: Mm-hmm.

speaker_0: I haven't been asked for one, which I think means my friends' kids don't have any, which means they're just not that popular yet. And I'm sort of like, oh man, I agree intuitively with the precautionary principle here. I don't necessarily wanna run some crazy AI girlfriend experiment on my hypothetical teenage son or whatever. You know, my kid's only six, but if I-

speaker_1: Yeah

speaker_0: ... project forward a few years like when he's 12-

speaker_1: He'll soon be 16. Yeah

speaker_0: ... I don't think I want him to have his first girlfriend be an AI girlfriend. That just seems like too much, maybe. But then again, I'm also like, well, this tutoring thing has so much upside. And what makes a good tutor? It's obviously a lot of things. It's having the knowledge and the skills to impart, but it's also building the relationship and the rapport, having fun together, making it a time that you look forward to. So I think the line between AI tutor and AI friend is itself gonna get really blurry.

speaker_1: Yeah.

speaker_0: And you know... I always think back to Eugenia Kuyda, who started Replika. She was actually one of the very first guests on this podcast, and she's said a number of really interesting things. One of which was that she thinks the moat in AI applications is ultimately going to be relationship. Meaning, for most people, she says, you don't switch friends. You don't abandon your friends because you meet a smarter person than your current friend. It's the history that you have together. It's the experiences you've shared. It's all this sort of intertwinement-

speaker_1: Yeah.

speaker_0: ... that you have with a friend that makes them your friend. And there's another person who might be smarter or might be better at some things; maybe you wanna be friends with them too, but you don't abandon your friend. And I've thought about that for a long time. It's been over two years now since she originally said that. So I'm like, boy, this is definitely playing with fire. But it's kind of hard to imagine the best available AI tutor wouldn't have a lot of these elements.

speaker_1: Yeah. Yeah, I agree.

speaker_0: So I don't know. How do we make sense of that, right? It's tough.

speaker_1: Yeah. Yeah, it's really tough. When it comes to teaching, the science of teaching and learning says that the relationship between the teacher and the students is one of the most important factors for learning. So yeah, having a relationship between the student and the AI tutor will probably be important too. I don't know how to find a good way around that, and that means we should tread carefully, I guess. On the other hand, we also have the competitive environment, meaning that those who move more quickly will get more benefits. Maybe one way of at least reducing the risk significantly is to look at the incentives. If the incentive is getting as much money as possible from people who are buying your services, paying a monthly fee, then you will have more of the dark patterns making people stuck with your stuff, more or less against their will. If the incentive is providing as good an education as possible, and this is run by a non-profit entity, then you have better chances of not getting those dark patterns and dependencies. But I mean, we still lack the map for navigating this terrain.

speaker_0: Yeah. And not only do you have to worry about potential dark patterns from people who are trying to get you to subscribe. There's also the ad-driven model and the commerce-driven model, which is starting to take shape right now as well. So as much as I don't want my kid to have some sort of weird parasocial relationship with an AI that's designed to make me not churn off the subscription, I'm maybe even more reluctant to think about the free version, where it's trying to get them to be excited-

speaker_1: Yeah

speaker_0: ... about buying whatever or just maximizing engagement, you know?

speaker_1: Paid by ads, yeah.

speaker_0: There's a lot of weird, um... And I'm not a hater. We just did an episode on advertising in AI apps, and-

speaker_1: Yeah

speaker_0: ... I do think it has its place. It is easy to demonize that whole idea prematurely, too. But yeah, that's one where I'm definitely like, "Oh God, I don't want to have my young kids being the subjects in that experiment." We've got a lot of questions. Okay, how about systemic changes? Well, maybe first, is there more?

speaker_1: Yeah.

speaker_0: I mean, you covered a lot there on things that we should be teaching kids about AI. Can we try to get the kids to understand the most important parts of it? Misconceptions are something I think is really interesting at the moment, especially 'cause I'm thinking about what I wanna communicate to educators here in my home state coming up. One misconception I hear a lot these days is... first of all, the big emphasis on hallucinations, which obviously do still exist. Although I would say both their frequency and their severity have come way down relative to when the hallucination narrative formed. So in today's world, I'm not really even sure that the output you get from a GPT-5 or a Claude 4.5 or whatever is less accurate than what you would get off of Wikipedia.

speaker_1: Yeah. Certainly not when it comes to stuff that you teach in elementary school.

speaker_0: Yeah. The mistakes are-

speaker_1: If you go into fre-

speaker_0: ... usually pretty far out on the fringes, yeah.

speaker_1: Yeah, yeah. That's one misconception. I think one common... I don't know if it's a misconception, more like a missed opportunity, is just using chatbots as a search engine. It took me too long to realize that a lot of people use ChatGPT, for example, and type in, "Pancakes vegan recipe," and get a recipe for vegan pancakes back. They think they're using Google, but from OpenAI. And meanwhile, they're missing out on using chatbots as an assistant, or someone to discuss things with, or as a problem solver, things like that. So that is something worth showing people: you can do so much more with AI. Well, we have the cheat detectors, the AI detectors, worth pointing out. What else? Misconceptions? I think it's impossible, I guess even for you, to stay up to date with what AI actually can do. And basing your assessment of what AI can do on experiences that are six or 12 months old carries a huge risk of being wrong, because things are changing so quickly.

speaker_0: Another one I had in mind was the idea that the LLMs are just predicting the next token, which I think-

speaker_1: Yeah

speaker_0: ... has kind of become wrong in two ways. When people say that, they essentially have the stochastic parrot model of a language model in mind, even if most of them haven't heard that term. So there are two things I want people to know. First, even when the model was, in a very literal sense, just trained to predict the next token, that doesn't mean they have no higher-order conceptual understanding; in fact, we have very good evidence to the contrary. If you were just predicting the next token, you wouldn't expect things like a language-independent representation of certain concepts, right? But we do know now that at least common concepts seem to be represented in a way that is detached from the English word for that concept, or the Swedish word for that concept, or whatever. These concepts that exist across humanity have representations that are higher-order than the specific tokens or words used to represent them. In that way, it looks a lot more like thinking, right? I don't-

speaker_1: Yeah.

speaker_0: I don't think primarily in tokens. I have to kind of cash my thoughts out into tokens, but there's something going on inside that is higher-order processing, and it's a lossy process to reduce that to a single token. So I think people really fail to understand that. And then another one that's even more literal, but really important too, is that they're no longer being trained exclusively to predict the next token. Now they're being trained to get the right answer. The whole reinforcement learning paradigm doesn't really care what tokens you strung together to get to the right answer. In many cases now, you as the language model are just judged on: did-

speaker_1: Yeah

speaker_0: ... you get the right answer or not? And the reward is based on that final outcome, not on the token-by-token chain of thought you took to get there. That is a fact. But that this has profound effects is, I think, really well demonstrated by some of the recent chain-of-thought visibility we've gained, with research from folks like Apollo, which is another recent episode. They got access to the chain of thought that the public doesn't see from the OpenAI models, and what they found-

speaker_1: Mm-hmm

speaker_0: ... inside these chains of thought is that this is becoming its own dialect. The AI is kind of speaking in weird shorthand, using terms in non-standard ways. And it's just very clear that-

speaker_1: Disclaim, disclaim-

speaker_0: That, yeah, exactly

speaker_1: ... watchers. Yeah.

speaker_0: Yeah. That is not predicting the next token in any text corpus I've ever seen. So whatever's going on there, it's definitely a different kind of thing. So anyway-

speaker_1: Yeah.

speaker_0: I don't know. I'm just maybe-

speaker_1: Yeah.

speaker_0: Maybe I'm just rehearsing part of my upcoming talk here, but I think those really stand out to me.

speaker_1: Yeah.

speaker_0: Because then, why does that matter? The biggest reason is that I think it leads people to underestimate where the technology really is. And-

speaker_1: Mm-hmm.

speaker_0: ... from my perspective, not just for education but society-wide, one of the biggest mistakes we could make would be to underestimate how good this stuff is, how powerful. You know, we can leave-

speaker_1: Yeah

speaker_0: ... good and bad aside for a second, but just how powerful it's become.

speaker_1: Yeah, powerful. Yeah.

speaker_0: You know, we're not doing anybody any favors if we allow them to comfort themselves or figure, "Well, I don't need to worry about that." Right? For so many questions, it comes back to: are people comforting themselves or convincing themselves that they-

speaker_1: Yeah

speaker_0: ... don't need to worry about this because it's not that good.

speaker_1: They're just-

speaker_0: It's just predicting-

speaker_1: ... predicting the next one

speaker_0: ... the next token.

speaker_1: Yeah.

speaker_0: ... don't need to worry about this because it's not that good, and it's got so many mistakes anyway. I think those things have, at this point, kind of become dangerous memes that have outlived their usefulness. Though it is still important to know that you're not gonna get 100% infallible accuracy from AIs either. It does feel to me like that has swung the other direction, where people are underestimating AI's capability. The original overestimation perhaps needed to be corrected, but now it seems to have swung the other way.

speaker_1: Yeah. Yeah, I agree. That is an important message. And perhaps in the same vein, I think it's also important to point out that AI models are not traditional software. You can't look at the code and see what kind of decisions are being made. We can, with a lot of effort, understand small parts of what's going on inside, but they're essentially black boxes, grown rather than built.

speaker_0: Yeah, yeah. Taking a note on that as well. Okay. How about on the systemic changes front? I mean, that could go a lot of different directions. I have a couple of candidate ideas that I wanna throw at you, but what's top of-

speaker_1: Mm-hmm

speaker_0: ... mind for you in terms of what people need to be preparing themselves for?

speaker_1: I have such a mix of things that I don't know where to start. But AI tutors are probably one of the most salient things you could look at. If we start having stuff like Khanmigo actually working at full scale, it means that the role of the teacher will change: no longer teaching subject knowledge, but being a mentor. Paradigms like Alpha School's will become more common, which is a huge shock to the educational system. Then we have the labor market. What is demanded from schools and education will change as the labor market changes in different ways. It could be, well, the demand for, I don't know, accountants will go down heavily. Okay, then we need to shift the balance between different kinds of education. We also have entry-level jobs disappearing. That means maybe we should have two more years in school before you start working. Maybe we should have more mentoring or trainee programs at work, sponsored by the state or something. We have potential mass unemployment in whole sectors, and that would call for re-schooling, which requires mobilizing education in different ways. What else? Well, taking it all the way to UBI and such, that will change how education works. We have questions like: what is knowledge? We're kind of used to being the only entities on this planet able to think in some kind of higher order, and that has changed already, I would argue, and seems to be changing more in the next few years. That will change how we view learning, understanding, things like that, which will affect education, of course. Yeah, there are so many things. You had a recent episode with Emad Mostaque, where he talked about economics going to some really strange places, which I think is...
It could be a low probability for that particular scenario coming true, but we have to plan. Even if we give it merely a 5% chance that some radical things will change in the next five or 10 years, we need to plan ahead, because the educational system is so slow. It's a huge ship that we need to turn around in some way, and we need to start planning now, just looking at different scenarios to see what might happen.

speaker_0: So that's kind of a motivation, or an argument, for one of the ideas I wanted to get your take on, which is basically: will we see the end of standardization? You alluded to that a little bit with how some teachers may or may not wanna use AI in different ways, like don't make it mandatory. I'm wondering, even just across the board, should we be thinking that standardized education is a thing of the past? Even to the point where standardized tests maybe go away. When I spoke to Mackenzie from Alpha School, I was struck that she had one foot in each of two paradigms, right? She's pushing the AI-based content delivery and all that as far as she can, but she's still holding herself and her schools accountable for results on standardized tests. And I was kinda like, well, that's interesting. I sort of get it, in the sense that that's what the rest of the world understands right now. So, if for no other reason, she's kinda gotta prove, as a pioneer, that this can work on terms everybody else accepts as valid. But-

speaker_1: Yeah

speaker_0: ... it also struck me that the whole thing is kinda superfluous. A standardized test is one snapshot: one morning, one set of problems, whatever. Compared to-

speaker_1: Yeah

speaker_0: ... the depth of understanding that her AI system has about the students, it seems like a pretty limited-

speaker_1: Yeah

speaker_0: ... signal, right? So this also connects to being judged by AIs. It seems like in the future we will probably not need to sit for one test one time and get a score. But rather-

speaker_1: Mm-hmm

speaker_0: ... whether we're in school or just doing our thing, right? I can imagine turning on a recorder of some sort. I've previously hypothesized that people should be paying me to watch me use my computer, for reinforcement learning data purposes. I'm still waiting for those offers, people. I hear they're starting to materialize in some places. But another version of that would be assessment, right? Instead of me doing a test, maybe I install some software, even as a professional, and it just kind of watches what I do. And then a week later or whatever, it could be like, "Okay, based on 40 hours of this dude's computer use, here's what we can tell you about him." And I would expect that would, in many cases, be a really valuable signal with a lot of texture to it that you wouldn't get from people sitting and doing multiple-choice tests. So yeah, I guess-

speaker_1: Yeah

speaker_0: ... how far do you think this sort of-

speaker_1: Uh-

speaker_0: ... uh, post-standardization-

speaker_1: ... I agree 100%

speaker_0: ... vision goes?

speaker_1: I think it makes total sense, what you say: if we have an AI that monitors what I do, what I learn, what problems I'm having, what I'm excelling at, it makes no sense to reduce that to a letter A to F. I mean, it's good because it fits with current metrics, but otherwise it's just insane. And, weirdly, this is perhaps one of the most radical ideas I have, but I think that the age of grades is coming to an end. Cheating is a problem, in particular when you have AI in the picture, because it's so easy to produce something that looks good but doesn't reflect your knowledge. But this is only a problem when you're actually going for the grade, when the grade or the exam is the thing motivating you. If you're motivated by learning, then AI is basically nothing but a great tool, not a problem. And I think we need to make that transition from an educational system largely driven by grades to an educational system largely driven by the desire to learn. I don't know how to make that transition, but I think getting rid of grades would be a big kick in the right direction. That would hurt a lot, but I also think it would, in the end, be very beneficial. Not everything will be better; there will be new problems you have to solve in some way. I mean, if you wanna go to university, that means you have to have some kind of admission tests, or you'll have just random access, or, I don't know, other things. But it becomes more and more problematic to have grades as the motivator for kids to learn. It might already be more harmful than good, and it will get worse with every six months that passes, I think. Yeah.
And from a wider perspective, this current educational system was built for the first Industrial Revolution, and that was almost 200 years ago. I'm not sure the same system is fit for the fourth, or whatever it is, Industrial Revolution with AI going on now. Fifth? I don't know. I'm into math, not social sciences or history. But that's the really big perspective.

speaker_0: Do we have any alternatives? I mean, yeah, I'll, I'll throw one at you and you can tell me-

speaker_1: Yeah

speaker_0: ... what else you've seen in the wild. I guess this isn't exactly an alternative to grades, but in terms of what would motivate people... maybe I start with what I think demotivates people, right? What seems highly demotivating is: "I'm being made to learn these things. I'm being made to do this work. I'm then getting a grade on it. And the whole time, I know that no matter what I do, I'll probably never be as good as AI at doing this thing." That seems like it just plain sucks from the beginning, right?

speaker_1: Yeah.

speaker_0: So, what can we do that is inherently not something AI is gonna be better than you at? When it comes to things framed in terms of economic contribution, that's potentially a vanishing set. But one thing that could still be inherently not something AI can do for you, and that also might be motivating, is just figuring out and expressing clearly what you really think. So I've been toying with this idea: what if we reframed writing assignments not in terms of minimum word length? You know, it's gotta be a five-page essay, or a five-paragraph essay, or a 500-word essay, whatever. Usually those are framed as the minimum, right? In schools.

speaker_1: Yeah.

speaker_0: At least in my experience. What if we instead put caps on length, and potentially low caps, like tweet-length caps? Sometimes I find my own-

speaker_1: Mm-hmm

speaker_0: ... thinking is most compressed when I have a feeling, a thought, an impulse on something, and I wanna tweet about it, but now I've gotta express myself in that limited space. And maybe I'm just precious about this, but I feel like I wanna put something out that I fully stand behind. So I guess the idea is: could a writing assignment be reframed from showing you can write in a sort of expository form to: can you put something forward that you could read to the entire class in 15 seconds, but which you fully stand behind?

speaker_1: Yeah. I think exercises like that are really valuable, trying out new forms and new ways of doing things. Another thing I've been thinking about is writing about something that you really care about: your favorite hockey team, your favorite band, or something. You could have an AI write that for you, but you won't be satisfied with the first output you get, because there are so many aspects you think are important about this hockey team that you want to get into the text. And that is kind of the same thing you're saying: expressing yourself becomes the important part. And I guess that is one path to having something that you are better at than the AI. On the other hand, I'm not sure that being better at something is the important part. You might want to play a musical instrument. You will never be as good as the professionals, but you'll still enjoy playing that instrument. You're not doing it to excel at something. You're doing it because you want to develop, because you like the feeling of being able to do more, to express yourself better, things like that, like growing as a person. Maybe that is a way forward. I don't know.

speaker_0: Yeah. I like the idea of just something you really care about. I'm thinking of my one friend, Chris, who's a real AI whisperer. Reminds me of what you said about your friend earlier, just constantly creating new things. And he's of the mind, and at times coaches me: he's like, "Dude, nobody really cares whether you did it or the AI did it. If the AI is as good as you, and it probably is, it doesn't really matter." And I think in many ways that's true. From the perspective of the person consuming the output, that probably is true more often than not. But if it's something I care about, you know, if I'm like-

speaker_1: Mm-hmm

speaker_0: ... "Is this what I wanna put my name on? Is this what I really wanna, you know, project into the world? Is this what I think other people should seriously consider?" Then it really does matter to me. And I do end up toiling over my intro essays for the podcast, probably well in-

speaker_1: Yeah

speaker_0: ... excess of what I need to, because I feel like it's, you know, it's me-

speaker_1: It's you, yeah.

speaker_0: The AI might be able to do a better job of the analysis. But if I have anything to offer, it's gotta be something that's at least sincerely felt, you know? That's one thing I can still do. No surprise, but at the end of the day, I can still articulate something I'm prepared to stand behind better than I can get an AI to do it for me. If that stops being the case, if the AI gets to the point where it can just read all my writing and write something where I'm like, "I literally couldn't have said that better myself," or that captures my perspective on the matter as well as I could ever have hoped to articulate it, then we're in a really weird world. I mean, we're already getting into a weird world, but that's gonna be extremely weird-

speaker_1: Yeah

speaker_0: ... and probably quite unmooring.

speaker_1: Yeah, that's like... Well, there were some really weird results from randomized blind Turing tests, where people, I don't know if it was 60 or 67% of the time, thought that the AI was the human. Which is... I mean, the maximum level of that test should be 50/50. But in some way-

speaker_0: Yeah, more human than human

speaker_1: ... the AI... More human than human. I mean, what's that? That means we should be able to tell which is the AI by which one is more human... Yeah, I don't know. And in the end, well, if you go into the deep utopia and such, maybe what we're really after in the long run is to, I don't know, express ourselves. I feel that the element of creativity, of creating things, is something that is intrinsically valuable. Maybe that is one of the things that drives us. And we have things like relationships with others, just experiences, experiencing nature or good food. That is something we want to strive for and maximize, even when AI can do everything for us.

speaker_0: Yeah.

speaker_1: And then we have some really weird futures. If we feel that AI can do basically everything better than us, it means that it can also raise my kids better than I do. If I let the AI raise my kids, they will become happier than if I do it. What should I do then? Yeah, weird world.

speaker_0: Yeah, I don't- I don't have any good answers for that one.

speaker_1: Yeah.

speaker_0: Here's another idea in terms of the purpose of education, kind of rethinking some fundamental assumptions. I suspect this one might be more controversial in Sweden than it would be in the US, although it wouldn't be without controversy here, by any means. But...

speaker_1: All right.

speaker_0: Again, we're working from the premise that maybe... I think already, right? I mean, we've seen GDPval from OpenAI in the last few days, which showed, notably, Claude Opus 4.1 at the highest level, almost at parity in terms of how effective it was at doing domain work. Tasks created by experts, then Opus and an actual human expert each doing the task, then another set of experts evaluating the results, and Opus is almost at 50% in terms of how often the evaluating expert prefers its output to the human expert's. So we don't have to go too much farther, and then we're in this world we've been envisioning, where most people are probably not gonna be able to make an irreplaceable contribution to the economy. I feel pretty confident that we're gonna get there. What I don't know, although I certainly don't rule it out either, is whether the AIs will begin to deliver paradigm shifts for us at some point. Will there be a sort of Einstein-level contribution from AI that reframes things we thought we knew, a whole new paradigm that both explains the previous paradigm and unlocks a new depth of understanding? Honestly, I think the AIs probably will get there, but that's much less certain. So anyway, for the time that we're in this zone, where the AIs are competitive with, if not better than, our mass expert culture, but maybe not yet at the truly revolutionary Einstein level, one thing you might think of education as serving to do is identify and cultivate uniquely special genius. In other words, like-

speaker_1: Mm-hmm

speaker_0: ... there's an Einstein in your country right now somewhere for probably-

speaker_1: All right

speaker_0: ... 10 or 20 different things. Can we figure out who those people are and-

speaker_1: Identifying the top 3% who really could excel at something.

speaker_0: Yeah. The people who actually could change things, 'cause again-

speaker_1: But why?

speaker_0: ... I don't know anything about... Well, maybe we still need them. If the AI-

speaker_1: Yeah, but if-

speaker_0: ... especially if the AI can't deliver that kind of paradigm shift.

speaker_1: Yeah, all right, but we're doing pretty okay right now without focusing exclusively on the top 3%, so I don't think we as a society need them, unless we feel like, oh, the aliens are coming, we need to mobilize or something. I think it's an interesting idea, and I think it's worth exploring, because we should explore many different ideas, but I don't see society investing in that. From a societal level, if you want the GDP or the productivity from, say, these 3% of people, then it's a huge investment to run the full educational system just to find them. You could of course use some screening, but the alternative is to wait five more years and then have AIs do that as well. And I think that's much cheaper and easier. And we're still, from a societal viewpoint, left with the conundrum of what to do with the 97% of people, and that's something we need to solve. Should they have their UBI and be happy? And how can we do that in a way that they don't feel they're just unemployed and useless, but instead having a nice retirement and spending their lives in a good way? I guess-

speaker_0: How worried are you about that? My perhaps naive sense is the way I put it to Jake Sullivan was-

speaker_1: Yeah, I'm not that worried

speaker_0: ... if the political class can figure out the international relations, I believe in the working class's ability to spend the peace dividend. In other words, like-

speaker_1: Yeah

speaker_0: ... I think people will probably figure it out.

speaker_1: Yeah, yeah, me too. Uh, and if I'm worried, it's that a small minority of people will gather all the resources and the others will be left without the UBI. If I were unemployed today, and provided for by the government or something, I would be happy. I have a job that I find really interesting, but I would love to just play board games with my friends, hang out with my family, and read books. 90% of people don't have a job like that, I think, so yeah. I'm not worried about people complaining because they don't have jobs.

speaker_0: Yeah. How about any other, like, habits of mind you think people in the education world should be adopting or cultivating? I'll give a shout-out to my longtime teammate Matt Cahill from Waymark, who is the son of two teachers and the technology lead at our company. He's made one of his mantras "Teach and learn every day." What I think is interesting about that, and maybe generalizable, and especially applicable to AI, is the idea that we all have a lot to learn, and when it comes to AI, we're all on the same timeline. We may be different ages, but we're all experiencing these advances and these changes to our reality at the same time, regardless of what age we are and regardless of what we have or haven't otherwise experienced in life. So if I wanted to give one bit of advice to teachers, it might be that they should embrace the idea that they need to be learning right alongside the students: being hands-on, doing their experiments, making their mistakes, sharing all that stuff with the students. Basically modeling, "This is an active learning process for me and, therefore, of course it's going to be the same for you."

speaker_1: Yeah, I agree. And one way that I phrase this for teachers and principals is: make more mistakes. We need to raise the number of mistakes that we make, because we need to learn and use AI at the same time. This technology is important and it's evolving so fast that we need to play with it. And just as we allow our students to make mistakes while learning, we should apply that to ourselves as well. We need to identify the most costly mistakes, but as long as we keep those in mind, we should celebrate our own mistakes. Because it's fine to make mistakes. It means that we're learning.

speaker_0: Any other, like, habits of mind or, you know, best practices, all-star standout example approaches that we should highlight?

speaker_1: Yeah. Well, another mantra of mine is that everyone, not least teachers, should find a way to use AI to save five minutes every day. The best way of learning about AI is to use it. And everyone can find a way to save five minutes a day using AI. If you do that, you will learn more about the technology: how you can use it, what it means for the world, what it means for your students, how you could use it in your teaching, what your students should learn about AI, and so on, all while still saving time. So use AI to save five minutes a day.

speaker_0: How about any, um, let's say, reading assignments you think should be more emphasized? These could be things the teacher should read, or the students, or, again, maybe we should all read them together. Like science fiction, you know, sort of visions for a pos-

speaker_1: Ah

speaker_0: ... positive future, or just anything that you think is, like, really useful cultural capital to bring into the AI era.

speaker_1: Wow. I don't know. I mean, science fiction is a good thing. Well, science fiction is not laser sabers and stuff. Science fiction is picking an idea, or several ideas, about the future: what would happen to the world or society if this was different, or that was different? That kind of science fiction is a great way of expanding your views of how things might be. And sometimes people say, "Well, this AI stuff, this or that happening, that's just science fiction," which to me is, like, a weird thing to say. A lot of science fiction stuff is present right now in our world.

speaker_0: Yeah.

speaker_1: Yeah. So reading more science fiction, or watching science fiction, is a good way to expand your views when it comes to AI. Otherwise, I don't know. But being curious is great also when it comes to literature.

speaker_0: And another great assignment, I think, is just trying to get the kids to do some of that future vision work.

speaker_1: Mm-hmm.

speaker_0: It... I don't necessarily expect the hit rate on, you know, great literature coming out of the typical classroom to be very high, but if only as a reminder to people that, at least for now, we still have agency to think about what the future could look like, think about what we want it to look like, and actually try to steer it in that direction. With so much happening all around us, and so much feeling like it's happening to us, or happening by some exponential process that's got a logic or a momentum or a life of its own, it does seem like a useful mindset to cultivate: just imagine a future. Imagine something different-

speaker_1: Yeah

speaker_0: ... than what exists right now.

speaker_1: Yeah.

speaker_0: And, you know, kind of-

speaker_1: We have agency

speaker_0: ... flesh that out. Yeah.

speaker_1: Yeah. Yeah.

speaker_0: Yeah.

speaker_1: Maybe you should be a teacher. That was a good idea.

speaker_0: Uh, I... That's kind of what I'm trying-

speaker_1: Yeah

speaker_0: ... to... You know, to a degree, it's what I'm trying to do with this podcast. Uh, very much with a teach-and-learn, emphasis-on-learning approach, no doubt.

speaker_1: Yeah.

speaker_0: I think we've covered all the bullet points that I had outlined, and I appreciate all your time. Anything that-

speaker_1: Yeah

speaker_0: ... we haven't talked about that you would wanna make sure we cover? Or any other thoughts, uh, to leave people with?

speaker_1: No, I think we covered things pretty well. It's been a fun conversation. It's not often I have this opportunity to geek out on AI in education at this length. Well, for teachers... teachers might feel overwhelmed: "What am I supposed to do? Everything is going so fast. I don't understand what AI is." And what teachers should do is focus on their students. That's the role of the teacher. And when it comes to AI, that means you should learn enough about AI to understand: is this something your students should learn about, something you should teach your students in your subject? And if so, how should you do it? That's the main responsibility of teachers when it comes to AI. Apart from that, principals and school owners need to invest in AI competence for their teachers' professional development. And at a strategic level, national levels, perhaps school districts, we need to look ahead and see what's going to happen in a few years, three, five, ten years, and how we can prepare for that. That is a big challenge. But as a teacher, try to relax a bit, focus on your students, and find a way to save five minutes a day using AI.

speaker_0: Love it.

speaker_1: And, as of a few days ago, I have a website where you can find ways to save time. Can I mention it?

speaker_0: Yeah, please.

speaker_1: All right. So I'm just starting something called Graspable AI, with short videos aimed at teachers and others working in schools, trying to explain what AI is and what it means for education, society, and our students. So that's a place where you can find more about this.

speaker_0: So Graspable AI.

speaker_1: Yeah.

speaker_0: Tell me the, um, URL again one more time?

speaker_1: Uh, graspable.ai.

speaker_0: Graspable.ai. All right. Cool.

speaker_1: Yep.

speaker_0: Uh, I got a chance to preview some of these things, but I hadn't actually been to the URL, so that's great.

speaker_1: All right.

speaker_0: Cool. Well, Johan Falk, Graspable AI, thank you for being part of the Cognitive Revolution.

speaker_1: Thank you.