Episode 87
Trust No One Is Exactly What Authoritarians Want
Joel Breakstone on the dangerous space between healthy skepticism and total cynicism, and how to teach people the difference.

Every major technological paradigm shift has broken society before it fixed it. The printing press gave us the scientific revolution, but it also gave us witch hunt pamphlets. Radio connected millions of people, but it also let demagogues broadcast hate into our living rooms. The television brought the world closer, but it also turned politics into performance. And now we have the internet and social media and AI, and the pattern is repeating, but the speed is meaningfully different.
Now one person can fabricate a story and reach hundreds of thousands of people in mere hours or minutes. AI can generate video that's basically indistinguishable from reality. And all of the platforms that are delivering all of this to us, they're basically engineered to reward whatever makes us the most outraged, angry, and divided. For leaders in the social impact space, I believe this is an existential problem. If the people you're trying to reach don't believe anything is real anymore, your message can't land and your brand doesn't matter and your mission falls flat.
So what do we do about it? To explore that question, I wanted to talk with Joel Breakstone. Joel is the co-founder and executive director of the Digital Inquiry Group, a nonprofit that spun out of Stanford to tackle one of the most urgent problems of our time: how do we help people tell fact from fiction online? His Civic Online Reasoning curriculum has been downloaded millions of times across all 50 states, and it's built on a deceptively simple insight borrowed from professional fact checkers.
Episode Highlights:
[00:01:55] Was the internet a huge mistake?
[00:05:15] How algorithms and human psychology feed each other
[00:06:50] Is the internet fundamentally different from past paradigm shifts?
[00:08:30] From Stanford History Education Group to the Civic Online Reasoning curriculum
[00:12:35] Fact checkers vs. PhDs vs. Stanford freshmen: who evaluates sources best?
[00:15:20] Lateral reading: the counterintuitive skill that changes everything
[00:16:40] Why digital literacy mandates keep failing without materials
[00:22:40] What a "driver's license for the internet" might look like
[00:26:20] The collapse of institutional trust and rise of influencer trust
[00:31:05] AI as both threat and tool for digital literacy
[00:38:35] The ".org means trustworthy" myth and why evidence-based guidance matters
[00:41:50] What keeps Joel optimistic despite the scale of the challenge
Notable Quotes:
[00:11:25]: "The myth of the digital native is very much a myth. Young people, like the rest of us, need help making sense of the unbelievably crowded and confusing landscape that we encounter when we go online." Joel Breakstone
[00:27:30]: "It can become really easy to just throw your hands up in the air and say, 'Nothing's real. I don't know what to trust.' And that is a really dangerous place for us to end up because it plays into the hands of authoritarians. They want people not to know what to believe." Joel Breakstone
[00:24:30]: "This is not just a couple of skills. It's an orientation to how you make sense of new sources." Joel Breakstone
[00:35:50]: "Early arbiters of truth were often religious bodies. In modern history, that became media organizations and institutions and the academy. With the dawn of the internet and social media, arbiters of truth became algorithms. And now AI is just a new form of a new arbiter of truth that we have to question just like we questioned all of those others." Eric Ressler
[00:32:50]: "AI is not an oracle. AI is drawing information from somewhere. Students need to understand that information comes from somewhere. It's not free floating." Joel Breakstone
Resources & Links:
- Digital Inquiry Group — Joel's nonprofit, spun out of Stanford, developing free digital literacy curriculum and research
- Civic Online Reasoning (COR) Curriculum — Free curriculum teaching lateral reading and source evaluation skills, available to anyone with a free account
- Verified: How to Think Straight, Get Duped Less, and Make Better Decisions about What to Believe Online — Book by Mike Caulfield and Sam Wineburg (University of Chicago Press, 2023)
Full Transcript:
Eric Ressler [00:01:40]: Okay, Joel Breakstone, thank you so much for joining me today.
Joel Breakstone [00:01:45]: Oh, it's my pleasure. Thanks for having me.
Eric Ressler [00:01:45]: So lots to talk about today. Really a big fan of your work, and when we worked together on Civic Online Reasoning, I actually learned a lot about how to be a good digital citizen, which is something I've been since I was really young. The first question I want to ask you is half in jest, but also not totally. Was the internet a huge mistake, Joel?
Joel Breakstone [00:02:05]: No, I definitely don't think so. I think it's an incredibly powerful tool and we are better for having access to it, but certainly our work shows that we need to help people, and our work focuses particularly on young people, to understand how to use that incredibly powerful technology well.
Eric Ressler [00:02:30]: Yeah. So I asked that question half in jest because I do actually think deeply about this at times, where so much of my life has been shaped positively because of technology and specifically the internet, to the point that I run basically a creative agency, a digital agency, that maybe wouldn't exist without the internet in one form or another. We build a lot of digital tools and websites as well as brands and strategy. And yet I've seen in real time the internet, and maybe a little bit more specifically social media, really negatively impact society in pretty meaningful ways, to the point that it's influencing elections, it's influencing dialogue around really big-picture things, even war. Can you talk to me a little bit about how the internet went from this really good faith, incredible experiment where we could suddenly be connected instantly to where we are today?
Joel Breakstone [00:03:30]: I don't know if we can trace the entire history briefly. I think that what has happened is that there has been a proliferation of deeply problematic content online and we have an information ecosystem that rewards people who can capture attention and that within that attention economy, bad behavior is often rewarded. That ranges from the ways in which algorithms are tuned to hold us on platforms and to make us pay attention to whatever it is, even if it's not good for us or if it's harmful for young people, and to spread deeply problematic content and dangerous and damaging content as well. And that has only intensified in recent years. And so without a doubt, there are fundamental issues that we need to address to ensure the wellbeing of young people and all of us, as well as to strengthen the democratic systems at the heart of our country and nations all around the world.
And so for sure, the stakes are very high and the need to address this threat is very significant and it needs to be a whole of society approach. Not one sector is going to be able to take on the set of ills that we are being confronted with. Instead, we all need to be figuring out ways to try to address the problems that have become all too apparent.
Eric Ressler [00:05:15]: So I want to hold a little bit on this. Specifically, you mentioned that the algorithms reward bad behavior. And something that I've been curious about, and I don't think this is probably an either-or situation, is how much of that is intentional because of algorithmic tuning from big tech companies to reward engagement basically at all costs versus how much of that is just a negativity bias in human psychology?
Joel Breakstone [00:05:45]: I think there's certainly some of both. And those two things feed into one another to create a landscape in which people get rewarded for that bad behavior and they can grow their following and their clout online by propagating problematic content, by being rage merchants. That is, as you note, a fundamental piece of human psychology, that we have that innate reaction and we engage, and that engagement is what platforms want. They want people to spend time there and to be engaged. And so people have responded to those incentives and we're all worse for it.
Eric Ressler [00:06:40]: Yeah. So I want to kind of... Your background is actually as a historian originally, if I'm getting my notes right here, working out of Stanford and Dartmouth before that. Is the internet uniquely different from other major paradigm shifts in technology and communications? For example, when the printing press came out, there were all kinds of mis- and disinformation in pamphlets and the witch trials. And this was also a problem even before that technology. So is the internet unique in that it's just at a scale and a speed that is meaningfully different from past paradigm shifts?
Joel Breakstone [00:07:20]: Well, I think you've noted something that's important. That this is not a new phenomenon in terms of people spreading inaccurate, misleading, and dangerous content. That has been around for a very long time. But I do think that in the present moment, the speed of dissemination and the ease of dissemination is fundamentally different. When the printing press came out, everybody didn't have a printing press in their pocket that could reach across the world in moments. That presents a fundamentally different reality. And with the advent and rise of AI and the power of those tools now, anybody can create incredibly lifelike images and videos that can be spread all across the world. And that also is fundamentally different. The power of the technology is hard to get our heads around.
Eric Ressler [00:08:25]: Yeah. And I definitely want to spend some time on AI in a little bit, but before we get there, I'd love if you can tell our listeners a little bit about how your work with the Stanford History Education Group led to the creation of COR, or Civic Online Reasoning, which is essentially a free curriculum to help combat this issue of mis- and disinformation, but more broadly digital literacy. So can you, at a high level, outline: how did that come to be? What was the research showing? What did you learn from that? And what's the intervention that you guys have created to try and combat the negative downstream effects of what you found?
Joel Breakstone [00:09:05]: Yeah, absolutely. So as you noted, our organization was called the Stanford History Education Group, and we were based at Stanford's Graduate School of Education. And our focus was primarily on history education. I'm a former high school history teacher and my colleagues all had backgrounds in history education. We were making document-based history lessons and assessments and giving them away for free online. And just over 10 years ago, we were approached by a foundation, the McCormick Foundation based out of Illinois, and they wondered if we could do some similar work to what we had been doing around history assessments, making short tasks that asked students to evaluate real sources and do the same thing in the area of digital literacy, that they were funding projects that focused on helping students to navigate online spaces. And they wanted to have better evidence about whether or not those programs were effective in helping students to be more discerning.
And so we began to make short tasks that asked students to evaluate unfamiliar online sources, native ads, unfamiliar social media posts, websites that were created by public relations firms and asked students to make sense of them. And we gave these tasks to students ranging from middle school to college all across the country. And we saw a really disturbing result, which was that across the board students struggled with even the most basic tasks. They couldn't distinguish sponsored content from a news story. They didn't know that a PR firm was behind a website that was purporting to provide nonpartisan research-based evidence about public policy issues. They were easily misled by social media videos. And so this idea that because young people have grown up with digital devices, that they are better equipped to make sense of the information that those devices provide just really did not hold up. That myth of the digital native is very much a myth. Young people, like the rest of us, need help making sense of the unbelievably crowded and confusing landscape that we encounter when we go online.
And so yeah, our research revealed deep problems for students in making sense of digital content. And we released our findings, based on more than 7,000 student responses we collected, in November of 2016, shortly after Donald Trump was first elected president. And there was an enormous amount of interest in the question of misinformation and how to be a little wiser on the internet. And so we heard many, many inquiries about, well, what do we do about this problem? Your research shows that students are struggling. How do we help them do better? And so we wanted to try to identify expert practice. What are more skilled approaches to evaluating online sources? And so we did a research project led by my colleagues, Sam Wineburg and Sarah McGrew, where we asked three groups of people to evaluate unfamiliar sources.
And we thought each group might be particularly well suited to evaluate sources that they hadn't seen before. And those three groups were Stanford University freshmen, young people in the heart of Silicon Valley who are online all the time, and many of whom will go on to found and work at tech companies, and then historians, people who have PhDs and are evaluating sources for a living. And then finally, fact checkers from the nation's leading news outlets, people who are responsible for ensuring the accuracy of information that those organizations publish. And what we did was to present them with online sources and to record their screens as they showed us what they would do to try to decide whether or not to trust those sources. And there were some really striking differences across those three groups. The professional fact checkers were way better than the students or the academics at evaluating unfamiliar sources.
And the thing that distinguished them more than anything else was that when they came across an unfamiliar source, they almost immediately left it. They didn't read it carefully or closely. That's what the students and the academics did. They did what helped them to be successful as students and as researchers. They read carefully and closely. But on the internet, that often could lead you astray. One of the tasks we had people complete was to evaluate an article from the website minimumwage.com. And it says that it's a project of the Employment Policies Institute. And the Employment Policies Institute has a .org website, and it says that it engages in nonpartisan research and that it's a nonprofit organization. All these things sound good, and the students and the academics read them carefully and closely. In contrast, the fact checkers did something fundamentally different. They didn't read the article carefully.
They left it and they turned to the broader internet and opened a new tab in their browsers and searched for information. And by doing that, they found out that this is a website that is a front group for a public relations firm that's working on behalf of people who want to keep minimum wages lower, and that this is not a nonpartisan effort. In fact, it's a very concerted effort to influence public policy. That information is readily available if you go looking for it. And that's what the fact checkers did. And what we did was to try to distill down their strategies. That move of getting off an unfamiliar page we refer to as lateral reading, of leaving an unfamiliar source and opening up new tabs and reading across them rather than staying on a single page and reading vertically, which often works well in a print environment, which is why it helps students get into Stanford and academics get their PhDs, but is not nearly as useful in online spaces.
And so what we've tried to do is to distill those strategies that we saw the fact checkers deploy effectively into a set of tools to teach students. And that's at the heart of the Civic Online Reasoning curriculum, which is a set of resources that teach these skills like lateral reading to students and provide them opportunities to practice them with real sources from the spaces where we know students are spending their time, TikTok and Instagram, so that they have opportunities to practice and to build their capacity to sort fact from fiction when they are on their devices.
Eric Ressler [00:16:35]: So to me, this skill, let's say broadly under the umbrella of digital literacy, seems like the single most important skill to teach young people in the world in this moment. And yet you guys have free curriculum that you're providing. There's more and more tools out there. There are these checklist tools that you have, I think, rightfully called out as being inadequate or flawed, even if the intention is right. There's now a bunch of state legislation across multiple states in America. There's other legislation in other countries attempting to solve this problem, but it's not working, at least at the scale and the depth that we need it to work. And I don't mean that as a criticism of your organization. Obviously, as you mentioned at the beginning, this is going to take everyone coming together to solve this problem, but what isn't working about this intervention? Because my sense is that we have the tools, the methodology, the technology. We understand how to solve this problem practically, but it's not being solved nearly quickly enough. So what's getting in the way from your perspective?
Joel Breakstone [00:17:45]: No, I think that's exactly right. We have evidence that these approaches can work. Our studies have shown that by teaching these skills to students, we see them improve. We've engaged in research ranging from studies across an entire urban school district to interventions in college classrooms. And those results have been replicated by other researchers, both in the United States and abroad, in Canada and in Italy and Sweden. So yeah, there is clear evidence that this is not an intractable problem. We can move the needle and improve the ability to discern of students and of the public at large; this isn't just restricted to young people. I think the key problem, especially when we think about educational settings, is that there is not a school subject called digital literacy. There is not a home for this work in the school day. And so we can create new legislation that calls for the teaching of digital literacy, but until there is a way to make it a meaningful part of students' education, we're not going to see much progress.
This isn't something that can be solved in a single workshop. Students need practice and opportunities to reflect with their classmates and their teachers about how to do these strategies effectively. And so we believe that the way forward is to find ways to build this kind of instruction into the existing curriculum. So not trying to create a brand new version of the school day, but instead find ways to work within the curriculum as it exists. So for instance, if we're thinking about the history curriculum, how might we have students investigate a TikTok about an issue related to Reconstruction? For instance, the origins of the term grandfather clause. You can find very interesting sets of videos about that online, and we could teach about that video quickly in a broader lesson on a topic that teachers are already spending time on. Everybody in US history is teaching Reconstruction.
We could spend a little bit of time by doing a quick activity at the end of a lesson and provide opportunities to practice. So really finding ways to weave these materials into the existing curriculum. And that speaks to what I would say is also the broader problem, which is that there have not been parallel efforts to create the resources and professional development for educators to implement these mandates. By and large, these legislative mandates have been mandates without materials. And teachers and schools and districts are being left to try to figure out how to address this incredibly challenging problem on their own. And so if you believe that this is a very pressing problem for young people and our society as a whole, we need to invest in developing materials that will make it as easy as possible for educators to enact this important type of work in their classrooms and to support educators in doing so.
This is new for everybody. This is not the kind of instruction that most teachers learned about when they were preparing to become teachers. And so we need to ensure that we are not just loading another responsibility on the back of teachers without giving them the support to do that well. So both finding ways to build this kind of instruction into the school curriculum and then making the materials to do that well.
Eric Ressler [00:22:35]: There's so many ways I think we could approach this, and I appreciate your pragmatism, and always have, in realizing that creating an entirely new school day is going to be difficult. And yet at the same time, we have done that for other really important topics. And even something like, if we think about the parallel of getting a driver's license in the United States, you don't get to just start driving all of a sudden. You have to actually get a license, take a test, do practical application, and that can happen through high schools, but you can also do it through a third party. You don't have to do it through the school system. If you could wave a magic wand and redesign how this all worked, and put aside all the barriers of changing the school system and the academy, what would the absolute right way to do this look like, in your opinion?
Joel Breakstone [00:23:30]: I think that the key element we've seen from our work is that this kind of effort has to be ongoing, that this is not a simple, here is a small set of strategies, we told them to you and now you are ready to be a much more discerning consumer of online information. No, it's like anything else, any skilled practice: you need opportunities to try it out and to make mistakes and to learn. And importantly, you need to have a way in which to build that capacity over time. And so we need to think about how to make that a long-term strategy so that it's not just, "Oh, I learned that once and now I'm done with it." Instead, it's a disposition towards information. I think that's a key understanding. This is not just a couple of skills. It's an orientation to how you make sense of new sources.
It's asking, "What is this thing? Do I know what it is?" And even if you aren't able to track down exactly what it is, just having that question of saying, "I'm not sure," allows you to have a very different engagement with online sources. Just that pause can make a huge difference rather than just accepting at face value, which is what we've seen so often. Seeing is believing.
Eric Ressler [00:25:10]: Yeah. And I think that I can relate to that. I think through our work together, I have learned to practice lateral reading. And so whenever an unfamiliar source or claim is presented to me, the very first thing that I do is go to Wikipedia or do a web search. And I know that's not an infallible strategy, but what I've found is that even people who I'm close friends with or in my family who should know better, and I fall into this too, I think everyone has fallen for some version of mis- or disinformation, probably all the time without even realizing it. But I noticed that most people just go by vibes more than anything. Does it reinforce a belief that I already have, I think, is something that we all need to be wary of.
And does it just feel legit even if it's not? And I think one thing I'd love to tie in that I think is relevant to anyone doing this work in the social impact space, whether you're focused on digital literacy or just any kind of social change, is the relationship between information, disinformation, attention, trust and credibility, that whole ecosystem. And I think that the thing that I've noticed is that especially in the last five-plus years, there's been a massive shift in the general public away from trusting institutions, trusting organizations, trusting the government, and a lot of that trust has been reallocated towards individual people, either people directly in their lives or more and more individual influencers on the internet. I'd be curious to hear how you think about that problem and whether you think that is just an inevitable shift in our modern media ecosystem, or if there's something that we should be resetting back to trust in some of these broader institutions that needs to be taught as a skill.
Joel Breakstone [00:27:15]: Yeah. I think it's absolutely a shift of increasing distrust in institutions and more broadly, a distrust in everything, that as problematic content has spread and as AI slop has proliferated, it can become really easy to just throw your hands up in the air and say, "Nothing's real. I don't know what to trust. There's nothing there." And that is a really dangerous place for us to end up because it plays into the hands of authoritarians. They want people not to know what to believe. And then they say, "The only thing you can trust is me or my organization." And that's a problem. We want people to be empowered to make decisions that are based on evidence and are in the interests of them and their communities. And so it's crucially important that people have the tools to seek out information so that they can make good decisions.
If we end up in a place where people don't think they can do that, it's a pretty bleak future. So without a doubt, it's really important for us to make clear that there are strategies for finding better information and that you can use them. And that, as you note, it's not infallible. This is not a foolproof effort, but if you practice some of these ways of reasoning, you generally end up in a better place rather than just accepting information at face value, and certainly in a better place than saying, "I can't know, and so it's impossible to know," because then you're not informed and you aren't going to be empowered to be a civic actor in our shared democracy. And so it's really important to be able to both make that reality apparent to people and then to equip them with tools to help them to find the information that will allow them to make decisions that are aligned with their own interests.
Eric Ressler [00:29:30]: Yeah. I mean, I think the result of this that we're all becoming more aware of, and hopefully are working as a culture, as a society, to shift, is that now we all live in these siloed ecosystems from an information standpoint. You used to have the three TV channels or your local news, and that's how you got your information and your facts, or you talked to your friends at the school board or at the bar or wherever. And if you had a really weird, wacky idea, you'd bring that to a social situation and someone might be like, "Hey, Eric, that's a weird idea. Where'd you get that from?" And now you can find an entire community who's like, "Yes, that idea is exactly right." And so these weird, unhealthy, not evidence-based ideas are able to proliferate and flourish in a way that, before this ecosystem and the ease with which we could all connect existed, they would basically just get shut down by normal human culture.
And so with all of that in mind, I think the thing that I struggle with is we get into this situation where it's like, okay, well, I don't know who to trust. I can't trust anyone. You hear that a lot, but then you see the behavior and people go to, "So I'm going to trust this Instagram influencer," which to me seems like the exact opposite conclusion that you should draw from that ecosystem. So I want to bring this into a couple threads before we wrap up here. One, we should definitely go and talk more about AI because AI is, in my opinion, a double-edged sword around all of this because I think it has potential in an optimistic way to solve some of these issues, but it also has potential in a much more pessimistic way to just proliferate them even further. So let me ask you one question I have.
I mentioned earlier that my version of lateral reading used to be Wikipedia and web search. My new version of lateral reading is to do deep research on a topic and have AI essentially do that for me. And I'm aware of the potential issues there. One of the things I do is use Claude for a lot of research, and it will show you citations. And I always check who they're actually referencing. And what's interesting is they will reference, or the tool will reference, sometimes partisan sources. But if I were to go do a web search, I would also likely find partisan sources and would need to get into this lateral reading, never-ending spiral where it's like, "Well, I'm going to go do another search and find another source, but now that source is also potentially unfamiliar." So I guess the question really is, how do you see AI affecting this? What are the ways that AI might be a helpful tool to counter some of this mis- and disinformation and to support digital literacy in general, and what are the ways that it's going to probably be very problematic against this issue as well?
Joel Breakstone [00:32:30]: So the first thing is that it would be easy to say, well, your whole approach to lateral reading is irrelevant now because who's on a web browser anyway, who's doing a search? But the reality of it all is that it still is important to think about where information comes from because AI is not an oracle. AI is drawing information from somewhere. And as we think about how to move forward with teaching these ways of reasoning to students, that's at the heart of it. Students need to understand that information comes from somewhere. It's not free floating. And when we are encountering AI-generated content, whether that is the AI summary when you use a search engine or if you are going directly to a chatbot, we want to know where the information is coming from. We should not just accept an AI-generated response because it is polished and seems convincing.
We need to think about exactly what you said, which is what are the sources that they are providing. And that is lateral reading ultimately, of using other sources to become better informed about a claim or a person or an organization. That has to be part of the process. And sure, many of them are going to be partisan. That's fine. That's the nature of the beast. We just need to take into account what is the perspective of those sources and to be thinking about what are higher quality sources also. And that comes back to the issue about influencers and who to believe online, is to think about the authority of a source. Why is this person in a position to know on a given topic? And there are a variety of ways to think about that. And also, what was the process that was used to create this information?
Were there processes that helped to improve that ultimate product? Were there editors involved? Are there experts who were consulted? Are there checks to correct mistakes if they happen? Why is it important to have a correction policy if you're a news organization, or, if you are an academic press, to put something through review or peer review before it's published? Again, none of these things are infallible. There are deep problems with news organizations and with academia, but it's better to have processes to ensure the quality of information than not. And so really thinking about what kinds of sources you would want to use is part of consulting and using AI for these purposes. It just needs to be addressed that we can't just think, "Well, it's a good result," without asking, "Well, what are those sources?" And if they're not linked, to ask for them, and also to think about prompts that push the models to provide sources from particular types of organizations or people. Those are all tools that can lead to better results.
Eric Ressler [00:35:45]: Yeah. And I think about this in the sense of, again, these paradigm shifts. The early arbiters of truth were often religious bodies, if we go back far enough in history. In modern history, that role passed to media organizations, institutions, and the academy. With the dawn of the internet and social media, the arbiters of truth became algorithms. And now AI is simply a new arbiter of truth that we have to question, just as we questioned all of those others. But I am optimistic that, again, with the right skills, AI can actually be a real force for good here. And I've, again learning through you all, been trained on lateral reading and click restraint and some of these other core disciplines and skills. And I've been able to translate that into AI tools in, I think, a positive way. Often, to your point around prompting, when I'm learning about an issue that I'm confused about and I hear different partisan opinions, it's like, "Hey, I'm interested in learning about this issue. What are the different opinions about it? Is there consensus at this point? What are the sources I should be looking at to learn more?" And it gives me a really good starting point very rapidly, whereas before I would've had to do all of that research on my own. Often in these deep research modes, whether through Perplexity or Claude or ChatGPT or any of the big players, they're looking at three or four hundred sources. I don't have time to do that. Now you could ask, are 400 sources meaningfully better than the top 20? I don't know. That's an open question for me. But I've found that with intentional thought and by using the tools wisely, they could really help with this. Do you find that to be true, or are you more skeptical about the technology?
Joel Breakstone [00:37:35]: I think there's real potential. And I think we're in a moment where we need to do careful research in classrooms. For us as an educational nonprofit, we want to have evidence about which practices work. We think there is real potential for using these tools to verify viral claims on social media and to quickly surface fact checks; these tools could work very well in that regard. But just as we set out initially to identify best practices for verifying content in a landscape where we were primarily using search engines, we need to do the same now. And we need to make sure that we're providing guidance that is grounded in evidence. One of the shortcomings of our approach to teaching students to navigate the internet was that too often the guidance wasn't based on good evidence, and there were real negative consequences.
Countless students learned that .org websites are more reliable, even though there is no evidence for that whatsoever; it simply became a bromide that everybody learned. So going forward, as we think about this new technology, we need to make sure that our educational approaches have real backing. But without a doubt, there's real potential. And I think your phrase was how to use AI wisely. That's what we need to be working towards: preparing people to understand how the technology works and what strategies will help them find information. And importantly, they as individuals are part of that process. It comes back to the same idea of empowering people to find information, not simply saying, "Well, in ChatGPT or Gemini or Claude I trust." Instead, it's, how can I use this tool to become better informed?
And there's incredible power there and we should think about how we can harness it well.
Eric Ressler [00:39:50]: Yeah. And I think a lot of the same foundational principles apply to using AI responsibly. It's about not blindly trusting things, especially because AI is great at being confidently wrong. Honestly, in my use, I find that the newer models are improving on that front. You see it in the benchmarks; hallucination rates are dropping. But hallucinations are still there, and the models can be not only confidently wrong but very convincingly wrong, because they can articulate things in a way that sounds intuitively right and evidence-based, and then make up citations. You see this tragically showing up in scientific reports. You see it tragically showing up in government-funded research. So there are real risks, not to mention the ability to create AI slop and propaganda at scale more rapidly than ever before, and obviously deepfakes and AI-generated video.
So we're getting into a very sci-fi moment as a society where it's going to be more and more difficult to know what is true and what is real, especially within our own information silos and our fragmented ecosystem. To me, these skills are becoming existential for society, and I applaud Digital Inquiry Group and you and your team for doing the work that you're doing and providing the curriculum for free. Which, by the way, I should mention to listeners: you don't have to be a high school student to use the curriculum. It's fully accessible online. All you need is a Digital Inquiry Group account, which is free to create. And my hope is that more people start to pay attention to this. There's a lot of discussion around the downsides of the internet, social media, and AI, and I wish some of that energy were translated into funding, resources, efforts, policy, and regulation, the things we've been able to do for other major issues in society.
So I applaud you and your team for doing the good work and I want to leave you with one last question that will hopefully be a little bit more optimistic since there's been some existential discussion in this episode. What are you excited about? What's keeping you lit up and what are you optimistic about despite all of the challenges that we're seeing with all of this?
Joel Breakstone [00:42:10]: I would say that we believe this is a problem we can tackle. It's not an easy one, but we've seen over and over again that there are approaches that can help people become better informed online, and that students and educators want to take these issues up. We're privileged to have the opportunity to work with teachers all across the country, and they are hungry for resources and support. Their students want to learn meaningful strategies for engaging with the content that streams across their devices. So I see it as a landscape of opportunity: there are people who care deeply about this issue and are ready to take it up. We just need to make sure, as a society, that we're equipping them with the tools to be successful in addressing this crucial issue going forward.
Eric Ressler [00:43:15]: Wonderful. Any listeners who are also funders, please consider hitting up Joel and his team. They have spun out of Stanford and are now the Digital Inquiry Group, their own 501(c)(3) nonprofit, and we need to fund these efforts. So anyone with big wallets and opportunities to make meaningful gifts, please consider that. And any listeners who can make even a small gift to Joel and his team, I would also recommend doing that. It's very much worth supporting this work. Joel, thank you so much for joining me today. This was great.
Joel Breakstone [00:43:45]: Thank you so much. It was my pleasure.