Episode Transcript
[00:00:00] Guest: We went through some scenarios about is this cheating or not? Is this misusing AI or not? The 50-50 scenario was half of these administrators said it was okay for students to put an elaborate prompt into an AI system about an assignment and then essentially follow the outline that was given by the generative AI system. Half said it was cheating, half said that would be okay with them.
[00:00:31] Host: Welcome to the EdTechConnect podcast, your source for exploring the cutting-edge world of educational technology. I'm your host, Jeff Dillon, and I'm excited to bring you insights and inspiration from the brightest minds and innovators shaping the future of education. We'll dive into conversations with leading experts, educators and solution providers who are transforming the learning landscape. Be sure to subscribe and leave a review on your favorite podcast platform so you don't miss an episode. So sit back, relax, and let's dive in.
Lee Rainie directs Elon University's Imagining the Digital Future Center, a research initiative launched in 2000 and expanded in 2024. It investigates how emerging tools, including generative AI, might affect our everyday lives and learning environments. Previously, Lee was founding director of Pew Research Center's Internet and Technology research team, where he oversaw more than 850 reports on the social impact of online connectivity. Those insights helped educators, policymakers, and the public understand the promises and pitfalls of digital innovation. Lee co-authored Networked: The New Social Operating System with Barry Wellman, exploring how mobile devices and social media shift our sense of community and self. He also worked as managing editor of U.S. News & World Report and served as a politics reporter for the New York Daily News. In each role, Lee has been driven by one question: how do emerging technologies change our culture and how we learn? His research shapes thinking around the ethical implications of AI, classroom teaching strategies, and the broader impact of digital tools on higher education.
Welcome to the show, Lee.
[00:02:28] Guest: Thanks so much, Jeff. I'm really happy to be with you.
[00:02:30] Host: So tell us about this study. You led this study with AAC&U on how AI is affecting higher ed, and I was looking through this, and I heard you on the Chronicle talking about it. Can you give us an overview of this AI study that you've undertaken?
[00:02:45] Guest: We learned very quickly, as AI rolled out, and particularly after language models became a very popular part of the culture, that higher education was going to be heavily disrupted by these forces. So it's interesting to be in the academy as you listen to your colleagues and watch this unfolding. And it's also clear that segments of the society are hoping that higher education is going to figure out the way to use these tools well and to make sure these tools are not misused. And so it's just an interesting center of the universe where the challenges are just right there in your face. How are we going to teach and learn in this new environment? And the call of the culture is, let's make sure this stuff turns out right for all of humankind. That's why we're studying it. And Elon has taken a couple of steps to try to have a voice in the conversation. Just about a month or two after ChatGPT came into the world in November of 2022, Elon began to work on universal principles for thinking about how higher education should deal with AI. And it relates to the kind of obvious problems of hallucinations and misinformation and bias and discrimination and privacy. And then we began to do some primary research in the field, and that's what gave rise to this survey. We interviewed college presidents and provosts and high-ranking deans about what AI was doing already on campus, how they were responding to it, and then what's going to happen in the future as they think about it.
[00:04:24] Host: I like that you interviewed the non-technical leaders. Tell me some of the key takeaways that you discovered in this research.
[00:04:33] Guest: Well, a couple of things. I mean, the big eye-popping number was that 95% of these academic leaders said that the teaching model that has been the anchor of higher education for centuries is going to be disrupted. About half of them said it was going to be a major disruption. So there is a way that this is a very dramatic big bang moment on campus to really rethink teaching and learning. And then one of the most revealing batteries of questions, I was going to say one of my favorites. Well, it is a favorite in the sense that it highlights both the positive and the negative sides of AI in any educational circumstance, particularly in higher education. So we marched these college presidents and provosts through four positive potential outcomes and four negative potential outcomes. And they said yes, both things are going to happen.
The positive ones were that they were quite confident that over time AI was going to help enhance learning on campuses. There are all sorts of ways that they could see it would be used in teaching and learning that would be beneficial. They think it's going to improve student research skills. So after the adjustments are made and after the sort of big disruptions occur in classrooms, students are going to learn more, they think, and they will be more skilled in that learning. They think it'll help student writing, which was one of the surprises in this survey, because writing, of course, is also the thing that is most worrisome; many people worry that AI will just be used to copy and paste term papers and things like that. But they thought eventually it's going to turn out positive for students. And interestingly enough, they also think it's going to increase creativity, that as you use the set of tools for ideation and prompting sort of new thinking and moving into areas that are not necessarily familiar, people can sort of learn from them. So they all think that those four things are for the good. The negative stuff was that they are obviously heavily focused on academic integrity, and they think the high stakes are just as evident as they can be in that domain. They also worry about student dependence on especially generative AI models, that students will default to the models and take their answers and sort of lose the capacity to think critically, or shed some cognitive burdens that they probably should be sustaining rather than shedding. They also worried about their version of the digital divide here. Digital inequities were a front and center issue for lots of these universities. And they also worry about the decline, as these models move through the culture, in student attention spans. Big concern there.
[00:07:20] Host: So I want to drill down into something you said early on in that response, which was about the academic integrity. I noticed the report shows that most campus leaders see an increase in cheating since generative AI has arrived. So I want your perspective on that whole thought. You know, we've changed in just a couple of years. How do we handle that? What constructive solutions can we consider going forward? Have we come around at all in the last couple of years with the cheating?
[00:07:46] Guest: They're actually more frantic about it now than they were a couple of years ago, in part because the tools for assessing whether a piece of writing or a piece of content was generated by AI or not are getting less good at detecting differences. And these leaders at colleges also didn't think their faculty were particularly well prepared to sort out and discern the bad stuff from the good stuff. The other thing that was very much top of mind for them was that as people fail to pick up when AI was used, again, this dependency would grow, that students would be able to use AI as a crutch to get through things rather than learn them directly. And so there's just a front and center concern about, well, even the definition in some ways. We had interesting findings among these leaders that there was a 50-50 split. We went through some scenarios about is this cheating or not? Is this misusing AI or not? The 50-50 scenario was half of these administrators said it was okay for students to put an elaborate prompt into an AI system about an assignment and then essentially follow the outline that was given by the generative AI system. Half said it was cheating, half said that would be okay with them. There are other examples where we got a much more positive reading, where most of these leaders thought it was okay for students to stick their pieces of writing into an AI tool and get grammar corrected, or to get flagged on where they weren't necessarily making their point very effectively, or things like that. But this notion about how deep can you go in outsourcing the basic creative process to these tools? There isn't even a common definition in the minds of the people who are on the front lines about what's in bounds and what's out of bounds.
[00:09:39] Host: I think that shows you asked a great question in the survey, when you get such a split. And it almost makes me think of the courts that are going to have to work through so many of these situations. You know, we know purely AI-generated content won't be covered, but how much does a human have to be involved for it to be covered? And that has to be tested, right?
[00:09:59] Guest: In the courts and in the context of cheating itself. You know, how much borrowing is allowed, and what is going to happen to intellectual property and citations and things like this? This throws open a whole array of questions that have been largely settled: when you do citations, when you credit other people's work. There used to be very clear lines about the provenance of stuff. If you're using it right and citing it right, you're fine. And if you're not, you get thrown out of school, or you get thrown out of the academy for violating them. Now a lot of that definitional stuff is up for grabs in interesting ways for a researcher like me looking at them, but there are also very hard cases that are going to come before academic integrity offices related to this, you know.
[00:10:49] Host: What I think is really interesting, and I just wrote an article on LinkedIn about this, is why we treat coding and programming so much differently with AI compared to writing. We've all accepted that AI has pretty much taken over; it's the best programmer out there. There are maybe 100 programmers in the world that can keep up with AI right now. And you can almost wear it as a badge of honor, if you're a high-end developer, that, yeah, I'm just reviewing code now. Like, there's full acceptance, pretty much, but not on the writing side. It is a bit taboo, and now I think it's being used more than people realize. But it's not talked about a whole lot.
Why do you think that is?
[00:11:30] Guest: I suspect that math gets a pass, or math is sort of outsourced, and coding and logic and all of the related fields. It feels different from intellectual property that emerges from sort of standard writing and other creativity things. I mean, we've both been talking about the academic side of this, but the courts are going to have to sort all of this out too. And it's partly going to be in the creative process: when should things be cited, and how much should be paid for it? It's partly on the liability side of things. I mean, AI is now in cars. AI is in a lot of products. And who's going to be held liable if the products fail? The software writers? The providers of the completed car that actually went awry? All sorts of things that were pretty well down the line of being settled in our culture. There were bright lines and norms about what was right and what was wrong. This is now calling all of that into question. And of course the sort of livelihoods of everybody who is in this intellectual property realm, from college professors to creators of all sorts of media. It's just going to be a field day for lawyers, among other things.
[00:12:45] Host: Yeah, I agree. I think we're all hesitant to have it take our creative side, and we haven't realized how to embrace that. I personally love the help when, like, I have an idea but I need a little more help with it, and I feel like I can come out with more ideas. So it's really teaching me. And I've seen others talk about this, like how to ask the right questions. It's such an important part of life. And like, if we can learn to ask the right questions, I think it can really be our ally in a way.
[00:13:10] Guest: I think that's so smart to recognize, because the creative task now is the extraction task rather than the production task. It's sort of, how do you get the right material out of this wonderful new tool that's available to all of us? And so prompting literacy is a new skill set that is much more valuable than before. And when so much knowledge, not necessarily all knowledge, but so much knowledge, is available to us, it's the hunt for it rather than the production of it that is the sort of talent of the future.
[00:13:42] Host: Yeah. One thing I heard very early on, it was only a few months in, that spring semester after ChatGPT came out, when students had realized, gosh, this can really help me, but faculty hadn't realized how to really manage that yet. And one faculty member I heard early on said, well, you have to do two assignments now. You have to do one that's using AI and one that's not, and show me how it improved. And I thought that was an early way that some faculty were using it, almost forcing students to keep up, in a way, I think.
[00:14:12] Guest: Yes. And there are refinements now of the thing that you're describing, where you hear about faculty members who nitpick the material that comes in from students. Tell me how you did this. I don't care if you did an AI prompt, but I want to know what the prompt was, and I want to know how much of the material that was generated came out of the machine and how much of it came out of your head. And it's, you know, back to the old days that at least I remember when we were taking more complicated math classes. The teachers would say, show me your work. I don't care as much that you got the right answer, but I want to know that you know what the logic of the system is and that you understand the concepts that we're trying to teach here. And I think show me your work is now the mantra of the future. It's going to happen in tests, it's going to happen in capstone projects, it's going to happen in portfolio building and things like that. We're going to have to be very discerning as creators about describing how we interact with this co-intelligence and then explaining why the particular blend of material we've brought together is useful and worth paying attention to or giving a grade or whatever it is.
[00:15:22] Host: I want to equate it to the administrative side; we're talking about the classroom, pretty much. But I talk to a lot of marketing leaders and enrollment professionals in higher ed, and I really advise the leaders to talk to their staff about, please let us know: save your prompts in this directory, keep a prompt library, for the ones who've, you know, embraced this. Because when people leave, we need to keep up. If someone's really good at using AI, let's learn from them. And I found this resistance, where people won't even admit they're using AI. And yeah, it's kind of obvious, like, you can tell they're using it. But it's very interesting. Let's embrace it, let's document how we're using it. We can all improve. I want to share prompts with people I work with, and let's see how we can get better together out of this.
[00:16:08] Guest: Absolutely. Related to that nice thing that you just said is the negotiation that now has to take place between a student and a faculty member about what's kosher and what's not. Some faculty members are just, never do it, and others are like, do it all the time, but show me how you're doing it, as we're describing. Even at the level of the assignment now, there aren't rules of the road. Some professors now begin their classes by saying, here are the ways you can use it, here are the ways you can't. But in many cases, they are orienting assignments around variance in usage. And that has to be negotiated sort of moment by moment to make sure students are staying on the right side of the line and faculty are explaining why there is a right side of the line to be worried about in the first place. So there's just all this fluidity in learning now that was not so much part of the standard model for millennia.
[00:17:02] Host: Basically, one of the findings suggests that many faculty and staff feel unprepared for AI in teaching and in administrative tasks. What practical steps do you think institutions can take to build that comfort and knowledge?
[00:17:18] Guest: I think we're in this sort of wild west moment where everybody's learning at the same time. There aren't asymmetries in what teachers know and what students know. Everybody's sort of in the same boat of figuring out what to do. So one invitation in that, which a whole bunch of schools, including mine, Elon University, have taken up, is to think that essentially we're co-learning, co-creating, co-inventing as we go. And there are a couple of teachers here who basically said to their students, I don't particularly know this any better than you do. So one of your assignments now is to teach me, is to help me learn things about it. And your grade's going to reflect how well you have brought your best self and your smartest self to things I don't know about. So there's a really interesting boundary shifting that is going on in some rooms. The other thing that has been interesting to see is that just between 2024 and 2025, I think there is a lot less sort of blanket resistance to this on a lot of campuses. People are thinking, well, it is going to happen and it is the future. And parents who are sending their kids to our schools care about it, the students care about it, the people who are going to employ them care about it. So resistance isn't really very deeply an option. So basically diving in and playing with the tools is now a standard operating procedure. The other thing is that campuses are struggling with this in a variety of ways that were interesting to hear about. Sharing best practices is absolutely an imperative for schools, but they don't yet have the mechanisms to do that well. I mean, literally, in the next building over there might be an AI genius who has figured out a system to teach it for a writing assignment, and you have no idea that person has done that. Just propagating good outcomes is its own challenge these days.
[00:19:10] Host: That really reminds me of this thing I've talked about recently, which is digital governance in general, where higher ed is struggling to allow information to be found on campus because of a lack of digital governance, a lack of proper search technology, things that AI can really help with. So I'm really trying to help schools build AI into that digital governance. Because right now, with the lack of it, most schools don't have real policy or guidelines. It just trickles down to the faculty member, right? It's like all the way down to them, and they're all, like you said, in different boats. But if we can get the temperature of our president or our CIO or our VP of marketing or whoever's owning that, you really need to start there, because there are a lot of different thoughts at the highest level. But once you have that idea, we have these different frameworks, I think, for how do we support that campus-wide? Because I think they need some help as to, well, the university can tolerate this, we accept this, and we encourage this, that type of thing.
[00:20:07] Guest: The other thing that you're making me think about in our survey, and it was such an interesting finding on this governance point, is who's in charge? And you know, some faculty want there just to be pronouncements about right and wrong and when you can punish somebody or not for doing it. But again, in this wild west environment, there's not that. And, you know, one model for how universities might succeed at doing this is curriculum reform. It takes an enormous amount of time, but it's always going on on campus. And there always are questions about what's the best now, what's relevant, what are the best ways to understand the basic concepts of our discipline, and things like that. And so porting over that model of how you do curriculum reform to the general process of using AI might serve a lot of institutions pretty well. But it is a blend. There's a little bit of top down, and the top-down format is lots of permission, lots of experimentation, a few guidelines to make sure people don't completely go off the rails. But then once you've begun to learn what works, then promulgating that and sort of saying, this seems good, these are better ways to do things than the things that you might be thinking about in your own classroom. So there's a tremendous amount of flux, and it's going in both directions.
[00:21:25] Host: Yeah. And I think your study addressed some of this, right? It found that, tell me if I'm right, under half of institutions are introducing new AI-focused classes.
[00:21:34] Guest: Yes. But it's an interesting thing. If you think about curriculum reform and how it usually takes many years to just get a new class authorized or a completely decrepit class deauthorized, the speed of this requires a lot faster uptake and assessment than curriculum reform allows. But in a way, you know, a year and a half into the era of generative AI, there's a lot of ferment going on, with task forces and curriculum committees deciding, should we create AI courses, AI-specific courses? Should we create a discipline or a minor in it? Should we create partnerships with industry? Because, you know, they are the frontline users of this stuff, and it will serve all of us to pick up some of the best tips from that, too.
[00:22:22] Host: You know, Lee, I haven't really thought of it yet, but as we think of building it into our curriculum, I wouldn't know where it would live. Like, you think of all the colleges, right? We have the College of Computer Science and Engineering, often we have those colleges, we have Social and Behavioral Sciences, we have Communications, and it's ubiquitous, right? It's like, well, so is writing, I guess. And so I would almost think, and this is just coming to my head, maybe we should have something in our GE requirements that's like, everyone takes 101, and if you want to specialize, we've got to decide where that's going to go. But these are tough questions, because we know how slow and how hard it is to get things through. And I often judge a school, because I have two kids that are out of college, and the best schools would keep up on this type of thing. I would kind of equate it to technology: the ones keeping up with the latest technologies are the ones who can keep up with what's happening in the real world, right? And that's the trick. The ones that are going too slow are like, you're not quite keeping up. But yeah.
[00:23:18] Guest: And that's one of the core governance issues: who owns this stuff? And the answer is, on general-purpose technologies, it's everybody. And, you know, maybe the anthropology answer is different from the computer science answer, which is different again from the philosophy answer. But being out of the game is not an option. There are ways in which this enters sort of every component of learning and creativity and curiosity and human development. And if universities are supposed to create well-rounded citizens and workers, it's required that they master this at the baseline level across the board.
[00:23:55] Host: I want to back up a little bit and talk about your time at Pew Research, which exposed you to multiple waves of tech change. So I want to know: in your view, how does this moment in AI stack up against earlier tech revolutions?
[00:24:10] Guest: There's a familiarity to it, especially in the adoption numbers. We've looked at the very rapid spread of use of language models. For instance, we now are starting to ask questions: do you use a language model like ChatGPT or Gemini or Claude or something? More than half the public already uses them. More than half of US adults now are users of these models. And that's probably a low number, in the sense that it's being baked into familiar digital tools already. You can access it writing an email, writing a document, doing a PowerPoint, editing a picture. And people might not even be thinking, oh, I'm using AI now, but they are AI users. And so the adoption rate itself, the speed of it, is astounding. We've never seen anything quite like this. And also for its ubiquity: it's not just sort of one thing that you're clicking on a computer to go on the Internet. It's across the board in acts of daily living. But the adoption story is a familiar one. We're watching the same segments of the population become early adopters and then later adopters and things like that. The other thing that makes AI different, of course, is that it is general purpose and it's social. It's really social, even in more profound ways than social media is. I mean, at the end of the day, if this is the year of agentic AI, we're being sold intimacy with our digital tools in the way that the companies are building it. We're going to want these tools as partners, as smart services and access to information on our shoulders. There are ways in which people are socially interacting with them, having conversations with them and things. So it's not as bounded as previous technologies were. The other thing that's been so interesting to watch, and it's an endowment, if you will, from the social media era, is that there is already a sort of well established, quite broad infrastructure of resistance to just going ahead and accepting these technologies. In the dawn of social media, everybody was excited that it was going to be democratic, lowercase d. It was going to expand people's storytelling capacity, it was going to help people find new communities, it was going to break down the power that gatekeepers had on the culture, on books and movies and TV and all that sort of stuff. Well, for a while, yeah. I mean, that's still true. It's breaking down barriers and allowing people to tell their stories. But what we didn't figure out for a while is that you get all the crap of that too. You get misinformation and information warriors and folks who are using these tools as weapons to hurt other people and divide cultures. Well, that infrastructure is already in place now as sort of popular AI is rolling through the culture. So every time a new model is released, there are almost instantaneous ways that the culture of criticism and wariness and resistance kicks the tires and says, no, this is wrong, or, these tools create these mistakes, beware. And so there's this much closer-to-the-moment back and forth about whether things are worthwhile or not, and whether things are real, true advances or not. And we didn't see that in the early days of the Internet, and certainly not in the early days of social media.
[00:27:29] Host: I agree with that perspective. I want to wrap it up with this final question and look beyond the study: what emerging questions around AI and education are you hoping to explore next at the Imagining the Digital Future Center?
[00:27:45] Guest: There are just so many baseline, fundamental elements of the process of teaching and learning that are still on the table. And one of the exciting things about being at the moment of creation, the dawn of creation, is that all of the ways that have served cultures well to teach and learn and grow people are now up for grabs. And there are ways in which they can break good or break bad in these circumstances. So a couple of things we'll be looking at: basically, how do we measure the effectiveness of teaching and learning in this environment? You know, a testing regime that has served our citizens incredibly well for a long time, about who's mastered what and how good they are at it and how they compare with other people who have taken the same test, that's all up for grabs now, in an environment where show your work is the new test, and in an environment where, you know, the tools can do pretty darn convincing imitations of human beings. So I think mastery and its implications, because I think mastery still has a place. It's the ground-level truth against which you assess the performance of these tools. If I know stuff, I can know whether hallucinations are occurring or miscitations are occurring, or all sorts of ways in which I should be careful as I'm thinking about them. Some element of mastery survives, but it's not necessarily the same. And the literacy things are not the same. Asking questions about how people are navigating that, and thinking about and teaching that, are sort of front-level questions. The other thing is all of the social stuff that's on the table.
What are humans good for in a world in which machine tools, machine intelligence, surpass some fundamental elements of human intelligence? It's an absolutely here-and-now practical question about how you are going to get employed and get paid for your livelihood and take care of yourself and other members of your family. But then there's this larger question: so many Americans in particular tie their sense of identity, their sense of purpose, their sense of meaning to their work. And so when that stuff potentially comes off the table for significant chunks of the population, how does that play out? I mean, we've seen such distressing news about the thing that's called deaths of despair, about how, as manufacturing got hollowed out, particularly in the Midwest, a lot of people started dying for a lot of reasons. Suicide was part of it, addiction was part of it, and just general loss of purpose and anomie was part of it. Well, we're now putting that process on an accelerated basis. And how do we work as a culture and survive as a bunch of civilizations in an environment where the most fundamental stuff about what it means to be human, and how to think about being human, is being scrambled?
[00:30:44] Host: Those are great questions to pose. And I think that's a million-dollar question that no one can really answer yet. It was really great having you on the show, Lee. I will put links to your report and your profile in the show notes. So thank you for being on.
[00:30:59] Guest: Thank you. Wonderful to be with you, Jeff.
[00:31:13] Host: As we wrap up this episode, remember, EdTech Connect is your trusted companion on your journey to enhance education through technology. Whether you're looking to spark student engagement, refine edtech implementation strategies, or stay ahead of the curve in emerging technologies, EdTech Connect brings you the insights you need. Be sure to subscribe on your favorite podcast platform so you never miss an inspiring and informative episode. And while you're there, please leave us a review. Your feedback fuels us to keep bringing you valuable content. For even more resources and connections, head over to edtechconnect.com, your hub for edtech reviews, trends, and solutions. Until next time, thanks for tuning in.