AI, Technology and the Human Connection with Jenae Cohn

EdTech Connect | Episode 20 | January 31, 2025 | 00:30:07

Show Notes

In this conversation, Dr. Jenae Cohn discusses her extensive experience at the intersection of higher education and technology, focusing on the challenges and opportunities presented by digital reading and generative AI. 

She emphasizes the importance of adapting teaching methods to meet the needs of students who engage with texts in diverse ways, particularly in the context of AI's growing presence in education. 

Dr. Cohn also addresses ethical concerns, equity in access to AI tools, and the necessity of maintaining human connections in an increasingly AI-driven educational landscape. She concludes with advice for university leaders on adopting AI thoughtfully and critically.

Takeaways

Equity in access to AI tools is a significant challenge for institutions.

Chapters

00:00 Introduction to Dr. Jenae Cohn

03:27 Generative AI in Education: Opportunities and Challenges

08:41 Integrating AI into the Classroom

12:17 Faculty Resistance and Acceptance of AI

14:18 Ethical Considerations of AI in Education

17:26 Equity and Access to AI Tools

20:54 Maintaining the Human Connection

26:31 Advice for University Leaders on AI Adoption

 

Recommended Reading:

https://refusinggenai.wordpress.com/ 

 

Find Dr. Jenae Cohn here:

LinkedIn

https://www.linkedin.com/in/jenae-cohn/

UC Berkeley

https://www.berkeley.edu/ 

And her books:

Skim, Dive, Surface: Teaching Digital Reading

https://www.amazon.com/Skim-Dive-Surface-Teaching-Education/dp/1952271045/

Design for Learning: User Experience in Online Teaching and Learning

https://www.amazon.com/Design-Learning-Experience-Online-Teaching-ebook/dp/B0C9261296/

 

And find EdTech Connect here:

Web: https://edtechconnect.com/


Episode Transcript

[00:00:00] Guest: I feel like the biggest one that I see is this kind of push to create AI-generated textbooks. I don't know what problem that's aiming to solve. We have no shortage of great textbook content out there. That is something faculty are really well trained to do. The issue is not the capacity to generate the content; the issue is the capacity to help students engage with that content. And I'm not really convinced that a lot of LLMs can do that in a human-centric way.

[00:00:28] Host: Welcome to the EdTech Connect podcast, your source for exploring the cutting-edge world of educational technology. I'm your host, Jeff Dillon, and I'm excited to bring you insights and inspiration from the brightest minds and innovators shaping the future of education. We'll dive into conversations with leading experts, educators, and solution providers who are transforming the learning landscape. Be sure to subscribe and leave a review on your favorite podcast platform so you don't miss an episode. So sit back, relax, and let's dive in.

[00:01:04] Host: Well, today we have Dr. Jenae Cohn. She's a renowned thought leader at the intersection of higher education and technology, currently serving as the Executive Director of the Center for Teaching and Learning at the University of California, Berkeley. With a PhD in English from UC Davis, Jenae has over a decade of experience crafting innovative, technology-driven learning experiences that prioritize student engagement and digital fluency. Her work focuses on blending the best practices of online and hybrid education with cutting-edge tools, ensuring that institutions remain forward-thinking in the digital age. Dr. Cohn is also an author, advocating for thoughtful integration of AI in academia to enhance rather than replace the human elements of education. She brings a unique expertise in instructional design, faculty development, and the ethical dimensions of educational technology. And her books: she recently released another one. Her first was Skim, Dive, Surface: Teaching Digital Reading in Higher Education, and most recently she co-authored Design for Learning: User Experience in Online Teaching and Learning. I haven't read the second one yet, Jenae, but welcome, welcome to the show.

[00:02:23] Guest: Thank you, Jeff. It's nice to be here. I appreciate the invitation.

[00:02:27] Host: Your book Skim, Dive, Surface, I did read that when it first came out, I think four years ago or so. And I have to tell you, I used to work with Jenae at CSU Sacramento. You have quite a background, too; I think you've worked at five or six different schools. But when you were at Sacramento, I think you had just started, and you asked me what I was reading. And I said, well, I'm listening to an Audible book right now, and I kind of felt like it's not really reading. So I go, is that really reading? And you said, that's reading. Like, why wouldn't that be reading? You were very adamant about it, and it just clicked for me. I thought that was such a cool answer that I felt like, yeah, I'm reading. That was a few years ago. So thank you for that insight.

[00:03:10] Guest: What a fun memory. I have no memory of saying that.

[00:03:13] Host: I'm sure you wouldn't remember that, but...
[00:03:15] Guest: It's wonderful, because there are all these debates happening on social media right now, still, about how e-reading is not reading or how audiobooks aren't reading, and they're just upsetting. I mean, how ableist is all of that?

[00:03:29] Host: So I think digital technology has really transformed how we consume information and engage with text. What inspired you to explore these challenges and opportunities of digital reading? And why do you think it's so important to tackle today?

[00:03:44] Guest: Thank you for asking that. So I'll say my motivation for writing my first book came directly out of my teaching experiences. My background as an instructor is in teaching first-year composition, writing instruction. And I remember in my early years of teaching feeling so unprepared to tackle the diverse variety of students' lived experiences with reading and writing. I had this awareness that how I read and wrote was really different from how my students were reading and writing, and I could feel that gap just so tremendously. And one of the biggest components of that gap, I felt, was just that students were reading and writing on screen almost exclusively. Even if they stated preferences for reading paper books or reading printed handouts, most of the time the reality was they were trying to catch up on their homework on their phone, on the bus, on the way to campus, or they were in between classes sitting on their laptops in the hallways trying to get things done. And I just realized that the strategies I was teaching were really heavily rooted in paper-based modalities, and they just were not up to the task of thinking about the infrastructures and affordances of reading on screen. So when I wrote my first book, this was pre-pandemic, pre-GenAI, I already sort of noticed that we just weren't being attentive to these lived environments. And I feel like now, four years after that book was written, we've been through this giant emergency remote instruction moment in the pandemic, and we're now having a huge reckoning with reading in the age of GenAI, as large language models can so easily summarize readings on screen. We still need to be responsive and adaptive, not just to the substance or content of what's in our readings, but, again, to how students are accessing and engaging with them and how that changes our approaches.

[00:05:36] Host: My quick little snippet on that is that I'm a big Kindle fan now. I love that I can change the background; I love doing white on black, you know, having a black background. I also love sometimes seeing what thousands of other people have highlighted in a text. And I also love their Whispersync feature, you know, being able to listen to a book and then pick it up at home. I mean, there are just so many little things like that. But I want to dive into something kind of early here, because it's at the core of what I always talk about and what everyone seems to be interested in, and that's the potential of generative AI. What generative AI tools do you think colleges need to be looking at to support teaching and learning?
[00:06:15] Guest: Yeah, so I'm going to flip your question around a little bit here and, I think, suggest a different question that will ultimately answer your initial question, which is just that I would really love to see colleges and universities get clearer about the specific problems they're trying to solve with teaching and learning at this point. It does feel like a lot of the hype around GenAI is a solution in search of a problem, that a lot of the conversations are trying to shoehorn in: okay, so GenAI can do X, Y, and Z things. It can generate really authoritative-sounding text. It can create images. It can really persuasively produce certain genres: memos, letters of recommendation, lab reports. I'm just giving a few random examples that have come up recently. And these are all wonderful capabilities. But again, what is the problem that these solutions are aiming to solve? I'll take the really simple example of letters of recommendation. Faculty get inundated with requests to write letters. So a problem is that faculty just don't have time to write all of these. And in that sense, sure, an LLM is a very easy solution to that particular problem. You can create templates for a very defined genre that could theoretically save people time and energy. That's maybe a decent problem-solution match. But there are a number of GenAI solutions that, to me, again, feel like they are solutions without real problems. This might get me in trouble, but I'll say it anyway. I feel like the biggest one that I see is this kind of push to create AI-generated textbooks, and I don't know what problem that's aiming to solve. We have no shortage of great textbook content out there. That is something faculty are really well trained to do. The issue is not the capacity to generate the content; the issue is the capacity to help students engage with that content. And I'm not really convinced that a lot of LLMs can do that in a human-centric way, because at the core of engagement is socialization: the capacity to talk to people, engage with each other, and get different ideas. So again, I'm giving you kind of a circuitous answer to your question.

[00:08:18] Host: That's great, I love that perspective. Is quantity better than quality? We don't need more of that; we need the better, engaging content, like you said. One thing kind of peripherally related to this: right after ChatGPT came out, and I discovered it and was just using it for everything and testing it out, we heard that higher ed's reaction was like, shut the door, we can't be doing this. And I heard some faculty having ideas around, and tell me if this is prevalent, if you've heard of these types of solutions in the classroom, where instead of banning it, which I know some professors like to do, no, you can't use it in this class, one professor said, we're turning in two essays in this class: one where I want you to use AI, and one where you can't. I thought that was a great solution, one way. I'm sure there are still issues you've got to figure out with that, but what are some ways we can embrace it, but accept that reluctance, in a classroom setting with essays?

[00:09:17] Guest: Yeah. So I think there are a few angles that I would love to see faculty approach here. And I want to give credit where credit's due.
I saw this approach in a resource out of Plymouth State University's CoLab, which is kind of their teaching and learning center, where they put together this repository of GenAI activities. And again, I'm not answering your question directly, but I promise I'll get there. They had this great system where they tag the activities with teaching for AI, teaching with AI, and teaching against AI. And I love that idea of categorizing activities in that way, because I could see one class doing all three of those things. Right? Maybe there's a component where you're just teaching about it, having a modular section really committed to AI literacy development and understanding: how do these tools work? How are they already integrated into core teaching and learning, well, not even teaching and learning ecosystems, but compositional ecosystems, search ecosystems? Just to make that really concrete: most students are using Google Docs, and Google Gemini is already there. And there have been AI capabilities in Google Docs for a really long time, even prior to, I think, mass consumer access to large language models. So there's that component of teaching about where GenAI exists, how it works, how large language models are built, so that students feel like it's not just a black box or an authoritative knowledge engine, but something that is engineered and has very specific functionality. And then there's the process of teaching for and against, right? What kinds of activities can instructors imagine where GenAI, again, is useful? I call it a genre machine: it does a great job of replicating patterns and predicting the output that comes from those patterns. So there's a really great teaching moment where people can think about helping students navigate generic pieces of writing, and having exercises where they critique that GenAI output: how does it conform to the genre expectations, how does it resist or seem problematic against the genre conventions? And then there are moments, I think about writing classes in particular, where part of the value of writing as a discipline is that it's meant for humans to read and consume. And I think it's obvious sometimes when GenAI output is operating in a kind of predictive vacuum without a real human audience or input. So there's an interesting exercise, right, where you could have students again critique or think about: who is the output of GenAI even really for? What does it mean for GenAI to function as an engine for communication, if really what it's doing is just operating predictively? So again, I maybe talked around your question a little bit, but I think that navigating the possibilities is part of what I hope faculty continue to wrestle with.

[00:12:10] Host: Your idea to teach both sides makes a lot of sense, and it seems theoretical, like a great idea, but you're at UC Berkeley; maybe you guys are more progressive, accepting. I don't know how it is in your environment. Are faculty really equipped? How easily will they accept that challenge? I mean, that is the challenge, I guess: changing faculty ways, right? How do we do that? And are you seeing that happening, where more faculty will embrace a strategy like that?

[00:12:40] Guest: I think there's a huge range of perspectives, as with any new tool adoption, so I don't think this is actually unique to GenAI. Faculty populations are incredibly diverse.
There's a huge range of backgrounds, experiences, and comfort levels with technology integration at large. So you've got a camp of people who are so well prepared to navigate the nuances of this conversation. A lot of faculty at Berkeley I'm humbled by; they're more prepared than I am, insofar as they're data science experts who've been studying machine learning and large language models for their whole careers. These people are beyond ready. And there are some folks in the humanities who've looked at the history of technologies, and they too are prepared in an incredibly important way. But of course, you have a swath of folks who just look at GenAI in a simplistic, binary way, too. Right? It's a plagiarism machine. Some will say it's really just a way to output total slop. And I think that when we think about our programming from a faculty development perspective, we have to do our best to provide guidance that helps folks, even at different capacities, make the best decisions for their particular context, mindset, and the place they are learning from. And that's not easy to do. I'm always worried we're missing the mark on that, but I do think it's a matter of just trying to provide really clear, straightforward, plain-language information: what are your options? How does that align with the values that are meaningful to you in your classroom?

[00:14:15] Host: So I noticed in your LinkedIn profile you had referenced an article, and I read it; it's called "Refusing Generative AI in Writing Studies," and it had really great counterpoints; not that we need to completely refuse it, but it was very nuanced in how we need to approach that. Can you talk a little bit about the ethical concerns that were mentioned in that writing studies piece? I'll put the link in the show notes, too. It's a really interesting article.

[00:14:42] Guest: It's a great piece. Big kudos to the authors, Jennifer Sano-Franchini, Megan McIntyre, and Maggie Fernandes, who are really esteemed scholars in the field of writing studies and are really well positioned to navigate this nuanced argument. So yeah, they address several sets of ethical concerns. One is about the concerns of data privacy and surveillance. I think that's one that folks get kind of hand-wavy about, and it's worth taking a beat to consider. Most GenAI systems are privately owned and operated. They manage and steward data according to their own sets of privacy practices, and those data stewardship practices may not be in alignment with university standards for cybersecurity and data privacy. A lot of these LLMs (I'm thinking of OpenAI, which has created edu versions of its ChatGPT license, and Google's Gemini has similarly done so) attempt to be at least minimally FERPA compliant. But even within those edu ecosystems, there still remains this black box of: where does the data go? Who's using this data to train future models? And that is an ethical question for students and instructors to consider: how does their intellectual labor and output get monetized by tech companies? This has been a problem with edtech, as you probably know, Jeff, for a super duper long time. And I think GenAI just continues to bring that to the fore, especially as instructors consider whether to make usage of certain LLMs mandatory. They have to grapple with the fact that if they're making it mandatory, they don't know where the student's data is going at the end of the day.
And a big part of that piece is recognizing that instructors and students have a right to question and know how their data is being used. The other side of that, of course, the other ethical conundrum in this equation, is that a lot of folks who are concerned with GenAI's capacity to enable plagiarism want students to document their writing processes and submit evidence of that documentation. So rather than just submitting a final draft of the paper, submit, you know, evidence of your chat transcript with your LLM, or submit evidence of your drafting in Google Docs. Well, that's an invasion of privacy, too, right? Because it's providing access to a very private learning process, and students who know it's getting surveilled may approach the writing process very differently than if they were actually able to just do this work in private. So those are two aspects of that. There's more, but I'll stop there so I'm not talking for too long.

[00:17:18] Host: I just wanted your take on it, because I thought it was a great piece, so I wanted to get that in there. And it's something I think the listeners should check out if they're interested. When generative AI first came out a couple of years ago, or became available to the masses, I always thought, wow, this is an equalizer. No one should be creating C-level work anymore; I'm assuming everyone's going to adopt it. But I've talked to other people, in DEI kinds of roles, who say this is not the case: some students can't even keep up with the basics. They don't have the tool, they don't have the computer. How do institutions ensure equitable access to all these new tools for students?

[00:17:54] Guest: Oh my gosh. It's a great question. I don't have a super straightforward answer, because I think it's still so much in flux. I mean, already we're seeing a lot of, unfortunately, inequitable access to the tools, right? So there are multiple dimensions, I think, to equity here. One is just cost, right? To use the most premium versions of most LLMs, you have to pay a monthly subscription fee. Some students will do that, some students won't. So that's one piece. And a lot of universities have not been able to negotiate very sustainable enterprise-wide licenses for the premium versions of these tools, because they're really expensive. And newsflash: a lot of universities don't have a ton of cash lying around to add new enterprise license agreements to their already stuffed portfolios. So that's one challenge, I think, to the access and equity issues: affordability. The other challenge, I do think, is still a digital literacy gap for students. Some students just have the skills and knowledge to use GenAI in more sophisticated ways than others, because they've had prior knowledge and experiences with coding, programming, or understanding how machine learning algorithms work. So there's, I think, an access divide there. I mean, I know the field of prompt engineering in some ways was a big flash in the pan; some argue we're already beyond prompt engineering and have to think more critically about how we program GenAI to most successfully meet our needs. But the reality is still that some students just know how to query and engage, get better output, and assess that output better than other students. The other lingering equity piece I'm still considering, though, is the quality of the output itself.
GenAI speaks in a very particular kind of voice. It's an authoritative voice. I'll say, as a white woman, it's a white voice. It's not a voice that represents the diversity of how people speak and think. It does not reflect multilingualism. It does not reflect cultural engagements that we should value in writing.

[00:19:58] Host: Does it magnify our biases? Right?

[00:20:01] Guest: Totally.

[00:20:01] Host: Kind of a big thought.

[00:20:02] Guest: It perpetuates tons of stereotypes. I mean, there's been a lot of discourse; this is outside of education, but Meta attempted to create these GenAI profiles that were meant to be kind of like user-experience personas, and they were fraught with just tremendous stereotypes. One of them was supposed to be a Black queer woman, which many critics called an example of digital blackface. So these are all, I think, really great examples of how GenAI might equalize some things and experiences, but really misses the mark on a lot of others. So it's just worth our while to keep being critical about what we're asking students to do and how that aligns or diverges from what's possible with these tools that students are using, and faculty are using, for that matter.

[00:20:52] Host: Yeah. So your experience, a lot of what you've done over the last few years, is focused on that human connection. How do we keep focusing on the human connection while we keep leveraging AI for efficiency and personalization and things like that? That seems like a challenge.

[00:21:12] Guest: Yeah, it definitely is a challenge. I wish I had a straightforward answer; I don't. I think the biggest thing we can do is just encourage continued community building in our workplaces and in our classrooms, and engender a culture of trust and openness. Well, let me take a step back and say what I think faculty can do, what even staff can do, is just acknowledge and be really clear-eyed about the reality of the landscape, which is that there's tons of data. Great surveys have come out from many consulting agencies and from institutional research at various universities; I know at Berkeley, our own institutional research office has done excellent surveys of students. Students are using these tools. That's the reality. So what does it mean to identify where tool usage is appropriate, where it crosses a line, and what things really need or benefit from student-to-teacher human interaction? And part of that, I think, can just be a really open conversation, where the instructor can guide a facilitated discussion. One favorite activity I recommend a lot is creating class norms at the start of the new semester or the start of the new quarter. Talking about GenAI can be part of those class norms: hey, students, what do you think are the best uses of GenAI for your learning? Just see what they say and turn it into a dialogue.

[00:22:34] Host: Bring it up right at the beginning. Yeah, most of the governance and the guidelines and policies seem to be pushed all the way down to the professor level, because there's not a whole lot at universities, so that's probably good advice. And moving to a different topic here, kind of related to everything we've talked about, which has pretty much been the classroom so far: do you see a risk of any over-reliance on AI in decision-making processes within education, in any realm? I guess you could start at grading and go all the way up to administrative decisions.
What do you think about AI in those types of situations?

[00:23:11] Guest: I think it's really contextually dependent. I'll give a couple of examples. I could see GenAI being extremely useful for big administrative decisions around, say, making instructional materials accessible. That feels like a really powerful space, because there are known ways that we can remediate documents, for example, or remediate images and videos, that GenAI could really help us make some clear choices about. So there are some really routinized spaces for GenAI. I'm even thinking of things like standard procedures for doing IT tool procurements, which could be much more efficiently standardized, perhaps, with some GenAI tools and engines, because again, they're so routinized; they're such standard things you're looking for. There's real potential there, as long as there's some human oversight. I think these decisions get dicier around things like grading, where it becomes this existential question of what's even the value of learning. Learning is a human enterprise, and we've done a lot of routinized automation of learning tasks and grading tasks over the years. There are automatic grading tools that have been around for grading quizzes and exams for decades, so that's not totally new. But I do think over-reliance on these tools again kind of undermines the enterprise. If knowledge is meant to be shared between people and is meant to build among people, it almost makes the learning seem inconsequential. So I just think that the context here is critical. I'm all about improving efficiency where it's possible, but we really have to be realistic about where the routinization undermines the ultimate goal of the work.

[00:24:56] Host: Yeah, I can see that, because I've talked to a lot of these AI grading companies, and it's amazing what they can do. And they're saying, well, yeah, the faculty still has to go in and check the box. You know, like, we're going to provide the recommendation, or the proposed grade based on the structure: here's the grade. And I could see in the beginning, like, oh yeah, the faculty will check. And then after a while it might be a little less attention every session, every semester, you know. I could see kind of a slippery slope there.

[00:25:26] Guest: Totally. And I've been reading a lot about the potential influences of GenAI on the field of instructional design in particular, because it's a field that is kind of vulnerable to, I think, a lot of this sort of GenAI overreach. It's very easy for GenAI to output a lesson plan, learning outcomes, content for learning modules, which is largely the domain of faculty and instructional designers. And I've played around with some of the output, and a lot of it's pretty good. I mean, right?

[00:25:54] Host: It's a good starting point, right? Gets you going pretty quick.

[00:25:57] Guest: Totally. But again, it's the kind of thing where it's not always obvious where GenAI plays in, but it's often, again, so generic, and if you know it's being used, you know there's no human touch there. Again, it makes the learning experience feel sort of pointless. You start to have these robots talking to robots, and it feels like an endlessly absurd enterprise.

[00:26:18] Host: Yeah, we're going to have my robot being graded by your robot someday while I'm out riding my bike, you know.
[00:26:25] Guest: Exactly. Like, let's just all ride bikes then. Why are we having our robots do this professing? What's the point?

[00:26:31] Host: Well, I guess to wrap it up, I'm going to ask you one final question: what advice would you give to university leaders who are hesitant about adopting AI? And you're the right person to ask, I think, for the academic side. What's your advice?

[00:26:43] Guest: Oh my gosh, I'm hesitating. I knew you were going to ask this question, and I still feel a little like, wait, what is my advice? Because I'm sort of struggling still with what the best way is to consider adopting GenAI. I think the biggest piece of advice I would offer is just to have a continued learning mindset. It's a generic piece of advice, but I think it is critical. It's really easy in higher ed, I should say, academics are really well trained to be very skeptical, to ask a lot of questions, and to be people who problematize everything. And when you're in the position of constantly problematizing, sometimes it's hard to take a step back and think about: what don't I know here? What don't I understand? What do I still need to learn that is totally outside of my vantage point or heuristic for questioning? So I think that, as leaders, we can continue to cultivate an inquiry-based mindset and look at what's happening from a perspective of continued curiosity, of a continued willingness to ask more questions, and to be willing to sit with those questions and be patient, too. There's a lot of false urgency to move and respond and adopt. And I know we don't want to resist change just for the sake of resisting change, but there's a lot to learn, and you can't learn unless you give yourself space and time to learn. So I think that's what I would advise.

[00:28:18] Host: That's good advice, I think. A thoughtful approach. There's so much to learn.

[00:28:22] Guest: There are way worse consequences if we adopt things uncritically than if we take our time and are a little bit slow. But I mean, I guess I don't believe in moving fast and breaking things like a lot of our tech entrepreneurs, and that's, you know, a cultural divide that I recognize comes from my own biases and experiences in the places I've worked.

[00:28:39] Host: I think we could do a whole other podcast about the contrast between the speed of technology and the slow pace of higher ed.

[00:28:46] Guest: Oh, totally.

[00:28:47] Host: They're colliding right now.

[00:28:48] Guest: Yes. Yes.

[00:28:49] Host: Well, thank you for being on the show. It was great talking to you, Jenae. I'll put links to what Jenae mentioned in the show notes.

[00:28:56] Guest: Likewise. Thank you so much, Jeff. And thank you, everyone, for listening.

[00:28:59] Host: Bye-bye.

[00:29:03] Host: As we wrap up this episode, remember EdTech Connect is your trusted companion on your journey to enhance education through technology. Whether you're looking to spark student engagement, refine edtech implementation strategies, or stay ahead of the curve in emerging technologies, EdTech Connect brings you the insights you need. Be sure to subscribe on your favorite podcast platform so you never miss an inspiring and informative episode. And while you're there, please leave us a review. Your feedback fuels us to keep bringing you valuable content.
For even more resources and connections, head over to edtechconnect.com, your hub for edtech reviews, trends and solutions. Until next time, thanks for tuning in.
