Episode Transcript
[00:00:00] Speaker A: So it's actually these types of unexpected outcomes and results that help us validate, and sometimes invalidate, our assumptions and essentially help drive our own product development. Because we're building a tool for instructors, and whatever feedback they give us, we build.
[00:00:25] Speaker B: Welcome to the EdTech Connect podcast, your source for exploring the cutting-edge world of educational technology. I'm your host, Jeff Dillon, and I'm excited to bring you insights and inspiration from the brightest minds and innovators shaping the future of education. We'll dive into conversations with leading experts, educators, and solution providers who are transforming the learning landscape. Be sure to subscribe and leave a review on your favorite podcast platform so you don't miss an episode. So sit back, relax, and let's dive in.
So today I'm excited to introduce to you a visionary entrepreneur who is transforming the way education tackles group projects and grading: Chris Du. Chris founded Insightful in 2019 with a mission to revolutionize group project management in higher education. Today, Insightful is making waves across major universities in Canada and the US, including Arizona State University and Michigan State University. Chris's talent for innovation shined through when he pitched at the EDUCAUSE Startup Alley competition and went on to win the prestigious Reimagine Education Gold award. But his journey didn't stop there. In early 2023, Chris saw a game-changing opportunity with the rise of GPT-3.5 and 4, leading him to launch Timely Grader in January of 2024. This cutting-edge tool is set to redefine the feedback process for students and streamline grading for instructors. Since its launch, Timely Grader has already landed major contracts and partnerships with renowned institutions like the University of Illinois Urbana-Champaign, Calbright College, and McGill University. And if that wasn't impressive enough, Chris will also be making his mark at OLC Accelerate and EDUCAUSE this year. So Chris Du is truly at the forefront of educational technology, driving change with a passion that's reshaping the future of learning. So let's dive into this journey with his insights and see what's next for this trailblazing founder. Welcome, Chris.
[00:02:38] Speaker A: Thank you so much, Jeff.
That's the best intro I've heard in my five years.
[00:02:46] Speaker B: You earned it.
[00:02:47] Speaker A: I appreciate it.
I'll have to do even better to kind of match your expectations.
[00:02:53] Speaker B: Now, let's see. Well, talk a little bit about, maybe start with, Insightful, and what inspired you to move from Insightful to Timely Grader? What was that initial problem you saw that you were trying to solve?
[00:03:06] Speaker A: Yeah, for sure. So I actually graduated in 2018 from UBC, and at the time it was cool, I'll be honest, to start a startup, because we wanted to do our own thing. Obviously we didn't know how hard this journey was going to be. And at the start we were very naive. We said, you know, we want to tackle group projects, because as a student myself, group projects were one of those things: if you had a good team, awesome; if you had a bad team, it was a terrible experience. So we thought we could tackle it through technology, and we've accomplished some of that, especially with capstone projects and larger courses and larger projects. But it was definitely one of the more challenging problems that we faced. Now, one of the plus sides of working on Insightful with instructors and professors is that we learned a lot about their workflow. We learned a lot about some of the challenges that we weren't ready to solve then, because with formative learning and feedback, we understood how important it was, but there just wasn't a way to solve it. And then, I think it was late 2022, early 2023, I actually went back to China just to visit my family. So I had a lot of time on my hands, and obviously I was looking at GPT-3.5 and 4, and I did a lot of testing and experiments. I got some assignments from instructors that I work with, and I was very grateful for them to share some documents with me and their assignments. I ran some tests. They're like, this is amazing. So I expanded that to more instructors. I started to sell it like vaporware, and people were like, oh yeah, if you could solve scalability with feedback, it's gonna be huge.
[00:04:54] Speaker B: So they were impressed with the feedback, how quickly the feedback could come in? Or what did they really like?
[00:05:01] Speaker A: Yeah, so this is an experiment I ran. It was, you know, the classic sell-the-vaporware approach, where you're doing all the things manually. Right. I took your assignments, I took your submissions, and I was literally just going to ChatGPT myself and putting the papers in, doing every single one, one by one. It took a long time, but for the instructor, it came back the next day, because again, I had a lot of time on my hands. The next day they were like, oh my God, you wrote all of this? I'm like, no, no, the AI did. And they're like, the AI can do this? What? So that's what they were excited about: the level of feedback, the quality and the amount of feedback, and how fast it was for them. To be able to come up with feedback for 10 or 20 papers would have taken them a day at that level, and for me it just took maybe an hour or so. That's what they were surprised by, and that's why they said, Chris, make this a reality. And, you know, I went ahead and did it.
[00:05:56] Speaker B: Yeah, I think we all have that memory of the first time we experienced that, had that moment of, oh my gosh, look what this can do. And you're in the space where you can introduce that to these faculty. Must have been really fun. Walk me through how Timely Grader works, just on a basic level.
[00:06:15] Speaker A: Yeah, it's very simple. Essentially, instructors need to tell the AI what the assignment is about. There are a lot of parameters they can set, a lot of customization, but you're basically working with something almost like a TA. That's how I would explain it in a very simple way. Imagine there's a teaching assistant or some sort of grading assistant. You have to tell them what the assignment is about. You upload that document, the information. Then the next step is you do some calibration. Right. Let's do some testing, make sure it's actually good. You can't expect a first-time grader to be perfect all of a sudden. And they're not going to be perfect. No matter how much calibration you do, there's always going to be that five or ten percent of inaccuracy. Now, after the calibration, you let them run. You give them the papers to grade, they grade it, and you review it. If it's all good, it goes back to the student. Those are pretty much the simple steps with our platform. We wanted to keep it very simple. We wanted to essentially mimic that experience, because it's a workflow instructors are already familiar with.
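To make that workflow concrete, here is a minimal Python sketch of the tell-the-AI, calibrate, grade, review loop Chris describes. The class and method names are hypothetical illustrations of the idea, not Timely Grader's actual API.

    from dataclasses import dataclass

    @dataclass
    class AssignmentGrader:
        brief: str              # what the assignment is about
        rubric: dict[str, str]  # criterion name -> description
        calibrated: bool = False

        def grade(self, paper: str) -> dict[str, str]:
            # Placeholder for the model call: one score and draft comment per criterion.
            return {criterion: "score + draft feedback" for criterion in self.rubric}

        def calibrate(self, samples: list[tuple[str, dict]]) -> None:
            # Grade papers the instructor has already marked and compare the two;
            # in practice you tune prompts until they roughly agree, accepting
            # that some residual inaccuracy always remains.
            for paper, instructor_grade in samples:
                print("AI draft:", self.grade(paper), "| instructor:", instructor_grade)
            self.calibrated = True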
[00:07:16] Speaker B: Wow, I get it. So I'm just thinking about the benefits this could offer in these huge classes, where there are a lot of research assistants or grad assistants grading a bunch of papers. Is that one of the things this could really help with, feedback and grading for large classes?
[00:07:35] Speaker A: I think there's two parts to it. I think one is if you're in a large class, it's very difficult to get good feedback.
I've been in a lot of gen ed courses. Whether it's business, economics, or anything like that, generally speaking, you're getting feedback once, right? And that's usually after you get the grade. And to be honest, most times you don't really care as a student. Right. Because you've got the grade already. Sure, if there was another similar assignment down the road, then you could probably apply some of that feedback. But a lot of times I get a paper, I get the feedback, whatever, move on to the next assignment. Right. Obviously with these larger courses you've got teaching assistants, you have tutorials, and a lot of students don't even go and ask for feedback. And that's where we see a lot of interest. Now, we also do support some of the smaller classes, because regardless of how many students you have, providing feedback is always time consuming. Right. You might have one TA for a larger course, but that TA is going to be relied upon to provide feedback to like 50 students, and that's just very, very challenging to do.
[00:08:39] Speaker B: So.
[00:08:39] Speaker A: Yeah, there's definitely a use case for larger courses.
[00:08:42] Speaker B: So education can be very traditional, educators especially, in higher ed. You've probably run into that, or maybe you've seen it. How do you address concerns about AI grading potentially removing the human element from education?
[00:09:00] Speaker A: I think there are, well, three or four things that we have to talk about. One is trust. The biggest issue is with instructors who are more traditional or more risk averse, and there are a lot of them; higher education is known for being risk averse. And I've gone through so many conversations where that has happened. Building trust is very important. Right. They need to see it to believe it. And what we always want to do is say, hey, just give us some assignments. We'll use AI to come up with submissions, so you don't have to worry about student data or anything like that. And here's the feedback. You like it or not. If you don't like it, all right, we lose your business, and that's totally okay. But if you like it, then that kind of starts the next conversation. Another part is the fear that we lose that human connection in grading, right? Or in the education process. For us as a company, by design, we wanted to embed a human in the loop in every step, in every process. I am a firm believer in that. A lot of our mentors are educators with 30, 40 years of experience, and they all mention you just can't replace that human element. Assistance with AI is awesome; it makes things more efficient. But there's always going to be a human in the loop. There's always going to be somebody who validates. It might take a little longer, but that's better than the AI doing auto grading, where you feed the system, your grading is done, and you don't have to look at it. Especially with subjective assignments and assignments that are more creative, you definitely need to review that. And we've built tools to help the instructor validate, versus just automating that entire process. That's not something that we believe in.
[00:10:48] Speaker B: So it sounds like, and when I talk to other AI companies about different use cases I believe this myself, that AI is more of a starting point than an ending point. Tell me if I'm hearing this right: you've built in mechanisms to really require the instructor to engage at certain points, to review before anything happens. Is that kind of how it works?
[00:11:11] Speaker A: Exactly. So what we basically do is, the AI grading suggestions provide first-pass feedback. In most cases, the first-pass feedback is pretty good. On average, we provide eight times the feedback to students. Now, there are always cases where there are hallucinations, and every time I onboard an instructor I say, you'd better check it. You have to click the button. If you don't click the button, we're not going to let you go to the next step. Right. There are frictions around that, right? You're adding that, embedding that human in the loop into the process. But like you said, it's the first pass. It's there for you to start; it'll help you get to the next step. But you still need to do the next step. You still need to go and check the grades, make sure the feedback is good and validated, and then you move on.
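A sketch of that click-the-button gate, under the assumption that it behaves like a simple state machine: a draft cannot reach the student until the instructor explicitly marks it reviewed. Names are illustrative, not the platform's real code.

    from enum import Enum, auto

    class ReviewState(Enum):
        DRAFTED = auto()   # AI first-pass feedback exists
        REVIEWED = auto()  # instructor clicked the review button
        RELEASED = auto()  # feedback visible to the student

    class Submission:
        def __init__(self, draft_feedback: str):
            self.feedback = draft_feedback
            self.state = ReviewState.DRAFTED

        def mark_reviewed(self) -> None:
            self.state = ReviewState.REVIEWED

        def release(self) -> None:
            # The friction Chris mentions: skipping review raises an error.
            if self.state is not ReviewState.REVIEWED:
                raise PermissionError("Instructor must review before release.")
            self.state = ReviewState.RELEASED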
[00:11:58] Speaker B: Yeah, I really like that you mentioned trust earlier, because it clicked for me in this different way.
I'm starting to trust certain LLMs more now, because it's a different kind of trust. I could see faculty maybe using this and, in the beginning, really spending a lot of time looking at what the AI outputs, and after a while they're probably going to trust it more. Maybe things might slip a little here and there. It's just something that would be funny to see.
[00:12:26] Speaker A: You know, the thing is, we make mistakes too, right? As humans. I don't BS all the time, but sometimes when I say a fact, I know it's probably not true, but I'll say it so confidently that people just believe me, right? This happens, and the AI is trained on, you know, human data. And I think we should all be aware that something could go wrong, that there are hallucinations, regardless of how much trust we put in. I think last year we just had a complete lack of trust. Obviously when it came out, it was more like fear. Right. Oh my God, it's going to replace our jobs. Oh my God, we won't need instructors in the future. But then we're kind of tempering that expectation now. Yeah, it's not as good as we thought, for some people anyway, or it's better than some people thought it was, or people just thought it was really bad. But now we're trying to temper those expectations, where people say, okay, this is going to be a tool that I'm going to use. I am still going to be needed. So there's less of that fear and a bit more trust.
[00:13:27] Speaker B: Gotcha. So faculty and students are deeply involved in their learning management systems, whether it's Canvas or Brightspace or Blackboard, you know, the big ones, and there's, you know.
[00:13:38] Speaker A: Dozens. Moodle and other ones. Yeah.
[00:13:40] Speaker B: Doesn't matter which learning management system is being used. Is Timely Grader a separate thing, or is it connected to these LMSs?
[00:13:46] Speaker A: Yeah. And, you know, for this I have to thank all my experience working on Insightful. Right. The reason we've had so much traction with Timely Grader is that we really understood the higher education landscape especially, where you can start with a self-serve platform, but you eventually want to get integrated, just because all the students are going to be on the LMS, all the instructors are going to be on the LMS, all the existing workflows are on the LMS. We need to eventually go into an integration. So we built API integrations with Canvas and D2L, and I'm working on Blackboard now, from the very start. We launched in January, and we had the integration basically from the get-go, because we realized if you don't give an institution a plan for integration, they're not even going to bother talking to you. That's actually one of the prerequisites: do you have an integration? If you don't, okay, well, build it and then come talk to us. And in most cases, they won't use the integration at first, because you have to go through security reviews, accessibility reviews, all different reviews, and a lot of red tape before that happens. But they want to make sure you have that LMS integration already on the roadmap before they even pilot. So it's definitely very important.
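As one example of what such an integration involves, here is a hedged sketch of pulling submissions through Canvas's public REST API. The endpoint and bearer-token header follow Canvas's documented API; the host, token, and IDs are placeholders, and this is not necessarily how Timely Grader's integration is built.

    import requests

    CANVAS_URL = "https://canvas.example.edu"  # your institution's Canvas host
    TOKEN = "..."                              # instructor/admin API token

    def fetch_submissions(course_id: int, assignment_id: int) -> list[dict]:
        # List every student's submission for one assignment.
        resp = requests.get(
            f"{CANVAS_URL}/api/v1/courses/{course_id}/assignments/"
            f"{assignment_id}/submissions",
            headers={"Authorization": f"Bearer {TOKEN}"},
            params={"per_page": 100},
        )
        resp.raise_for_status()
        return resp.json()  # one dict per submission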
[00:15:01] Speaker B: Okay, so they can use their current LMS, which is, I think, a great move.
So I'm thinking about this distinction I see between grading and feedback. Let's say there's an essay to be scanned and feedback needs to be provided on it. Does the faculty member have control over what criteria are being used in the feedback on that essay? Like, what is it going to spit back, the sentence structure? How is it providing feedback?
[00:15:33] Speaker A: Yeah. So what we basically do is, generally speaking, feedback happens after grading, right? You have to grade the student's paper first, find out what they're good at and what they're not good at, and then provide feedback. So that's the same train of thought that we've deployed in our AI.
We have to have a rubric, just because the rubric is so important in keeping the AI consistent. If you don't have a rubric, obviously the AI can provide feedback on a gazillion things, and we don't want that to happen. We want consistency, we want accuracy, and we want predictability in the AI results. So the instructor will need to put in the rubric, the criteria, and we will basically go ahead and provide feedback on those specific criteria, especially the ones that the student did worse on. Right. Because there's really no point: if you got, like, an A on this criterion, we don't really need to provide feedback, because you already met its goals. So the instructor can control the criteria, and there are also free parameters and a free-form prompt engineering box where they can put in "provide feedback on X, Y, and Z." There's a lot of customization for them as well. But generally speaking, we want to provide feedback on the rubric criteria that the student scored lower on, because that's the most impactful feedback.
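A small sketch of that idea: grade every rubric criterion, but generate detailed feedback only where the student scored low. The rubric contents, the 0-to-1 scoring stub, and the threshold are assumptions for illustration.

    RUBRIC = {
        "thesis": "Clear, arguable thesis statement",
        "evidence": "Claims supported by cited evidence",
        "structure": "Logical organization and flow",
    }

    def score_criterion(paper: str, criterion: str, description: str) -> float:
        # Stub for a model call constrained to a single rubric criterion;
        # imagine it returns a normalized score between 0 and 1.
        return 0.5

    def criteria_needing_feedback(paper: str, threshold: float = 0.8) -> list[str]:
        scores = {c: score_criterion(paper, c, d) for c, d in RUBRIC.items()}
        # Skip criteria the student already met; comment on the weaker ones,
        # where feedback is most impactful.
        return [c for c, s in scores.items() if s < threshold]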
[00:16:50] Speaker B: Okay, makes sense.
Can you share any unexpected outcomes or success stories from institutions using Timely Grader?
[00:17:00] Speaker A: Yeah, this is a recent story. You know, I realized, after so many years, not too many years in the grand scheme of things, but developing tools, that when you design a tool to do one thing and you give it to a user, you don't think about all the use cases, and inevitably some user will bend the platform to fit their needs. Right. It's just one of those things about product development. We did try to account for this by bringing more customizability, some free-form prompting, into the platform. So I think the closest example I have is with UIUC. Obviously we built the platform to provide grading suggestions and feedback, but what they wanted to do was provide structured feedback, so feedback where, you know, if this rating was selected, the feedback would appear. And at the start we didn't think that was a use case. Right. What was the point, when you have AI? But now that we've thought about it, now that we've talked with instructors, it just makes a lot of sense. And when they started using it, they used the free-form prompt to do it. It wasn't 100%; you know, 80% of the time it worked. But obviously we want to get to 100%. They had to do a lot of prompt engineering, but they were able to replicate what they wanted. What we want to do now is take that unexpected use case and build it as a feature, so that we can validate the result, so that we can make sure it's 100%. So it's actually these types of unexpected outcomes and results that help us validate, and sometimes invalidate, our assumptions and essentially help drive our own product development. Because we're building a tool for instructors, and whatever feedback they give us, we build, right?
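The structured-feedback pattern described here reduces, in essence, to a deterministic lookup: the AI selects a rating, and pre-written feedback appears. A tiny sketch, with illustrative criteria, ratings, and comments:

    # Maps (criterion, selected rating) -> the instructor's pre-written comment.
    STRUCTURED_FEEDBACK = {
        ("evidence", "developing"): "Add citations to support your main claims.",
        ("evidence", "proficient"): "Good sourcing; try varying your evidence types.",
        ("thesis", "developing"): "State your thesis explicitly in the opening paragraph.",
    }

    def comment_for(criterion: str, rating: str) -> str:
        # If this rating was selected, this feedback appears; no free generation.
        return STRUCTURED_FEEDBACK.get((criterion, rating), "")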
[00:18:42] Speaker B: Yeah. That's how you build a successful product, right? For the customers. Yes, for your customers. What does the onboarding process look like? If an institution is listening to this right now and they want it, how does that work?
[00:18:55] Speaker A: Yeah, so onboarding. I am pretty hands-on with onboarding. You know, obviously we are still at a pretty small scale right now, and I treasure every moment I have an opportunity to talk to an instructor who's new to Timely Grader, just because I can talk to them, get their ideas, get feedback firsthand. I always like to do a one-on-one call with the instructor, because that allows me to understand their needs, what their assignments are about, what their learning objectives are, because every instructor does something a little differently. You know, one might want a PDF, one might want a doc, one might want an Excel file. How do we build a platform to support all of that? By me talking to the instructors and onboarding them on how to use it. Generally speaking, I meet them, I show them the platform, they tell me what they're looking for, what their assignments are about, what type of feedback they want, and we just build experiments. Right? Give me some of your assignments, let me build an experiment. We upload some papers and we calibrate the system. So by the time their course launches, we know what the feedback is going to be. We've thought about that entire full-circle experience: how is the instructor going to set up the assignment? What is the rubric going to look like? How are the students going to be seeing it? And then how is that feedback going to be provided back to the students? We think about all of that before
the students basically go for a real run. So there's a lot of experimentation in onboarding. Maybe we'll automate things down the road, but for now I really enjoy talking to instructors and onboarding them, just seeing their faces: oh my God, this is what it's capable of.
[00:20:27] Speaker B: That's a great thing about a young startup: you get that attention, you get input into the product roadmap. I think it's a great time to really embrace some new tech. What trends do you see emerging in ed tech, particularly with AI, and how are you positioned to lead and adapt to that?
[00:20:47] Speaker A: Yeah, so this is something I saw as a trend, but there are already a lot of people talking about it. Essentially, I believe, in the near future, maybe in a few years, most institutions, especially the ones with a larger endowment, the ones with more money, will have essentially their own AI model that is self-hosted on, you know, the school campus, wherever it's hosted. We are already seeing this. Right. When we work with some institutions, for example Tec de Monterrey in Mexico, they have their own GPT model. And, you know, they want to keep all the data essentially in a walled garden. Right, which means the data does not leave the school. It doesn't go to OpenAI, it doesn't go to the States, it doesn't go outside of wherever that might be. It just solves so much of the IP issue, because I think schools are still very worried about content and student data leaking onto the Internet. Even though OpenAI and the others say they're not going to do that, there's always that lingering fear unless you have your own model. So I think that's going to be something in the future. And we're already positioning ourselves, building in admin roles where you basically put in your own API key and then you can start calibrating and start testing on our system, similar to how we work with OpenAI. So that way, in the future, if a school has their own model, it's as simple as copying in the key, and that's it. So that's how I see that trend.
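Because self-hosted campus models typically expose an OpenAI-compatible endpoint, the bring-your-own-key setup Chris describes can be sketched with the OpenAI Python SDK's base_url override. The URL, key, and model name here are placeholders; the point is that only the endpoint changes while the grading code stays the same.

    from openai import OpenAI

    client = OpenAI(
        base_url="https://ai.campus.example.edu/v1",  # campus-hosted gateway
        api_key="institution-issued-key",             # admin-supplied key
    )

    # Same call shape as against OpenAI; submissions never leave the walled garden.
    response = client.chat.completions.create(
        model="campus-gpt",  # whatever model the institution hosts
        messages=[{"role": "user", "content": "Grade this paper against the rubric..."}],
    )
    print(response.choices[0].message.content)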
[00:22:29] Speaker B: Yeah, that's interesting. Well, this has been a great talk, Chris. I'm going to close it out with one final, kind of open-ended question and just ask you if there's anything else you want to tell our audience, or any advice you have for budding entrepreneurs at the intersection of AI and education.
[00:22:46] Speaker A: Yeah, well, I'll maybe echo some of the podcasts I've listened to, you know, with other entrepreneurs, especially in AI. I think, even though we are still, I guess, one of the many boats in the ocean with AI and grading and feedback, the key right now is: how do we grow, and how does the capability of the platform grow as the AI models improve?
If the AI model is already really good, 100% capable of tackling the problems that you have today, then there's a risk of your tool being essentially taken over by the next better model. You want to build a tool that is more resilient, that is pushing the boundaries of the current model, that gets better as the model gets better. That way there's less of a chance that you're going to get replaced. That's how I see it. You want to build something that ChatGPT or Claude can't easily replicate.
[00:23:49] Speaker B: That's great advice. And where can anyone find you if they want more information about Timely Grader?
[00:23:57] Speaker A: Well, they can hit us up on our website. It's just timelygrader.ai, and I'm on LinkedIn, so if anybody wants to reach out, just connect with me. Happy to chat and share thoughts. Edtech is a pretty close-knit community. Everybody knows each other. We're all willing to help. That's one of the good things about education. Everybody's so nice.
[00:24:17] Speaker B: Good answer. I guess we'll have that in the show notes. Thanks, Chris. Great to have you.
[00:24:21] Speaker A: All right, thank you, Jeff. Great chatting and yeah, let's reconnect afterwards.
[00:24:25] Speaker B: Will do. Bye.
As we wrap up this episode, remember EdTech Connect is your trusted companion on your journey to enhance education through technology. Whether you're looking to spark student engagement, refine edtech implementation strategies, or stay ahead of the curve in emerging technologies, EdTech Connect brings you the insights you need. Be sure to subscribe on your favorite podcast platform so you never miss an inspiring and informative episode. And while you're there, please leave us a review. Your feedback fuels us to keep bringing you valuable content. For even more resources and connections, head over to edtechConnect.com, your hub for edtech reviews, trends, and solutions. Until next time, thanks for tuning in.