Episode Transcript
[00:00:00] Ben Tasker: So the AI between times is this really funky time period, and I'm not sure
how long it's going to last because I'm going through it just like everybody else. But we're not
really into that AI future yet, the AI future economy that everybody keeps talking about. But we're
not really out of the past yet either. And the past is like 2019, so it's not even that long ago. It's
just right there. We can still touch it. But during this rapid transformation period, it's like a fourth
revolution, so to speak, the third being the factory, the horse and buggy before that. AI is going
to have the same impact as all of those transformations. So that's why I like to call it the AI
between times. It's just probably going to happen faster than those other revolutions.
[00:00:45] Jeff Dillon: Welcome to another episode of the EdTech Connect podcast. Today's
guest has worn just about every hat in the AI space. Data scientist, educator, strategist, and
even dean of AI. Ben Tasker isn't just talking about the future of learning, he's actively building it.
Whether he's designing AI programs for 36,000 employees at a global utility or launching
university pathways that turn micro courses into college credit, Ben's work sits at the intersection
of innovation, equity, and impact.
Ben currently leads a data and AI academy that upskills and reskills more than 36,000 employees
in the public utility sector, preparing them to be AI ready and future ready. Prior to that, he
served as dean of AI at Southern New Hampshire University, launching applied AI certifications,
reaching over 225,000 learners.
Earlier in his career, he worked as a data scientist in healthcare and developed predictive AI
tools that improved patient outcomes, and has driven product development and responsible
AI initiatives.
All right, Ben, it's great to have you on the show. Thanks for being here.
[00:02:01] Ben Tasker: Thanks for having me, Jeff.
[00:02:02] Jeff Dillon: Well, let's start off and I want to talk a little bit about early in your career or
even when you were a kid, what first sparked your interest in data analytics or AI?
[00:02:13] Ben Tasker: I think like most, I'm kind of in a future career state. There's a lot of rapid
change going on in the environment around us. So to be totally upfront, I didn't plan on data, I
didn't plan on AI. I did have a very inquisitive childhood. I always liked to figure things out. I think
that eventually led me to enjoying data and then that led me into data science and AI. But I can
tell you that my first time interacting with AI, actually seeing what AI was, was when I was a
child. For some reason, my mother let me watch the first two Terminator movies. That was my
first introduction to AI, and it's kind of stuck with me ever since. And it's kind of interesting
because in those movies, the AI has a persona and a primary objective. And the AI today does
have a primary objective as well.
[00:02:59] Jeff Dillon: Wow. It's funny you brought up the movies. I wrote an article about those
a couple of months ago, about how events are matching Terminator. And what was the one with
HAL? 2001: A Space Odyssey. So, yeah, yeah, it's all coming true.
[00:03:11] Ben Tasker: It's not science fiction.
[00:03:13] Jeff Dillon: It's crazy that it's all here. Well, you speak about the AI between times,
this transitional era where AI is rapidly changing learning and work.
How do you define that? And what are the most surprising challenges you've seen in this phase?
[00:03:30] Ben Tasker: So the AI between times is this really funky time period. And I'm not sure
how long it's going to last, because I'm going through it just like everybody else. But we're not
really into that AI future yet, the AI future economy that everybody keeps talking about. But
we're not really out of the past yet either. And the past is like 2019, so it's not even that long ago.
It's just right there. We can still touch it. But during this rapid transformation period, it's like a
fourth revolution, so to speak, the third being the factory, the horse and buggy before that.
AI is going to have the same impact as all of those transformations. So that's why I like to call it
the AI between times. It's just probably going to happen faster than those other revolutions, just
because we have more technology and the Internet's here. During this rapid change period,
we're going to have AI "uh-oh" moments. And those "uh-oh" moments, we've already kind of
seen them, right? Like a company may lay off 30,000 people, but then has to hire 30,000 of
those same employees back, plus 10, so they don't really save anything. Or we might see a
deepfake of Marco Rubio, the Secretary of State in the United States, making phone calls to
foreign advisors. And we didn't really hear the impact of that, but there was probably some
impact. So in this funky period, we're going to have some AI "uh-oh" moments. There's going to
be some rapid transition to AI, it's going to get screwed up, and then, you know, there's going to
be those success stories, the 5% that are successful, the mega companies, the OpenAIs of the
world, probably. But a lot of rocky road ahead, I would say.
[00:05:06] Jeff Dillon: You bring up Marco Rubio. I do think that's one of my biggest fears, the
political risk that we could see with deepfakes. It's already out there. I remember many years
ago there was a deepfake of Nancy Pelosi, I think it was, where she appeared to stutter and she
actually didn't. That was a real easy way to mislead people. And by the time it's close enough to
the election, it's too late, you know, to dispel that. So, yeah, it's hard to keep up with all that. But
you were ahead of the curve at Southern New Hampshire University, helping shape one of the
most ambitious applied AI education initiatives in higher ed. That work didn't just touch students,
it really redefined workforce readiness at scale.
So when you transitioned from higher ed into the workforce and industry training side, what
lessons carried over? What did you learn from the higher ed experience that shaped how you
approached reskilling at scale?
[00:06:01] Ben Tasker: So we have to break down the education model a bit for this question.
First off, higher education is currently based on time.
It takes four years on average for an individual to get a bachelor's degree. It takes about two
years additional to get a master's, and it takes a little bit longer than that to get a PhD.
All of that is standardized. There are accreditation firms that look at the amount of time in the
classroom to make sure the curriculum maps to the time, that the learning objectives you're
actually reviewing map to the time. Everything's time. That's really the point. And with AI, time
kind of changes. Yes, things can still happen in a four-year time period, but they're going to
happen a lot quicker. We also have to think about the product of higher education.
It's essentially information dissemination. There's an expert that stands at the front of the
room, a professor, and they disseminate their research and all their expertise on whatever
subject they're a professor in to the classroom. And now, with AI at your fingertips that's
constantly learning, constantly trying to get better, that has one of those primary objectives to
constantly interact with you, you really need to become more adept and adapt more quickly to
the world around you. It's not just one and done anymore. And to me, I think degrees still have a
place in the world. They'll always have a place. But a degree, at least until recently, was one and
done. Upskilling and reskilling were something we kind of talked about, but it wasn't a big initiative.
So now let's go over to the workforce real quick. The workforce is like,
hey, there's this AI thing going on.
I don't know what we need to do.
Maybe we can go lean into academia. And academia is like, well, we actually kind of didn't touch
it for two years. We were kind of afraid of it. Yes, some universities might have expertise, but
there are only a few of those compared to academia as a whole.
And there are a billion people that are going to need this upskilling and reskilling. So even if we
could help you, we don't have enough room to help you. You're going to kind of have to figure
out how to do it yourself.
So how do you do that? One of the ways is that you can look at your organization. It's called
skills planning. Some organizations are skills-based already, but essentially they look at the
skills their employees currently have based upon their job roles. They have a five-year plan, and
they need to then map to that plan. So let's pretend we want to invest in AI, but maybe we don't
have an embedded AI team, so we need to upskill folks to learn AI. You can then develop a
curriculum around that. The curriculum doesn't have to be academic in this case, because there
isn't an AI team. There might be some academics involved in it, but traditionally you don't need
that. It can be project-based learning. It can be a validation of some sort. Maybe you go out and
learn a little bit and then come back to the workforce to show what you know, so to speak. But
this initiative that SNHU is doing is short-form learning. It's three two-week courses, so six
weeks total.
And individuals are constantly engaging with AI. They use AI to teach AI, so there's a chatbot in
the course that helps facilitate it. You're learning more than just prompt engineering, and at the
end you do some sort of project for your company. That project could be developing a chatbot,
creating PowerPoint templates, or, if you're on the marketing team, interacting with the
company's social media brand automatically, so some sort of automation.
So it's very engaging, it's fast. It's not based on time, it's based on what you know. There is time
attached to it, but it's significantly faster. The same course model would be 24 weeks on the
academic side. And at the end, if students want, they don't have to, but they can take the micro
credential that they earn and trade it in for credit, so they can get nine credits for that micro
credential.
So bringing this back over to the workforce: taking that experience, understanding that
organizations need an upskilling plan, understanding that individuals need to map to that plan,
and linking it all back to skills. It was really fun, and I was kind of a natural shoo-in for where I am
now. And that information is very transparent for any organization building upskilling plans.
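To make that skills-planning step concrete, here's a minimal sketch of the gap analysis Ben describes: comparing the skills employees currently have, by job role, against the skills a five-year plan calls for. The roles, skills, and the skills_gap helper are all illustrative, not any particular organization's actual system.

```python
# Minimal sketch of skills-gap planning (illustrative names throughout):
# compare the skills employees have today against a strategic plan.

current_skills = {
    "project_manager": {"scheduling", "stakeholder_comms", "backlog_grooming"},
    "data_analyst": {"sql", "dashboarding", "statistics"},
}

# Skills the hypothetical five-year plan says each role will need.
target_skills = {
    "project_manager": {"scheduling", "stakeholder_comms", "prompt_engineering"},
    "data_analyst": {"sql", "statistics", "machine_learning", "prompt_engineering"},
}

def skills_gap(current: dict[str, set[str]], target: dict[str, set[str]]) -> dict[str, set[str]]:
    """Return, per role, the target skills not yet covered today."""
    return {role: needed - current.get(role, set()) for role, needed in target.items()}

for role, gap in skills_gap(current_skills, target_skills).items():
    print(f"{role}: needs {sorted(gap) if gap else 'nothing, already aligned'}")
```

A curriculum, academic or not, can then be built around whatever shows up in the gap, which is the mapping-to-the-plan step Ben mentions.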
[00:10:28] Jeff Dillon: A lot of higher ed and workforce leaders are still uneasy about AI's impact
on jobs. You consistently flip that narrative saying AI doesn't replace people, it amplifies what
they're capable of. I truly believe that. Mitch Droll has one of my favorite quotes. He says, AI isn't
here to replace us, it's here to make us superhuman. Can you share a moment from your work
where you actually saw that amplification happen where human creativity or productivity reached
a new level because of AI?
[00:10:58] Ben Tasker: So going back to the AI "uh-oh" moments. I'm not going to name the
companies publicly, but there have been a lot of companies that think they have good AI, they
implement the AI, they let a mass amount of folks go, and then they have to hire those folks
back. And sometimes they have to hire more than just those folks back, because the damage
that they did with the AI was so bad that they have to fix it, plus keep doing what the business
does.
So that's the most drastic form of bad AI.
And these organizations have what I call point solutions. They're not system-based solutions. A
point solution could be automating just your customer service department without looking at
the entire organization.
It could be just automating some spreadsheets with AI for the finance department without
looking at the rest of the organization. That's where companies go wrong, right? So now
transitioning to answer your question: it's matching the upskilling with the AI. Companies that do
that, that have an AI learning plan that allows their employees to learn AI how they want, as long
as it maps back to the plan, are 52% more profitable. So 52% more revenue than companies
that don't. And they've been tracking that since 2002. Sorry, 2022, a little off there. And then
individuals that go and earn those skills actually earn a workforce premium as well. So there's
social mobility: those individuals earn more while they're at the company, and the company does
better. Companies that just do the AI or just do the learning aren't as successful as the
companies that do both.
[00:12:37] Jeff Dillon: Right, right. That makes sense.
And now a word from our sponsor.
AD: Do your audiences know the true power of your institution's story?
With deep inquiry and expert strategic and creative development, Mackie Strategies applies
decades of leadership to help you drive branding, marketing and fundraising that get results.
Mackie Strategies, moving your mission forward.
[00:13:11] Jeff Dillon: In the context of higher ed, what do you think are the biggest opportunities
and the biggest risks you see when integrating AI tools for students and teachers?
[00:13:23] Ben Tasker: So this one hits home a little bit, because just today, I don't know if it was
published today, but I reviewed it today, there was some research issued by Stanford on K-12
education. And it's really not that surprising, but students in that category are at a much higher
risk of being susceptible to AI. And an AI risk could mean that it gives you advice that, if an adult
received it, they would know better. Maybe they're younger.
AI can have a lot of impacts, is my point there.
And on the flip side of that, now I'm moving outside of the research, the research didn't say this
specifically, and not thinking about higher ed, just thinking about education in general.
There are some schools that aren't embracing AI. So what does that really mean? It means that,
okay, I'm a student, I'm having trouble in a math class. I can just go to this OpenAI or Claude or
Gemini chatbot, I can interact with it, and I can get the answers.
And then once I become more comfortable with it, maybe I can start asking it more detailed
questions. But it's free, it's right there. If the schools embed that AI into how they teach, how
they do their courses, having a 24/7 tutor, for example, the risk is greatly diminished, because
the students will stay in that platform. It's monitored, there are flagging systems. It's not free, but
it's the same licensing cost for the school.
And if parents need to get involved, they can. If they need the prompt chats or whatever their
child is interacting with, they can see it. Now, bringing this to higher ed, it's similar but different.
Sure, the risk might not be the same as it is with a child, but I would say the risk is that if you
start using this tool and you continue to use it, you might not be an expert, or expert enough,
because you get into some not-so-great habits by interacting with the AI, right? Like, it's kind of
easy to impersonate, for example, a finance expert, until you actually have to go practice it. It's a
little bit different, but I think they're overlooking that. So the last question you asked, what's the
primary
example of how you could use this in the classroom? And I think it ties in well here. I have a
computer science faculty friend, and before, they were dead set against AI, because you can
just ask it to code for you and it will produce that code. But over time, they were able to use
Claude and embed it into the LMS, so it worked in the coding environment with the students,
and instead of giving answers, it provided steps. So if you needed help to write a function, it
would give you the steps, but it would never really give you the answer. And the students in this
introductory coding class, the first coding class these students take, are moving at a pace eight
weeks faster than students without that coding access and without that GPT access.
So in other words, they're getting a full half a semester more material. They're understanding it
better. They're not cheating, they're actually trying to learn.
And it's embedded into the program.
So it's possible, it's happening, and someone that was against it is now for it. I know it's a case
of one, but think about that at scale.
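As an illustration of the pattern that faculty member used, here's a minimal sketch of a hints-only tutor built on the Anthropic Python SDK. The system prompt wording and model name are placeholders, and a real LMS integration would add authentication, logging, and the flagging mentioned earlier; this is not the actual course setup.

```python
# Minimal sketch of the "steps, not answers" tutoring pattern,
# using the Anthropic Python SDK (pip install anthropic).
from anthropic import Anthropic

# Illustrative prompt; the real course's instructions are not public.
HINT_ONLY_SYSTEM_PROMPT = (
    "You are a tutor embedded in an introductory coding course. "
    "Explain the steps and concepts a student needs to solve their problem, "
    "but never write the final code for them."
)

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def tutor_hint(student_question: str) -> str:
    """Return step-by-step guidance without a complete solution."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder; pin whichever model you use
        max_tokens=500,
        system=HINT_ONLY_SYSTEM_PROMPT,
        messages=[{"role": "user", "content": student_question}],
    )
    return response.content[0].text

print(tutor_hint("How do I write a function that reverses a string?"))
```

The design choice doing the work is the system prompt: the model stays in scaffolding mode for every student message, which is what lets students move faster without being handed answers.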
[00:16:35] Jeff Dillon: Yeah.
[00:16:35] Ben Tasker: Wow.
[00:16:36] Jeff Dillon: For an organization or university that's still early in its AI journey, what do
the first 90 days look like? What do you recommend they set up for success?
[00:16:48] Ben Tasker: I think it's important to start looking at your responsible AI and AI ethics
before you even look at the AI. So what does that mean? Transparency, accountability, fairness.
[00:16:59] Jeff Dillon: Like governance? Basically, yeah.
[00:17:01] Ben Tasker: It's actually a little bit more than governance. It's thinking about how
we're going to use this AI as a system, not just a point solution. Who's accountable for it at the
organization?
Who's feeding into it? What data are we using? Are we telling people that we're using their
data? So if you're a school, for instance, and you're a teacher and you make some assignments
with AI, are you letting your students know that they were made with AI? What if there's AI
evaluation? Is that known?
And on the flip side, if I'm a student and I complete some of my work with AI, should I be letting
people know? Those are kind of controversial topics right now, but I would say yes, we need to
do all of those things. We need to be transparent. That way, when we have that AI "uh-oh"
moment, we can go back and kind of think about it: what went wrong, how do we fix it, how do
we actually implement the AI? Because I don't think we're going to get it exactly right the first
time.
[00:17:51] Jeff Dillon: Right, yeah, that makes sense. Well, skills-based learning is a big theme
for you. How should educators and trainers shift their mindset from degree-plus-time to
skills-plus-outcomes?
[00:18:06] Ben Tasker: So what's a skill? A skill is something that you're really good at and that
you can perform consistently over time. So I like to give the example of a Michelin star
restaurant. There's typically a Michelin star chef.
That chef is world class. There's not many Michelin star restaurants. You're going to go there
and you're going to get a great meal, you're probably not going to get food poisoning. But they
had to put in the time. It's a certification, it's earned, it's very prestigious. Same with any other
skill. You have to prove that you know it. So how do you start thinking about this instead of time?
So traditionally it might take four years, for example, to get a basic business education or a
finance education or a data science education.
But in this new work economy, you might be able to get an entry level job, you might be able to
partner with a mentor. Maybe I want to go into data science, but I'm a good project manager. So
maybe you can pick up some entry level data science tasks and you can learn from the data
scientists and the data scientists can learn how to do project management. They can groom the
backlog, they can plan the tasks, they can do the follow ups. I know not everybody might be
interested in those adjacent roles, but typically those two roles work together. And the benefit of
learning in that environment is that two experts can validate each other's output. So you really
can remove the middle body, which is the education firm. And employees can, yes, still go into
get a micro credential or go get a degree or whatever the case may be. But you can also learn at
work. It can be fun. You can break that down into smaller chunks. It doesn't necessarily have to
be four years. A lot of work functions are task oriented. So yes, I can go to school for four years,
but when I go get a job, I might just have to do social media post updating, see what other
competition is posting and make our posts better than that.
So we also have to think about how jobs change as well.
[00:20:02] Jeff Dillon: Yeah, I totally agree. You've had a rare vantage point, seeing AI adoption
unfold in healthcare, enterprise, and higher ed. Each sector has its own language and pace, but
there are always common threads among the success stories across those different sectors.
What patterns have you noticed in the places that actually get AI right?
[00:20:24] Ben Tasker: The common theme between all of them, and I didn't realize this when I
was in the project, so this is hindsight, is that they were all human-based projects. In healthcare,
when I was developing an algorithm to help identify stomach cancer patients, we actually didn't
even know at first that the patients we needed to identify were stomach cancer patients. We had
to come to that conclusion over time. And then that became a public health campaign, which
means we had to encourage people to get their water cleaned, because that was the primary
reason the folks in this area were getting stomach cancer. We had to go meet with clergy and
school members to get folks to come in for testing and to test their water.
On the academic side, when I was building algorithms to help improve student success, it was a
similar concept with a different group of individuals. How do you make students more
successful? Well, you have to give them more resources. Maybe they're not in the right program.
How can we measure that? Do we have that data? So it's really intersecting the data science
that a lot of folks love with the human element. Being able to see just one student earn a degree
that might not have otherwise was super rewarding for me. And we saw many more than that,
we saw about a thousand. Also, bringing this back to healthcare, usually data science is in a
cube somewhere in the back corner, and I never thought I was going to meet a patient. But I did
meet some patients, because I had to knock on some doors to help get some water filtration
done. They don't really teach those skills in the data science program, but those human skills
are equally as important as any of the data science skills.
[00:22:00] Jeff Dillon: You've been vocal about weaving ethics and responsibility into every
stage of AI adoption, not treating it as an afterthought. So that's especially important in
education, where the impact on students, faculty, and trust is so direct. So from your
perspective, what are the biggest ethical pitfalls colleges and universities need to watch out for
as they roll out AI on campus?
[00:22:27] Ben Tasker: In one of my previous answers to this question, I usually say that we
need to implement AI and responsible AI together. And I still think that's true, but I actually think
we need to start thinking about responsible AI before we even start thinking about AI generally
at an organization.
So what does that mean? Let's break that down a little bit more. It means that we have a
steering committee: the individuals that are helping define the data, and the individuals that, if
something happens, come together to figure out what we need to do to mitigate the risk. A
committee that helps with the messaging if, or when, something does go wrong. We have to
admit that something's probably going to go wrong, so we have to own that. How are we buying
our licenses for AI? This committee can help review that process to make sure that everybody
has a say and those licenses are applicable for not just one department but many departments.
It's really thinking about it as a system, and I know I keep
bringing it back to that. But a lot of people are like, let's just do this AI thing. Or, now we're so far
behind and workforce wants to catch up, and there's a need for more students here because of
the higher education enrollment cliffs. How can we get more students here? Let's do this AI
thing. And they're not thinking about it holistically, or maybe they're partnering with one tool or
another tool, or they have a technical AI program. But I would really challenge any college
president listening to this to think about embedding AI into all your programs and challenging
your faculty on how to do that. Applied AI is just as impactful as the technical AI. Applied AI, for
example, is how you build a chatbot with a low-code, no-code tool. It's how you can interact with
the chatbots. It's building responsible and ethical AI into the business.
Those are all jobs that are going to keep increasing, but they didn't actually exist just a few years
ago. So we're going to keep seeing new jobs that didn't exist, you know, a couple of weeks ago,
as we go through this between times. So instead of thinking about it in degrees, change that
mindset to skills and the future of work. You can still require the four-year degree, I know that
might not change at all the institutions, but bringing that back to the workforce, it's just more
marketable.
[00:24:47] Jeff Dillon: You know, you've really had a front-row seat to how AI is reshaping the
way people learn, from micro credentials to generative AI tutors to personalized coaching. The
pace is kind of crazy, but it's also really exciting and promising, I think. What emerging trend in
AI and learning has you most excited right now, and what do you see on the horizon that could
truly change how we teach and learn?
[00:25:14] Ben Tasker: The most exciting trend. So there are a lot of trends happening right now.
There's 24/7 tutoring or support, and that was kind of already happening, but now it's happening
even more with agentic AI. There's also the automation of processes. But the element that gets
me most excited is that by having an AI-enabled university, and this might not be a right-now
thing, but I think it will be in a couple of years, education is going to become more accessible.
And it's not just going to be in the classroom or out of the classroom. Anybody that wants to
learn at any time, that has access to these AI tools, will be able to learn, and they can define
how they learn. It's personalized, and as long as they know how to use it, they'll be able to get
that output. So I think that is going to be the biggest driver for education, because it's going to
have to change to that environment, right? It's going to have to change the product. And usually
with innovation, education and healthcare lag, right? But now it's going to become a more even
playing field. They're going to have to be at the forefront of it, they're going to have to be
steering it. And that to me is very exciting, because a lot of these changes, many would argue,
needed to happen, and now it's kind of being forced to happen.
[00:26:27] Jeff Dillon: Yeah, right. Well, many in the audience might love to hear one practical
takeaway they can act on tomorrow. What's your advice for educators and learning leaders to
begin applying AI smartly in their context this week?
[00:26:44] Ben Tasker: So if you haven't already learned AI, let's put the AI aside real quick and
develop a learning plan for ourselves. It doesn't have to be a five-year plan; it doesn't even have
to be a five-week plan. You can learn a basic skill today, right now, in 20 minutes. So what skills
do you think you currently have? And we're not using AI for this yet. What skills do I think I
currently have? I physically write it down. What skills do I think I can obtain? And then maybe I
go ask a mentor, or I work with AI to help develop that plan. But how am I going to acquire those
skills? By understanding how you're going to acquire the skills, A, you're ready for this between
times and whatever happens after it, and B, you're starting to interact with the AI now. So now
that you have these concepts, you're going to try to learn something about AI, and you can
define what that something is. Maybe it's making a meal plan. Maybe it's planning a vacation.
Maybe it's just using it for search instead of using another tool.
AI can help with a lot of things, and that will make you more aware of it, and then that will help
you identify the risks as well.
[00:27:48] Jeff Dillon: Yeah, I agree. Well, looking ahead five years, what would you like to look
back and say: hey, we made this shift and it changed everything in edtech and workforce
learning?
[00:28:00] Ben Tasker: Five years might be a little fast for this one, but I'm really hoping that skills
based learning, competency based education, any of those equivalents really becomes much
more mainstream. I think degrees are becoming too expensive, they're taking too much time.
Workforce needs to change as well. Skills are the way to go, so I'm going to stick to that.
[00:28:22] Jeff Dillon: Yeah, well, thank you, Ben. I love your point of view and everything you've
done in your career. I will put links to Ben's LinkedIn and his company in the show notes. It was
great to have you on the show.
As we wrap up this episode, remember, EdTech Connect is your trusted companion on your
journey to enhance education through technology.
Whether you're looking to spark student engagement, refine edtech implementation strategies,
or stay ahead of the curve in emerging technologies, EdTech Connect brings you the insights
you need. Be sure to subscribe on your favorite podcast platform so you never miss an inspiring
and informative episode. And while you're there, please leave us a review. Your feedback fuels
us to keep bringing you valuable content. For even more resources and connections, head over
to edtechconnect.com, your hub for edtech reviews, trends, and solutions. Until next time,
thanks for tuning in.