Episode Transcript
[00:00:00] Dr Pak: I would say probably, if I had to just say one thing, and this is something
that I've remembered from some of the best teams that I've worked with, it's adopting a Living Lab mindset, using our own spaces to engage in design thinking processes. So MIT does this, I know that a university in Singapore does this, and they're doing it in Finland too.
So originally the Living Lab was for sustainability and for transportation, and they would use their campuses as living labs to figure out some problems, then see if they could port that out to the local community. Right.
[00:00:38] Jeff Dillion: Welcome to another episode of the EdTechConnect podcast where we
talk about everything higher ed tech. Today's guest is Dr. Cabrini Pak, a polymath and professor at the Busch School of Business at the Catholic University of America.
Cabrini holds four degrees across four disciplines and is building theory in three areas: business, education, and theology.
Her interdisciplinary approach gives her a unique perspective on the challenges and
opportunities facing higher education today. She's currently exploring how stigmergy, a concept from cybernetics, can be used to improve everything from character formation to sustainable consumer behavior. And this year her focus is shifting to artificial intelligence, where she will be testing tools like GPT, DALL-E, Copilot, and NotebookLM with her students to explore their potential in the classroom, in advising, and in the operational problems embedded in their ecosystem.
With a background in Management Information Systems and over 20 years in the corporate world, Cabrini brings a sharp systems thinking mindset to the problems of modern academia. She's especially interested in how agentic AI bots and error rate tracking could radically reduce redundancy and improve data quality and student services at universities.
Known for connecting ideas across disciplines and industries, Dr. Pak sees AI as a potential tool to innovate and design better processes, deeper insights, and smarter workflows.
Welcome to the show, Dr. Pak. I'm so excited to have you today.
[00:02:22] Dr Pak: Thanks for having me.
[00:02:23] Jeff Dillion: So you have degrees in four different areas.
How did that interdisciplinary path shape your current work?
[00:02:32] Dr Pak: Okay, so I'll just kind of briefly go over some of the things that I've been
trained in. And so my undergraduate is in biology with a minor in Chemistry from UNC Chapel
Hill.
And I got my MBA in Management Information Systems from the George Washington University
and an MA in Theology from Villanova and a PhD in Religion and Culture from Catholic U, where
I currently teach. And I also have doctoral training in marketing. I left the program before I finished, though. I really enjoy building bridges across them, and so being trained in such different fields
makes me keen to find ways to connect the learnings from those different fields so that we can
kind of build a more coherent picture of reality. Right. So, for example, how might we learn
lessons from nature when we develop emerging technologies? So I always like to tell my
students that technology has a secret infatuation with nature. Right. If you just look at how we've
named things in technology, it's really like a lot of things that you find in nature. Right.
Or, you know, how might religion and culture influence ritual consumption in the marketplace?
Another thing that I'm looking at right now is transcendence, which is a concept that's native to
philosophy and theology, but might be used to tap into resilience from a psychological
perspective. Right.
And the last one, I'm finishing up a book on being in utero, and I'm pulling together insights from
embryology and prenatal psychology and kind of just saying, hey, are they converging towards a
metaphysics of being inside another human being for the first nine months of our life? And why
is that significant? You know, so prenatal psychologists are saying how important that is,
and, you know, maybe we can even talk about the womb as a theological place. So that's just
kind of like how my brain works. I don't know if that answers your question.
[00:04:18] Jeff Dillion: That's fascinating, and that's what drew me to get this podcast going.
When we first met, you had some great perspectives and research you're working on. You
mentioned building theory in theology, education, and business.
What connects those efforts for you?
[00:04:36] Dr Pak: So if I had to describe the one passion that connects everything I do, that
is the human experience. So I really just love delving into the different facets of the human
experience, whether it's about the transcendent nature of human beings, how we learn, or how
we form communities of commerce and exchange.
[00:04:53] Jeff Dillion: When we first talked, you mentioned a term, stigmergy, in both education and marketing contexts. Can you talk about that term?
[00:05:03] Dr Pak: So stigmergy is actually something I'm borrowing from a cybernetics researcher by the name of Francis Heylighen, who is borrowing it from a French entomologist from the 1950s who discovered that social insects can solve a coordination paradox.
For example, if you look at how bees work in a hive or how ants work together, the paradox is
everybody looks like they're doing their own thing, but somehow they're coordinating, either
indirectly or otherwise, to build something extremely complex that nobody could actually do on
their own, not even with a few. Right.
So Heylighen developed this mechanism. I borrow his concept, and I got permission from him to modify it for business. He defines it as an indirect, mediated mechanism of coordination between actions, where you have an action-trace feedback loop. Simply put, there are four elements. The trace is left on a medium, and the trace triggers an action by an agent, which then leaves another trace, and it continues on in a feedback loop.
Those four elements, action, trace, medium, and agent, can be used to design really powerful coordination mechanisms. So that's how I bring it into my own work.
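To make those four elements concrete, here is a minimal sketch in Python of an action-trace-medium-agent loop. The class names and behavior are illustrative assumptions, not Heylighen's formalism or any library:

```python
import random

class Medium:
    """The shared environment where traces are deposited."""
    def __init__(self):
        self.traces = []

    def deposit(self, trace):
        self.traces.append(trace)

    def latest(self):
        return self.traces[-1] if self.traces else None

class Agent:
    """Reacts to the latest trace and leaves a new one; no direct messaging."""
    def __init__(self, name):
        self.name = name

    def act(self, medium):
        trigger = medium.latest()
        # The action is triggered by the trace found in the medium...
        if trigger:
            action = f"{self.name} builds on ({trigger})"
        else:
            action = f"{self.name} starts the structure"
        medium.deposit(action)  # ...and leaves a trace that triggers the next agent
        return action

# Feedback loop: action -> trace on medium -> triggers another agent's action.
medium = Medium()
agents = [Agent("ant-1"), Agent("ant-2"), Agent("ant-3")]
for _ in range(5):
    worker = random.choice(agents)  # no central coordinator picks the order
    print(worker.act(medium))
```

The point of the sketch is that the agents never message each other directly; coordination emerges only through the traces left on the shared medium.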
[00:06:20] Jeff Dillion: Wow.
So from other projects and industries, you're kind of now applying that to education and
marketing, right?
[00:06:27] Dr Pak: Yes. So one of the things that I do is I have developed a few stigmergic feedback loops in my classroom, and my students were unknowingly using them. I informed them what it was, but it's helping them build better habits in the workplace and also teaching them how to collaborate together. We do consulting engagements, and so they have to learn how to work on a real world business problem, but they also have to learn how to team up very quickly, like a social swarm around something. And then I've in some cases been able to teach some teams how to design their own action-trace feedback loops as they're solving problems.
[00:06:59] Jeff Dillion: So let's dive into AI. You know, you're tackling AI this year. How are you
approaching it differently than most?
[00:07:08] Dr Pak: Differently than most as in?
[00:07:10] Jeff Dillion: Well, I feel like there's not a great plan and your research has really
picked some different facets of AI, maybe some problem sets. Just your overall take on it when
we talked last was really interesting to me on how you see it applying to education.
[00:07:27] Dr Pak: Okay, so what's interesting about higher ed is that it's very often kind of
like a train station. You got a lot of different travelers moving in and out of there. Right. And
you've got some that are pretty stable operators within the train station and there's a lot of
exchange that occurs in that place. Right. So as regards AI, because I'm 20 years corporate, my first instincts tend to come from my corporate training, and I'm always following emerging tech in the industries, particularly high tech, biotech, and healthcare. I look at those
industries with great interest and so I follow what's happening there. But then on the other side,
you know, as I'm listening to employers, a lot of employers are saying, hey, we wish we really
had these people doing these things. And we don't have really a lot of entry level folks doing X,
Y or Z. Right. So then for me that's like oh, okay. Why don't we just play with it in the classroom
and build their competencies while we're doing it? So I'm doing the same approach with any
emerging tech. For example, I think Zoom's AI assistant came out a couple of years ago, and we decided to experiment with it in the classroom.
So I said, okay, look, this is a workplace simulation. We're going to record this using the Zoom
assistant, and you're going to assess the quality of the meeting synopsis and tell me if you think
that this would actually work for a colleague of yours that you knew was out sick or couldn't
attend. Like, is this accurate? Does this accurately portray it? And they did it for extra credit.
And what we found was that, you know, it did a good job of summarizing things, but it
misgendered speakers. It sometimes put inappropriate emphasis on things because of the
repetitions that it heard. It did not always accurately indicate what was being said, so there
were some errors there.
And I said, would you use this in the workplace? So it was pretty strongly divided, actually.
They were saying, you know what?
I think it needs more improvement. So maybe there has to be a disclaimer or, you know, you can
use it for a quick summary, but you would have to have the recorded meeting for people to
actually watch.
So things like that. I like to turn the classroom into a lab where they're actively engaging
emerging technologies, which would include something like an AI tool.
[00:09:49] Jeff Dillion: That's great. It sounds like working with these emerging systems was the core of that class. What about faculty who aren't as focused on that as the core discipline they're teaching, where they're teaching something else but using AI to teach it? Do you feel like you're on a spectrum at your school, where you're kind of on one end, with free rein to do what you need to do, but maybe with a lack of digital governance it's harder for other faculty? Is that an issue at your school, with some faculty not really understanding or knowing how to put the guardrails up for AI while they're teaching?
[00:10:30] Dr Pak: Well, I'll tell you this. My actual course was in marketing, so I was just
using the tool because I'm like, all right, this is a workplace tool. We need this because we're
doing an engagement, and we had to record a meeting with the client. Right?
So I wasn't even teaching anything specifically focused on AI. I'm just like, okay, let's use this tool to see if it works. Another time we used GPT to see if we could stump it. And
their interest was in forensic accounting. So for us, like, AI is like a hammer. It's a very fancy
hammer, but it's a hammer. It's a tool. Right. And you can only use it on certain kinds of things. I
would say that, you know, this is probably common to a lot of faculty, maybe not just at my school, but behind the range I'm seeing in their ability or motivation to work with these tools is some underlying anxiety about AI, and we do have an awareness that it's also being
used in bad ways, like for cheating on essays. Right.
So one of the ways that we've worked around it is, you know, some of the profs have been like,
okay, you know what, you might have an essay exam, but you're going to have an oral exam,
too. And if you can't explain what you wrote about, it's an automatic F because, you know, you
can't explain a concept that you claim you know about. Well, that's a problem, right?
[00:11:44] Jeff Dillion: Yeah, yeah.
[00:11:45] Dr Pak: So there are things like that where there might be some anxiety about the
misuse of it, but there's also a question of literacy.
And so if you don't have a basic literacy of how these tools work, there can be a certain
resistance to it and a certain fear of it. Right.
So, you know, something else that I've learned is that, you know, when you're using the free
GPT tools and generative AI tools, you have to be really careful what you put in there because
it's not private. You know, there's a disclaimer that says it goes into a learning model. Right.
So I tell my students, you know, don't just stick anything in there. Be aware that that's not
private. Right. And so I think the overwhelming number of tools now available that are equipped with some form of machine learning or intelligence can be so much that there's a little bit of inertia, or maybe a lot of inertia, against using it in a classroom.
[00:12:38] Jeff Dillion: Right, right. And in your testing, you reveal that, which is great. The students get to see, like, oh gosh, this isn't ready for prime time. So, we know there are different types of AI. There's predictive, there's generative. But what I'm interested in is your take on the latest thing, which I'd say has become popular in the last six months: this talk about agentic AI. How would you say that applies to higher ed? Could you walk us through your thoughts on agentic AI for higher ed? Is that even coming soon?
[00:13:09] Dr Pak: It's already here.
It's already here.
So an example of agentic AI, and I've heard people already developing this at other universities, and I know that we're working on some things here, is a virtual TA that assists students in learning the concepts, maybe even for writing corrections on papers. Helping them learn how to code is another thing you can use certain types of agents for. Another really
interesting application would be in advising and course planning. That's a workflow that tends to
get jammed up certain times of the year.
For example, you know, giving the student a form that they need to fill out and then submitting
the form to the correct place would be a helpful function to have rather than to have a human
being doing it over and over and over again. Right. So it's already here.
I think it's being used with limited scope, but there's a lot more that I think we can do if we can
properly frame some of the problems that we're facing in higher ed.
[00:14:08] Jeff Dillion: You mentioned using it to code. Do you have some favorite platforms? I really think Claude does a great job for me, and I've used Lovable. I've used some that are devoted totally to vibe coding. What are your thoughts on the good tools that are out there right now?
[00:14:25] Dr Pak: I don't personally use a lot of agentic AI, but I have colleagues that are
actually developing their own agents on the ground.
So we're kind of in the midst of a cauldron of, you know, people cooking up these different
agents and then experimenting with them. Right. Personally, I would say just from an MIS
perspective, my hesitancy with agentic AI right now from a systems perspective is if you attempt
to jam something into an enterprise level system without really having looked at the
infrastructure or the architecture or the capacity to handle the massive amounts of processing
that will occur, you run the risk of some system failures, potential breaches, poisoning your data,
things of that nature. And so for me, at least at the enterprise level, I have a little bit of a healthy
suspicion right now of even using that on a native platform to my environment.
[00:15:21] Jeff Dillion: I think that's what it boils down to: the trust. Because I think about when I used OpenAI's version that has the operator in it, just to test, to see what can this actually go out and do for me. And I realized right away what I was running into: when it asked me to log in and give it my login, that's where I stop. That's where I'm like, nope. It's just not going to be there. If I think about a university setting, the most basic thing I can think of is, you come in, let's say a student gets assigned their bot assistant, and there's as many as you need, but it knows everything about you. If you give it access to your systems, it knows your schedule, it knows you like basketball, it knows that. But it can only get to a certain point of suggesting things before, like, how much do we trust it to actually confirm what I'm going to be doing next, or with my finances, or things like that? We'll see where it goes. I remember when eBay came out, I didn't trust it. How would that ever work? Why would you ever trust buying something from someone you've never met before? And here we are.
[00:16:20] Dr Pak: I think with agentic AI, there will have to be a very simple and graceful
way to correct for error, or to have an erase and rewind kind of a button where we're like, oh, I
didn't mean that to happen. Could you please, like, undo that somehow?
So this is what I would want is like some kind of friction against, you know, finalizing an error that
might have been made by an agent and say, hey, can we just undo that? Is there a grace period
where you can undo that action?
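A toy sketch of that grace period idea in Python; the class, the 60-second window, and the action names are all hypothetical, not any existing agent framework:

```python
import time

class PendingActions:
    """Queue agent actions and finalize them only after a grace period."""
    def __init__(self, grace_seconds=60):  # the 60s window is an arbitrary choice
        self.grace = grace_seconds
        self.pending = {}  # action_id -> (proposed_at, description)

    def propose(self, action_id, description):
        self.pending[action_id] = (time.time(), description)

    def undo(self, action_id):
        # The "erase and rewind" button: drop the action before it commits.
        return self.pending.pop(action_id, None) is not None

    def commit_due(self):
        """Finalize only the actions whose grace period has elapsed."""
        now = time.time()
        due = [a for a, (t, _) in self.pending.items() if now - t >= self.grace]
        for action_id in due:
            _, description = self.pending.pop(action_id)
            print(f"committed: {description}")
        return due

actions = PendingActions(grace_seconds=60)
actions.propose("a1", "submit course-change form on the student's behalf")
actions.undo("a1")  # a human catches the mistake inside the window
```

The design choice is simply friction: nothing the agent does is final until the window closes, so an undo costs nothing.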
[00:16:50] Jeff Dillion: Yeah. Speaking of automating things, you shared some thoughts about bots updating campus web pages a month or so ago. What's the real opportunity there with bots updating content?
[00:17:01] Dr Pak: If they have the right capacity to do this, and this is a big if,
I think that there's a couple of things that can happen. One is cleaning up the dead links. That's
perpetually annoying. And it's a highly tedious project. And because of the number of static web
pages that are buried on the real estate of a particular organization, it becomes really hard to do.
It's like an NP problem, right? It's a needle in a haystack. So having a bot doing it is kind of like having, you know, those automatic vacuums, Roombas, that just kind of bounce around the house and randomly pick things up. I would say a similar approach, where you can do that and a human being doesn't have to spend an enormous amount of time trying to update everything.
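A rough sketch of such a dead-link sweep might look like the following Python; the page URL is a placeholder, and a production bot would need crawl scoping, rate limiting, and retries:

```python
import requests
from urllib.parse import urljoin
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects href values from anchor tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def find_dead_links(page_url):
    """Fetch one page, extract its links, and report the ones that fail."""
    html = requests.get(page_url, timeout=10).text
    parser = LinkExtractor()
    parser.feed(html)
    dead = []
    for href in parser.links:
        url = urljoin(page_url, href)  # resolve relative links
        try:
            resp = requests.head(url, timeout=10, allow_redirects=True)
            if resp.status_code >= 400:
                dead.append((url, resp.status_code))
        except requests.RequestException:
            dead.append((url, "unreachable"))
    return dead

# Placeholder URL; point this at real pages to sweep them.
for url, status in find_dead_links("https://example.edu/somepage"):
    print(f"DEAD {status}: {url}")
```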
[00:17:44] Jeff Dillion: Yeah, that's a good analogy, because it's going to pick up some things you probably don't want picked up. But there have to be some checks. I call it practical AI. I've looked at site search within universities. A lot of schools are behind compared to maybe the private sector or other industries, in that they haven't owned their own search on their website. So what we need is maybe an AI that can suggest, here are all the links we think we should link up, based on the semantic understanding of what we think that user wants, but do you want to approve that before we actually do it? Because there's so much jargon and branding within a university that may not be in a global training set, I find that to be the step that higher ed really is willing to take right now, rather than just saying, hey, let's replace our search with a gen AI tool.
[00:18:32] Dr Pak: Yeah. I mean, the other thing is that as things get updated and administrations change and teams change, a lot of times you'll find some disjointed vectors. They'll have the same thing, but they're pointing to two different documents, and which document is correct? So if I'm on the back end of this and I'm looking at it from a coding perspective, I would want a bot to be able to compare the URLs for the same phrase and say, okay, are these the same, and are they supposed to be like this, or is one outdated and needing to be updated?
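A minimal sketch of that comparison, assuming the bot has already collected anchor-text-to-URL pairs from each page; the sample data is invented:

```python
from collections import defaultdict

# Invented sample data: anchor text seen on each page and the URL it points to.
pages = {
    "registrar/index.html": {"Change of Major Form": "/forms/major-change-2021.pdf"},
    "advising/faq.html":    {"Change of Major Form": "/forms/major-change-2024.pdf"},
    "dean/resources.html":  {"Academic Calendar":    "/calendar/2025.html"},
}

# Group link targets by phrase across the whole site.
targets = defaultdict(set)
for page, links in pages.items():
    for phrase, url in links.items():
        targets[phrase].add(url)

# Phrases with more than one target are the "disjointed vectors": the same
# label pointing at different documents. A human decides which one is current.
for phrase, urls in sorted(targets.items()):
    if len(urls) > 1:
        print(f"CONFLICT: '{phrase}' -> {sorted(urls)}")
```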
[00:19:01] Jeff Dillion: Exactly. If it could tell you, here are the ones you should look at and why, it would save you so much more time, to say, these are the same, but this one's a little more updated, or something like that.
[00:19:10] Dr Pak: So much more efficient than having random people saying, hey, this link is broken, or I used the wrong form. I'm like, that's never going to get solved.
[00:19:19] Jeff Dillion: It's really great to have the unbiased view too because there's so many
people on campus websites that part of their identity and job is so tied to updating the web
pages they don't even know why they're doing it anymore. But it's just part of their job, you know,
is it really valuable to update some of these web pages? We really have to look at it from a
different lens.
[00:19:39] Dr Pak: I agree.
[00:19:40] Jeff Dillion: You raised the concept of error rates for AI and task based applications.
Can you talk about why that's so important?
[00:19:47] Dr Pak: Sure. Okay. I learned a lot from my teams when I was working at Deloitte and Blue Cross Blue Shield. We always had a Dr. Doom person in our group asking, what if things go wrong? Right. So when I think about AI enabled applications, understanding how and why they make errors will help improve the training protocols and the learning sets, right? It can also expose something like algorithmic bias. So consider MIT's discovery. There was a grad student, I think her name was Joy, and she found that facial recognition systems fail to identify people of color, with the worst results relating to darker skinned females, because there was algorithmic bias, and that was exposed. And then she called out Amazon, IBM, and other tech giants who were using those systems, as well as law enforcement agencies that were also using those systems, which were flawed. Right. So unless you're actively testing the error rate, you're not going to find ways to improve the training protocols. Right? So that's why it's so important. I don't think it's talked about a lot because it's not popular, but for me, as somebody that actually looks at information systems and how people work with them, error rates are really critical, and errors can spread really fast. So catching that quickly can help clean up the data set and also improve the user experience.
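A minimal sketch of what error rate tracking could look like for the meeting-summary exercise described earlier; the review records and error categories are illustrative:

```python
from collections import Counter

# Hypothetical human review records for AI-generated meeting summaries,
# like the Zoom assistant exercise: each output is marked ok or given an
# error category by a reviewer.
reviews = [
    {"ok": True},
    {"ok": False, "kind": "misgendered speaker"},
    {"ok": True},
    {"ok": False, "kind": "wrong emphasis"},
    {"ok": False, "kind": "misgendered speaker"},
    {"ok": True},
]

errors = [r["kind"] for r in reviews if not r["ok"]]
rate = len(errors) / len(reviews)
print(f"error rate: {rate:.0%} over {len(reviews)} reviewed outputs")

# Per-category counts show where to focus retraining or guardrails.
for kind, count in Counter(errors).most_common():
    print(f"  {kind}: {count}")
```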
[00:21:09] Jeff Dillion: So let's think about these acceptable error rates. If you look at other applications of AI, let's take self driving cars. I think most people agree, or you have to kind of accept the fact, that they make fewer errors. It's already been shown that Waymo and the other companies doing this have way fewer crashes and less severe crashes. But when there is a crash, it's severe, newsworthy, you know, and we can draw out examples in history of why this is. But since there's no one to blame, what error rate do we have to get to, with lower stakes, in higher ed to be acceptable? Do you have any idea of where we need to be?
[00:21:50] Dr Pak: I think it's really going to depend on what you're trying to do.
It really is. It's going to depend on your context. Right. So I had a colleague, I remember, I was just reading about one of the studies she was doing, where you have special operators in the military that are in a situation where they have to assess mortalities and they have to assess triage. Right. And is it helpful to have that, for example, delegated entirely to agentic AI, or does the human still need to have skin in the game?
Right. And what she found was that no, the humans still want to have skin in the game. They need to be able to oversee this. And they need to be able to guide the right triage process.
Right. So it kind of depends on what you want to do and what the acceptable error rate could be
in that industry. For example, if you're doing brain surgery, your error rate is going to have to be
really, really, really, really small.
Right. You're working with neurons. Right. But if you're working on, say, for example, advising,
well, you can set it up in such a way where the preliminary work is done with, say, for example,
an AI assistant, but then the advising meeting is now not necessarily having to deal with all the
repetitive stuff, but can just really fine tune the coursework that they're trying to plan and do a
cross check with a human being who actually understands the system with more of the tacit
knowledge that a machine just can't pick up.
[00:23:15] Jeff Dillion: Well, let's talk about advising. Do you think bots are a good option for
student advising today?
[00:23:22] Dr Pak: I've seen a lot of experiments with it in other schools and I'm kind of
curious about it. So I would definitely be interested in testing something like that. My experience
with advising is such that the way my process works is we have an amazing advisory services
group and they've actually designed a spreadsheet that allows the advisor and the student to
see the courses they've taken, what's been fulfilled, what needs to be planned. Right. And we
also have the degree requirements across the top. And then I have a cheat sheet that I keep when I'm doing my Zoom meetings with my students that links up to the forms: you're changing your major; what happens if you fail a course and have to take it again; if you want to transfer a course in; if you want to study abroad, what requirements can you fulfill? There's all these little weird
configurations. What I find myself 85% of the time doing is repeating the same thing over and
over again with regards to requirements. If that could be taken over by something like a machine
which never loses its patience and is okay doing that like a hundred times, that's great. Because
then once they come back to me and say, okay, look, this is what it came up with, what do you
think?
Now we can have a deeper conversation. I'm like, okay, well, where are you thinking in terms of
a career at this point in your life? You know, what are you trying to do? Where are your gifts and
talents? How are you feeling called to share them? So you have a much more robust and deep
conversation with that student, and we can accompany them as we should be as advisors, rather than having to talk about all the mundane repetitious stuff.
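A small sketch of the repetitive part Dr. Pak describes: checking completed courses against degree requirements so the human conversation can start where the mechanical check ends. The course codes and requirement buckets are made up:

```python
# Hypothetical degree requirements, grouped into buckets, and one
# student's completed courses. All codes are invented.
requirements = {
    "core marketing": {"MKT301", "MKT305"},
    "quantitative":   {"STA201"},
    "capstone":       {"BUS495"},
}
completed = {"MKT301", "STA201", "ENG101"}

# The mechanical check: what is fulfilled, what is still missing.
for bucket, needed in requirements.items():
    missing = needed - completed
    status = "fulfilled" if not missing else f"still needs {sorted(missing)}"
    print(f"{bucket}: {status}")
```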
[00:24:45] Jeff Dillion: It sounds like many other use cases, where it can get you the first 30% in quicker than trying to fumble around with, do I even have the right prerequisites to get in here? You know, filter out the easy stuff first.
[00:24:57] Dr Pak: Yeah.
[00:24:58] Jeff Dillion: You worked in corporate and academic settings.
What could higher ed learn from the private sector right now?
[00:25:06] Dr Pak: Oh, so many things. I would say probably, if I had to just say one thing, and this is something that I've remembered from some of the best teams that I've worked with, it's adopting a living lab mindset, you know, using our own spaces to engage in design thinking processes. Right. So MIT does this, I know that a university in Singapore does this, and they're doing it in Finland too.
So, you know, originally the living lab was for sustainability and for transportation, and they would use their campuses as living labs to figure out some problems, then see if they could port that out to the local community. Right.
I think, in terms of looking at how the private sector does what it does and accelerates the innovation production cycle, campuses are really resource scarce. You have a lot of resource scarcity. Right. If you could turn to a living lab mindset, then you go from a linear knowledge economy, which is highly wasteful, to a more circular knowledge economy, which then engages everyone on campus. So I think that's probably one of my favorite things that I found working in corporate, and how quickly we can respond when we're doing that on the ground.
[00:26:16] Jeff Dillion: Yeah, I like that one. What do you think is the most underrated use case
for AI in a university setting that schools are overlooking?
[00:26:24] Dr Pak: Okay, this is also just from an IT perspective, because it drives me nuts: framing design problems across the university environment, and recognizing that you can use something like an AI enabled assistant to go across the massive amounts of data that are stored in all the local drives and shared drives and come up with a way of coherently connecting those things. I think a lot of what we miss, especially at the university level, is the tacit knowledge that's embedded in these local pools. Very often it's on local drives, or maybe there's a shared drive, but it's only got a few users that can see it. And a lot of them are actually pointing at the same problem, but they're not talking to each other, because the functions are siloed.
I think that's potentially a very interesting use case, assuming that the data is clean. That's a big assumption. But using and accessing all of that data that's so scattered all over the university setting and siloed, and developing a more coherent picture and finding the common patterns there.
[00:27:26] Jeff Dillion: Yeah, I like that perspective. I have one last question for you.
If you had a magic wand and could implement one AI driven solution at every university
tomorrow, what would it be?
[00:27:36] Dr Pak: So I'm going to start with the two words that I have for a typical university
bureaucracy, and that is a constipated dinosaur.
[00:27:46] Jeff Dillion: We may now have the intro for our podcast here.
[00:27:50] Dr Pak: I mean, I think that a lot of us who are in higher ed know what this means.
You know, processes get stopped up and it's just, it's just crazy, like how much goes wrong and
stalls, you know, within the constipated dinosaurs. If I had a magic wand to implement one AI driven solution at every university, it would be something to transform those constipated dinosaurs into agile, whip-smart, adaptive service providers that not only anticipate the road ahead, but help their users get to their destination the most effective way. Not just the students,
but the faculty and the staff, you know, and the volunteers and whatnot. I mean, I think the first
thing that goes in a lot of these resource scarce kind of scenarios is that professional
development for the staff just goes out the window. But they're kind of like the glue that holds
everything together. And then when you lose talent like that, especially those who have long
institutional kind of knowledge, you lose a big thing there. Right? So you want to be able to develop the talent well, and I think there's a way to do that, especially if you're engaging
something with AI and you can actually collect a lot of those insights and needs and then find a
more effective way to deliver. So streamlining processes and just meeting the needs of all the
users on campus.
[00:29:06] Jeff Dillion: Well, let's hope we can help the constipated dinosaurs out there. And I'm going to let you take off to whatever you need to do next here. Thank you for being on our show, Dr. Cabrini Pak. I'll put a link to your profile in the show notes in case anyone has questions for Dr. Pak. Thanks again.
As we wrap up this episode, remember: EdTech Connect is your trusted companion on your journey to enhance education through technology.
Whether you're looking to spark student engagement, refine edtech implementation strategies,
or stay ahead of the curve in emerging technologies, EdTech Connect brings you the insights you need. Be sure to subscribe on your favorite podcast platform so you never miss an
insights you need. Be sure to subscribe on your favorite podcast platform so you never miss an
inspiring and informative episode. And while you're there, please leave us a review. Your
feedback fuels us to keep bringing you valuable content. For even more resources and
connections, head over to edtechconnect.com, your hub for edtech reviews, trends and
solutions. Until next time, thanks for tuning in.