Episode Transcript
[00:00:00] France Hoang: I want an educational system where people have better results because
they work for it, right, and because they've earned it, and not simply because they have access to
a paid AI platform. And so I think providing equitable access to the same set of AI tools across
your student body is something that you can do to open up doors of opportunity. And because of
our infrastructure, we enable institutions that don't have big budgets to be able to do that.
[00:00:29] Jeff Dillon: Welcome to another episode of the EdTech podcast where we explore the
intersection of technology, learning and innovation. Today's guest, France Hoang, brings a
remarkable blend of public service, entrepreneurial grit, and a visionary approach to education.
France is the co-founder and CEO of Boodlebox, a collaborative AI platform designed to
empower lifelong learning from classroom to career.
With more than 10,000 users across 650 colleges, Boodlebox is reshaping how students and
educators use AI to learn, collaborate and thrive. France's journey spans military service, the
legal field, national security, and launching successful companies that have generated over $600
million in revenue. He served as the Associate White House Counsel under President George
W. Bush, deployed with U.S. Army Special Forces, and helped coordinate humanitarian
efforts during the Afghanistan withdrawal. A graduate of West Point and Georgetown Law,
France has combined his expertise in leadership, law and technology to drive innovation in
education.
Today, he's on a mission to create the world's most human centric AI ecosystem for learning and
work. Let's dive in and hear his perspective on the evolving role of AI in education and how
institutions can prepare for what's next.
France, it is great to have you today. Thanks for being on the show.
[00:02:01] France Hoang: It's great to be here. Thanks for having me.
[00:02:02] Jeff Dillon: So before we dive into the nuts and bolts of edtech, I would love our
listeners to understand the human story behind your work. Higher ed is driven by people with a
deep passion for improving education, not just building software. So what was really the
inspiration? Was there a challenging classroom experience or a moment of profound learning, or
did you realize there was an unmet need? Those are some things I hear sometimes. So tell me
what first inspired you to enter the edtech space and how that led you to where you are today?
[00:02:35] France Hoang: Yeah, let me rewind a little. My entire life, really: I'm a refugee from
Vietnam. I was evacuated from Saigon when I was two years old, in April of 1975. I grew up in
Washington State, ended up serving in the military, going to West Point.
Then after military service had a wonderful opportunity to get a master's degree and then
eventually made my way to law school and served again in the White House, and then again in
the military. And then I've been very fortunate to be a founder of several different companies. A
law firm, an aerospace company, and now Boodlebox. And so I've had just an incredible array of
opportunities, none of which would have been possible without education. So I have a deep love
of education.
I've served as a trustee of a university, became and remain a distinguished visiting lecturer at
West Point. So from the education side, I've done everything from "Who should we pick as our
next president?" to "Open your textbooks to page 43." And so Boodlebox, my current endeavor,
brings together this love of education, this desire to serve, and this fascination with technology
and the ways that it can help. And frankly, you know, some concerns I have about the ways it
might hinder.
[00:03:44] Jeff Dillon: Yeah, tell me about some of those concerns about how it might hinder.
[00:03:49] France Hoang: Yeah, I mean, obviously, there's this thing, right? I think we all heard it
called AI, that happened three years ago, that had a little bit of an effect on the classroom. I
think, Jeff, you and I, we came from a generation that, you know, was probably taught very
similarly to how people were taught 100 years ago. And up until three years ago, we're given
a piece of information, we have to wrestle with it, engage in productive struggle, and then we're
assessed on it in some way, asked to demonstrate our comprehension and critical thinking. We
produce some sort of artifact of learning. A quiz, an essay, a project, and that's how we
demonstrate it. And then we receive feedback on it. And we do enough of that, and we get a
grade, and we do enough of that, and we get a degree, and then we trade that for a job.
Well, all of that has been disrupted by AI, right? Because the very artifact that we could use
to assess students as an educator and to show our comprehension and critical thinking as a
student can now be produced by AI.
And so there's a danger that students are going to rely on that and not develop critical thinking
and collaboration and communication and a whole host of skills that were always the byproduct
of education, but perhaps were actually the most important thing you got from education, right?
Those durable human skills. For educators, I think there's a danger that we overreact, right?
And we try to turn back the clock to 2021 and say, gosh, this is anathema to how we teach, and
so we just need to go back to the way things were, and AI is going to undermine critical thinking
and creativity and all those things. I think the path forward is a different one.
Right. How do we enable human beings? How do we be human first in an age of AI? And how
do we use AI to actually develop the critical AI readiness that we want our students to have in
order to function and thrive in today's world?
[00:05:36] Jeff Dillon: Yeah, you've really built Boodlebox, the way I see it, around a powerful
premise: that AI isn't just a tool for students in the classroom, but a lifelong partner for
professionals. Given how rapidly the workforce is changing, this vision of moving AI from
academic experiment to career necessity is incredibly timely. Thinking about that journey from
the classroom to a 40-year career, what is the core mission of Boodlebox?
[00:06:06] France Hoang: Yeah, we want to provide a partnership, right, with educators to prepare
students for lifelong learning and work with AI. And so that involves teaching with AI, that
involves teaching about AI, and it involves partnering with educators who are on the front lines of
figuring out how humans and AI are going to interact and what that relationship is going to be. I'll
tell you, Jeff, I remain deeply concerned about getting that relationship right. Every time we have
technology, we wrestle with how we integrate it. Sometimes we get it right. You and I are now
talking on a platform to do this. Thirty years ago, I'd have to fly out to your studio, we'd have to sit
and do this in person. It's wonderful that we can do this right in between two other Zoom
meetings in the day, probably.
There's other technology we didn't get right.
Social media is probably one we got pretty wrong. So I want to get that relationship right. And I
think if we start with where tomorrow's workers are today, which is the classroom, and we get it
right for students, they will take it and get it right in their workplaces, in their homes, in their
communities. And so that's my long-term vision, Jeff: to enable us to collaborate with AI in
responsible ways that make us better as humans, right? Ways that extend and amplify us, not
diminish us.
[00:07:22] Jeff Dillon: Yeah.
You know, I think this discussion around AI in education, it often centers around fancy features
like automated grading, content generation.
And these are powerful, but they risk focusing on automation over impact.
And that's what I love about what you're doing. I think the real question for many educators is
does the tool genuinely improve the core human centered experience of teaching and learning?
So it shifts the focus from technical capability to really, like, an ethical alignment, I think. And
that's what I was really thinking about when I wanted to get you on the show. But my
question for you next is what roles do you think faculty and staff play in making AI adoption
successful on campus? What pitfalls have you seen that are common?
[00:08:14] France Hoang: Yeah, there's a very common concern, right? Like, am I going to be
replaced by AI? Is what I'm doing as an educator now irrelevant in the era of AI, right? I used to
have to teach my students in order for them to have the answer. Now they can ask AI to
generate the answer. I think there are three critical components to being what I call AI ready.
The first is domain expertise. You have to be better than the AI. As we know, AI hallucinates. It
makes all of us a B-minus in today's world at everything, but only a B-minus.
To use AI well, you have to know more than the AI. There is only one place to get that, right?
That is through education. I guess you can get it through experience as well. But education and
experience combined, right, are the way to create domain expertise. And guess what? You
need an educator to do that. So educators are actually more important than ever, not less
important, because it's the only place you can learn to be better than the AI.
So for that reason alone, educators are not going away. Education's not going away. What we do
as educators is more important.
Second, I think this is the part that is kind of tricky, right? And it does cause concern.
We have to teach our students to be AI enabled. And what I mean by that is knowing when and
how to use AI responsibly. The tricky part is, look, I love being in the front of the classroom. I
love having all the answers, being the sage on the stage.
But with AI, it's changing so fast. We have to be willing to experiment and fail and make
mistakes and have these very candid discussions with our students. And so we have to learn
alongside them. And I know there are educators who aren't comfortable with that. Like, I haven't
figured this out for myself. How can I teach students? Well, I think there's a new mode here.
We're going to learn alongside our students about AI. But again, it requires domain expertise to
know when and how to use AI responsibly. Those two things combined can only come from
education.
And then lastly, we've touched on it a couple times now. I think the things that make us uniquely
human are more, not less important now, right? Those durable skills of communication,
collaboration and critical thinking and creativity. And those things, I think can be enhanced by
this human-AI collaboration. I think a lot of what we do as knowledge workers, right, 80% is rote.
It's automatable, right? It is the stuff that's the drudgery. And then 20%, we get to be creative
and use our judgment and think critically. Well, what if we could reverse that ratio? What if,
using AI, you spend 20% of your time on the drudgery and 80% of your time on things that
require real human added value? I think that's the promise of AI. But those are the three things
educators can help to create: domain expertise, AI enablement, and more excellent human
beings.
[00:10:51] Jeff Dillon: Yeah, I love that. Those three pillars.
And now a word from our sponsor.
AD: Where can you find a consultant who really knows your challenges?
With decades of campus executive experience, the Mackey Strategies senior team provides
interim leadership, presidential and vice-presidential counsel, crisis communications, and
technology guidance built on real-world success. Mackey Strategies: expertise when it matters
most.
[00:11:26] Jeff Dillon: The promise of AI, I think, is often measured in abstract terms.
But for educators and platform developers, I think the true success lies in observable changes:
how students approach a task, the depth of their final product, and their level of engagement.
Are they just using it for shortcuts, or is it fundamentally altering their learning behaviors?
So especially when dealing with a secure, governed environment like Boodlebox, you have a
unique advantage in this shift. So my question, I guess, is from your experience, what are those
specific student behaviors or learning outcomes that shift when they have access to tools like
Boodlebox?
[00:12:08] France Hoang: Yeah. So we as educators know that learning comes from productive
struggle. The conundrum is that these tools are designed for productivity, the opposite: they
don't want you to struggle, so they make things as easy as possible. So in order to teach
students with and about AI, we need to reintroduce struggle into the use of these tools. And so
Boodlebox creates an environment where you can do that. You can put guardrails around the
student use, you can create learning environments where the students will only use the AI the
way you allow them to use it, and it's all transparent, so you can see how the students are
struggling. That's important because we live in an era where process is just as important as
product. In some cases, maybe even more important. It's not what you produce, it's how you
produce it.
And so what that allows you to do now is to see how students use AI. And are they using it, as
you mentioned, Jeff, only as a tool and outsourcing critical thinking, or are they using it as a
thought partner where they're engaging with it in ways that deepen and amplify their critical
thinking? That's got to be taught to them. In order to teach it, we need to be able to see it, right?
And we need to create the learning environments. And so Boodlebox enables that. And then we
see, you know, we work with over 80 institutions now, many of which have done case studies
and pilots and measured effects. In one particular instance, Pikes Peak State College used it
throughout a set of courses for English composition.
90% of the students felt that this was a more ethical way to use AI. After even one session, 87%
of students felt more comfortable with AI prompting than they did before using Boodlebox. And
83% of the students actually felt more comfortable with this sort of collaborative AI platform than
with the other free tools that are out there. And the professor had zero academic integrity
incidents throughout the entire course. And the work product that was turned in was among the
best they'd ever seen since they started teaching the course.
[00:14:07] Jeff Dillon: Wow. That's some great testament to what you're doing. You know, the
term AI readiness is, I think, much broader than simply purchasing AI software or adopting a
new tool for a college or university. It represents a fundamental, institution-wide transformation
that touches every pillar of campus life: curriculum, governance, IT infrastructure. And since it's
about creating an ecosystem where AI can be used safely, you know, and effectively and
ethically, it requires thoughtful alignment across all the different areas, from the classroom to
administration, all the way down to the student. What does AI readiness truly mean to a
campus? How does a campus most effectively begin that journey?
[00:14:50] France Hoang: Yeah, I will tell you what it doesn't mean, Jeff. Right? It doesn't mean
simply providing access to AI. Students have more access than they know what to do with
already. Right? Like there's a bunch of free tools out there.
Every AI company is allowing students some sort of access. So it's not simply a matter of
accessing the technology. That's number one. Now, as a CIO, you may say, well, look, I get that.
I want something that's secure and private and FERPA-compliant, and that's what AI readiness
means. And that's certainly a component. We want something that is FERPA-compliant,
doesn't train on your data, and is secure, and certainly a solution needs to encompass that,
right? The security and privacy portion. That's a piece of the equation. But ultimately
we're talking not about technology, we're talking about transforming an institution. This is a
people problem, not a platform problem. And so a number of institutions have turned on AI or
made it available and not seen the adoption, not seen the returns, and not seen the impact, as
you mentioned, that they expected, because they treated it as a technology problem.
And so that's why we really take seriously this idea of a partnership. You know, we have a
number of professors that we work with who have successfully implemented AI infrastructure
across their organizations, created the environment where professors are learning and
experimenting alongside students, measuring impact, and producing real, demonstrable
impacts on learning. That's a whole process. And it starts with, yes, having a safe, secure,
private platform.
But that's only a start, it's certainly not an end. And I think it's also going to be different from
institution to institution. The needs of a two-year community college are going to be different
than the needs of a 60,000-person state school or a graduate school. And so we are very
blessed to be able to work with hundreds of educators across hundreds of schools and to see
all the different ways they do it. So I think this is also a moment to let a thousand flowers bloom,
Jeff, and have lots of
experimentation and lots of failure. And that's okay. That's absolutely okay.
[00:16:49] Jeff Dillon: In today's environment, for a long time there was a clear separation.
Academic projects stayed in the lab. Industry applications operated under different constraints of
security and scale and proprietary data. Now, with the rapid adoption of AI in every sector, that
gap is becoming a real chasm. Colleges have to move beyond teaching the theory of AI to
providing governed, hands-on experiences that really mirror the complexity of the professional
world.
So, you know, that requires rethinking everything, all these tools. How does Boodlebox
specifically help bridge the gap between academic use and real-world workforce applications?
[00:17:31] France Hoang: Let me answer that by telling a story. So there's a professor out at Point
Loma University. She teaches marketing and she uses Boodlebox. And she starts with a
traditional lecture style classroom, but then she requires every one of her students to put their
lecture notes into an AI assistant that they build inside Boodlebox. So every student, as they
progress in the class, has to train this quote-unquote assistant with their notes from class. And
then the assessments are not quizzes and they're not essays. They are real-world case studies
and real-world problems. And the student is required to collaborate with the AI system they
created to solve that real-world problem.
[00:18:10] France Hoang: And then as the semester goes along, not only do they do that one on
one, they do that in groups. So groups of students with multiple AI systems they created are all
collaborating to solve real-world problems. And they do that all inside Boodlebox in a way that's
transparent and observable. How much better prepared is that student for this AI-enabled world
than a student who takes a traditional marketing class?
[00:18:31] Jeff Dillon: Right, great story. There's a reality here, and you mentioned different
types of schools: small community colleges, all the way up to the big state schools. Most
colleges and universities don't have the deep pockets or large central IT teams that major
research universities have. For them, you know, AI adoption cannot be about a massive
top-down overhaul. It has to be very strategic and focused and high-impact. Less about building
a complex system, more about finding the one quick win that frees up staff time.
So based on your work with institutions of all sizes, what advice would you give to a smaller
institution or department with tight budgets and limited staff looking to pilot AI tools?
[00:19:16] France Hoang: Yeah, I would start with a pilot, right, like build on success.
There are schools that have kind of rolled out AI campus-wide, sight unseen. And I think in
almost every case that I've read of, there's been pushback from the faculty. Adoption has not
been great.
You know, it's a solution in search of a problem. Right. As opposed to partnering with a company
like ours that's responsive. You know, starting with pilots, figuring out the use cases, getting a
group of pioneers who know and understand the use cases, who can then teach their peers.
Peer-to-peer learning is really important here, and then organically it grows over time. What
we've seen is
adoption takes longer than any institution expects. You know, we talk to institutions, and they
always expect that they'll turn this on and have 100% of students and faculty using it from day
one. That's not the reality. The reality is it's going to take, you know, we talk about a
two-to-three-year rollout to get AI adoption at an institution in teaching and learning, right, in an
embedded way. Obviously students are using it left and right, at near 100%. But that's not the
kind of adoption I'm talking about. I'm talking about integration into teaching and learning in a
thoughtful way. One of the things we've done to enable this, you know, you mentioned budgets,
Jeff: we have figured out a way at Boodlebox to generate responses from large language
models using up to 96% fewer tokens. So that's up to 96% less energy cost, 96% less
environmental impact, and 96% less cost to us. And so we're able to provide a solution that is
affordable across a much wider range of institutions.
And that's deeply satisfying to me. One of the things I'm concerned about is equitable access to
AI. There are going to be some professors listening to this, and they're going to understand this
story because they've lived it. If you allow AI in the classroom, you very quickly will find that
there's a group of students who have better outcomes than other students. And it turns out it's
because they have access to better AI. They have the resources to pay for AI and they're paying
for better results.
Well, I want an educational system where people have better results because they work for it.
Right. And because they've earned it, and not simply because they have access to a paid AI
platform. And so I think providing equitable access to the same set of AI tools across your
student body is something that you can do to open up doors of opportunity. And because of our
infrastructure, we enable institutions that don't have big budgets to be able to do that.
[00:21:43] Jeff Dillon: Right, right. My thought is that a really successful approach treats AI not
as a new priority, but as a force multiplier for existing ones, aligning AI adoption with core
institutional goals, like retention or efficiency, to ensure it supports rather than competes with
these other projects.
So, you know, use AI to automate administrative tasks, free up staff for higher-value work.
Right? I think we can all understand that. But how can an institution align their broader edtech
strategy with AI adoption without derailing these other priorities?
[00:22:21] France Hoang: Yeah, it's a great question, Jeff. I think AI should be driven by particular
use cases. AI is not an end in itself, right? The goal isn't to use AI. You know, the goal is to do
whatever you were doing before and do it in a way that is more efficient, more effective, you
know, freeing up time so you can spend more time on high-value, human-centered tasks. And
so the key is to identify particular use cases, you mentioned some, and then to figure out how AI
fits into those.
And then there's clearly a return on investment, right? If something that used to take me five
hours now takes one, or if I produce twice as good a result, then, oh, that's what I should use AI
for. That's how you demonstrate a clear return on this investment.
So I'm an early adopter. I love new technology like I love to play around with AI. But most people
are not early adopters. Right. And most people are busy and most people don't play with things
just to play with them. Right. And so I think those of us on the early adopter train should realize
that most people are going to adopt AI when there's a need to and. Right. And desire to in the
past, I.
[00:23:30] Jeff Dillon: Think ed tech success was often measured by simple metrics like tool
usage rate. But with AI we need to look beyond mere efficiency too, toward the deeper human
centered outcomes. And it's like balancing the quantitative data like retention rates with
qualitative indicators that measure things like critical thinking and student agency and equity. I
would say, how should leaders measure success when implementing AI in learning? What
metrics or qualitative indicators matter the most?
[00:24:02] France Hoang: Yeah, I think again, it's going to depend on the institution. Right. Every
institution has a different mission, has a set of values. I would be bold, right. How does this
contribute to our organizational mission? How does this support our organizational values? What
are the qualitative and quantitative ways of doing that? You know, we are used by 80 institutions,
we have pilots all over the place. I think no two institutions measure the same, frankly.
There are some basic metrics, right? Like, oh gosh, how often do students log in, right? And how
many words do they generate? That's just measuring output, right? That's not necessarily
measuring impact.
Some of the greatest outcomes are measured qualitatively, through talking to the students,
right? And seeing how their attitudes changed or the opportunities they create. Going back
to Point Loma, right. We had an initial group of students that used the platform.
We got a story later on about a student who was interviewing for a job, and she could tell the
interview wasn't going well, because they were, you know, basically signaling to her that she
wasn't a fit.
And so she just kind of threw a Hail Mary and said, oh, by the way, I'd love to share with you
what I was doing in my marketing class with AI, and then actually showed them the bot she
created, and it totally turned the conversation around. Like, you know, they asked all these
questions and she ended up getting the job. And so that's awesome. Yeah.
So I think, yes, there are quantitative things you've got to measure on an edtech platform. But I
think, again, if we're going to be human-centric, we also have to measure the human impact,
and those things are harder to measure just by looking at usage.
[00:25:33] Jeff Dillon: Well, we've talked about AI solving problems today, but the real
excitement lies in tomorrow, where AI moves beyond automating existing tasks and starts
enabling fundamentally new forms of human potential. And I think that means AI acting as a true
intellectual co-pilot, helping students discover new knowledge and acquire future-proof skills. So
to wrap it up, looking ahead to the next few years, what excites you the most about the future of
AI in education? And what one actionable step can a listener take today to get started?
[00:26:04] France Hoang: Yeah, I believe the future of education, in some ways, is going to be
personalized, it's going to be differentiated, and it's going to be portable. And so what do I mean
by those things? Personalized means it's going to be personalized for you as a learner, right?
We have the ability to personalize at scale, which is kind of an interesting conundrum if you
think about it intellectually, using AI. Now, the professor should do that for the student, right?
And the student should do it for themselves as well. But AI is capable of that on a scale we've
never had available to us, right? It's like having a TA assigned to every student. What could you
do if you did that, right? I'm not saying we're going to have a two-sigma effect for every single
student, but maybe we're moving in that direction. Differentiated, because
differentiated education means that how I use AI and how I learn should be different for
somebody who's in chemistry than someone who's in philosophy. Again, AI, because it's so
flexible, allows us to differentiate our education in ways that we hadn't thought about before.
What's more interesting: reading about the conference at Yalta, or being a participant in an AI
simulation, taking on the role of Churchill or Stalin or Roosevelt, having to live through those
multiple days and make decisions? And then, ultimately, portable. You know, we have this vision
of
students showing up on the first day of college, getting access to Boodlebox, and spending the
next four years filling it with every assignment and every chat and every conversation and every
collaboration and every custom bot they build in any class project, and when they graduate,
being able to take it with them. So imagine the sum of your college academic experience now in
an AI toolkit that you can access and use for the rest of your life.
[00:27:38] Jeff Dillon: Yeah. Yeah, I love that. Well, France, thank you so much for sharing your
journey and your incredible insights on Boodlebox and the future of AI in education. It's clear
that your passion is what drives this mission forward. And I appreciate you taking the time to
join us today. I will put links to Boodlebox in the show notes, as well as France's LinkedIn. And
to all those listeners out there all over the world, thanks for tuning in. We'll catch you next time.
Bye.
As we wrap up this episode, remember EdTech Connect is your trusted companion on your
journey to enhance education through technology.
Whether you're looking to spark student engagement, refine edtech implementation strategies,
or stay ahead of the curve in emerging technologies, EdTech Connect brings you the insights
you need. Be sure to subscribe on your favorite podcast platform so you never miss an inspiring
and informative episode. And while you're there, please leave us a review; your feedback fuels
us to keep bringing you valuable content. For even more resources and connections, head over
to edtechconnect.com, your hub for edtech reviews, trends, and solutions. Until next time,
thanks for tuning in.