
On Episode 33 of Enterprise AI Defenders, hosts Evan Reiser (CEO and co-founder, Abnormal AI) and Mike Britton (CIO, Abnormal AI) sit down with Lester Godsey, Chief Information Security Officer at Arizona State University, to discuss how ASU is building an ambitious, campus-wide AI strategy. With more than 200,000 students, ASU has deployed an in-house platform supporting 60+ large language models and has given all faculty, staff, and students access to ChatGPT. Godsey outlines ASU’s strong governance framework, proactive security controls, and threat modeling to address risks such as prompt injection and insider misuse, while highlighting student-driven innovation through hackathons and grants that promote responsible AI experimentation in cybersecurity.
Quick hits from Lester:
On AI threat acceleration: "It’s not net new attacks, we’re just seeing them executed faster, more effectively. The deepfakes in 2024 aren’t funny anymore."
On internal innovation: "We built our own platform supporting over 60 large language models, with walled garden controls and ethical guardrails."
On AI’s future impact: “We’re training a model to ingest messy threat intel from all sources and separate the good from the bad. That’s how small teams can finally take action with confidence.”
Recent Book Recommendation: It's Your Ship: Management Techniques from the Best Damn Ship in the Navy by D. Michael Abrashoff
Evan Reiser: Hi there and welcome to Enterprise AI Defenders, a show that highlights how enterprise security leaders are using innovative technologies to stop the most sophisticated cyber attacks. In each episode, Fortune 500 CISOs share how AI has changed the threat landscape, real-world examples of modern attacks, and the role AI will play in the future of cybersecurity. I’m Evan Reiser, the founder and CEO of Abnormal AI, and I’m joined by…
Mike Britton: …and I’m Mike Britton, the CIO of Abnormal AI. Today on the show, we’re bringing you a conversation with Lester Godsey, Chief Information Security Officer at Arizona State University. ASU is one of the largest public universities in the U.S., with over 200,000 students and a rapidly evolving AI research and innovation ecosystem.
There were three interesting things that stood out to me in the conversation with Lester.
First, Lester’s team built an in-house platform that supports over 60 large language models, all with built-in privacy, security, and ethical controls. It’s a walled-garden approach that lets students and researchers experiment safely while protecting institutional trust.
Second, the threat landscape in higher education is evolving fast, from AI-accelerated phishing campaigns to prompt-injection attacks. Lester’s team is combining cyber, physical, and human risk management into one unified governance approach.
And finally, Lester sees AI as a democratizing force in cybersecurity. His team is piloting an AI model that ingests and normalizes threat-intelligence data, allowing smaller institutions to access high-quality insights without six- or seven-figure vendor contracts.
Evan: Lester, first of all, thank you so much for joining us. Mike and I are looking forward to the conversation. Maybe to start, do you mind giving our audience a little bit of background about yourself and an overview of your role at ASU today?
Lester Godsey: Yeah, absolutely. So, I’m Lester Godsey. I’ve been in my role for about 10 months now as the CISO for Arizona State University. And by way of context, ASU is the largest public university by student enrollment. The last number I heard earlier this week was right around 200,000 students.
Me personally, I’ve been primarily in public-sector IT support—a combination of higher education and local government—for the last 30-plus years. Prior to ASU, I was the CISO at Maricopa County, and a lot of people know me based on that experience and my work dealing with the 2020 and 2024 presidential elections, if you will. That was very formative—we’ll use the word formative. So, a lot of public-sector experience.
Evan: Do you mind sharing a little bit about what inspires you—what motivates you? Why do you sign up for that hard work? There must be some driving force there. I’d love for you to share that with our audience.
Lester: I have roots in higher education. I’ve also taught on and off for the better part of 15 years at the collegiate level. A lot of it actually has to do with the charter of ASU. President Crow, who’s been in the role for over 20 years here at ASU—our focus is that we’re defined by who we include and who we serve as a community. We’re of the opinion that higher education has a role to play in society, and that culture really is what motivates me to be in this role. And as you said, Evan, it is very challenging.
From a compliance perspective, we have criminal justice information; we have FERPA, which is student-related data. We have an international presence at ASU, so we deal with GDPR and other compliance standards; take your pick—HIPAA—all the acronyms more or less apply to us when it comes to compliance. It’s a very complicated, large, and heavy responsibility, but the mission outweighs all that in my opinion. Being part of an organization focused on having a societal impact—that’s huge.
Evan: Do you mind sharing what’s at stake when you think about cybersecurity? It’s not just making the network secure or making people look at phishing emails. There’s a bigger impact on the greater community—maybe even the world—by helping protect your students, faculty, and alumni. Why is cybersecurity so important for you and the team?
Lester: I think what cybersecurity folks across the board—whether you’re in higher ed or another industry—are seeing is that there’s less delineation between “this is a cyber risk” versus “this is a physical risk.” It’s all one and the same. So cybersecurity professionals—our jobs were already hard enough—but now we have to contend with filtering and identifying multiple disparate types of risk and making sure the right folks within our organization get that information so they can act in an intelligent and timely manner.
Whether it’s student safety or intellectual property—ASU being an R1 research institution, we’re very concerned about data security—but then everything in between, including the well-being and status of our faculty, staff, and students. Everything’s fair game from a cyber perspective these days.
Mike: You mentioned research. I imagine ASU is absolutely a hotbed of experimentation and research when it comes to AI and AI tools—and I imagine students use it both for good and bad. How is ASU approaching the governance and safe adoption of generative AI while still encouraging innovation?
Lester: We’re arguably leading the way when it comes to AI adoption. That includes adopting third-party tools—for example, we just announced we have an agreement with OpenAI to give access to all faculty, staff, and students—ChatGPT Edu, I believe. That’s one example.
But internally, ASU has developed its own platform in-house. We currently support—I think the last number I saw was—over 60 large language models, both private and commercial. We’ve developed this walled-garden platform internally with our own privacy, security, and ethical controls for that platform.
We’re also in the process of standing up our governance structure. By the time this airs, I believe we’ll have that already in place. We have an actionable framework that’s been developed and we’re adhering to it, and from a governance perspective, we have that in place as well. On top of that, we’re taking a playbook from our security program and doing a tabletop exercise later this semester, specifically to ensure that from an operational perspective we know how to address things that come up as a result of AI use as it relates to security and privacy.
In a lot of ways, I’d say the adoption of AI—as big and pivotal as it is—is no different than adopting any other technology. From an overall security perspective, you still have the basics you do programmatically. Whether it was cloud adoption or virtualization or anything else—I’m old enough to remember when the internet was becoming a thing. We take those steps and do our due diligence.
Mike: I’d love to hear what you’re seeing about how bad actors are leveraging AI and how they’re attacking students as victims. Sometimes, unfortunately, you may have students doing things they shouldn’t from a negative perspective as well. First, how do you and your team see the threat landscape with attackers targeting students, and how do you balance the differing approach between protecting students versus protecting faculty and other internal resources?
Lester: From my perspective, our big concern is prompt injection in particular. The vast majority of AI use has been improving the efficiency and speed of attacks—not necessarily creating net-new attack types—so AI is being leveraged to be more efficient and effective.
An example from my last job: in 2020, we saw a bunch of deepfakes, but the quality then wasn’t believable—you could laugh it off. In 2024, nobody was laughing. The things employed against us at Maricopa County were significantly better, a lot of it due to AI—phishing, etc. Prompt injection is definitely something net-new, unique to the use of generative AI. I liken it to a new flavor of SQL injection.
As to the latter part of your question, insider threat has always been there. For listeners, insider threats aren’t all malicious; they can be accidental or intentional. It’s definitely something we’re looking at. Part of our concern, too: I’ve seen stories about people using AI in lieu of counseling—as something to connect with. So our concerns aren’t just traditional cybersecurity. What about mental health? Well-being? Physical safety? Those are newer, more unique concerns as a result of AI.
Mike: How do you manage this ever-connected world when it comes to attackers? Arizona State is probably, for a variety of reasons, on some attackers’ lists. How do you deal with the complexity of such an interconnected world—everything driven by SaaS—where one system connects to another? How do you untangle that complexity?
Lester: A couple things come to mind. Many risks associated with AI are no different from other risks we’ve had for decades. Probably the most obvious one is treating data as an asset and data classification. We tackle that with fundamentals.
For example, we have a vendor IT risk-assessment process as we onboard new technology—third party, fourth party, SaaS. We have a checklist associated with any new technology, and that checklist reflects the use of AI. That’s one of our first security controls. It’s not about technology—it’s ensuring our existing business processes account for AI use. Then it trickles down to intended use-cases and when it’s appropriate and advisable to use our internally developed walled-garden Create AI platform versus third-party SaaS products like ChatGPT or Gemini.
Underneath all that, it boils down to risk. We articulate our position: if you have sensitive or highly sensitive data per our classification standard, you can’t use third-party AI solutions and upload that information. That’s foundational and existed before this accelerated AI adoption—sticking to the basics and doing them well. Where it makes sense, apply the same logic and compensating or mitigating controls to AI. And in rare instances—prompt injection, model poisoning, drift—you develop new processes. That’s where governance comes in—having an intelligent conversation so the organization understands the risks and everyone’s part of the solution.
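As an illustration of the kind of classification gate Lester describes, the sketch below routes an AI request to a third-party model only when the data classification allows it. The labels, routing helper, and policy thresholds are hypothetical stand-ins for example purposes, not ASU's actual controls.

```python
from enum import Enum

class DataClass(Enum):
    PUBLIC = 1
    INTERNAL = 2
    SENSITIVE = 3
    HIGHLY_SENSITIVE = 4

# Hypothetical policy: only public or internal data may be sent to third-party
# AI tools; anything sensitive stays on the internal walled-garden platform.
THIRD_PARTY_ALLOWED = {DataClass.PUBLIC, DataClass.INTERNAL}

def route_ai_request(prompt: str, classification: DataClass) -> str:
    """Decide where a prompt may be processed based on its data classification."""
    if classification in THIRD_PARTY_ALLOWED:
        return "third_party"        # e.g., a commercial SaaS model
    return "internal_platform"      # walled-garden deployment with local controls

if __name__ == "__main__":
    print(route_ai_request("Summarize this press release", DataClass.PUBLIC))
    print(route_ai_request("Summarize this student record", DataClass.HIGHLY_SENSITIVE))
```

The point is not the code itself but that the decision happens in the business process before any model is called, which mirrors the vendor-checklist approach Lester outlines above.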
Evan: What do you think the role of AI will be in the future of cyber defense? Are there examples where you and your team have seen success implementing AI solutions? Any areas that feel overhyped? Areas you’re excited about? How do you think about where AI can play a role?
Lester: I think AI can democratize cybersecurity. Take threat intelligence: today, unless there’s a good open-source or free solution—say an ISAC—real-time, highly curated, high-fidelity threat intel usually costs six or seven figures with major vendors. AI has the potential to change that, especially because small and medium businesses are a big societal risk.
The technology exists today to automate a lot of security operations. One thing stopping that is trust—trust in the information you’re automating against. I’ve received IOCs where, had I auto-blacklisted them, I would have blocked Google’s DNS IP and been looking for a new job. A lot of that comes down to threat-intel quality.
We have a small internal project training a model to ingest multiple disparate sources of threat intel—various IOCs in different formats. We’re not just looking at good intel; we’re purposely looking for bad, too—trying to create a model that can consolidate disparate sources with different fidelity levels in a way we can machine-ingest, then identify with a high degree of fidelity that the intel is good. That way, small and medium businesses can take action.
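To make that pipeline concrete, here is a minimal sketch of one way to normalize indicators from disparate feeds, score their fidelity, and refuse to auto-block known-good infrastructure such as Google's public DNS. The feed formats, weights, and allow-list are invented for illustration and are not ASU's model.

```python
import ipaddress
from dataclasses import dataclass

# Known-good infrastructure that should never be auto-blocked,
# e.g., public DNS resolvers (the false positive Lester mentions).
ALLOW_LIST = {"8.8.8.8", "8.8.4.4", "1.1.1.1"}

@dataclass
class Indicator:
    value: str            # IP, domain, or hash
    ioc_type: str         # "ip", "domain", "sha256", ...
    source: str           # which feed it came from
    source_weight: float  # hypothetical per-feed fidelity weight, 0..1

def normalize(raw_feeds: list[dict]) -> list[Indicator]:
    """Flatten disparate feed formats into a single Indicator schema."""
    indicators = []
    for item in raw_feeds:
        value = item.get("ip") or item.get("domain") or item.get("hash")
        if not value:
            continue
        ioc_type = "ip" if item.get("ip") else "domain" if item.get("domain") else "sha256"
        indicators.append(Indicator(value, ioc_type, item["source"], item.get("weight", 0.5)))
    return indicators

def should_auto_block(ind: Indicator, threshold: float = 0.8) -> bool:
    """Only auto-block high-fidelity indicators that are not known-good."""
    if ind.ioc_type == "ip":
        if ind.value in ALLOW_LIST:
            return False
        try:
            ipaddress.ip_address(ind.value)  # drop malformed IPs outright
        except ValueError:
            return False
    return ind.source_weight >= threshold

feeds = [
    {"source": "feed_a", "ip": "8.8.8.8", "weight": 0.9},        # would be a costly false positive
    {"source": "feed_b", "ip": "203.0.113.57", "weight": 0.95},  # documentation range, stand-in for a real IOC
]
for ind in normalize(feeds):
    print(ind.value, should_auto_block(ind))
```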
Mike: Should we even look at the paradigm of threat intelligence as “dead” in the world of AI? Because we can look at behavior and anomalies—baseline normal behavior—and act on behavior rather than IOCs that may or may not be accurate. Thoughts?
Lester: That’s a great point. The pessimist in me immediately asks about the source of the data and its accuracy. But assuming your inputs are accurate and reflect reality, I definitely think there’s something there. AI finally gives us the opportunity to identify behavior. One of the big challenges in cybersecurity is that orgs spend a lot of time trying to determine intent, which is virtually impossible from traditional IOC perspectives—hashes, IPs, etc. More nuanced practitioners might infer intent from timestamps, frequency, and other statistical elements.
AI gets us as close as we can get—short of traditional espionage—to inferring intent. Why do we care about intent? Because I treat a potential threat very differently if it’s a spray-and-pray attack versus a targeted onslaught against my organization. Intent makes a big difference. Maybe the traditional approach to threat intel becomes less prevalent or important.
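As a toy illustration of inferring intent from statistical signals rather than individual IOCs, the heuristic below flags a source as targeted when it makes many attempts against a small set of accounts, versus broad spray-and-pray activity. The event schema and thresholds are invented for the example.

```python
from collections import defaultdict

# Each event is (source_ip, target_account). In a real pipeline these would come
# from authentication logs; here they are fabricated to illustrate the heuristic.
events = [("198.51.100.7", f"user{i}") for i in range(200)]   # one IP, many accounts
events += [("203.0.113.9", "cfo") for _ in range(40)]          # one IP, one account

def classify_sources(events, targeted_max_accounts=3, min_attempts=20):
    """Toy heuristic: many attempts against few distinct targets suggests targeting."""
    attempts = defaultdict(int)
    targets = defaultdict(set)
    for src, account in events:
        attempts[src] += 1
        targets[src].add(account)
    verdicts = {}
    for src in attempts:
        if attempts[src] >= min_attempts and len(targets[src]) <= targeted_max_accounts:
            verdicts[src] = "targeted"
        elif attempts[src] >= min_attempts:
            verdicts[src] = "spray-and-pray"
        else:
            verdicts[src] = "low-volume"
    return verdicts

print(classify_sources(events))
```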
Evan: You have such a unique environment where students are an ephemeral population—every couple of years you have new people coming in and out. It’s a culture of learning and experimentation, and you want students to play around with new technologies and try AI tools. How do you balance that culture of innovation with making sure you’re secure?
Lester: Strategically, my CIO and our organization have made a conscious choice to support this AI journey for students in multiple ways. Students have access to third-party tools and technologies, and we encourage that—but they’ll do so in a way segmented from more sensitive information and data. We have our own walled-garden environment for those use-cases.
We have plenty of students doing research across hundreds of labs and environments. There’s a time and place regarding which technologies are used for which purposes. In terms of encouraging students and doing faculty outreach, one of the great things about our Create AI platform is it’s no-code. We have hundreds—if not over a thousand—projects that faculty and staff have created on the platform.
Even my own team, from an operational perspective, has deployed technologies and created bots. For example, a month or two ago, we released 19 updated or net-new security standards. Instead of just sharing them, we also created a bot trained on those and existing policies so anyone could ask about, say, what encryption we support or our position on multi-factor authentication. That was done on our internally developed platform.
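The standards bot Lester describes was built on ASU's internal platform; as a generic sketch of the underlying pattern (retrieve the most relevant policy text, then answer from it), here is a minimal keyword-retrieval example. The policy snippets and scoring are simplified stand-ins, not the actual implementation, and a production bot would use embeddings plus a language model to compose the answer.

```python
# Invented policy snippets for illustration only.
POLICY_SECTIONS = {
    "encryption": "Data at rest must use AES-256; data in transit must use TLS 1.2 or higher.",
    "mfa": "Multi-factor authentication is required for all staff and student accounts.",
    "passwords": "Passwords must be at least 14 characters and rotated on evidence of compromise.",
}

def retrieve(question: str) -> str:
    """Score each section by keyword overlap with the question and return the best match."""
    q_tokens = set(question.lower().split())
    best_key = max(
        POLICY_SECTIONS,
        key=lambda k: len(q_tokens & set((k + " " + POLICY_SECTIONS[k]).lower().split())),
    )
    return POLICY_SECTIONS[best_key]

print(retrieve("What multi-factor authentication do we require?"))
print(retrieve("Which encryption standards are supported for data in transit?"))
```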
We’ve even held small internal grant opportunities where people interested in leveraging our technology could apply and get tools and access to create their own solutions—across all academic units. With students, we’re kicking off a cyber hackathon with three tracks: incident response, IoT hardware security, and social engineering with AI. We’re engaging students and saying: this is the place to learn, play, and develop the skills needed to be successful cybersecurity practitioners.
Evan: One of the things we like to do in the last five minutes is a lightning round—questions that are unreasonable to answer in a tweet, but we’re looking for the one-tweet version. Mike, you want to go first?
Mike: What’s one piece of advice you’d give someone stepping into their very first CISO job—something they might overestimate or underestimate about the role?
Lester: You cannot over-communicate. Understand what the business needs are and where cyber fits into that.
Evan: What’s the best way for CISOs to stay up to date on the latest technologies and their implications for cybersecurity?
Lester: Two things. One, stay plugged into the community. In Arizona, the cybersecurity community is close-knit and bridges sectors. Two, I stay involved in education as faculty. I’m not the most disciplined person, so having to teach forces me to stay relevant and on top of tech. Also, taking jobs like being CISO at the largest public university in the country forces me to step up—especially around AI.
Mike: On the personal side, what’s a book you’ve read that had a big impact on you, and why?
Lester: I’m probably one of the least well-read when it comes to management books—I’m kind of old school, nose-to-the-grindstone. But one book that sticks out is It’s Your Ship—by a U.S. Navy captain. It’s a very pragmatic book about what leadership means. I’ve taken it to heart and tried to emulate it. It’s not about technology; it’s about leadership in an organization.
Evan: Last question—and my favorite. What do you think will be true about AI’s impact on cybersecurity that most people consider science fiction?
Lester: The first contrarian take that comes to mind, and I hope it doesn’t happen: unless this country does something at a federal level around data privacy, my bigger concern is something like Minority Report. When we were talking about behavior and intent, that’s my fear: if we collect enough data—and AI being a logical source—who knows what’s being done with data we freely share with commercial entities? How will it be leveraged? For good—from a cybersecurity perspective—or something more nefarious?
Evan: That’s one of my favorite authors—Philip K. Dick. The Minority Report is one of his short stories. The whole book is about the moral and ethical tensions in that question—goes way beyond crime to the nature of the future. It’s only about 40 pages—really thought-provoking. I’m an optimist. I think we’ll use AI for good—and hopefully put a lot of bad guys out of work. That’ll be our collective goal.
Lester, thank you so much for joining us today. I really enjoyed the conversation. Frankly, I barely got through half our scripted questions—I was so interested in what you were saying. Sorry if we were a little all over the place, but I really enjoyed it. Thank you so much.
Lester: Thank you.
Mike: That was Lester Godsey, Chief Information Security Officer at Arizona State University. I’m Mike Britton, the CIO of Abnormal AI.
Evan: And I’m Evan Reiser, the founder and CEO of Abnormal AI. Thanks for listening to Enterprise AI Defenders. Please be sure to subscribe so you never miss an episode. Learn more about how AI is transforming cybersecurity at enterprise-software.blog.
This show is produced by Josh Meer. See you next time.