
On the 34th episode of Enterprise AI Defenders, hosts Evan Reiser (co-founder and CEO, Abnormal AI) and Mike Britton (CISO, Abnormal AI) talk with Micah Czigan, Chief Information Security Officer at Georgetown University. Micah shares how Georgetown is navigating AI adoption with security-first thinking, tailored governance, and a mindset rooted in experimentation. From piloting secure internal AI tools to defending against deepfakes and hyper-personalized phishing, Micah’s approach protects people while embracing innovation.
Quick hits from Micah:
On AI-powered phishing: “Phishing emails now look personal. AI is building profiles and crafting messages that feel targeted, not blasted.”
On governance that enables adoption: “Shadow AI happens when people don’t feel they have a path to yes. We’re focused on building that path.”
On personalized AI defense models: “We're profiling value targets to understand risk not just by email but by web activity too, and that’s all AI-powered.”
Recent Book Recommendation: Nuclear War: A Scenario by Annie Jacobsen
Evan Reiser: Hi there, and welcome to Enterprise AI Defenders, a show that highlights how enterprise security leaders are using innovative technologies to stop the most sophisticated cyber attacks. In each episode, Fortune 500 CISOs share how AI is changing the threat landscape, real-world examples of modern attacks, and the role AI will play in the future of cybersecurity. I'm Evan Reiser, the founder and CEO of Abnormal AI.
Mike Britton: And I'm Mike Britton, the CISO of Abnormal AI. Today on the show, we're bringing you a conversation with Micah Czigan, Chief Information Security Officer at Georgetown University. Georgetown is a top-tier R1 university with over $200 million in annual research activity spanning cybersecurity, biomedicine, linguistics, and more.
There were three interesting things that stood out to me in the conversation with Micah.
First, Micah explains that securing a university is unlike any other environment. His team must protect sensitive research while preserving academic freedom. By partnering with cybersecurity professors, they’re proving that security and innovation can coexist.
Second, Georgetown takes a measured approach to AI. Micah's team built early governance, piloted Gemini in secure environments, and created a custom model to match researchers with grants.
And finally, Micah warned that AI-powered phishing is already here and highly effective. His team has seen attacks that are high volume, targeted, and realistic. His advice to keep pace with adversaries? You need AI defending against AI.
Evan: Well, Micah, first of all, thank you so much for joining us. I know Mike Britton and I were really looking forward to this episode. Maybe to start, do you mind giving our audience a bit of a brief background about yourself, and maybe share a little bit about your current role at Georgetown?
Micah Czigan: Well, I’m Micah Czigan, the current chief information security officer for all the global campuses at Georgetown. I’ve been there just over five years. This is the first part of my career in higher education. I spent almost 20 years in federal government—a mix of active duty in the U.S. Navy, and then I spent some time with the Department of Defense and Department of Energy. So this is a very different role than what I was accustomed to.
Mike: I’d love to, you know, hear some of the exciting AI-driven initiatives happening across Georgetown—whether in research, operations, or education—and maybe what are some of the security challenges with diving headfirst into AI at Georgetown?
Micah: So we started early on with saying, you know, “Let’s write a policy.” Even if it’s not the best policy in the world, let’s at least give some guidelines for two groups of people. One is the educational side; the other is the corporate side—the business of the university—and those are two separate sets of guidelines because they really have two end goals they’re working toward.
The big push we did is use cases—don’t come with the shiny thing; come with a use case and an intended outcome. What we’ve seen is everybody wants the, you know, chatbot—ChatGPT, Gemini, Copilot, whatever—and so we’ve done some pilots and evaluations on that, just in general, right, and that has been very helpful. We have a close relationship with Google, so we’ve done a lot of work on figuring out what are the benefits, what are the risks, and the drawbacks of using Gemini, and the university as a whole hasn’t made a total decision yet. But at some point we’ll get to an enterprise where we’ll say, “Okay, this is the enterprise chat,” whatever that is.
One of the key things to that is we will have our own enclave. So that allows groups like Finance to upload financial data—investment data, all that kind of stuff—but it stays within our environment, and yet it still allows, you know, the AI to learn from that, but that data is secure.
Then there’s the instructional side, where faculty can teach students how to properly use the chat interface—to ask the right questions, right, prompt-engineering kinds of things; how to load the data, what’s important—and, you know, how to properly utilize that not just in your education but for your future job. One of the key things we’ve keyed in on is: don’t trust the results. If it gives you back a number, like, double-check the math. And we’re ensuring that people are looking at the links—what documents did it reference, how did it do that, is that a valid reference, does that make sense—instead of just trusting it.
Outside of the, you know, just the chat side, we have two research projects. One is related to what we’re doing internally within IT and Security, which is we’re building an LLM. The goal is: we would have it build profiles on researchers. So we take all of your research data—your books, your papers, everything you’ve produced—and it builds this profile on you. Then it goes out and finds grants that are likely to be related to your specific research area, and then presents you with those grants, which we’re hoping will save researchers tons of time, because that is a major hurdle for them. It’ll also do things like, “Okay, well this grant is, you know, CUI data, and so it’s going to require CMMC Level 2. We don’t have it in that environment, so we’re not even going to give it to you,” right—so it’s pulling out those data requirements as part of that look.
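To make the grant-matching idea concrete, here is a minimal sketch of how a pipeline like the one Micah describes could work: embed a researcher’s publications into a profile vector, rank grants by similarity, and drop any grant whose data requirements the institution’s environments can’t satisfy. The model name, field names, and compliance labels are illustrative assumptions, not Georgetown’s actual implementation.

```python
# Illustrative sketch of a researcher-to-grant matching pipeline.
# Model names, fields, and labels are assumptions for illustration only.
from dataclasses import dataclass, field

import numpy as np
from sentence_transformers import SentenceTransformer  # assumed embedding model

@dataclass
class Grant:
    title: str
    description: str
    data_requirements: set[str] = field(default_factory=set)  # e.g. {"CUI", "CMMC-L2"}

# Data controls the institution's environments can actually satisfy (illustrative).
SUPPORTED_DATA_CONTROLS = {"PUBLIC", "FERPA"}

model = SentenceTransformer("all-MiniLM-L6-v2")

def researcher_profile(publications: list[str]) -> np.ndarray:
    """Build one profile vector by averaging embeddings of a researcher's papers."""
    vectors = model.encode(publications, normalize_embeddings=True)
    profile = vectors.mean(axis=0)
    return profile / np.linalg.norm(profile)

def match_grants(profile: np.ndarray, grants: list[Grant], top_k: int = 5) -> list[Grant]:
    """Rank grants by similarity to the profile, dropping ones whose data
    requirements fall outside the controls we can support."""
    eligible = [g for g in grants if g.data_requirements <= SUPPORTED_DATA_CONTROLS]
    scored = sorted(
        eligible,
        key=lambda g: float(model.encode(g.description, normalize_embeddings=True) @ profile),
        reverse=True,
    )
    return scored[:top_k]
```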
The other is with one of our institutes, the Massive Data Institute. They’re looking at: can we build our own AI for research purposes—one we can allow researchers to use for their own work. That has not completely launched yet, but they are working in that direction.
Evan: What do you see, kind of, coming down the pike from a threat-landscape perspective that you’d want your peers to be a little more aware of—maybe things you think folks are underestimating a bit?
Micah: Well, hopefully they don’t, but, you know, the number-one threat vector is email—it’s phishing, right? That’s where the vast majority of attacks come from and how the attackers get that initial hook in—it’s through email. And so that’s where we’ve seen the most amount of what we’re seeing as AI—through phishing attacks becoming much more sophisticated, much more relational. So it’s not just this blast; it looks like they’re targeting Evan directly—like they know this, AI’s built this profile, and it’s using things that it knows are likely to get your attention for you specifically. We’ve seen those emails. And so that’s where we’re focusing, and that’s where—talking with my peers—they’re also focusing their efforts: how do I shore up email security? The other area is in the noise. We’re all trying to get rid of the noise of people scanning our ports, scanning our traffic, looking for probing, right? But when does the probing become not probing? When is it, “Okay, that’s an AI,” because it’s probing, it found something, and then, you know, five milliseconds later it launches the attack. That’s faster than humans can do. So we’re putting some emphasis on: can we look at the noise that we normally just get rid of and say, “Oh, does this look like it might be AI-related?” And the only way to do that is using our own AI—using our own tools, AIs, to do that. That’s what I would definitely encourage my peers to take a look at.
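As a rough illustration of the “probing that turns into an attack in milliseconds” signal Micah mentions, a defender could flag any source where an exploit attempt follows a probe faster than a human could possibly act. The event fields and threshold below are assumptions for the sketch, not a specific product’s behavior.

```python
# Hypothetical sketch of flagging machine-speed follow-up after probing.
# Field names and the threshold are illustrative assumptions.
from collections import defaultdict
from datetime import timedelta

HUMAN_REACTION_FLOOR = timedelta(milliseconds=500)  # assumed: faster than this is automated

def flag_machine_speed_attacks(events):
    """events: iterable of dicts like
    {"ts": datetime, "src": "203.0.113.7", "kind": "probe" | "exploit_attempt"}.
    Returns source IPs where an exploit attempt followed a probe at machine speed."""
    last_probe = {}
    flagged = defaultdict(list)
    for e in sorted(events, key=lambda e: e["ts"]):
        if e["kind"] == "probe":
            last_probe[e["src"]] = e["ts"]
        elif e["kind"] == "exploit_attempt" and e["src"] in last_probe:
            gap = e["ts"] - last_probe[e["src"]]
            if gap < HUMAN_REACTION_FLOOR:
                flagged[e["src"]].append(gap)
    return dict(flagged)
```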
Evan: What do you think people might be underestimating a little bit? Like, everyone’s taking ethics seriously, but not everyone has the same calibration of what’s most important.
Micah: One of the first things that came to my head is deepfake—and that as people are using AI in their research, they might put a little bit too much trust in whatever AI they’re using. If that AI becomes compromised, how would you know? And it may be able to feed you back what you anticipate you’re going to get, and it looks legitimate, but it’s not, right?
If you’re using AI to write code for your research—like, I’ve done it before: “Look, I want to write a firewall rule—write me a firewall rule,” or “Write me a YARA rule.” I get halfway in and I’m like, “Man, this is really complex. I don’t know if this is legit or not—I have no idea.” Thankfully, I have engineers who can figure that out, but it takes them a long time. So, areas of caution are: don’t trust the AI too much; make sure we’re validating and looking for responses that can point to, “Well, this may be an AI on the back end that’s compromised,” a compromised AI that’s feeding me bad data, bad information, right? Or sending me somewhere—even if it’s like, “Oh, here’s the link to how I got this.” I click on the link; it’s a malicious link. We want to make sure of all those little checks. I think it’s going to be those little areas that we’re not paying attention to.
Mike: So what are you guys doing at Georgetown to try to stay ahead of those risks?
Micah: We’re attacking it from different angles. One is the governance side—making sure we know what AI is being used for what purposes—that it’s been, you know, checked, validated, and audited. And then looking at tools that can sit in the middle and are looking for responses or information that it recognizes as not just AI-generated, but AI-malicious, right? And some of these tools are, you know, obviously using AI—so it’s AI against AI—and it says, “Okay, well, I know what you asked for, and I’m looking at those responses coming back,” and so it’s playing that man-in-the-middle and checking all those links. Are those valid links, are those malicious links? “Yep—nope—those are fine.” They’re good, and it’s letting that through. Looking at the data—so you get back this, you know, CSV file—well, is there malicious data in that CSV, somehow? It’s checking that and validating that. I saw one—maybe last year—a company that claimed they could do this, but I don’t think it really panned out. I think this would be great—it’s more than a man-in-the-middle; it’s actually playing an interface between you and other AIs. So you’re asking it; it’s asking the AIs, then taking those results and checking those—double-checking them—before it gives you back the results. It turned out not to be legit, but, like, I think that’s what I’m looking for. That’s my unicorn. I want that.
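A bare-bones version of that “AI in the middle” check might simply intercept an assistant’s response, validate every link it cites against a reputation source, and redact anything that fails. The check_url_reputation function here is a placeholder assumption; a real deployment would call an actual threat-intelligence service.

```python
# Rough sketch of vetting an assistant's response before it reaches the user.
# check_url_reputation is a placeholder; real systems would query threat intel.
import re

URL_PATTERN = re.compile(r"https?://[^\s)\"']+")

def check_url_reputation(url: str) -> bool:
    """Placeholder: return True if the URL is considered safe."""
    blocklist = {"evil.example.com"}  # illustrative only
    host = url.split("/")[2]
    return host not in blocklist

def vet_response(response_text: str) -> tuple[str, list[str]]:
    """Return the response with malicious links redacted, plus what was removed."""
    removed = []
    def _replace(match):
        url = match.group(0)
        if check_url_reputation(url):
            return url
        removed.append(url)
        return "[link removed: failed reputation check]"
    return URL_PATTERN.sub(_replace, response_text), removed
```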
Mike: And is there concern—you know, because I hear a lot of my peers in the corporate world talk about the concern of shadow and unapproved AI—how does that work out at Georgetown, and how do you, kind of, keep your hands around that?
Micah: Yeah, very true. I was just in a conversation like this earlier this afternoon, and one of my peers brought that up—shadow AI. And I was talking to a student this morning, and that’s what he was talking about: shadow AI.
So we are—one, you know—on the governance side: making sure the applications in the enterprise environment are the ones that are supposed to be there. So we scan for them all the time, like we’re looking at traffic—do we see traffic coming from applications that are not approved, including AIs?
Then for more of the educational side, it’s educating students and faculty—like, look, we’re here to help you. And so shadow IT or shadow AI comes about because people don’t feel like they have an easy path to “yes.” We’re trying to make that not just a rubber stamp, but that it’s very clear how you get to “yes”—that we don’t make it overly difficult, overly cumbersome for people to submit these requests, and that we’re reviewing them in a very quick but judicious manner and giving them feedback—not just “no,” but, “Okay, here might be your path to yes.” And, “We don’t think this is a good product because it’s developed in a country that we don’t trust, but we did some legwork, and maybe this might meet your needs. Might not be 100%, but it might be 80%.” And so we’re trying to partner with them to work on how do we get from an 80 to a 95% solution. That has proved to be successful because nobody likes to be told “no” with no, you know, “Well, okay, what do I do now?” We don’t want to be the office of “no.” We try to help so they don’t feel like they don’t have a path.
Mike: Do you feel like, with the rise and speed of threats, an annual or static training is probably not sufficient these days—and on the flip side, are there potential use cases of AI to make training a little more personalized and relevant in real time?
Micah: Yeah. So when I say training, I mean, like, we actually do the whole year—it’s not once in October and you’re done. We keep adding trainings and saying, like, “You have to take these,” and they’re short—they’re like five minutes, right? We don’t want the old boring, you know, government training that nobody wants to do.
As we see trends coming, we’ll add trainings that are based on the trends that we’re seeing. We just added a couple—you have some options, some choices for AI: how do we better look at emails coming in, as an individual, to say, “Are these AI-generated? Do I need to be concerned? What are the things I should look for?” Like, “It’s a little too personal—do I really know this person?” And then what we’re also doing is working to link the threats we see coming to people in email—the profile being built on them, what they’re clicking on, what they judge as “I think this is legitimate; I think this is not”—and then providing that training based upon what we’re seeing as a profile of you—what’s coming to you—because that’s going to be different than what Evan is seeing and what he’s clicking on, what’s important to him. And so we’re trying to make that very individualized.
Mike: What other areas in cyber—maybe technologies or areas in cyber—do you feel are right for disruption and right for leveraging AI to, kind of, rethink or better execute on some of these technologies?
Micah: Well, going even further—like, I don’t even know if this is possible, right—but we have what we call value targets. Some people definitely have access to more sensitive data—our finance people, you know, the President’s Office—people who are going to make decisions and could write a check that would be disastrous, right, or something like that. And so building that profile on them—not just on email, but their web habits, right? Where are they going? And so it might be okay for you, Evan, to go to these sites, right—might be all right. But one of the things we were talking about is, like, say Mike is part of our police department, so we really want to protect them—like, even probably draconian-protect them—but still they’ve got to be able to do their jobs.
We want to build that whole holistic network profile, computer profile, email profile. And that’s all got to be—like, we don’t have the people to do that. I don’t have a thousand people. I can’t do that. But AI can do that, right? And so then, as it’s looking at the incoming traffic, it’s evaluating, “Is this dangerous not just for Georgetown; is this okay for Mike as a police officer at Georgetown who also has children?” Right? And I know that kind of gets a little—some people are like, “That’s a little creepy, man—now you know too much about me,” but that helps. And if we can keep that AI contained within our enclave—like, I mean, I don’t have any access to that data. I don’t really care, and there’s no way I can—like, I can’t ask the AI, “Tell me about Mike.” But it’s building that profile around you and keeping you safe from yourself and from others based upon your profile. That, I think, is where AI can really be powerful—super powerful.
When I was at the Pentagon, I was working with the Defense Intelligence Agency on: is there a way we can better evaluate people for security clearances? It’s always been a problem; it takes forever to get a security clearance, right? And so they were like, “What if we throw a hundred psychologists in a room and they build profiles on people?” Like, that’s the dumbest idea in the world. Now, we didn’t have AI back then to the level we have today, but that’s really what you want to do—you want to throw in an AI and build that profile that continually evaluates Mike as a government employee: what are the risks, and what access to data can I give him? Top Secret? Can I give him—right—special compartments? You know, the location of the aliens—is that okay? Is he at risk? Where are we seeing it? Looking at the communications—because we cannot do that as humans, but the AI can.
I know it’s scary—people are like, “Ah, it’s going to build this thing and what if it gets it wrong?” That’s true—what if it gets it wrong? That’s where we, on the cyber side, have to make sure that it doesn’t get it wrong—what are the protections in place so it doesn’t go off the rails? But I think that’s where we’re going to get. And the more we resist doing that, the further we’re going to be behind the curve, because the adversaries are not. They’re like, “Yep, we’re going to do it.” They’re out there; they don’t care—they’ll break the eggs, because they want to—and they’re already out there. So I don’t think we have the luxury of sitting back and being cautious. We have to be bleeding edge.
Evan: To what extent do you think the criminals are adopting AI faster than the good guys—the defenders—and, based on that, what do we need to be doing differently?
Micah: Well, they’re trying anything and everything, right? They don’t care—they’re not doing supply-chain risk management, and I am every day, right? That’s part of our job. It’s a good thing—we have to do that—but the adversaries are not. They’re just like, “Okay, great, we’ll do ChatGPT, we’re going to do Gemini, we’ll do Copilot—we’re going to do them all, we’re going to build a Llama—we’re going to throw everything at the people we’re trying to get,” because if we throw a hundred, we might get one. That’s all I need—is one, right?
So that’s where we’ve got to think—this has been true for decades—we have to think like the enemy. We have to understand our enemy: what’s in their minds, what are their tactics, what are their techniques, what are their procedures. We build those profiles, and then we start— I mean, that’s threat hunting, right? What are the TTPs I’m looking for, and know your adversaries: who is likely to attack me, what do I need to be looking for? And then we build those models with AI to start looking for those TTPs so that we can detect them early and then say, “Nope—shut that off,” right, and then continue to look. We can’t keep just saying, “Well, we’re just going to do whatever Microsoft does.” We can’t go with blinders. And, I mean, we also can’t look at the latest shiny thing because nobody has pocketbooks that big. But we’ve got to look at everything.
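As a simple illustration of hunting on known TTPs, detection rules can be encoded per technique and swept across telemetry. The technique names below follow MITRE ATT&CK, but the rules and event fields are illustrative assumptions rather than Georgetown’s detection content.

```python
# Minimal sketch of TTP-driven hunting: encode adversary techniques as rules
# and sweep telemetry for matches. Rules and event fields are illustrative.
TTP_RULES = {
    "T1110 Brute Force": lambda e: e["kind"] == "auth_failure" and e["count"] > 50,
    "T1059 Command and Scripting Interpreter": lambda e: e["kind"] == "process_start"
        and e.get("process", "").lower() in {"powershell.exe", "wscript.exe"},
}

def hunt(events):
    """Return (technique, event) pairs for every telemetry event that matches a rule."""
    hits = []
    for event in events:
        for name, rule in TTP_RULES.items():
            try:
                if rule(event):
                    hits.append((name, event))
            except KeyError:
                continue  # event lacks the fields this rule needs
    return hits
```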
Evan: What is it about your experience—or what have you seen—that’s led you to that belief? And what do you think other people maybe aren’t seeing that prevents them from having that attitude—being willing to really try these new technologies so we don’t, kind of, lose the arms race?
Micah: I think part of it comes from my background in the Navy. You know, when you’re at sea, nobody’s coming to your rescue—something breaks, you’ve got to fix it, you’ve got to figure it out, right? And so you’re going to try anything and everything. And that’s part of where I learned to be, you know, a critical thinker and really just, you know, almost that MacGyver-type mindset—we can piece it together with bubble gum and string and nothing is off the table.
I’ve had to use that a lot when I was in the military. And, you know, when I was in government, everybody’s like, “No—we’re a Cisco shop; everything’s Cisco.” Nothing wrong with Cisco, but they’re not the end-all be-all for everything. If we don’t at least know what is out there—where those other technologies are—we’re operating with blinders on. And so I’ve seen where I’ve been able to, kind of, move that needle a little bit and say, “Okay, maybe we try this product in a different set because we want to achieve these goals and Cisco can’t,” right? It’s not that it isn’t effective—it’s been very effective—but we’ve got to be very strategic in what we’re trying to achieve. I think there are some people who have become sort of laissez-faire—“We’re just going to do whatever.” Like, what’s the goal, what’s the outcome, what do I expect out of it, and what are my use cases? What am I trying to solve? A lot of times that’s the key—people haven’t truly defined the problem they’re trying to solve. And so once I’ve defined the problem, like, “Okay, now let’s consider everything—what are all the options?” Even the, like, wacky, you know, stupid stuff that everybody’s like, “That—no.” Well, yes—let’s go down that road; at least spend a few brain cycles trying to break it and say, “Nope, that will not work. That’s not for us.”
Evan: What we like to do at the end of the episode is a bit of a lightning round—just looking for your one-tweet takes on things that are probably really difficult to answer in the one-tweet format, so please forgive us. Mike, you want to go first?
Mike: Sure. What one piece of advice would you give to someone stepping into their very first CISO job—maybe something they might overestimate or underestimate about the job?
Micah: Don’t take yourself too seriously. You’re going to get it wrong. That’s okay.
Evan: So, Micah, I’m pretty impressed—not just with your experience—but you seem very up to date on the latest stuff, right? I think a lot of CISOs I know kind of struggle a little bit to figure out how to stay on top of the latest technology trends. I feel like there’s a new AI thing every week. Right—it’s hard to know what’s real and what’s kind of B.S. What’s your, kind of, information diet—what would be your advice for one of your peers about how they can stay up to date?
Micah: Well, I love to read, so I’ve got feeds coming in, right? I get my daily intel ingest—what’s going on, what are the current attacks—and I listen to the SANS Internet StormCast after dropping my daughter off. And so, you know, I’ve got those coming in. But then I, kind of, tell my staff, like, “Okay, so we can’t all— we can’t all know everything. So, you know, what’s important to you? What really gets you going in the morning—what’s your love?” Maybe you love coding, right, and you love writing scripts in Python—great, okay, become the expert in that and tell me when things are happening in that world or the adjacent world. And somebody else is like, “Yeah, man, I have a passion for forensics.” Okay—what are the latest trends in forensics? We have this thing for sharing—articles, a conference, a web conference, whatever—we’re always trying to share what we’re seeing and hearing.
Mike: On a more personal note, what’s a book you’ve read that’s had a big impact on you, and why?
Micah: Non-cyber-related book: Annie Jacobsen’s Nuclear War: A Scenario. It’s about what would happen if we actually had a nuclear war. I worked in the Pentagon—I didn’t work with nuclear weapons, but I was part of that environment. If that happened, we would go to, you know, a remote site—all that kind of stuff. So as I’m reading the book, I’m like, “Holy cow—she got it right.” Like, that is spot-on exactly what would happen with the nuclear triad. I found that to be an amazing book. It’s not long; it’s an easy read. She’s a great writer.
Evan: What do you think is going to be true about the future of AI and cybersecurity that most of your peers would consider science fiction today?
Micah: I think that we will see it on both sides—the adversary side and the defender side—that you will not know it’s an AI. It will look and feel so human that the only way you’ll know is it’s operating at a speed faster than is physically possible. But I think we are on that edge. We don’t know it’s not an AI.
Evan: I agree. And if we’re not there today, it’s coming tomorrow. It’s right around the corner.
Micah: It’s right around the corner.
Evan: Micah, I really appreciate you taking the time to join us—thank you for all the great work you do. Hopefully we’ll talk again soon.
Micah: Absolutely. Thanks, gentlemen.
Mike: That was Micah Czigan, Chief Information Security Officer at Georgetown University. I'm Mike Britton, the CISO of Abnormal AI.
Evan: And I'm Evan Reiser, the founder and CEO of Abnormal AI. Thanks for listening to Enterprise AI Defenders. Please be sure to subscribe so you never miss an episode. Learn more about how AI is transforming cybersecurity at enterprisesoftware.blog, and hear exclusive stories about technology innovations at scale.
This show is produced by Josh Meer. See you next time!


