
On the 36th episode of Enterprise AI Defenders, hosts Evan Reiser and Mike Britton talk with Chris Leigh, VP and Chief Information Security Officer at Eversource Energy. Chris leads security for a regulated, high-stakes utility serving millions across New England. He also owns AI in an uncommon org design that he uses to prove a point: governance does not have to slow innovation if it is built like an engineering function, with repeatable guardrails and clear pathways to ship.
Quick hits from Chris:
On shipping AI faster with standardization: “And that’s allowed us to accelerate our time to delivery by orders of magnitude—from three months down to four weeks down to two weeks for various sprints.”
On preventing outages with drone inspection: “We’ve put some patterns out on this that allows us to better bring in the imagery and run it through our models and pick up damaged components or hotspots in the wires, which allows us to schedule and do repairs before we actually have power outages.”
On transforming threat intel into action for the SOC: “Any IOCs get popped into our tools automatically.”
Recent Book Recommendation: Outlive: The Science and Art of Longevity by Peter Attia, MD
Evan Reiser: Hi there, and welcome to Enterprise AI Defenders, a show that highlights how enterprise security leaders use innovative technologies to stop the most sophisticated cyber attacks. In each episode, Fortune 500 CISOs share how AI is changing the threat landscape, real-world examples of modern attacks, and the role AI will play in the future of cybersecurity. I'm Evan Reiser, the founder and CEO of Abnormal AI.
Mike Britton: And I'm Mike Britton, the CISO of Abnormal AI. Today on the show, we're bringing you a conversation with Chris Leigh, Vice President and Chief Information Security Officer at Eversource Energy. Eversource is New England's largest energy provider, delivering electricity, gas, and water to over 4.5 million customers across three states. When you're responsible for critical infrastructure, the stakes couldn't be higher, especially in today's evolving threat landscape.
There are three interesting things that stood out to me in the conversation with Chris:
First, Chris didn't start governance with experimentation. He started with containment by building guardrails inside the security organization. His team turned governance into a secure-by-design engine for AI development across the company. Now, projects that used to take three months are shipping in about two weeks.
Second, Eversource is exploring drone imagery and machine learning to spot failing equipment before outages occur and to model vegetation growth to prevent storm-driven blackouts.
And finally, Chris is preparing for threats that don't look like yesterday's attacks. From deepfake scams to AI-accelerated vulnerability discovery, his team is focused on authenticating communications and flagging behavior that doesn't belong.
Evan: Chris, first of all, thank you so much for joining us. Do you mind sharing with our audience a little bit about your role today?
Chris Leigh: Yeah, so I’m Chris Leigh. I’m the Vice President, Chief Information Security Officer at Eversource. So I run all things cyber. As I say, no guards, no guns, no gates, but if it touches technology, it’s in my purview.
Strangely enough, I also run the network from layer three and up, and I own AI for Eversource, which is definitely an unusual situation, but it’s allowing us to show you can do app development in a secure-by-design model. Having AI under security has been pushing that, and it’s just been a natural evolution.
So it’s an interesting situation to be going through. In addition to all the work with the cyber team, I teach a couple college classes all around cyber—one undergrad, one master’s level. So it’s nice to put your undergrads through a four-month interview process and see who you pick up as interns.
Evan: For those in our audience who are less familiar with Eversource, could you explain a little bit about what you guys do, and maybe why cybersecurity matters so much and what’s at stake?
Chris: We are the electric, natural gas, and water utility in New England. So we service 4.5 million customers across Connecticut, Massachusetts, and New Hampshire. $89 billion in revenue, investor-owned utility.
We cover the transmission, which for those who aren’t familiar with the utility space is the high-voltage side of the power line—takes it from the generation source into the region—and then we do the distribution, which brings it down to your house. So we cover both of those.
We do not do generation like a lot of our southern utility peers do. It’s a New England thing around regulations. We buy power on the markets.
And then the cyber side, it’s really there to ensure that all the technologies work and that hackers aren’t able to get into the power grid and shut the lights off, or influence gas pressure on the pipelines coming out to your homes or different businesses.
So utilities, as many people say, are critical infrastructure—one of the top two, between us and the banks, in most cases. So we do a lot of programs with our government peers and peer utilities to make sure we’re all prepared. Being monopolies, we don’t compete, so we have the advantage of being able to share information on a very regular basis. That’s part of the strength of the utility industry on the cyber side: when something’s going on, we’re all communicating, sharing, and ready to respond accordingly.
Mike Britton: What is unique about Eversource’s cybersecurity organization, or your culture, that you think sets you apart from your peers in the energy sector?
Chris: We’re all very connected with the federal government. We are one of the few utilities that do a lot in the classified space, and I think having that partnership with the federal government—getting access to information that may not be public and that allows us to guide our decision-making—makes us a little bit unique out there.
There are other utilities that are in the government space also, but with the amount of access we have, there’s probably only a handful of them. And so I think that drives us.
We are a leader, with a number of other New England utilities, of something we call the “new CEC.” You won’t find anything about it on Google—no websites or anything—but it’s an information-sharing program that we’ve led with the National Guard and others in Homeland Security. And so it allows us to really run our data, take a look at intel reports, and really understand what’s going on.
That was very valuable when Russia first invaded Ukraine: being able to get into very secure rooms and understand what the intel is telling us. What are we seeing? I think that sets us apart from most of our peers out there.
Mike: Walk us through how, first of all, how you, as the CISO, own AI, and then what’s an energy company doing with AI?
Chris: All right, that’s a great question. When ChatGPT and OpenAI first came out, it woke everyone up that, okay, it’s now in the public space. It’s no longer just machine learning with some of the big insurance companies and stuff in financial services—it’s becoming mainstream very fast.
And so we took the first question of what’s happening here at Eversource. Do we have a problem? What’s the risk situation here? And we did some searching and found a lot of employees were using different AI tools from the internet. And we said, okay, that might be an issue that we need to get control of.
And so it started with that security question. We locked it all down, put a policy out there, started enabling those who had the right business cases, and got a few people cleared to do what they needed to do. But at that point, AI was blocked for everyone else.
It’s becoming the big thing out there, so now we have to start thinking: what does it mean for Eversource? And one of the most technical folks we have, who was teaching himself AI, was on my team. So we said, all right, let’s try to figure out our governance structure and how we’re going to do this. And it just stayed in my lap, because we were the only ones who could spell AI at the time.
So we built a governance structure, put the policies in there, and then started working out how to take in some AI projects, putting a little staffing on the side to actually build up a couple of projects. And so it just stayed with me. It broadens my horizons a little bit.
Ultimately, like in many companies, we’re forming more of a center of excellence. When an application becomes productionized, it ultimately goes into the application team for that business area. So they’re going to take it over, they’re going to own it. So we try to bring them into the development process so that they know what they’re getting. And there’s a transition process for that.
But this way, we’re building everything one way using standard tools—only adding new functions when you have to, but not adding different functions or different products for the same thing. And that’s allowed us to accelerate our time to delivery by orders of magnitude—from three months down to four weeks down to two weeks for various sprints.
Mike: I think part of the debate that I’ve seen in some of the news, too, is that with the increased use of AI comes increased energy consumption. Do you guys find that you’re having to change some of your plans? How is that playing out for you and the industry?
Chris: So a couple of different areas. One, when you think about it (and this may seem boring until you’re in it), is vegetation management. I never knew until I got into the industry how much we invest in tree cutting and bush trimming. But outside of critters getting into a substation and frying themselves, the number one reason for power outages is tree branches falling down and taking out the lines.
If we can better use AI to look at vegetation growth in an area—take New England: Connecticut, or around Worcester, Mass., specifically—it’s one of the most densely treed areas per acre in the country. When you have that much foliage and the winds blow, the ice comes down, and the snow piles in there, branches come down, and we need to deal with that.
So the ability to forecast how trees are going to grow, how the bushes are going to grow—where do we better send our crews to manage that? How do we predict that there’s going to be a drought here—so slower growth—and more rain over there—so more growth that maybe we weren’t planning on—because now the AI will give us better models? That’s an area where we think AI can really be beneficial, make us more productive, and ultimately mean fewer power outages for our customers.
At the same time, we’re looking at distributed energy and renewables. But summarizing the vegetation piece: it’s really about predicting the growth of bushes and trees given weather conditions—droughts, snowstorms, added water in the water tables—which ultimately allows us to ensure more reliable power, because we can predict where to send crews and how to trim things back. So we have good, reliable power, and storms don’t bring the trees down onto the lines.
As we go forward and we think about data centers or the distributed energy resources that everyone’s starting to build out there, AI is going to play a role. You have solar, you have wind farms out there—how can we predict where the winds are changing? That’s going to allow more turbines to spin and generate more power. How are clouds going to impact the solar farms that we have? And that ultimately turns into better forecasting of the amount of power we’re going to have on the grid.
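The kind of forecasting Chris describes, predicting vegetation growth or renewable output from weather inputs, is at heart a supervised regression problem. As a minimal, purely illustrative sketch (not Eversource's models: the features, the synthetic growth formula, and the segment data below are invented, and it assumes NumPy and scikit-learn are installed), here is how weather features might be mapped to predicted growth and used to rank circuit segments for trimming crews:

```python
# Illustrative sketch only: a toy regressor that maps weather features to
# vegetation growth, standing in for the far richer models described above.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)

# Synthetic training data: [monthly_rainfall_mm, avg_temp_c, days_since_trim]
n = 500
rainfall = rng.uniform(0, 200, n)
avg_temp = rng.uniform(-5, 30, n)
days_since_trim = rng.uniform(0, 1500, n)

# Hypothetical "true" growth signal plus noise (made up purely for the demo).
growth_cm = (
    0.05 * rainfall
    + 0.3 * np.clip(avg_temp, 0, None)
    + 0.002 * days_since_trim
    + rng.normal(0, 1.0, n)
)

X = np.column_stack([rainfall, avg_temp, days_since_trim])
model = GradientBoostingRegressor().fit(X, growth_cm)

# Score hypothetical circuit segments and rank them for trimming crews.
segments = np.array([
    [180.0, 22.0, 1200.0],  # wet, warm, long untrimmed: likely high growth
    [20.0, 5.0, 90.0],      # dry, cool, recently trimmed: likely low growth
])
for seg, cm in sorted(zip(segments.tolist(), model.predict(segments)),
                      key=lambda pair: -pair[1]):
    print(f"features={seg} predicted_growth_cm={cm:.1f}")
```

A real program would train on LiDAR or satellite vegetation surveys and validated weather forecasts, but the output is the same idea Chris describes: a ranked list of where to send trimming crews before storms turn growth into outages.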
Evan: How do you see AI affecting the threat landscape? Have you seen new types of attacks or new things that you worry about that maybe some of your peers would underestimate?
Chris: So when you think about the attacks AI is going to create—it’s the same attacks that could already be done, but it’s going to do them faster. It’s going to find the vulnerabilities faster. It’s going to explore an app and find zero-days faster. It’s even going to be automating attacks, like we just saw a few weeks ago with the agentic AI and some of the automation they did there.
So I think it’s like putting a turbo booster behind the hacker’s ability to come after you and attack. Is it going to find new attacks? It’s likely going to find new ways, but I look at it—what are you attacking? You ultimately have to have a vulnerability somehow. You have to have a way to get into an environment.
I frame it like this: I’m on the street, trying to break into someone’s house. There’s only so many ways to get into the house—through the door, through the windows, through the chimney, cracks in the wall. You need some way in. And as a hacker, if I can find that crack that no one knows about, okay, maybe now I can squeeze in and get in there. AI is going to help do that. It’s going to help us do it faster, for sure.
Where it’s going to be harder to defend is that it’ll be able to quickly change where it’s coming from. It’s coming from this set of IPs, then those IPs, then all of these others. It’s going to be harder to block them. You have to have better hygiene. You have to be quicker. And you have to fix what you know about, because AI is going to find ways to take advantage of it.
And I think that’s one of the big concerns. Everyone is talking, “AI, AI.” It can’t attack you if you’re not vulnerable. Now, we know there’s always something there, and the landscape’s too large to say we’re not going to be vulnerable. So you have to do the basics first. And then once you’ve got the basics down, then you’ve got to look at more technology to say, “All right, is this an AI tool? How do I detect that?”
Evan: What do you think we have to do differently? How do some of the defensive technologies need to use AI or advance in different ways to help fight against these new threats?
Chris: I can almost see a concept of—I’m going to say a certificate, maybe a bad choice of words—but how do I throw a flag, almost like your encryption certs, where you get the red ribbon in your inbox when an email comes in signed by someone?
How do you take that and apply it so that—I’ll make it up—all internal Teams videos and recordings have a certification, a certificate, a flag, something that says this was validated and authenticated through the internal measures of the company? So that when the CEO is now calling you up, you look and say, “Wait a minute, that’s coming in from the outside. It doesn’t have it in there.” External should get an external-sender banner kind of thing.
How do we teach our employees to realize that that’s not coming in from the inside? That’s not what I would’ve expected. And the same thing with audio calls, Teams calling—we know where our Teams calls should originate from. How do I identify that and say this is an authorized CEO call, and then verify that, yes, now I can say that came in from the CEO of the company.
I think we need something along those lines—almost along the lines of passwordless authentication and identity. How do I know the identity? How do we take identity into this new space to validate what’s being done in the AI space? It’s an idea. I don’t know if it’s the right one.
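Chris hedges that "certificate" may be the wrong word, but the primitive he is reaching for, cryptographically signing internal recordings and calls so a recipient can verify where they came from, exists today. As a minimal sketch (not anything Eversource has built; it assumes the third-party cryptography package, and the key handling and the internal signing service are hypothetical), an internal platform could sign a recording's bytes and a client could verify that signature before showing the "validated internally" flag Chris imagines:

```python
# Sketch: sign internal media with a company key; verify before trusting it.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# In practice the private key would live in an HSM behind the company's
# signing service, and the public key would be distributed to clients.
signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()

recording = b"...bytes of an internal Teams recording (placeholder)..."
signature = signing_key.sign(recording)  # attached as metadata

# Recipient side: show the "validated and authenticated" flag only on success.
try:
    verify_key.verify(signature, recording)
    print("verified: produced through the company's internal measures")
except InvalidSignature:
    print("warning: unsigned or tampered; treat like an external sender")
```

The hard parts are the ones Chris points at: embedding the signing step in the collaboration tools, distributing keys, and training people to notice when the flag is missing.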
Evan: To what extent do you think we can use AI to augment our critical thinking, maybe help people make better decisions in these risky scenarios, especially against these very sneaky AI attacks?
Chris: I think there’s definitely an opportunity. I correlate it to what a lot of folks are doing in the agentic space of supervisory agents. So you’ve got agents doing the process and other agents watching the process. There’s probably an opportunity there.
I think you have to design in a way to look at what’s normal. And I think that’s one of the things that AI brings to the table. It’s hard right now, even with all the technology, to say this is what’s normal for our employees. This is who I talk to normally. Here’s who I communicate with normally. Just the vast amounts of data that represents—AI is the only way to really take all of that and say, “Hey, the CEO never calls Chris Leigh. Why is he calling me to say, ‘Hey, I need you to go pay an invoice,’ when I don’t pay invoices—so you shouldn’t be calling me?” How do we take those concepts and bake them in? I think AI could help in that space. Are we there yet? No. Do we have the data that we need to actually do that? It’s hard to capture all that data. There’s a cost to all of this. So I’m going to see how it all plays out.
I’m looking—any of you startups out there—here’s the problem: how do we verify those deepfakes that are coming in and prove whether they are or are not real? I think that would be an interesting discussion.
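The "CEO never calls Chris Leigh" check is, at its core, a comparison against a learned baseline. Here is a toy sketch with entirely hypothetical names and deliberately simple rules; a real system would learn the baseline from large volumes of communication data, which is exactly the cost and data problem Chris raises:

```python
# Toy sketch: compare an incoming request against a learned communication
# baseline and the recipient's actual duties, and return reasons to slow down.
from collections import defaultdict

# Baseline built from historical logs: who normally contacts whom.
observed_contacts = defaultdict(set)
for sender, recipient in [
    ("cfo@example-utility.com", "ap-team@example-utility.com"),
    ("ciso@example-utility.com", "soc@example-utility.com"),
]:
    observed_contacts[sender].add(recipient)

# Who is actually authorized to execute payments.
pays_invoices = {"ap-team@example-utility.com"}

def flag_request(sender: str, recipient: str, action: str) -> list[str]:
    """Return human-readable reasons to stop and verify out of band."""
    reasons = []
    if recipient not in observed_contacts.get(sender, set()):
        reasons.append(f"{sender} does not normally contact {recipient}")
    if action == "pay_invoice" and recipient not in pays_invoices:
        reasons.append(f"{recipient} does not handle invoice payments")
    return reasons

# "The CEO never calls me to pay an invoice, and I don't pay invoices."
print(flag_request("ceo@example-utility.com",
                   "ciso@example-utility.com", "pay_invoice"))
```

Both checks fire on the example, which is the point: the anomaly is obvious once the baseline exists; the expensive part is building and maintaining that baseline at enterprise scale.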
Mike: Obviously, you’re in a heavily regulated industry, so you have compliance standards and regulators to deal with. Previously, I came out of the banking industry—same type of regulation. What’s your take on regulations and compliance, which often feel very slow to adapt to new approaches? You still have to balance keeping regulators happy with possibly pivoting how you approach risk and how you approach controls.
Chris: Yeah, it’s definitely evolving. We’re not at a maturity level in that space. Part of it is the education side. So most utilities are looking at AI from a cost savings perspective, optimization of business processes, load management, vegetation management—things that do not directly affect the customer in that sense.
Mike: Is there any particular initiative or innovation within your team that you’re especially proud of, or something your team may have built or designed, something unique that your team’s doing that you could share?
Chris: Eversource has developed, in partnership with the engineering group—and you made a comment earlier about drone technology—preventative maintenance is the way to put it. So we, as well as some other utilities, will fly drones over lines and use imagery to detect broken components on transmission and distribution lines.
The AI models now are coming along, and we’ve put some patterns out on this that allows us to better bring in the imagery and run it through our models and pick up damaged components or hotspots in the wires, which allows us to schedule and do repairs before we actually have power outages.
This has been one of our big successes. Our engineering group has gone out there and been able to do some machine learning even before our program formally got going on this one. But it is a game-changing technology to be able to just fly drones out there. The human eye can’t see some of this stuff, and it just helps in the resiliency of the power grid. So I think that that’s a strong win for us.
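One slice of what Chris describes, flagging thermal hotspots in drone imagery, can be illustrated with a very small sketch. This is not Eversource's pipeline: the thermal frame is synthetic, the temperature threshold is invented, and it assumes NumPy and SciPy are installed. Spotting broken hardware in visual imagery would need trained vision models rather than a simple threshold:

```python
# Illustrative sketch: scan a drone-captured thermal frame for hotspot
# clusters that may indicate a failing splice or connector, and emit an
# inspection candidate for each cluster.
import numpy as np
from scipy import ndimage

AMBIENT_C = 20.0
HOTSPOT_THRESHOLD_C = 60.0  # assumed; real criteria vary by component and load

# Synthetic 100x100 thermal frame: ambient noise plus one overheating spot.
rng = np.random.default_rng(7)
frame = rng.normal(AMBIENT_C, 2.0, size=(100, 100))
frame[40:44, 70:74] += 55.0  # simulated hot connector

# Label connected regions of pixels exceeding the threshold.
mask = frame > HOTSPOT_THRESHOLD_C
labels, num_regions = ndimage.label(mask)

for region_id in range(1, num_regions + 1):
    ys, xs = np.nonzero(labels == region_id)
    peak = frame[ys, xs].max()
    print(f"hotspot {region_id}: center=({ys.mean():.0f},{xs.mean():.0f}) "
          f"peak={peak:.1f}C, schedule inspection before it becomes an outage")
```

The value Chris highlights is the ordering: the anomaly is found from the air, long before a human eye or an outage report would surface it.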
Evan: I’ve got to ask—that one is so cool to me. Maybe because I’m a bit of a nerd—an AI and drone nerd—but are there other ones you can share that have really been transformative in that way? Or maybe even inside the building, when it comes to cybersecurity. I have to imagine AI has probably changed some of the way you guys operate when investigating or responding, or maybe, like Mike said, building tools or different internal solutions.
Chris: Probably the one on the security side that I can talk about the best would be in the threat intelligence space. If I go back over the last 13, 15 years, it used to be the government had no information to help industry out on the adversaries. “These aren’t the droids you’re looking for, there’s nothing here to see, but give us all your data so we can see what’s going on”—that’s changed.
The amount of unclassified threat intelligence that’s coming out from government partners to help industry is phenomenal. It’s coming out faster and faster. It’s got more detail. It’s almost overload now. In many cases it is, because different agencies are sending the same things—you get the same report three different ways—or they’re putting their spin on it so you get the same versions of a report a couple different ways. You just don’t have enough people to go through all of that.
So we’ve been able to put that into an AI model. These reports come out, they go right into an AI model, it gets analyzed, any IOCs get popped into our tools automatically. We’ll run them historically, see if there’s any hits on those IOCs, they reference back to the source report. So if we want to go back to government agency X and say, “Hey, this report—we actually saw some activity out the perimeter, blocked it, didn’t block it,” whatever the case, the AI is helping us deal with that. It’s not a manual effort anymore.
And that allows us to be faster and put our resources on more sophisticated activities and analysis. And you’re able to find things faster—no hand-typing IP addresses or importing an IP address from a PDF, because a PDF is just so easy to import data from. This is just quicker, easier to get all of that in there. So we’re very happy to have that tool. Again, that’s why it’s nice having AI under security—they were thinking about that while we were still forming it—and it’s been invaluable for our SOC analysts.
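To make that flow concrete, here is a toy, self-contained sketch of the pipeline Chris describes: extract IOCs from an advisory, keep the pointer back to the source report, and check them against historical activity. The advisory text, report ID, and log entry are hypothetical, and a production version would consume structured STIX/TAXII feeds and push to SIEM and firewall APIs rather than regex-scraping documents:

```python
# Toy sketch: advisory in, IOCs out, each tagged with its source report and
# checked against a stand-in for historical perimeter logs.
import re

advisory = {
    "source": "Government agency advisory AA-0000 (hypothetical)",
    "text": "Observed C2 traffic to 203.0.113.45 and 198.51.100.7; "
            "dropper SHA-256 " + "deadbeef" * 8 + ".",
}

ipv4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
sha256 = re.compile(r"\b[a-fA-F0-9]{64}\b")

iocs = ([("ip", m) for m in ipv4.findall(advisory["text"])] +
        [("sha256", m) for m in sha256.findall(advisory["text"])])

# Stand-in for a historical firewall/proxy log search.
historical_log = {"198.51.100.7": "blocked at perimeter on 2024-03-11"}

for kind, value in iocs:
    hit = historical_log.get(value)
    status = f"HISTORICAL HIT ({hit})" if hit else "no prior activity"
    print(f"[{advisory['source']}] push {kind} {value} to blocking tools: {status}")
```

The payoff is the one Chris names: analysts stop re-keying indicators by hand and spend their time on the hits that reference back to a source report.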
Mike: Are you seeing more deepfakes in the energy sector? That’s part one. And then part two: when it comes to employees in general in the energy sector, do they buy into the fact that deepfakes are here and the risk of it? Or when you or your team talk about the problem, do they blow it off as science fiction?
Chris: I think people have seen deepfakes in one way or another—whether it’s spam text messages, emails already—and didn’t necessarily know it. And I think by doing the education, we help people connect the dots.
You may have heard people doing videos—did you ever think about that link in that text that might have said, “Hey, here’s a video recording of whatever supplier reaching out,” or whatever? I think you can connect the dots for them so that it brings it home.
And I love grabbing examples like the toll booth one, like some of the phishing messages, or the CEO or the CFO trying to do payments, because it brings it to life. Do the deepfake video and say, “Okay, what if you’re the accounts payable group—what if the CFO said, ‘Hey, that payment for that vendor, I need you to get that one done.’”
It’s to the right people. It’s by someone in finance who maybe does interact with them on a regular basis. Maybe they’re pushing a little harder than they normally would for that one payment.
I’d look at it and say, “Hey folks, what are your internal controls? What are your processes?” They want you to do it—you probably have a way to do it fast if you put in the attention—but you still have to verify, get the right approvals, match certain things up. We still need to follow the right procedures to help defend against that.
And I think when you make it tangible like that, then they start saying, “Okay, yeah, I could see how that Teams call could really be a deepfake video.”
If the CFO is saying, “I got to go make this vendor payment,” got it, CFO. I’m going to go push through these controls, call my boss, get the approval that I need—we’ll get it. If everything checks out, we’ll have it right out today.
But it takes people having the confidence in the process to do what they have to do. I would venture to say most CEOs, CFOs are never going to say, “Go bypass process and just get this done.” They’re signing their Sarbanes-Oxley attestations. They’re not going to want to say, “Go make that multi-million dollar payment” and not follow a process.
So I think it’s an education. Let’s connect the dots for the employees. As a culture—if your organization is where it should be culturally—then tell them: stop, think, don’t just react. And hopefully you’ll reduce it, because usually when you overreact and don’t think a little bit, you make those mistakes.
Evan: All right, Chris. We’ve got about five, ten minutes left. The way we like to end episodes is with a bit of a lightning round. We’re looking for shorter takes—kind of the one-tweet version of answers to questions that are too hard to practically answer in one tweet. So please forgive us. Mike, do you want to kick it off first?
Mike: Yeah. So what advice would you give to a security leader stepping into their very first CISO job? Maybe something they might overestimate or underestimate about the role.
Chris: Understand why you’re going into that role. Why did they hire you? What is their problem they need to solve first? That’s where you focus.
Evan: What’s your advice for CISOs to do that better? What’s your advice on their information diet, or how should they be staying up to date with what’s going on?
Chris: Lead by example. Lead by walking around. Get out there, talk to your teams. Listen to what they’re doing. Your teams always have to keep up on the technology. We do too. We just have to do it at a different level. So put your ego away and learn. Listen.
Evan: Probably good advice for every industry there. So on the more personal side, what’s a book that you’ve read that’s had a big impact on you and why? And it doesn’t necessarily have to be cyber or work-related.
Chris: So this will be me geeking out on the health side. I just finished a book on longevity. And I apologize—I forgot, I should have gotten the author’s name. It was just on 60 Minutes about a month ago.
But it looks at what’s the medical, the emotional, the physical side of people who live to be 100, 110, 115 years old. You get the chemistry and the biological side, but ultimately when you get towards the end, there’s the emotional side of all this—the work-life balance, the persona.
It’s one thing to live long. It’s another thing to live a healthy, fruitful life. So spend time with whatever is important for you—your family. Don’t make work the only thing you do. Do something different that gives you some diversity. But make sure you enjoy every day, because these are precious days.
Evan: For the final question, Chris—this is one of my favorite ones—what do you think is going to be true about the future of AI and security that most of your peers would consider science fiction today? We’re really trying to get your contrarian take.
Chris: That’s a really good question. I’m not so sure my peers would disagree. I think they all know it’s going to revolutionize where we are. I think the virtualization of business processes is just going to be well beyond anything we know. The ability to automate and really transform the business—just thinking about robotics on the manufacturing side—this is the same thing on the electronics side.
So I’m not sure it’s that my peers aren’t in the same place. I think it’s just that it’s going to go so far so fast that it’s going to be here before we know it. It’s just a question of how fast can people adopt it.
Evan: I think I’m with you on that one. Well, I appreciate you joining us, Chris. I really enjoyed the episode. Looking forward to chatting again soon. Thank you so much.
Chris: Thank you. I appreciate the time. It’s been fun.
Mike: That was Chris Leigh, Vice President and Chief Information Security Officer at Eversource Energy. I'm Mike Britton, the CISO of Abnormal AI.
Evan: And I'm Evan Reiser, the founder and CEO of Abnormal AI. Thanks for listening to Enterprise AI Defenders. Please be sure to subscribe so you never miss an episode. Learn more about how AI is transforming cybersecurity and enterprise software, and hear exclusive stories about technology innovations at scale.
Mike: The show is produced by Abnormal Studios. See you next time!

