On the 3rd episode of Enterprise Software Defenders, hosts Evan Reiser and Mike Britton, both executives at Abnormal Security, talk with Ryan Fritts, CISO at ADT. ADT is a renowned industry leader in security and automation solutions, protecting homes and businesses around the United States. In this conversation, Ryan discusses the hype vs. reality of AI in cybersecurity, the evolving security risks from adopting cloud computing, and considerations of AI regulatory frameworks.
Quick hits from Ryan:
On the promises vs realities of AI for cybersecurity: “[AI] actually poses more of a benefit to the attacker side from an intelligence perspective. The ability to generate an adversarial AI that can know what various controls might exist and how to best evade them is one that I think the adversarial component is going to be the thing that is most impactful for AI in cybersecurity.”
On AI helping sift through an explosion of data: “The volume of data when you have this many systems in this volume of data sets, the data explosion really hurts. You can't make it a people problem and a tech problem. It's got to be holistic. You have to rethink the problem entirely, you can't approach it the same way, and the advancements that have kind of really been happening on the AI and analytics front have enabled more efficient deployment of resources.”
On AI regulatory frameworks: “I think certainly there's some work out there with NIST and CMMC that they're pushing through the government contracting sector, but those aren't national regulatory standards. The SEC is pushing the disclosure framework, but clarity helps. In the absence of clarity, there's confusion.”
Recent Book Recommendation: 1% Leadership by Andy Ellis
Evan: Hi there, and welcome to Enterprise Software Defenders, a show that highlights how enterprise security leaders are using innovative technologies to stop the most sophisticated cyber attacks. In each episode, Fortune 500 CISOs share how the threat landscape has changed due to the cloud, real-world examples of modern attacks, and the role AI can play in the future of cybersecurity. I'm Evan Reiser, the CEO and founder of Abnormal Security.
Mike: And I'm Mike Britton, the CISO of Abnormal Security.
Evan: Today on the show, we’re bringing you a conversation with Ryan Fritts, Chief Information Security Officer of ADT. ADT is a Fortune 1000 company with over 20,000 employees that provides electronic security, fire protection, and alarm monitoring services for homes and businesses throughout the United States.
In this conversation, Ryan shares his perspective on the promises vs realities of AI for cybersecurity, the explosion of data in a cloud-first world, and considerations of AI regulatory frameworks.
Evan: Well, Ryan, I really appreciate you taking the time to chat with us. Do you want to share a little about what your role is at ADT today?
Ryan: Yeah - CISO at ADT, been in the role since November of 2017. Prior to that I was vice president of technology. I also have infrastructure, networking, and identity within my remit for the current role at ADT, which is a little unusual, but it definitely makes certain activities a lot easier from a security perspective when the infrastructure team reports to you.
Mike: I think a lot of people are probably familiar with the yard signs and familiar with ADT, but what are maybe some unique cybersecurity use cases that an outsider may not fully appreciate?
Ryan: I mean, an outsider might not fully appreciate the intensity of cybersecurity when your brand is security. Your brand is trust. So as stressful as information security can be generally, it does add an amplification factor when the brand is security, when the brand is trust, and you have to maintain it.
Evan: Mike and Ryan, as you both know, cybersecurity has changed a lot, right, in the last 10 years. Especially as the software people use to do business has also changed. Ryan, do you maybe mind sharing how you see that threat landscape evolving just with this kind of growing adoption of cloud software?
Ryan: Today, you know, the thought of the four walls and a moat doesn't really exist. The whole IT ecosystem has been kind of deconstructed and reconstructed in services that are hosted by third parties, and the cloud overlay is effectively a little bit like colocation of days gone by: you know, somebody else's computer, but it's still a computer. The reality, though, of what you can do in there, and the ability to do something that would be not well advised, is as easy today as it's ever been.
And that's the harder challenge. So you've got this, like, exponential increase in interconnectedness, where each one of those points is, you know, a potential failure mode from a security perspective.
Mike: Yeah. And as we look at these things that you mentioned and even fast forward to today, what are maybe some examples of emerging threats that maybe a CISO didn't have to worry about 5-10 years ago?
Ryan: Yeah, I mean, particularly with data analytics and AI advancements. If I think about the problems 5-10 years ago, it was: can you get the logs somewhere? Then the rise of log management, and you can secure by log management. The volume of data when you have this many systems and this volume of data sets, the data explosion really hurts. You know, you can't make it a people problem and a tech problem. It's got to be kind of holistic. You have to rethink the problem entirely, you can't approach it the same way, and the advancements that have kind of really been happening on the AI and analytics front have enabled more efficient deployment of resources, right?
You can take those same resources and get more out of them in terms of security outcomes than you could have in years past. You're no longer sifting through a mountain of hay to find a needle. You're sifting through a pile of needles. While the landscape is much more challenging and there are a lot more failure points, the technologies to kind of help mitigate that increase in complexity have also started to catch up.
Evan: You know, there's been an explosion in these AI technologies, and they're more accessible than ever. Like, my parents were talking to me about ChatGPT. And so, ChatGPT is kind of like the V1 of this next generation of AI tools, and those are broadly accessible. They're very powerful. How do you see the threat landscape changing as, you know, attackers start using these for offensive purposes?
Ryan: Yeah, I mean, that's the double-edged sword, right? When something gives you efficiency in an operation, it doesn't just give it to you. It gives it to everybody. And the last thing we want is ne'er-do-wells being more efficient and effective, particularly relative to, like, phishing and social engineering scams, right? And I look at ChatGPT as something that really can just change the dynamic from a social engineering perspective. Because a lot of these scams originate from non-native English speakers targeting native English speakers, and their ability to generate a conversational dialogue to extract information, in a way where it's completely not obvious that they might not even know a word of English, is scary. The ability to social engineer via prompt engineering, when you don't even have to do it in English, is a frightening proposition, I would say.
Evan: Well, what's the impact on enterprise security teams?
Ryan: Well, I mean, just think about the concept of protecting email; that's one avenue where you can at least deploy tools. Now, just realize that SMS-based and voice-based phishing still occur. And there is not a technology on planet Earth so far that will help prevent SMS-based phishing and voice-based phishing. It requires, really, just training. And even then, with a lot of the AI deepfake technology, how can you possibly train when something can give you something so believable that it's indiscernible from reality?
And at that moment, what can you do as a security person? You can't tell people to look at the phone number; the phone number can be spoofed. You know, the person on the other end of the phone that sounds like the CEO? Well, that can be deepfaked. It's gotten to the point where there's no technology that you can deploy. I mean, are we gonna give people secure phones and, you know, two tin cans and a rope when they want to call somebody?
The solutions don't exist yet. And this is where the cat is ahead of the mouse on this one. I know Congress has started looking at security obligations on the SMS side. It's one where action is most desperately needed. But if I think about voice-based, how do you account for that?
I struggle on even understanding what the security controls look like when I can train somebody perfectly, but the attacker can, if appropriately sophisticated, execute something that's indiscernible from reality. And that's a scary proposition looking forward.
Mike: So obviously, it's a scary world out there, and the threat actors are leveraging this to some large degree of success. Maybe there's some tangible results that you've seen from AI technologies on the defender side that our listeners might be surprised to hear, or maybe even underestimate how AI can be used for good in the security space.
Ryan: Yeah, I mean, I'll look at communication. I've lived through the legacy days: there's not even a spam gateway, all email gets straight to your inbox, it's a mailbox, anything that comes in just dumps on top. Then the old Barracuda spam filters, then secure email gateways. And if I look at it now, the reality of compromised accounts is real. And with third parties, compromised accounts are real. I mean, it might be easy to flag something that is an imposter domain, right? It's spoofing the from address, it doesn't line up with SPF, DKIM, DMARC, all of those fail. Okay, we can drop that. But what happens if it's a legitimate vendor that gets compromised and sends you a change in payment instructions? Well, that's an unusual thing. That is not a normal transaction via email. And there are other scenarios out there. I mean, email scams are so prevalent, if only because they basically cost nothing to undertake. And even with a low success rate, all you need is one and you've made your year.
I struggle because you can train people to look at the, you know, address headers. You can train people to look at everything. But if the other person on the end of that conversation is somebody you normally deal with, and it's coming from them, you trust them, and you start to break down these training walls that you tell people to look for. How do you find and identify that, and how do you prevent it before it becomes a loss?
That is where AI and analytics really shine: taking data, understanding the context, and calling out things that don't look like the rest. Even from, you know, a network perspective; the network is so data intensive as well. What looks like abnormal network usage? What system is talking to systems in Russia? Like, you know, should that happen? The ability of AI and analytics to look over a problem, know what normal looks like, and call things out helps turn the mountain of hay that you're trying to find the needle in into the pile of needles.
So, you know, is there some hay in there? Yes. But the things that are there still are not normal scenarios, and you should look at them and evaluate. It can really kind of materially change the game.
Mike: So on the flip side, obviously, you go to security conferences, you hear vendors, you hear AI in every marketing pitch. You know, are there some overhyped, overpromised capabilities in AI where we don't think we're actually going to see results, now or in the future?
Ryan: Yes, I think everybody assumes you drop ChatGPT into a product and magically it's a hundred times better, and it does everything for you. It solves all of your problems. That's not true. I think everybody sees the reality of ChatGPT and the efficiency side of it, and instead of highlighting efficiency, they just overpromise on what it can do: oh, it's got AI, so whatever your problem is, this will figure it out and solve it. This snake oil elixir problem can become very real, and there's only so much bandwidth you're gonna have to do POCs and evaluations. You can't test everything, but I do worry that AI is the new snake oil. That it is the magic solution in search of the problem, instead of anchoring back on the problem and how this solves it.
Evan: So what do you think will be true about AI's future impact on cybersecurity that most people today believe is science fiction?
Ryan: I'm sure most people would frame it as, what does this mean from the defender's side? And I think it actually poses more of a benefit to the attacker's side. From an intelligence perspective, the ability to generate an adversarial AI that can know what various controls might exist and how to best evade them is one that I think the adversarial component is going to be the thing that is most impactful for AI in cybersecurity.
And, you know, coming back to the point on phishing and social engineering and deepfakes: at some level, everybody conducts business with people, and people are fallible and can be tricked, and it does not matter who you are. I could be tricked. You could be tricked. Anybody listening could be tricked. You just might not have found the person that has done it yet, but it can happen. And I am completely upfront with myself, you know, intellectually honest, that I can be fooled. And it's that aspect and overlay around social engineering that I think is the most scary.
Mike: Yeah, and I guess I would say, you know, with the bad guys being able to leverage AI, it's almost kind of an AI arms race here. To what extent do you think regulation plays a role? Because, you know, we're seeing things like the AI Act, and we're seeing U.S. Congress starting to step into this space. Where does that play a role?
Ryan: I mean, it plays a role in trying to develop the controls and oversight, even liability frameworks. Having a consistent kind of liability framework is also a very difficult overlay, right? Like, think about the patchwork that exists today globally, let alone in the U.S. You know, you've got GDPR for Europe, where everything's kind of codified and known; there isn't a real federal standard here in the U.S. But if I think about just regulatory, which most people are going to think of as U.S.: the liability and liability shields. Can it be an unlimited liability if it's a nation-state actor, right? An insurance provider is going to view a nation-state actor as a potential act of war. And if a nation state moves against you as a private company, who can stand opposed? You know, you're dealing with a level of sophistication that is very difficult to oppose in perpetuity.
And you would have to be successful in perpetuity, which is a difficult, if not impossible, proposition. Does that mean you're going to be subject to large liabilities as a function of that, even if you take all necessary and appropriate precautions and hold industry-standard certifications? So I think, you know, there's certainly a regulatory overlay, particularly for having a harmonized liability framework, to understand and know with greater certainty what it should look like and what controls should look like.
I think certainly there's some work out there with NIST, on, you know, 800-53 and 800-171, and CMMC that they're pushing through the government contracting sector, but those aren't national kind of regulatory standards. The SEC is pushing the, you know, the disclosure framework, but clarity helps, right? In the absence of clarity, there's confusion.
And there just needs to be clarity of understanding. Because how do I plan? How do I weigh the pros and cons, the benefits, the impact analysis, if there isn't clarity on outcomes? And I think that's the one thing I'd love to see a lot more of: clarity.
Evan: Ryan, I've totally lost track of time because I've been, like, enchanted by some of your answers and they really resonate with me. Mike, should we try to do, like, a real quick lightning round of questions for, like, the last ones we had saved up? Ryan, you ready for lightning rounds?
Ryan: I'm ready for the lightning round.
Evan: Okay, Mike, fire first lightning bolt.
Mike: Absolutely. If you could wind back the clock and, uh, you know, give yourself advice on day one as a CISO, what would be the thing that you'd tell yourself? What, what did you overestimate or underestimate about the role?
Ryan: Oh, I would've told myself, find a really good stress outlet really fast. It's a stressful role, and I underestimated the stress out of the gate. It took me a while to find my groove on just kind of finding the daily routine that would help me manage the stress levels to the best of my ability. And it turns out that snuggles from my five-year-old are a really good solution.
Evan: Maybe next question: any blind spots you feel like new CISOs might have in their first year on the job that you'd give them some coaching advice to maybe not ignore?
Ryan: Take nothing at face value. Dig. Dig. Trust nothing. Trust no one. Prove everything to yourself. Don't just accept that what somebody told you is what is happening is what's actually happening. Trust but verify.
Mike: What's a good book you've read recently and how's it, you know, had an impact on you?
Ryan: Oh, I'll go with Andy Ellis's 1% Leadership. Really good book. I think about his obviously very long tenure there at Akamai, in a very stressful role in security. Just understanding the things that help me move the needle for the teams, for myself, for the company, and trying to find the balance.
Evan: My final question, because I know Josh is going to yell at me otherwise, uh, Ryan, any kind of final words of wisdom you want to share with the next generation of security leaders?
Ryan: Yeah, be passionate. Be passionate about understanding how things work, right? Everything that is being developed today didn't exist when I started, you know, working on computers back in the early and mid-nineties. I was just passionate about understanding how things worked, and, you know, when it comes to security, it's a set of problems, and the best way to solve them is to understand them and try to break them down into the smallest possible constructs. And the more passionate you are about understanding problem solving, the better you'll be.
Evan: Great, Ryan, appreciate you joining us as always. Really enjoyed chatting with you and let's find a chance and excuse to talk again soon.
Ryan: Indeed. Evan, Mike, much appreciated.
Evan: That was Ryan Fritts, chief information security officer at ADT.
Mike: Thanks for listening to the Enterprise Software Defenders podcast. I'm Mike Britton, the CISO of Abnormal Security.
Evan: And I'm Evan Reiser, the CEO and founder of Abnormal Security. Please be sure to subscribe so you never miss an episode. You can find more great lessons from technology leaders and other enterprise software experts at enterprisesoftware.blog.
Mike: This show is produced by Josh Meer. See you next time.