On the 9th episode of Enterprise Software Defenders, hosts Evan Reiser and Mike Britton, both executives at Abnormal Security, talk with Patti Titus, chief privacy and information security officer at Markel. Markel is the insurance wing of Markel Group, a global insurance and investment company with over 17,000 employees and $12 billion in annual revenue. In this conversation, Patti shares her thoughts on the opportunities and challenges of AI’s growing popularity, the balance of AI regulation and innovation, and considerations for the next generation of AI threat response.
Quick hits from Patti:
On improving enterprise security training: “ChatGPT has given us a whole new landscape to think about how we are providing the right training and guidelines to our employees. Our adversaries are becoming more sophisticated at figuring out how to socially engineer people. It's becoming more pervasive.”
On security and AI model governance: “Model governance is going to be a necessity. And inside that model governance is a function of incident response. We are going to have to teach our people to be faster to report than the, ‘Oh, it's just phishing, I'm going to delete it.’ mentality. Don't do that. I want you to report everything. Because all the data that you provide to us enriches our threat perspective so that we can see what's really happening.”
On the duality of AI’s potential: “Can AI become pervasive enough to recognize itself as a threat? And then can it, in turn, think about how that AI is being developed and predict the next step before its counterpart predicts the next step? You are getting into a game of cat and mouse, only much more sophisticated, like the cat and the mouse are playing chess.”
Book Recommendation: The 6 Types of Working Genius - Patrick Lencioni
Evan Reiser: Hi there and welcome to Enterprise Software Defenders, a show that highlights how enterprise security leaders are using innovative technologies to stop the most sophisticated cyber attacks. In each episode, Fortune 500 CISOs share how the threat landscape has changed due to the cloud, real-world examples of modern attacks, and the role AI can play in the future of cybersecurity.
I'm Evan Reiser, the CEO and founder of Abnormal Security.
Mike Britton: And I’m Mike Britton, the CISO of Abnormal Security.
Evan: Today on the show, we’re bringing you a conversation with Patti Titus, chief privacy and information security officer at Markel. Markel is the insurance wing of Markel Group, a global insurance and investment company with over 17,000 employees and $12 billion in annual revenue.
In this conversation, Patti shares her thoughts on opportunities and challenges of AI’s growing popularity, the balance of AI regulation and innovation, and considerations for the next generation of AI threat response.
Patti, maybe to start us off, you've had a storied career. You've been at many notable organizations. You've worked in the government, right? You've worked at large cybersecurity companies, and now you lead privacy and information security at Markel. Maybe you can share a little bit about, um, how you got into this, and any highlights you want to share from your career?
Patti Titus: Yeah. I'm like the age-old story of when you complain about something, you end up doing it. So that's really how it started as a CISO, but my career actually started, we'll just say, back a long time ago when I joined the Air Force.
I went into the Electronic Security Command. I was a Morse intercept operator with the U.S. Air Force, so, yes, I have been doing this for some time. I was actually the first named federal CISO. When CISOs first became a thing, there were none in the federal government, and I happened to start at this brand new federal organization called TSA.
So we had a lot of opportunity to start fresh and start new. And so that's really where it started. I was actually hired as the wireless program manager, and I kept complaining to the CIO, "who's doing security?" And so I think I just finally wore him down, and, um, kind of the rest is history, I guess.
Evan: And would you share a little bit about your role today?
Patti: I've been at Markel now for almost eight years, which seems like I started yesterday. And then some days it seems like I started a hundred years ago.
I run privacy, security, business continuity, crisis management. We just transferred health and safety over. They gave me that in 2019, which was exciting. Privacy has been a big deal, largely because of the GDPR, so I took that over in 2017 and expanded my role. We're trying to move the conversation away from privacy and toward data protection.
So we're trying to change the conversation a little bit here at Markel. I also own Identity and Access Management, so I have a wide berth of requirements and initiatives and projects that are always ongoing.
Mike: So when you think about this cybersecurity program at Markel, what are some unique things that maybe you guys are doing that an outsider may not necessarily appreciate?
Patti: What we've done is been able to build a security operations center with the help of a service provider that has allowed our people time to mature with them. So it's kind of been train-the-trainer, and we've really appreciated that. We've added cyber threat intelligence to our portfolio, which has provided a lot of interesting and enriching data. And of course, like many, we're on that cloud journey and trying to find better ways to secure some of the big technologies we have. And when I say big, I mean big-company technologies. You always hope, and I always say hope is not a plan, but you always hope that those technologies are going to be all that you could want them to be.
And unfortunately, sometimes they come up short. And you've got to find other ways to secure your environment. So I think some of the things that we've done that are a little bit unique is we've been able to integrate our disaster recovery, our business impact assessments into our risk management strategy.
Evan: I was curious if you could share a little bit about how you've seen the threat landscape change, especially with the adoption of more enterprise software, more SaaS applications.
I have to imagine that, you know, your kind of worry list today is probably a lot different than, you know, when you first joined.
Patti: The challenge that we've had as we've converged all this technology, yet made it more dispersed, is how do I monitor data when I don't actually control the infrastructure or where it lives? The whole idea of cloud is you have this magic, redundant thing that's supposed to be functioning all the time, when quite honestly, we've seen some big shortcomings from our cloud providers where things did not work as we thought they would. We shouldn't suffer outages, technically, if we've designed our cloud infrastructure well. We used to have a tagline 10 to 15 years ago: data on demand, anywhere, on any type of device.
And you could access it. And it seemed like a really great idea because we controlled the device. We controlled where the data went. We controlled who was accessing it. Now we don't control any of that in some instances, and it's become more complicated for us.
Evan: As you said, almost all data is at the fingertips of every employee, they can access it from anywhere in the world, any network, any device, any location.
And I imagine, you know, every organization, their employees are saying, Hey, I want easier access, you know, faster. All these tools are allowing us to collaborate and communicate better, right? As we're doing right now through, through a web browser. How does that shape your view of kind of what the threat landscape looks like. Now everything's accessible to everyone, which is both the feature and a big challenge for security teams. How do you think about that?
Patti: You know, I think the big focus has switched from data protection to who's accessing the data and how we're accessing it.
So identity and access management was always important before it's become a critical function. So from an attack surface perspective, if I'm not controlling access to the data. First of all, I have to know where the data is, and that can be a challenge with cloud where it's so easy to spin up a cloud instance or to go buy a SaaS application. So there's technology now that allows us to go out and search for where people are accessing different SaaS applications to give us a better, broader view. Not big brother, but a broader view of where our people are sending data, how it's being put into different cloud instances or applications.
ChatGPT, and generative AI as a whole, has given us a whole new landscape to think about how we are providing the right training and guidelines to our employees. And I think our adversaries are becoming more sophisticated at figuring out how to socially engineer people. It's becoming more pervasive. And some of that is because people have become a bit immune to the security training, and we're going to have to get more creative in how we're reemphasizing that training. Maybe it's more gamification, maybe it's more financial incentive.
I'm not sure, but we have to come up with some new ways. Clicking through a PowerPoint's just not going to do it. Our adversaries are becoming much more in tune to what they want access to. Is it data? Is it money? Is it an email compromise? So I can insert myself into a communication. What is it? And then what are they going to do with it?
I think the separation between access brokers and threat actors has helped us get a little bit of a different mindset on how we do protection. Gaining access through an access broker versus, now I'm in, what can I do? Those are two different threat techniques, and you have to be an expert at both of them.
Mike: You bring up some really good risks, uh, especially around social engineering and just, you know, new technologies. Given all of that, over the next four or five years, where do you think security teams are going to have to probably overinvest in to help kind of stem some of those risks and threats and protect their organizations?
Patti: We're going to have to get better at encryption, but we're going to have to get better at encryption a different way.
We really have to nail DLP so that our data's not leaking out of our environment. That goes back to that AI conversation, where people can take corporate data and put it into ChatGPT or generative AI because they're trying to solve a business problem. I think our teams are going to have to pivot to be able to respond in more real time.
So we've got to get better capabilities around stopping stuff from getting into the environment in the first place. I also think we're going to have to get better tools for looking at firmware and memory. Some of the things that nation states have been doing to each other, we're going to get pulled into that game through chipset-level problems, malware on a chipset.
It's just not something that technology has been able to catch up with. Finding things that are bouncing around in memory, that's like a government thing, right? Not commercial. We need to weaponize some of those federal capabilities to allow commercial companies, critical infrastructure, to protect themselves.
Those capabilities are being closely held by the government, and we need them in industry. If the government wants to help us with the public-private partnership, they're going to have to start sharing more of those offensive capabilities. We've got to quit playing defense. We have got to start playing offense, and I don't mean somebody's threatening you, sending pings into your network, so you go try to wipe them out. That's like poking the bear. You don't want to do that. But we have to get some offensive capabilities. We've been playing defense far too long in this industry.
Evan: Patti, there are so many interesting things you said there. One thing I wanted to follow up on is, um, you talked about us being more offensive, right?
And it's always a bit of an arms race in cybersecurity, and attackers are always moving a little bit quicker, right? As defending organizations, we make progress, but we're usually one step behind. You also talked about some of these new tools like ChatGPT, generative AI, that every business is excited about, because it's very clear there are productivity gains, efficiency gains. It's easier for a broader set of employees to get access to more data and, you know, enable them, empower them to do more things. I think we're all very excited about that.
Similar to what you said about quantum, AI is a double edged sword, right? And it's a tool, and there's a bunch of tools, there's a bunch of ways that tool can be used for a lot of good stuff in the world. Love to hear your thoughts on, you know, how you see criminals and adversaries using some of those tools and what the implications are for security leaders as we go into this next chapter of cybersecurity.
Patti: I think there's an opportunity to educate our employees about what generative AI can do. And so thinking about my security team and having them question a deepfake, having them question things that look legitimate. For some reason we're seeing reconnaissance lately across the industry, back to the simple smishing attack: hey, this is so-and-so. This is the CEO of the company, and I need you to answer a question for me. Are you there? How many people actually say, yes, I'm here?
Just think about it. In your job, is the CEO of the company going to send you a text message? Highly unlikely. I mean, certainly there is a circle of employees in the company that would interact with him that way. But there's also a bunch of us who would never interact with him that way.
And so if all of a sudden that happened, my fake-news meter should be pinging, like my spidey senses. I think it's an opportunity for us to educate people. I think the problem we're going to see with AI is, the more we think we're training it: how often are you retraining? How often are you going back to zero with your AI and taking out all the junk that you may be training into it?
So I just think there's a lot of opportunity with this one, especially from an educational perspective. I think model governance is going to be a necessity. And inside that model governance is, to me, a function of incident response. So, we're going to have to teach our people to be faster to report than the, oh, it's just phishing, I'm going to delete it.
Don't do that. I want you to report everything. Because all the data that you provide to us enriches our threat perspective so that we can see what's really happening. We keep telling people, oh, don't worry about it, just delete it. If it looks bad, throw it away. I think we're going to get to the point where we're going to need to see everything, and then we're going to need to work more collectively as industries.
So using forums like FS-ISAC; we have an insurance risk council where it's just insurance companies sharing data. We're going to have to create more grassroots efforts where we're getting the CISO community together to share information and recognize this isn't competitive edge. This is operations. We're all fighting the same war, and if we don't band together, we're going to lose. So we've got to start sharing information, as CISOs, with each other, and stop worrying about whether it's impacting a competitive edge for my business. There are going to be things, like artificial intelligence, where, no, I'm not going to tell my competitor what models I'm using. That's not cyber. Cyber is about what am I doing to defend myself, and then how do I help you defend yourself, so that we can have more of the offensive-type responses. Get ahead of it, not behind it.
I actually just did a panel discussion with a few other executives for our internal audit team, who were all here for their global town hall. And the other folks on the panel were not security people and not technical, and they had really great feedback on what keeps them awake and what audit should be concerned about.
And I just went boom, right into it: we always do the out-of-band check. I pick the phone up and I call you. Well, I might be calling you, but the call might get rerouted to someone who has figured out how to intercept and hijack that phone number. Or you might respond to a text message, or you might get a deepfake that's an actual video of the individual, telling you to do something and interacting with you like a human would. And after that our executive for underwriting said no more inviting Patti to these panel discussions. So it's all those out-of-band capabilities and techniques that we have used to validate a human as the person, so we can change their password or reset their MFA, because they called and said, hey, I'm on travel and I can't get in. Help.
What else are we going to have to be able to do? I mean, put a chip in people's heads? Well, we don't really need to do that anymore, do we? Because we have these. So we always know, but do we need to geolocate a person? Now the other side of my brain, which is the privacy person, is thinking we have rights and we're infringing on them.
When do your rights as a company impinge on someone's ability to be a private human?
Evan: Patti, maybe just one follow-up to my earlier question on AI. So, I actually agree with what you just said, where... it's kind of like the old is new, right? It's the same core risks that are happening now, there are just new tools and techniques to kind of activate them, right?
And certainly cyber criminals will, you know, they'll be using generative AI emails, but the kind of strategy is still the same, right? Trick someone, right? Trick a human into doing something they shouldn't be doing. Love to hear your thoughts more on the defensive side. You know, there's a lot of hype in AI, and there's a lot of nonsense out there, but there's probably some opportunity there, even though it may be hard to identify from the outside.
What are the areas you feel bullish where AI technology can enhance or augment security operations today? And maybe just the other side of that, like, are there areas you feel like, hey, this is just overhyped and it's all kind of a little bit silly. Um, I'd love to hear your thoughts on, you know, where you see AI being applied to, you know, better defend against some of these new attackers.
Patti: So I think we have to sit down and define what we mean by AI. Each industry seems to be coming up with its own definition. Similar to what happened in privacy: everybody had their own definition, and then it got normalized. Are we talking about straight AI, where the computer's making a decision? Are we talking about supervised AI, where there's a human still in the middle of that? And isn't that really ML? So, I think there's a little hype here: oh, I have this AI tool for you, it's going to make your world better. Is it really AI, or is it ML, or is it RPA, robotic process automation? I think there's an opportunity for AI in the identity space, big time. So can AI ferret itself out?
Can AI recognize AI patterns? Can AI recognize itself in an attack? That would be unique and interesting, if AI had the capability to identify itself. Now I'm going way back to 2001: A Space Odyssey, and Dave never got back in the spaceship. They just wouldn't let him back in. So, did the AI recognize him as a threat?
Can AI become pervasive enough to recognize itself as a threat? And then can it, in turn, think about how that AI is being developed and predict the next step before its counterpart predicts the next step? You're getting into a game of cat and mouse, only much more sophisticated, like the cat and the mouse are playing chess.
And so it's point counterpoint. Can the AI predict the next step that's being used in the attack? Can it predict the next one? Or do we need to create an Iron Dome? Do we need to create the next Iron Dome for our networks? Where the Iron Dome says AI can't get in, AI is only going to be allowed in through certain types of capabilities, or it's only going to work inside a sandbox environment, or a data lake, or some sort of trusted cloud environment, and that's where you're going to run your AI.
What does that mean for companies that have added AI to their tools? I'm not really sure, but I would imagine we're not going to allow AI updates to happen on the fly like they're currently happening. I mean, I logged into one of the browsers and an AI window pops up, and I'm like, wait a minute, I didn't ask for that. Go away. And I knew that was going to happen. It happened. And then all of a sudden I started getting emails: hey, did you know this AI thing is popping up in our corporate environment? And I'm like, okay, whose idea was that? So then I start chasing it down.
All right, did you IT guys approve something that I didn't approve? Or people are smart enough to say, hey, wait a minute, I better call Patti because I don't know what this is. Is this legal? Is it legit? Did something happen? Did we get compromised? Which is good. That means people know they should pick the phone up and call me. But those 50 emails and phone calls I got in one day were a little oppressive.
So I think there's opportunities to try to figure out what do we need to put into place for that next generation threat responder. How are we going to put that iron dome around our data that's everywhere? Is there a way to like, put it in like a George Jetson capsule and that data only goes like to certain places and it's protected and AI can't actually infiltrate it?
That's going to be the other concern. A lot of companies invest in errors and omissions insurance. Do we have concerns that AI can get into corporate financial systems and make changes? And does that create enough concern that the SEC will be concerned about it, and Wall Street will be concerned about it, and our financial services will be concerned about it?
Evan: Well, we only have a couple minutes left, so we might just switch to a couple lightning-round questions so we can get some of your quick takes, the one-tweet versions. Mike, do you maybe want to kick it off?
Mike: Sure. So what's the one piece of advice you'd give to someone stepping into their first CISO job? Maybe something they might overestimate or underestimate about the job.
Patti: Run. No, I'm just kidding. It's super important to build a foundation. You've got to have GRC. I hate to say it, but without that foundation, you can have the best security operations team and it won't matter. If you don't know what your risk appetite thresholds are, you're building it for naught.
You've got to get your SOC in place first. Make sure that you've got good IR and that you're testing it. But you've got to have a strong GRC program.
Evan: Patti, you know, you, you seem very thoughtful about some of the technology trends. And I imagine there's people listening that are curious, how do they kind of stay up to date?
Any, anything to share about your kind of like, I don't know, your information diet? Like where do you, where do you kind of learn about, you know, new technology trends, new attacks, especially when it comes to some of these new technology trends around AI?
Patti: I'll be honest with you. I stay very close to the startup community.
They're innovating in areas that we're going to need capabilities for. There are a lot of companies that are hungry for CISO types to listen, just listen to their pitch, listen to their product. Does it solve the problem that other CISOs are having? It may not solve your problem, but is it good for others?
And what does it cost you? Half an hour. You tell them, you've got half an hour, make it happen. Half an hour a couple times a week, a couple times a month, whatever you can give. I think that's really important.
The other one is you just have to be a voracious reader. And getting back to that whole fake-news stuff: ask questions, be curious, be a bit provocative when you're in meetings with vendors.
Don't hold back, ask those questions. Be direct. Sure, you might get a reputation; I may have one of those. Vendors don't want you stringing them along if you've already got another solution. Just be honest: I've got another solution, tell me why I would want yours. Tell me the difference. Competitive edge is really important for companies as well, to stay current.
And then there's so many podcasts out there. Some people might go on a silent walk. I know that's kind of a thing right now, but I like to go with a little bit of company and listen to my favorite podcasts.
Mike: On the more personal side, what's a book that you've read recently that's had a big impact on you and why?
Patti: The 6 Types of Working Genius. Why? Because I have a team of people who have got to work together, and I need to understand what makes them tick. It's a really interesting book. I think it's interesting to see the dynamics of your team, and it's probably the easiest brain-data instrument I've seen come out.
It's simple. There are only six. It shows you why you might be frustrated. I happen to be an Inventor. An Inventor makes people crazy if you're not an Inventor, so it's either a frustration or your superpower. It's really helped to unlock a dynamic amongst my team, of why I drive them crazy when I say, hey, I met with this new startup company and I'd love for you to talk to them.
And they're like, what? Another one? Clearly the brain data shows how I frustrate my team. It helps me realize how to interact with people who are wired differently. It's a great assessment. It's a great tool.
Evan: I also drive people crazy for some of the same reasons. So, um, I haven't read that yet, but I've read some of his other books.
Patti: Well, the book itself is a lot of story. The last third of it, with the assessment, is the interesting part. Costs you 25 bucks. Best 25 bucks I've spent, really.
Evan: Patti, thank you so much for taking the time to join us. Great to chat with you as always. And I appreciate you sharing your thoughts and wisdom with our listeners.
Patti: Hey, thanks guys for having me on the show. I really appreciate it. And I'll say a lot of this is my opinion and I'm, I'm happy to talk to anybody who wants to, um, talk further about any of my opinions.
Evan: It's very generous of you. Thank you, Patti.
That was Patti Titus, chief privacy and information security officer at Markel.
Mike: Thanks for listening to Enterprise Software Defenders. I'm Mike Britton, the CISO of Abnormal Security.
Evan: And I'm Evan Reiser, the CEO and founder of Abnormal Security.
Mike: Please be sure to subscribe so you never miss an episode. You can find more great lessons from technology leaders and other enterprise software experts at enterprisesoftware.blog.
Evan: This show is produced by Josh Meer. See you next time.