On the 12th episode of Enterprise Software Defenders, hosts Evan Reiser and Mike Britton, both executives at Abnormal Security, talk with Lynton Oelofsen, Chief Information Security Officer at Associated British Foods. ABF is a multinational food processing and retail conglomerate with 132,000 employees and over 21 billion dollars in annual revenue. The company plays a significant role in shaping the global consumer food and beverage landscape through its impressive portfolio of subsidiaries and associated brands. In this conversation, Lynton shares his thoughts on the evolving security needs of large enterprises, understanding the double-edged sword of generative AI adoption, and how AI tools can enhance the effectiveness of modern cybersecurity teams.
Quick hits from Lynton:
On preparing for AI-powered threats: "The reality is there is a skill augmentation in terms of the attack vector. The ability to leverage generative AI capability to write things at pace or to automate social engineering, and you're looking at things like that, your ability to be able to write crafted, well-positioned emails that are specific to what someone's doing...starts to become a real concern."
On cybersecurity as a continuous battle: "It's like the Golden Gate Bridge analogy of you paint from one side and you get to the end and you pretty much have to start again. That's what vulnerability chasing around felt like."
On collaborative security efforts: "No single entity can tackle the cybersecurity challenge alone. It's about collaboration, sharing knowledge, and leveraging collective strengths."
Recent Book Recommendation: The Phoenix Project by Gene Kim, Kevin Behr, and George Spafford
Evan Reiser: Hi there and welcome to Enterprise Software Defenders, a show that highlights how enterprise security leaders are using innovative technologies to stop the most sophisticated cyber attacks. In each episode, Fortune 500 CISOs share how the threat landscape has changed due to the cloud, real-world examples of modern attacks, and the role AI can play in the future of cybersecurity.
Evan: I'm Evan Reiser, the CEO and founder of Abnormal Security.
Mike Britton: And I’m Mike Britton, the CISO of Abnormal Security.
Evan: Today on the show, we’re bringing you a conversation with Lynton Oelofsen, Chief Information Security Officer at Associated British Foods.
ABF is a multinational food processing and retail conglomerate, with 132,000 employees and over 21 billion dollars in annual revenue. Through its impressive portfolio of subsidiaries and associated brands, the company plays a significant role in shaping the global consumer food and beverage landscape.
In this conversation, Lynton shares his thoughts on the evolving security needs of large enterprises, understanding the double-edged sword of Generative AI adoption, and how AI tools can enhance the effectiveness of modern cybersecurity teams.
Evan: So, yeah, maybe kick us off. Do you want to tell us a little bit about your role at, uh, Associated British Foods?
Lynton: So I am the group CISO at Associated British Foods, which basically means that from a central perspective, I am accountable to the board for cybersecurity across our 70-plus entities, which is 53 countries and 130,000 people.
Evan: And, and for maybe, um, some of our listeners that may be less familiar with ABF, do you want to share a little bit about kind of what you guys do and, um, how people might come across your, your products and services in the world?
Lynton: Yeah. So we've kind of got five channels to market. We have agricultural, which is pretty much focused around Europe in general.
We supply things like animal feed, um, you know, commodity-level animal feed at that scale. We do speciality animal feed in terms of different nutrition and stuff like that, which gets mixed in with additives and things to provide a better quality feed for animals. So we do that on the agricultural front.
We do a lot in sugar, so we're one of the largest suppliers of sugar in the UK. We're probably the largest supplier of sugar in Southern Africa. We do sugar by way of beet as well as cane, so beet in Europe and then cane in Africa. We have a grocery division, and that's quite a mixture of things.
We make Twinings tea, so we do the best tea in the world, obviously, uh, for the King and the Queen. Um, we also do localized brands like, uh, Jordans, Dorset Cereals, Ryvita, Patak's. They're more household product brands that are very well known in the UK, a bit in Europe, and probably a little bit into Africa as well.
We do a lot of bread manufacturing, or bread baking, through the grocery divisions, as well as doing, uh, milling. So we mill a lot of flour and stuff, and we supply a lot of bread into the UK. We also have a speciality ingredients division, and that's a division that has really strong technical capabilities in things like enzymes and pharmaceutical excipients.
Lipids, things like that. So they're a mixture of different types of ingredients that are either used in foods or used in, um, drug transport or other areas. You know, we use enzymes for things like detergents, so that's also there. Um, and we have a retail business, which is a large retail business in Europe called Primark.
Um, it's a good sort of value retail seller, and I think we're at about 430 stores across Europe and the U.S.
Evan: Well, it's an incredible scope, and thank you for all of your work to help keep the world fed and healthy.
Lynton: Yeah, I wish I could take some credit. I've worked in a factory before, so I've done some of the work, but the real work comes from the men and women that actually do the hard graft.
Um, we're sort of here to protect the information component of it, and in the real world nowadays as well, probably around some of the operational technology, which is some physical assets that we protect too. But, um, you know, they're the ones that do the work; we're the ones that just try and help look after it.
Evan: Do you mind sharing a little bit about the rise of cloud and the rise of, you know, SaaS applications that I'm sure you guys are using to power and optimize your business? What are some new things that have come up that you didn't have to think about, you know, five, ten years ago?
Lynton: That's kind of where it stopped, but you may have an antivirus on your device. Fast forward five years and you start to think about the ability for everyone to connect quite remotely into your environment. You know, then you start to fast forward another two or three years, and then you start to think about SaaS provisioning around things like Salesforce, Workday,
Microsoft Dynamics 365, that sort of stuff. And then you start to have that sort of realization that the control and that security component that you would have had on premise, which was only obviously controlling the gateway of people getting in, is now broadened out to the aspect of having it somewhere that is not necessarily within your control.
There was quite a shift, and a further shift is when you start to think about taking large blocks of provisioning from providers. Those providers potentially suffer a cyber incident, and then suddenly you lose quite a lot of capability from a single block, because you've put, you know, payroll, ERP and other things in a PaaS provider, platform as a service, which is basically somebody else running your hardware for you. And then you have that experience of: right.
That's quite a real impact, because I can't bring it back up; I'm reliant on a third party. I think that's something that has moved on quite dramatically. Then the adoption of the O365 stack a little bit more, you know, thinking about Teams and things like that, has moved the problem to different places, places we haven't really thought about.
You know, we protect our general sort of email estates and we protect our, um, known environment quite well; we're now doing a lot of that into things like Teams or into Slack and stuff, because, you know, that's another avenue we hadn't really thought about.
Mike: Well, it often feels like cybersecurity lags behind the business and technology, and we're always playing catch up. So when you think about maybe the next five years with, you know, continued explosion of SaaS, AI, generative AI, what are some of the investments within your security program that you feel will have disproportionate value by making those investments to keep up with the business and technology?
Lynton: I think one of the first avenues we went after was our email stack. I think that was quite an obvious one to do because that avenue, inbound and outbound, just exists, you know, and we're not going to turn it off. It's not going to go away. So that was one: getting in technology around business email compromise that allowed us and helped us to get a little bit more control over that.
You know, we rolled that out to a massive aspect of our estate and it made a big impact really quickly, so that was definitely one of those things that was important to us. Where we're also thinking about disproportionate stuff is where we've invested in vulnerability management capability that is really top drawer.
I think that was something that we did quite early on and we've stuck with it and we've adopted more and more, and I think we're seeing a lot more evidence about how to manage vulnerabilities in a more proactive way than what we did before. I still remember when I first started looking into cyber, I was an IT person in Germany at the time, and, um, one of the things that I realized was we kind of chase vulnerabilities per device. So you get a number and then you run around like an idiot trying to repair everything and patch everything. It's like the Golden Gate Bridge analogy of you paint from one side and you get to the end and you pretty much have to start again.
That's what vulnerability chasing around felt like. And I think what we've now started to realize is that if we can apply better lenses over that and more information to how we're doing it, we have way more value to add in the patching we're doing. It's expensive to do properly, but I can tell you, in terms of the value return of what you get, it gives you some confidence that you're looking after your estate in a better way.
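The "better lenses" idea Lynton describes can be made concrete with a toy risk-based prioritization sketch. This is not any particular vendor's scoring model; the weighting factors and field names below are invented for illustration, and the general idea is just combining raw severity with asset context:

```python
# A toy illustration of risk-based vulnerability prioritization: instead of
# patching every finding per device in turn, rank findings by a composite
# score so the team works the riskiest items first.

def risk_score(vuln):
    """Combine base severity with context about the asset and its exposure."""
    exposure = 2.0 if vuln["internet_facing"] else 1.0
    exploited = 1.5 if vuln["known_exploited"] else 1.0
    return vuln["cvss"] * vuln["asset_criticality"] * exposure * exploited

def prioritize(vulns):
    """Return findings ordered most-risky first."""
    return sorted(vulns, key=risk_score, reverse=True)

findings = [
    {"id": "CVE-A", "cvss": 9.8, "asset_criticality": 1.0,
     "internet_facing": False, "known_exploited": False},
    {"id": "CVE-B", "cvss": 7.5, "asset_criticality": 3.0,
     "internet_facing": True, "known_exploited": True},
]
for f in prioritize(findings):
    print(f["id"], round(risk_score(f), 1))
```

Ranking by a composite score like this, rather than by raw CVSS per device, is what breaks the Golden Gate Bridge loop of patching everything in order and starting again.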
I think what we did in terms of brand and reputational protection, in terms of getting, I don't know what the product is called, but we basically register a lot of assets as well, so domains and things like that. I think that before I joined the security team in a more senior way, I wasn't aware of how much work went into protecting just general stuff, you know, websites, brands, email addresses. You just kind of take it for granted that that stuff's yours and it's all fine.
But I think that adds loads of value, because there's a massive reputational problem that you land up with if people take over websites, and that's hard to unpick. That was one of the things. Then EDR, XDR, whatever-R you call it nowadays, the rollout of that sort of endpoint protection component adds huge value, because the behavioral analytics that you get out of those tool sets is just not possible for a human being to do at that scale and at that pace. So that's the kind of stuff that I think has made a big impact for us.
Evan: Maybe two questions for you, Lynton. Um, one is, how do you think criminals will start using some of these new technologies, you know, things like ChatGPT or other generative AI tools, to launch new types of attacks?
And then what is the implication for security teams, right? What would you recommend kind of, you know, for you and your peers to start thinking about now to kind of get ready for, you know, this future AI powered, you know, criminals that are coming soon?
Lynton: The reality is, I think that there is a skill augmentation in terms of the attack vector, isn't there? So the ability to upskill, the ability to leverage generative AI capability to write things at pace, or, you know, if you're going after the sophisticated social engineering and you're looking at things like that, your ability to be able to write crafted, well-positioned emails that are specific to what someone's doing. You know, Susie is the PA from LinkedIn, and then suddenly you say, oh yeah, Susie's the person, this is what you're after.
You'd kind of give it some prompts, and then what you end up with is a really well-crafted, CEO-sounding or CFO-sounding email that I think more traditionally would be a lot harder to craft in a language or in a manner that most cyber criminals probably would have come across. And then you can do that at scale.
I think that starts to become a real piece. Then there are the concepts around attack automation, in terms of what might be possible by using generative AI to do things in a less person-orientated way. So how do you get generative AI to support you in the way that you do the attack, and then also pivot a lot faster?
Because you know, the thing that a human being, I guess, does at a slower pace is consume the response and then figure out what to do next. Whereas I think from an AI kind of component, I think you can do that at pace, which I think is, is a challenge. And then generating more robust content around what you want to do.
So if I'm going to go after business email compromise, that's one thing, but what happens if I wanted to go after the reputational kind of slander space? I could start to generate what seems to be reality around stories that have been around for 20 years and are dug up, and this is what it looks like.
You get generative AI to help you do quite a lot of that sort of thing. You know, one of the things that I saw that was quite scary was the idea of saying that there was a virus in, I don't know, let's say an Eastern country 20 years ago, and then fabricating that it was real, and then kind of publicizing that out to the internet.
It's really hard to know that that's not true, and to me, that is where that kind of attack component starts to become a real, real concern. We can only invest and defend in a certain number of areas, isn't it? There's a finite amount of investment we can make in what we can defend against. The challenge around the threat actors is that they only have to get it right once, in a certain area that we may not have invested as much in, isn't it?
So it's hard to know where to pick your battles. And I guess that's where the defend component then starts to come in: how can you use AI to provide more coverage in a way that is otherwise difficult to do? So, you know, what I think about there is that a SOC can only consume so much information. You know, alert fatigue is real and all that sort of stuff.
If you start to apply generative AI over the top of that, and you give it an ability to use the LLM in the background to be able to consume that information, I think you can get rid of a lot of the noise and you can focus on the right areas. And I think that kind of gives you a defendable position, which I don't think you would have if you start to have large amounts of automated attacks being generated.
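The noise-reduction idea Lynton describes can be sketched roughly as follows. This is a toy illustration, not any particular product: the alert fields are invented, and the LLM is injected as a plain callable so the sketch runs without a real model; in practice that callable would wrap a chat-completion API call.

```python
# Sketch of "generative AI over the SOC queue": collapse duplicate alerts,
# then hand only the unique ones to an LLM-backed classifier, so analysts
# see a short escalation list instead of the raw feed.

from collections import Counter

def triage(alerts, classify):
    """Deduplicate alerts by signature, then ask the classifier which need a human."""
    seen = Counter(a["signature"] for a in alerts)
    unique = {a["signature"]: a for a in alerts}.values()
    escalate = []
    for alert in unique:
        verdict = classify(alert)  # an LLM call in a real deployment
        if verdict == "escalate":
            alert["count"] = seen[alert["signature"]]  # how noisy it was
            escalate.append(alert)
    return escalate

# A stand-in classifier: escalate anything touching an admin account.
fake_llm = lambda a: "escalate" if "admin" in a["user"] else "ignore"

queue = [
    {"signature": "impossible-travel", "user": "admin-jdoe"},
    {"signature": "impossible-travel", "user": "admin-jdoe"},
    {"signature": "failed-login", "user": "jsmith"},
]
print(triage(queue, fake_llm))  # one escalated alert, annotated with count=2
```

The design choice worth noting is that the model only sees deduplicated alerts: the cheap pre-filter handles volume, and the expensive judgment call handles novelty.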
Mike: And then I think the million-dollar question is always: we can think of all the examples of how these attacks could happen, but have you seen or heard of any actual attacks where you feel like, hey, the attacker is leveraging AI or generative AI in perpetrating this attack?
Lynton: I can tell you a story from a friend, and that just kind of removes me from the situation. Um, look, I think what we have seen is Teams being leveraged quite heavily, and then exceptionally well-crafted emails along the lines of: we're having a review of the way that our organizational staff structure is sitting, these are the things we're doing, with very specific documents around consultation and what that looks like, please look at this. That is exceptionally well thought through. I feel like that is probably more crafted by somebody saying, hey, ChatGPT or others, please give me a CEO-position letter, or head-of-HR or whatever, because we hadn't seen it at that quality before.
That'd be one. Two, you could also probably buy a toolkit, so somebody's gone off, created it themselves, and convinced people that it's the right thing. So whether that's generative AI, I'm not sure. But I don't think there's a specific marker or a flag that I would say is showing me that it's there today.
That doesn't mean that it's not there, it just means that it's harder to tell whether it is or isn't, which is probably even more concerning because you can't tell it apart from the human piece, which is the point, isn't it?
Evan: Yeah, you can imagine, especially with the fast paced development of these tools, right?
Even if it is distinguishable today, well, it's just a matter of time before this AI generated content is indistinguishable, right? It's not just going to be emails and Teams messages, right? It's going to be FaceTime and Zoom and, you know, who knows what else in the future.
Lynton: And that's a good point, isn't it? Because it's going to be, I mean, I don't want to go back to things like deep fakes and stuff like that. But, um, you know, the idea of being able to then generate a voice or a close enough impression of that voice that, um, you end up with a situation of, um, I'm pretty sure that sounds like my boss, or it sounds like somebody from the bank or whatever.
It's going to be a lot harder to distinguish whether it's real or not. I'm not looking forward to that, because I'm not sure how you'd figure that out over the phone. I also think that, you know, if you think about data loss or data theft, the ability to parse that data meaningfully is also becoming a lot simpler.
I mean, you can shove a whole bunch of information into a ChatGPT-type tool, and it can ingest it and give you meaningful insights and analytics over what you've given it. So you could probably weaponize the information you've just stolen much faster than you would have done in the past. You know, Bob's my criminal fella. Say it takes Bob 17 years to look at it all; he shoves it into ChatGPT and, next thing, two hours later, he's got more than enough content out of that than he could have done anything with if he'd spent all that time. That's probably another key.
Evan: You know, Lynton, that's a really good point that I haven't heard anyone else talk about; it's kind of obvious once you say it.
Lynton: Yeah, it is. The thing that worries me about that is if you look at PII information, and you have, is it LAM, the piece that allows you to set the parameters as to what ChatGPT and others do?
I don't know, there's a layer for that; I'm not sure what it's called. But if you have the ability to purchase your own stream of it and remove a lot of that control set, you can just say: look for PII information, tell me all this sort of stuff. And then suddenly you've got a list of stuff that might have been buried in a bunch of files and a bunch of texts.
That to me is kind of the piece that accelerates the problem that you then experience next, which is: I can publish a whole bunch of information about a whole bunch of people quite quickly. And because there's no regulation, and I know there's a lot of talk about it and I know Europe's doing stuff, but because it's an unregulated area in that sense, what you also end up with is a question over the quality of the control set in the background, and where that information goes, and how it's then processed, and what gets done with it again.
It means that if you let it loose and your business uses it, and they decide to do something with it and they put a whole bunch of data and information into it, legitimately, not trying to be malicious in any way, you also don't know what that then feeds into. So other people then have access to it in a non-criminal way, but that doesn't mean it doesn't get criminalized, or it doesn't get weaponized to do something else.
Evan: So, Lynton, I think we've, uh, left the audience with a grim look at the future, right? We've talked about how AI is going to enable more people to be criminals, allow them to operate at a way higher scale, and create superhuman levels of sophistication in attacks. So maybe talk about the other side, right?
Like, what should give us optimism? What role do you think AI will play in helping defenders not just stop those attacks, but ideally get even further ahead of the criminals?
Lynton: Yeah. Why not consider either fully or close to fully automated defense capability based on AI, isn't it?
Because again, I'd come back to: what is the main challenge? So something gets flagged, there's an alert. I can teach every security analyst that I have to always assume compromise, but there is still a human factor of making a decision about what's going on, isn't there? So: now I've looked at that, I think something's going on, but I won't raise it as a ticket.
Well, it's not supposed to be that. On an AI front, you can eliminate that kind of component. So you could basically, in my view, set up more gen AI kind of modeling that allows you to always assume breach and then follow a process around what that means. It doesn't have to stop or disconnect accounts or do anything like that, but it gives you the ability to really follow through the concept of an investigation in the short term, much more easily than analysts can.
Then you drop the really gritty stuff into an analyst's inbox so they can go after it. I think that is going to be game-changing for organizations that get thrashed with stuff. You know, you think about how many people you need and what that costs. Whereas capability-wise, I can go after some really excellent people and get those people to focus on some really challenging things.
That's a much better space to be in. I think AI will bring that into the market quite quickly. I think the ability to work through and investigate indicators of compromise, I think AI will help with as well. You know, because you can feed it the information, you can tell it what they are and what they need to look for.
I think it can peruse through that and figure out what's going on in that space a lot faster than humans can, or it can flag very specific areas that humans need to go focus on because they're anomalous, much quicker than a human can. Because, you know, it understands that parameter and pattern well; it would look at the same thing a thousand times if it has to, at different angles, whereas a human being can look at it three or four times and then sort of gets up and says, I think it's okay.
Those are the things that I think will be game-changing. The more challenging things, I think, are going to be much easier to do with AI if it keeps moving in that direction, which I think it is. And that's the point, isn't it? I think a lot of the organizations that sell cybersecurity capabilities are adopting AI as fast as anybody else, because they can see that there is a massive opportunity to actually help, you know, make a difference.
I think that's the key for me.
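The machine-speed indicator sweeping Lynton describes, looking at the same thing a thousand times without getting up, can be sketched very simply. The indicator values and log lines below are invented for illustration; a real pipeline would pull indicators from a threat-intelligence feed and scan far larger telemetry:

```python
# A minimal sketch of machine-speed IOC sweeping: check every log line
# against a set of indicators of compromise, a task a human analyst could
# not sustain at this scale and pace.

iocs = {"203.0.113.7", "evil-updates.example.net", "9f86d081884c7d65"}

def sweep(log_lines, indicators):
    """Return (line_number, matched_indicator) for every hit."""
    hits = []
    for n, line in enumerate(log_lines, start=1):
        for ioc in indicators:
            if ioc in line:
                hits.append((n, ioc))
    return hits

logs = [
    "GET /update from 198.51.100.2",
    "DNS query evil-updates.example.net from host fin-ws-04",
    "hash 9f86d081884c7d65 seen on fin-ws-04",
]
print(sweep(logs, iocs))  # lines 2 and 3 match known indicators
```

The point of the sketch is the shape of the work, an exhaustive cross-product of telemetry against indicators, which is exactly the kind of tireless pattern-matching that machines do well and humans abandon after a few passes.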
Mike: Do you think that changes kind of what you look for as you hire cyber security professionals for your team? I mean, what are some of the qualities you think you're looking for as these other things change?
Lynton: I think it might into the future. I don't think it has for me yet.
But what I would say is that you'd probably start to look for slightly different qualities. I think you want people to be more challenging and investigative around what's coming out, so you'd look for that quality, whereas with analysts in general you might have more of a churn-of-work sort of mentality.
You want to go after a different quality set. But I don't think we've bumped into it yet. Would I change my hiring criteria based on what I'm thinking generative AI might deliver? I haven't thought about it, but now I will be, so thanks for that. Because it's right, isn't it?
If you feel that staff productivity can be bolstered quite heavily, quite quickly, in the short term by AI, then what's the next capability you need? What's the next bit, the layer above that? Or if you've already got staff that you're invested in, what are you getting them to upskill towards, probably again using AI to help them upskill to where they need to get to?
Evan: So we've got five minutes left. I want to finish up with a bit of a lightning round. Let me ask you three or four questions, looking for the one-tweet response, though these questions might be hard for the one-tweet version. Um, so Mike, you want to kick it off for us?
Mike: Absolutely. So if you could go back in time, prior to stepping into this role as CISO, what's the one thing you wish you would have known going into this role?
Lynton: The predominant thing in my organization is the education of what cybersecurity means, impact-wise and disruption-to-business-wise. Avoid the scaremongering and actually go and have proper conversations about impact.
I think we would have gotten to the table with a lot of the businesses much faster, in my opinion.
Evan: So, Lynton, one thing I've always been impressed by with you is that you're able to look at this both through the board lens while also staying very close to the details, sort of understanding some of the specific attacks.
What's your advice to other CISOs about how they can stay up to date with the latest stuff without being overwhelmed, you know, without studying every research report? Like, what's your information diet, right, to help keep you up to date?
Lynton: The people you have in your team, if you have a team, they are absolutely gold. I mean, the people who enjoy the technical piece more than anybody else, you need to have debrief kind of conversations with them every second week or whatever, and get them to fill you in.
I think because they love that stuff, their capability to ingest it, digest it properly, and give you the feedback is probably better than what you would get trying to absorb that stuff yourself. That'd be one. And two, I genuinely think you can find quality information in other CISOs that you know.
So we have groups of CISOs. I'm on a couple of chat groups, I'm on a couple of Discord groups and all that. We share a lot of information and it's poignant. You can ask a question to say, I'm suffering through X, what are you doing? And you've got a group of people who are willing to say to you, we did X, we did Y.
That makes a big difference. That's like having your own little LLM built on actual people with lots of knowledge. Community stuff like that, networking like that, and getting people you can trust makes a big difference in this game.
Evan: It's like artificial, artificial intelligence.
Lynton: Yeah. A hundred percent. But they have names. It's brilliant.
Evan: Yeah. Yeah. And a lot of experience and knowledge and they care about stopping crime.
Lynton: Exactly. Superheroes.
Mike: So on the more personal side, what's a book that you've read that's had a big impact on you and why?
Lynton: It was a book I read called The Phoenix Project.
I don't know if you know that one. That one was quite good. And then there was the one that followed around the CIO piece, and then there was another one that they wrote about the CISO area. And I can tell you what, I realized that there is probably a little bit of writer's freedom in terms of expression, you know, going off and changing a few things and saying some stuff.
But reading that book, you could almost sit there and replace, you know, Steve's name with the person at your office. And I don't know, I just felt it hit home a lot more, that it wasn't just me in a cycle of pain, that there were other people doing it. I don't know why we find, as human beings, that if we have a group of people who are suffering, we all feel a bit better, which is odd, but, um, I enjoyed those immensely.
I purposefully read books that are escapism, like the movies I watch, because between my real life and the amount of the Financial Times that I read, I need a way to disconnect. So the last great book I read was the second book in the Witcher series, because I prefer to disconnect from things. But yeah, those Phoenix Project books and stuff like that, they are an excellent set of reads.
And then a bunch of self-help ones so I don't cry myself to sleep at night, but otherwise it's fine.
Evan: Oh yeah, we're gonna have to trade some book notes for, you know, science fiction or fantasy. I got some recommendations. I want to hear yours.
Lynton: Problem is you say switch off and then suddenly I find myself reading at like 1:30 in the morning going, come on, go to bed.
Evan: That's right. Lynton, thanks so much for spending time with us today. Great to chat with you as always, and looking forward to chatting again soon.
Lynton: Thank you very much. It was great to speak to you again, Evan. And thanks, Mike.
Evan: That was Lynton Oelofsen, Chief Information Security Officer at Associated British Foods.
Mike: Thanks for listening to the Enterprise Software Defenders podcast. I'm Mike Britton, the CISO of Abnormal Security.
Evan: And I’m Evan Reiser, the CEO and founder of Abnormal Security.
Mike: Please be sure to subscribe so you never miss an episode. You can find more great lessons from technology leaders and other enterprise software experts at enterprisesoftware.blog.
Evan: This show is produced by Josh Meer. See you next time.