On the 29th episode of Enterprise AI Defenders, host Mike Britton speaks with Vaughn Hazen, Chief Information Security Officer at Canadian National Railway (CN), a company that operates over 20,000 track miles across North America and sits at the intersection of physical infrastructure and digital modernization. Hazen's background spans military service, electrical engineering, early internet infrastructure, and phone phreaking culture. That origin story continues to shape his practical, adversarial-minded approach to cybersecurity leadership today.
CN is a century-old rail operator with layers of legacy infrastructure and growing deployments of cloud-native and AI-powered technologies. "We've got some systems and technologies that are a little bit older. We've got some systems and technologies that are kind of leading edge," Vaughn says. "So we've got a mix of things that makes it kind of unique." That duality creates not only technical challenges, but also cultural and operational ones. As Vaughn explains, his security program must cover everything from safety-oriented machine learning deployments to deeply embedded on-prem systems that cannot be easily modernized.
CN’s ML-driven Automatic Inspection Portal is a case in point. High-definition cameras capture data from trains as they pass through, and machine learning models analyze those video streams in real time to detect mechanical issues and prevent derailments. This kind of industrial AI, Vaughn says, is a strategic safety tool. But its adoption also demands new thinking around security posture, visibility, and talent.
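The internals of CN's Automatic Inspection Portal are not public, but the general shape of the system Vaughn describes (trackside cameras feeding a defect-detection model in real time) can be sketched roughly like this. Everything here is illustrative: the `Frame` fields, the `defect_score` stub, and the threshold are assumptions, not CN's actual pipeline.

```python
# Illustrative sketch only: score each frame from a passing train with a
# defect model and flag anomalies for follow-up before a derailment can occur.
from dataclasses import dataclass

@dataclass
class Frame:
    train_id: str
    axle: int
    pixels: bytes  # raw image data from the trackside camera (placeholder)

def defect_score(frame: Frame) -> float:
    """Stand-in for an ML model returning a 0-1 defect probability.
    A real deployment would run inference here; we fake a score for the sketch."""
    return 0.9 if frame.axle == 3 else 0.1

def inspect(frames, threshold=0.8):
    """Yield (train_id, axle, score) for frames the model flags as defective."""
    for f in frames:
        score = defect_score(f)
        if score >= threshold:
            yield (f.train_id, f.axle, score)

frames = [Frame("CN-4521", axle=i, pixels=b"") for i in range(1, 5)]
alerts = list(inspect(frames))
```

The point of the pattern is that inspection happens inline with train movement: frames are scored as they arrive, and only flagged results generate human work.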
At the same time, Vaughn and his team must defend against attackers using the very same tools. Deepfakes and impersonation are not future threats; they are current attack vectors. "We've actually done some tabletop exercises, leveraging deepfakes," he shares. The goal is to stress-test CN’s processes for validating identities and preventing social engineering. "It really gets down to making sure that you have the right processes in place to be able to counter that type of fraud and impersonation."
What concerns Vaughn most is not just the sophistication of the threats, but how easily existing processes can be exploited. He offers a stark example: "You should not enable somebody to just send a message to change banking information. If your system works like that, you've got a real problem." Instead, he argues for multi-step verification protocols using pre-established contact data, and critically, for keeping workflows out of email. "Email should be a notification, not necessarily the process."
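The control Vaughn describes can be made concrete with a small sketch: the email is only a notification, and the banking change itself is gated on a callback to the contact number already on file (never one supplied in the request) plus a second approver. The record structure and function names here are hypothetical, not CN's actual workflow.

```python
# Hedged sketch of out-of-band verification for a banking-info change.
# The request arriving by email changes nothing by itself.
RECORDS_ON_FILE = {"vendor-042": {"phone": "+1-555-0100", "bank": "old-acct"}}

def apply_banking_change(vendor_id, new_bank, callback_confirmed, approver):
    """Apply the change only after callback verification and a second approval."""
    record = RECORDS_ON_FILE[vendor_id]
    if not callback_confirmed:
        return "rejected: callback to number on file not completed"
    if approver is None:
        return "rejected: second approver required"
    record["bank"] = new_bank
    return "applied"

# The email alone is rejected; only the full out-of-band process succeeds.
apply_banking_change("vendor-042", "new-acct", callback_confirmed=False, approver=None)
result = apply_banking_change("vendor-042", "new-acct", callback_confirmed=True,
                              approver="controller")
```

The design choice matches the quote: email triggers the workflow but never executes it, so a compromised or spoofed mailbox cannot, on its own, move money.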
This theme, resilience through design, runs through Vaughn’s playbook. His insistence on systems thinking extends to identity access controls, approval hierarchies, and even internal org charts. Attackers, he warns, increasingly leverage that metadata to map social structures and target mid-level employees instead of CEOs. And as phishing attacks grow more personalized through generative AI, he believes the real differentiator is user intuition. "Am I feeling like I'm being pressured to do something? Am I feeling a heightened sense of emotion? If I am, that should be a flag."
Still, he is not a pessimist about AI. Vaughn sees real upside in automating labor-intensive security tasks, such as updating CMDBs. He describes a scenario in which a chatbot uses login activity patterns to proactively confirm ownership of infrastructure assets and then updates records accordingly. "We'd have to hire a bunch of interns to go out and chase people down... That would be a one-time effort and it wouldn't keep things up to date."
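The automation Vaughn imagines might look something like the following sketch: infer the likely owner of each asset from recent login activity, have a chatbot ask that person to confirm, and write confirmations back into the CMDB so records stay current continuously rather than through a one-time intern effort. All names and data shapes are assumptions for illustration.

```python
# Sketch: keep a CMDB's ownership records fresh from login-activity signals.
from collections import Counter

def likely_owner(logins):
    """Pick the most frequent recent login as the candidate owner."""
    return Counter(logins).most_common(1)[0][0]

def refresh_cmdb(cmdb, login_activity, confirm):
    """For each asset, confirm the inferred owner (e.g., via a chatbot ping,
    stubbed here as a callback) and update the record on confirmation."""
    for asset, logins in login_activity.items():
        candidate = likely_owner(logins)
        if confirm(asset, candidate):
            cmdb[asset]["owner"] = candidate
    return cmdb

cmdb = {"srv-db-01": {"owner": "unknown"}}
activity = {"srv-db-01": ["alice", "alice", "bob", "alice"]}
updated = refresh_cmdb(cmdb, activity, confirm=lambda asset, owner: True)
```

Because the confirmation loop runs continuously against live activity, the records are refreshed as ownership drifts, which is exactly the gap Vaughn says a one-time manual sweep cannot close.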
Yet Vaughn's most sobering insight may be about the long-term cost of over-automating. "As we migrate some of these more basic tasks over to AI... we're not building that next generation of talent." Without low-level investigative work, junior analysts lose the reps that build intuition, pattern recognition, and judgment. "You give new people that work that maybe isn't all that exciting, but it starts to give them visibility into how everything goes."
That philosophy, that security maturity comes from sustained, hands-on practice, leads Vaughn to cite one of his favorite books: The 7 Habits of Highly Effective People. Its lesson is that long-term results require long-term discipline. "You can't plant the seeds and expect to harvest in the same day. A lot of the stuff that we do to prepare for a potential event is stuff that we've got to be doing in advance."
For Vaughn, AI does not remove that responsibility. It heightens it. Whether defending against deepfakes or deploying machine learning to prevent derailments, his approach is the same: make the system harder to fool, the people harder to manipulate, and the organization better prepared to adapt.
Listen to Vaughn's episode here and read the transcript here.