
Universities are unlike any other enterprise environment. They are research institutions, living communities, and public infrastructure operators all at once. For John Hoyt, Chief Information Security Officer at Clemson University, the challenge of protecting such a diverse ecosystem goes beyond firewalls and endpoint monitoring. It requires AI.
"It’s a small city," Hoyt explained. “We have researchers and students that live on our network. There’s a power plant, water treatment, a police department, we've got everything.” The interconnectedness of these systems, from academic labs to dormitory networks to critical infrastructure, creates a complex and unpredictable digital footprint.
Faced with this scale and variability, Hoyt doesn’t see artificial intelligence as a futuristic concept. He sees it as a necessary evolution of how defenders think. From automated log analysis to intelligent honeypots, Clemson is exploring AI as a means to do what no human team could do alone: maintain vigilance across a sprawling, always-changing landscape.
One of the most pressing challenges in a university environment is distinguishing between legitimate activity and emerging threats. With thousands of users, countless devices, and multiple overlapping roles (students who are also employees, for example), the concept of normal behavior is constantly shifting.
“How do you keep up with all those pieces to look for unusual patterns?” Hoyt asked. “You have to understand what normal is. And it helps you to be like a human intrusion detection engine.”
That’s where AI enters the picture. Rather than defining fixed thresholds or relying on signature-based detection, Hoyt sees potential in AI’s ability to baseline behavioral norms across users and devices. “Having AI help you with that is a bit scary,” he admitted, “but I do think it is possible already.”
This isn’t about removing human analysts, but augmenting them. In Hoyt’s vision, AI becomes an intelligent sentinel, scanning petabytes of signals to surface only the most contextually relevant anomalies.
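To make the idea concrete, here is a minimal sketch of behavioral baselining at its simplest: per-user statistics over historical login hours, with a z-score test to flag outliers. The events, users, and threshold here are illustrative assumptions, not Clemson’s actual tooling.

```python
from collections import defaultdict
from statistics import mean, stdev

# Illustrative login events: (user, hour of day). In practice these would
# come from authentication logs, not a hard-coded list.
history = [
    ("alice", 9), ("alice", 10), ("alice", 9), ("alice", 11), ("alice", 10),
    ("bob", 22), ("bob", 23), ("bob", 21), ("bob", 22), ("bob", 23),
]

# Build each user's baseline of observed login hours.
# (Hour wrap-around, e.g. 23 -> 0, is ignored for brevity.)
baseline = defaultdict(list)
for user, hour in history:
    baseline[user].append(hour)

def is_anomalous(user: str, hour: int, threshold: float = 3.0) -> bool:
    """Flag a login whose hour sits more than `threshold` standard
    deviations from the user's baseline (a simple z-score test)."""
    hours = baseline.get(user)
    if not hours or len(hours) < 2:
        return True  # no baseline yet, so surface it for review
    mu, sigma = mean(hours), stdev(hours)
    if sigma == 0:
        return hour != mu
    return abs(hour - mu) / sigma > threshold

# A 3 a.m. login from a user who normally works mid-morning gets flagged.
print(is_anomalous("alice", 3))   # True
print(is_anomalous("alice", 10))  # False
```

Production systems baseline far richer signals, such as geolocation, device, and process behavior, across thousands of users at once, which is precisely where the scale argument for AI comes in.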
Hoyt is equally intrigued by AI’s potential to drive proactive security: not just detecting breaches, but enticing adversaries into controlled traps. Honeypots and canary documents have long been part of the security playbook, but they’re rarely used at scale due to management complexity.
“I think that we can get into setting traps for adversaries,” he said, “like honeypots or setting up canary token documents that phone home when they’re touched.” But deploying these tools manually is tedious, prone to misconfiguration, and often difficult to monitor. “It’s a pain to manage and troubleshoot those things.”
This is where AI could make a measurable difference. Hoyt envisions systems that not only generate decoy content automatically but also monitor interactions and adapt in real time. “AI could help us enhance that,” he said, noting the appeal of proactive, deception-based defenses.
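To illustrate the mechanics Hoyt is describing, here is a minimal canary-token sketch, under stated assumptions: it mints a unique token, plants its callback URL in a decoy file, and runs a small listener that alerts when the decoy is opened. The hostname canary.example.edu, the port, and the decoy filename are hypothetical, and real canary services add signing, central logging, and alert routing.

```python
import uuid
from http.server import BaseHTTPRequestHandler, HTTPServer

# Mint a unique token and embed its callback URL in a decoy file. Any format
# that fetches remote resources when opened (HTML, a DOCX template, a README
# with a remote image) can carry the beacon. Hostname and port are hypothetical.
token = uuid.uuid4().hex
beacon_url = f"http://canary.example.edu:8080/t/{token}"
with open("Q3_payroll_DECOY.html", "w") as f:
    f.write(f'<html><body>Payroll summary<img src="{beacon_url}"></body></html>')

class CanaryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A hit on the token URL means someone opened the decoy: it phoned home.
        if self.path == f"/t/{token}":
            print(f"ALERT: canary {token} touched from {self.client_address[0]}")
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    # Listen for beacons; a real collector would be hardened and watched
    # by the SOC rather than printing to a console.
    HTTPServer(("0.0.0.0", 8080), CanaryHandler).serve_forever()
```

The management pain Hoyt mentions shows up quickly at this layer: hundreds of such tokens need unique identities, believable placement, and continuous monitoring, which is exactly where he sees AI helping.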
Such tactics can buy defenders time, gather intelligence on attacker methods, and even act as early warning signals before actual damage is done. In a highly targeted environment like a university, where everything from student financial accounts to sensitive research is at risk, these kinds of defenses aren’t just clever. They’re critical.
While some AI applications are still theoretical, Hoyt pointed to clear, tangible uses already improving day-to-day operations at Clemson. “With AI, there are already practical uses,” he stated plainly. “Create me a script or interpret this script. Look at this error in this log message and help me decipher it.”
For a team tasked with securing thousands of endpoints, countless software packages, and diverse operating environments, these seemingly small automations add up fast. Whether generating Bash commands or interpreting cryptic log entries, AI helps analysts move faster, reduce errors, and stay focused on higher-level threats.
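As one hedged illustration of that workflow, the sketch below sends a cryptic log line to an LLM for a first-pass explanation. It assumes the OpenAI Python SDK (v1+) and an API key in the environment; the model name, prompt, and sample log line are illustrative, and any institutionally approved endpoint would serve just as well.

```python
# Assumes the `openai` Python SDK (v1+) and an API key in OPENAI_API_KEY;
# the model name and sample log line are illustrative.
from openai import OpenAI

client = OpenAI()

log_line = (
    'kernel: audit: type=1400 apparmor="DENIED" operation="open" '
    'profile="/usr/sbin/sshd" name="/etc/shadow"'
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any approved model would do
    messages=[
        {
            "role": "system",
            "content": "You are a security analyst's assistant. Explain log "
                       "entries plainly and flag anything worth investigating.",
        },
        {"role": "user", "content": f"Help me decipher this log entry:\n{log_line}"},
    ],
)
print(response.choices[0].message.content)
```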
Hoyt appreciates that AI isn’t just about neural networks or black-box systems, but can also be a tool that meets security professionals where they are. “That is stuff we can use and do use today,” he said.
This is especially valuable in environments like Clemson, where staff resources are finite and demands are constant. AI isn’t replacing cybersecurity talent so much as multiplying its impact.
While AI has potential for threat detection and automation, Hoyt also sees it as an area requiring thoughtful governance, especially in higher education, where faculty often procure or build new technologies independently.
“We’re not building our own private AI environment,” Hoyt said of Clemson’s current posture. “But mainly in the purchasing part of it. If you can be in that decision tree when folks are going out to purchase new technology… to understand what data is going to be there, what data classification, and whether they are doing the appropriate security controls.”
This reflects a broader concern: as generative AI tools become more accessible, well-meaning departments may deploy them without fully understanding their risk profiles. Hoyt’s approach isn’t to slow innovation, but to align it with policy.
AI is not just a security tool. It’s also a category of software that needs to be secured itself. Whether through privacy protections, access controls, or third-party assessments, Hoyt wants to ensure Clemson’s academic and research missions can continue safely.
The academic environment presents a blend of freedom and formality that few other industries share. Students and faculty expect academic freedom, yet, as in any other organization, compliance obligations still have to be met.
That balance between openness and oversight makes AI especially relevant. With the right design, it can respect privacy while enhancing protection. With the right governance, it can empower faculty without exposing the university to undue risk. And at a fundamental level, Hoyt believes in using AI to support, not replace, the people behind the security controls.
John Hoyt’s view of AI isn’t rooted in hype. It’s grounded in operational needs, staff realities, and institutional complexity. He doesn’t chase shiny tools; he looks for meaningful, scalable leverage. From interpreting logs to outsmarting intruders, from defending infrastructure to enabling secure innovation, AI is already making a difference at Clemson.
The path forward, he suggests, is to embrace what works, explore what’s promising, and ensure every new tool fits into a larger strategy of trust, accountability, and resilience. Because when you run a digital city, defense isn’t theoretical. It’s personal, persistent, and powered by every insight you can get.