FICO has been in the analytics business for decades, but Mike Trkay’s framing of the current AI moment is less about novelty and more about discipline. The “cool, shiny new objects” era is wearing off, he says, and enterprise teams are getting pushed toward a harder question: “where’s the true return on that investment?” That pressure is changing how CIOs plan. Instead of funding dozens of demos, Mike expects enterprises to concentrate spending on a smaller set of initiatives that can be operationalized, governed, and measured.
On the 62nd episode of Enterprise AI Innovators, hosts Evan Reiser and Saam Motamedi talk with Mike, the CIO at FICO, about what that operational shift looks like in practice. Mike’s role is “a little bit unique” compared to the classic back-office CIO description. He still owns traditional IT functions, but he spends most of his time on technical operations, customer onboarding, and the platform experience for FICO’s enterprise customers. That operating posture shapes how he thinks about AI: it is only useful when it can survive production realities like reliability, latency, incident response, and support.
One of Mike’s most concrete “enterprise-class” themes is data. If you are adopting AI features embedded in tools you already run (think ITSM, CRM, or security platforms), you quickly run into a familiar constraint: your knowledge is scattered across systems. You can get answers, but they are partial because each system can only “see” its own pool of data. Mike’s advice is to prioritize data integration and pipelines so AI experiences can produce answers that reflect the enterprise, not a single application. If, on the other hand, you are building AI tools yourself, the bottleneck shifts to data quality and stewardship. In his view, proprietary data becomes a moat, but only if you treat it like “the crown jewels,” with clear ownership and controls that preserve its integrity.
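To make that concrete, here is a minimal sketch of the kind of integration Mike is describing, not anything FICO-specific: the source systems and fetch functions are hypothetical placeholders, and the point is simply that records from ITSM, CRM, and security tools get merged into one searchable store so an assistant can answer across systems instead of one pool at a time.

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    source: str              # which system the record came from (ITSM, CRM, security)
    doc_id: str
    text: str
    tags: set = field(default_factory=set)

def fetch_itsm_tickets():
    # Hypothetical stand-in for an ITSM export; a real pipeline would call the tool's API.
    return [Record("itsm", "INC-1042", "Login latency spike traced to auth service timeout", {"auth", "latency"})]

def fetch_crm_cases():
    return [Record("crm", "CASE-88", "Customer reported slow logins during onboarding", {"auth", "onboarding"})]

def fetch_security_alerts():
    return [Record("sec", "ALRT-7", "Unusual auth failures from a single network block", {"auth", "anomaly"})]

def build_enterprise_index():
    """Merge per-system exports into one store so answers span the enterprise."""
    index = {}
    for record in fetch_itsm_tickets() + fetch_crm_cases() + fetch_security_alerts():
        for tag in record.tags:
            index.setdefault(tag, []).append(record)
    return index

def retrieve(index, topic):
    """Return cross-system context an AI assistant could cite for one question."""
    return [(r.source, r.doc_id, r.text) for r in index.get(topic, [])]

if __name__ == "__main__":
    idx = build_enterprise_index()
    for hit in retrieve(idx, "auth"):
        print(hit)  # hits span ITSM, CRM, and security, not a single tool's pool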
Mike also makes the “must do” case for secure internal access to LLMs, without pretending you can stop usage. “Be open, be honest with yourselves, know that it’s going to happen,” he warns, because employees will use public tools and “unintentionally… share data and proprietary data or IP.” His recommendation is straightforward: provide a sanctioned, controlled enterprise option (even if it is based on a leading vendor model) so the default behavior becomes compliant. This is less about chasing the latest feature and more about reducing predictable risk.
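As a rough illustration of that “sanctioned option” idea (with made-up screening patterns and a placeholder vendor call, not any specific product), the sketch below routes employee prompts through one controlled path that screens and logs requests before they reach an outside model.

```python
import re

# Illustrative patterns only; a real gateway would lean on the org's DLP tooling and policies.
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # SSN-like identifiers
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),   # credential-looking strings
    re.compile(r"(?i)confidential|proprietary"),   # crude IP markers
]

def call_vendor_model(prompt: str) -> str:
    # Placeholder for the sanctioned vendor model behind the enterprise contract.
    return f"[model response to {len(prompt)} chars of prompt]"

def sanctioned_completion(user: str, prompt: str) -> str:
    """Route employee prompts through one controlled path instead of public tools."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(prompt):
            # Log-and-block keeps the default behavior compliant without banning AI use.
            return "Request blocked: prompt appears to contain restricted data."
    audit_entry = {"user": user, "prompt_chars": len(prompt)}  # retained for review
    print("audit:", audit_entry)
    return call_vendor_model(prompt)

if __name__ == "__main__":
    print(sanctioned_completion("alice", "Summarize our public press release."))
    print(sanctioned_completion("bob", "Here is our confidential scoring model spec..."))
```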
When it comes to quick wins, Mike doesn’t reach for flashy use cases. He calls out three starting points that tend to have the ingredients for fast time-to-value: (1) enterprise search across the company’s disparate repositories, while respecting role-based permissions; (2) security, where rule-writing can’t keep up with dynamic threat patterns and model-based approaches become necessary; and (3) event intelligence management, sometimes called AIOps, for telemetry-heavy operations. That last one is close to his day job: profile-driven anomaly detection (rather than endless monitoring rules), event correlation, likely root-cause prediction, and guided or automated response. The reason it works is simple: the data volume is already beyond what humans can process, making it a natural place to apply machine learning and language models for triage and knowledge retrieval.
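For the AIOps piece, a toy version of profile-driven detection and correlation might look like the sketch below. The metrics, thresholds, and grouping logic are illustrative assumptions, not FICO’s implementation, but they show why learned baselines scale where hand-written monitoring rules do not.

```python
import statistics
from collections import defaultdict

def build_profile(history):
    """Learn a per-metric baseline instead of hand-writing a threshold rule per monitor."""
    return {
        metric: (statistics.mean(values), statistics.pstdev(values) or 1.0)
        for metric, values in history.items()
    }

def detect_anomalies(profile, sample, k=3.0):
    """Flag readings that drift more than k standard deviations from their baseline."""
    anomalies = []
    for metric, value in sample.items():
        mean, std = profile[metric]
        if abs(value - mean) > k * std:
            anomalies.append(metric)
    return anomalies

def correlate(anomaly_stream, window=60):
    """Group anomalies landing in the same time window into one candidate incident."""
    incidents = defaultdict(list)
    for timestamp, metric in anomaly_stream:
        incidents[timestamp // window].append(metric)
    return [metrics for metrics in incidents.values() if len(metrics) > 1]

if __name__ == "__main__":
    history = {"cpu": [40, 42, 41, 39, 43], "api_latency_ms": [120, 118, 125, 122, 121]}
    profile = build_profile(history)
    print(detect_anomalies(profile, {"cpu": 95, "api_latency_ms": 480}))
    # Anomalies in the same window point to one likely root cause rather than two separate alerts.
    print(correlate([(100, "cpu"), (110, "api_latency_ms"), (700, "disk")]))
```

The per-metric profile replaces the “endless monitoring rules” problem Mike describes, and the correlation step is the hook where root-cause prediction and guided response would plug in.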
Finally, Mike draws a clear boundary around where broad LLMs struggle. “Every tool in the toolbox has a perfect use case,” he says, and general-purpose models are powerful but risky when you need deep domain accuracy, trust, and explainability. That is why he expects more enterprise momentum behind smaller, focused language models in regulated environments, along with responsible AI tooling that supports explainability, auditability, robustness, and compliance. The operating principle is consistent across the conversation: don’t optimize for the demo. Optimize for the workflow, the data path, the controls, and the measurable outcome.