
Marty Brodbeck doesn’t talk about AI as a novelty or a buzzword. He speaks about it as a practical tool that underpins transformative experiences at Priceline. For the CTO of one of the world’s largest online travel agencies, generative AI isn’t just a shiny technology trend; it’s reshaping how the company connects with customers, personalizes travel journeys, and builds smarter, more efficient infrastructure.
Brodbeck described the technical foundations and organizational practices that have enabled Priceline to use generative AI effectively, why careful training and iteration matter, and how the company’s AI-powered chatbot, Penny, plays a pivotal role in enhancing customer experience and conversion.
Before diving into applications like chatbots and personalization, Brodbeck emphasized the importance of solid data architecture and real-time systems as prerequisites for meaningful AI, especially generative AI, at scale.
Priceline’s journey toward generative AI began with a strategic investment in real-time capabilities: refactoring applications for the cloud, redoing data pipelines, and building a customer data platform that could ingest search, booking, and interaction data in real time. Brodbeck explained that this foundation was “paramount to get us to a point where we could do generative AI and advance a lot of our machine learning capabilities.”
This emphasis on real-time data is about more than speed; it’s about context. Without accurate, up-to-the-moment customer information, AI models are blind to intent, preferences, and behavior. Priceline recognized that delivering generative AI value depended on linking user identity, travel patterns, and real-time signals into a coherent platform.
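To make that idea concrete, here is a minimal sketch of real-time linking, assuming a simplified event shape and an in-memory store; names like `TravelEvent` and `ingest` are illustrative, not Priceline's actual platform. The point is that search, booking, and interaction events fold into a single per-customer profile as they arrive.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical event shape; the real schema is not public.
@dataclass
class TravelEvent:
    customer_id: str
    event_type: str   # e.g. "search", "booking", "chat_message"
    payload: dict
    occurred_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# In-memory stand-in for a customer data platform keyed by identity.
profiles: dict = {}

def ingest(event: TravelEvent) -> None:
    """Fold a real-time event into the customer's unified profile."""
    profile = profiles.setdefault(event.customer_id, {"searches": [], "bookings": []})
    if event.event_type == "search":
        profile["searches"].append(event.payload)
    elif event.event_type == "booking":
        profile["bookings"].append(event.payload)
    profile["last_seen"] = event.occurred_at

# Usage: a search and a booking from the same customer land in one profile.
ingest(TravelEvent("cust-42", "search", {"destination": "Lisbon", "party_size": 2}))
ingest(TravelEvent("cust-42", "booking", {"hotel_id": "H-901", "nights": 3}))
print(profiles["cust-42"])
```

In production this role is played by streaming pipelines and a customer data platform rather than a dictionary, but the linking logic is the same: every signal attaches to a known identity in real time.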
With the architectural foundations in place, Priceline launched Penny, an AI-powered chatbot embedded across key customer touchpoints — including checkout flows, post-booking interactions, and customer care. But Brodbeck underscored that Penny’s success wasn’t the result of dropping a model into production; it required rigorous iteration and careful tuning.
“It’s really two things… the iterative nature in which we built our prompts for Penny and tested those to get it to a point where it was conversion positive…and being able to iterate on prompts to get them right so that Penny is responding in the appropriate way that’s valuable for our consumers…”
Rather than assuming a large language model (LLM) would function perfectly out of the box, Priceline treated AI deployment as a test-and-learn process. They applied real-time measurement, such as NIPBD (Net New Incremental Bookings Per Day), to understand whether Penny’s behavior helped or hindered customer outcomes.
Early prompt iterations had to be rolled back or refined because they initially reduced conversions. Iterating through roughly 20 variations, the team gradually brought Penny to neutral and then positive impact on conversion. This iterative methodology mirrors best practices in software development and experimentation: launch small, measure precisely, learn quickly, and pivot as needed.
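A minimal sketch of that test-and-learn loop follows, assuming simple conversion counts per prompt variant; the thresholds and decision labels are illustrative, not Priceline's actual experimentation framework.

```python
def conversion_rate(conversions: int, sessions: int) -> float:
    return conversions / sessions if sessions else 0.0

def evaluate_variant(control: tuple, variant: tuple, min_lift: float = 0.0) -> str:
    """Compare a prompt variant's conversion rate against the control prompt.

    control and variant are (conversions, sessions) counts from live traffic.
    A real system would also check statistical significance before deciding.
    """
    lift = conversion_rate(*variant) - conversion_rate(*control)
    if lift < 0:
        return "roll back"        # the new prompt hurt conversion
    if lift <= min_lift:
        return "keep iterating"   # roughly neutral: refine the prompt and re-test
    return "ship"                 # conversion positive

# Usage: an early variant that reduced conversion versus a later, improved one.
print(evaluate_variant(control=(520, 10_000), variant=(480, 10_000)))  # roll back
print(evaluate_variant(control=(520, 10_000), variant=(565, 10_000)))  # ship
```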
A critical insight from Brodbeck’s team was that AI outputs are only as effective as the context provided to the model. Priceline approached this problem by designing prompts that weave customer data into AI interactions — creating a kind of personalized travel dialog.
“It’s almost like object-oriented programming. You have this universal prompt…based on who the end user is… [traveling alone, with a companion, or as a family]…then the prompt expands into those three different dimensions.”
By layering personalization elements (user type, past bookings, search history, and likely intent), Priceline crafted a prompt structure that could adapt responses to individual customers’ needs. This is an important illustration of how AI’s generative power must be married with business logic and context to create real utility.
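A minimal sketch of that layered prompt structure appears below; the base text, traveler types, and context fields are illustrative assumptions rather than Priceline's actual prompts.

```python
# Universal prompt plus traveler-type "dimensions", as described above.
BASE_PROMPT = (
    "You are Penny, a travel assistant. Be concise, accurate, and helpful. "
    "Recommend options relevant to this traveler."
)

TRAVELER_EXTENSIONS = {
    "solo": "The customer is traveling alone; favor flexible, single-occupancy options.",
    "companion": "The customer is traveling with a companion; favor options for two.",
    "family": "The customer is traveling as a family; favor family rooms and kid-friendly picks.",
}

def build_prompt(traveler_type: str, context: dict) -> str:
    """Expand the universal prompt with traveler-type and personalization context."""
    parts = [BASE_PROMPT, TRAVELER_EXTENSIONS.get(traveler_type, "")]
    if context.get("past_bookings"):
        parts.append(f"Past bookings: {', '.join(context['past_bookings'])}.")
    if context.get("recent_searches"):
        parts.append(f"Recent searches: {', '.join(context['recent_searches'])}.")
    return " ".join(p for p in parts if p)

# Usage: the same universal prompt adapts to a family traveler's context.
print(build_prompt("family", {
    "past_bookings": ["Orlando 2023"],
    "recent_searches": ["San Diego hotels with pool"],
}))
```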
Contrary to the notion that generative AI initiatives require large AI research teams or specialized labs, Brodbeck revealed that Penny was built by a compact, nimble team: “The core team was…five developers, one programmer, one product manager, and one quality person.”
This small, cross-functional group enabled rapid iteration on prompts and the infrastructure needed to operationalize Penny. The key to scaling, according to Brodbeck, is not headcount but architecture and repeatable patterns, such as how prompts are structured and expanded to handle personalization.
His approach reinforces a broader lesson: with thoughtful frameworks and clear objectives, even small teams can deliver impactful generative AI solutions that drive business value.
While Penny represents a major customer-facing application, Brodbeck sees even larger opportunities in areas that are less flashy but equally impactful. One such area is customer service automation, particularly in managing large volumes of interactions with accuracy and speed.
“There’s a huge opportunity in the customer service…side of the equation…no human can ever understand all of that.” By mining historical logs, support transcripts, and product knowledge, AI can help automate responses to common queries and reduce costly human labor, freeing agents to focus on higher-value, complex tasks.
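One simplified way to picture this is retrieval over historical transcripts: match an incoming question to the closest previously answered one and reuse the known-good reply. The sketch below uses a plain bag-of-words similarity and a tiny hand-written transcript set purely for illustration.

```python
from collections import Counter
from math import sqrt

# Hypothetical historical support pairs; a real system would mine thousands of transcripts.
TRANSCRIPTS = [
    ("How do I cancel my hotel booking?", "You can cancel from Trips > select booking > Cancel."),
    ("Can I change my flight dates?", "Date changes depend on the fare; start from Trips > Change flight."),
    ("Where is my refund?", "Refunds typically post within 7-10 business days of cancellation."),
]

def _vector(text: str) -> Counter:
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def suggest_reply(query: str) -> str:
    """Return the answer attached to the most similar historical question."""
    best = max(TRANSCRIPTS, key=lambda pair: _cosine(_vector(query), _vector(pair[0])))
    return best[1]

# Usage: route a common question to a known-good answer, leaving edge cases to agents.
print(suggest_reply("how can I cancel a hotel reservation"))
```

A production system would pair retrieval like this with a generative model and clear escalation paths to human agents for the complex cases.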
Another underestimated opportunity Brodbeck highlighted is modernization of legacy technology stacks. He pointed out that generative AI models can assist in refactoring old codebases, converting outdated languages and architectures into modern equivalents far faster than traditional manual processes could. These use cases move the narrative from chatbots as conversational novelties to AI as a tool for deep operational transformation.
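As a rough illustration of that workflow, the sketch below builds a refactoring prompt around a legacy snippet and hands it to a model. Here `complete` is a stand-in for whichever hosted LLM API a team actually uses, and the COBOL-style snippet and target language are invented for the example.

```python
def complete(prompt: str) -> str:
    # Placeholder: in practice this would call an LLM provider's completion endpoint.
    return "<model-suggested modern equivalent goes here>"

LEGACY_SNIPPET = """
      PERFORM VARYING I FROM 1 BY 1 UNTIL I > 10
          DISPLAY BOOKING-TABLE(I)
      END-PERFORM
"""

def propose_refactor(snippet: str, target: str = "Python") -> str:
    """Ask the model for an idiomatic rewrite of a legacy routine."""
    prompt = (
        f"Rewrite the following legacy code as idiomatic {target}. "
        "Preserve behavior, add type hints, and explain any assumptions.\n\n"
        f"{snippet}"
    )
    return complete(prompt)

# A human engineer still reviews and tests the suggestion before it lands.
print(propose_refactor(LEGACY_SNIPPET))
```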
Priceline is also exploring AI’s promise in enhancing developer productivity, recognizing that human expertise remains critical even as AI tools evolve. Brodbeck shared that the company is partnering with tools that use large language models to assist with code completion, pull request reviews, and automated testing.
Importantly, he reiterated that AI isn’t a replacement for developers but an amplifier of developer capability, speeding up routine work so engineers can focus on building high-value features.
This stance aligns with a pragmatic view of AI: it supplements human skill and judgment rather than displacing it, especially in technical disciplines that require creativity, domain expertise, and architectural insight.
Reflecting on Priceline’s journey, Brodbeck identified several foundational pillars that enabled rapid AI innovation: a real-time data platform tied to customer identity, an iterative, measured approach to prompts, and small cross-functional teams working from repeatable patterns.
These pillars show that generative AI adoption is not an isolated project; it’s part of a larger transformation in how technology teams operate and deliver value.
Across his conversation, Marty Brodbeck offered strategic lessons for leaders applying generative AI in enterprise contexts. For technology leaders navigating the AI era, those insights point to the same conclusion: success lies not in conquering generative models, but in training them well, integrating them thoughtfully, and orchestrating them with solid engineering and data practices.