with Daniel M. Ringel and Bernd Skiera
Effective online advertising depends on a marketer’s ability to reach a target audience—a specific group of consumers with desired characteristics. Traditionally, marketers have identified these consumers by tracking and analyzing their online behavior. However, growing privacy concerns and new regulations are restricting this practice. In response, this research investigates an alternative strategy for reaching target audiences online: inferring consumer characteristics solely from search queries consumers use when searching online. We empirically test the premise that search queries contain valuable signals about consumer characteristics that allow marketers to identify those queries most indicative of their target audience. Across three contexts—weight loss, online dating, and personal investing—we demonstrate that search queries strongly indicate consumer characteristics such as socio-demographics, category experience, or brand preferences. A subsequent field study further supports the external validity and practical implications of these findings. Using our results, a leading retail bank launched a search advertising campaign targeting a particular high-value audience. This audience-specific campaign converted a higher share of new customers (+21.37%) who generated substantially more revenue (average trading volume per customer: +97.90%), compared to a performance-driven campaign designed by SEA experts.
with Raymond Burke and Alex Leykin
Marketers increasingly rely on large language models (LLMs) for guidance in their daily work, yet the extent of these models’ conceptual grounding remains unclear. What do LLMs truly “know” about marketing, and how effectively can they reason with and apply that knowledge? To answer these questions, we compile a dataset of approximately 33,000 questions from 25 marketing textbooks spanning 12 subfields and evaluate LLMs’ ability to answer them. Current LLMs show strong overall performance, answering 83%–87% of questions correctly. To understand the drivers of their performance, we leverage variation in answer accuracy within textbooks and assess three dimensions: (i) domain knowledge, (ii) reasoning ability, and (iii) AI-human interaction. We find consistently strong performance across subfields, including niche areas. LLMs’ reasoning abilities are strong, with near-perfect recall and understanding of concepts, but decline slightly on tasks requiring higher-order (–9%) and numerical reasoning (–13%) or involving false statement detection (–20%). By contrast, accuracy is largely unaffected by prompt or question wording. Additional experiments manipulating question phrasing indicate that the high performance does not result from matching surface patterns in our specific question set. Together, these findings suggest that LLMs can serve as capable co-intelligences for marketing professionals and educators.