with Christian Schaefer, Matthias Eggenschwiler and Marc Linzmajer
Abstract: Many brands leverage authenticity to connect with consumers by embodying values that resonate with them, fostering a sense of shared purpose. Yet, despite its advantages, such values-based positioning may entail substantial risks to demand. We argue that when consumer-brand relationships are premised on a brand's consistent commitment to its values, demand-side authenticity beliefs carry a binding liability: if a brand action disrupts these beliefs, it may trigger non-compensatory consumer reactance. To document the possible impact of such an authenticity violation on consumer demand, we leverage a recent brand acquisition in a mainstream consumer goods category (spices). Using difference-in-differences and synthetic control methods, we estimate that the acquisition, widely perceived as an authenticity violation, led to a 22.74% decline in brick-and-mortar sales and a 32.02% decline in online demand. A content analysis of social media records confirms that this decline was indeed driven by a loss of consumers' beliefs in the brand's authenticity. We further suggest that the violation's impact was transmitted through the brand's reliance on influencers: many influencers publicly withdrew their support following the acquisition. Our findings demonstrate how authenticity, while a valuable marketing asset, can also expose brands to severe demand risks when violated.
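For illustration only, the minimal Python sketch below shows how a two-way fixed-effects difference-in-differences estimate of such a post-acquisition demand change could be computed. The data file, variable names, and acquisition date are assumptions made for the example; this is not the authors' actual data or code.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical panel of weekly log sales per brand; "treated" flags the acquired brand (0/1)
panel = pd.read_csv("weekly_sales.csv")  # columns: brand, week, log_sales, treated
panel["post"] = (pd.to_datetime(panel["week"]) >= "2023-01-01").astype(int)  # assumed acquisition date

# two-way fixed-effects DiD: brand and week dummies absorb brand-level and seasonal differences
model = smf.ols("log_sales ~ treated:post + C(brand) + C(week)", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["brand"]})

# the interaction coefficient approximates the log-point change in treated-brand demand after the acquisition
print(f"Estimated demand change: {np.expm1(model.params['treated:post']):.2%}")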
with Daniel M. Ringel and Bernd Skiera
Abstract: Effective online advertising depends on a marketer’s ability to reach a target audience—a specific group of consumers with desired characteristics. Traditionally, marketers have identified these consumers by tracking and analyzing their online behavior. However, growing privacy concerns and new regulations are restricting this practice. In response, this research investigates an alternative strategy for reaching target audiences online: inferring consumer characteristics solely from the search queries consumers use when searching online. We empirically test the premise that search queries contain valuable signals about consumer characteristics that allow marketers to identify those queries most indicative of their target audience. Across three contexts—weight loss, online dating, and personal investing—we demonstrate that search queries strongly indicate consumer characteristics such as socio-demographics, category experience, or brand preferences. A subsequent field study further supports the external validity and practical implications of these findings. Using our results, a leading retail bank launched a search advertising campaign targeting a particular high-value audience. This audience-specific campaign converted a higher share of new customers (+21.37%) who generated substantially more revenue (average trading volume per customer: +97.90%), compared with a performance-driven campaign designed by search engine advertising (SEA) experts.
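As a rough illustration of the underlying idea, the sketch below scores search queries by how strongly they indicate a consumer characteristic using a simple text classifier. The data file, column names, and model choice are assumptions for the example, not the study's actual pipeline.

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# hypothetical data: one row per search query with a binary characteristic label
data = pd.read_csv("queries.csv")  # columns: query, in_target_audience
X_train, X_test, y_train, y_test = train_test_split(
    data["query"], data["in_target_audience"], test_size=0.2, random_state=0)

vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=5)
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(X_train), y_train)

# out-of-sample check of how well queries signal the characteristic
print("AUC:", roc_auc_score(y_test, clf.predict_proba(vectorizer.transform(X_test))[:, 1]))

# rank queries by predicted probability to select keywords for an audience-specific campaign
data["score"] = clf.predict_proba(vectorizer.transform(data["query"]))[:, 1]
print(data.sort_values("score", ascending=False).head(10))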
with Raymond Burke and Alex Leykin
Abstract: Marketers increasingly rely on large language models (LLMs) for guidance in their daily work, yet the extent of these models’ conceptual grounding remains unclear. What do LLMs truly “know” about marketing, and how effectively can they reason with and apply that knowledge? To answer these questions, we compile a dataset of approximately 33,000 questions from 25 marketing textbooks spanning 12 subfields and evaluate LLMs’ ability to answer them. Current LLMs show strong overall performance, answering 83%–87% of questions correctly. To understand the drivers of this performance, we leverage variation in answer accuracy within textbooks and assess three dimensions: (i) domain knowledge, (ii) reasoning ability, and (iii) AI-human interaction. We find consistently strong performance across subfields, including niche areas. LLMs’ reasoning abilities are strong, with near-perfect recall and understanding of concepts, but performance declines on tasks requiring higher-order (–9%) or numerical (–13%) reasoning or involving false-statement detection (–20%). Accuracy is largely unaffected by prompt or question wording. Additional experiments manipulating question phrasing indicate that the high performance does not result from matching surface patterns in our specific question set. Together, these findings suggest that LLMs can serve as capable co-intelligences for marketing professionals and educators.
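A hedged sketch of such an evaluation loop is shown below: multiple-choice questions are posed to an LLM and accuracy is scored overall and by subfield. The dataset file, column names, and model are illustrative assumptions, not the paper's actual setup.

import pandas as pd
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
questions = pd.read_csv("marketing_questions.csv")  # columns: question, options, correct_option, subfield

def ask(question, options):
    prompt = f"{question}\n{options}\nAnswer with the letter of the single best option only."
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; swap in the LLM under study
        messages=[{"role": "user", "content": prompt}],
        temperature=0)
    return resp.choices[0].message.content.strip()[:1].upper()

questions["llm_answer"] = [ask(q, o) for q, o in zip(questions["question"], questions["options"])]
questions["correct"] = questions["llm_answer"] == questions["correct_option"].str.upper()

# overall and per-subfield accuracy, mirroring the within-textbook comparisons
print("Overall accuracy:", questions["correct"].mean())
print(questions.groupby("subfield")["correct"].mean().sort_values())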
Abstract: Political parties and their members regularly deliver speeches in which they express their opinions. These speeches can reveal parties' underlying positioning, potentially at very high frequency. We propose a method for estimating high-frequency time series of party positioning from members' parliamentary speeches. Our approach leverages two recent methodological innovations: pre-trained language models and dynamic scaling. We apply it to parliamentary speeches in the German Bundestag over a 12-year period. Our monthly positioning estimates are highly consistent with a broad set of established benchmarks derived from manifesto texts, expert surveys, roll-call votes, and party embeddings. In contrast to extant approaches, our estimates uncover substantial positioning dynamics across and within legislative periods. We demonstrate that simple measures of positioning dynamics can help explain up to 20 percentage points of additional variance in weekly election polls.
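To make the pipeline's first stage concrete, the simplified sketch below embeds speeches with a pre-trained multilingual language model and aggregates a positioning score per party and month. The anchor statements, model name, and simple projection step are assumptions for illustration; the paper's dynamic-scaling model is not reproduced here.

import numpy as np
import pandas as pd
from sentence_transformers import SentenceTransformer

speeches = pd.read_csv("bundestag_speeches.csv")  # hypothetical file with columns: party, date, text
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# define a crude left-right axis from two anchor statements (pro-redistribution vs. pro-market)
anchors = model.encode(["Wir fordern mehr Umverteilung und einen starken Sozialstaat.",
                        "Wir fordern weniger Staat, niedrigere Steuern und mehr Markt."])
axis = anchors[1] - anchors[0]
axis = axis / np.linalg.norm(axis)

# project each speech onto the axis and average by party and month
embeddings = model.encode(speeches["text"].tolist(), show_progress_bar=True)
speeches["position"] = embeddings @ axis
speeches["month"] = pd.to_datetime(speeches["date"]).dt.to_period("M")
monthly = speeches.groupby(["party", "month"])["position"].mean().unstack("party")
print(monthly.tail())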