
Performance implications of deploying marketing analytics

Frank Germann b,1, Gary L. Lilien a,⁎, Arvind Rangaswamy c,2
a Smeal College of Business, Pennsylvania State University, 484 Business Building, University Park, PA 16802, USA
b University of Notre Dame, 395 Mendoza College of Business, Notre Dame, IN 46556, USA
c Smeal College of Business, Pennsylvania State University, 210E Business Building, University Park, PA 16802, USA

⁎ Corresponding author. Tel.: +1 814 863 2782; fax: +1 814 863 0413. E-mail addresses: fgermann@nd.edu (F. Germann), GLilien@psu.edu (G.L. Lilien), arvindr@psu.edu (A. Rangaswamy).
1 Tel.: +1 574 631 4858; fax: +1 574 631 5255.
2 Tel.: +1 814 865 1907; fax: +1 814 865 7064.
http://dx.doi.org/10.1016/j.ijresmar.2012.10.001
Article info
Article history:
First received on 29 February 2012 and was under review for 3½ months
Available online 27 November 2012
Area Editor: Dominique M. Hanssens
Keywords:
Marketing analytics
Marketing models
Marketing ROI
Abstract

A few well-documented cases describe how the deployment of marketing analytics produces positive organizational
outcomes. However, the deployment of marketing analytics varies widely across firms, and many C-level executives
remain skeptical regarding the benefits that they could gain from their marketing analytics efforts. We draw on
upper echelons theory and the resource-based view of the firm to develop a conceptual framework that relates
the organizational deployment of marketing analytics to firm performance and that also identifies the key antecedents
of that deployment. The analysis of a survey of 212 senior executives of Fortune 1000 firms demonstrates
that firms attain favorable and apparently sustainable performance outcomes through greater use of marketing
analytics. The analysis also reveals important moderators: more intense industry competition and more rapidly
changing customer preferences increase the positive impact of the deployment of marketing analytics on firm
performance. The results are robust to the choice of performance measures, and, on average, a one-unit increase
in the degree of deployment (moving a firm at the median or the 50th percentile of deployment to the 65th percentile)
on a 1–7 scale is associated with an 8% increase in return on assets. The analysis also demonstrates that
support from the top management team, a supportive analytics culture, appropriate data, information technology
support, and analytics skills are all necessary for the effective deployment of marketing analytics.
© 2012 Elsevier B.V. All rights reserved.
1. Introduction
A recent Google search for “marketing analytics” returned more
than 500,000 hits. Marketing analytics, a “technology-enabled and
model-supported approach to harness customer and market data to
enhance marketing decision making” (Lilien, 2011, p. 5) consists of
two types of applications: those that involve their users in a decision
support framework and those that do not (i.e., automated marketing
analytics). During the past half century, the marketing literature has
documented numerous benefits of the use of marketing analytics, including
improved decision consistency (e.g., Natter, Mild, Wagner, &
Taudes, 2008), explorations of broader decision options (e.g., Sinha
& Zoltners, 2001), and an ability to assess the relative impact of decision
variables (e.g., Silk & Urban, 1978). The common theme in this
literature is the improvement in the overall decision-making process
(e.g., Russo & Schoemaker, 1989, p. 137).
Rapid technological and environmental changes have transformed
the structure and content of marketing managers’ jobs. These changes
include (1) pervasive, networked, high-powered information technology
(IT) infrastructures, (2) exploding volumes of data, (3) more
sophisticated customers, (4) an increase in management’s demands
for the demonstration of positive returns on marketing investments,
and (5) a global, hypercompetitive business environment. In this
changing environment, opportunities for the deployment of marketing
analytics to increase profitability seemingly should abound. Indeed,
an entire stream of research in marketing documents the
positive performance implications of deploying marketing analytics
(e.g., Hoch & Schkade, 1996; Kannan, Kline Pope, & Jain, 2009;
Lodish, Curtis, Ness, & Simpson, 1988; McIntyre, 1982; Natter et al.,
2008; Silva-Risso, Bucklin, & Morrison, 1999; Zoltners & Sinha, 2005).
However, there continue to be many skeptics with regard to the
“rational analytics approach” to marketing. For example, in a recent
interview with one of the authors, a (former) senior executive at one
of the world’s leading car manufacturers claimed that “…marketing
analytics-based results usually raise more questions than they
answer,” and he asserted that “the use of marketing analytics often
slows you down.” He also claimed that the “…performance implications
of marketing analytics are at best marginal.” When we inquired
about documentation for his views, he referred us to Peters and
Waterman’s (1982) highly influential book, In Search of Excellence, in
which the authors denounce formal analysis because of its abstraction
from reality and its tendency to produce “paralysis through analysis”
(p. 31). More recently, a study of 587 C-level executives of large international
companies revealed that only approximately 10% of the firms
regularly employ marketing analytics (McKinsey & Co., 2009). And
Kucera and White (2012) note that only 16% of the 160 business
leaders who responded to their survey reported using predictive analytics,
although those users “significantly outpace those that do not in
two important marketing performance metrics”³ (p. 1).
John Little diagnosed the issue more than 40 years ago as follows:
“The big problem with … models is that managers practically never
use them. There have been a few applications, of course, but the practice
is a pallid picture of the promise” (Little, 1970, p. B-466). Revisiting the
issue, Little (2004, p. 1858) reports that “The good news is that more
managers than ever are using models … what has not changed is organizational
inertia.” Winer (2000, p. 143) concurs: “My contacts in consumer
products firms, banks, advertising agencies and other large
firms say that [model builders] are a rare find and that models are not
used much internally. Personal experience with member firms of MSI
indicates the same.”
The low prevalence of marketing analytics use implies that many
managers remain unconvinced about the benefits that accrue from
that use. In addition, most research studies that document these benefits have focused on isolated firm or business unit “success stories”
without systematically exploring performance implications at the firm
level. Given the lack of compelling evidence about the performance implications
of marketing analytics, the objective of this research is to address
two questions: (1) Does widespread deployment⁴ of marketing
analytics within a firm lead to improved firm performance? and (2) If
the answer to (1) is “yes,” what leads to the widespread deployment
of marketing analytics within firms? With the usual caveats and cautions,
particularly with regard to making causal inferences using
non-experimental data, we find that the answer to question 1 appears
to be “yes” and, hence, the answer to question 2 has high managerial
relevance, as well as academic importance.
To address our research questions, we propose a conceptual framework
that relies on both the resource-based view (RBV) of the firm
(Barney, 1991; Wernerfelt, 1984) and upper echelons theory (Hambrick
& Mason, 1984) to model the factors that link marketing analytics
deployment to firm performance, as well as the factors that drive the
deployment of marketing analytics. We assess the validity and value of
that framework with data drawn from a survey of 212 senior executives
at Fortune 1000 firms, supplemented by secondary source objective performance
data for those firms. We find that the deployment of marketing
analytics has a greater impact on firm performance when the industry is
characterized by strong competition and when customer preferences
change frequently in the industry. We also find that top management
team (TMT) advocacy and a culture that is supportive of marketing analytics
are the keys to enabling a firm to benefit from the use of marketing
analytics, and our analyses suggest that the benefits realized by marketing
analytics deployment may be sustainable.
We proceed as follows: We first present our conceptual framework
and hypotheses and, then, describe our data and our methodology. We
then present our findings and discuss their theoretical and managerial
implications, as well as the limitations of our research.
2. Conceptual framework
The conceptual framework in Fig. 1 depicts what we refer to
as the marketing analytics chain of effects. The framework articulates
our predicted relationships, including the hypothesized relationship
between the deployment of marketing analytics and firm
performance.
We propose that marketing analytics deployment, which we define
as the extent to which insights gained from marketing analytics guide
and support marketing decision making within the firm, has a positive
impact on firm performance. However, this positive impact on firm
performance is likely to be moderated by three industry-specific
factors: (1) the degree of competition faced by the firm, (2) the rate of
change in customer preferences, and (3) the prevalence of marketing
analytics use within the industry. Furthermore, we identify TMT advocacy
of marketing analytics as a vital antecedent of the deployment of
marketing analytics. We suggest that a firm’s TMT must not only commit
adequate resources in the form of employee analytic skills, data,
and IT but also nurture a culture that supports the use of marketing analytics.
Such a culture can ensure that the insights gained from marketing
analytics are deployed effectively.
In the following section, we first elaborate on the link between the
deployment of marketing analytics and firm performance. Next, we
consider the antecedents of the deployment of marketing analytics;
i.e., the resources and organizational elements that we posit must
be in place for marketing analytics to be deployed effectively.
2.1. The performance implications of deploying marketing analytics
A few authors (primarily authors writing for non-academic journals)
suggest that the use of marketing analytics can slow firms down,
leading to missed market opportunities that are seized by more
agile and non-analytics-oriented competition. For example, citing
General Colin Powell’s leadership primer, Harari (1996, p. 37) suggests
that “excessive delays in the name of information-gathering
breeds analysis paralysis,” which leads to missed opportunities and,
hence, subpar firm performance. Peters and Waterman (1982) predict
an analogous effect. Additionally, based on our discussions with executives,
we conclude that many top managers share similar notions
regarding the performance outcomes of marketing analytics use.
However, there are many firm-specific case studies that describe
the positive performance impact of marketing analytics use. For
example, Elsner, Krafft, and Huchzermeier (2004) demonstrate how
Rhenania, a medium-sized German mail order company, used a dynamic,
multilevel response modeling system to answer its most important
direct marketing questions: When, how often, and to whom
should the company mail its catalogs? The model allowed the company
to increase its customer base by more than 55% and quadrupled its
profitability during the first few years following implementation, and
the firm’s president asserted that the firm was saved by deploying
this model.
Marketing analytics can also significantly improve a firm’s ability
to identify and assess alternative courses of action. For example, in
the 1980s, Marriott Corporation was running out of adequate downtown
locations for its new full-service hotels. To maintain growth,
Marriott’s management planned to locate hotels outside downtown
areas to appeal to both business and leisure travelers. A marketing
analytics approach called conjoint analysis facilitated the company’s design and launch of its highly successful Courtyard by Marriott chain, helping it establish a multibillion-dollar business and create a new product category (Wind, Green, Shifflet, & Scarbrough, 1989).
In another example, Kannan, Kline Pope, and Jain (2009) report
how marketing analytics at the National Academies Press (NAP) led
to a better understanding of customers and to a better manner of
reaching the customers. The NAP was concerned about the best way
to price and distribute its books in print and in pdf format via the Internet.
It built a pricing model that allowed for both substitution and
complementarity effects between the two formats and calibrated the
model using a choice modeling experiment. The results permitted
the NAP to launch its entire range of digital products with a variable
pricing scheme, thereby maximizing the reach of its authors’ work.
The common theme of the above firm-specific examples is that the deployment of marketing analytics allows firms to develop and offer products and services that are better aligned with customer needs and wants, which, in turn, leads to improved firm performance.

³ The metrics are “incremental lift from a sales campaign” and “click-through rate (for mass campaigns).” Those firms that use customer analytics also report a significantly greater ability to measure customer profitability and lifetime value and are also more likely to have staff dedicated to data mining.

⁴ We use the term “deployment” or “to deploy” to mean “to put into use, utilize or arrange for a deliberate purpose,” without reference to the financial, human, or technical investment that might be necessary for the enablement of such deployment.
Thus, we propose the following main effect:
H1. The greater the deployment of marketing analytics, the better the
firm’s performance.
2.1.1. Competitive industry structure
Most firms compete with a number of rivals (Debruyne & Reibstein,
2005), although the degree of rivalry varies considerably across industries
(DeSarbo, Grewal, & Wind, 2006). The level of competition that a
firm faces also has many concomitant effects, including the degree of
customer satisfaction that the firm must attain to operate successfully.
For example, Anderson and Sullivan (1993) find that firms with less satisfied
customers that face less competition perform approximately the
same or even better than firms with more satisfied customers that operate
in more competitive environments. Thus, firms that confront
more competition must strive for higher levels of customer satisfaction
to perform well.
Assuming that marketing analytics provide better insights about
customer needs, firms in industries with greater competition should
earn higher returns (because of more clearly targeted offerings, for
example, which result in greater customer satisfaction) than firms
in less competitive industries. Thus, we propose:
H2. The more intense the level of competition among industry participants,
the greater the positive impact of marketing analytics deployment
on firm performance.
We note that if “analysis-paralysis” is a serious concern associated
with the deployment of marketing analytics, then the corresponding
negative performance implications should be even greater in competitive
environments because competitors move more swiftly in such
environments (e.g., DeSarbo et al., 2006). Under these circumstances,
we should observe a negative interaction between marketing analytics
deployment and level of competition (as opposed to our predicted
positive interaction).
2.1.2. Customer preference changes
Customer preferences regarding product features, price points, distribution
channels, media outlets, and other elements of the marketing
mix change over time (e.g., Kotler & Keller, 2006, p. 34). The rate of such
change varies: fashions change seasonally, whereas preferences for
consumer electronics appear to change almost monthly (e.g., Lamb,
Hair, & McDaniel, 2009, p. 58), but preferences regarding construction
equipment, hand tools, and agricultural products appear to be much
more stable over time.
The more customers’ needs fluctuate, the greater is the uncertainty
that firms face in making decisions and the more critical scanning and
interpreting the changing environment becomes (Daft & Weick, 1984).
Marketing analytics offer various means to assist firms in monitoring
the pulse of the market and providing early warning of preference
changes. Additionally, a stable, predictable environment reduces the
need for marketing analytics because such an environment requires a
limited number of decision variables to manage for organizational success
(Smart & Vertinsky, 1984). Therefore, we propose:
H3. The more rapidly customer preferences change in an industry,
the greater the positive impact of the deployment of marketing analytics
on firm performance.
2.1.3. Prevalence of marketing analytics use
The prevalence of the use of marketing analytics within an industry
may attenuate their positive performance implications. Porter
(1996, p. 63) notes that as firms evolve, “staying ahead of rivals
gets harder,” partially because of the diffusion of best practices, facilitated,
for example, by inputs from strategy consultants. Competitors
are quick to imitate successful management techniques, particularly
if they promise superior methods of understanding and meeting customers’
needs. Such imitation eventually raises the bar for everyone
(e.g., Chen, Su, & Tsai, 2007; D’Aveni, 1994; MacMillan, McCaffery, &
Van Wijk, 1985). Thus, the greater the overall use of marketing analytics
in an industry, the lower is the upside potential for a firm to increase
its use. Hence, we propose:
H4. The more prevalent the use of marketing analytics in an industry,
the lower is the positive impact of the deployment of marketing analytics
on the performance of individual firms in that industry.
To summarize our hypotheses regarding research question #1, we predict that the deployment of marketing analytics has positive performance implications in general⁵ and that this effect is even stronger in industries characterized by strong competition and in which customer preferences change frequently, and weaker in industries in which the deployment of marketing analytics is commonplace.

⁵ A concave (downward sloping) response function would admit diminishing returns to deployment and would model a “paralysis of analysis” effect. We report a test for such an effect in Section 4.3.4 and do not find that effect.

Fig. 1. Conceptual framework. [Figure: the marketing analytics chain of effects — Top Management Team Advocacy feeds Analytics Culture, Analytics Skills, and Data and IT; these drive the Deployment of Analytics, which positively affects Firm Performance (H1), with that link moderated by Competition (H2, +), Needs & Wants Change (H3, +), and Analytics Prevalence (H4, −).]
We next discuss the factors that lead to the deployment of marketing
analytics.
2.2. Antecedents of the deployment of marketing analytics
Adapting a resource-based view (RBV—Barney, 1991; Wernerfelt,
1984), Amit and Schoemaker (1993) suggest that firms create competitive
advantage by assembling, integrating, and deploying their
resources in a manner that allows them to work together to create
firm capabilities. Firm capabilities can provide a sustainable competitive
advantage when they are protected by isolating mechanisms
that thwart competitive imitation (Rumelt, 1984).
Building on the RBV literature, we suggest that marketing analytics
must be appropriately assembled and embedded within the fabric of
the firm to be deployed effectively, which potentially results in a sustainable
competitive advantage. Furthermore, we single out TMT advocacy
of marketing analytics as a key driver of that process.
2.2.1. TMT advocacy, analytics culture, and sustainable competitive
advantage
According to upper echelons theory (Hambrick & Mason, 1984),
organizations are a reflection of their TMT; thus, for marketing analytics
to become an integral part of a firm’s business routines and,
ultimately, its culture, it must be strongly supported by the firm’s
TMT (Hambrick, 2005).
We posit that a culture that is supportive of marketing analytics
is critical for its effective deployment because that culture carries
the logic of how and why “things happen” (Deshpande & Webster,
1989, p. 4). These norms are especially important because the person (or organizational unit) that carries out the marketing analytics (e.g., a marketing analyst or researcher) is frequently not the one who implements the insights gained; that task typically falls to executives in marketing and other functions (Carlsson & Turban, 2002; Hoekstra & Verhoef, 2011; Van Bruggen & Wierenga, 2010; Wierenga & van Bruggen, 1997). An analytics culture provides decision makers with a pattern of
shared values and beliefs (Deshpande, Farley, & Webster, 1993; Ouchi,
1981), which in turn, should positively influence the degree to which
they incorporate the insights gained from marketing analytics in their
decisions. Furthermore, culture is sticky, difficult to create, and even
more difficult to change (e.g., Schein, 2004), suggesting that it may
protect against competitive imitation of a firm’s analytics investments,
thus delivering sustainable rewards from a firm’s marketing analytics
investments.
2.2.2. Analytics skills
To deploy marketing analytics within a firm, the firm must also have
access to people (either internally or among its partners) who have the
knowledge to execute marketing analytics. Thus, the TMT must ensure
that people with the requisite marketing analytics skills are present
within the company or available outside the firm. We distinguish between
technical marketing analytics skills and other individual-level,
analytics-based knowledge structures that are tacit (Grant, 1991). Technical
marketing analytics skills likely derive primarily from classroom or
other structured learning situations and consist of the range of marketing
models and related concepts that the analyst could deploy. In contrast,
tacit knowledge of marketing analytics includes skills acquired
primarily through real-world learning.
We anticipate that higher levels of marketing analytics skills will
increase the extent of marketing analytics deployment because people
use the tools and skills they understand and with which they
are comfortable (Lounsbury, 2001; Westphal, Gulati, & Shortell,
1997). Additionally, better skills should lead to more useful results
from using those skills, thus facilitating the organization-wide
marketing analytics adoption process. Therefore, a firm’s employees’
analytics skills should have both a direct, positive impact on the organizational
deployment of analytics and an indirect effect on organizational
deployment through the positive impact on analytics culture.
2.2.3. Data and IT resources
A firm’s physical IT infrastructure and data resources are two other
critical tangible assets that the TMT must implement to allow for the
effective deployment of marketing analytics. Physical IT resources
form the core of a firm’s overall IT infrastructure and include computer
and communication technologies and shared technical platforms and
databases (Ross, Beath, & Goodhue, 1996). Data result from measurements
and provide the basis for deriving information and insights
from marketing analytics (Lilien & Rangaswamy, 2008). Marketing analytics
are often based on vast amounts of customer data (Roberts,
Morrison, & Nelson, 2004), which require sophisticated IT resources to
effectively obtain, store, manipulate, analyze, and distribute across the
firm. Therefore, IT and data are closely related tangible resources, such
that one would be significantly less valuable without the other. Building
on this mutual dependence, we posit that both IT and data resources are
important prerequisites for marketing analytics use.
To summarize our hypotheses regarding research question #2,
we propose that TMT advocacy of marketing analytics is an important
precursor to the effective deployment of marketing analytics.
We further propose that a firm’s TMT must not only ensure that employees
with the requisite analytics skills and an adequate data and
IT infrastructure are in place but also nurture a culture that supports
the use of marketing analytics. Such a culture can ensure that the insights
gained from marketing analytics are deployed effectively.
3. Data and methods
3.1. Scale development
We adapted existing scales when they were available. However, our
study is among the first to empirically explore the performance implications
of marketing analytics, and scales for several of our constructs were
not available. We developed the missing scales, following a four-phase iterative
procedure, as recommended in the literature (Churchill, 1979):
First, we independently generated a large pool of items for each of the
constructs from an extensive literature review. Second, we engaged fifteen
senior-level, highly regarded marketing academics to expand our
list of items and evaluate the clarity and appropriateness of each item.
Third, we personally administered pretests to six top managers to assess
any ambiguity or difficulty that they experienced when responding.
Fourth, we conducted a formal pretest with 31 senior managers. Because
the fourth stage/pretest revealed no additional concerns, we finalized
the scale items, which are listed in Appendix A.⁶

⁶ We note that we employed single-item measures for some of our constructs. Several researchers have demonstrated that, in certain contexts, measures that comprise one item generate excellent psychometric properties (e.g., Bergkvist & Rossiter, 2007; Drolet & Morrison, 2001; Robins, Hendin, & Trzesniewski, 2001; Schimmack & Oishi, 2005). In particular, single-item measures have been found to be very useful when the construct is unambiguous (Wanous, Reichers, & Hudy, 1997). Furthermore, single-item measures are also useful when participants are busy (which certainly applies to top executives) and perhaps dismissive of and/or aggravated by multiple-item measures that, in their view, measure exactly the same construct (Wanous et al., 1997). Such respondent behavior has been found to inflate across-item error term correlation (Drolet & Morrison, 2001). Our pretests revealed that three of our constructs (i.e., competition, needs and wants change, and marketing analytics prevalence) are unambiguous in nature, leading us to employ single-item measures for them.
3.2. Data collection procedure
We conducted a mail survey among executives of Fortune 1000
firms. We first randomly selected 500 entries from the Fortune 1000
list and then leveraged the corporate connections of two major U.S.
universities to obtain the names of 968 senior executives (primarily
alumni) working at these firms.
We addressed these respondents using personalized letters, in
which we asked them to complete the survey in reference to either
their strategic business unit (SBU) or their company, whichever
they felt was more appropriate. We also provided a nominal incentive
(1 USD, called a token of thanks, which emerged as the most effective
incentive in a pretest). Of the 968 executives contacted, 36 returned
the surveys and indicated they were not qualified to respond and
20 surveys were returned because of incorrect addresses. We
obtained 212 completed surveys (of the 912 remaining surveys),
which yielded an effective response rate of 23.25%. We controlled
for possible nonresponse bias by comparing the construct means for
early and late respondents (Armstrong & Overton, 1977) but found
no significant differences. As we show in Table 1, most (71%) of the respondents in our sample had titles of director or higher, which suggests that they should be knowledgeable about their firms’ capabilities and actions.

Table 1
Profile of Fortune 1000 firm respondents.
Position Number of participants Percentage
President, CEO 7 3
EVP, (Sr.) VP, CMO, CFO, COO 78 37
(Sr.) Director, Executive Director 65 31
(Sr.) Marketing Manager 47 22
Other (e.g., Marketing Strategist) 15 7
Total 212 100
We also asked the respondents to report their confidence levels
with regard to the information they provided (Kumar, Stern, &
Anderson, 1993). The sample mean score was 5.59 (out of 7 [SD=
.81]), indicating a high level of confidence. Additionally, we received
multiple (either two or three) responses from 35 firms/SBUs in our
sample, allowing us to cross-check the responses when we received
more than one response from a firm.⁷,⁸

⁷ We received two responses from 33 firms/SBUs and three responses from 2 firms/SBUs. Because we had contacted 968 executives who worked for 500 randomly selected Fortune 1000 firms, we evidently contacted multiple executives working for the same firms/SBUs, which accounts for most of these multiple responses. In a few instances (n=5), executives also invited their coworkers to participate in the survey.

⁸ Although this multiple-response sample is too small for a formal multitrait, multimethod assessment, it enabled us to assess whether the respective respondent groups’ means for the key constructs were statistically different (e.g., Srinivasan, Lilien, & Rangaswamy, 2002). T-tests indicated that none of the means were statistically significantly different from each other.
3.3. Scale assessment
We assessed the reliability and validity of our constructs using
confirmatory factor analysis (Bagozzi, Yi, & Phillips, 1991; Gerbing
& Anderson, 1988). We included all independent and dependent latent
variables in one confirmatory factor analysis model, which provided
satisfactory fit to the data (comparative fit index [CFI]= .97;
root mean square error of approximation [RMSEA]= .05; 90% confidence interval [CI] of RMSEA= [.033; .068]). On the basis of the estimates
from this model, we examined the composite reliability and
discriminant validity of our constructs (Fornell & Larcker, 1981). All
composite reliabilities exceed the recommended threshold value of
.6 (Bagozzi & Yi, 1988); the lowest reliability is .75. The coefficient alphas
of our constructs are all greater than .7. We also assessed discriminant
validity using the criteria proposed by Fornell and
Larcker (1981). The results demonstrate that the squared correlation
between any two constructs is always lower than the average variance
extracted (AVE) for the respective constructs, providing support
for discriminant validity. Finally, the correlations between the
respective constructs are all significantly different from unity
(Gerbing & Anderson, 1988). Overall, the results indicate that our latent
constructs demonstrate satisfactory levels of composite reliability
and discriminant validity. We present the correlations among the
constructs in Table 2 and the AVE and coefficient alphas in the
Appendix A along with the scale items.
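For readers who wish to verify these criteria on their own data, the sketch below shows how composite reliability, average variance extracted, and the Fornell and Larcker (1981) comparison are conventionally computed from standardized loadings. The loadings are placeholders; only the .825 skills–culture correlation discussed below is taken from the text.

```python
# Conventional computation of composite reliability (CR), average variance
# extracted (AVE), and the Fornell-Larcker discriminant-validity check.
# Loadings are placeholders, not the study's estimates.
import numpy as np

def composite_reliability(loadings):
    lam = np.asarray(loadings, dtype=float)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1.0 - lam ** 2).sum())

def average_variance_extracted(loadings):
    lam = np.asarray(loadings, dtype=float)
    return (lam ** 2).mean()

def discriminant_validity_ok(ave_a, ave_b, corr_ab):
    # Fornell-Larcker criterion: the squared inter-construct correlation
    # must be smaller than the AVE of each construct.
    return corr_ab ** 2 < min(ave_a, ave_b)

culture_loadings = [0.85, 0.88, 0.82]   # placeholder standardized loadings
skills_loadings = [0.90, 0.87, 0.85]

print(composite_reliability(culture_loadings))        # composite reliability
print(average_variance_extracted(culture_loadings))   # AVE
print(discriminant_validity_ok(average_variance_extracted(culture_loadings),
                               average_variance_extracted(skills_loadings),
                               corr_ab=0.825))        # correlation reported in the text
```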
Although we were able to establish discriminant validity, some of
our constructs are highly correlated. For example, the correlation between
analytics skills and analytics culture is 0.825. As per our measures,
“analytics skills” refer to the type of analytics skills that the
employees possess, whereas “analytics culture” indicates shared
beliefs with regard to how analytics will influence the company. Although
one would expect these two constructs to be highly correlated,
we assert that they do not measure the same thing, much in the same
manner that a physician who measures a patient’s height and weight,
two highly correlated items, might argue that height and weight measure
different important things and thus both should be measured.
3.3.1. Descriptive statistics
Table 3 contains descriptive statistics for our sample firms and indicates
that the sample represents a broad range of firms. Table 4 lists the
names of some sample firms. In Table 5, we provide the summary statistics
and correlations for our variables and, in Table 6, we present histograms
for our focal variables. As the histograms show, the sampled
firms display a wide range of values for our focal variables. For example,
on the seven-point scale measuring TMT advocacy of marketing analytics,
approximately 18% of the sample firms fall within the 6–7 range and
16% within the 1–3 range (M=4.5; SD=1.7). Furthermore, with regard
to analytics culture, approximately 25% of the sample firms fall
within the 6–7 range, and approximately 14% score within the 1–3
range (M=4.6; SD=1.6). We also asked the respondents (1) whether
their marketing analytics applications are designed primarily in-house
or by outside experts and (2) whether the primary day-to-day operations
of marketing analytics are managed in-house or outsourced.
Table 7 presents the responses to these questions and demonstrates
that the majority of the Fortune 1000 firms design and manage their
marketing analytics (applications) in-house. We also make note of the
low percentage of respondents who did not know the answer to these
questions, another sign that our respondents are quite knowledgeable
about the domain under study.
3.4. Conceptual model testing procedures
Our conceptual model proposes both direct and moderating effects
(Fig. 1). To model and test these effects simultaneously, we used structural
equation modeling (SEM); recent methodological advances have
made it feasible to include multiple interactions in a path model
(Klein & Moosbrugger, 2000; Marsh, Wen, & Hau, 2004; Muthén &
Asparouhov, 2003). We used Mplus Version 6.11 and estimated our
model using the full-information maximum likelihood approach
(Klein & Moosbrugger, 2000; Muthén & Muthén, 2010, p. 71).
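In schematic form (our notation, not the authors’ Mplus specification, and omitting the measurement model and the antecedent equations), the moderated structural equation for performance can be written as:

```latex
\mathrm{Perf}_i = \beta_1\,\mathrm{Deploy}_i + \beta_2\,\mathrm{Compet}_i
  + \beta_3\,\mathrm{Change}_i + \beta_4\,\mathrm{Preval}_i
  + \beta_5\,(\mathrm{Deploy}_i \times \mathrm{Compet}_i)
  + \beta_6\,(\mathrm{Deploy}_i \times \mathrm{Change}_i)
  + \beta_7\,(\mathrm{Deploy}_i \times \mathrm{Preval}_i) + \zeta_i
```

where Deploy, Compet, Change, and Preval denote the deployment of analytics, competition, needs-and-wants change, and analytics prevalence, and the latent interaction terms are estimated with the full-information maximum likelihood approach cited above.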
4. Results
4.1. SEM model fit
Fig. 2 summarizes the results of our SEM, depicting two of the three
interactions (i.e., competition and needs and wants change) as statistically
significant. Because means, variances, and covariances are not sufficient statistics for our SEM estimation approach, our model does not
provide the commonly used fit statistics, such as RMSEA and CFI. Instead,
in accordance with Muthén (2010), we assessed fit in two
steps. First, we re-estimated our SEM without the interaction terms
and compared that model with our original model via a chi-square difference
test using the associated loglikelihoods (Muthén & Muthén,
2011; Satorra & Bentler, 1999). This test yielded a χ2 (3) difference of 28.124, which is highly significant (p < .0001) and clearly favors the
model with interactions. Second, we (re)estimated the model without
interactions with the conventional SEM estimation approach to derive
the usual model fit statistics (e.g., RMSEA and CFI). This conventional
model (without interactions) fits the data quite well (χ2 (175)=243;
CFI=.97; RMSEA=.04; 90% C.I.= [.03; .06]), and the paths are very
similar to those of the moderated model. Based on these results, we
conclude that the “un-moderated” model fits the data well and that
the moderated model enhances the model fit.
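The loglikelihood-based difference computation described above can be sketched as follows; the loglikelihood arguments are placeholders for the values the SEM software would report, and the Satorra–Bentler scaled variant additionally applies a scaling correction that is not shown here.

```python
# Chi-square difference test between nested SEMs (the model without the
# three latent interactions vs. the model with them).
from scipy.stats import chi2

def loglik_difference_test(loglik_restricted, loglik_full, df_diff):
    stat = 2.0 * (loglik_full - loglik_restricted)   # difference statistic
    p_value = chi2.sf(stat, df_diff)                 # upper-tail chi-square probability
    return stat, p_value

# Using the difference statistic reported in the text (28.124 on 3 df):
print(chi2.sf(28.124, 3))   # p < .0001, favoring the model with interactions
```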
4.2. Specific model paths and hypothesis test results
All of the paths from TMT advocacy to the respective subsequent
latent constructs are positive and significant, suggesting that the TMT
plays a key role in establishing an organizational setting in which marketing
analytics can be deployed effectively. Additionally, as predicted,
an analytics-oriented culture has a positive and significant effect on
the deployment of analytics (β=.317, p < .01), in line with our proposition
that strengthening a firm’s analytics-oriented culture leads to an
actual increase in the deployment of marketing analytics. In addition,
we find that enhancements to a firm’s marketing analytics skills have
both a direct and positive impact on the deployment of analytics (β=
.427, p < .001) and a positive, indirect effect through analytics culture (β=.120, p < .05). That is, employees’ marketing analytics skills directly
influence the degree to which the firm uses analytics-based findings in
marketing decision making; they also exert an indirect influence by enhancing
the organization’s analytics-oriented culture. We also find that
the presence of a strong data and IT infrastructure promotes marketing
analytics skills within the firm (β=.621, p < .001).⁹

⁹ Because data and IT go hand in hand, this may imply an interaction effect between the two in our model. As a robustness check, we added a fourth item to the “Data and IT” construct that captured the interaction between the data and IT items and then reran our model. The results did not change in any substantive way.
As hypothesized in H1, higher levels of deployment of marketing analytics lead to an increase in firm performance (β=.106, p < .01).
Moreover, as hypothesized in H2, we find a positive and significant deployment
of analytics × competition interaction (β=.081, p < .05),
which shows that the use of analytics is more effective in more competitive
environments than in less competitive environments.¹⁰ Similarly,
in support of H3, the use of analytics is more effective in environments
in which customers’ needs and wants change frequently (β=.060,
p < .01). However, we do not find support for H4 concerning the analytics × prevalence interaction (β=−.034, ns).

¹⁰ The competition variable was skewed to the left. As a robustness check, we reran our analysis, substituting the competition variable with a dummy variable (1 = high competition [survey score of 6 or 7]; 0 = low competition [survey score between 1 and 5]). The results did not change in any substantive way.
4.3. Robustness checks
4.3.1. Validity of the performance measure/monomethod bias
Because our independent and dependent measures originate from
the same respondents, leading to the possibility of monomethod bias
(Podsakoff, MacKenzie, Lee, & Podsakoff, 2003), we collected performance
data from independent sources to validate our performance
measure.
Table 3
Sample firm profiles.
Industry groups # %
Services 88 41.5
Manufacturing 65 30.7
Trade 22 10.4
Construction and Mining 7 3.3
Finance and Insurance 30 14.1
Total 212 100
Sales # %
<$1 Million 5 2.4
$1 Million to $10 Million 14 6.6
$10 Million to $100 Million 23 10.8
$100 Million to $1 Billion 57 26.9
$1 Billion to $5 Billion 74 34.9
>$5 Billion 39 18.4
Total 212 100
Number of employees # %
0–100 20 9.4
101–1000 37 17.5
1001–10,000 39 18.4
10,001–100,000 60 28.3
100,001–200,000 32 15.1
>200,000 24 11.3
Total 212 100
Note: The profiles pertain to either the strategic business unit (SBU) or the overall
company associated with our respondents, depending on which UNIT the respondents
selected when completing the survey.
Table 2
Construct correlations and variances.
Constructs Correlations
1 2 3 4 5 6
1. TMT Advocacy 1.257 0.649 0.570 0.188 0.476 0.047
2. Analytics Culture 0.806 (0.03) 1.677 0.681 0.176 0.543 0.033
3. Marketing Analytics Skills 0.755 (0.04) 0.825 (0.03) 2.777 0.318 0.608 0.070
4. Data and IT 0.434 (0.07) 0.419 (0.07) 0.564 (0.06) 0.638 0.196 0.107
5. Deployment of Analytics 0.690 (0.05) 0.737 (0.04) 0.780 (0.03) 0.443 (0.06) 1.788 0.062
6. Firm Performance 0.216 (0.07) 0.181 (0.08) 0.265 (0.07) 0.327 (0.08) 0.248 (0.07) 0.373
Note: The correlations and their standard errors (provided in brackets underneath) are in bold, the squared correlations are in italics, and the variances are provided on the diagonal.
Table 4
Sample firms (partial list).
• IBM • Kraft Foods
• Honeywell • FedEx
• American Express • Sears Holdings
• Marriott International • JP Morgan Chase
• Raytheon • UPS
• Capital One • Deere & Company
• DuPont • Alcoa
• Hewlett-Packard • Aramark
• Ford Motor Co • Citigroup
• Pfizer • Baxter International
• AT&T • General Mills
• Xerox • 3M
• Johnson & Johnson • Motorola
• Progressive • Starbucks
• Boeing • Verizon
• Amazon.com • Charles Schwab
• ConAgra Foods • Dick’s Sporting Goods
• Apple • Harley-Davidson
• Oracle • Hershey
We obtained information on firm-specific net income and total assets for as many firms as possible by retrieving their 10-K and other filings with the U.S. Securities and Exchange Commission from the EDGAR database. We also consulted COMPUSTAT, Mergent
Online and the firms’ websites. With these financial data, we computed
the respective firm’s return on assets (ROA). These procedures
yielded financial performance data for 68 of the 212 responses.
After matching the time horizon of the performance measures, we
computed a 2-year average ROA for the 2 years preceding our primary
data collection (see, for example, Boulding, Lee, & Staelin, 1994).
We also standardized the ROA measure with respect to each firm’s
competitors (from Mergent Online).
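A minimal pandas sketch of how such a standardized two-year ROA measure could be assembled is shown below; the column names and peer-group definition are our assumptions, not the authors’ actual data pipeline.

```python
# Two-year average ROA, standardized against each firm's competitor set.
# 'fin' is assumed to hold one row per firm-year with hypothetical columns:
# firm, peer_group, net_income, total_assets.
import pandas as pd

def standardized_two_year_roa(fin: pd.DataFrame) -> pd.Series:
    fin = fin.copy()
    fin["roa"] = fin["net_income"] / fin["total_assets"]
    # Average ROA over the two fiscal years preceding the survey
    firm_roa = fin.groupby(["peer_group", "firm"], as_index=False)["roa"].mean()
    # Standardize each firm's ROA within its competitor (peer) group
    z = firm_roa.groupby("peer_group")["roa"].transform(
        lambda s: (s - s.mean()) / s.std(ddof=1))
    return pd.Series(z.values, index=firm_roa["firm"], name="roa_std")
```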
To address same-source bias, we used the objective performance
data (i.e., ROA) to reanalyze our conceptual framework. Given the
small sample size and the consequent lack of statistical power (n=
68), it was not feasible to simultaneously test all of the hypothesized
effects of our framework in a single SEM model. Instead, we
conducted two separate analyses: first, we used a SEM to estimate
the direct (un-moderated) effects in our conceptual framework. Second,
we used an ordinary least squares (OLS) regression model to (re)
examine the link between deployment of analytics and firm performance
and to (re)test H1–H4. We substituted the ROA objective performance
measure for the perceptual performance measure in both
analyses.
The SEM results remain consistent regardless of the use of objective
or subjective data; in fact, the link from deployment to performance
is even stronger with objective data than with subjective
data. We provide the SEM results with objective data in Fig. 3.
We report the regression results with objective data in Table 8
(model 1). We used a simple average of the items measuring deployment
of analytics as our deployment construct in that analysis. We repeated
the analyses using the factor scores from our SEM for our
deployment construct. These two measures were highly correlated
(correlation>.94), and none of our inferences were affected by the
choice of deployment construct. Overall, the regression model is significant,
and our inferences did not change.
In summary, the signs of the SEM and regression model coefficients using objective data are consistent with those obtained
using the survey-based data. However, the deployment of analytics
× competition interaction did not reach significance in the regression
model (t= 1.60), a result that could be due to the small sample size
for the objective data (n= 68).
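For concreteness, a statsmodels sketch of this kind of moderated regression is given below; the variable names are hypothetical, and mean-centering before forming the interaction terms is our choice rather than a detail reported in the paper.

```python
# Illustrative OLS re-test of H1-H4 with standardized objective ROA as the
# dependent variable. Variable names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

def fit_moderated_ols(df: pd.DataFrame):
    d = df.copy()
    for v in ["deployment", "competition", "needs_change", "prevalence"]:
        d[v] = d[v] - d[v].mean()   # mean-center to ease interpretation of main effects
    return smf.ols(
        "roa_std ~ deployment + competition + needs_change + prevalence"
        " + deployment:competition + deployment:needs_change + deployment:prevalence",
        data=d,
    ).fit()

# fit_moderated_ols(objective_sample).summary()
```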
4.3.2. Multiple respondents for some firms
As noted, we obtained data from multiple respondents from 35 organizational
units. To address potential issues of non-independence
among these observations in our data, we averaged the responses of
multiple respondents¹¹ from each firm (e.g., Homburg, Grozdanovic,
& Klarmann, 2007) and then re-estimated the SEM using individual
responses as if we had only obtained single responses (i.e., the average
responses for those organizational units for which we obtained
multiple responses). The results remain virtually the same, and our
inferences do not change.

¹¹ The t-tests of the key variables across these respondents’ reports indicated that the respective means were not statistically different.
4.3.3. Multigroup analysis — B2B vs. B2C
There are many differences between business-to-business (B2B)
and business-to-consumer (B2C) firms (see Grewal & Lilien, 2012)
that might lead one to expect that there would be differences in the
role and impact of marketing analytics within B2B and B2C firms. To
assess this possibility, we performed a multigroup confirmatory factor
analysis to compare the factor loadings of B2B with B2C firms.
To test for partial measurement invariance across groups, we compared
a model in which all parameters could be unequal across the
two groups with one in which we constrained the factor loadings to
be equal. The model with all parameters freely estimated fit the
data well (χ2 (252)=321.541; CFI=.97; RMSEA=.05), as did the
partial invariance model with factor loadings constrained to be equal
(χ2 (270)= 336.227; CFI=.97; RMSEA=.05). Furthermore, the χ2
difference test indicated that the two models were not statistically
significantly different (χ2 (18)= 14.7, p= .68), thereby suggesting
that our findings hold across different types of firms.
Table 5
Correlations and summary statistics.
Variables Correlations
1 2 3 4 5 6 7 8 9 10
1. TMT attitude toward marketing analytics 1.000
2. Annual reports highlight use of marketing analytics 0.579 1.000
3. TMT expects quantitative analyses 0.578 0.778 1.000
4. If we reduce marketing analytics use, profits will suffer 0.379 0.558 0.601 1.000
5. Confident that use of marketing analytics improves customer satisfaction 0.492 0.641 0.635 0.697 1.000
6. Most people are skeptical of any kind of analytics-based results (R) 0.401 0.552 0.581 0.676 0.713 1.000
7. Appropriate marketing analytics tool use 0.497 0.546 0.600 0.630 0.637 0.561 1.000
8. Master many different marketing analysis tools and techniques 0.466 0.569 0.591 0.648 0.615 0.558 0.837 1.000
9. Our people can be considered experts in marketing analytics 0.572 0.599 0.642 0.621 0.648 0.637 0.736 0.738 1.000
10. We have a state-of-art IT infrastructure 0.283 0.281 0.264 0.300 0.331 0.283 0.431 0.343 0.361 1.000
11. We use IT to gain a competitive advantage 0.190 0.230 0.285 0.220 0.180 0.103 0.315 0.312 0.320 0.344
12. In general, we collect more data than our primary competitors 0.269 0.268 0.349 0.326 0.319 0.253 0.432 0.422 0.420 0.392
13. Everyone in our UNIT uses analytics insights to support decisions 0.459 0.537 0.586 0.577 0.624 0.560 0.649 0.639 0.645 0.312
14. We back arguments with analytics based facts 0.404 0.436 0.516 0.460 0.498 0.444 0.586 0.598 0.562 0.234
15. We regularly use analytics in the following areas 0.335 0.550 0.502 0.442 0.598 0.460 0.489 0.517 0.509 0.310
16. Firm performance – total sales growth 0.077 0.007 0.006 0.089 0.124 0.062 0.100 0.149 0.147 0.207
17. Firm performance – profits 0.293 0.150 0.186 0.106 0.113 0.167 0.193 0.203 0.230 0.343
18. Firm performance – return on investment 0.276 0.158 0.182 0.134 0.136 0.216 0.204 0.197 0.236 0.313
19. We face intense competition −0.060 −0.115 −0.060 −0.058 −0.078 −0.082 −0.050 −0.058 −0.113 −0.042
20. Our customers’ needs and wants change frequently −0.090 −0.083 −0.103 −0.130 −0.172 −0.084 −0.040 −0.057 −0.042 0.051
21. Marketing analytics are used extensively in our industry −0.052 0.126 0.101 0.069 0.061 0.069 0.014 0.032 0.049 −0.063
22. Size −0.005 0.017 0.057 0.081 0.084 0.043 0.069 0.067 0.091 0.044
23. Objective ROA (Time 1) 0.278 0.276 0.334 0.168 0.288 0.287 0.320 0.283 0.294 0.056
24. Objective ROA (Time 2) 0.276 0.300 0.270 0.151 0.229 0.244 0.219 0.187 0.258 0.060
Summary statistics
Mean 3.571 5.029 5.014 4.699 4.714 4.455 3.596 3.790 3.720 4.696
Standard Deviation 1.705 1.506 1.419 1.589 1.511 1.618 1.860 1.704 1.771 1.576
4.3.4. Robustness of the deployment to performance link
Our study reveals a statistically significant positive relationship
between the deployment of marketing analytics and firm performance
(both subjective and objective). This result is of great managerial
importance, and, therefore, we subjected this relationship to
additional scrutiny via (1) testing for the linearity of this relationship,
(2) assessing the effects of various controls, (3) subjecting it to a
reverse-causality test, (4) assessing the contemporary vs. carryover
effects of deployment on performance, (5) testing for the effects of
unobserved heterogeneity, and (6) assessing the unidimensionality
of our performance construct. We elaborate on these robustness
tests below.
First, we ran an OLS regression model similar to that reported in
Table 8 and included a quadratic term to check for curvilinear effects
of the deployment of analytics. The squared term was not statistically
significant, suggesting the absence of a curvilinear effect, at least
within the range of our data.
Second, we included organization size (number of employees) and
industry dummy variables as controls in the regression model. Firm
size would account for the fact that larger firms could benefit from
economies of scale and scope, rendering their use of analytics more
effective. Industry dummies would account for differences in industry
segments. We used standard industrial classifications to group the
sample firms into five categories (see Table 3): services, manufacturing,
finance/insurance, trade, and construction/mining. The size and
industry dummy variables had neither a main nor a moderating effect
on the relationship between deployment of analytics and firm performance,
and our inferences did not change. Thus, our results appear
robust to firm size and industry segments.
Third, it might be that firms that perform well have more leeway
and, hence, more resources to deploy marketing analytics than do
those that perform poorly, implying that firm performance may affect
the deployment of marketing analytics, and not vice versa. To (at least
partially) assess this potential reverse-causality issue, we collected
additional objective performance data for the year following our survey.
We followed the same procedure as outlined earlier to collect the
objective performance data and then calculated the 2-year average
ROA using the newly collected data, as well as the data for the year
preceding our primary data collection. We then used this new objective
performance data to reanalyze our conceptual model. As before,
we relied on SEM to estimate the direct (un-moderated) effects in
our conceptual model and used OLS regression to examine the link
between deployment of marketing analytics and firm performance.
We report the SEM results in Fig. 4 and include the regression results
in Table 8 (model 2). As the results show, the outcomes did not
change in any substantive manner, providing support for the notion
that the deployment of marketing analytics is an antecedent of firm
performance, not vice versa.
Fourth, to assess the timing of the performance effects of deployment
of marketing analytics, we combined the objective performance
measures as follows:
λ × PerformanceTime 1 + (1 − λ) × PerformanceTime 2,
where λ can range from 0 to 1, PerformanceTime 1 is our initial objective
ROA measure and PerformanceTime 2 is the ROA measure with a
1-year lag. We then re-estimated our OLS regression model, with
the resulting linear combination values as the dependent variable
(with λ varying in increments of 0.1 from 0 to 1), and assessed
which linear combination yields the best fitting model as determined
by Adj. R². Fig. 5 provides the results of our analyses.
The results reveal that the highest Adj. R2 occurs when λ=.4 (this is
the maximum likelihood estimate for λ assuming Normal distribution of
the error terms of the OLS regression), suggesting that the performance
effects of the deployment of analytics appear to be observed both immediately
and with a slightly stronger carryover. This finding further discounts
the possibility of a reverse-causality effect, with the effects being
slightly stronger in Time 2 than in Time 1 (A value of λ=.5 would indicate
that the short-term and longer-term effects are the same).
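A minimal sketch of this λ grid search appears below, assuming the two ROA measures and the predictors sit in a single data frame (names hypothetical).

```python
# Sweep lambda from 0 to 1, combine the Time-1 and Time-2 ROA measures,
# refit the OLS model, and record the adjusted R-squared for each weighting.
import numpy as np
import statsmodels.formula.api as smf

def lambda_sweep(df, formula_rhs, step=0.1):
    fits = {}
    for lam in np.round(np.arange(0.0, 1.0 + step, step), 2):
        d = df.copy()
        d["perf_combo"] = lam * d["roa_time1"] + (1.0 - lam) * d["roa_time2"]
        fits[lam] = smf.ols("perf_combo ~ " + formula_rhs, data=d).fit().rsquared_adj
    best_lambda = max(fits, key=fits.get)
    return best_lambda, fits

# lambda_sweep(objective_sample, "deployment + competition + needs_change"
#              " + deployment:competition + deployment:needs_change")
```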
Fifth, we estimated a mixture regression model (DeSarbo & Cron,
1988) to explore the possibility of unobserved heterogeneity among
firms.
Table 5 (continued)
Correlations and summary statistics (variables 11–24).
Correlations
11 12 13 14 15 16 17 18 19 20 21 22 23 24
1.000
0.637 1.000
0.248 0.352 1.000
0.205 0.314 0.813 1.000
0.205 0.354 0.542 0.479 1.000
0.156 0.206 0.172 0.148 0.174 1.000
0.218 0.242 0.197 0.172 0.222 0.451 1.000
0.204 0.193 0.208 0.188 0.181 0.496 0.832 1.000
−0.034 −0.017 −0.118 −0.154 −0.117 0.018 −0.068 −0.097 1.000
0.013 0.005 −0.092 −0.060 0.022 0.031 0.020 0.005 0.167 1.000
0.033 0.117 0.079 0.097 0.115 −0.032 −0.023 −0.011 0.007 0.052 1.000
0.012 0.007 −0.006 −0.030 −0.024 −0.012 −0.157 −0.162 0.146 0.177 −0.070 1.000
0.291 0.279 0.342 0.397 0.444 0.275 0.318 0.375 0.061 −0.037 0.177 0.048 1.000
0.204 0.145 0.347 0.323 0.362 0.217 0.341 0.371 0.082 −0.003 0.230 0.001 0.508 1.000
4.219 4.505 5.241 4.580 5.189 4.839 5.196 5.006 5.422 3.743 3.408 3.561 4.962 4.674
1.755 1.744 1.422 1.383 1.435 1.208 1.268 1.262 1.635 1.966 1.638 1.467 1.541 1.234
The lowest Bayesian Information Criterion (BIC) emerged for a one-class model (consistent with our multi-group analysis above),
which suggests that unobserved heterogeneity was not relevant for
our model. Thus, our findings appear to be generalizable to all types
of Fortune 1000 firms.
Sixth, the correlations among the subjective performance measures
(items 16–18 in Table 5) suggest that our performance construct may
not be unidimensional: the correlation between profits and return on
investment (ROI) is quite high (r=.832), whereas the correlations between
sales growth and profits (r=.451) and sales growth and ROI
(r=.496) are significantly lower. Therefore, we analyzed the effect of
the deployment of analytics on performance with regard to sales
growth and profits/ROI separately. In the SEM model, the main effect
of the deployment of analytics on performance increased in both instances,
i.e., when using only the single-item sales growth measure
(β=.171 vs. .106) and when using the construct comprised of the
profits and ROI items (β=.198 vs. .106). Furthermore, when employing sales growth as the outcome measure, competition no longer emerges as a significant moderator of analytics deployment’s effect on performance (βdeployment × competition = .063 vs. .081); the interaction between needs and wants change and deployment of analytics remains marginally significant (βdeployment × needs and wants change = .076 vs. .06). In contrast, both interactions, i.e., competition × deployment of analytics and needs and wants change × deployment of analytics, become stronger when including the profits/ROI performance variables in the SEM (βdeployment × competition = .149 vs. .081 and βdeployment × needs and wants change = .113 vs. .06). All other paths remain virtually the same in the respective models.
Thus, although the use of marketing analytics appears to positively
affect sales growth, profits and ROI, our analysis suggests that the deployment
of analytics may have a somewhat stronger effect on
profits/ROI than on sales growth. We offer the following possible explanations
for this finding: First, many marketing analytics applications are
geared toward identifying the most profitable customer segment(s)
(e.g., Reinartz & Kumar, 2000), applications designed to improve profits
and ROI, as opposed to sales. Second, our sample is drawn from Fortune
1000 firms — all large firms — and their scale may prevent them from
growing as quickly as smaller firms. Thus, this finding may be specific
to our sample and should be explored more broadly.
Table 9 summarizes our robustness checks of the deployment to
performance link.
4.3.5. Deployment of analytics as mediator
Our conceptual model assumes that the deployment of analytics
mediates the effect of analytics culture and analytics skills on firm
performance. To test this assumption, we conducted a formal test of
mediation, following the procedure recommended by Baron and
Kenny (1986). We used both of the objective performance measures
as the respective dependent variables, deployment of analytics as
the mediator, and analytics skills or analytics culture as the respective
independent variables. Deployment of analytics emerges as a mediator
for both independent variables irrespective of the objective performance
measure used.
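A minimal sketch of the Baron and Kenny (1986) steps is shown below, with hypothetical variable names, the deployment construct as the mediator, and standardized objective ROA as the outcome.

```python
# Baron-Kenny mediation steps: (1) Y ~ X, (2) M ~ X, (3) Y ~ X + M.
# Mediation is indicated when X predicts M, M predicts Y controlling for X,
# and the direct effect of X shrinks relative to the total effect.
import statsmodels.formula.api as smf

def baron_kenny(df, x="analytics_culture", m="deployment", y="roa_std"):
    total = smf.ols(f"{y} ~ {x}", data=df).fit()        # step 1: total effect c
    a_path = smf.ols(f"{m} ~ {x}", data=df).fit()       # step 2: path a
    joint = smf.ols(f"{y} ~ {x} + {m}", data=df).fit()  # step 3: paths b and c'
    return {
        "c_total": total.params[x],
        "a": a_path.params[x],
        "b": joint.params[m],
        "c_prime_direct": joint.params[x],
    }

# baron_kenny(objective_sample, x="analytics_skills")  # repeat for each antecedent
```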
5. Discussion and conclusions
Our research objective was to determine whether the deployment
of marketing analytics leads to improved firm performance and to
identify the factors that lead firms to deploy marketing analytics.
Our findings address these two research objectives and provide insights
of value for both marketing theory and practice.
5.1. Theoretical implications
Our study helps explain what drives the adoption of marketing analytics
by firms and why that adoption leads to improved firm
performance.
We find support for our hypotheses that the positive effect of marketing
analytics deployment on firm performance is moderated by
the level of competition that a firm faces, as well as by the degree to
which the needs and wants of its customers change over time. However,
contrary to our hypothesis, the prevalence of marketing analytics
use in a given industry does not moderate the effect of marketing
analytics on firm performance. We suggest a possible explanation for
this (non)result: consistent with McKinsey & Co.’s (2009) findings,
the prevalence of marketing analytics use in the industries that we
examined is relatively low. That is, the average response of the executives who participated in our survey to the statement "marketing analytics are used extensively in our industry" was 3.4 on a 7-point scale (SD = 1.6). Perhaps the moderating effect of marketing analytics prevalence does not emerge until the industry-wide use of marketing analytics reaches a higher level than that evidenced in our sample. Our data simply may not provide the necessary range to manifest such an effect,12 an issue we plan to examine in more detail in the future.
An alternative explanation for the non-significant interaction could be that competitors cannot easily compete away a properly implemented marketing analytics capability.
We posit and show empirically that a firm’s TMT must ensure that
the firm (1) employs people with requisite analytics skills, (2) deploys
sophisticated IT infrastructure and data, and (3) develops a culture
that supports marketing analytics so that the insights gained
from marketing analytics can be deployed effectively within the firm.
The people who perform marketing analytics (e.g., marketing analysts)
are frequently not those who implement the insights gained
from marketing analytics (e.g., marketing executives), but both
groups should support the use of marketing analytics if the firm is to
possess a strong marketing analytics-oriented culture (Deshpande et
al., 1993). Therefore, a suitable analytics culture that promotes the
use of marketing analytics is a critical component of our framework.
Additionally, the centrality of an analytics culture, which is sticky and difficult to change or replicate, suggests that the deployment of marketing analytics may constitute a firm capability that can lead to a sustainable competitive advantage (Barney, 1991).
5.2. Managerial implications
Our findings offer several useful implications for managerial practice.
First, the low prevalence of marketing analytics use indicates that
few managers are convinced of the benefits of marketing analytics.
However, our results suggest that most firms can expect favorable
performance outcomes from deploying marketing analytics. Moreover,
these favorable performance outcomes should be even greater
in industries in which competition is high and in which customers
change their needs and wants frequently.
The use of objective performance data as the dependent variable
in our regression model enables us to quantify the actual performance
implications of, for instance, a one-unit increase (on a scale of 1 to 7)
in marketing analytics deployment. Consider Firm A in our sample,
which is at the median (50th percentile) in deployment of marketing
analytics and operates in an industry characterized by average competition
and average changes in customer needs and wants. For Firm
A, a one-unit increase in the deployment of marketing analytics is associated
with an 8% increase in ROA. Now consider Firm B in our sample, which is also at the median (50th percentile) in deployment of marketing analytics but which operates in a highly competitive industry with frequently changing customer needs and wants. For Firm B, a one-unit increase is associated with a 21% average increase in ROA.13
12 We also examined potential curvilinear effects of marketing analytics prevalence
but did not find any such effects.
Table 6
Histograms of focal variables. [Histograms omitted in this text version.]
Note: (Combined) signifies that the graph reports the average scores of the variables that form the respective latent variables. As the histograms illustrate, the firms in the sample display a wide range of values for our focal variables.
Table 7
Locus of marketing analytics development and execution. [Table entries omitted in this text version.]
Scale: 1 = Primarily in-house; 2 = Primarily external; 3 = Combination of in-house and external; 4 = Don't know.
The 8% increase in ROA translates to an expected increase of approximately $70 million in net income for the firms in our sample; the 21% increase indicates an increase of $180 million in net income.14
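As a purely illustrative calculation (using the baseline ROA of 0.05 assumed for Firm B in footnote 13, applied here to both firms for comparability), these uplifts can be checked in a few lines:

```python
# Illustrative arithmetic only: applying the reported average uplifts (8% and
# 21%) to the baseline ROA of 0.05 assumed in footnote 13.
def roa_after_one_unit_increase(baseline_roa: float, uplift: float) -> float:
    """Return the ROA implied by a one-unit increase in analytics deployment."""
    return baseline_roa * (1.0 + uplift)

firm_a = roa_after_one_unit_increase(0.05, 0.08)  # ~0.054 (average moderators)
firm_b = roa_after_one_unit_increase(0.05, 0.21)  # ~0.060 (high competition, fast-changing needs)
print(round(firm_a, 3), round(firm_b, 3))
```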
Second, if implemented properly, the use of marketing analytics
could be a source of a sustainable competitive advantage for a firm.
Our study should aid managers in avoiding what appears to be a common
misconception, i.e., that simply hiring marketing analysts who
know how to perform marketing analytics will be sufficient for a firm
to benefit from marketing analytics. Instead, we find that TMT involvement
and a suitable analytics culture that supports the use of marketing
analytics (along with the appropriate IT and data infrastructure) are
necessary for the firm to see the benefits of greater deployment.
[Fig. 2 path diagram omitted in this text version; the model links Top Mgmt Team Advocacy, Analytics Skills, Analytics Culture, Data and IT, Deployment of Analytics, and Firm Performance (indicators y1–y21), with Competition, Needs & Wants Change, and Analytics Prevalence as moderators.]
Fig. 2. Structural equation model results. We used full information maximum likelihood to estimate the model; ***t ≥ 3.291, p < .001; **t ≥ 2.576, p < .01; *t ≥ 1.96, p < .05; †t ≥ 1.645, p < .10.
[Fig. 3 path diagram omitted in this text version.]
Fig. 3. Structural equation model results using objective ROA (Time 1) as the performance measure. Overall, the model fits the data reasonably well; χ2 = 158.153; CFI = .922; RMSEA = .096, 90% confidence interval of RMSEA = [.068; .123]. ***t ≥ 3.291, p < .001; **t ≥ 2.576, p < .01; *t ≥ 1.96, p < .05; †t ≥ 1.645, p < .10.
13 Assuming Firm B's ROA is 0.05, a one-unit increase in deployment of analytics should, on average, be associated with an increase in ROA of about 0.01 (i.e., 0.05 × 1.21 ≈ 0.06).
14 We used our first objective performance measure in this analysis (i.e., the performance measure used in regression 1 in Table 8). The average net income of the firms in our sample was $922 million. We note that we repeated the analysis using our second objective performance measure, and our conclusions did not change in any significant way.
5.3. Limitations and further research
Although we believe that we have broken new ground with this
work, there are clear limitations, several of which provide avenues
for future research. First, while our robustness analysis shows that
the effects that we report are associated with financial returns, our
main measures are attitudinal, not objective. In addition, we do not
examine the actual return that a firm could expect from its investments
in marketing analytics. Thus, obtaining objective data on the
costs and benefits that we measure subjectively in this research
would be useful.
Second, our findings are correlational, not causal. For example, we find that a higher level of analytics skills and culture, ceteris paribus, is associated with the deployment of analytics, which, in turn, is associated with higher firm performance. However, we cannot make causal claims regarding these relationships. Future research could use longitudinal data for a sample of firms to track changes in the precursors of the deployment of marketing analytics, to determine how those precursors affect deployment and how changes in deployment affect firm performance. Such research should be feasible because many firms are still in the early stages of deploying marketing analytics.
Table 8
The effect of analytics deployment on (objective) firm performance (= DV).

                                  Model 1: Objective ROA (Time 1)     Model 2: Objective ROA (Time 2)
Predictor variable                Estimate      t-value               Estimate      t-value
Main effects
  Deployment of Analytics         .45**         3.06                  .24*          2.08
  Needs & Wants Change            .04           .46                   .06           .83
  Competition                     .11           1.09                  .10           1.26
  Analytics Prevalence            .08           .87                   .11           1.43
Interactions
  Depl × Competition              .12           1.60                  .11†          1.79
  Depl × Needs & Wants Change     .13*          2.15                  .13**         2.68
  Depl × Prevalence               .03           .46                   −.04          −.63
Other
  Constant                        5.00          29.14                 4.75          35.58
  R2                              32.5%                               36.3%
  Adjusted R2                     24.7%                               28.9%
  F-value (7, 60)                 4.14                                4.89
  F-probability                   <.001                               <.001

Note: For ease of interpretation, we mean-centered the focal variables (i.e., deployment of analytics, needs and wants change, competition, and analytics prevalence) before creating the interaction terms (Echambadi & Hess, 2007). **t ≥ 2.576, p < .01; *t ≥ 1.96, p < .05; †t ≥ 1.645, p < .10.
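The mean-centering and interaction-term construction described in the note can be sketched as follows. This is a minimal illustration under assumed, hypothetical column names (deployment, needs_wants_change, competition, prevalence, roa_t1); it is not the authors' original code.

```python
# Minimal sketch of a mean-centered moderated OLS regression in the spirit of
# Table 8; column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

def fit_table8_style_model(df: pd.DataFrame):
    d = df.copy()
    focal = ["deployment", "needs_wants_change", "competition", "prevalence"]
    # Mean-center the focal variables before forming interactions
    # (Echambadi & Hess, 2007).
    for col in focal:
        d[col + "_c"] = d[col] - d[col].mean()
    d["depl_x_comp"] = d["deployment_c"] * d["competition_c"]
    d["depl_x_needs"] = d["deployment_c"] * d["needs_wants_change_c"]
    d["depl_x_prev"] = d["deployment_c"] * d["prevalence_c"]
    formula = ("roa_t1 ~ deployment_c + needs_wants_change_c + competition_c "
               "+ prevalence_c + depl_x_comp + depl_x_needs + depl_x_prev")
    return smf.ols(formula, data=d).fit()

# Hypothetical usage:
# results = fit_table8_style_model(survey_df); print(results.summary())
```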
[Fig. 4 path diagram omitted in this text version.]
Fig. 4. Structural equation model results using objective ROA (Time 2) as the performance measure. Overall, the model fits the data reasonably well; χ2 = 149.744; CFI = .932; RMSEA = .089, 90% confidence interval of RMSEA = [.060; .117]. ***t ≥ 3.291, p < .001; **t ≥ 2.576, p < .01; *t ≥ 1.96, p < .05; †t ≥ 1.645, p < .10.
[Fig. 5 plot (adjusted R2 against λ = 0.0, 0.1, …, 1.0) omitted in this text version.]
Fig. 5. Contemporary vs. carryover effects on firm performance. This linear combination analysis shows that the highest adjusted R2 occurs for λ = .4. This result suggests that the deployment to performance link is strongest with an objective performance variable that gives 40% of the weight (λ = .4) to contemporary effects on firm performance and 60% to carryover effects.
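For readers who wish to replicate the λ search behind Fig. 5, a hedged sketch follows. The exact construction of the composite outcome is an assumption here (λ weight on Time 1 ROA, 1 − λ on Time 2 ROA), and the DataFrame and column names are hypothetical.

```python
# Hedged sketch of a lambda grid search over a composite performance measure,
# assuming perf = lambda * ROA(Time 1) + (1 - lambda) * ROA(Time 2).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def lambda_search(df: pd.DataFrame, rhs: str) -> pd.Series:
    adj_r2 = {}
    for lam in np.round(np.arange(0.0, 1.01, 0.1), 1):
        d = df.copy()
        # Weight contemporary (Time 1) vs. carryover (Time 2) performance.
        d["perf"] = lam * d["roa_t1"] + (1.0 - lam) * d["roa_t2"]
        fit = smf.ols("perf ~ " + rhs, data=d).fit()
        adj_r2[lam] = fit.rsquared_adj
    return pd.Series(adj_r2, name="adj_r2")

# Hypothetical usage (the reported peak in Fig. 5 is at lambda = 0.4):
# lambda_search(survey_df, "deployment_c + depl_x_comp + depl_x_needs + depl_x_prev")
```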
Table 9
Robustness of the deployment to performance link. For each model we report the parameter estimates and two-sided significance levels, and whether the model supports H1 (deployment of analytics is positively correlated with the performance measure, p < .05), H2 (the deployment × competition interaction is significant, p < .05), H3 (the deployment × needs and wants change interaction is significant, p < .05), and H4 (the deployment × analytics prevalence interaction is significant, p < .05).

OLS regression using objective performance measure (ROA) at Time 1:
β(Depl. of analytics) = .45, p = .003; β(Depl. × competition) = .12, p = .114; β(Depl. × needs and wants change) = .13, p = .036; β(Depl. × prevalence) = .03, p = .645. Supported: H1, H3.

SEM in which we averaged the responses of multiple respondents of each firm:
β(Depl. of analytics) = .093, p = .018; β(Depl. × competition) = .07, p = .033; β(Depl. × needs and wants change) = .06, p = .012; β(Depl. × prevalence) = −.02, p = .352. Supported: H1, H2, H3.

OLS regression using objective performance measure (ROA) at Time 1, including a quadratic term of deployment of analytics:
β(Depl. of analytics) = .38, p = .016; β(Depl. of analytics squared) = −.16, p = .149; β(Depl. × competition) = .08, p = .308; β(Depl. × needs and wants change) = .14, p = .027; β(Depl. × prevalence) = .07, p = .404. Supported: H1, H3.

OLS regression using objective performance measure (ROA) at Time 1, including control variables:
β(Depl. of analytics) = .42, p = .008; β(Depl. × competition) = .13, p = .113; β(Depl. × needs and wants change) = .14, p = .040; β(Depl. × prevalence) = .02, p = .835. Supported: H1, H3.

OLS regression using objective performance measure (ROA) at Time 2:
β(Depl. of analytics) = .24, p = .042; β(Depl. × competition) = .11, p = .078; β(Depl. × needs and wants change) = .13, p = .009; β(Depl. × prevalence) = −.04, p = .534. Supported: H1, H3.

Mixture regression model (one-class model):
β(Depl. of analytics) = .17, p = .011; β(Depl. × competition) = .09, p = .031; β(Depl. × needs and wants change) = .10, p = .002; β(Depl. × prevalence) = −.03, p = .476. Supported: H1, H2, H3.

SEM using single-item sales growth measure from survey instrument:
β(Depl. of analytics) = .171, p = .016; β(Depl. × competition) = .063, p = .408; β(Depl. × needs and wants change) = .076, p = .095; β(Depl. × prevalence) = −.032, p = .629. Supported: H1.

SEM using profit and ROI measures from survey instrument:
β(Depl. of analytics) = .198, p = .004; β(Depl. × competition) = .149, p = .013; β(Depl. × needs and wants change) = .113, p = .008; β(Depl. × prevalence) = −.060, p = .296. Supported: H1, H2, H3.
Third, our results are based on the overall deployment and impact of marketing analytics. Additional research is needed to understand the performance implications associated with different types of analytics (e.g., embedded automated models vs. interactive decision support), as well as with various aspects of analytics implementation, such as the nature of the decisions and actions supported by analytics (e.g., segmentation, targeting, forecasting, pricing, sales) and the penetration of marketing analytics into non-marketing decisions and actions.
Fourth, our results are based on and limited to very large U.S.
firms. Extending this work to other geographies and to the much larger
universe of medium-sized and small firms would be useful.
Despite these limitations, we believe that beyond their theoretical
interest, our framework and findings should prove useful for managers
who are seeking a framework that will aid them in deploying their marketing
analytics investments most effectively. Our results also provide a
bit of a cautionary tale: Without TMT advocacy and support, the necessary
investments in data, analytic skills, and a supportive analytics culture
are unlikely to occur. We hope that the modest step we have taken
here to address the performance implications of marketing analytics
will prove provocative and spawn additional research in this important
area.
Appendix A. Scale items

Top management team advocacy (α = .84; average variance extracted (AVE) = 0.659)
1. Our top management has a favorable attitude towards marketing analytics.
2. Our annual reports and other publications highlight our use of analytics as a core competitive advantage.
3. Our top management expects quantitative analysis to support important marketing decisions.

Analytics culture (α = .87; AVE = 0.692)
4. If we reduce our marketing analytics activities, our UNIT's profits will suffer.
5. We are confident that the use of marketing analytics improves our ability to satisfy our customers.
6. Most people in my unit are skeptical of any kind of analytics-based results. (R)

Marketing analytics skills (α = .90; AVE = 0.777)
7. Our people are very good at identifying and employing the appropriate marketing analysis tool given the problem at hand.
8. Our people master many different quantitative marketing analysis tools and techniques.
9. Our people can be considered as experts in marketing analytics.

Data and IT (α = .72; AVE = 0.503)
10. We have a state-of-the-art IT infrastructure.
11. We use IT to gain a competitive advantage.
12. In general, we collect more data than our primary competitors.

Deployment of analytics (α = .82; AVE = 0.657)
13. Virtually everyone in our UNIT uses analytics-based insights to support decisions.
14. In our strategy meetings, we back arguments with analytics-based facts.
15. We regularly use analytics to support decisions in the following areas (average score across 12 areas to choose from [pricing, promotion and discount management, sales-force planning, segmentation, targeting, product positioning, developing annual budgets, advertising, marketing mix allocation, new product development, long-term strategic planning, sales forecasting] + 2 open-ended areas).

Firm performance (α = .81; AVE = 0.639)
Please circle the number that most accurately describes the performance of your UNIT in the following areas relative to your average competitor (1 = well below our competition; 7 = well above our competition). Please consider the immediate past year in responding to these items.
16. Total sales growth.
17. Profit.
18. Return on investment.

Competition
19. We face intense competition.

Needs and wants change
20. Our customers are fickle; their needs and wants change frequently.

Industry prevalence
21. Marketing analytics are used extensively in our industry.
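To make the scale construction concrete, the sketch below shows one way to score the deployment items and compute the kind of Cronbach's α reported above; the DataFrame and item column names are hypothetical, and item 15 is assumed to already be averaged across the 12 decision areas listed in the appendix.

```python
# Hedged sketch: scoring the three deployment items (13-15) and computing
# Cronbach's alpha; names are hypothetical placeholders.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Hypothetical usage:
# deployment_items = survey_df[["item13", "item14", "item15"]]
# deployment_score = deployment_items.mean(axis=1)   # construct score per firm
# alpha = cronbach_alpha(deployment_items)           # the reported alpha for this scale is .82
```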
References
Amit, R., & Schoemaker, P. (1993). Strategic assets and organizational rent. Strategic
Management Journal, 14(1), 33–46.
Anderson, E. W., & Sullivan, M. W. (1993). The antecedents and consequences of
customer satisfaction for firms. Marketing Science, 12(2), 125–143.
Armstrong, J. S., & Overton, T. S. (1977). Estimating non-response bias in mail surveys.
Journal of Marketing Research, 14(3), 396–402.
Bagozzi, R. P., & Yi, Y. (1988). On the evaluation of structural equation models. Journal
of the Academy of Marketing Science, 16(1), 74–94.
Bagozzi, R. P., Yi, Y., & Phillips, L. W. (1991). Assessing construct validity in organizational
research. Administrative Science Quarterly, 36(3), 421–458.
Barney, J. B. (1991). Firm resources and sustained competitive advantage. Journal of
Management, 17(1), 99–120.
Baron, R. M., & Kenny, D. A. (1986). Moderator–mediator variables distinction in social
psychological research: conceptual, strategic, and statistical considerations. Journal
of Personality and Social Psychology, 51(6), 1173–1182.
Bergkvist, L., & Rossiter, J. R. (2007). The predictive validity of multiple-item versus
single-item measures of the same constructs. Journal of Marketing Research,
44(3), 175–184.
Boulding, W., Lee, E., & Staelin, R. (1994). Mastering the mix: Do advertising promotion,
and sales force activities lead to differentiation? Journal of Marketing Research,
31(2), 159–172.
Carlsson, C., & Turban, E. (2002). DSS: Directions for the next decade. Decision Support
Systems, 33(2), 105–110.
Chen, M. J., Su, K. H., & Tsai, W. (2007). Competitive tension: The awareness-motivation-capability perspective. Academy of Management Journal, 50(1), 101–118.
Churchill, G. A., Jr. (1979). A paradigm for developing better measures of marketing
constructs. Journal of Marketing Research, 16(1), 64–73.
D’Aveni, R. (1994). Hypercompetition: Managing the dynamics of strategic maneuvering.
New York: The Free Press.
Daft, R. L., & Weick, K. E. (1984). Toward a model of organizations as interpretation
systems. Academy of Management Review, 9(2), 284–295.
Debruyne, M., & Reibstein, D. J. (2005). Competitor see, competitor do: Incumbent
entry in new market niches. Marketing Science, 24(1), 55–66.
DeSarbo, W., & Cron, W. L. (1988). A maximum likelihood methodology for clusterwise
linear regression. Journal of Classification, 5(2), 249–282.
DeSarbo, W., Grewal, R., & Wind, J. (2006). Who competes with whom? A demand-based
perspective for identifying and representing asymmetric competition. Strategic
Management Journal, 27(2), 101–129.
Deshpande, R., Farley, J. U., & Webster, F. E. (1993). Corporate culture, customer orientation, and innovativeness in Japanese firms: A quadrad analysis. Journal of Marketing, 57(1), 23–37.
Deshpande, R., & Webster, F. E., Jr. (1989). Organizational culture and marketing: Defining
the research agenda. Journal of Marketing, 53(1), 3–15.
Drolet, A. L., & Morrison, D. G. (2001). Do we really need multiple-item measures in
service research? Journal of Service Research, 3(3), 196–204.
F. Germann et al. / Intern. J. of Research in Marketing 30 (2013) 114–128 127
Echambadi, R., & Hess, J. D. (2007). Mean-centering does not alleviate collinearity
problems in moderated multiple regression models. Marketing Science, 26(3),
438–445.
Elsner, R., Krafft, M., & Huchzermeier, A. (2004). Optimizing Rhenania’s direct marketing
business through dynamic multilevel modeling (DMLM) in a multicatalog-brand
environment. Marketing Science, 23(2), 192–206.
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with
unobservable variables and measurement error. Journal of Marketing Research,
18(1), 39–50.
Gerbing, D. W., & Anderson, J. C. (1988). An updated paradigm for scale development
incorporating unidimensionality and its assessment. Journal of Marketing Research,
25(2), 186–192.
Grant, R. M. (1991). The resource-based theory of competitive advantage. California
Management Review, 33(3), 114–135.
Grewal, R., & Lilien, G. L. (2012). Business-to-business marketing: Looking back, looking forward. In G. L. Lilien, & R. Grewal (Eds.), Handbook of business-to-business marketing (pp. 3–14). Northampton: Edward Elgar Press.
Hambrick, D. C. (2005). Upper echelons theory: Origins, twists and turns, and lessons
learned. In K. G. Smith, & M. A. Hitt (Eds.), Great minds in management: The process
of theory development. New York: Oxford University Press.
Hambrick, D. C., & Mason, P. A. (1984). Upper echelons: The organization as a reflection
of its top managers. Academy of Management Review, 9(2), 193–206.
Harari, O. (1996). Quotations from Chairman Powell: A leadership primer. Management
Review, 84(12), 34–37.
Hoch, S. J., & Schkade, D. A. (1996). A psychological approach to decision support
systems. Management Science, 42(1), 51–65.
Hoekstra, J. C., & Verhoef, P. C. (2011). The customer intelligence — Marketing interface:
Its effect on firm performance. working draft.
Homburg, C., Grozdanovic, M., & Klarmann, M. (2007). Responsiveness to customers
and competitors: The role of affective and cognitive organizational systems. Journal
of Marketing, 71(3), 18–38.
Kannan, P. K., Kline Pope, B., & Jain, S. (2009). Pricing digital content product lines: A
model and application for the National Academies Press. Marketing Science,
28(4), 620–638.
Klein, A., & Moosbrugger, H. (2000). Maximum likelihood estimation of latent interaction
effects with the LMS method. Psychometrika, 65(4), 457–474.
Kotler, P., & Keller, K. L. (2006). Marketing management (12th ed.). Upper Saddle River:
Pearson Prentice Hall.
Kucera, T., & White, D. (2012). Predictive analytics for sales and marketing: Seeing around
corners. Aberdeen Group Research Brief (www.aberdeen.com)
Kumar, N., Stern, L. W., & Anderson, J. C. (1993). Conducting interorganizational research
using key informants. Academy of Management Journal, 36(6), 1633–1651.
Lamb, C., Hair, J. F., Jr., & McDaniel, C. (2009). Marketing 3.0. Mason: Thomson/South-Western.
Lilien, G. L. (2011). Bridging the academic–practitioner divide in marketing decision
models. Journal of Marketing, 75(4), 196–210.
Lilien, G. L., & Rangaswamy, A. (2008). Marketing engineering: Connecting models
with practice. In Berend Wierenga (Ed.), Handbook of marketing decision models
(pp. 527–560). New York: Elsevier.
Little, J. D. (1970). Models and managers: The concept of a decision calculus. Management
Science, 16(8), B466–B486.
Little, J. D. (2004). Comments on models and managers: The concept of a decision
calculus. Management Science, 50(12), 1841–1861.
Lodish, L. M., Curtis, E., Ness, M., & Simpson, M. K. (1988). Sales force sizing and deployment
using a decision calculus model at Syntex Laboratories. Interfaces, 18(1), 5–20.
Lounsbury, M. (2001). Institutional sources of practice variation: Staffing college and
university recycling programs. Administrative Science Quarterly, 46(1), 29–56.
MacMillan, I. C., McCaffery, M. L., & Van Wijk, G. (1985). Competitors’ responses to easily
imitated new products—Exploring commercial banking product introductions.
Strategic Management Journal, 6(1), 75–86.
Marsh, H. W., Wen, Z., & Hau, K. -T. (2004). Structural equation models of latent interactions:
Evaluation of alternative estimation strategies and indicator construction. Psychological
Methods, 9(3), 275–300.
McIntyre, S. H. (1982). An experimental study of the impact of judgment-based marketing
models. Management Science, 28(1), 17–33.
McKinsey & Co. (2009). McKinsey global survey results: Measuring marketing.
McKinsey Quarterly, 1–8.
Muthén, B. O. (2010). Mplus discussion — Comment on how to evaluate the fit of a SEM
with interaction, posted October 26, 2010 (accessed July 2011). available at http://
www.statmodel.com/discussion/messages/11/385.html?1309276920
Muthén, B. O., & Asparouhov, T. (2003). Modeling interactions between latent and observed
continuous variables using maximum-likelihood estimation in Mplus. Mplus web notes:
No. 6.
Muthén, L. K., & Muthén, B. O. (2010). Mplus user’s guide (6th ed.). Los Angeles:
Muthén & Muthén.
Muthén, L. K., & Muthén, B. O. (2011). Chi-square difference testing using the Satorra-Bentler
scaled chi-square. (Accessed July 2011). available at http://www.statmodel.com/chidiff.
shtml
Natter, M., Mild, A., Wagner, U., & Taudes, A. (2008). Planning new tariffs at tele.ring:
The application and impact of an integrated segmentation targeting, and positioning
tool. Marketing Science, 27(4), 600–611.
Ouchi, W. G. (1981). Theory Z. Reading: Addison-Wesley Publishing Company.
Peters, T. J., & Waterman, R. H. (1982). In Search of Excellence. New York: Harper & Row.
Podsakoff, P. M., MacKenzie, S. B., Lee, J. -Y., & Podsakoff, N. P. (2003). Common method
biases in behavioral research: A critical review of the literature and recommended
remedies. Journal of Applied Psychology, 88(5), 879–903.
Porter, M. E. (1996). What Is Strategy? Harvard Business Review, 61–78.
Reinartz, W., & Kumar, V. (2000). On the profitability of long lifetime customers: An
empirical investigation and implications for marketing. Journal of Marketing,
64(3), 17–32.
Roberts, J., Morrison, P., & Nelson, C. (2004). Implementing a pre-launch diffusion
model: Measurement and management challenges of the Telstra switching study.
Marketing Science, 23(2), 180–191.
Robins, R. W., Hendin, H. M., & Trzesniewski, K. H. (2001). Measuring global
self-esteem: Construct validation of a single-item measure and the Rosenberg
self-esteem scale. Personality and Social Psychology Bulletin, 27(2), 151–161.
Ross, J. W., Beath, C. M., & Goodhue, D. L. (1996). Develop long-term competitiveness
through IT assets. Sloan Management Review, 38(1), 31–45.
Rumelt, R. P. (1984). Towards a strategic theory of the firm. In R. Lamb (Ed.), Competitive
strategic management. Englewood Cliffs: Prentice-Hall.
Russo, J. E., & Schoemaker, P. J. H. (1989). Decision traps. New York: Doubleday and
Company.
Satorra, A., & Bentler, P. M. (1999). A scaled difference chi-square test statistic for moment
structure analysis. Psychometrika, 66(4), 507–514.
Schein, E. H. (2004). Organizational culture and leadership (3rd ed.). San Francisco:
Jossey-Bass.
Schimmack, U., & Oishi, S. (2005). The influence of chronically and temporarily accessible
information on life satisfaction judgments. Journal of Personality and Social
Psychology, 89(3), 395–406.
Silk, A. J., & Urban, G. (1978). Pre-test-market evaluation of new packaged goods: A
model and measurement methodology. Journal of Marketing Research, 15(2),
171–191.
Silva-Risso, J. M., Bucklin, R. E., & Morrison, D. G. (1999). A decision support system for
planning manufacturers’ sales promotion calendars. Marketing Science, 18(3), 274–300.
Sinha, P., & Zoltners, A. A. (2001). Sales-force decision models: Insights from 25 years of
implementation. Interfaces, 31(3), S8–S44.
Smart, C., & Vertinsky, I. (1984). Strategy and environment: A study of corporate
responses to crises. Strategic Management Journal, 5(3), 199–213.
Srinivasan, R., Lilien, G. L., & Rangaswamy, A. (2002). Technological opportunism and
radical technology adoption: An application to E-business. Journal of Marketing,
66(3), 47–60.
Van Bruggen, G. H., & Wierenga, B. (2010). Marketing decision making and decision support: Challenges and perspectives for successful marketing management support systems. Foundations and Trends in Marketing, Vol. 4. Boston: Now Publishers.
Wanous, J. P., Reichers, A. E., & Hudy, M. J. (1997). Overall job satisfaction: How good
are single-item measures? Journal of Applied Psychology, 82(2), 247–252.
Wernerfelt, B. (1984). A resource-based view of the firm. Strategic Management Journal,
5(2), 171–180.
Westphal, J. D., Gulati, R., & Shortell, S. M. (1997). Customization or conformity? An
institutional and network perspective on the content and consequences of TQM
adoption. Administrative Science Quarterly, 42(2), 366–394.
Wierenga, B., & van Bruggen, G. H. (1997). The integration of marketing-problem-solving
modes and marketing management support systems. Journal of Marketing, 61(3),
21–37.
Wind, J., Green, P. E., Shifflet, D., & Scarbrough, M. (1989). Courtyard by Marriott:
Designing a hotel facility with consumer-based marketing models. Interfaces,
19(1), 25–47.
Winer, R. S. (2000). Comment on Leeflang and Wittink. International Journal of Research
in Marketing, 17(2–3), 141–145.
Zoltners, A. A., & Sinha, P. (2005). The 2004 ISMS practice prize winner: Sales territory
design: Thirty years of modeling and implementation. Marketing Science, 24(3),
313–332.