
Methodology - The Research Instrument

 

A cross-sectional survey was employed to identify, using factor analysis, the epistemic belief structures of marketing academics and practitioners, thereby revealing the value structure underlying their respective epistemologies. Hofer's self-completion DEBQ instrument was distributed via email.

 

Research Strategy

 

The overall research strategy is to identify a set of epistemic factors common to the whole sample. Their identification provides an understanding of the overall epistemic identity of marketers, be they academics or practitioners. The next step in identifying whether there is an epistemic gap between the two groups is to compare mean scores on each factor between the two groups. Any significant differences here will suggest that each group, whilst sharing a common epistemic underpinning, views one or more of these factors differently; in other words, the groups hold separate perspectives on key epistemic factors. Such a finding would be critically important in establishing that the two groups, whilst sharing a common set of epistemic values, may hold different views on the nature of those values.
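The two analytic steps described above can be sketched in Python. The sketch below uses synthetic 1–5 responses rather than the DEBQ data, and the choice of three factors, group sizes and item count are illustrative assumptions only: factors are first extracted from the pooled sample, then mean factor scores are compared between the two groups.

```python
import numpy as np
from scipy import stats
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Synthetic 1-5 Likert responses: 120 academics and 120 practitioners
# answering 12 items (illustrative data only, not the DEBQ itself).
academics = rng.integers(1, 6, size=(120, 12))
practitioners = rng.integers(1, 6, size=(120, 12))
pooled = np.vstack([academics, practitioners])

# Step 1: extract epistemic factors common to the whole sample.
fa = FactorAnalysis(n_components=3, random_state=0)
scores = fa.fit_transform(pooled)        # per-respondent factor scores

# Step 2: compare mean factor scores between the two groups.
group = np.array([0] * 120 + [1] * 120)  # 0 = academic, 1 = practitioner
for k in range(scores.shape[1]):
    a = scores[group == 0, k]
    p = scores[group == 1, k]
    t, pval = stats.ttest_ind(a, p)
    print(f"Factor {k + 1}: t = {t:.2f}, p = {pval:.3f}")
```

A significant difference on any factor would indicate a distinct perspective on that factor despite the shared underlying structure.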

 

The analysis of Likert-type scales 

 

The issue of how to analyse a Likert scale causes some confusion. In some literature Likert scales are treated as ordinal and analysed accordingly, but this treatment is by no means consistent, and there is considerable literature arguing that they may be treated as producing interval data. Brown (2011) notes that most of the research in his field treats them as interval scales and argues that Likert scales can be effectively analysed as such. In particular, where Likert items are treated as individual questions, ordinal treatment is usual; but where several items are summed into a group measuring an attribute or factor, the data take on an interval character and should be treated as interval data. Boone (2010) makes effectively the same argument. Both authors then cite appropriate measures: means, Pearson's correlation, ANOVA and factor analysis. Some positions are clearer still; Crawford (1997), for example, simply shows Likert scales as interval scales. The argument has proponents on both sides. However, guidance from San Diego State University provides the following:

          “When responses to several Likert items are summed, they may be treated as interval data measuring a latent variable. If the responses are normally distributed, parametric statistical tests such as the analysis of variance can be applied.”
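The summed-scale position described above can be illustrated with a short sketch. The data, group sizes and number of items below are invented for illustration: four items belonging to one attribute are summed into a single scale score per respondent, which is then analysed with parametric tests (here a one-way ANOVA), in line with the Brown/Boone argument.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Illustrative 1-5 Likert responses to four items measuring a single
# attribute, from three respondent groups of 50 each.
group_a = rng.integers(1, 6, size=(50, 4))
group_b = rng.integers(1, 6, size=(50, 4))
group_c = rng.integers(1, 6, size=(50, 4))

# Summing the items yields one scale score per respondent, which the
# Brown/Boone position treats as interval data.
score_a = group_a.sum(axis=1)
score_b = group_b.sum(axis=1)
score_c = group_c.sum(axis=1)

# Parametric measures then apply: means, and a one-way ANOVA across groups.
f_stat, p_value = stats.f_oneway(score_a, score_b, score_c)
print(f"means: {score_a.mean():.2f}, {score_b.mean():.2f}, "
      f"{score_c.mean():.2f}; ANOVA F = {f_stat:.2f}, p = {p_value:.3f}")
```

Treating each of the four items individually, by contrast, would call for ordinal methods such as medians and rank-based tests.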

 

Sample

 

The population of inference under examination is marketing practitioners and marketing academics. The ONS does publish figures for the size of the marketing workforce, but the data include ‘sales’ roles as well as non-core marketing roles, and no controls for company size are applied. The broadness of the ONS data therefore renders estimates of population size from this source unproductive, and population size is difficult to identify with precision. Outside the ONS data, commercial list management companies provide researched lists of marketing personnel which can be controlled on a number of criteria, including job title and company size. The sampling frames used are therefore based on commercial lists. These are regularly cleansed to remove nils, duplications and missing units, and so provide an accessible and representative sampling frame for the population under scrutiny.

 

Selection bias

 

Bias due to participants self-selecting on the basis of some special interest is a risk, and it is possible that an email-based, opt-in frame contains some non-apparent bias related to willingness to opt in (Fricker, 2006); however, there is no evidence of this in the present study.

 

Sample strategy

 

The sampling strategy is, in effect, purposive non-probability sampling. There are advantages and disadvantages in using such methods, and as Tongco (2007) argues, non-probability methods can be just as good as probability ones in some situations. The argument is summed up by Baker et al. (AAPOR, 2013), who point out that it is not axiomatic that probability samples produce valid, reliable results, and that non-probability samples can produce results as good as or better than probability samples (p. 13).

Respondents self-selected, or chose to opt in. This method does carry potential threats: self-selection bias and non-response can call the representativeness of subsequent results into question. However, it is a pragmatic technique, and self-selection does not of itself invalidate findings (AAPOR et al., 2013). It is nonetheless acknowledged that self-selection bias in non-probability samples creates a risk to the reliability of inferences drawn from the findings; the issue is one of balance, and of care in ensuring that the sample drawn is as representative as possible.

This raises the issue of non-response bias. As a general rule, higher response rates are required to minimise the effect of non-response bias, but practical considerations such as survey cost and the availability of comprehensive sampling frames place limits on the methods available. Overall, as Groves (2006) argues, a low response rate is not of itself evidence of non-response bias, and it is important to look for specific evidence of such bias.

 

 

 

 

 

 
