
Conducting Interviews

The Interview

The research interview is a means of obtaining information from respondents. We can define an interview as a form of dyadic (person-to-person) communication that involves the asking and answering of questions.

The dyadic nature of the interview is such that bias and error will potentially be present as a result of interviewer and interviewee background characteristics (e.g., age, education, socioeconomic status, race, religion, gender), psychological attributes (e.g., perceptions, attitudes, expectations, motives), and behavior (e.g., errors in asking questions, probing, motivating, recording responses). Interviewer and respondent perceive and react to the observable background characteristics and specific behaviors of each other. These factors have a direct influence when an interactive interview is conducted and are implied in self-report interviews (e.g., mail, e-mail).

Intervening variables are also part of the dyadic interaction. The respondent's role expectations, the interviewer's task behavior, differences in the social desirability of response alternatives, the extent of topic threat, and the salience of the topic all affect the responses received.

The interviewer can likewise control the style of interviewing and thereby affect the quality of information obtained. For example, styles can range from socio-emotional (maintaining a warm, sympathetic, and understanding relationship with the respondent) to formal (where the person-oriented actions of the interviewer are held to a socially acceptable minimum). The researcher must assess the desirability of one style over the other in a given situation.


EXHIBIT 4.1 What’s in an Interview?

Universal dimensions underlie the relationships that are shaped as part of every interview:

- Involvement encompasses the degree to which each party wants to take part in the interview, including the degree of commitment of each to making it a success.

- Control refers to the degree of power the interviewer or interviewee has to affect the interview process and its outcome.

- Relationship is the degree of warmth or friendship between the interview parties.


A number of elements also define the environment in which each interview takes place:

1. Context. The total situation in which an interview takes place, including location, physical arrangements, the people present, and those absent. This also includes status differences between parties, temperature, privacy, and time.

2. Content. What the parties talk about during the interview. It involves topic selection and treatment, arguments, supporting materials, language, and questions and answers.

3. Structure. Includes the interviewer's or interviewee's basic organizational patterns, sequences of topics and questions, and the means used to open and close interviews.

4. Disclosure. The willingness on the part of both parties to reveal their "true" selves to one another.

5. Feedback. The continuous stream of verbal and nonverbal signals (e.g., smiles, puzzled expressions, raised eyebrows, moans) sent between interview parties that reveal feelings, belief or disbelief, approval or disapproval, understanding or misunderstanding, interest or disinterest, and awareness or unawareness.

6. Cooperation. The degree to which the interview parties are willing and able to reduce the competition inherent in most interview situations and work together for their mutual benefit.

7. Conflict. The potential or actual struggle between parties because of incompatible or opposing needs, desires, demands, and perceptions.

8. Trust. Belief in the good, worth, ethics, believability, and reliability of the other party.


Involvement, control, and relationship have some effect on each of these elements. These dimensions and elements of relationships are present in each interview but are not of equal importance. Although the dimensions are distinct from one another, they are strongly interdependent as well.

SOURCE: From Stewart, C. J., & Cash, W. B., Interviewing: Principles and Practices, 4/e, pp. 9–13. © 1985 William C. Brown, Publishers. Reprinted with permission of The McGraw-Hill Companies.


Structure of the Interview

Interviews in marketing research and the behavioral sciences typically involve information gathering and are usually classified by two major characteristics. An interview is either structured or unstructured, depending on whether a formal questionnaire has been formulated and the questions asked in a prearranged order. An interview is also categorized as either direct or indirect, reflecting whether the purposes of the questions are intentionally disguised. Cross-classifying these two characteristics helps us to identify four different types of interviews:

Objective Interviews

a. structured and direct
b. unstructured and direct

Subjective Interviews

c. structured and indirect
d. unstructured and indirect


Types a and b are basically objectivist; types c and d, subjectivist. We discuss each type of interview in turn (although the discussion of the two indirect types of interviews is combined). We then discuss the media through which interviews may be conducted.


Structured Direct Interviews

Structured-direct interviews are the usual type of consumer survey, used to "get the facts" and obtain descriptive information. A formal questionnaire consisting of nondisguised questions is used.


Example:

A marketing research manager of a bedroom-furniture manufacturer wants to find out how many and what kinds of people prefer various styles of headboards and dressers. The question sequence is fixed, and only those questions are asked. The resulting interview is structured-direct in nature.


The structured-direct interview has many desirable features. Since the questions are formulated in advance, all the required information can be obtained in an orderly and systematic fashion. The exact wording and phrasing of the questions can be worked out carefully to reduce the likelihood of misunderstandings or influencing the answer. Pretests can (and should) be made on the questionnaire to discover any problems in the wording or ordering of questions before the questionnaire is finalized.

 In the structured-direct interview, the questionnaire is, in effect, the dominant factor in the interview. The interviewer’s role is simply to ask questions. The same questions are asked of all respondents in the same order. This provides maximum control of the interviewing process and reduces the variability in results caused by differences in interviewer characteristics. This type of interview is less demanding insofar as the abilities of the interviewer are concerned, permitting the use of less-skilled interviewers and resulting in a lower cost per interview. The standardized, direct questions allow for uniform recording of answers, thereby reducing errors in editing, tabulating, and analysis of the information.

The major problems associated with this type of interview involve wording questions properly and the difficulties encountered in getting unbiased and complete answers to questions concerning personal and motivational factors. The structured-direct interview is by far the most commonly used type of interview in marketing research. An alternative approach is suggested in Exhibit 4.2.


EXHIBIT 4.2 Structuring Conversational Interviewing

Despite pretesting, every survey question contains terms that have the potential to be understood differently than the survey designer intends. For example, in a study of physicians conducted for a pharmaceutical company, the interviewer might ask, “During the past two weeks, did you prescribe any anti-inflammatory drug?” A respondent might answer, “Well, that depends. What exactly do you mean by anti-inflammatory?” The interviewer is now faced with a choice. Should he use his knowledge to answer the respondent’s question, or leave the interpretation of “anti-inflammatory” up to the respondent?

The normal way of handling this situation (the standardization approach) would be to leave the interpretation of the question up to the respondent. Interviewers must read exactly the same question and never interpret it in any way (Fowler, 1991; Fowler & Mangione, 1990). When a respondent asks for help, the interviewer should use so-called neutral probing techniques, such as repeating the question, presenting the response alternatives, and so forth.

Another school of thought holds that the interviewer in the example above should help the respondent and define “anti-inflammatory.” This group argues that response validity can be undermined if respondents interpret questions idiosyncratically. An approach suggested is that interviewers be allowed to use conversationally flexible interviewing techniques. This means that interviewers should engage respondents in a manner similar to ordinary conversation, deviating from the standardized script to ensure that respondents interpret questions consistently and correctly.

Online surveys have the ability to provide standard definitions through context-sensitive help. The respondent moves the mouse cursor over a context-laden word (underlined or otherwise designated) and a pop-up definition appears. To the degree that respondents needing help would receive it, response validity would increase, yet the interaction with the respondent would remain standardized.


Unstructured-Direct Interviews

Unstructured-direct interviews are most often used in exploratory studies and in qualitative research (see Chapter 5). In the unstructured-direct method of interviewing, the interviewer is given only general instructions on the type of information desired. He or she is left free to ask the necessary direct questions to obtain this information, using the wording and order that seem most appropriate in the context of each interview.

Many research projects go through an exploratory phase in which researchers contact respondents and hold unstructured interviews. These interviews are useful for obtaining a clearer understanding of the problem, and determining what areas to investigate. This type of interview is also often useful for obtaining information on motives. Following the exploratory interviews, a formal questionnaire is developed for the final interviews.

To use the bedroom-furniture example again, if the owner of a bedroom set is asked the free-answer question, "Why did you buy your bedroom set?" the answer is almost certain to be incomplete, to reflect only proximate causes, and may even be worthless.

If the interviewer is seeking motivations, answers such as "Because we needed a bed," "Our old bed was worn out," or "Because it was on sale" are of limited value. Even when motivations are given, such as "We enjoy a comfortable mattress that gives us a good night's sleep," they are rarely complete.

The added enjoyment may be because the mattress is firmer, because of the pillow top, because of the prestige the owner attaches to having a carved oak bedroom set, or some combination of these and other factors. In addition, it is probable that motives other than “enjoyment” influenced the purchase.

When used to establish motives, the unstructured-direct interview is known as a depth interview. The interviewer will continue to ask probing questions: “What did you mean by that statement?” “Why do you feel this way?” “What other reasons do you have?” The interviewer continues with similar questions until satisfied that all the information that can be obtained has been obtained, considering time limitations, problem requirements, and the willingness and ability of the respondents to verbalize motives.

The unstructured interview is free of the restrictions imposed by a formal list of questions. The interview may be conducted in a seemingly casual, informal manner in which the flow of the conversation determines which questions are asked and the order in which they are raised. The level of vocabulary used can be adapted to that of the respondent to ensure that questions are fully understood and rapport is developed and maintained. The flexibility inherent in this type of interview, when coupled with the greater informality that results when it is skillfully used, often results in the disclosure of information that would not be obtained in a structured-direct interview. 

In the unstructured interview, the interviewer must both formulate and ask questions. The unstructured interview can therefore be only as effective in obtaining complete, objective, and unbiased information as the interviewer is skilled in formulating and asking questions. Accordingly, the major problem in unstructured direct interviews is ensuring that competent interviewers are used. Higher per-interview costs result, both as a result of this requirement and the fact that unstructured interviews generally are longer than those that use a questionnaire. In addition, editing and tabulating problems are more complicated as a result of the varied order of asking questions and recording answers.

 

Structured-Indirect and Unstructured-Indirect Interviews

A number of techniques have been devised to obtain information from respondents by indirect means. Both structured and unstructured approaches can be used. Many of these techniques employ the principle of projection, in which a respondent is given a non-personal, ambiguous situation and asked to describe it. It is assumed that the respondent will tend to interpret the situation in terms of his or her own needs, motives, and values. The description, therefore, involves a projection of personality characteristics to the situation described. These techniques are discussed in more depth in Chapter 5.


REDUCING RESPONSE AND NONRESPONSE BIAS

A major concern of the research planner when choosing which interview medium to use is the potential systematic error (i.e., bias) that might arise. In Chapter 2, we discussed total error and looked at its major components. At this point, it is useful to explore how to reduce the nonsampling-based error that occurs during the interview process. In communication, error may be due to the nature of the response given (inaccuracy and ambiguity) or to the fact that the sample member has not responded. The following sections discuss these topics.

Inaccuracy

Inaccuracy refers to intentional and unintentional errors made by respondents when they provide information. There are two types of inaccuracy: predictive and concurrent. Predictive inaccuracy is a special case of response error caused by inaccurate intentions:

Suppose a respondent indicates that he intends to buy a new sports car within six months and then does not. Or, alternatively, suppose he does not now intend to buy, answers "No" to the question, and then buys a car within the six-month period. In each case, the respondent's stated intention was clear but was not followed. This situation is a predictive inaccuracy.

 A similar type of predictive inaccuracy can occur when marketing researchers try to predict actual market response to a price by asking consumers, “How much are you willing to pay for Product X?” Differences between predicted and actual price acceptability may occur because the true range of acceptable prices may change between the time of data collection and the point of purchase for a number of reasons, such as budget constraints or windfalls, the price of substitutes at point of purchase, search costs, and purchase urgency.

Concurrent inaccuracy occurs when the respondent intentionally does not provide accurate information. Everyday experiences and empirical evidence suggest that inaccurate information results from the respondent’s inability or unwillingness to provide the desired information.

For our car-purchase example, suppose the respondent answers "Yes" but really has no intention of buying a sports car within this period or, conversely, answers "No" but does intend to buy one. In this case, we may say that there is concurrent inaccuracy in his statement: the intention is to provide inaccurate information.

Concurrent inaccuracies are a major concern for many kinds of information obtained from respondents (information on past behavior, socioeconomic characteristics, level of knowledge, and opinion-attitude). Concurrent inaccuracies may also apply to instances where observation is used; the observer charged with reporting the events or behavior may be unable or unwilling to provide the desired information.

It is clear from this brief introduction to predictive and concurrent inaccuracies that inability and unwillingness to respond are major contributors to response bias and warrant more detailed attention to understand how they can be controlled.


Inability to Respond

Even such a simple and straightforward question as "What is the model year of your family car?" may result in an information-formulation problem, particularly if the car is several years old. If respondents were asked, "What brand or brands of tires do you now have on your car?" most would have even more difficulty in providing an accurate answer without looking at the tires. Finally, if respondents were asked, "What reasons did you have for buying Brand A tires instead of some other brand?" most would have greater difficulty still in providing an accurate answer. Semon (2000a, 2000b) suggests that inaccuracies due to inability to respond stem from three major conditions:


- Memory error: A respondent gives wrong factual information because he or she simply does not remember the details of a specific event. Often the time since an event (such as a purchase) occurred is underestimated or overestimated. Although better questionnaire and survey design can help reduce this error, such proven techniques are often not used because they add to the length of the survey. For instance, in a personal or telephone interview survey, follow-up calls are often not made to confirm the answers given.

- Ignorance error: This refers to the respondent's lack of understanding or awareness, or a perception that the question is irrelevant, and is due to poor research design in terms of question content and sampling. A question (or even an entire questionnaire) may be unrealistic, deficient, or directed to the wrong persons.

- Misunderstanding: This can be a matter of careless question design. Poorly defined terms or words with different meanings can lead to inaccurate, or even deliberately falsified, responses. Proper question design avoids words with multiple meanings and definitions, or clearly defines the context in which a word is being used in the questionnaire.


In addition, the following items also create inaccuracies through the inability to respond accurately:

 

Telescoping

 

Questions that ask respondents to reconstruct past experiences run a high risk of response bias. More specifically, there is the possibility of a respondent telescoping, or misremembering when an event occurred within a short, recent time period. In a study of durable-goods purchases in the United States, respondents on average displayed forward-telescoping biases (reporting that something happened more recently than it did), and the magnitude of this misreporting increased (Morwitz, 1997). Overall, the tendency to make forward-telescoping errors may differ by respondent demographics and by the event being studied; whether the event is recall of a purchase, the reading of an ad, or something else will affect the nature of any telescoping errors.

Telescoping can be reduced by using bounded recall procedures, which involve asking questions about the events of concern in previous time periods as well as the time period of research interest (Sudman, Finn, & Lannam, 1984). Other approaches include asking respondents to use finer time intervals, and to use a landmark event such as New Year’s Day or Easter, or an individual event landmark such as the date of a child’s wedding.
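The screening effect of a bounding interview can be mimicked in data processing. The sketch below is an illustrative simplification (the event-list record format and function name are assumptions, not a procedure from the sources cited): events the respondent already reported in the earlier, bounding interview are removed from the current period's reports, so purchases telescoped forward into the reference period are not counted twice.

```python
def bounded_recall_filter(current_reports, prior_reports):
    """Drop events from the current reference period that the respondent
    already reported in the earlier (bounding) interview. Forward-telescoped
    events, having been reported before, are screened out rather than
    counted again in the current period."""
    already_reported = set(prior_reports)
    return [event for event in current_reports if event not in already_reported]
```

For example, a sofa purchase reported in the bounding interview would be excluded if the respondent reports it again for the current period, leaving only genuinely new events.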


Exhibit 4.3 Response Error and Questionnaire Design

One of the keys to minimizing concurrent errors is for researchers to better select questions that fulfill the client's information needs. Important/Urgent questions are a first priority and are easy to identify. Unimportant/Non-urgent questions are likewise easy to identify and exclude. It is the other two categories that cause most of the problems. Remember that Urgent/Unimportant questions may be better answered by a judgment call than by extensive research, and that Important/Non-urgent questions are the ones that often need to be addressed. In the short run, a company will carry on without answers to Important/Non-urgent questions, but these answers may be essential to the long-term future direction of the company.


Unwillingness to Respond

When we move to the problem of respondents' unwillingness to provide accurate information, the topic is more complex. Here we are dealing with the motivations of people: why they are not willing to provide the desired information accurately.

Except in those instances where the respondent provides information by being observed in a natural situation, there are always costs (negative utilities) attached to his or her formulating and sharing information.

There is no fully accepted general theory to explain this behavior, but we can again apply everyday experience to this problem and add some research findings to suggest why people may not be willing to make accurate information accessible.


Investigator Expectations

A complex source of inaccuracy in response stems from the respondents’ appraisal of the investigator and the opinions and expectations imputed to him or her.

A classic example is a cosmetics study that showed an unexpectedly high reported usage of luxury cosmetics among women from low-income families. In this case, one exceptionally well-dressed, carefully groomed, competent interviewer conducted all of the interviews. The study was repeated with a matronly woman, dressed similarly to the women interviewed, calling on the same respondents on the following days. In this second series of interviews, the reported brands of cosmetics used were much less expensive.


Investigator Unwillingness 

Sometimes, what appears to be respondent unwillingness to provide accurate data is actually a case of interviewer cheating, where the investigator is unwilling to obtain accurate information. This happens when an interviewer finds a particular question too embarrassing to ask, when the interviewer finds it easier to complete the survey forms personally rather than conduct the interviews, or when interviewers have friends complete the survey. Interviewers may also complete some questions legitimately and then estimate or infer the answers to other questions, such as age, income, and certain attitudes or behaviors of respondents.

Interviewer cheating can be kept to a low level of incidence but not eliminated completely. Careful selection, training, and supervision of interviewers will eliminate much of the problem. In addition, control procedures can and should be established to reduce it even further.

The simplest control procedure is to call back a subsample of respondents. If the information from an initial interview is found to disagree significantly with that from the call-back interview, additional call-backs may be made on respondents originally interviewed by the same person. The fear of being caught will discourage cheating.
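The call-back check can be sketched in code. This is an illustrative sketch only; the record layout, field names, and the 20 percent disagreement threshold are assumptions, not standards. Answers from the call-back subsample are compared with the original answers, and any interviewer whose disagreement rate exceeds the threshold is flagged for further call-backs.

```python
def flag_suspect_interviewers(interviews, callback_answers, threshold=0.2):
    """Compare original answers with call-back answers and flag interviewers
    whose disagreement rate exceeds the threshold.

    interviews: list of dicts with 'interviewer', 'respondent', 'answer'
    callback_answers: dict mapping respondent id -> answer given on call-back
    """
    stats = {}  # interviewer -> (answers checked, disagreements)
    for rec in interviews:
        resp = rec["respondent"]
        if resp not in callback_answers:
            continue  # respondent was not in the call-back subsample
        checked, bad = stats.get(rec["interviewer"], (0, 0))
        bad += rec["answer"] != callback_answers[resp]
        stats[rec["interviewer"]] = (checked + 1, bad)
    return sorted(iv for iv, (n, bad) in stats.items() if bad / n > threshold)
```

In practice the subsample would be drawn at random, and a flagged interviewer would trigger additional call-backs rather than an automatic conclusion of cheating.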

Other control procedures include the use of “cheater” questions and the analysis of response patterns. Cheater questions are informational questions that will disclose fabricated answers with a reasonably high probability of success. Likewise, the analysis of patterns of responses for interviewer differences will disclose interviewer cheating when significant variations from expected norms occur. Such analyses can be made at very little additional cost.
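The analysis of response patterns can likewise be sketched. In this simplified, illustrative example (the function name and the cutoff of 2.58 standard errors are assumptions), each interviewer's rate of a given yes/no response is compared with the pooled rate across all interviewers, and anyone deviating by more than the cutoff is flagged for review.

```python
import math

def response_pattern_outliers(answers_by_interviewer, z_cutoff=2.58):
    """Flag interviewers whose proportion of 'yes' (coded 1) answers
    deviates from the pooled proportion by more than z_cutoff
    standard errors, under a normal approximation."""
    pooled = [a for answers in answers_by_interviewer.values() for a in answers]
    p = sum(pooled) / len(pooled)  # pooled 'yes' proportion across interviewers
    flagged = []
    for interviewer, answers in answers_by_interviewer.items():
        n = len(answers)
        se = math.sqrt(p * (1 - p) / n)  # std. error of a proportion for n answers
        z = (sum(answers) / n - p) / se
        if abs(z) > z_cutoff:
            flagged.append(interviewer)
    return sorted(flagged)
```

Such an analysis costs little, since it reuses data already collected; a flagged interviewer is a signal for verification, not proof of cheating.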


Time Costs

Perhaps the most common reason for respondent unwillingness to provide accurate information, or any information for that matter, is the time required to make the information available. Respondents often give hasty, ill-considered, or incomplete answers and resist probing for more accurate information. When it is possible to do so, a respondent will tend to act in a manner that reduces time costs. Such behavior often results in inaccurate or missing information.

When conducting telephone and personal interviews, the interviewer might ask, "Is this a good time to answer some questions, or would you rather set a time when I could contact you again?" Experience has shown that this technique only slightly lowers response rates.


Perceived Losses of Prestige

When information attributing prestige to the respondent is sought, there is always a tendency to receive higher-prestige responses. All researchers experience difficulty both in recognizing the items that carry prestige content and in measuring the resulting amount of inaccuracy. Information that affects prestige is often sensitive information, including socioeconomic characteristics (age, income, educational level, and occupation) and place of birth or residence.

An example of a still more subtle prestige association occurred in a study on nationally known brands of beer. One of the questions asked was, "Do you prefer light or regular beer?" The response was overwhelmingly in favor of light beer. Since sales data indicated a strong preference for regular beer, it was evident that the information was inaccurate. Subsequent investigation revealed that the respondents viewed people who drank light beer as being more discriminating in taste. They had, therefore, given answers that, in their view, were associated with a higher level of prestige.

Measuring the amount of inaccuracy is a difficult task. One solution to this problem is to ask for the information in two different ways. For example, when obtaining information on respondents' ages, it is a common practice to ask early in the interview, "What is your present age?" and later, "In what year did you graduate from high school?"
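A consistency check of this kind is easy to automate during data editing. The sketch below is a hypothetical illustration: the typical graduation age of 18 and the two-year tolerance are assumptions chosen for the example, not fixed rules, and the function name is made up.

```python
def age_is_consistent(reported_age, grad_year, survey_year,
                      typical_grad_age=18, tolerance=2):
    """Compare a reported age with the age implied by the reported
    high-school graduation year; return True if they roughly agree."""
    implied_age = survey_year - grad_year + typical_grad_age
    return abs(implied_age - reported_age) <= tolerance
```

A respondent whose two answers disagree by more than the tolerance would be flagged, and the discrepancy used to estimate the extent of prestige-driven misreporting in the sample.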

In one study, when respondents were asked, "Are you afraid to fly?" very few people indicated any fear of flying. In a follow-up study, when they were asked, "Do you think your neighbor is afraid to fly?" (a technique known as the third-person technique), most of the neighbors turned out to have severe anxieties about flying.


Invasion of Privacy

Clearly, some topics on which information is sought are considered private matters. When such is the case, both nonresponse and inaccuracy in the responses obtained can be anticipated. Matters about which respondents resent questions include money matters or finances, family life, personal hygiene, political beliefs, religious beliefs, and even job or occupation. It should be recognized, however, that invasion of privacy is an individual matter; information that one person considers sensitive may not be viewed that way by others. The investigator should attempt to determine sensitivity if it is suspected to be a problem. One way of handling this is to add questions in the pretest stage that ask about the extent of sensitivity to topics and specific questions. A comprehensive treatment of sensitive information and how to ask questions about it is given by Bradburn and Sudman (1979).

 

Ambiguity

Ambiguity includes errors made in interpreting spoken or written words or behavior. Ambiguity, therefore, occurs in the transmission of information, through either communication or observation.


Ambiguity in Communication

Ambiguity is present in all languages. Unambiguous communication in research requires that the question asked and the answers given each mean the same thing to the questioner and the respondent.

The first step in this process is the controlling one: if the question is not clearly understood by the respondent, frequently the answer will not be clearly understood by the questioner. To illustrate this point, consider the following question change that occurred after pretesting in an actual research project on tomato juice.

 


Even a careful reading of these two questions may not disclose any real difference in their meaning. The analyst who drew up the question assumed that "like" refers to taste. In pretesting, however, it was discovered that some housewives answered "Yes" with another referent in mind. They "liked" the amount of Vitamin C their children get when they drink tomato juice, they "liked" the tenderizing effect that tomato juice has when used in cooking meat dishes, and so on. If the wording of the question had not been changed, the simple, one-word answer "Yes" would have been completely misunderstood in some cases.

A related issue arises when a shortened form of a sentence is used. Examples are "How come?", "What?", and "How?" Such an "elliptical sentence" requires the respondent first to consider the context of the sentence and then add the missing parts. When this mental process of transformation differs between researcher and respondent, communication is lost and the interpretation of a person's response is faulty and ambiguous.

The understanding of questions is an issue that goes beyond ambiguity. All too often a respondent may not understand a question, but may have no opportunity to request clarification. Most personal and telephone interviewing uses standardized interviewing, meaning that the interpretation of questions is left up to the respondent. As discussed earlier in Exhibit 4.2, one interesting approach taken in online surveys by Qualtrics.com is to use context-sensitive help that provides standardized clarification or instruction for a given question.


Procedures for Recognizing and Reducing Ambiguity in Communication

Every research design that uses communication to obtain information should have as many safeguards against ambiguity as possible. Procedures should be employed to recognize where ambiguity may be present and to reduce it to the lowest practicable level.

Three procedural steps are useful for these purposes and should be considered in every project:

1. Alternative question wording. We have already seen that the present state of the art in question formulation cannot guarantee unambiguous questions. In questions where ambiguity is suspected, it is advisable to consider asking alternative wordings and forms of the question of subsamples of respondents.

The use of this simple experimental technique costs no more for online surveys (randomly assign respondents to different blocks of questions). In personal and telephone interviewing situations, the interviewers can likewise be instructed to use the alternative wording for one-half of the interviews. Where significant differences in response are discovered, they serve as a valuable warning in interpreting the information.
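This split-sample technique can be sketched as follows. The sketch is illustrative only: it assumes binary (yes/no) responses and uses a pooled two-proportion z statistic as the significance check, and the function names are made up for the example.

```python
import math
import random

def assign_wording(respondent_ids, seed=42):
    """Randomly assign each respondent to question wording 'A' or 'B'."""
    rng = random.Random(seed)  # fixed seed makes the split reproducible
    return {rid: rng.choice(["A", "B"]) for rid in respondent_ids}

def wording_z_statistic(yes_a, n_a, yes_b, n_b):
    """Two-proportion z statistic for the difference in 'yes' rates
    between the two wordings; |z| > 1.96 suggests a real wording effect
    at roughly the 5 percent level."""
    p = (yes_a + yes_b) / (n_a + n_b)  # pooled 'yes' proportion
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (yes_a / n_a - yes_b / n_b) / se
```

For instance, if 60 of 100 respondents say "Yes" to wording A but only 40 of 100 say "Yes" to wording B, the z statistic exceeds 1.96, warning the researcher that the two wordings are not interchangeable.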

 

2. Pretesting. Pretesting of questionnaires is a virtual necessity (Converse & Presser, 1986, pp. 51–75). The only way to gain real assurance that questions are unambiguous is to try them. Pretesting is usually done initially by asking proposed questions of associates. To be truly effective, however, pretesting should be conducted by asking the questions of a group of respondents who are similar to those to be interviewed in the final sample.

A typical way to assess problems with individual questions included in the questionnaire is to ask those participating whether they had any trouble with each of the questions. Be sure to ask about the exact nature of the problem.

If the pretest is done by an interviewer, each respondent can be asked about each question and probing can get more depth in the response. It is the rule, rather than the exception, that questions will be revised as a result of pretesting. Several versions of a question may need to be considered as a result of pretesting before the final version is decided upon.

 

3.

Verification by observation

Information obtained through communication should be verified by observation whenever cost, time, and the type of information desired permit. Clearly, verification by observation is not always possible or practical. For example, a housewife may object to a pantry audit to verify that brands she indicated as preferred are on hand.



Ambiguity in Observation

 

Although it has been suggested that, where practical to do so, information obtained by communication should be verified by observation, the implication should not be drawn that observation is free of ambiguity. In making observations we each select, organize, and interpret visual stimuli into a picture that is as meaningful and as coherent to us as we can make it. Which stimuli are selected and how they are organized and interpreted are highly dependent on the expertise, background, and frame of reference of the observer.

 

As an illustration, a cereal manufacturer ran a promotional campaign involving a drawing contest for children. Each child who entered was required to submit (along with a box top) a picture he or she had drawn that depicted Brand X cereal being eaten. The contest was run, the prizes awarded on the basis of artistic merit, and the brand manager turned his attention to other matters. Later, a psychologist who worked for the company happened to see the pictures and was permitted to study them. He found that a sizable proportion of them showed a child eating cereal alone, often with no other dishes on the table. This suggested to him that cereal is often eaten by children as a between-meal snack. Later studies by the company’s marketing research department showed that cereals are eaten between meals by children in greater amounts than are eaten for breakfast. The advertising program of the company was subsequently changed to stress the benefits of its cereals as between-meal snacks.

Nonresponse Error

A nonresponse error occurs when an individual is included in the sample but, for any of many possible reasons, is not reached or does not complete the survey. In most consumer surveys this is a source of a potentially sizable error.

Nonresponse errors differ in nature depending on the mode of survey administration. For example, when using telephone or personal interview methodologies, families who cannot be reached generally have different characteristics than those who can be reached. They may be away from home during the day and differ from those in which at least one member can usually be found at home with respect to age, number of small children, and the proportion of time in which the wife is employed. Similarly, fathers who are unwed, poor, and live in large cities; busy executives and professionals; and occupational groups such as hospital purchasing agents responsible for chemical agents are examples of hard-to-reach populations that are difficult to locate and interview (Teitler, Reichman, & Sprachman, 2003).

Internet surveys have the potential of increasing contact because the survey invitation appears in the inbox awaiting the potential respondent's reply. However, other respondent differences (time pressure, occupation, or lack of interest) and technological issues (spam filters) may similarly increase nonresponse.

The seriousness of nonresponse error is magnified by the fact that the direction of the error is often unknown. Hansen, Smith, and Geurts (2009) showed an 8 percent increase in response rates by using a two-stage presentation of a highly interesting question (the question was asked at the beginning of the survey, and the respondent was told that the second part of the question would appear at the end of the survey). These additional respondents were shown to provide more centrally distributed responses, thus producing no additional error.

Researchers believe that major reasons for refusal include the public’s concerns about data privacy and personal protection; a negative association with telemarketing efforts of all types; consumers’ natural aversion to telephone surveys combined with a lack of survey choices for consumers; low salaries of interviewers; and the fact that financial remuneration is not widely used in surveys to compensate consumers for their time.

Evangelista, Albaum, and Poon (1999) suggest that four general motivations drive survey response. Exhibit 4.4 suggests that response rates may be increased (and nonresponse bias decreased) by using specific motivational techniques and inducements.


EXHIBIT 4.4 Theories of Survey Response

Why do people participate as respondents in a survey? The question is often asked by marketing researchers, perhaps all too often implicitly, and seldom is an answer provided other than in terms of specific techniques (including inducements) that have been used to increase participation. The following theories are among those proposed (and studied to varying degrees) as answers to this question (Evangelista, Albaum and Poon, 1999).

Exchange

The process of using survey techniques to obtain information from potential respondents can be viewed as a special case of social exchange. Very simply, social exchange theory asserts that the actions of individuals are motivated by the return (or rewards) these actions are expected to, or usually do, bring from others. Whether a given behavior occurs is a function of the perceived costs of engaging in that activity and the rewards (not necessarily monetary) one expects the other participant to provide at a later date. For survey response to be maximized under this theory, three conditions must be present:


1. The costs for responding must be minimized.

2. The rewards must be maximized.

3. There must be a belief that such rewards will, in fact, be provided.


Cognitive Dissonance

Cognitive dissonance theory appears to provide a mechanism for integrating, within a single model, much of the empirical research that has been done on inducement techniques for survey response. As used to explain survey response, the theory postulates that reducing dissonance is an important component of the “respond/not respond” decision by potential survey respondents.

The process is triggered by receipt of a questionnaire and invitation requesting participation. Assuming that failure to respond might be inconsistent with a person’s self-perception of being a helpful person, or perhaps at least one who honors reasonable requests, failure to respond will produce a state of dissonance that the potential respondent seeks to reduce by becoming a survey respondent. Since the decision process involves a series of decisions for some people, delaying the ultimate decision may be a way to avoid completing the questionnaire without having to reject the request outright (and thus experience dissonance). Delaying a decision, therefore, may in itself be a dissonance-reducing response.


Self-Perception

Self-perception theory asserts that people infer attitudes and knowledge of themselves through interpretations made about the causes of their behavior. Interpretations are made on the basis of self-observation. To the extent that a person's behavior is attributed to internal causes and is not perceived as due to circumstantial pressures, a positive attitude toward the behavior develops. These attitudes (self-perception) then affect subsequent behavior.

The self-perception paradigm has been extended to the broad issue of survey response. To increase the precision of this paradigm, the concepts of salience (behaviors one has attended to), favorability (the affect or feeling generated by a given behavioral experience), and availability (information in memory) are utilized. In addition, to enhance the effects, researchers should create labels. Labeling involves classifying people on the basis of their behavior such that they will later act in a manner consistent with the characterization. Self-perception would predict that labeling one’s behavior would cause that person to view himself or herself as the kind of person who engages in such behavior; therefore, the likelihood of later label consistent behavior is increased.


Commitment and Involvement

Of concern here is the range of allegiance an individual may be said to have for any system of which he or she is a member. Consistent behavior is a central theme, including the following characteristics:


1.      Persists over some period of time

2.      Leads to the pursuit of at least one common goal

3.      Rejects other acts of behavior


Consequently, the major elements of commitment are viewed as including the following:

1. The individual is in a position in which his or her decision regarding particular behavior has consequences for other interests and activities not necessarily related to it.

2. The person is in that position by his or her own prior behavior.

3. The committed person must recognize the interest created by his or her prior action and accept it as necessary.


A person who is highly committed to some activity is less likely to terminate the activity than one who is uncommitted.

The theory of commitment (or involvement) can be extended to explain survey response behavior. To do this requires recognition that commitment can be attached to many different aspects of a survey, such as the source or the sponsor, the researcher, the topic and issues being studied, and/or the research process itself. To a large extent, commitment is manifested by interest in what is being asked of the potential respondent. The following hypotheses (untested) can be proposed:


1. The less favorable the attitude toward a survey's sponsor, topic, and so forth, the less involvement with, and thus commitment to, anything related to that study.

2. The less the extent of involvement, the more behavior productive of disorder (e.g., nonresponse, deliberate reporting of false information, etc.) is perceived as legitimate.

3. The more behavior productive of disorder is perceived as legitimate, the less favorable the attitude toward the survey.


REDUCING INTERNET SURVEY ERROR

Conducting online surveys has become not only accepted, but the dominant form of conducting structured/direct interviews. This shift to online research is due largely to reduced cost, the availability of dynamic surveys using advanced survey flow logic, the ability to display visually interesting and even interactive graphics, the ease of survey creation and administration, and the ability to eliminate errors associated with data entry, coding and transcription.

In this light, we will focus our attention on online surveys to discuss how we can further reduce or manage four other major sources of error in survey interviewing:

-       Coverage error

-       Sampling error

-       Nonresponse error

-       Measurement error

These same sources of error must be addressed regardless of the mode of survey data collection.


Coverage Error

Coverage error occurs when the sample frame (the group from which the sample is drawn) does not represent the population as a whole. For example, a random sample of Apple Mac users would be a mismatch for the adult population of the United States. In more traditional research methods such as mail or telephone methodologies, samples are drawn from sources such as telephone directories, driver's license records, rolls of property owners, credit reports, and so forth. However, such sampling frames are very information specific and often do not contain e-mail addresses.

E-mail list brokers offer panels and e-mail address lists that may be targeted to reduce coverage error. Respondent lists can be selected by many variables, including gender, interests (computers, electronics, family, finance, Internet, medical, and travel), and online purchasing. These lists are typically double opt-in, meaning that the users have specifically indicated their agreement to receive surveys or other promotional materials.

When the researcher requires a more detailed set of sample criteria, the cost of reducing coverage error increases. Targeted specialty lists, such as physicians of a given specialty, are expensive, costing as much as $100 per completed response. While this amount seems large, the cost is much less than other methods of data collection. E-mail list brokers make a practice of not providing the list itself, but of sending the survey invitation out, thereby controlling their list and avoiding survey abuse of the potential respondents on the list.

Online sampling frames rarely include all elements of the target population. Therefore, coverage error will continue to be the greatest source of inaccuracy for online surveys for many years to come. While this same problem is often encountered in the use of mail and phone lists, it is not as severe as with online e-mail lists, which are often based on lists from online websites, including magazines that have specialized hobby and interest affiliations. Carefully selecting lists from well-constructed probability panels or panels having millions of members will help to reduce coverage error.


Sampling Error

Sampling error occurs when a non-representative sample is drawn from the sampling frame. The estimation of sampling error requires that probability sampling methods be used, where every element of the frame population has a known nonzero probability of being selected, which may be made the same (i.e., equal) for all. However, when the relationship between the sample frame and the target population is unknown, statistical inferences to the target population using confidence intervals may be inaccurate or entirely misleading. In online surveys the degree of sampling error is generally unknown unless the sample is drawn from an online panel or other frame with known size and characteristics. This information is rarely found in consumer research and is rarely estimated.

Online surveys are therefore subject to certain amounts of sampling error. Sampling error may be reduced in part by increasing the sample size. This is an easy task, especially when using an online panel.
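The relationship between sample size and sampling error can be illustrated with the textbook margin-of-error formula for a proportion estimated from a simple random sample (a standard formula, not tied to any particular survey platform; the numbers below are illustrative only):

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """95% margin of error for a proportion p_hat estimated from n respondents."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Worst case (p_hat = 0.5): quadrupling the sample only halves the margin of error.
moe_400 = margin_of_error(0.5, 400)    # roughly +/- 4.9 percentage points
moe_1600 = margin_of_error(0.5, 1600)  # roughly +/- 2.45 percentage points
```

Because error shrinks only with the square root of n, enlarging an online panel sample is an easy and inexpensive way to reduce sampling error, but it yields diminishing returns, and it does nothing to correct coverage or nonresponse error.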


Nonresponse Error

Internet researchers are confronted with many non-respondent problems that have elements both unique and common to those faced in telephone surveys. Spam filters, like caller ID monitoring, prevent many survey requests from reaching the inbox. Internet users often have limited discretionary time, resulting in decreased willingness to participate in surveys. This self-selection bias is manifest as potential respondents consider the appeal of the survey topic, survey length, and incentives to complete the survey. The net impact is that without adequate survey response and sample representativeness, nonresponse error will reduce the validity and accuracy of results (Schaffer & Dillman, 1998).

Increasing response rates and reducing nonresponse error most often involves the use of multiple notifications and requests, and the use of personalization in the contact e-mail requesting completion of the interview. Sometimes even these techniques do not produce the desired results. When a population of interest is not adequately represented online or is particularly difficult to interview, a mixed-mode survey strategy should be considered to reduce nonresponse error: a combination of e-mail and telephone, mail, or mall intercept techniques.

For example, many airline passengers making a connection in Cincinnati in July 2009 encountered an interviewer in the terminal who was giving travelers a business card with instructions and an online survey code. She requested that travelers complete the airline satisfaction survey when they returned home or to their offices. The contact was quick, novel, and non-intrusive. Most travelers kindly accepted the card.

The single most important factor in reducing survey nonresponse is the number of attempts made to contact each prospective respondent. While many studies have confirmed this fact, one of the more rigorous studies compared response rates for mail and e-mail surveys (Schaffer & Dillman, 1998). In this field study, respondents in the mail and e-mail treatment groups were contacted four times through (1) pre-notifications, (2) letters and surveys, (3) thank-you/reminder notes, and (4) replacement surveys. Results showed no statistically significant difference between the 57.5 percent response rate for the mail group and the 58.0 percent response rate for the e-mail group.


EXHIBIT 4.5 Increasing Response Rates

The variation in response rates for surveys is enormous, especially when interest and incentives are considered. Ryan Smith, director of sales and marketing at Qualtrics.com, relates his experience with three client surveys that differed greatly in their respective response rates (Smith, 2007). These three very different surveys provide insight into the types of variables that influence response rate:

1. The first survey consisted of a short 10-question survey entitled "What Do Women Want . . . For Valentine's Day?" This somewhat whimsical survey was sent in a single e-mail blast (with no second communication) to a "random sample" of Internet users through an e-mail list broker. Recipients of the survey were offered the chance to win $500 cash in a random drawing and, in addition, were promised a copy of the results. This combination of incentives plus a short, interesting survey produced an amazing 43 percent response rate.

2. A second e-mail survey, a very long academic survey of more than 100 questions, focused on developing a demographic, psychographic, and technological-expertise profile of the online shopper. This survey measuring attitudes and behaviors was sent through the same broker to a random sample of "Internet shoppers." Respondents were promised the chance to win $500 cash in one of seven random drawings. The university sponsorship of the survey was identified in the cover letter, which contained the professor's name, contact information, and a link to the survey. The response rate was 11 percent.

A parallel paper-and-pencil survey was conducted for comparison purposes using a national sample provided by Experian, a provider of credit rating reports. This mail survey was implemented using three separate mailings: (1) a pre-notification, (2) the survey, and (3) a follow-up reminder. The mail version produced a 20 percent response rate. Comparison of the mail and online survey results showed that demographic profiles were very different. This difference was attributed to the difference in sampling frames. Respondents to the mail sample were older, had different family structures, and were more financially secure. However, demographic differences aside, the psychographic profiles related to online shopping were nearly identical.

 

3. A third survey, sent to brokers by a leading investment firm, resulted in a 0.002 percent response rate after two mailouts. Further follow-up revealed that this fast-paced group of potential respondents was too busy to be bothered with a survey.

Smith believes that five actions will greatly increase your online survey response rates:

 

1) Make your survey as short as possible by removing marginal questions.

2) Make your survey interesting to the respondent.

3) Include an offer of incentives.

4) Use group affiliations whenever possible.

5) Use requests that focus on altruistic self-perception appeals ("I need your help").

It should be noted that although Schaffer and Dillman's response rates for a university faculty population were considerably higher than would be expected for a consumer survey, the similarity across survey modes stands as a solid finding. Perhaps most noteworthy is the finding that when compared with the mail survey, the survey administered by e-mail produced 12.8 percent more respondents who completed 95 percent or more of the questions. Individual item response rates and item completion rates were also higher. For the e-mail-based open-ended text responses, the same increase in completion rates was observed, but in addition, responses were longer, averaging 40 words versus 10 words for the paper-and-pencil survey.

Nonresponse is also important at the question level. Response rates can be improved by using forced responses (the respondent cannot continue until all questions are answered). Albaum et al. (2010) show that, contrary to expectations, using forced response improves both the total response rate and data quality.

It is clear that response rates are improved through the use of multiple contacts to secure cooperation, sending reminders to complete the survey, and forcing a response. This applies not only in traditional mail surveys but also in e-mail surveys. Yet, as shown in Exhibit 4.5, great response-rate variations can exist.

Management of the data collection process through state-of-the-art online survey technology offers many new capabilities, such as survey tracking and personalization of invitations and survey questions. The integration of panel information with the survey (using embedded codes and data) facilitates the identification and tracking of survey respondents and nonrespondents. With this integration, personalized follow-up mailings and reminders can be sent to nonrespondents, and survey questions can be personalized to further increase response rates.

Additional technologies enable the careful tracking of respondents. Statistics can be compiled about the data collection status, including the number of surveys e-mailed, the number received by potential respondents, the number of e-mails opened, the number of surveys viewed (link clicked on), and the number of surveys completed. While technological advances help the researcher to reduce non-response rates, it is clear that multiple factors are responsible for nonresponse rates, many of which are not addressable through the administration and handling of the survey.
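The tracking statistics listed above amount to a simple conversion funnel: invitations sent, delivered, opened, viewed, and completed. A minimal sketch of how such status figures might be summarized (the function and field names are hypothetical, not drawn from any particular survey tool, and the numbers are illustrative only):

```python
def collection_funnel(emailed, bounced, opened, viewed, completed):
    """Summarize data-collection status as conversion rates at each funnel stage."""
    delivered = emailed - bounced  # invitations that actually reached an inbox
    return {
        "delivered": delivered,
        "open_rate": opened / delivered,        # share of delivered e-mails opened
        "view_rate": viewed / opened,           # share of opens where the link was clicked
        "completion_rate": completed / viewed,  # share of viewed surveys finished
        "response_rate": completed / delivered, # overall response rate
    }

# Illustrative figures: 6,000 invitations, 200 bounces, 2,400 opens,
# 1,400 survey views, 900 completed surveys.
stats = collection_funnel(6000, 200, 2400, 1400, 900)
```

Monitoring where the funnel narrows tells the researcher whether to address delivery problems (spam filters), the invitation itself (low open or click rates), or the survey instrument (low completion once started).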


Measurement Error

Measurement error is a result of the measurement process itself and represents the difference between the information generated on the measurement scale and the true value of the information. Measurement error may be due to such factors as faulty wording of questions, poor preparation of graphical images, respondent misinterpretation of the question, or incorrect answers provided by the respondent.

Measurement error is troublesome to the researcher because it can arise from many different sources and can take on many different forms. For telephone and personal interviews, measurement error will often occur when the interviewer misinterprets responses, makes errors recording responses, or makes incorrect inferences in reporting the data.

Technical issues may similarly create measurement error in online surveys. The size and resolution of the monitor, the browser, the operating system (Mac, Microsoft Windows, Linux), and even the web page color palette may change the appearance of the survey. Additionally, skins or templates may affect the survey's appearance by adjusting the spacing between questions, the appearance of horizontal lines separating questions or sections, the use of horizontal versus vertical scales, drop-down boxes versus checkboxes or radio buttons, and even font characteristics including size, typeface, the use of boldface and italics, and spacing between scale items.

Researchers have for decades compared measurement-error differences for the various modes of data collection. While differences do exist, online surveys compare favorably with traditional paper-and-pencil and telephone surveys. Standard surveys follow the structured/direct approach for data collection and change little when transitioning from paper-and-pencil to radio-button or checkbox formats. However, as will be discussed later, the differences between the less personal online surveys and in-person qualitative research are far more extreme.

Many of the traditional measurement errors associated with transcription and recording of data are eliminated with electronic real-time entry of the data. With Internet surveys, the survey as well as the analysis of results can be conducted in real-time and posted to a secure Web site in hours. In one recent survey of programmers and software developers conducted by Qualtrics.com for Microsoft, 6,000 invitations were sent out with the promise of a $20 Amazon.com gift certificate. Nine hundred completed surveys were received within 48 hours as results were monitored online.

Online studies may be completed in 24 hours, as compared to the four to ten weeks required for paper-and-pencil methodologies. Mail surveys must be prepared, printed, mailed, followed up with mail reminders, manually coded, typed or scanned into the database, analyzed and then compiled into a managerial report. These many steps involve many participants with varying levels of expertise, and each may introduce error. Internet based surveys eliminate many of these steps and combine other steps to complete the research much more quickly and easily. Efficiencies aside, no matter which mode is used for survey completion, error control must be addressed to assure quality results.


SUMMARY

This chapter first examined the various types of information that can be obtained from respondents. It then considered communication as a means to obtain information from respondents. The types of respondent interviews (structured-direct, unstructured-direct, and structured- and unstructured-indirect) were discussed.

The next section introduced the concepts of inaccuracy and ambiguity as the major sources of response and non-response bias. Predictive and concurrent sources of inaccuracy were discussed in the context of respondent inability or unwillingness to respond. Methods of reducing non-response error were then discussed in the context of theories of survey response. Finally our discussion focused on how to reduce coverage, sampling, non-response and measurement errors in online surveys.

The objective of marketing research is to understand the consumer and apply information and knowledge for mutual benefit. Technological advances in online marketing research provide the ability to monitor customer knowledge, perceptions, and decisions to dynamically generate solutions tailored to customer needs. In this chapter we have stressed the need to improve the research process by reducing errors. Perhaps the biggest mistake the market researcher can make is to view research options as time- and cost-saving tradeoffs across the data collection options. New technologies continue to be developed, but each must be tested for applicability under specific research conditions, and refined so that marketers are able to better identify and measure the constructs being investigated.


REFERENCES

Albaum, G., Roster, C. A., Wiley, J., Rossiter, J., & Smith, S. M. (2010). Designing web surveys in marketing research: Does use of forced answering affect completion rates? Working paper.

Evangelista, F., Albaum, G., & Poon, P. (1999, April). An empirical test of alternative theories of survey response behavior. Journal of the Market Research Society, 41, 2, 227–244

Bradburn, N., & Sudman, S. (1979). Improving interview method and questionnaire design. San Francisco: Jossey-Bass.

Stewart, C. J. and Cash, W. B., Interviewing Principles and Practices, 4/e. © 1985, William C. Brown, Publishers. Reprinted with permission of The McGraw-Hill Companies, pp. 9–13

Converse, J. M., & Presser, S. (1986). Survey questions: Handcrafting the standardized questionnaire. Beverly Hills, CA: Sage.

Fowler, F. J. (1991). Reducing interviewer-related error through interviewer training, supervision, and other means. In P. P. Bremer, R. M. Groves, L. E. Lyberg, N. A. Mathiowetz, & S. Sudman (Eds.), Measurement errors in surveys (pp. 259–278). New York: Wiley.

Fowler, F. J., & Mangione, T. W. (1990). Standardized survey interviewing: Minimizing interviewer-related errors. Newbury Park, CA: Sage.

Hansen, J. M., Smith, S. M., & Geurts, M. D. (2009). Improving survey completion rates and sample representativeness using highly-interesting questions: A national panel experiment comparing one and two stage questions. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1313265

Morwitz, V. G. (1997). It seems like only yesterday: The nature and consequences of telescoping errors in marketing research. Journal of Consumer Psychology, 6(1), 1–29.

Semon, T. T. (2000a, August 14). Better questions means more honesty. Marketing News, 34, 10.

Semon, T. T. (2000b, January 17). If you think a question is stupid—it is. Marketing News, 34, 7.

Schaffer, D. R., & Dillman, D. A. (1998). Development of a standard e-mail methodology: Results of an experiment. Public Opinion Quarterly, 62, 378–397.

Smith, R. (2007). Personal communication [interview].

Sudman, S., Finn, A., & Lannam, L. (1984, Summer). The use of bounded recall procedures in single interviews. Public Opinion Quarterly, 48, 520–524.

Teitler, J. D., Reichman, N. E., & Sprachman, S. (2003). Cost and benefits of improving response rates for a hard-to-reach population. Public Opinion Quarterly, 67, 126–138.