Saturday, April 8, 2023

Interviewing Modes: Personal - Call - Send

Survey information is obtained from respondents through several alternative communication modes. Respondents may be interviewed in person or by telephone, or they may be sent a questionnaire. Mail, fax, and Internet surveys are similar in that all involve non-personal self-reporting.


Personal Interviews

As the name implies, the personal interview consists of an interviewer asking questions of one or more respondents in a face-to-face situation. The interviewer’s role is to get in touch with the respondent(s), ask the desired questions, and record the answers obtained. Recording of the information may be done either during or after the interview. In either case, it is the interviewer’s responsibility to ensure that the content of the answers is clear and unambiguous and that the information has been recorded correctly.

While it is significantly more expensive on a per-completed-interview basis, the personal interview, as a collection medium, has several advantages relative to telephone interviews and mail questionnaires. It provides the opportunity to obtain a better sample, since virtually all the sample units can be reached and, with proper controls and well-trained interviewers, nonresponse to the survey or to individual questions can be held to a minimum. It also gives the opportunity to obtain more information, as a personal interview can be of substantially greater length than either a telephone interview or a mail questionnaire. Finally, it permits greater flexibility: more freedom is provided for adapting and interpreting questions as the situation requires, especially in the case of unstructured personal interviews or where visual, auditory, or olfactory aids are used. The limitations of the personal interview include time, cost, and the response bias that may be induced by poorly trained or improperly selected interviewers. Problems with personal interviews arise from their very nature: the interview is a social interaction between strangers, often on the respondent’s territory, initiated by an interviewer who may have little in common with the respondent.

In addition to the home and workplace, many studies conduct consumer interviews in malls, using the so-called mall intercept method. This method avoids the logistic, financial, and time costs of traveling to meet with respondents. The mall intercept method involves stationing interviewers at selected places in a mall to request interviews from people passing by. Presumably the people are chosen on the basis of a predetermined sampling plan. At times, monetary incentives may have positive effects (Wiseman, Schafer, & Schafer, 1983).

The mall intercept method is a widely used method of data collection in marketing research, and many malls contain permanent research facilities to support it. These facilities may be equipped with videotape equipment, private interviewing compartments, food preparation facilities for taste tests, and a variety of other research equipment. Soundproof rooms free from distractions and equipped with proper lighting and materials can contribute to reliable data collection. Researchers can observe an interviewer’s technique and check the completed work immediately.

Overall quality of data (completeness, depth) appears to be about equal to that of other methods. Because mall intercept respondents are more frequent users of shopping centers, they may be better able to provide brand- and store-oriented information than respondents contacted by other means.


Mall Intercepts Are Widely Used

According to Katherine Smith (1989), mall intercepts have the following advantages:

1. They allow researchers to conduct visual, auditory and taste tests of ads, products and other physical stimuli.

2. They offer an opportunity to obtain immediate response.

3. They potentially provide more depth of response than non-face-to-face interviews.

4. Researchers can use equipment to analyze responses (for example, voice pitch or eye movement tracking analysis).

5. A large number of respondents from a wide geographic area can be interviewed in a limited time.

6. Researchers can control the interviewing environment and supervise the interviewer.


Mall intercept studies are less expensive than door to door interviewing, because travel time and the “not-at-home problem” are eliminated. Moreover, it is becoming increasingly difficult to locate people at home, and ever more people are hesitant to let strangers inside.

Using the mall intercept, interviewing often takes place where members of the population of interest are doing something related to what is being measured. For studying certain types of products or behaviors, the mall is a more realistic setting when a respondent is being asked to make choices. Finally, using certain sampling methods, the mall intercept procedure may give a better distribution of respondents.


Despite all these virtues, mall intercepts have limitations:

1. The mall customer may not reflect the general population.

2. The intercept is not well-suited to probability sampling.

3. Shoppers in a hurry may respond carelessly.

4. The interview time constraint is more severe with mall intercepts than with other personal interviewing methods.


A variation of the mall intercept that is often used in business is to interview at conferences, sales meetings, or other gatherings representing the population of interest.

In more general terms, research is better conducted on site whenever the topic is about the business, when the purchase decision is made on the premises, or when the population of interest is represented there. Respondents are most likely to recall and discuss their experiences during the experience, not days later during a survey. David Kay (1997), a partner in Research Dimensions International, suggests there are five types of interviews for on-site research:

1. Stream of consciousness interview. This is a conversation with questions designed to elicit what the respondent is experiencing at every moment of shopping.

2. Spontaneous reaction interview. This asks for spontaneous, minimally prompted reactions of customers to their environment.

3. Directed general-response interview. Useful for assessing the effectiveness of a strategy, this method asks general questions directed to the strategy.

4. Directed specific-response interview. This is useful for determining why consumers feel as they do, as indicated by answers to other questions.

5. Prompted reaction to execution elements. This is designed to elicit responses to specific elements. For example, an in-store taste test might include the question “What do you think about the taste of China Sea brand Spring Rolls?”


The obvious advantages of on-site interviews are that the respondent is usually in a proper state of mind and has better task or product recall. In addition, it is easier to contact the actual target group, so response rates are higher. On-site interviews seem to produce more robust information.

Paying people to participate in surveys, in the form of prepaid incentives, tends to increase overall response rates for personal interviews, as well as for other types of interviews, but does not seem to influence the quality of the data collected (Davern, Rockwood, Sherrod, and Campbell, 2003).


The Telephone Interview

Telephone interviews are often used in lieu of personal interviews, especially when personal contact is desired, when the information must be collected quickly and inexpensively, and when the amount of information required is relatively limited. Compared to e-mail or mail surveys, telephone interviews are often more costly in terms of total costs of data collection. However, when cost is figured on a per-completed-questionnaire basis, telephone interviews may be less costly than mail, but more costly than e-mail. In addition, telephone surveys offer the opportunity to probe for clarification or further information.

It is generally recognized that for business-to-business and consumer research, telephone interviewing is as effective as personal interviewing for the scope and depth of information obtained. In addition, when a telephone survey is conducted from a call center, interviewers can be better supervised than in personal interviews.

Virtually all telephone interviews are structured direct interviews. However, when the population to be studied is business decision makers, some research practitioners believe that more information may be obtained using the telephone than by conducting focus groups (Eisenfeld, 2003). For business people and consumers alike, it is frequently easier to get 10 minutes of telephone cooperation, than a longer personal interview or attendance at a focus group.

With a detailed database to use as a sampling frame, business people, current customers, former customers, and prospects can all be contacted relatively easily. Furthermore, prenotification letters can also be sent. A study of political telephone surveys concluded that advance prenotification letters can significantly increase response rates (Goldstein & Jennings, 2002).

The likelihood of the potential respondent refusing to be interviewed is always present when starting a telephone interview. Telephone surveys are unique in that they allow the interviewer to respond to the potential respondent and attempt to turn a refusal into a completed interview. In his classic treatise on telephone surveys, Dillman (1978) identifies common reasons people give for refusals and suggests some possible responses the interviewer can give. These responses can help the researcher handle objections and refine their interviewing skills. These are shown in Table 5.1. 


Table 5.1 Possible Answers to Reasons for Refusals

Too busy: This should only take a few minutes. Sorry to have caught you at a bad time; I would be happy to call back. When would be a good time for me to call in the next day or two?

Bad health: I’m sorry to hear that. Have you been sick long? I would be happy to call back in a day or two. Would that be okay? (If lengthy or serious illness, substitute another family member. If that isn’t possible, excuse yourself and indicate they will not be called again.)

Too old: Older people’s opinions are just as important in this particular survey as anyone else’s. In order for the results to be representative, we have to be sure that older people have as much chance to give their opinion as anyone else. We really do want your opinion.

Feels inadequate / doesn’t know enough to answer: The questions are not at all difficult. They mostly concern your attitudes about local recreation areas and activities, rather than how much you know about certain things. Some of the people we have already interviewed had the same concern you have, but once we got started they didn’t have any difficulty answering the questions. Maybe I could read just a few questions to you and you can see what they are like.

Not interested: It’s awfully important that we get the opinions of everyone in the sample; otherwise the results won’t be very useful. So, I’d really like to talk with you.

No one else’s business what I think: I can certainly understand; that’s why all of our interviews are confidential. Protecting people’s privacy is one of our major concerns, and to do it, people’s names are separated from the answers just as soon as the interview is over. And all the results are released in a way that no single individual can ever be identified.

Objects to surveys: We think this particular survey is very important because the questions are ones that people in parks and recreation want to know the answers to, so they would really like to have your opinion.

Objects to telephone surveys: We have just recently started doing our surveys by telephone, because this way is so much faster and it costs a lot less, especially when the survey is not very long, like this survey.


Source: Reprinted from Mail and Telephone Surveys: The Total Design Method by Dillman, D. Copyright © 1978. This material is used by permission of John Wiley & Sons, Inc.

As with all modes of surveying, telephone surveys benefit from the use of inducements or incentives, monetary or nonmonetary, to encourage potential respondents to participate. Incentives may be promised or sent in advance with a preliminary letter when the mailing address of the potential respondent is known, or they may be offered after the initial request for participation meets with a refusal. When used this way, the incentive is known as a refusal conversion incentive.

The main purpose of such incentives is to generate a greater response rate with the effect of reducing nonresponse error. But the use of incentives has implications as well. First, total cost will increase, although cost per response may decrease depending on how effective the incentive is. Second, data quality may be affected, leading to a change in response bias, which may be a positive or negative change. Third, sample composition may be affected, again with a positive or negative effect. Fourth, expectations of interviewer and respondent may be changed. Finally, interviewer effort may be affected.

The telephone survey may be a good approach to reach specific market segments, particularly when door to door interviews are not possible or might lead to serious distortions in response. It is obvious that there must be sufficiently high telephone penetration in the segment for this mode of data collection to be advantageous. For example, the use of surname sorts makes telephone surveys the most efficient way to locate, contact and survey ethnic groups in the United States.

The basic limitations of telephone interviews are the relatively limited amount of information that can be obtained (at least compared with alternative methods) and the bias that exists in any sample of home telephone subscribers. More than 25 percent of subscribers nationally, and more than 50 percent in large cities, are not listed in a published directory, either because they have an unlisted number or because they have moved (www.busreslab.com/articles/article3.htm). A technique for including unlisted telephone numbers in the sample frame is called random digit dialing (RDD).
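Conceptually, RDD appends random digits to known working telephone exchanges so that listed and unlisted numbers have the same chance of selection. A minimal sketch in Python, where the area code and exchange prefixes are purely hypothetical illustrations:

```python
import random

def rdd_sample(area_code, n, seed=None):
    """Generate n distinct random telephone numbers within one area code.

    Randomizing the last four digits of known working exchanges gives
    listed and unlisted households the same chance of selection.
    """
    rng = random.Random(seed)
    exchanges = ["231", "354", "682"]  # hypothetical working prefixes
    numbers = set()
    while len(numbers) < n:
        exchange = rng.choice(exchanges)
        line = rng.randrange(10000)  # random last four digits, 0000-9999
        numbers.add(f"({area_code}) {exchange}-{line:04d}")
    return sorted(numbers)

sample = rdd_sample("505", 5, seed=1)
```

In practice, commercial RDD frames restrict generation to exchange blocks known to contain working residential numbers, which raises the hit rate considerably over purely random digits.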

An additional problem for telephone researchers is that home telephone subscribers are disappearing. Currently about 1 in 5 homes does not have a “landline” telephone, relying instead on cell phones or computer-based phone services.

http://www.azcentral.com/business/articles/2009/07/20/20090720biz-cellnation0721side.html

Additional problems associated with telephone interviewing are those of sample control and interviewer performance. Often this is manifested by inadequate efforts to complete interviews with some of the harder to reach respondents. Adding another sample is no substitute for dealing properly with the original sample.

Another aspect of interviewer performance that can influence response rates and data quality is actually something beyond any given interviewer’s control. This is the interviewer’s accent. There is evidence that accent can influence respondents’ participation. Linguistic experts have found that listeners form mental impressions of people who speak with an accent different from theirs, impressions that may lead to a refusal or bias the responses. In the United States there are many region specific accents (New England, the Deep South) and also those that are cultural (Hispanic, Asian, Indian). When faced with an unfamiliar accent, people may have trouble communicating. When communication becomes difficult, refusals will increase. Alternatively, some accents increase curiosity (British, Irish) and can actually increase response rates. In addition, respondents who identify an interviewer’s accent may apply preconceived biases to the interviewer and to the survey. Accent free interviewing eliminates one potential source of nonresponse and bias. At the very least, if a study is regional in nature, then having interviewers from that region will also reduce nonresponse and bias.

The so-called “caller ID” problem has emerged for telephone surveys as the number of households with caller ID and answering machines has increased. One way people use answering machines is to screen incoming calls. The second, of course, is to allow callers to leave a message when members of the household are not at home. One might be tempted to assume that screening and message-taking would allow potential respondents to choose not to participate, a form of refusal. Interestingly, some research conducted on this issue has found that households with answering machines were more likely to complete the interview and less likely to refuse to participate compared to households where there was no answer on the initial call attempt (Xu, Bates, & Schweitzer, 1993). This suggests that where answering machines are operating, the call can represent a form of prenotification. It is generally believed that prenotification in any type of survey increases participation rates. But this phenomenon also has the potential to generate bias in the sample (Oldendick & Link, 1994).

In the United States, major legislation affecting telemarketing went into effect in 2003. This was the creation of the National Do Not Call Registry. Research practitioners are exempt from this law. Of primary concern to the research community is the use by telemarketers of selling under the guise of research techniques. For those interested, the European Society for Opinion and Marketing Research (ESOMAR) has responded by issuing guidelines for distinguishing telephone research from telemarketing (ESOMAR, 1989). A good overview of telephone surveys is given by Bourque and Fielder (2003a).


The Mail Interview

Mail interviews have in the past been widely used. Mail questionnaires provide great versatility at relatively low cost and are particularly cost-effective when included as part of a scheduled mailing, such as a monthly correspondence or billing. A questionnaire may be prepared and mailed to people in any location at the same cost per person: the cost of preparing the questionnaire, addressing the letter or card, and the postage involved. Respondents remain anonymous unless a name is requested, the questionnaire is openly coded, or some ethically questionable practice is employed.

Timeliness of responses is critical in mail surveys. If the time given is reasonable, say one or two weeks, stating a deadline should not adversely affect the response rate. Stating such a deadline may encourage the potential respondent not to postpone the task indefinitely.

The overall process of data collection from a mail survey is summarized below as a sequence of contact activities for an optimal mail survey. With minor modifications this general sequence is applicable to personal interview, telephone, and e mail surveys. When designing a survey, the researcher must consider issues that can affect response rate and data quality, including those shown in Table 5.2.

Table 5.2 Selected Dimensions of a Mail Survey and Alternatives for Choice

Preliminary notification: letter, postcard, telephone call, e-mail, none

Reminder: letter, postcard, telephone call, e-mail, none

Cover letter: separate item or included as first page of questionnaire; personalized or nonpersonalized; color of ink in signature (black, blue)

Length of questionnaire: number of pages

Format of questionnaire: print front, print front and back, individual stapled pages, booklet

Type of outgoing postage: first-class stamp, first-class metered, bulk, nonprofit (where appropriate)

Return envelope postage: first-class stamp, metered, first-class permit, none

Inducements: monetary (amount), nonmonetary (pen, silver jewelry, trinkets of all types), contribution to charity, none; when given (prepaid, promise to pay)

Coding with a number: yes (on questionnaire, on return envelope), none

Anonymity/Confidentiality: yes, no

Endorsement: yes, no


Increasing Response Rates

Perhaps the most serious problem with mail surveys is nonresponse. Typically, people indifferent to the topic being researched will not respond. It is usually necessary to send additional mailings (i.e., follow-ups) to increase response. But even with added mailings, response to mail questionnaires is generally a small percentage of those sent; the modal response rate is often only 20 to 40 percent. On the front end of the surveying effort, response rates are increased through preliminary contact by letter or telephone call, cover letters, and monetary or nonmonetary (a gift) inducements.

Other experimental research has evaluated the response effects of questionnaire format and length, survey sponsorship, endorsement, type of postage, personalization, type of cover letter, anonymity and confidentiality, deadline date, premiums and rewards, perceived time for the task, and the use of a follow-up reminder. Reported nonresponse rates and accuracy of data for experiments involving these techniques vary, and there appears to be no strong empirical evidence that any one is universally better than the others, except that it is better to use a follow-up and to use monetary or nonmonetary incentives.

A cover letter included as the first page of a questionnaire is shown in Exhibit 5.2. This letter could have been a separate page if more space was needed for the questionnaire. Note that e-mail invitations should be much shorter, perhaps limited to only one short paragraph. There is no evidence that any alternative is universally better than another within each dimension; the best rule of thumb is to use common sense. Further discussion of these will be found in the many review articles and studies published in such sources as the Journal of Marketing Research and Public Opinion Quarterly, in the book by Bourque and Fielder (2003b), and in the classic works of Dillman (1978, 2000).


Exhibit 5.2 Example of a Cover Letter

My colleague, Dr. David Boush, and I are engaged in a study of consumers’ use of financial services. The broad objective of this study is to gain an understanding of how people use banks and similar financial organizations, and what characteristics influence their behavior with such companies. The Bank of Anytown has agreed to cooperate with us in this endeavor by assisting us in data collection.

The enclosed questionnaire is being sent to a large number of the customers of the Bank of Anytown, each of whom has been selected by a random process. I would greatly appreciate your completing the questionnaire and returning it in the envelope provided. Please note that you do not have to add postage to this envelope.

All individual replies will be kept in strictest confidence. No person associated with The Bank of Anytown will see any questionnaire. Only aggregate results will be shown in our write up of the results. No person other than Dr. Boush, myself, and our research assistant will ever see a completed questionnaire. If you do not wish to participate in this survey simply discard the questionnaire. Completing and returning the questionnaire constitutes your consent to participate.

The code number at the top of the questionnaire will be used only for identifying those people who have not responded, so that they will not be burdened by receiving a follow-up mailing. After the second mailing has been made, all records that match a number with a person’s name will be destroyed.

The success of this project depends upon the assistance of persons such as yourself. If you have any questions, please call me at 503-346-4423.


Sincerely,

Gerald Albaum

Professor of Marketing


Endorsements are an intriguing dimension of survey research. An endorsement is an identifying sponsorship that provides “approval and support for a survey from an individual or organization.” An endorsement can be included as a company logo or as the person under whose signature the letter is sent. An endorsement may have a positive, neutral, or negative effect, depending on how the endorser is perceived by a potential respondent.

Rochford and Venable (1995) found that significantly higher response rates were observed when there was endorsement by an external party associated with the targeted audience than when there was no such endorsement. In addition, endorsements by locally known individuals produced higher response rates than endorsements by highly placed but less well-known individuals from a national headquarters office.

Since people responding to a mail questionnaire tend to do so because they have stronger feelings about the subject than the nonrespondents, biased results are to be expected. To measure this bias, it is necessary to contact a sample of the nonrespondents by other means, usually telephone interviews. This is a type of nonresponse validation. The low level of response, when combined with the additional mailings and telephone (or personal) interviews of nonrespondents, results in substantial increases in the per-interview cost. The initial low cost per mailing may therefore be illusory. On the other hand, the nonresponse validation may indicate that population subgroups have not been omitted and that the results may not be biased.
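The economics can be made concrete with a little arithmetic. A minimal sketch, where all dollar figures and response rates are hypothetical illustrations:

```python
def cost_per_completed(pieces_mailed, cost_per_piece, response_rate,
                       followup_cost=0.0, followup_completes=0):
    """Effective cost per completed questionnaire for a mail survey,
    optionally including a follow-up effort (extra mailings or
    telephone interviews of nonrespondents)."""
    completes = pieces_mailed * response_rate + followup_completes
    total_cost = pieces_mailed * cost_per_piece + followup_cost
    return total_cost / completes

# 1,000 pieces at $1.50 each with a 25 percent response rate
base = cost_per_completed(1000, 1.50, 0.25)  # $6.00 per complete

# add telephone interviews of nonrespondents: 50 more completes for $1,200
validated = cost_per_completed(1000, 1.50, 0.25,
                               followup_cost=1200.0,
                               followup_completes=50)
# per-complete cost rises from $6.00 to $9.00
```

Under these invented numbers, the "cheap" mail survey ends up 50 percent more expensive per completed interview once the nonresponse follow-up is costed in, which is the sense in which the initial low cost per mailing is illusory.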


Variations on Mail Interviews

Many variations of the mail interview are frequently used. These include the warranty card, hand delivered surveys, newspaper/magazine surveys, the fax back survey, survey on the back of checks, website polls (a one question survey) and of course, the email survey.

Warranty cards often ask for information about where the item was purchased, what kind of store or outlet sold it, when it was purchased, and other variables such as demographics and life style. Although warranty cards do not provide an extensive amount of information, response rates are substantially higher than for the usual mail questionnaire and the cards are useful for creating a customer database.

Another variation is the questionnaire that is either printed or inserted in a newspaper or magazine. Potential respondents are requested to mail or fax this back to a designated address/fax number. Among the many problems with this approach is the lack of any formalized control over the sample. Yet, despite this major limitation, the approach does have a possibility of better hitting the target population.

Another modification of including a questionnaire in a printed publication asks respondents to return the questionnaire by fax. One advantage of this approach is that those who do respond will be truly committed to the project. But, again, there is little control over the sample. One major airline routinely printed questionnaires to be returned by fax in its in-flight magazine.

Potential advantages of using fax include quick contact, rapid response, retention of the original document format and visual images used, modest cost, and automated faxing that works directly with the computer (Baker, Hozier, & Rogers, 1999).

One hybrid approach, used by McDonald’s, had patrons take a brief in-store survey on a mark-sense sheet, as shown in Figure 4.3. When completed, the store faxed the sheets for processing, using a technique that converted the image directly to data without being printed to paper on the receiving end. The results were then available online in real time.


Web and E-Mail Interviews

As computer coverage in home markets increases, the use of electronic surveys has increased. Web and e-mail surveys are fulfilling their promise to be a driving force in marketing research. Currently (2009), 73 percent of all U.S. households have Internet access in their home.

http://www.marketingcharts.com/interactive/home-internet-access-in-us-still-room-for-growth-8280/

The Internet has experienced a growth rate that has exceeded any other modern technology, including the telephone, VCR, and even TV. The Internet has diffused outward from a highly educated, white-collar, upper-income, male-dominated core. At the opposite end of the spectrum, the approximately 25 percent who are non-adopters include a disproportionate number of the elderly, single mothers, African Americans and Hispanics, and lower-income individuals. For some studies, this may be a serious limitation. However, even today, most groups, such as company employees, students, or association members, have nearly 100 percent Internet access and check e-mail on a daily basis.

Electronics-driven lifestyles that include online social networks, massive use of texting, and pervasive Internet connections are no doubt responsible, at least in part, for attitude and behavioral changes in the way we view our increasingly virtual world. Strong upward trends are observed in the percentage of Internet purchases of airline tickets, CDs, DVDs, books, and computer software, hardware, and systems. These online customers provide excellent access for research purposes.

Advocates of online surveying quickly point to the elimination of mailing and interviewing costs, elimination of data transcription costs, and reduced turnaround time as the answer to client demand for lower cost, timelier, and more efficient surveys. As a result, online marketing research has become so widely accepted that online research has been optimistically projected to account for as much as half of all marketing research revenue, topping $3 billion. While these numbers appear to be overly optimistic, it is clear that online research is growing and that researchers operate in a much faster paced environment than ever before. This pace will continue to increase as new modalities for research open: wireless PDAs, Internet capable mobile phones, Internet TVs, and other Internet-based appliances yet to be announced. Each is an acceptable venue for interacting with the marketplace and conducting online research.


Substantial benefits accrue from the various approaches to computer-assisted data collection in surveys, as shown below:

• Respondents need few computer-related skills

• Respondents choose their own schedule for completing the survey

• Can easily incorporate complex branching into the survey

• Can easily pipe and use respondent-generated words in questions throughout the survey

• Can accurately measure response times of respondents to key questions

• Can easily display a variety of graphics and directly relate them to questions

• Eliminates the need to encode data from paper surveys

• Errors in data are less likely, compared to the equivalent manual method

• Speedier data collection and encoding compared to the equivalent manual method


ACNielsen (Miller, 2001) reported the results of 75 parallel tests comparing online and traditional mall intercept methods. Researchers noted high correlations in aggregate purchase intentions. While online measures may yield somewhat lower score values, recalibration of averages against appropriate norms produced accurate sales forecasts. Wilkie further reported that while responses gathered using different survey modes may be similar, the demographic profiles of online and traditional respondent groups do differ.

Given that the current percentage of households online is approximately 75 percent, statistical weighting of cases could be used to adjust demographic differences of online groups to match mall intercept or telephone populations. However, the possibility of weighting actually raises the question of whether it is better to model phone or mall intercept behavior (which are also inaccurate) or to attempt to independently model the actual behavior of the respondents.
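The statistical weighting mentioned above is often done by post-stratification: each demographic group receives a weight equal to its population share divided by its sample share. A minimal sketch, where the age groups and shares are invented for illustration (real studies typically weight on several variables at once):

```python
def poststratify(sample_shares, population_shares):
    """Post-stratification weight per group: population share / sample share."""
    return {g: population_shares[g] / sample_shares[g]
            for g in population_shares}

# hypothetical age mix: the online sample skews young relative to the target
sample_mix = {"18-34": 0.50, "35-54": 0.35, "55+": 0.15}
population_mix = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}

weights = poststratify(sample_mix, population_mix)

# weighting each respondent by their group's weight reproduces the target mix
weighted_mix = {g: sample_mix[g] * weights[g] for g in sample_mix}
```

Note that weighting only corrects the demographic mix; it cannot correct for online and offline members of the same demographic group behaving differently, which is exactly the modeling question the paragraph above raises.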



Probability And Nonprobability Survey Approaches

A variety of approaches to the presentation of surveys and the recruitment of respondents are used on the Web. Surveys based on probability samples, if done properly, provide a bias-free method of selecting sample units and permit the measurement of sampling error. However, the majority of online research can be typified as a one-shot mailout, the objective of which is to obtain a sufficient number of completed responses. With these types of studies, there is rarely much thought given to issues of the representativeness of the sample, or the validity and accuracy of the results. The question, then, is what is required for effective online research?

Online Nonprobability Surveys

Nonprobability samples offer neither of these features. Surveys based on nonprobability samples, generally conducted for entertainment or to create interest in a Web site, are self-selected: respondents find them on survey Web sites and participate out of interest or for compensation, or receive them as members of volunteer panels.

As an example of the Web site interest variety of survey, Figure 5.2 shows a CNN.com “Quick Vote” survey, which includes a link to the online results page.

Figure 5.2 Web Site Interest Survey. Source: http://edition.cnn.com/

The National Geographic Society Web site offers surveys that focus on a variety of educational, social and environmental issues at www.nationalgeographic.com/geosurvey/. Surveys include lengthy inventories covering demographics, Internet usage, and attitudes about such topics as geographic literacy, conservation and endangered species, culture, and a variety of other topics. The 2006 Global Geographic Literacy Survey was conducted jointly with Roper Research to assess the geographic knowledge of people ages 18 to 24 across the United States. Specific questions focused on benchmarking attitudes towards the importance of geography and how aware young adults are of geography in the context of current events.

Other well-recognized nonprobability surveys include the ACNielsen BASES (test marketing) and Harris Black panels. Although based on nonprobability samples, these panels are continually redefined to match the demographic characteristics of telephone and mall intercept surveys; the parallel telephone and mall intercept studies provide the weights used to proportionately adjust online samples and reduce selection bias.


Online Probability Surveys

Probability-based surveys allow the researcher to estimate the effects of sampling error and thereby make inferences about the target population through hypothesis testing. Coverage errors, nonresponse errors, and measurement errors still apply and may reduce the generalizability of the data. Online probability samples generally arise when e-mail surveys are sent to comprehensive lists that represent the target population. When the target population is large, a random sample is drawn from the list. For smaller populations, such as the employees of a company, the survey may be sent to the entire population, constituting a census.

Where the target population of interest is visitors to a given Web site, pop-up surveys may be presented randomly to visitors during their visit to the site. In this case, the target population is well defined and each sample element has a known nonzero probability of selection. The Qualtrics.com Site Intercept tool allows the market researcher to control all pop-up and pop-under content without the assistance of the IT department. Surveys, white papers, and messages can be distributed based on a variety of conditions, including multiple pages visited that contain specific keywords. For example, alternative surveys about kitchen appliances could be administered if the visitor viewed pages dealing with major appliances (stoves and refrigerators) rather than countertop appliances (mixers and waffle irons).
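The random pop-up presentation amounts to a Bernoulli sample of site visits: each visit independently triggers the invitation with a fixed, known probability, which is what gives every sample element a known nonzero chance of selection. A minimal sketch follows; the visitor IDs and the 5 percent trigger rate are hypothetical, and this is an illustration of the sampling idea rather than the actual Site Intercept mechanism.

```python
import random

def intercept_sample(visitor_ids, p, seed=42):
    """Select the visitors who are shown the pop-up survey invitation.

    Each visitor is chosen independently with known probability p, so
    every member of the target population (site visitors) has the same
    known, nonzero probability of inclusion."""
    rng = random.Random(seed)  # fixed seed for a reproducible sketch
    return [v for v in visitor_ids if rng.random() < p]

visitors = range(10_000)                  # one day's visits (hypothetical)
invited = intercept_sample(visitors, 0.05)
print(len(invited))                       # roughly 10_000 * 0.05 = 500
```

Because the inclusion probability is known, the researcher can compute sampling error for estimates about the site's visitor population, which is exactly what self-selected Web surveys cannot do.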

Prerecruited online panels, when recruited using probability-based sampling methods such as random-digit telephone dialing, also produce probability surveys. In this case, random-digit dialing is used to contact prospective panel members, who are qualified as having Internet access before being recruited for the panel. The initial interview may also include a variety of demographic, lifestyle, and computer usage questions that help in weighting the panel, thereby reducing selection bias.

| Criteria | Telephone/CATI | In-Home Interviews | Mall-Intercept Interviews | CAPI | Mail Surveys | Mail Panels | Internet/Web |
|---|---|---|---|---|---|---|---|
| Flexibility of data collection | Moderate to high | High | High | Moderate to high | Low | Low | Moderate to high |
| Diversity of questions | Low | High | High | High | Moderate | Moderate | Moderate to high |
| Use of physical stimuli | Low | Moderate to high | High | High | Moderate | Moderate | Moderate |
| Sample control | Moderate to high | Potentially high | Moderate | Moderate | Low | Moderate to high | Low to moderate |
| Control of data collection environment | Moderate | Moderate to high | High | High | Low | Low | Low |
| Quantity of data | Low | High | Moderate | Moderate | Moderate | High | Moderate |
| Response rate | Moderate | High | High | High | Low | Moderate | Very low |
| Perceived anonymity of respondent | Moderate | Low | Low | Low | High | High | High |
| Social desirability | Moderate | High | High | Moderate to high | Low | Low | Low |
| Obtaining sensitive information | High | Low | Low | Low to moderate | High | Moderate to high | High |
| Potential for interviewer bias | Moderate | High | High | Low | None | None | None |
| Speed | High | Moderate | Moderate to high | Moderate to high | Low | Low to moderate | Very high |
| Cost | Moderate | High | Moderate to high | Moderate to high | Low | Low to moderate | Low |
Source: From Malhotra, N., Marketing Research: An Applied Orientation, 4th edition, Copyright © 2004. Reprinted with permission of Pearson Education, Inc., Upper Saddle River, NJ.



Mixed Mode Studies

Mixed mode designs provide another alternative for the researcher, presenting respondents with a choice of responding via an online survey or via another mode. Respondents contacted by mall intercept, telephone, mail, or another probability-based sampling mechanism are given the opportunity to respond in any of several modes, including online. It is common for businesses and individuals to prefer the online survey format.

Wisconsin cheese producers respond annually to an industry group survey that reports production by type of cheese. This survey of more than 90 pages details the desired information for a separate type of cheese product on each page. When asked whether they would prefer a paper-and-pencil or online survey, more than 50 percent favored the online mode. While the online methodology may be preferred, access to the survey must be provided to all cheese producers, even those without Internet access. A mixed mode survey design is the obvious choice.

When online samples are used to make inferences about the general population, we must account for the multiple factors that distinguish online samples from the general population. These factors include nonsampling errors unique to the Internet methodology: for example, fewer households have adopted the Internet than have telephone or mail service, and researchers lack control of the respondent’s computer configuration (browser, operating system, fonts, and resolution). In addition, refusals, partial completions, and all other nonsampling factors that bias traditional survey measurement and results still apply to online surveys. (Sources of nonsampling error were discussed in Chapter 2.)

Online survey techniques are also subject to many of the other errors that affect telephone and mail surveys. Marketing researchers, both professional and casual, often neglect to consider the implications that nonprobability sampling and surveys have on the ability to make inferences regarding the target population. While this brief review has done little more than identify the topic areas to be considered, much research on the topic has been completed for both traditional and online surveys. In Chapter 10, we build upon this discussion of general survey and sampling methodology.


Strategies Of Data Collection

When designing a survey, our concern should be for the total package of survey procedures rather than any single technique (Dillman, 1978, 2000). The total package concept underlying survey design strategy goes beyond contacts with respondents. All aspects of a study must be considered when comparing alternative strategies. This is the essence of a total design. Pretesting and conducting pilot surveys are part of this package.


Pretesting and Pilot Survey

A distinction should be made between a pretest and a pilot survey. Pretesting is an activity related to the development of the questionnaire or measurement instrument to be used in a survey or experiment. In contrast, a pilot survey is a small-scale test of what the survey is to be, including all activities that will go into the final survey. Pretesting a questionnaire answers two broad questions:

1. Are we asking “good” questions?

2. Does the questionnaire flow smoothly, and is the question sequence logical?

Pretesting does not, however, ensure that the questionnaire (or even the survey) will be valid, particularly in its content. A general rule of thumb for most surveys is that a pretest of about 30 to 100 interviews is adequate, provided this covers all subgroups in the main survey population. Ideally, the pretest sample should mirror the composition of the main survey sample.

The pilot study is designed to ascertain whether all the elements in the survey fit together. Thus, questionnaire pretesting may be part of the pilot study but normally should not be. One benefit of the pilot survey is that it can help researchers decide the size of the initial sample for the main survey: the response rate observed in the pilot, together with the desired number of completed interviews, determines the required initial sample size.
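That calculation can be illustrated with a short sketch (the pilot figures are hypothetical): if the pilot's completion rate is r and n completed interviews are needed, the main survey must go out to roughly n / r sample members.

```python
import math

def required_mailout(completes_needed, pilot_sent, pilot_completed):
    """Initial sample size implied by the pilot survey's completion rate."""
    completion_rate = pilot_completed / pilot_sent
    return math.ceil(completes_needed / completion_rate)

# Hypothetical pilot: 200 questionnaires sent, 50 completed (25 percent).
# To end up with 1,000 completed interviews, mail about 4,000.
print(required_mailout(1000, 200, 50))  # prints 4000
```

In practice the result is usually inflated further as a safety margin, since the main survey's response rate may fall below the pilot's.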

Both pretesting and pilot surveys can provide information that helps manage some sources of potential research error. Moreover, in the long run both can make a survey more efficient and effective.


Longitudinal Data Collection With Panels

Panels are widely used in marketing research. In the preceding chapter we discussed the continuous panel as used by syndicated services. In this chapter we have discussed methods for reducing response error along with some general characteristics of panels. Although the panel concept has been used in business to business marketing research, its greatest application has been in studying consumer purchase, consumption and behavioral patterns.

For example, panels have been effectively used to develop early forecasts of long run sales of new products. There are major commercial consumer panel organizations, and many consumer product companies that maintain their own panels or create short-term ad hoc panels as the need arises to test new products. In addition, several universities maintain consumer panels to obtain research data and generate revenues by providing data to others. Moreover, the application of Internet technology has helped many corporations to reduce data costs while increasing customer feedback and research.

The distinguishing feature of a panel is the ability to repeat data collection, gathering longitudinal data from the same sample of respondents. The repeated collection of data from panels creates both opportunities and problems. Panel studies offer at least three advantages over one-time surveys:

1. Deeper analysis of the data is possible, so the researcher can determine whether an overall change is attributable primarily to a unidirectional shift of the whole sample or only reflects changes in subgroups.

2. Additional measurement precision is gained from matching responses from one data collection point to another.

3. Panel studies offer flexibility that allows later inquiries to explain earlier findings.


When responses are obtained at two or more times, the researcher assumes that an event happens or can happen (i.e., changes may occur) during the time interval of interest. In fact, it is just such changes, analyzed in the form of a turnover table, that provide the heart of panel analyses.

Assume that we have changed the package in one market for a brand of paper towels called Wipe, and that we survey 200 people purchasing the product two weeks before the change (T1) and take a similar measure the week after (T2). The results are shown in Table 4.4. Both (A) and (B) tell us that the gross increase in sales of Wipe over X (which represents all other brands) is 20 units (or 10 percent). However, only the turnover table from the panel in (B) can tell us that 20 former buyers of Wipe switched to X and that 40 former buyers of X switched to Wipe. In instances where there is experimental manipulation, such as the introduction of a new product or the use of split-cable advertising, the manipulation is presumed to cause changes between time x (when the change is made) and time x + 1.

Table 4.4 Changes in Sales of Wipe Paper Towels between T1 and T2 (Hypothetical)

(A) Cross-Sectional

|    | Bought Wipe | Bought X | Number of Purchasers |
|----|-------------|----------|----------------------|
| T1 | 100         | 100      | 200                  |
| T2 | 120         | 80       | 200                  |

(B) Panel

|                   | At T2 Bought Wipe | At T2 Bought X | Total   |
|-------------------|-------------------|----------------|---------|
| At T1 Bought Wipe | 80                | 20             | 100     |
| At T1 Bought X    | 40                | 60             | 100     |
| Total             | 120               | 80             | N = 200 |
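The panel turnover table in (B) can be computed directly from matched respondent records; here is a sketch using the hypothetical Wipe data, where each pair records one panelist's brand at T1 and at T2:

```python
from collections import Counter

# Matched panel records: (brand bought at T1, brand bought at T2).
# Counts reproduce the hypothetical panel data in Table 4.4(B).
records = ([("Wipe", "Wipe")] * 80 + [("Wipe", "X")] * 20 +
           [("X", "Wipe")] * 40 + [("X", "X")] * 60)

turnover = Counter(records)  # each key is one cell of the turnover table

switched_to_wipe = turnover[("X", "Wipe")]
t2_wipe_buyers = sum(n for (t1, t2), n in turnover.items() if t2 == "Wipe")

print(switched_to_wipe)  # prints 40: former X buyers who switched to Wipe
print(t2_wipe_buyers)    # prints 120: total Wipe buyers at T2
```

Only these matched records, not the cross-sectional totals in (A), reveal the 20 buyers lost and 40 gained behind the net increase of 20.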

Panel studies are a special case of longitudinal research in which respondents are typically conscious of their ongoing role in answering similar questions over a period of time. This consciousness of continuing participation can lead to panel conditioning, which may bias responses relative to what would be obtained through a cross-sectional study. As in any effort at scientific measurement, the researcher should be concerned with threats to internal validity, since internal validity is a precondition for establishing, with some degree of confidence, the causal relationship between variables. Another issue of concern is panel attrition, the nonresponse that occurs in later waves of interviewing: some persons interviewed the first time may be unwilling or unable to be interviewed later on.

Panel types have many distinguishing characteristics. We have already mentioned different types of sponsoring organizations (such as commercial), permanence (continuous or ad hoc), and research design (nonexperimental). Panels can also be characterized by geographic coverage (ranging from national to local), whether a diary is used, data collection method (all types are used), sampling method employed for a given study (probability or not), and type of respondent.

A unique type of panel is the scanning diary panel. This panel involves recruiting shoppers in the target market area to participate in the panel, and each person typically is compensated for participation. An identification card (similar to a credit card) is given to each member of the panel household. At the end of a normal shopping trip in a cooperating store, the card is presented at the start of the checkout process. This identifies the respondent for input of purchases into the computer data bank. The types of information available from this sort of panel are similar to those discussed in the preceding chapter for scanner based syndicated services. An added advantage here, of course, is that there is a carefully designed sample providing purchase data.

One last comment about panels is that they are often used for a cross-sectional study. When used this way, with only one measurement made, the panel is merely the source of a sample (the sample frame).


Summary

This chapter first examined the media through which interviews may be conducted. The personal interview, the telephone interview, the mail questionnaire, and the online e-mail interview were discussed, including the merits and limitations of each. Variations of these basic methods, including electronic variations, were briefly described. Finally, the use of panels was presented in a general context.

The objective of marketing research is to understand the consumer and apply information and knowledge for mutual benefit. Technological advances in online marketing research provide the ability to monitor customer knowledge, perceptions, and decisions to dynamically generate solutions tailored to customer needs. In this chapter we have stressed the advantages as well as the caveats associated with online research. Perhaps the biggest mistake the market researcher could make would be to view online research as simply a time- and cost-saving extension of traditional modes of data collection. As new technologies continue to be developed, they are tested for applicability in marketing research settings and refined so that marketers can better identify the needs and wants of today’s consumers.


References

www.azcentral.com/business/articles/2009/07/20/20090720biz-cellnation0721side.html

Baker, K. G., Hozier, G. C., Jr., & Rogers, R. D. (1999, February). E-Mail, mail and fax survey research: Response pattern comparisons. Paper presented at the 6th Annual Meeting of the American Society of Business and Behavioral Sciences, Las Vegas, NV.

Becker, E. E. (1995, January 2). Automated interviewing has advantages. Marketing News, 29, 9.

Bourque, L. B., & Fielder, E. P. (2003a). How to conduct telephone surveys. Thousand Oaks, CA: Sage.

Bourque, L. B., & Fielder, E. P. (2003b). How to conduct self-administered and mail surveys. Thousand Oaks, CA: Sage.

www.busreslab.com/articles/article3.htm

www.cnn.com/

Davern, M., Rockford, T. H., Sherrod, R., & Campbell, S. (2003). Prepaid monetary incentives and data quality in face-to-face interviews. Public Opinion Quarterly, 67, 139–147.

Dillman, D. A. (1978). Mail and telephone surveys: The total design method. New York: Wiley-Interscience.

Dillman, D. A. (2000). Mail and internet surveys: The tailored design method (2nd ed.). New York: Wiley.

Eisenfeld, B. (2003, March 3). Phone interviews may garner more data. Marketing News, 37, 57.

European Society for Opinion and Marketing Research. (1989). Distinguishing telephone research from telemarketing: ESOMAR guidelines. Amsterdam: ESOMAR.

Goldstein, K. M., & Jennings, M. K. (2002). The effect of advance letters on cooperation in a list sample telephone survey. Public Opinion Quarterly, 66, 608–617.

Kay, D. (1997, January 6). Go where the consumers are and talk to them. Marketing News, 31, 14.

Malhotra, N. K. (2004). Marketing research: An applied orientation (4th ed.). Upper Saddle River, NJ: Pearson Education.

www.marketingcharts.com/interactive/home-internet-access-in-us-still-room-for-growth-8280/

Miller, T. W. (2001, September 24). Make the call: Online results are mixed bag. Marketing News, 30.

www.nationalgeographic.com/geosurvey/

Oldendick, R. W. (1994). The answering machine generation. Public Opinion Quarterly, 58, 264–273.

Rochford, L., & Venable, C. F. (1995, Spring). Surveying a targeted population segment: The effects of endorsement on mail questionnaire response rate. Journal of Marketing Theory and Practice, pp. 86–97.

Smith, K. T. (1989, September 11). Most research firms use mall intercepts. Marketing News, 23, 16.

Wiseman, F., Schafer, M., & Schafer, R. (1983). An experimental test of the effects of a monetary incentive on cooperation rates and data collection costs in central-location interviewing. Journal of Marketing Research, 20, 439–442.

Xu, M., Bates, B. J., & Schweitzer, J. C. (1993). The impact of messages on survey participation in answering machine households. Public Opinion Quarterly, 57, 232–237.