
## Understanding Of How To Construct Survey Items


Source: https://content.uagc.edu/print/Newman.2681.16.1 (printed 6/28/22, 10:11 AM)

### 4.3 Sampling From the Population

At this point, the chapter should have conveyed an understanding of how to construct survey items. The next step is to find a group of people to fill out the survey. But where does a researcher find this group? And how many people are needed? On the one hand, researchers want as many people as possible to capture the full range of attitudes and experiences. On the other hand, they have to conserve time and other resources, which often means choosing a smaller sample of people. This section examines the strategies researchers can use to select samples for their studies.

Researchers refer to the entire collection of people who could possibly be relevant to a study as the population. For example, if we were interested in the effects of prison overcrowding, our population would consist of prisoners in the United States. If we wanted to study voting behavior in the next presidential election, the population would be U.S. residents eligible to vote. And if we wanted to know how well college students cope with the transition from high school, our population would include every college student enrolled in every college in the country.

These populations suggest an obvious practical complication. How can we get every college student—much less every prisoner—in the country to fill out our questionnaire? We cannot; instead, researchers will collect data from a sample, a subset of the population. Instead of trying to reach all prisoners, we might sample inmates from a handful of state prisons. Rather than attempt to survey all college students in the country, researchers often restrict their studies to a collection of students at one university.

The goal in choosing a sample is to make it as representative as possible of the larger population. That is, if researchers choose students at one university, they need to be reasonably similar to college students elsewhere in the country. If the phrase “reasonably similar” sounds vague, this is because the basis for evaluating a sample varies depending on the hypothesis and the key variables. For example, if we wanted to study the relationship between family income and stress levels, we would need to make sure that our sample mirrored the population in the distribution of income levels. Thus, a sample of students from a state university might be a better choice than students from, say, Harvard (which costs about $60,000 per year including room and board). On the other hand, if the research question deals with the pressures faced by students in selective private schools, then Harvard students could be a representative sample for the study.

Figure 4.1 shows a conceptual illustration of both a representative and nonrepresentative sample, drawn from a larger population. The population in this case consists of 144 individuals, split evenly between Xs and Os. Thus, we would want our sample to come as close as possible to capturing this 50/50 split. The sample of 20 individuals on the left is representative of the population because it is split evenly between Xs and Os. But the sample of 20 individuals on the right is nonrepresentative because it contains 75% Xs. Because the sample has far fewer Os than we would expect, it does not accurately represent the population. This failure of the sample to represent the population is also referred to as sampling bias.

Figure 4.1: Representative and nonrepresentative samples of a population
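The scenario in Figure 4.1 can be simulated directly. The sketch below (in Python, not part of the original text) draws a random sample of 20 from a hypothetical population of 144 split evenly between Xs and Os, so you can see how close a single random draw comes to the ideal 50/50 split:

```python
import random
from collections import Counter

random.seed(42)  # fixed seed so the demonstration is reproducible

# A population of 144 individuals, split evenly between "X" and "O"
population = ["X"] * 72 + ["O"] * 72

# Draw a simple random sample of 20 and count each type
sample = random.sample(population, 20)
counts = Counter(sample)
print(counts)

# A representative sample would be close to 10 X / 10 O; by chance
# alone, some draws will drift toward the biased 15 X / 5 O split
# shown on the right side of Figure 4.1.
```

Re-running this with different seeds shows that most random samples land near the 50/50 split, while occasional draws are noticeably lopsided, which is exactly the sampling-bias risk the figure illustrates.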


From where do these samples come? Broadly speaking, researchers have two categories of sampling strategies at their disposal: probability sampling and nonprobability sampling.


bowdenimages/iStock/Thinkstock

In a neighborhood with a majority of Caucasian residents, stratified random sampling is needed to capture the perspective of all ethnic groups in the community.

#### Probability Sampling

Researchers use probability sampling when each person in the population has a known chance of being in the sample. This is possible only in cases where researchers know the exact size of the population. For instance, the current population of the United States is 322.1 million people (http://www.census.gov/popclock/). If we were to select a U.S. resident at random, each resident would have a one in 322.1 million chance of being selected. Whenever researchers have this information, probability-sampling strategies are the most powerful approach because they greatly increase the odds of getting a representative sample. Within this broad category of probability sampling are three specific strategies: simple random sampling, stratified random sampling, and cluster sampling.

Simple random sampling, the most straightforward approach, involves randomly picking study participants from a list of everyone in the population. The term for this list is a sampling frame (e.g., imagine a list of every resident of the United States). To have a truly representative random sample, researchers must have a sampling frame; they must choose from it randomly; and they must have a 100% response rate from those selected. (As Chapter 2 discussed, if people drop out of a study, it can threaten the validity of the hypothesis test.)
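A minimal sketch of simple random sampling, assuming a hypothetical sampling frame of 10,000 residents (the names and sizes here are invented for illustration):

```python
import random

# Hypothetical sampling frame: a complete list of everyone in the population
sampling_frame = [f"resident_{i}" for i in range(10_000)]

# Simple random sampling: every member of the frame has an equal,
# known chance (100 / 10,000 = 1%) of being selected.
selected = random.sample(sampling_frame, k=100)

print(len(selected))       # 100 participants
print(len(set(selected)))  # sampling without replacement: no duplicates
```

Note that the code captures only the first two requirements above (a frame, and random selection from it); the third requirement, a 100% response rate, is a property of the study, not of the draw.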

Researchers use stratified random sampling, a variation of simple random sampling, when subgroups of the population might be left out of a purely random sampling process. Imagine a city with a population that is 80% Caucasian, 10% Hispanic, 5% African American, and 5% Asian. If we were to pick 100 residents at random, the chances are very good that our entire sample would consist of Caucasian residents and ignore the perspective of all ethnic minority residents. To prevent this problem, researchers use stratified random sampling—breaking the sampling frame into subgroups and then sampling a random number from each subgroup. In this example, we could divide the list of residents into four ethnic groups and then pick a random 25 from each of these groups. The end result would be a sample of 100 people that captured opinions from each ethnic group in the population. Notice that this approach results in a sample that does not exactly represent the underlying population—that is, Hispanics constitute 25% of the sample, rather than 10%.

One way to correct for this issue is to use a statistical technique known as “weighting” the data. Although the full details are beyond the scope of this book, weighting involves trying to correct for problems in representation by assigning each participant a weighting coefficient for analyses. In essence, people from groups that are underrepresented would have a weight greater than 1, while those from groups that are overrepresented would have a weight less than 1. For more information on weighting and its uses, see http://www.applied-survey-methods.com/weight.html.
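The city example above can be sketched in a few lines. This is an illustration only: the frame of 10,000 residents is hypothetical, and the weight for each group is computed as its population share divided by its sample share:

```python
import random

random.seed(1)

# Hypothetical city of 10,000 residents with the ethnic breakdown from the text
strata_sizes = {"Caucasian": 8000, "Hispanic": 1000,
                "African American": 500, "Asian": 500}
frame = {group: [f"{group}_{i}" for i in range(n)]
         for group, n in strata_sizes.items()}

# Stratified random sampling: draw 25 at random from EACH subgroup
sample = {group: random.sample(members, 25) for group, members in frame.items()}

# Weighting coefficient: population share / sample share for each group
total_pop = sum(strata_sizes.values())
sample_n = 25 * len(strata_sizes)
weights = {group: (strata_sizes[group] / total_pop) / (25 / sample_n)
           for group in strata_sizes}

print(weights)
# Hispanics are 10% of the population but 25% of the sample, so each
# Hispanic respondent gets a weight of 0.4 (overrepresented, weight < 1);
# each Caucasian respondent (80% of the population, 25% of the sample)
# gets a weight of 3.2 (underrepresented, weight > 1).
```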

Finally, researchers employ cluster sampling, another variation of random sampling, when they do not have access to a full sampling frame (i.e., a full list of everyone in the population). Imagine that we want to study how cancer patients in the United States cope with their illness. Because no list exists of every cancer patient in the country, we have to get a little creative with our sampling. The best way to think about cluster sampling is as “samples within samples.” Just as with stratified sampling, we divide the overall population into groups, but cluster sampling differs in that we are dividing into groups based on more than one level of analysis. In our cancer example, we could start by dividing the country into regions, then randomly selecting cities from within each region, and then randomly selecting hospitals from within each city, and finally randomly selecting cancer patients from each hospital. The end result would be a random sample of cancer patients from, say, Phoenix, Miami, Dallas, Cleveland, Albany, and Seattle; taken together, these patients would provide a fairly representative sample of cancer patients around the country.
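The multistage logic of the cancer example can be sketched as follows. All of the regions, cities, hospitals, and patient counts below are invented stand-ins; the point is that a usable list exists at each stage even though no national list of patients does:

```python
import random

random.seed(7)

# Hypothetical multistage frame: regions -> cities -> hospitals -> patients.
regions = {
    "West":    ["Phoenix", "Seattle", "Denver"],
    "South":   ["Miami", "Dallas", "Atlanta"],
    "Midwest": ["Cleveland", "Chicago", "Omaha"],
    "East":    ["Albany", "Boston", "Raleigh"],
}

sampled_patients = []
for region, cities in regions.items():
    city = random.choice(cities)                        # stage 1: one city per region
    hospitals = [f"{city} Hospital {h}" for h in "AB"]  # stage 2: hospitals in that city
    hospital = random.choice(hospitals)
    # stage 3: random patients within the chosen hospital
    patients = [f"{hospital} patient {p}" for p in range(200)]
    sampled_patients.extend(random.sample(patients, 10))

print(len(sampled_patients))  # 40 patients drawn across 4 regions
```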

#### Nonprobability Sampling

The other broad category of sampling strategies is known as nonprobability sampling. These strategies are used in the (remarkably common) case in which researchers do not know the odds of any given individual’s being in the sample. This uncertainty represents an obvious shortcoming—if we do not know the exact size of the population and do not have a list of everyone in it, we have no way to know that our sample is representative. Despite this limitation, researchers use nonprobability sampling on a regular basis. We will discuss two of the most common nonprobability strategies here.

In many cases, it is not possible to obtain a sampling frame. When researchers study rare or hard-to-reach populations or study potentially stigmatizing conditions, they often recruit by word of mouth. The term for this is snowball sampling—imagine a snowball rolling down a hill, picking up more snow (or participants) as it goes. If we wanted to study how often homeless people took advantage of social services, we would be hard pressed to find a sampling frame that listed the homeless population. Instead, we could recruit a small group of homeless people and ask each of them to pass the word along to others, and so on. If we wanted to study changes in people’s identities following sex-reassignment surgery, we would find it difficult to track down this population via public records. Instead, we could recruit one or two patients and ask for referrals to others. The resulting sample in both cases is unlikely to be representative, but researchers often have to compromise for the sake of obtaining access to a population. Snowball sampling is most often used in qualitative research, where the advantages of gaining a rich narrative from these individuals outweigh the loss of representativeness.

One of the most popular nonprobability strategies is known as convenience sampling, or simply including people who show up for the study. Any time a 24-hour news station announces the results of a viewer poll, those results are likely based on a convenience sample. CNN and Fox News do not randomly select from a list of their viewers; they post a question onscreen or online, and people who are motivated (or bored) enough to respond will do so. As a matter of fact, the vast majority of psychology research studies are based on convenience samples of undergraduate college students. Research in psychology departments often works like this: Experimenters advertise their studies on a website, and students enroll in these studies, either to earn extra cash or to fulfill a research requirement for a course. Students often pick a particular study based on whether it fits their busy schedules or whether the advertisement sounds interesting. These decisions are hardly random and, consequently, neither is the sample. The goal here is not to disparage all psychology research—that would be self-defeating—but to emphasize that all of the decisions researchers make have both pros and cons.

#### Choosing a Sampling Strategy


Although researchers always strive for a representative sample, no such thing as a perfectly representative one exists. Some degree of sampling error, defined as the degree to which the characteristics of the sample differ from the characteristics of the population, is always present. Instead of aiming for perfection, then, researchers aim for an estimate of how far from perfection their samples are. These estimates are known as the margin of error, or the degree to which the results from a particular sample are expected to deviate from the population as a whole.

One of the main advantages of a probability sample is that we are able to calculate these errors, as long as we know our sample size and desired level of confidence. In fact, most of us encounter margins of error every time we see the results of an opinion poll. For example, CNN may report that “Candidate A is leading the race with 60% of the vote, ± 3%.” This means Candidate A’s approval percentage in the sample is 60%, but based on statistical calculations, her real percentage is likely between 57% and 63%. The smaller the error (3% in this example), the more closely the results from the sample match the population. Naturally, researchers conducting these opinion polls want the error of estimation to be as small as possible. How persuaded would anyone be to learn that “Candidate A has a 10-point lead, plus or minus 20 points”? This margin of error ought to trigger our skepticism, because the real difference could be anywhere from 30 points to –10 points—that is, anything up to a 10-point lead for the other candidate.

Researchers’ most direct means of controlling the margin of error is by changing the sample size. Most survey research aims for a margin of error of less than five percentage points. Based on standard calculations, this requires a sample size of 400 people per group. That is, if we want to draw conclusions about the sample as a whole (e.g., “30% of registered voters said X”), then we would need at least 400 respondents to say this with some confidence. If we want to draw conclusions about subgroups (e.g., “30% of women compared to 50% of men”), then we would actually need at least 400 respondents of each gender to draw conclusions with confidence.

The magic number of 400 represents a compromise—a researcher is willing to accept 5% error for the sake of keeping time and costs down. It is worth noting, however, that some types of research have more stringent standards: For political polls to be reported by the media, they must have at least 1,000 respondents, which brings the margin of error down to three percentage points. In contrast, some areas of applied research may have more relaxed standards. In marketing research, for example, budget considerations sometimes lead to smaller samples, which means drawing conclusions at lower levels of confidence. For example, with a sample size of 100 people per group, researchers have to contend with an 8–10% margin of error—almost double the error, but at a fraction of the costs.
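These sample-size benchmarks follow from the standard worst-case formula for the margin of error of a proportion, z · √(p(1 − p)/n). A short sketch (the function name is ours, not the textbook's) reproduces the numbers above:

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Worst-case margin of error for a proportion at 95% confidence.

    Uses z * sqrt(p * (1 - p) / n); p = 0.5 maximizes the error,
    so this is the conservative estimate survey researchers report.
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 400, 1000):
    print(f"n = {n:4d}: ±{margin_of_error(n):.1%}")
# n =  100: ±9.8%
# n =  400: ±4.9%
# n = 1000: ±3.1%
```

The output matches the rules of thumb in the text: roughly 400 respondents for a 5-point margin, 1,000 for a 3-point margin, and nearly 10 points of error with only 100 respondents per group.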

If probability sampling is so powerful, why are nonprobability strategies so popular? One reason is that convenience samples are more practical; they are cheaper, easier, and almost always possible to conduct with relatively few resources because researchers can avoid the costs of large-scale sampling. A second reason is that a convenience sample is often a good-enough starting point for a new line of research. For example, if we wanted to study the predictors of relationship satisfaction, we could start by testing our hypotheses in a controlled setting using college student participants and then extend the research to the study of adult married couples. Finally, and relatedly, in many cases it is acceptable to have a nonrepresentative sample because researchers do not need to generalize results. If we want to study the prevalence of alcohol use in college students, it may be perfectly acceptable to use a convenience sample of college students. Even in this case, though, researchers would have to keep in mind that they are studying drinking behaviors among students who volunteered to complete a study on drinking behaviors.

In some cases, however, it is critical to use probability sampling, despite the extra effort required. Specifically, researchers use probability samples any time it is important to generalize and any time it is important to predict the behavior of a population. The best way of understanding these criteria is to think of political polls. In the lead-up to an election, each campaign is invested in knowing exactly what the voting public thinks of its candidate. In contrast to a CNN poll, which is based on a convenience sample of viewers, polls conducted by a campaign will be based on randomly selected households from a list of registered voters. The resulting sample is much more likely to be representative, much more likely to tell the campaign how the entire population views its candidate, and therefore much more likely to be useful.

