This thesis was confirmed by two studies. In the first, 33 male and 33 female college students filled out an online questionnaire each evening for 12 nights. They described up to three instances that day in which “you apologized to someone or did something to someone else that might have deserved an apology.” They also described up to three incidents in which “someone else apologized to you, or did something to you that might have deserved an apology.”
As expected, the women reported offering more apologies than the men. However, they also reported committing more offenses. After taking this different threshold of perceived offensive behavior into account, “we found that the gender difference in frequency of apologies disappeared,” Schumann and Ross write. “Female and male transgressors apologized for an equal proportion of their offenses (approximately 81 percent).”
Newly published research finds men are as willing as women to apologize. But they’re less likely to believe a particular incident warrants contrition.
By Tom Jacobs | //miller-mccune.com
Men, according to conventional wisdom, are stubbornly unwilling to apologize. Countless pop psychology books have referenced this reluctance, explaining that our egos are too fragile to admit we’re wrong, or we’re oblivious to important nuances of social interaction.
Sorry to disrupt that lovely feeling of superiority, ladies, but newly published research suggests such smug explanations miss the mark. Writing in the journal Psychological Science, University of Waterloo psychologists Karina Schumann and Michael Ross report that men are, indeed, less likely to say “I’m sorry.” But they’re also less likely to take offense and expect an apology from someone else.
Their conclusion is that “men apologize less frequently than women because they have a higher threshold for what constitutes offensive behavior.” Whether on the giving or receiving end, males are less likely to feel an unpleasant incident is serious enough to warrant a statement of remorse. Read more: “Real Men Do Apologize”
The research team, led by Dan Kahan of Yale Law School, studied “a broadly representative sample” of 1,500 Americans in 2009. Through a series of questions, their cultural beliefs were measured on what can be called a left-right scale (although the researchers do not use the terms “liberal” and “conservative” in their paper). Those strongly holding egalitarian and communitarian outlooks were on one end of the spectrum, while those with hierarchical and individualistic views were on the other.
Participants were then presented with a series of statements and asked whether in their view most experts concurred with them. Three of the statements represented the consensus of scientific opinion: “Global temperatures are increasing,” “Human activity is causing global warming,” and “Radioactive wastes from nuclear power can be safely disposed of in deep underground storage facilities.”
Finally, the participants were introduced to a fictional expert on one of those subjects, who either agreed or disagreed with their position. After reviewing the expert’s credentials and reading a bit of his or her writing, each participant rated how knowledgeable and trustworthy they found the expert.
New research finds we trust experts who agree with our own opinions, suggesting that subjective feelings override scientific information.
By Tom Jacobs
A clear consensus of opinion emerges within the scientific community on an important issue, such as climate change. But the public, and its elected leaders, remains unconvinced and unreceptive to well-founded warnings.
With this phenomenon growing frustratingly familiar, researchers can be forgiven if they begin to feel like Rodney Dangerfields in lab coats. From their perspective, they don’t get no respect.
Newly published research suggests that’s not entirely true: Americans do believe and trust researchers. But we focus our attention on those experts whose ideas conform with our preconceived notions. The others tend to get discounted or ignored.
“Scientific opinion fails to quiet societal disputes on such issues (as climate change) not because members of the public are unwilling to defer to experts, but because culturally diverse persons tend to form opposing perceptions of what experts believe,” a team of scholars writes in the Journal of Risk Research. “Individuals systematically overestimate the degree of scientific support for positions they are culturally predisposed to accept.” Read more: “Four out of Five Experts Agree — With Me!”
As we enter the election season, we ask: how can political polls like Gallup’s be so accurate when they survey only about 2,000 people? The answer can be traced back to an infamous event. In 1936, the Literary Digest announced that Republican candidate Alf Landon would win that year’s presidential election against Democrat Franklin Roosevelt in a landslide. The magazine’s claim was based on roughly 2.3 million responses out of the 10 million ballots it mailed.
Roosevelt won that election (and two more after it); the Literary Digest was out of business by 1938.
Explanations include that the magazine used car registration lists and phone books to construct the sample, biasing the results toward the better off; that fewer than 25 percent of those receiving ballots responded; and that nonresponse bias occurred — those who did not return their ballots were more likely to be Roosevelt supporters. Pollsters today also make a distinction between the general adult population, registered voters and likely voters when conducting election surveys, something the Literary Digest did not do. Larger sample sizes do not, on their own, guarantee accurate results.
When the population from which respondents are drawn is not clearly defined, or the sample does not reflect that population’s diversity, the results are unlikely to be good predictors of opinion or behavior. Statisticians have developed random probability sampling techniques to ensure that samples are representative, so that polling 2,000 people can yield findings accurate to within, say, plus or minus 2 or 3 percentage points.
Sure, once we are dealing with genuinely representative samples, doubling the sample size to 4,000 will tighten the estimate. But the margin of error shrinks only with the square root of the sample size, so doubling n cuts the error by a factor of about 1.4, not in half. Is that modest gain worth the added cost and time of surveying that many more people?
There will always be some margin of error, but just as Gallup found in its presidential polls, the results are pretty close to how the population actually voted, thanks to these random sampling techniques.
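The square-root relationship behind these figures can be sketched with the standard normal-approximation formula for a proportion’s margin of error (a textbook formula, not something the article spells out):

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Worst-case margin of error for a proportion from a simple random sample.

    Normal approximation: z * sqrt(p * (1 - p) / n), with z = 1.96 for 95%
    confidence. Using p = 0.5 gives the largest (most conservative) margin.
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (500, 1000, 2000, 4000):
    print(f"n = {n:5d}: +/- {margin_of_error(n) * 100:.1f} points")
# n = 2000 gives about +/- 2.2 points; quadrupling to n = 4000 only
# brings that down to about +/- 1.5 points (a factor of sqrt(2)).
```

This is why national polls settle on samples in the low thousands: past that point, each additional respondent buys less and less precision.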
When interpreting findings from polls and research studies, assess whether the data are based on at least a representative sample and preferably a random sample in which each person had an equal chance of being selected.
There are a lot of shoddy polls out there. Some are frank about their shortcomings and some aren’t. Here are some ideas for getting an accurate picture of what a poll can tell you.
By Peter M. Nardi
Sample size does not in itself ensure an accurate public opinion poll, but in concert with representative sampling it can reduce the margin of error.
“But mom, everyone is going. Why can’t I?”
The anxious parent typically responds with numerous reasons why going to the big party is not going to happen, especially since it’s at a friend’s house while the parents are out of town. Yet the critical-thinking parent might instead reply: “Everyone? Did you collect data to support your position? Let’s see your sampling methodology.”
OK, not all skeptical parents will pose such a geeky response, but I’m sure they know that not every teen is going to the bash. Making generalized claims based on limited samples of people is a major problem, and not only in parent-child relationships. Indeed, learning to evaluate the quality of public opinion polls, scientific research and proclamations by politicians and pundits involves understanding some basic principles of random sampling. It’s an especially important lesson in the U.S. today as Americans prepare for midterm elections. Read more: “Sample This: Making Sense of Surveys”