In 2016 the opinion polls' predictions did not come true: can they be trusted this time?

“Biden leads Trump in the election polls,” “Trump is closing the gap with Biden,” and similar headlines: can they be trusted, especially after the previous US presidential election? The BBC explains.

Photo: Shutterstock

On November 8, 2016, many of those following the US presidential vote kept refreshing The New York Times page with its live forecast of the result. Early in the count, before returns from the key states had come in, the forecast gave Hillary Clinton a better than 80% chance of victory; Trump's chances were put below 20%.

But the pollsters' predictions did not come true.

Clinton's lead in the forecast dwindled as the evening went on, and by eight o'clock that night Trump's probability of victory was estimated at 95%.

But does this mean that the predictions are wrong and cannot be trusted?

Wrong or not

The pollsters' central prediction was that Hillary Clinton would win by two to three percentage points, and in that they were not mistaken: Clinton did beat Trump in the popular vote by more than two points. She still lost the election, however, because Trump won a majority of the electoral votes.

When the noise subsided, pollsters set out to determine what had gone wrong. There were several hypotheses they tried to test.

The first was that the error stemmed from certain groups of people systematically refusing to answer polls; in particular, less educated voters rarely agree to take part.

The second theory was that many poll respondents were simply not honest: they might not have dared to admit to a stranger that they planned to vote for an “unpopular” candidate. This seemed plausible given the so-called Bradley effect: in 1982, Tom Bradley, a Black Democrat, lost the California governor's race to his Republican opponent even though he had led in every poll. The reason was simple: many respondents were embarrassed to admit that they did not want to vote for a Black candidate.

The third theory pointed to an error by the pollsters themselves, who may have misjudged which population groups were most likely to turn out. No one can know in advance who will vote and who will not, so pollsters rely on models of how the electorate will behave on election day, and even small discrepancies between those models and reality can change the results significantly.

Something went wrong

The hypothesis that Trump voters simply did not want to say honestly whom they would vote for was quickly rejected. It was easy to test by comparing the results of online polls with conventional ones: research shows that people who are embarrassed to state their real preferences are far more willing to admit them online. Politico and Morning Consult ran an experiment to test the theory and found no support for it, although they did identify a segment of well-educated, high-income voters who were more candid in online polls. That segment, however, was too small to meaningfully affect the results. Yale University researchers ran similar studies and came to the same conclusions.

But the other theories were, to one degree or another, confirmed. Pollsters found that Clinton's lead in the polls came down to three main factors. First, a significant share of voters who had remained undecided until the very end ultimately went for Trump, and they made that decision in the final days of the race. Pollsters still have no reliable way of predicting which way undecided voters will lean at the last moment.

Second, the turnout of Trump supporters was higher than anticipated.

And third, Trump's support in the Rust Belt (covering parts of the Midwest and the East Coast of the United States) turned out to be much stronger than the polls showed. This happened precisely because pollsters paid too little attention to respondents' level of education, and in 2016 that factor proved crucial to understanding the result.

Educated voters are more likely to take part in polls. In a typical US nationwide survey, about 45% of respondents hold at least a bachelor's degree, even though only about 28% of Americans over 18 have completed college. In 2016 it was voters with higher education who strongly backed Clinton, and they also willingly took part in polls, while the preferences of people with less education were under-represented in the final sample.

“We know that certain categories of people are more willing to take part in surveys: highly educated people, white Americans, older people. These groups are over-represented in the samples. Historically this did not greatly affect the results. But in 2016 specifically, people with only a high school education, or without even that, broke massively for Trump, which had never happened before,” explains Pew Research senior methodologist Andrew Mercer.
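To see how this kind of skew is corrected, here is a minimal sketch in Python of post-stratification weighting by education. All of the numbers are invented for illustration and are not taken from any real poll; the point is only to show how re-weighting an unrepresentative sample toward the population's actual education mix shifts the estimate.

```python
# A minimal sketch of post-stratification weighting by education.
# All numbers are invented for illustration; they are not from any real poll.

# Share of each education group in the raw sample vs. in the adult population.
sample_share = {"college": 0.45, "no_college": 0.55}
population_share = {"college": 0.28, "no_college": 0.72}

# Hypothetical support for Candidate A within each group of respondents.
support_a = {"college": 0.58, "no_college": 0.44}

# Unweighted estimate: average support across the sample as collected.
raw_estimate = sum(sample_share[g] * support_a[g] for g in sample_share)

# Weighted estimate: each group counts according to its population share,
# so the over-represented college group is scaled down.
weighted_estimate = sum(population_share[g] * support_a[g] for g in population_share)

print(f"Unweighted estimate: {raw_estimate:.1%}")      # about 50.3%
print(f"Weighted estimate:   {weighted_estimate:.1%}")  # about 47.9%
```

With these made-up inputs, the raw sample overstates Candidate A's support by roughly 2.5 percentage points simply because college graduates answer surveys more often, which is the kind of gap Mercer describes.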

Conducting surveys and interpreting results

There are two main ways to conduct a survey: by phone and online. The first has long been considered the gold standard among pollsters. But how do you know which people to call?

Each state maintains a register of all registered voters with their contact details. These registers are often used for polling, making it possible to build samples by various characteristics, such as age or education.

Over time, polls moved online, and that brings its own problems: there is no single register of email addresses. Most often, companies therefore start with regular mail, sending people paper letters inviting them to take part in an online survey. The Associated Press and Pew Research, for example, use this method.

Online surveys also recruit respondents another way: a person happens to click on an advertising or other link and decides to take a survey. After leaving their details, they enter a panel database and are then periodically invited to take part in further surveys.

Such methods have proven workable. But it is crucial that polling companies follow strict rules about how to interpret the results they obtain; otherwise the polls are not representative.

The second important question is how many people need to be interviewed to get an accurate result. Pew Research senior methodologist Andrew Mercer notes that polling a huge number of potential voters is by no means a guarantee of success: “The sample can be very large but not at all representative, and the results will be biased.”
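Mercer's point, that sheer size does not cure bias, can be illustrated with a toy simulation in Python. The parameters below are invented (they echo the hypothetical education split used above, not any real survey design): a huge sample drawn from a skewed pool stays off target, while a much smaller sample drawn in proportion to the population lands close to the true value.

```python
# Toy simulation: a large but biased sample still misses the true value.
# All parameters are invented for illustration.
import random

random.seed(0)

TRUE_SUPPORT = {"college": 0.58, "no_college": 0.44}    # hypothetical true support by group
POPULATION_MIX = {"college": 0.28, "no_college": 0.72}  # group shares in the population


def poll(n, mix):
    """Interview n respondents drawn with the given education mix; return estimated support."""
    yes = 0
    for _ in range(n):
        group = "college" if random.random() < mix["college"] else "no_college"
        yes += random.random() < TRUE_SUPPORT[group]
    return yes / n


true_value = sum(POPULATION_MIX[g] * TRUE_SUPPORT[g] for g in POPULATION_MIX)  # about 47.9%

# 100,000 interviews drawn from a pool that over-represents college graduates...
biased = poll(100_000, {"college": 0.45, "no_college": 0.55})
# ...versus 1,000 interviews drawn in proportion to the population.
representative = poll(1_000, POPULATION_MIX)

print(f"True support:              {true_value:.1%}")
print(f"Huge but biased sample:    {biased:.1%}")   # stays a couple of points too high
print(f"Small, representative one: {representative:.1%}")
```

In this sketch the biased estimate does not improve no matter how many interviews are added; only fixing the sample's composition, or weighting it as shown earlier, removes the gap.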

In the months leading up to the election, headlines like “Biden Widens Lead Over Trump” and “Poll Shows Trump Closing the Gap With Biden” can mislead readers. To judge how much such a poll can be trusted, experts advise looking first at which company conducted it.

“If you see a particular survey, pay attention to how the question was worded and how the sample was built. It is important that the methodology is spelled out. As a rule, professional companies always want to show how they worked,” concludes Mercer.

What will happen now?

In polls released in early June, Joe Biden led the incumbent by 12 percentage points. But over three months Donald Trump has managed to narrow the gap.

Biden now leads Trump by just 7 points, according to a recent joint USA Today/Suffolk University poll: 50% of respondents are ready to vote for Biden and 43% for Trump, while another 7% have not yet decided. The experience of 2016 shows that this last group can have a major effect on the outcome.

Particular attention is now focused on the so-called swing states, where historically neither Democrats nor Republicans hold a clear majority. In Arizona, a swing state with 11 electoral votes, Joe Biden currently leads Donald Trump by more than three percentage points.

One local Democratic leader, Larry Bodine, no longer believes the numbers, however: “Polls are an illusion. After 2016 I decided not to rely on survey results. We need to look at what my fellow Democrats are facing on the ground. I am in Democratic circles all the time, and nobody talks about polls now. I believe favorable polls only give a false sense of security.”

Distrust of this year's polls is understandable. But there is reason to believe the mistakes of 2016 have been taken into account: pollsters point out that samples are now being adjusted more carefully.

For example, samples now include the appropriate share of people without a college degree. Patrick Murray, director of the Monmouth University Polling Institute, notes that if pollsters had adjusted the sample for this factor during the previous election, Clinton's margin would have been only two percentage points, not the four the university reported.

Polling companies are also trying this year to do a better job of predicting the behavior of voters who will make up their minds at the last moment.

For example, during the last election, Franklin & Marshall College in Pennsylvania stopped polling potential voters 10 days before the vote. That proved a fatal mistake, since it was in those final days that many undecided Americans swung to Trump. This year the polling period will be extended.

Program director Terry Madonna notes that this time the share of undecided voters is much lower than in 2016: “There are relatively few people who found it hard to answer. This year people are genuinely engaged in the presidential race. Of course it matters what they decide, but in this particular campaign it is far more important to win the support of your core electorate.”
