
Putting It Together: Hypothesis Testing in the Social Sciences

Back in Chapters 1 and 2, I told you that hypothesis testing is the goal of positivist empirical political analysis. Now we finally have enough pieces in place to understand how to do it.

We have hypotheses of interest in our research. If we are testing them quantitatively, they are (almost always) probabilistic, meaning directional, relative, or conditional hypotheses. (I’ll get to hypotheses of no effect in a minute; they’re a little quirky.) As we discussed back then, we must always specify a “than what” with probabilistic hypotheses. By default in quantitative analysis, we assume that the “than what” is 0, meaning no effect. This is called the null hypothesis.[1] What we are officially doing when we use quantitative analysis to test hypotheses is asking whether the test statistic we find – χ2, t, or b – is statistically distinguishable from 0. The sampling distributions of χ2, t, and b have already been worked out, and their properties let us make this determination using critical values for those test statistics. A value of the test statistic beyond our critical value indicates that the test statistic is statistically significant. That is all statistical significance means: that we are 95% (or whatever confidence level we’ve chosen) sure that the true value is not 0. A significant test statistic for a research (sometimes called alternative) hypothesis provides support for that hypothesis’s claim. We therefore say that we “accept the research/alternative hypothesis” and/or “reject the null hypothesis.”
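
To make the comparison concrete, here is a minimal sketch in Python of the logic described above, using a hypothetical t statistic and hypothetical degrees of freedom (the numbers are illustrative, not from any real analysis):

```python
from scipy import stats

t_stat = 2.40   # hypothetical t statistic from our analysis
df = 58         # hypothetical degrees of freedom

# Two-tailed test at the 95% confidence level (alpha = 0.05)
critical_value = stats.t.ppf(0.975, df)

# Probability of a value at least this extreme if the true value were 0
p_value = 2 * stats.t.sf(abs(t_stat), df)

print(f"critical value: {critical_value:.3f}")
print(f"t statistic:    {t_stat:.2f}")
print(f"p-value:        {p_value:.4f}")

if abs(t_stat) > critical_value:
    print("Statistically significant: reject the null hypothesis of no effect.")
else:
    print("Not statistically significant: we cannot rule out 0.")
```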

Understanding statistical significance is critically important to using quantitative tools. It tells us only that the true value of the relationship, as measured by our test statistic, is not 0. We can even, thanks to some details of test statistic calculation that we haven’t covered, be reasonably sure of the direction of the relationship that our test statistic identified. A finding of statistical significance says nothing about the substantive importance of the variable that the test statistic is attached to. Imagine a regression with a DV of the percent chance of voting (0–100) and Age (in years) as an independent variable. A significant coefficient of 0.002 on Age means that the difference in the predicted likelihood of voting between a 0-year-old (who isn’t even allowed to vote) and a 100-year-old (who is probably dead) is … 0.2 percentage points. Statistically significant, yes; substantively meaningful, no.
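
The arithmetic behind that example is nothing more than multiplying the coefficient by the difference in the independent variable; a quick back-of-the-envelope sketch using the same hypothetical numbers:

```python
# A statistically significant coefficient can still be substantively trivial.
coefficient = 0.002    # estimated change in percent chance of voting per year of age
age_difference = 100   # comparing a 0-year-old to a 100-year-old

predicted_difference = coefficient * age_difference
print(f"Predicted difference: {predicted_difference:.1f} percentage points")  # 0.2
```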

The other thing we need to know about using test statistics to determine statistical significance is that we cannot “prove” things with statistics. Remember that the sampling distributions of our test statistics are asymptotic: their tails approach the horizontal axis (a height of 0) but never quite reach it. That means that some probability always exists under the curve beyond the test statistic’s value, no matter how extreme your test statistic. “Proof” of a relationship using inferential statistics is simply mathematically impossible. Full stop.
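
If you want to see this for yourself, the following sketch computes the area remaining in the tail of a t distribution beyond several test statistic values (the degrees of freedom and the values themselves are hypothetical); the tail area shrinks fast, but it never reaches exactly 0:

```python
from scipy import stats

df = 30   # hypothetical degrees of freedom
for t_stat in (2, 5, 10):
    tail_prob = stats.t.sf(t_stat, df)   # area under the curve beyond t_stat
    print(f"t = {t_stat:>2}: probability beyond it = {tail_prob:.2e}")
# The tail area gets very small, but it is never exactly 0.
```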

Does that mean we should reject any findings produced with quantitative analysis? Definitely not! What you’ve learned in this course and from this book should allow you to make reasoned, informed judgments about whether to believe a piece of quantitative research. Is the sample representative of the population? Has the data collection process been documented? Is the form of analysis appropriate? Does the author acknowledge the limitations of their study? Is the research sensationalized, meaning that someone is trying to provoke emotional responses instead of informed, reasoned ones? Does the research claim to “prove” something with statistics?[2] All of these are questions you are now equipped to answer, or at least to ask, when presented with quantitative research. You may never use the skills from this course again to conduct research, but your instructor and I hope that you will continue to use them as an informed consumer.

More Resources

The Null Hypothesis Explained – Statistics By Jim


[1] Hypotheses of no effect are functionally the reverse: the null hypothesis, the opposite of what we believe, is that a relationship does exist. Our research/alternative hypothesis is that the true value is 0. A hypothesis of no relationship expects that we should find an insignificant test statistic, not a significant one – a test statistic indicating that we can’t rule out 0 as the answer.

[2] To me, this is a huge red flag – it suggests the person writing up the research doesn’t know what they’re talking about, and so I shouldn’t believe what they say about it without checking into it myself.

