What is the usual strategy employed by researchers to increase statistical power when they discover that their research design lacks adequate power?
Explanation
This question asks about the most common and direct method researchers use to address a lack of statistical power in their study design: in the chapter, that method is to increase the sample size.
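To see concretely why adding participants is the usual remedy, here is a minimal Python sketch (an illustration of the idea, not the chapter's own calculation; it assumes the statsmodels package is available, whereas the chapter points readers to tools such as G*Power):

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power of an independent-samples t-test for a medium effect (d = .50)
# at alpha = .05, as the per-group sample size grows.
for n in (20, 50, 100):
    power = analysis.power(effect_size=0.50, nobs1=n, alpha=0.05)
    print(f"n = {n:>3} per group -> power = {power:.2f}")

# Per-group sample size needed to reach the conventional .80 power level.
n_needed = analysis.solve_power(effect_size=0.50, alpha=0.05, power=0.80)
print(f"n per group for .80 power: about {n_needed:.0f}")

Running this shows power climbing from roughly .33 at 20 per group to about .94 at 100, with roughly 64 per group needed to reach .80.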
Other questions
What is defined as a Type I error in null hypothesis testing?
What is the term for retaining the null hypothesis when it is actually false?
When the null hypothesis is true and the alpha level is set to .05, what is the probability of mistakenly rejecting the null hypothesis?
According to the chapter, what is the primary reason that Type II errors occur in practice?
What is the consequence of reducing the chance of a Type I error by setting the alpha level to .01 instead of .05?
What is the 'file drawer problem' as described in the chapter?
What is a likely consequence of the file drawer problem on the published research literature?
What is the research practice known as 'p-hacking'?
One proposed solution to the file drawer problem mentioned in the chapter is registered reports. What is the key idea behind this solution?
What is defined as the statistical power of a research design?
If a study has a statistical power of .59, what is the probability of committing a Type II error? (Worked through in the sketch after this list.)
What is the common guideline for an adequate level of statistical power in a research study?
According to the chapter, what are the two essential steps a researcher can take to increase the statistical power of a study?
What is a common misinterpretation of the p-value that the chapter warns against?
In a study by Oakes (1986) cited in the chapter, what percentage of professional researchers mistakenly believed that a p-value of .01 meant a 99 percent chance of replicating a significant result?
What is one of the main criticisms of the strict convention of using p < .05 as a rigid dividing line for significance?
According to some critics mentioned in the chapter, what is the main limitation of null hypothesis testing even when it is carried out correctly?
What is the APA Publication Manual's suggestion for what should accompany every null hypothesis test?
What is a confidence interval?
In the chapter's example, a sample of 20 students has a mean calorie estimate of 200 with a 95 percent confidence interval of 160 to 240. Based on this, is the sample mean significantly different from a hypothetical population mean of 250 at the .05 level? (Worked through in the sketch after this list.)
What is the defining characteristic of Bayesian statistics as a different approach to inferential statistics?
What was the editorial decision made in 2015 by the journal 'Basic and Applied Social Psychology' regarding null hypothesis testing, as mentioned in the chapter?
According to Table 13.6, what is the approximate sample size needed to achieve a statistical power of .80 for an independent-samples t-test with an expected weak relationship strength (d = .20)?
Based on the information in Table 13.6, what sample size is needed for a test of Pearson's r to achieve .80 power when a strong relationship (r = .50) is expected?
What sample size is required to achieve .80 power for a test of Pearson's r with a medium expected relationship strength (r = .30), according to Table 13.6?
The chapter discusses a study with 20 participants per condition where the expected difference was medium (d = .50). What was the statistical power of this design?
What is one way to increase the strength of a relationship in a study, thereby increasing statistical power?
A researcher concludes there is a relationship in the population, but in reality, there is not. What has occurred?
A researcher concludes there is no relationship in the population, but a relationship does, in fact, exist. What kind of error has been made?
According to the chapter, why is it important for researchers to replicate their studies?
How does G*Power, one of the online tools mentioned, assist researchers?
Why can the p-value not be used as a substitute for a measure of relationship strength?
What does Robert Abelson argue is an important purpose served by null hypothesis testing, when correctly understood and carried out?
The chapter states that the .05 level of alpha is a convention that keeps the rates of which two things at acceptable levels?
According to Table 13.6, for an independent-samples t-test, how large must the sample be to achieve .80 power for a medium effect size (d = .50)?
What is the critique against null hypothesis testing that suggests the null hypothesis is 'never literally true'?
An illustration in the chapter depicts a Type I error using a pregnancy test. How is this illustrated?
An illustration in the chapter depicts a Type II error using a pregnancy test. How is this illustrated?
The 'Journal of Articles in Support of the Null Hypothesis' is mentioned as a potential solution to what problem?
What does the chapter say is likely to happen to the reported strength of a relationship in published literature due to the file drawer problem?
If a researcher sets the alpha level to .10 instead of .05, what is the effect on the chances of Type I and Type II errors?
What distinguishes rejecting the null hypothesis from accepting the alternative hypothesis?
Why do researchers use the expression 'fail to reject the null hypothesis' rather than 'accept the null hypothesis'?
What is the key advantage of using a within-subjects design over a between-subjects design for increasing statistical power?
A Type I error is also known as a:
A Type II error is also known as a:
According to Table 13.6, a test for a weak relationship (r = .10) using Pearson's r requires what sample size to achieve .80 power?
Why are confidence intervals argued to be much easier to interpret than null hypothesis tests?
The chapter mentions that the 2015 decision by the editors of 'Basic and Applied Social Psychology' to ban p-values was not widely adopted by other journals. What did the editors emphasize as important instead?
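The two flagged questions above reduce to one-line calculations. Here is a minimal sketch in plain Python, using only the numbers given in the questions (not the chapter's own worked solutions):

# (1) Power and the Type II error rate are complements: beta = 1 - power.
power = 0.59
beta = 1 - power  # probability of a Type II error
print(f"power = {power} -> P(Type II error) = {beta:.2f}")  # prints 0.41

# (2) A 95% confidence interval and a two-tailed .05-level test agree:
# the sample mean differs significantly from a hypothetical population
# mean exactly when that mean falls outside the interval.
ci_low, ci_high = 160, 240  # 95% CI around the sample mean of 200
mu0 = 250                   # hypothetical population mean
significant = not (ci_low <= mu0 <= ci_high)
print(f"{mu0} outside [{ci_low}, {ci_high}]? {significant}")  # prints True

So a power of .59 implies a .41 chance of a Type II error, and because 250 lies outside the interval 160 to 240, the sample mean is significantly different from 250 at the .05 level.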