Then click either a “one-tailed” or “two-tailed” test radio button. If you aren’t sure whether your test is one-tailed or two-tailed, see: ? Click “OK” and read the results.
This criticism only applies to two-tailed tests, where the null hypothesis is "Things are exactly the same" and the alternative is "Things are different." Presumably these critics think it would be okay to do a one-tailed test with a null hypothesis like "Foot length of male chickens is the same as, or less than, that of females," because the null hypothesis that male chickens have smaller feet than females could be true. So if you're worried about this issue, you could think of a two-tailed test, where the null hypothesis is that things are the same, as shorthand for doing two one-tailed tests. A significant rejection of the null hypothesis in a two-tailed test would then be the equivalent of rejecting one of the two one-tailed null hypotheses.
One of the main goals of statistical hypothesis testing is to estimate the P value, which is the probability of obtaining the observed results, or something more extreme, if the null hypothesis were true. If the observed results are unlikely under the null hypothesis, you reject the null hypothesis. Alternatives to this "frequentist" approach to statistics include Bayesian statistics and estimation of effect sizes and confidence intervals.
Linear regression and correlation assume that the data points are independent of each other, meaning that the value of one data point does not depend on the value of any other data point. The most common violation of this assumption in regression and correlation is in time series data, where some Y variable has been measured at different times. For example, biologists have counted the number of moose on Isle Royale, a large island in Lake Superior, every year. Moose live a long time, so the number of moose in one year is not independent of the number of moose in the previous year; it is highly dependent on it. If the number of moose in one year is high, the number in the next year will probably be pretty high, and if the number of moose is low one year, the number will probably be low the next year as well. This kind of non-independence, or "autocorrelation," can give you a "significant" regression or correlation much more often than 5% of the time, even when the null hypothesis of no relationship between time and Y is true. If both X and Y are time series—for example, you analyze the number of wolves and the number of moose on Isle Royale—you can also get a "significant" relationship between them much too often.
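This effect is easy to demonstrate by simulation. The sketch below (the parameters are illustrative assumptions, not values from the moose data) generates an autocorrelated AR(1) series with no true trend, regresses it on time, and counts how often the regression comes out "significant":

```python
# Sketch: why autocorrelation inflates false positives in regression.
# We simulate an AR(1) series with NO true trend and regress it on time;
# under independence the P < 0.05 rate should be ~5%, but with strong
# autocorrelation it is far higher. phi and n_years are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_years, phi, n_sims = 50, 0.9, 1000   # phi = year-to-year autocorrelation

years = np.arange(n_years)
false_positives = 0
for _ in range(n_sims):
    noise = rng.normal(size=n_years)
    y = np.empty(n_years)
    y[0] = noise[0]
    for t in range(1, n_years):
        y[t] = phi * y[t - 1] + noise[t]   # each year depends on the last
    result = stats.linregress(years, y)
    if result.pvalue < 0.05:
        false_positives += 1

print(f"'Significant' regressions: {false_positives / n_sims:.0%}")
```

With strong autocorrelation the reported rate is far above the nominal 5%, exactly the problem described above.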
Fortunately, numerous simulation studies have shown that regression and correlation are quite robust to deviations from normality; this means that even if one or both of the variables are non-normal, the P value will be less than 0.05 about 5% of the time if the null hypothesis is true (Edgell and Noon 1984, and references therein). So in general, you can use linear regression/correlation without worrying about non-normality.
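A simulation in the same spirit as the studies cited above (sample size and distribution are illustrative assumptions): correlate two independent, strongly skewed variables many times and check that the false-positive rate stays near the nominal 5%.

```python
# Sketch: robustness of correlation to non-normality. X and Y are
# independent and exponentially distributed (strongly skewed), so any
# "significant" correlation is a false positive; robustness means the
# P < 0.05 rate stays close to 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, n_sims = 30, 2000
hits = 0
for _ in range(n_sims):
    x = rng.exponential(size=n)   # non-normal X
    y = rng.exponential(size=n)   # non-normal, independent Y
    r, p = stats.pearsonr(x, y)
    if p < 0.05:
        hits += 1

print(f"False-positive rate: {hits / n_sims:.1%}")  # should be near 5%
```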
After finding that row, look across the table. Read the numbers this way: to get a one-tailed significance level of .05 (two-tailed .10) you need a correlation of at least .34. For a one-tailed significance level of .025 (two-tailed .05) you need a correlation of at least .40. And so on.
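The critical correlations in such a table come from the t distribution via r_crit = t_crit/√(t_crit² + df). A sketch, assuming (hypothetically) 23 degrees of freedom, which roughly reproduces the .34 and .40 figures quoted:

```python
# Sketch: compute the critical correlation for a given one-tailed alpha.
# The df value of 23 is an illustrative assumption, not stated in the text.
import math
from scipy import stats

def critical_r(df, alpha_one_tailed):
    """Smallest |r| significant at the given one-tailed level."""
    t_crit = stats.t.ppf(1 - alpha_one_tailed, df)
    return t_crit / math.sqrt(t_crit**2 + df)

print(round(critical_r(23, 0.05), 2))    # one-tailed .05  -> ~0.34
print(round(critical_r(23, 0.025), 2))   # one-tailed .025 -> ~0.40
```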
The test statistic is t_{s} = (x̄_{d} − μ_{0})/(s_{d}/√n), where x̄_{d} = the observed sample mean difference, μ_{0} = value specified in the null hypothesis, s_{d} = standard deviation of the differences in the sample measurements, and n = sample size. For instance, if we wanted to test for a difference in mean SAT Math and mean SAT Verbal scores, we would randomly sample subjects, record their SATM and SATV scores in two separate columns, then create a third column that contained the differences between these scores. Then the sample mean and sample standard deviation would be those calculated on this column of differences.
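A minimal sketch of this paired procedure; the SAT scores below are made-up numbers for illustration, not from any real sample:

```python
# Sketch: paired t-test on differences, as described above.
# Scores are illustrative. The hand computation is checked against
# scipy's built-in paired test.
import numpy as np
from scipy import stats

satm = np.array([640, 580, 700, 610, 650, 590, 620, 680])
satv = np.array([600, 590, 650, 580, 660, 540, 600, 640])

diffs = satm - satv                     # the "third column" of differences
d_bar = diffs.mean()                    # sample mean of the differences
s_d = diffs.std(ddof=1)                 # sample SD of the differences
n = len(diffs)
t = (d_bar - 0) / (s_d / np.sqrt(n))    # mu_0 = 0 under the null

t_scipy, p = stats.ttest_rel(satm, satv)   # same statistic, computed directly
print(t, t_scipy, p)
```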
In the second experiment, you are going to put human volunteers with high blood pressure on a strict low-salt diet and see how much their blood pressure goes down. Everyone will be confined to a hospital for a month and fed either a normal diet, or the same foods with half as much salt. For this experiment, you wouldn't be very interested in the P value, as based on prior research in animals and humans, you are already quite certain that reducing salt intake will lower blood pressure; you're pretty sure that the null hypothesis that "Salt intake has no effect on blood pressure" is false. Instead, you are very interested to know how much the blood pressure goes down. Cutting salt intake in half is a big deal, and if it only reduces blood pressure by 1 mm Hg, the tiny gain in life expectancy wouldn't be worth a lifetime of bland food and obsessive label-reading. If it reduces blood pressure by 20 mm Hg with a confidence interval of ±5 mm Hg, it might be worth it. So you should estimate the effect size (the difference in blood pressure between the diets) and the confidence interval on the difference.
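The kind of estimate described above can be sketched as follows; the blood-pressure readings and the pooled-df approximation are assumptions for illustration:

```python
# Sketch: estimate the effect size (difference in mean blood pressure
# between diets) with a 95% confidence interval, rather than a P value.
# The readings are made-up numbers; df uses a simple pooled approximation.
import numpy as np
from scipy import stats

normal_diet = np.array([152, 148, 160, 155, 149, 158, 151, 157])
low_salt    = np.array([138, 130, 141, 129, 135, 140, 128, 133])

diff = normal_diet.mean() - low_salt.mean()          # effect size, mm Hg
se = np.sqrt(normal_diet.var(ddof=1) / len(normal_diet)
             + low_salt.var(ddof=1) / len(low_salt))
df = len(normal_diet) + len(low_salt) - 2            # pooled-df approximation
half_width = stats.t.ppf(0.975, df) * se             # 95% CI half-width

print(f"effect: {diff:.1f} mm Hg, 95% CI ±{half_width:.1f} mm Hg")
```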
The test statistic for a linear regression is t_{s}=√[r^{2}(n−2)/(1−r^{2})]. It gets larger as the degrees of freedom (n−2) get larger or the r^{2} gets larger. Under the null hypothesis, the test statistic is t-distributed with n−2 degrees of freedom. When reporting the results of a linear regression, most people just give the r^{2} and degrees of freedom, not the t_{s} value. Anyone who really needs the t_{s} value can calculate it from the r^{2} and degrees of freedom.
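As a sketch of that back-calculation (toy data, illustrative numbers), t_{s} can be recovered from r² and the degrees of freedom and checked against the P value a regression routine reports:

```python
# Sketch: recover t_s from a reported r^2 and degrees of freedom,
# then verify it against scipy's linregress on toy data.
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 9.9, 12.3])   # illustrative values

res = stats.linregress(x, y)
r2 = res.rvalue ** 2
df = len(x) - 2

t_s = np.sqrt(r2 * df / (1 - r2))        # t_s from r^2 and d.f. alone
p_from_t = 2 * stats.t.sf(t_s, df)       # two-tailed P from that t_s

print(t_s, p_from_t, res.pvalue)         # p_from_t matches res.pvalue
```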
Now instead of testing 1000 plant extracts, imagine that you are testing just one. If you are testing it to see if it kills beetle larvae, you know (based on everything you know about plant and beetle biology) there's a pretty good chance it will work, so you can be pretty sure that a P value less than 0.05 is a true positive. But if you are testing that one plant extract to see if it grows hair, which you know is very unlikely (based on everything you know about plants and hair), a P value less than 0.05 is almost certainly a false positive. In other words, if you expect that the null hypothesis is probably true, a statistically significant result is probably a false positive. This is sad; the most exciting, amazing, unexpected results in your experiments are probably just your data trying to make you jump to ridiculous conclusions. You should require a much lower P value to reject a null hypothesis that you think is probably true.
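The argument above can be made quantitative with Bayes' rule. The prior probabilities and the 80% power below are illustrative assumptions, not values from the text:

```python
# Sketch: probability that a "significant" result (P < alpha) is a true
# positive, given how plausible the hypothesis was beforehand.
# Prior probabilities and power are illustrative assumptions.
def prob_true_positive(prior, alpha=0.05, power=0.8):
    """P(effect is real | P < alpha), by Bayes' rule."""
    true_pos = power * prior            # real effects that test significant
    false_pos = alpha * (1 - prior)     # null effects that test significant
    return true_pos / (true_pos + false_pos)

print(prob_true_positive(0.5))    # plausible hypothesis (kills beetle larvae)
print(prob_true_positive(0.01))   # implausible hypothesis (grows hair)
```

With a 50% prior, a significant result is very likely real; with a 1% prior, the same P < 0.05 is most likely a false positive, just as the text argues.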
Since it's possible to think of multiple explanations for an association between two variables, does that mean you should cynically sneer "Correlation does not imply causation!" and dismiss any correlation studies of naturally occurring variation? No. For one thing, observing a correlation between two variables suggests that there's something interesting going on, something you may want to investigate further. For example, studies have shown a correlation between eating more fresh fruits and vegetables and lower blood pressure. It's possible that the correlation is because people with more money, who can afford fresh fruits and vegetables, have less stressful lives than poor people, and it's the difference in stress that affects blood pressure; it's also possible that people who are concerned about their health eat more fruits and vegetables and exercise more, and it's the exercise that affects blood pressure. But the correlation suggests that eating fruits and vegetables may reduce blood pressure. You'd want to test this hypothesis further, by looking for the correlation in samples of people with similar socioeconomic status and levels of exercise; by statistically controlling for possible confounding variables; by doing animal studies; or by giving human volunteers controlled diets with different amounts of fruits and vegetables. If your initial correlation study hadn't found an association of blood pressure with fruits and vegetables, you wouldn't have a reason to do these further studies. Correlation may not imply causation, but it tells you that something interesting is going on.