[section:f_eg F Distribution Examples]
Imagine that you want to compare the standard deviations of two
samples to determine if they differ in any significant way. In this
situation you use the F distribution and perform an F-test. This
situation commonly occurs when conducting a process change comparison:
"is a new process more consistent than the old one?".
In this example we'll be using the data for ceramic strength from
[@http://www.itl.nist.gov/div898/handbook/eda/section4/eda42a1.htm
http://www.itl.nist.gov/div898/handbook/eda/section4/eda42a1.htm].
The data for this case study were collected by Said Jahanmir of the
NIST Ceramics Division in 1996 in connection with a NIST/industry
ceramics consortium for strength optimization.
The example program is [@../../../example/f_test.cpp f_test.cpp];
its output has been deliberately made as similar as possible
to the DATAPLOT output in the corresponding
[@http://www.itl.nist.gov/div898/handbook/eda/section3/eda359.htm
NIST Engineering Statistics Handbook example].
We'll begin by defining the procedure to conduct the test:
void f_test(
double sd1, // Sample 1 std deviation
double sd2, // Sample 2 std deviation
double N1, // Sample 1 size
double N2, // Sample 2 size
double alpha) // Significance level
{
The procedure begins by printing out a summary of our input data:
using namespace std;
using namespace boost::math;
// Print header:
cout <<
"____________________________________\n"
"F test for equal standard deviations\n"
"____________________________________\n\n";
cout << setprecision(5);
cout << "Sample 1:\n";
cout << setw(55) << left << "Number of Observations" << "= " << N1 << "\n";
cout << setw(55) << left << "Sample Standard Deviation" << "= " << sd1 << "\n\n";
cout << "Sample 2:\n";
cout << setw(55) << left << "Number of Observations" << "= " << N2 << "\n";
cout << setw(55) << left << "Sample Standard Deviation" << "= " << sd2 << "\n\n";
The test statistic for an F-test is simply the ratio of the squares of
the two standard deviations:
F = s[sub 1][super 2] / s[sub 2][super 2]
where s[sub 1] is the standard deviation of the first sample and s[sub 2]
is the standard deviation of the second sample. Or in code:
double F = (sd1 / sd2);
F *= F;
cout << setw(55) << left << "Test Statistic" << "= " << F << "\n\n";
At this point a word of caution: the F distribution is asymmetric,
so we have to be careful how we compute the tests. The following table
summarises the options available:
[table
[[Hypothesis][Test]]
[[The null-hypothesis: there is no difference in standard deviations (two sided test)]
[Reject if F <= F[sub (1-alpha/2; N1-1, N2-1)] or F >= F[sub (alpha/2; N1-1, N2-1)] ]]
[[The alternative hypothesis: there is a difference in standard deviations (two sided test)]
[Reject if F[sub (1-alpha/2; N1-1, N2-1)] <= F <= F[sub (alpha/2; N1-1, N2-1)] ]]
[[The alternative hypothesis: Standard deviation of sample 1 is greater
than that of sample 2]
[Reject if F < F[sub (alpha; N1-1, N2-1)] ]]
[[The alternative hypothesis: Standard deviation of sample 1 is less
than that of sample 2]
[Reject if F > F[sub (1-alpha; N1-1, N2-1)] ]]
]
Where F[sub (1-alpha; N1-1, N2-1)] is the lower critical value of the F distribution
with degrees of freedom N1-1 and N2-1, and F[sub (alpha; N1-1, N2-1)] is the upper
critical value of the F distribution with degrees of freedom N1-1 and N2-1.
The upper and lower critical values can be computed using the quantile function:
F[sub (1-alpha; N1-1, N2-1)] = `quantile(fisher_f(N1-1, N2-1), alpha)`
F[sub (alpha; N1-1, N2-1)] = `quantile(complement(fisher_f(N1-1, N2-1), alpha))`
In our example program we need the F distribution object itself, constructed
with N1-1 and N2-1 degrees of freedom, along with both upper and lower critical
values for alpha and for alpha/2:
fisher_f dist(N1 - 1, N2 - 1); // Define the F distribution for this test
double ucv = quantile(complement(dist, alpha));
double ucv2 = quantile(complement(dist, alpha / 2));
double lcv = quantile(dist, alpha);
double lcv2 = quantile(dist, alpha / 2);
cout << setw(55) << left << "Upper Critical Value at alpha: " << "= "
<< setprecision(3) << scientific << ucv << "\n";
cout << setw(55) << left << "Upper Critical Value at alpha/2: " << "= "
<< setprecision(3) << scientific << ucv2 << "\n";
cout << setw(55) << left << "Lower Critical Value at alpha: " << "= "
<< setprecision(3) << scientific << lcv << "\n";
cout << setw(55) << left << "Lower Critical Value at alpha/2: " << "= "
<< setprecision(3) << scientific << lcv2 << "\n\n";
The final step is to perform the comparisons given above, and print
out whether the hypothesis is rejected or not:
cout << setw(55) << left <<
"Results for Alternative Hypothesis and alpha" << "= "
<< setprecision(4) << fixed << alpha << "\n\n";
cout << "Alternative Hypothesis Conclusion\n";
cout << "Standard deviations are unequal (two sided test) ";
if((ucv2 < F) || (lcv2 > F))
cout << "ACCEPTED\n";
else
cout << "REJECTED\n";
cout << "Standard deviation 1 is less than standard deviation 2 ";
if(lcv > F)
cout << "ACCEPTED\n";
else
cout << "REJECTED\n";
cout << "Standard deviation 1 is greater than standard deviation 2 ";
if(ucv < F)
cout << "ACCEPTED\n";
else
cout << "REJECTED\n";
cout << endl << endl;
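The procedure is then invoked with the summary statistics for the two samples
being compared. The actual driver in f_test.cpp may compute these statistics
from the raw data, but a minimal sketch of an equivalent call for the first
example (the values are the ones printed in the output below) might look
like this:
int main()
{
   // Hypothetical driver: batch 1 versus batch 2 of the ceramic strength data,
   // using the summary statistics shown in the output that follows.
   f_test(65.549, 61.854, 240, 240, 0.05);
   return 0;
}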
Using the ceramic strength data as an example we get the following
output:
[pre
'''____________________________________
F test for equal standard deviations
____________________________________
Sample 1:
Number of Observations = 240
Sample Standard Deviation = 65.549
Sample 2:
Number of Observations = 240
Sample Standard Deviation = 61.854
Test Statistic = 1.123
CDF of test statistic: = 8.148e-001
Upper Critical Value at alpha: = 1.238e+000
Upper Critical Value at alpha/2: = 1.289e+000
Lower Critical Value at alpha: = 8.080e-001
Lower Critical Value at alpha/2: = 7.756e-001
Results for Alternative Hypothesis and alpha = 0.0500
Alternative Hypothesis Conclusion
Standard deviations are unequal (two sided test) REJECTED
Standard deviation 1 is less than standard deviation 2 REJECTED
Standard deviation 1 is greater than standard deviation 2 REJECTED'''
]
In this case the test statistic 1.123 lies between the lower and upper
critical values at alpha/2 (0.7756 and 1.289), so we are unable to reject the
null-hypothesis, and must instead reject the alternative hypothesis.
By contrast, let's see what happens when we use some different
[@http://www.itl.nist.gov/div898/handbook/prc/section3/prc32.htm
sample data], once again from the NIST Engineering Statistics Handbook:
A new procedure to assemble a device is introduced and tested for
possible improvement in time of assembly. The question being addressed
is whether the standard deviation of the new assembly process (sample 2) is
better (i.e., smaller) than the standard deviation for the old assembly
process (sample 1).
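Feeding the corresponding summary statistics into the same procedure, for
example with a call along the lines of `f_test(4.9082, 2.5874, 11, 9, 0.05);`
(the values are the ones shown in the output), gives: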
[pre
'''____________________________________
F test for equal standard deviations
____________________________________
Sample 1:
Number of Observations = 11.00000
Sample Standard Deviation = 4.90820
Sample 2:
Number of Observations = 9.00000
Sample Standard Deviation = 2.58740
Test Statistic = 3.59847
CDF of test statistic: = 9.589e-001
Upper Critical Value at alpha: = 3.347e+000
Upper Critical Value at alpha/2: = 4.295e+000
Lower Critical Value at alpha: = 3.256e-001
Lower Critical Value at alpha/2: = 2.594e-001
Results for Alternative Hypothesis and alpha = 0.0500
Alternative Hypothesis Conclusion
Standard deviations are unequal (two sided test) REJECTED
Standard deviation 1 is less than standard deviation 2 REJECTED
Standard deviation 1 is greater than standard deviation 2 ACCEPTED'''
]
In this case we take our null hypothesis as "standard deviation 1 is
less than or equal to standard deviation 2", since this represents the "no change"
situation. So we want to compare the upper critical value at /alpha/
(a one sided test) with the test statistic, and since 3.35 < 3.6 the null
hypothesis must be rejected. We therefore conclude that there is a change
for the better in our standard deviation.
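As a quick cross check of that conclusion (a minimal sketch, not part of
f_test.cpp), the one sided critical value and the probability of observing a
test statistic at least this large can be reproduced directly from the
distribution object; the degrees of freedom and numeric values below are taken
from the output above:
#include <boost/math/distributions/fisher_f.hpp>
#include <iostream>

int main()
{
   using namespace boost::math;
   // Degrees of freedom are N1-1 = 10 and N2-1 = 8 for the assembly data:
   fisher_f dist(10, 8);
   // Upper critical value at alpha = 0.05 (one sided test), approximately 3.347:
   std::cout << quantile(complement(dist, 0.05)) << "\n";
   // Probability of a test statistic at least as large as 3.59847,
   // approximately 1 - 0.9589 = 0.041, which is smaller than alpha:
   std::cout << cdf(complement(dist, 3.59847)) << "\n";
   return 0;
}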
[endsect] [/section:f_eg F Distribution Examples]
[/
Copyright 2006 John Maddock and Paul A. Bristow.
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at
http://www.boost.org/LICENSE_1_0.txt).
]