By sizing experiment designs properly, test and evaluation (T&E) engineers can ensure they specify a sufficient number of runs to reveal any important effects on the system. For factorial designs laid out in an orthogonal matrix this can be done by calculating statistical power. However, when a defense system behaves in a nonlinear fashion, response surface method (RSM) experiment designs must be employed. The test matrices for RSM generally do not exhibit orthogonality, so the effect estimates become correlated, which degrades statistical power. This in turn inflates the number of test runs needed to detect important performance differences that the experiment may generate. A generally accepted alternative for sizing such designs makes use of fraction of design space (FDS) plots. This article details the FDS approach and explains why it best serves the purpose of RSM experiments done for T&E.
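To illustrate the idea behind an FDS plot, here is a minimal sketch (not from the article) that computes the scaled prediction variance of an assumed two-factor face-centered central composite design over randomly sampled points and summarizes how that variance is distributed across the design space; the design layout and sample size are assumptions for illustration only.

```python
# Minimal sketch of a fraction of design space (FDS) evaluation, assuming a
# two-factor face-centered central composite design and a quadratic model.
import numpy as np

def quad_model_matrix(x):
    """Expand factor settings into quadratic model terms: 1, A, B, AB, A^2, B^2."""
    a, b = x[:, 0], x[:, 1]
    return np.column_stack([np.ones_like(a), a, b, a * b, a**2, b**2])

# Face-centered CCD in coded units (-1 to +1) with two center points
design = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                   [-1, 0], [1, 0], [0, -1], [0, 1],
                   [0, 0], [0, 0]])
X = quad_model_matrix(design)
XtX_inv = np.linalg.inv(X.T @ X)

# Sample the design space and compute the scaled prediction variance N*x'(X'X)^-1 x
rng = np.random.default_rng(1)
points = rng.uniform(-1, 1, size=(5000, 2))
F = quad_model_matrix(points)
spv = len(design) * np.einsum('ij,jk,ik->i', F, XtX_inv, F)

# The FDS curve plots the fraction of the space at or below each variance level
spv_sorted = np.sort(spv)
print(f"Median scaled prediction variance: {np.median(spv_sorted):.2f}")
print(f"95% of the design space is at or below SPV = {spv_sorted[int(0.95 * len(spv_sorted))]:.2f}")
```

Plotting the sorted variances against the cumulative fraction of points gives the FDS curve itself; flatter, lower curves indicate more uniform precision of prediction.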
Due to operational or physical considerations, standard factorial and response surface method (RSM) designs of experiments (DOE) often prove unsuitable. In such cases a computer-generated, statistically optimal design fills the breach. This article explores vital mathematical properties for evaluating alternative designs, with a focus on what really matters to industrial experimenters. To assess “goodness of design,” such evaluations must consider the model choice, specific optimality criteria (in particular D and I), precision of estimation based on the fraction of design space (FDS), the number of runs needed to achieve the required precision, lack-of-fit testing, and so forth. With a focus on RSM, all these issues are considered at a practical level, keeping engineers and scientists in mind. This brings to the forefront such considerations as subject-matter knowledge from first principles and experience, factor choice, and the feasibility of the experiment design.
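As a rough illustration of how D- and I-criteria can be compared across candidate designs, the sketch below evaluates two assumed two-factor layouts under a quadratic model; the designs, helper names, and sampling choices are illustrative, not drawn from the article.

```python
# A rough sketch comparing D- and I-criteria for candidate designs under a
# quadratic model; design points and helper names are assumptions for illustration.
import numpy as np

def model_matrix(x):
    a, b = x[:, 0], x[:, 1]
    return np.column_stack([np.ones_like(a), a, b, a * b, a**2, b**2])

def d_criterion(design):
    """|X'X|^(1/p) / N, larger is better (more precise coefficient estimates)."""
    X = model_matrix(design)
    return np.linalg.det(X.T @ X) ** (1 / X.shape[1]) / len(design)

def i_criterion(design, n_sample=4000, seed=0):
    """Average prediction variance over the design space, smaller is better."""
    X = model_matrix(design)
    XtX_inv = np.linalg.inv(X.T @ X)
    pts = np.random.default_rng(seed).uniform(-1, 1, size=(n_sample, 2))
    F = model_matrix(pts)
    return np.mean(np.einsum('ij,jk,ik->i', F, XtX_inv, F))

# Face-centered CCD versus a 3x3 full factorial plus a replicate center point
ccd = np.array([[-1,-1],[1,-1],[-1,1],[1,1],[-1,0],[1,0],[0,-1],[0,1],[0,0],[0,0]])
fact3 = np.array([[a, b] for a in (-1, 0, 1) for b in (-1, 0, 1)] + [[0, 0]])
for name, d in [("CCD", ccd), ("3x3 factorial", fact3)]:
    print(f"{name}: D = {d_criterion(d):.3f}, I (avg pred var) = {i_criterion(d):.3f}")
```

The D-criterion rewards precise coefficient estimation, whereas the I-criterion rewards low average prediction variance, which is usually the more relevant goal for RSM work.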
This article provides insights on how many runs are required to make it very likely that a test will reveal any important effects. Due to the mathematical complexities of multifactor design of experiments (DOE) matrices, the calculations for adequate power and precision are not practical to do by hand, so the focus is kept at a high level, scoping out the forest rather than detailing all the trees. By example, readers will learn the price that must be paid for an adequately sized experiment and the penalty incurred by conveniently grouping hard-to-change factors.
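For a sense of how such a power calculation works in the simplest case, the sketch below estimates the power to detect a single two-level factor effect in an orthogonal design using the noncentral F distribution; the 2-sigma signal-to-noise ratio, model size, and run counts are assumed values, not figures from the article.

```python
# Illustrative power calculation for one two-level factor effect in an
# orthogonal factorial design; signal-to-noise ratio and run counts are assumed.
from scipy import stats

def factorial_power(n_runs, n_model_terms, delta_over_sigma=2.0, alpha=0.05):
    """Power to detect an effect of size delta (high-low difference) given noise sigma."""
    df_resid = n_runs - n_model_terms            # residual degrees of freedom
    ncp = n_runs * (delta_over_sigma / 2) ** 2   # noncentrality for the coded +/-1 coefficient
    f_crit = stats.f.ppf(1 - alpha, 1, df_resid)
    return 1 - stats.ncf.cdf(f_crit, 1, df_resid, ncp)

# How power grows with the number of runs for a 4-term model (3 factors + intercept)
for n in (8, 12, 16):
    print(f"{n} runs: power = {factorial_power(n, 4):.2f}")
```

Non-orthogonal layouts and split-plot grouping of hard-to-change factors make the calculation more involved than this, which is why software is normally used.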
Given the push for Quality by Design (QbD) by the US FDA and equivalent agencies worldwide, statistical methods are becoming increasingly vital for pharmaceutical manufacturers. Design of experiments (DOE) is a primary tool because “it provides a structured, organized method for determining the relationship between factors affecting a process and the response of that process.” Tolerance intervals (TI) verify that the design space will be robust for meeting the manufacturing specifications on every individual unit, not just on average.
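A tolerance interval of the kind mentioned above can be approximated as in the following sketch, which uses Howe's two-sided k-factor approximation for normal data; the sample size, mean, and standard deviation are fabricated purely to show the calculation.

```python
# A minimal sketch of a two-sided normal tolerance interval using Howe's
# approximation for the k factor; the sample statistics below are made up.
import numpy as np
from scipy import stats

def tolerance_k(n, coverage=0.99, confidence=0.95):
    """Approximate two-sided k factor: mean +/- k*s covers `coverage` of
    individual units with the stated confidence (Howe, 1969)."""
    nu = n - 1
    z = stats.norm.ppf((1 + coverage) / 2)
    chi2 = stats.chi2.ppf(1 - confidence, nu)   # lower-tail chi-square quantile
    return z * np.sqrt(nu * (1 + 1 / n) / chi2)

# Example: 30 assay results with an assumed mean of 100.2 and std dev of 1.5
n, mean, s = 30, 100.2, 1.5
k = tolerance_k(n)
print(f"99%/95% tolerance interval: {mean - k * s:.1f} to {mean + k * s:.1f}")
```

If this interval sits inside the manufacturing specifications, there is evidence that individual units, not just the batch average, will conform.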
Olive oil, an important commodity of the Mediterranean region and a main ingredient of its world-renowned diet (see sidebar), must meet stringent European guidelines to achieve the coveted status of "extra virgin." Oils made from a single cultivar (a particular cultivated variety of the olive tree) will at times fall into the lower "virgin" category due to seasonal variation. Then it becomes advantageous to blend in one or more superior oils. This makes a great case for becoming acquainted with the tools of mixture design for optimal formulation.
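As a small taste of mixture design, the following sketch generates a {3, 2} simplex-lattice plus the overall centroid for a hypothetical three-oil blend; the component names are placeholders, not the cultivars discussed in the article.

```python
# A small sketch generating a {3, 2} simplex-lattice mixture design with an
# overall centroid, as might be used to blend three olive oils (placeholder names).
from itertools import product

def simplex_lattice(q, m):
    """All q-component blends whose proportions are multiples of 1/m and sum to 1."""
    levels = range(m + 1)
    return [tuple(k / m for k in combo)
            for combo in product(levels, repeat=q) if sum(combo) == m]

design = simplex_lattice(3, 2) + [(1/3, 1/3, 1/3)]  # add the centroid blend
for a, b, c in design:
    print(f"Oil A: {a:.2f}  Oil B: {b:.2f}  Oil C: {c:.2f}")
```

Because mixture components must sum to one, these designs vary proportions over a simplex rather than varying factors independently.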
This article starts with the basics of RSM before introducing two enhancements that focus on robust operating conditions: modeling the process variance as a function of the input factors, and propagation of error (POE) transmitted from variation in the input factors. It discusses how to find the flats: high plateaus for maximum yield and broad valleys that minimize defects. Proceedings from the International SEMATECH Manufacturing Initiative (ISMI) Symposium on Manufacturing Effectiveness.
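The POE idea can be sketched with a first-order Taylor expansion of a fitted quadratic model, as below; the coefficients, input standard deviations, and residual error are invented to show the mechanics, not taken from the article.

```python
# A rough sketch of propagation of error (POE) for a fitted quadratic response,
# using a first-order Taylor expansion; all numbers are invented for illustration.
import numpy as np

# Assumed fitted model: y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
b0, b1, b2, b12, b11, b22 = 60.0, 5.0, -3.0, 2.0, -4.0, 1.5
sd_x1, sd_x2, sd_resid = 0.1, 0.2, 1.0   # assumed input and residual std devs

def poe(x1, x2):
    """Std dev transmitted to the response from variation in x1 and x2."""
    dy_dx1 = b1 + b12 * x2 + 2 * b11 * x1    # partial derivatives of the model
    dy_dx2 = b2 + b12 * x1 + 2 * b22 * x2
    var_y = (dy_dx1 * sd_x1) ** 2 + (dy_dx2 * sd_x2) ** 2 + sd_resid ** 2
    return np.sqrt(var_y)

# Flat regions of the surface transmit less variation: compare two settings
print(f"POE at (0.6, -0.5): {poe(0.6, -0.5):.2f}")   # near a plateau (small slope)
print(f"POE at (-1.0, 1.0): {poe(-1.0, 1.0):.2f}")   # on a steep flank (large slope)
```

Minimizing POE alongside the mean response steers the optimum toward those flat plateaus and broad valleys where input variation does the least harm.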
This presentation details and demonstrates how to plot effects from general factorials, for example a 3x4x4, on a half-normal plot. Being graphical, this method makes effect selection easier and more precise. Previously the half-normal plot of effects, developed by Cuthbert Daniel, was restricted to two-level factorials.
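A bare-bones version of a half-normal plot of effects might look like the following sketch; the effect estimates are made-up numbers standing in for contrasts from a general factorial.

```python
# A minimal sketch of a half-normal plot of effects with fabricated estimates.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

effects = {"A": 12.1, "B": -0.8, "C": 1.3, "AB": -9.4, "AC": 0.5, "BC": -1.1, "ABC": 0.9}
names = list(effects)
abs_eff = np.abs(list(effects.values()))
order = np.argsort(abs_eff)

# Half-normal quantiles paired with the ordered absolute effects
n = len(abs_eff)
p = (np.arange(1, n + 1) - 0.5) / n
quantiles = stats.halfnorm.ppf(p)

plt.scatter(quantiles, abs_eff[order])
for q, e, i in zip(quantiles, abs_eff[order], order):
    plt.annotate(names[i], (q, e))
plt.xlabel("Half-normal quantile")
plt.ylabel("|Effect|")
plt.title("Effects falling off the near-zero line are likely real")
plt.show()
```

Trivial effects fall along a line through the origin; the ones that break away from it (here A and AB) are the candidates worth keeping in the model.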
This article deals with thorny issues that confront every experimenter, i.e., how to handle individual results that do not appear to fit with the rest of the data: damaging outliers and/or a need for transformation. The trick is to maintain a reasonable balance between two types of errors: (1) deleting data that vary only due to common causes, thus introducing bias to the conclusions, and (2) not detecting true outliers that occur due to special causes. Such outliers can obscure real effects or lead to false conclusions. Furthermore, an opportunity may be lost to learn about preventable causes of failure or reproducible conditions leading to breakthrough improvements (making discoveries more or less by accident).
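One common screening tool for the outlier side of this balance is the externally studentized (deleted) residual, sketched below on fabricated straight-line data; the 95% t-based cutoff shown is a conventional rule of thumb, not a prescription from the article.

```python
# A brief sketch of screening for outliers with externally studentized
# (deleted) residuals from a least-squares fit; the data are fabricated.
import numpy as np
from scipy import stats

x = np.array([1., 2., 3., 4., 5., 6., 7., 8.])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.2, 20.0, 16.1])  # run 7 looks off

X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
n, p = X.shape
H = X @ np.linalg.inv(X.T @ X) @ X.T          # hat matrix
mse = resid @ resid / (n - p)

# Externally studentized residual: each point judged against a fit without it
s2_del = ((n - p) * mse - resid**2 / (1 - np.diag(H))) / (n - p - 1)
t_resid = resid / np.sqrt(s2_del * (1 - np.diag(H)))

cutoff = stats.t.ppf(0.975, n - p - 1)
for i, t in enumerate(t_resid):
    flag = " <-- investigate" if abs(t) > cutoff else ""
    print(f"run {i+1}: t = {t:+.2f}{flag}")
```

A flagged run is a prompt to investigate for a special cause, not a license to delete the point automatically.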
This article details the advantages of design of experiments (DOE) over the OFAT (changing only one factor at a time) approach to experimentation. By varying factors at two levels each, but simultaneously rather than one at a time, experimenters can uncover important interactions.
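The following toy example, with invented responses, shows how a two-level factorial exposes an interaction that one-factor-at-a-time runs would miss.

```python
# A tiny illustration of why a two-level factorial reveals an interaction
# that OFAT misses; the response values are invented for the example.
import numpy as np

# 2^2 factorial in coded units: columns are factor A, factor B
runs = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1]])
y = np.array([60., 72., 52., 83.])   # assumed responses

A, B = runs[:, 0], runs[:, 1]
effect_A = y[A == 1].mean() - y[A == -1].mean()
effect_B = y[B == 1].mean() - y[B == -1].mean()
effect_AB = y[A * B == 1].mean() - y[A * B == -1].mean()   # interaction contrast

print(f"Main effect of A: {effect_A:+.1f}")
print(f"Main effect of B: {effect_B:+.1f}")
print(f"AB interaction:   {effect_AB:+.1f}  (invisible to one-factor-at-a-time runs)")
```

Because every combination of levels is run, the same four points estimate both main effects and their interaction, something no sequence of OFAT moves can do with equal efficiency.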
This mini-paper, provided on pages 5-8 of the publication (manuscript available under the download link below), addresses concerns about mixture designs coming up short on power.