We just wrapped up our 2020 Online DOE Summit. What a successful summit. A group of influential speakers kicked off the discussion of design of experiments (DOE). Hundreds of attendees logged in to each talk and soaked up that knowledge. Thank you to everyone who participated.
We created the summit because of the COVID-19 pandemic. Originally scheduled for the middle of June, our 8th European DOE Meeting was canceled in January. So, after thinking about it for a bit, we decided to move the meeting online. This would be the only way to have a meeting for a while. Plus, the cost to the audience would be zero.
All the speakers lined up for the European meeting agreed to make the move to a virtual event. A schedule was set up. We gave it a new name. We sent emails to everyone with dates. The 2020 Online DOE Summit was born.
Our first group of presentations consisted of a kickoff talk, three keynotes, and a tutorial. Many of these talks revolved around current directions in DOE. Even though DOE has been around for decades, it is an evolving practice with new techniques and advice coming up all the time. Each speaker discussed broad concepts in design of experiments.
[Click on the title of the talk for a video recording of the presentation.]
Kickoff: Know the SCOR for Multifactor Strategy of Experimentation
Mark Anderson: Principal of Stat-Ease, Inc.
Talk Topic: Laying out a strategy for multifactor design of experiments
Keynote: My Lifelong Journey with DOE
Pat Whitcomb: Founding Principal of Stat-Ease, Inc.
Talk Topic: Pat explores his lifetime of work in design of experiments, with a view to the future
Keynote: Some Experiences in Modern Experimental Design
Marcus Perry: Editor in Chief, Quality Engineering; Professor of Statistics, The University of Alabama
Talk Topic: Handling non-standard situations in today’s DOE environment
Keynote: Innovative Mixture-Process Models
Geoff Vining: Professor of Statistics, Virginia Tech
Talk Topic: An overview of KCV designs that limit runs in experiments involving both mixture components and process variables
Tutorial: Strategies for Sequential Experimentation
Martin Bezener: Director of Research & Development, Stat-Ease, Inc.
Talk Topic: This presentation explores how it may be more efficient to divide an experiment into smaller pieces. Learn how to use resources in a smarter, more adaptive manner.
In the second week of the summit, we had a separate set of talks. Each one detailed real-world experiments. Presenters discussed the actual experiments they had worked on, and how they used DOE in each case.
Simultaneous and Quick Determination of Two Ingredients Concentrations in a Solution Using a UV-Vis Spectroscopy Chemometric Model
Samd Guizani: Process Scientist, Ferring International Center
Use of DOE for 3D Printer Ink Formulation Development
Uri Zadok: Senior Research Chemist, Stratasys
Using Experimental Design to Optimize the Surfactant Package Properties of a Metalworking Cleaner
Mathijs Uljé: Development Chemist, Quaker Houghton
Optimizing Multi-Step Processes with DoE – A Cryopreservation Protocol for Plant Cells as a Case
Johannes Buyel: Head of Department of Bioprocess Engineering, Aachen University
In all, this was a great summit. The presenters were spot on with the current state of DOE, whether in modern concepts or real-life experiments. The audience took away many useful ideas and practices. It was a classic case of making lemonade from lemons.
Thanks all!
[Disclaimer: I’m not a statistician. Nor do I want you to think that I am. I am a marketing guy (with a few years of biochemistry lab experience) learning the basics of statistics, design of experiments (DOE) in particular. This series of blog posts is meant to be a light-hearted chronicle of my travels in the land of DOE, not a statistics textbook. So please, take it as it is meant to be taken. Thanks!]
When I wrote my introduction to factorial design (Greg’s DOE Adventure - Factorial Design, Part 1), there were a couple of points that I left out. I’ll amend that post here to talk about making sure your experiment is planned out yet random.
Wait. What?
You’ll see. Let me explain.
Getting organized
During the initial phase of an experiment, you should make sure that it is well planned out. First, think about the factors that affect the outcome of your experiment. You want to create a list that’s as all-encompassing as possible. Put anything that may change the outcome on your list. Then pare it down to the factors you expect to be the biggest contributors.
Once you have done that, you can set the levels at which to run each factor. You want the low and high levels to be as far apart as possible. Not so low that you won’t see an effect (if your experiment is cooking something, don’t set the temperature so low that nothing happens). Not so high that it’s dangerous (as in cooking, you don’t want to burn your product).
Finally, you want to make sure your experiment is balanced when it comes to the factors in your experiment. Taking the cooking example above a little further, suppose you have three factors you are testing: time, temperature, and ingredient quality. Let’s also say that you are testing at two different levels: low and high (symbolized by minus and plus signs, respectively). We can write this out in a table:

Run   Time   Temperature   Ingredient Quality
 1     -         -               -
 2     +         -               -
 3     -         +               -
 4     +         +               -
 5     -         -               +
 6     +         -               +
 7     -         +               +
 8     +         +               +
This table contains all the possible combinations of the three factors. It’s called an ‘orthogonal array’ because it’s balanced: each column has the same number of pluses and minuses (4 in this case). This balance keeps the factor columns uncorrelated, so each factor’s effect can be estimated independently of the others.
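If you like to see this in code, here is a minimal Python sketch (my own illustration, not part of the original post; the factor names are just placeholders) that builds the same eight-run table and checks the balance described above.

```python
# A minimal sketch: build a 2^3 full factorial design and confirm
# that every factor column is balanced (equal numbers of + and -).
from itertools import product

factors = ["time", "temperature", "ingredient_quality"]  # placeholder names

# All 8 combinations of low (-1) and high (+1) for the three factors
design = list(product([-1, +1], repeat=len(factors)))

for run, settings in enumerate(design, start=1):
    print(run, dict(zip(factors, settings)))

# Balance check: each column sums to zero when it has 4 pluses and 4 minuses
for i, name in enumerate(factors):
    print(name, "column sum =", sum(row[i] for row in design))
```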
With these steps, you have ensured that your experiment is well planned out and balanced when looking at your factors.
Always randomize
At the start of this post, I said that an experiment should be planned out, yet random. Well, we have the planned-out part; now let’s get into the random part.
In any experiment, influence from external sources (variables you are not studying) should be kept to a minimum. One way to do this is to randomize your runs.
As an example, let’s go back to the cooking table above. Let’s say that it represents the order in which the experiment was run. So, all the low temperature runs were made together and then all the high ones together. This makes sense, right? Perform all the runs at one temperature before adjusting up to the next setting.
The problem is, what if there is an issue with your oven that causes the temperature to fluctuate more early in the experiment and less later on? This time-related issue introduces variation (bias) into your results that you didn’t know about.
To reduce the influence of this variable, randomize your run order. It may take more time adjusting your oven for every run, but it keeps that unwanted variation from biasing your results.
Temperature is a popular example to illustrate randomization, but the same can be said of any factor that may have time-related problems. It could be warm-up time on a machine or the physical tiring of an operator. Randomization guards against this kind of bias as much as possible when running an experiment.
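Here is an equally minimal sketch (again my own illustration, not from the original post) showing that randomizing the run order is one extra step once you have the design table:

```python
# A minimal sketch: shuffle the standard run order so that any time-related
# drift (oven fluctuation, machine warm-up, operator fatigue) is spread
# across all factor settings instead of piling up on one of them.
import random
from itertools import product

design = list(product([-1, +1], repeat=3))  # the 2^3 table from above

random.seed(42)             # fixed seed only so this example is reproducible
randomized = design[:]      # copy the standard-order table
random.shuffle(randomized)  # randomize the order in which the runs are performed

for run_number, settings in enumerate(randomized, start=1):
    print("run", run_number, ":", settings)
```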
Conclusions
Hopefully, you see now why I said to keep your experiments planned but random. It sounds like an oxymoron, but it’s not. Not in the way I’m talking about it here!
Stat-Ease is here for you during these trying times. We can help you with your design and analysis of experiments, whether at home or in the lab. Please reach out if you have a question: sales@statease.com
A summary of information that may be important to you
Access to Design-Expert® software while working at home:
Access to FREE educational materials:
2020 European Conference: Our conference (www.statease.com/events/doe-user-meetings/8th-european-doe-meeting/) is being re-imagined into an online opportunity that will be accessible to our global audience!
To receive information by email, go to www.statease.com/publications/signup/ and sign up for our email list.
If you have other needs while transitioning to a new work setup, or an Academic online learning environment, please contact sales@statease.com
Design-Expert® software, v12 offers formulators a simplified modeling option crafted to maximize essential mixture-process interaction information, while minimizing experimental costs. This new tool is nicknamed a “KCV model” after the initials of the developers – Scott Kowalski, John Cornell, and Geoff Vining. Below, Geoff reminisces on the development of these models.
To help learn this innovative methodology, first view a recorded webinar on the subject.
Next, sign up for the workshop "Mixture Design for Optimal Formulations".
The origin of the KCV designs goes back to a mixtures short-course that I taught for Doug Montgomery at an adhesives company in Ohio. One of the topics was mixture-process variables experiments, and Doug's notes for the course contained an example using ratios of mixture proportions with the process variables. Looking at the resulting designs, I recognized that ratios did not cover the mixture design space well. Scott Kowalski was beginning his dissertation at the time. Suddenly, he had a new chapter (actually two new chapters)!
The basic idea underlying the KCV designs is to start with a true second-order model in both the mixture and process variables and then to apply the mixture constraint. The mixture constraint is subtle and can produce several models, each with a different number of terms. A fundamental assumption underlying the KCV designs is that the mixture by process variable interactions are of serious interest, especially in production. Typically, corporate R&D develops the basic formulation, often under extremely pristine conditions. Too often, R&D makes the pronouncement that "Thou shall not play with the formula." However, there are situations where production is much smoother if we can take advantage of a mixture component by process variable interaction that improves yields or minimizes a major problem. Of course, that change requires changing the formula.
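To make the "apply the mixture constraint" step concrete, here is the mixture-only piece of that reduction (my own sketch of the standard Scheffé-type argument, not taken from the KCV papers, which also carry the process-variable and mixture-by-process terms). Substituting the constraint absorbs the intercept and the pure quadratic terms into the linear and cross-product coefficients:

```latex
\[
\begin{aligned}
E(y) &= \beta_0 + \sum_{i=1}^{q}\beta_i x_i
       + \sum_{i=1}^{q}\beta_{ii} x_i^2
       + \sum_{i<j}\beta_{ij} x_i x_j ,
       \qquad \text{subject to } \sum_{i=1}^{q} x_i = 1 . \\
&\text{Using } \beta_0 = \beta_0 \sum_{i=1}^{q} x_i
 \ \text{ and } \
 x_i^2 = x_i\Bigl(1 - \sum_{j \ne i} x_j\Bigr) = x_i - \sum_{j \ne i} x_i x_j , \\
E(y) &= \sum_{i=1}^{q}\beta_i^{*} x_i + \sum_{i<j}\beta_{ij}^{*} x_i x_j
       \qquad \text{(intercept and pure quadratics absorbed).}
\end{aligned}
\]
```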
Cornell (2002) is the definitive text for all things dealing with mixture experiments. It covers every possible model for every situation. However, John in his research always tended to treat the process variables as a nuisance. In fact, John's intuitions on Taguchi's combined array go back to the famous fish patty experiment in his book. The fish patty experiment looked at combining three different types of fish and involved three different processing conditions. John's intuition was how to create the best formulation robust to the processing conditions, recognizing that the use of these fish patties was in a major fast-food chain. The processing conditions in actual practice typically were in the hands of teenagers who may or may not follow the protocol precisely.
John's basic instincts followed the corporate R&D-production divide. He rarely, if ever, truly worried about the mixture by process variable interactions. In addition, his first instinct always was to cross a full mixture component experiment with a full factorial experiment in the process variables. If he needed to fractionate a mixture-process experiment, he always fractionated the process variable experiment, because it primarily provided the "noise".
The basic KCV approach reversed the focus. Why can we not fractionate the mixture experiment in such a way that if a process variable is not significant, the resulting design projects down to a standard full mixture experiment? In the process, we also can see the impact of possible mixture by process variable interactions.
I still vividly remember when Scott presented the basic idea in his very first dissertation committee meeting. Of course, John Cornell was on Scott's committee. In Scott's first committee meeting, he outlined the full second-order model in both the mixture and process variables and then proceeded to apply the mixture constraint in such a way as to preserve the mixture by process variable interactions. John, who was not the biggest fan of optimal designs when a standard mixture experiment would work well, immediately jumped up and ran to the board where Scott was presenting the basic idea. John was afraid that we were proposing to use this model and apply our favorite D-optimal algorithm, which may or may not look like a standard mixture design. John and I were very good friends. I simply told him to sit down and wait a few minutes. He reluctantly did. Five minutes later, Scott presented our design strategy for the basic KCV designs, illustrating the projection properties where if a process variable was unimportant the mixture design collapsed to a standard full mixture experiment. John saw that this approach addressed his basic concerns about the blind use of an optimal design algorithm. He immediately became a convert, hence the C in KCV. He saw that we were basically crossing a good design in the process variables, which itself could be a fraction, with a clever fraction of the mixture component experiment.
John's preference to cross a mixture experiment with a process variable design meant that it was very easy to extend these designs to split-plot structures. As a result, we had two very natural chapters for Scott's dissertation. The first paper (Kowalski, Cornell, and Vining 2000) appeared in Communications in Statistics in a special issue guest edited by Norman Draper. The second paper (Kowalski, Cornell, and Vining 2002) appeared in Technometrics.
There are several benefits to the KCV design strategy. First, these designs have very nice projection properties. Of course, they were constructed specifically to achieve this goal. Second, they can significantly reduce the overall design size while still preserving the ability to estimate highly informative models. Third, unlike the approach that I taught in Ohio, the KCV designs cover the mixture experimental design space much better while still providing the equivalent information. The underlying models for both approaches are equivalent.
It has been very gratifying to see Design-Expert incorporate the KCV designs. We hope that Design-Expert users find them valuable.
References
Cornell, J.A. (2002). Experiments with Mixtures: Designs, Models, and the Analysis of Mixture Data, 3rd ed. New York: John Wiley and Sons.
Kowalski, S.M., Cornell, J.A., and Vining, G.G. (2000). “A New Model and Class of Designs for Mixture Experiments with Process Variables,” Communications in Statistics – Theory and Methods, 29, pp. 2255-2280.
Kowalski, S.M., Cornell, J.A., and Vining, G.G. (2002). “Split-Plot Designs and Estimation Methods for Mixture Experiments with Process Variables,” Technometrics, 44, pp. 72-79.
This blog post is from James Cawse, Consultant and Principal at Cawse and Effect, LLC. Jim uses his unique blend of chemical knowledge, statistical skills, industrial process experience, and quality commitment to find solutions for his clients' difficult experimental and process problems. He received his Ph.D. in Organic Chemistry from Stanford University. On top of all that, he's a great guy! Visit his website (link above) to find out more about Jim, his background, and his company.
The basic rationale for using a statistically based DOE in any science is straightforward. The DOE method provides:
DOE works so well in most scientific disciplines because Mother Nature is kind. In general:
Y = B0 + B1x1 + B2x2 + B12x1x2 + B11x1² + …
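For readers who like to see that model in code, here is a minimal sketch (simulated settings and responses, my own illustration rather than anything from Jim's post) of fitting the polynomial by ordinary least squares:

```python
# A minimal sketch: fit Y = B0 + B1*x1 + B2*x2 + B12*x1*x2 + B11*x1^2
# to a small set of runs by ordinary least squares.
import numpy as np

# Hypothetical coded factor settings (four corner runs plus two center points)
x1 = np.array([-1.0, 1.0, -1.0, 1.0, 0.0, 0.0])
x2 = np.array([-1.0, -1.0, 1.0, 1.0, 0.0, 0.0])
y  = np.array([8.2, 11.5, 9.1, 15.0, 10.4, 10.6])   # made-up responses

# Model matrix: intercept, main effects, interaction, curvature in x1
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2])

coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, b in zip(["B0", "B1", "B2", "B12", "B11"], coeffs):
    print(f"{name} = {b:.3f}")
```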
In contrast, chemistry offers unique challenges to the team of experimenter and statistician. Chemistry is a science replete with nonlinearities, complex interactions, and nonquantitative factors and responses. Chemical experiments require more forethought and better planning than most DOEs. Chemistry-specific elements must be considered.
Above all, chemists make mixtures of ‘stuff’. These may be catalysts, drugs, personal care items, petrochemicals, or others. A beginner trying to apply DOE to a mixture system may think to start with a conventional cubic factorial design. It soon becomes clear, however, that this creates an impossible situation: the (+1, +1, +1) corner requires 100% of A and B and C! The actual experimental space of a mixture is a triangular simplex. This can be rotated into the plane to show a simplex design, and it extends easily to higher dimensions, such as a tetrahedron for four components.
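As a quick illustration (a sketch of my own, not from Jim's post), the points of a simple {3, 2} simplex-lattice design are easy to enumerate, and the triangular geometry falls right out: the rows are the three pure components plus the three 50/50 edge midpoints, and every row sums to 1.

```python
# A minimal sketch: enumerate a {q, m} simplex-lattice design, i.e. all
# combinations of the proportions 0, 1/m, 2/m, ..., 1 that sum to exactly 1.
from itertools import product

def simplex_lattice(q: int, m: int):
    points = []
    for combo in product(range(m + 1), repeat=q):
        if sum(combo) == m:
            points.append(tuple(c / m for c in combo))
    return points

for point in simplex_lattice(q=3, m=2):
    print(point)   # 3 vertices + 3 edge midpoints of the triangle
```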
It is rare that a real mixture experiment will actually use 100% of the components as points. A real experiment will be constrained by upper and lower bounds, or by proportionality requirements. The active ingredients may also be tiny amounts in a solvent. The response to a mixture may be a function of the amount used (fertilizers or insecticides, for example). And the conditions of the process in which the mixture is used may also be important, as in baking a cake – or optimizing a pharmaceutical reaction. All of these will require special designs.
Fortunately, all of these simple and complex mixture designs have been extensively studied and are covered by Cornell³, Anderson et al⁴, and Design-Expert® software.
The goal of a kinetics study is an equation which describes the progress of the reaction. The fundamental reality of chemical kinetics is
Rate = f(concentrations, temperature).
However, the form of the equation is highly dependent on the details of the reaction mechanism! The very simplest reaction has the first-order form
Rate = k*C1
which is easily treated by regression. The next most complex reaction has the form
Rate = k*C1*C2
in which the critical factors are multiplied – no longer the additive form of a typical linear model. The complexity continues to increase with multistep reactions.
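To make that concrete, here is a minimal sketch (simulated data, my own illustration) of estimating the rate constant for the multiplicative second-order form, where the concentrations enter as a product rather than as separate additive terms:

```python
# A minimal sketch: least-squares estimate of k in Rate = k*C1*C2.
# The model is linear in k, but the factors C1 and C2 enter as a product,
# not as the additive terms of a typical linear model.
import numpy as np

rng = np.random.default_rng(0)
C1 = np.array([0.1, 0.2, 0.4, 0.8, 1.0])      # hypothetical concentrations
C2 = np.array([0.5, 0.5, 0.25, 0.25, 1.0])
true_k = 2.0
rate = true_k * C1 * C2 + rng.normal(scale=0.01, size=C1.size)  # simulated rates

x = C1 * C2                          # single regressor, model through the origin
k_hat = (x @ rate) / (x @ x)         # closed-form least-squares estimate of k
print(f"estimated k = {k_hat:.3f}")  # should land close to 2.0
```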
Catalysis studies are chemical kinetics taken to the highest degree of complication! In industry, catalysts are often improved over years or decades. This process frequently results in increasingly complex catalyst formulations with components which interact in increasingly complex ways. A basic catalyst may have as many as five active co-catalysts. We now find multiple 2-factor interactions pointing to 3-factor interactions. As the catalyst is further refined, the Law of Diminishing Returns sets in: as you get closer to the theoretical limit, any improvement disappears in the noise!
As we look at the actual chemicals which may appear as factors in our experiments, we often find numbers appearing as part of their names. Often the only difference among these molecules is the length of the chain (C-12, 14, 16, 18), and it is tempting to incorporate this as numeric levels of the factor. Actually, this is a qualitative factor; calling it numeric invites serious error! The correct description, now available in Design-Expert, is ‘Discrete Numeric’.
The real message, however, is that the experimenters must never take off their ‘chemist hat’ when putting on a ‘statistics hat’!
Reference Materials: