Next in our 40th anniversary “Ask an Expert” blog series is Leonard “Len” Rubinstein, a Distinguished Scientist at Merck. He has over 3 decades of experience in the pharmaceutical industry, with a background in immunology. Len has spent the last couple of decades working on bioanalytical development, supporting bioprocess and clinical assay endpoints. He’s also a decades-long proponent of design of experiments (DOE), so we reached out to learn what he has to say!
When did you first learn about DOE? What convinced you to try it?
I first learned about DOE in 1996. I enrolled in a six-day training course to better understand the benefits of this approach in my assay development.
What convinced you to stick with DOE, rather than going back to one-factor-at-a-time (OFAT) designs?
Once I started using the DOE approach, I was able to shorten development time and, more importantly, gain insights into interactions and model the results to predict the optimal parameters for the most robust and least variable bioanalytical methods. Afterward, I could never go back to OFAT!
How do you currently use & promote DOE at your company?
DOE has been used in many areas across the company for years, but it had not been explicitly used for the analytical methods supporting clinical studies. I raised awareness through presentations and some brief training sessions. Once my management adopted the approach, I began sponsoring the training. Since 2018, I have sponsored four in-person training sessions, each with 20 participants.
Some examples of where we used DOE can be found at the end of this interview.
What’s been your approach for spreading the word about how beneficial DOE is?
Convincing others to use DOE is about letting them experience the benefits and see how much more productive it is than an OFAT approach. They gain a better understanding of the ranges over which their factor levels have little effect on the result and, more importantly, sometimes discard factors they thought were important in favor of those that truly influence the desired outcome.
Is there anything else you’d like to share to further the cause of DOE?
It would be beneficial if our scientists were exposed to DOE approaches during their post-secondary education, be it a BA/BS, MA/MS, or PhD program. An early introduction builds the foundation, and the desire, to continue using the DOE approach and honing skills with this type of statistical design in method development.
And there you have it! We appreciate Len’s perspective and hope you’re able to follow in his footsteps for experimental success. If you’re a college or university instructor and want to take Len’s advice about introducing DOE to your students, send us a note: we have “course-in-a-box” options for qualified instructors, and we offer discounts to all academics who want to use Stat-Ease software or learn DOE from us.
Len’s published research:
Whiteman, M.C., Bogardus, L., Giacone, D.G., Rubinstein, L.J., Antonello, J.M., Sun, D., Daijogo, S. and K.B. Gurney. 2018. Virus reduction neutralization test: A single-cell imaging high-throughput virus neutralization assay for Dengue. American Journal of Tropical Medicine and Hygiene. 99(6):1430-1439.
Sun, D., Hsu, A., Bogardus, L., Rubinstein, L.J., Antonello, J.M., Gurney, K.B., Whiteman, M.C. and S. Dellatore. 2021. Development and qualification of a fast, high-throughput and robust imaging-based neutralization assay for respiratory syncytial virus. Journal of Immunological Methods. 494:113054.
Marchese, R.D., Puchalski, D., Miller, P., Antonello, J., Hammond, O., Green, T., Rubinstein, L.J., Caulfield, M.J. and D. Sikkema. 2009. Optimization and validation of a multiplex, electrochemiluminescence-based detection assay for the quantitation of immunoglobulin G serotype-specific anti-pneumococcal antibodies in human serum. Clinical and Vaccine Immunology. 16(3):387-396.
Welcome to our first Publication Roundup! In these monthly posts, we'll feature recent papers that cited Design-Expert® or Stat-Ease® 360 software. Please submit your paper to us if you haven't seen it featured yet!
Mark's comment: make sure to check out publication #4 by researchers from GITAM School of Science in Hyderabad, India. They provide all the raw data, the ANOVAs, and model graphs and, most importantly, show how to enhance the quality of medicines via multifactor design of experiments (DOE).
Welcome to the first entry in our 40th anniversary Ask An Expert series, where we talk to current and past power users of Design-Expert® and Stat-Ease® 360 software about their experience with design of experiments (DOE) and our software. For this post, we interviewed Shari Kraber, formerly the Client Success Manager, Workshop Manager, and Senior Instructor for Stat-Ease. Shari retired in 2022 after nearly 3 decades of helping clients across all industries learn DOE and implement it to save time & money making breakthrough improvements on their products & processes.
What’s the biggest benefit to educating your team about DOE?
You’ll break the habit of testing changes one at a time. Many systems will have unknown interactions, and only structured DOE test plans will reveal them. Your team will learn a new way of approaching problems, which helps the company in the long run.
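Shari's point about interactions can be made concrete with a toy example. The numbers below are invented purely for illustration: each factor looks unhelpful when changed one at a time, yet changing both together gives the best result, which only a factorial test plan would reveal.

```python
# Toy 2x2 example (invented numbers) of an interaction that OFAT misses.
# Yield depends on temperature and time; each change alone hurts,
# but the two changes together help, an interaction effect.
yield_at = {
    ("low",  "low"):  70,   # baseline
    ("high", "low"):  68,   # raising temperature alone hurts
    ("low",  "high"): 67,   # extending time alone hurts
    ("high", "high"): 85,   # together they help
}

# OFAT from the baseline: change one factor at a time, keep the best
baseline = ("low", "low")
ofat_best = max([baseline, ("high", "low"), ("low", "high")],
                key=yield_at.get)
print(ofat_best, yield_at[ofat_best])   # OFAT never leaves the baseline

# A full 2x2 factorial tests every combination and finds the optimum
doe_best = max(yield_at, key=yield_at.get)
print(doe_best, yield_at[doe_best])     # ('high', 'high') 85
```

The OFAT tester concludes that neither change is worth making; the factorial design finds the 85-yield sweet spot.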
You spent so many years helping folks change from one-factor-at-a-time testing to using DOE. What’s something about DOE that more people should know?
Remember that DOE is about trying to get a bunch of information from a small sample of a large process. The analysis does not need to be perfect in order to be useful. Don’t get paralysis by analysis – just find a simple and reasonable model and then CONFIRM the results. You should use software to design and plan the experiments – a good (robust) design will help offset the inevitable problems encountered while running the physical experiment, so that the analysis will be useful enough to make business decisions.
Your background is as a process engineer at 3M, but you always insisted that anyone working with formulations should use mixture designs. Why?
When the response is dependent on the proportions of the ingredients, then two things make this different from a process design: first, the components must sum to a fixed total, so the ingredients cannot be varied independently of one another; second, the analysis calls for specialized mixture models (Scheffé polynomials) rather than standard process-factor models.
So, what’s the best way to train your team on DOE?
I think distance learning is great. Stat-Ease started doing distance-learning training via Zoom during the COVID-19 pandemic, and it remains a popular choice for teams. The big advantage of distance learning is that the massive amount of information provided is more digestible in half-day segments. The in-person training is pretty intense and is not as ideal educationally. Yes, it is nice to have a live trainer, but honestly the retention of the materials is BETTER using distance learning.
Finally, what features of Stat-Ease software do you want more folks to know about?
I have quite a few recommendations!
First, there are several great editing features if you right-click on any graphs in the software:
And some more:
If you’re ready to train your team on DOE, check out our public training options or email us with your questions. Shari still teaches classes on a part-time basis, and our whole team would love to get you rolling with best practices for DOE.
Thanksgiving is fast approaching—time to begin the meal planning. With this in mind, the NBC Today show’s October 22nd tips for "75 Thanksgiving desserts for the sweetest end to your feast" caught my eye, in particular the Donut Loaf pound cake. My 11 grandkids would love this “giant powdered sugar donut” (and their Poppa, too!).
I became a big fan of pound cake in the early 1990s while teaching DOE to food scientists at Sara Lee Corporation. Their ready-made pound cakes really hit the spot. However, it is hard to beat starting from scratch and baking your own pound cake. The recipe goes back hundreds of years to a time when many people could not read, thus it simply called for a pound each of flour, butter, sugar and eggs. Not having a strong interest in baking and wanting to minimize ingredients and complexity (other than adding milk for moisture and baking powder for tenderness), I made this formulation my starting point for a mixture DOE, using the Sara Lee classic pound cake as the standard for comparison.
As I always advise Stat-Ease clients, before designing an experiment, begin with first principles. I took advantage of my work with Sara Lee to gain insights on the food science of pound cake. Then I checked out Rose Levy Beranbaum’s The Cake Bible from my local library. I was a bit dismayed to learn from this research that the experts recommended cake flour, which costs about four times more than the all-purpose (AP) variety. Having worked in a flour mill during my time at General Mills as a process engineer, I was skeptical. Therefore, I developed a way to ‘have my cake and eat it too’: via a multicomponent constraint (MCC), my experiment design incorporated both varieties of flour. Figure 1 shows how to enter this in Stat-Ease software.
Figure 1. Setting up the pound cake experiment with a multicomponent constraint on the flours
By the way, as you can see in the screen shot, I scaled back the total weight of each experimental cake to 1 pound (16 ounces by weight), keeping each of the four ingredients in a specified range with the MCC preventing the combined amount of flour from going out of bounds.
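For readers who like to see the constraint arithmetic spelled out, here is a minimal sketch of an MCC feasibility check in Python. The per-ingredient ranges and combined-flour bounds below are hypothetical stand-ins, since only the 16-ounce total is stated above; this mimics the logic, not the Stat-Ease implementation.

```python
# Illustrative feasibility check for a mixture design with a
# multicomponent constraint (MCC). All ranges below are hypothetical;
# only the 16 oz total comes from the pound cake write-up.

TOTAL = 16.0  # ounces: total weight of each experimental cake

# hypothetical per-component ranges (oz)
RANGES = {
    "ap_flour":   (0.0, 6.0),
    "cake_flour": (0.0, 6.0),
    "butter":     (3.0, 5.0),
    "sugar":      (3.0, 5.0),
    "eggs":       (3.0, 5.0),
}

# hypothetical MCC: combined flour must stay within these bounds
FLOUR_BOUNDS = (4.0, 6.0)

def is_feasible(blend: dict) -> bool:
    """True if the blend meets the total, range, and MCC constraints."""
    if abs(sum(blend.values()) - TOTAL) > 1e-9:
        return False
    for name, (lo, hi) in RANGES.items():
        if not lo <= blend[name] <= hi:
            return False
    lo, hi = FLOUR_BOUNDS
    return lo <= blend["ap_flour"] + blend["cake_flour"] <= hi

candidate = {"ap_flour": 3.0, "cake_flour": 2.0,
             "butter": 4.0, "sugar": 4.0, "eggs": 3.0}
print(is_feasible(candidate))  # True
```

The MCC is just one extra inequality on top of the usual mixture constraints: the two flours together, not each one separately, must stay in bounds.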
The trace plot shown in Figure 2 reveals that the ingredient directions for a pound cake that pleases kids (based on the tastes of my young family of five at the time) are straightforward: more sugar, fewer eggs, and go with the cheap AP flour (its track is not appreciably different from that of the cake flour).
Figure 2. Trace plot for pound cake experiment
For all the details on my pound cake experiment, refer to "Mixing it up with Computer-Aided Design"—the manuscript for a publication by Today's Chemist at Work in their November 1997 issue. This DOE is also featured in “MCCs Made as Easy as Making a Pound Cake” in Chapter 6 of Formulation Simplified: Finding the Sweet Spot through Design and Analysis of Experiments with Mixtures.
The only thing I would do differently nowadays is pour a lot of powdered sugar over the top a la the Today show recipe. One thing that I will not do, despite it being so popular during the Halloween/Thanksgiving season, is add pumpkin spice. But go ahead if you like—do your own thing while experimenting on pound cake for your family’s feast. Happy holidays! Enjoy!
To learn more about MCCs and master DOE for food, chemical, pharmaceutical, cosmetic or any other recipe improvement projects, enroll in a Stat-Ease “Mixture Design for Optimal Formulations” public workshop or arrange for a private presentation to your R&D team.
Hi folks! It was wonderful to meet with so many new prospects and long-standing clients at the Advanced Manufacturing Minneapolis expo last week. One highlight of the show was running our own design of experiments (DOE) in-booth: a test to pinpoint the height and distance of a foam cat launched from our Cat-A-Pult toy. Visitors got to choose a cat and launch it based on our randomized run sheet. We got lots of takers coming in to watch the cats fly, and we even got a visit from local mascot Goldy Gopher!
Mark, Tony, and Rachel are all UMN alums - go Gophers!
But I’m getting a bit ahead of myself: this experiment primarily shows off the ease of use and powerful analytical capabilities of Design-Expert® and Stat-Ease® 360 software. I’m no statistician – the last math class I took was in high school, over a decade ago – but even a marketer like me was able to design, run, and analyze a DOE with just a little advice. Here’s how it worked.
Let’s start at the beginning, with the design. My first task was to decide what factors I wanted to test. There were lots of options! The two most obvious were the built-in experimental parts of the toy: the green and orange knobs on either side of the Cat-A-Pult, with spring tension settings from 1 to 5.
However, there were plenty of other places where there could be variation in my ‘pulting system:
Some of these questions can be answered with subject matter knowledge – in the case of launch pressure, by reading the instruction manual.
For our experiment, the surface question was moot: we had no way to test it, as the convention floor was covered in carpet. We also had no way to test beforehand if there were differences in mass between colors of cat, since we lacked a tool with sufficient precision. I settled on just testing the experimental knobs, but decided to account for some of this variation in other ways. We divided the experiment into blocks based on which specific Cat-A-Pult we were using, and numbered them from 1 to 5. And, while I decided to let people choose their cat color to enhance the fun aspect, we still tracked which color of cat was launched for each run - just in case.
Since my chosen two categoric factors had five levels each, I decided to use the Multilevel Categoric design tool to set up my DOE. One thing I learned from Mark is that these are an “ordinal” type of categoric factor: there is an order to the levels, as opposed to a factor like the color of the cat or the type of flooring (a “nominal” factor). We decided to just test 3 of the 5 Cat-A-Pults, trying to be reasonable about how many folks would want to play with the cats, so we set the design to have 3 replicates separated out into 3 blocks. This would help us identify if there were any differences between the specific Cat-A-Pults.
For my responses, I chose the Cat-A-Pult’s recommended ones: height and distance. My Stat-Ease software then gave me the full, 5x5, 25-run factorial design for this, with a total of 75 runs for the 3 replicates blocked by 'pult, meaning we would test every combination of green knob level and orange knob level on each Cat-A-Pult. More runs means more accurate and precise modeling of our system, and we expected to be able to get 75 folks to stop by and launch a cat.
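For the curious, the structure of that design can be sketched in a few lines of Python. This mimics the layout only, not Stat-Ease's actual design engine: a full 5x5 factorial, replicated once per Cat-A-Pult, with each replicate treated as a block and the run order randomized within the block.

```python
# Sketch of the Cat-A-Pult run layout: a 5x5 multilevel categoric
# design, one replicate (block) per Cat-A-Pult, randomized within
# each block. Structure only; not the software's design algorithm.
import itertools
import random

LEVELS = [1, 2, 3, 4, 5]   # green and orange knob settings
BLOCKS = [1, 2, 3]         # one block per Cat-A-Pult tested

random.seed(42)            # reproducible run order for the sketch

runs = []
for block in BLOCKS:
    replicate = list(itertools.product(LEVELS, LEVELS))  # 25 combos
    random.shuffle(replicate)        # randomize within the block
    for green, orange in replicate:
        runs.append({"block": block, "green": green, "orange": orange})

print(len(runs))  # 75 runs: every knob combination on every 'pult
```

Blocking this way lets differences between the individual Cat-A-Pults be separated from the knob effects during analysis.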
And so, armed with my run sheet, I set up our booth experiment! I brought two measuring tapes for the launch zone: one laid along the side of it to measure distance, and one hanging from the booth wall to measure height. My measurement process was, shall we say, less than precise: for distance, the tester and I eyeballed the point at which the cat first landed after launch, then drew a line over to our measuring tape. For height, I took a video of the launch, then scrolled back to the frame at the apex of the cat’s arc and once again eyeballed the height measurement next to it. In addition to blocking the specific Cat-A-Pult used, we tracked which color of cat was selected in case that became relevant. (We also had to append A and B to the orange cat after the first orange cat was mistaken for swag!)
Whee! I'm calling that one at 23 inches.
Over the course of the conference, we completed 50 runs, getting through the full range of settings for ‘pults 1 and 2. While that’s less than we had hoped, it’s still plenty for a good analysis. I ran the analysis for height, following the steps I learned in our Finding the Vital Settings via Factorial Analysis eLearning module.
The half-normal plot of effects and the ANOVA table for Height.
The green knob was the only significant effect on Height, but the relatively low Predicted R² of 0.36 (a model fit statistic) tells us that there’s a lot of noise that the model doesn’t explain. Mark directed me to check the coefficients, where we discovered that there was a 5-inch variation in height between the two Cat-A-Pults! That’s a huge difference, considering that our Height response peaked at 27 inches.
With that caveat in mind, we looked at the diagnostic plots and the one-factor plot for Height. The diagnostics all looked fine, but the Least Significant Difference bars showed us something interesting: there didn’t seem to be significant differences between setting the green knob at 1-3, or between settings 4-5, but there was a difference between those two groups.
One-factor plot for Height.
With this analysis under my belt, I moved on to Distance. This one was a bit trickier, because while both knobs were clearly significant to the model, I wasn’t sure whether or not to include the interaction. I decided to include it because that’s what multifactor DOE is for, as opposed to one-factor-at-a-time experimentation: we’re trying to look for interactions between factors. So once again, I turned to the diagnostics.
The three main diagnostic plots for Distance.
Here's where I ran into a complication: our primary diagnostic tools told me there was something off with our data. There’s a clear S-shaped pattern in the Normal Plot of Residuals, and the Residuals vs. Predicted graph shows a slight megaphone shape. No transform was recommended according to the Box-Cox plot, but Mark suggested I try a square-root transform anyway to see if we could get more of the data to fit the model. So I did!
The diagnostics again, after transforming.
Unfortunately, that didn’t fix the issues I saw in the diagnostics. In fact, it revealed that there’s a chance two of our runs were outliers: runs #10 and #26. Mark and I reviewed the process notes for those runs and found that run #10 might have suffered from operator error: he was the one helping our experimenter at the booth while I ran off for lunch, and he reported that he didn’t think he accurately captured the results the way I’d been doing it. With that in mind, I decided to ignore that run when analyzing the data. This didn’t result in a change in the analysis for Height, but it made a large difference when analyzing Distance. The Box-Cox plot recommended a log transform for analyzing Distance, so I applied one. This tightened the p-value for the interaction down to 0.03 and brought the diagnostics more into line with what we expected.
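As a side note for the curious, the Box-Cox procedure itself is simple enough to sketch by hand: try a range of power transforms and keep the one that maximizes the profile log-likelihood. The distances below are invented for illustration; they are not our booth data.

```python
# Minimal Box-Cox sketch: scan a grid of lambdas and pick the one
# that maximizes the profile log-likelihood. Data are made up for
# illustration, not the actual Cat-A-Pult distance measurements.
import math

def boxcox(y, lam):
    """Box-Cox power transform of a list of positive values."""
    return [math.log(v) if lam == 0 else (v**lam - 1) / lam for v in y]

def loglik(y, lam):
    """Profile log-likelihood of lambda for data y."""
    z = boxcox(y, lam)
    n = len(z)
    mean = sum(z) / n
    var = sum((v - mean) ** 2 for v in z) / n
    return -n / 2 * math.log(var) + (lam - 1) * sum(math.log(v) for v in y)

distances = [4.0, 7.5, 11.0, 16.5, 24.0, 35.5, 52.0]  # hypothetical, inches
lams = [i / 10 for i in range(-20, 21)]               # grid: -2.0 .. 2.0
best = max(lams, key=lambda lam: loglik(distances, lam))
print(best)  # a lambda near 0 points toward a log transform
```

Stat-Ease draws this same likelihood curve for you on its Box-Cox plot, along with a confidence interval on lambda; the grid search above is just the idea laid bare.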
The two-factor interaction plot for Distance.
While this interaction plot is a bit trickier to read than the one-factor plot for Height, we can still clearly see that there’s a significant difference between certain sets of setting combinations. It’s obvious that setting the orange knob to 1 keeps the distance significantly lower than other settings, regardless of the green knob’s setting. The orange knob’s setting also seems to matter more as the green knob’s setting increases.
Normally, this is when I’d move on to optimization, and figuring out which setting combinations will let me accurately hit a “sweet spot” every time. However, this is where I stopped. Given the huge amount of variation in height between the two Cat-A-Pults, I’m not confident that any height optimization I do will be accurate. If we’d gotten those last 25 runs with ‘pult #3, I might have had enough data to make a more educated decision; I could set a Cat-A-Pult on the floor and know for certain that the cat would clear the edge of the litterbox when launched! I’ll have to go back to the “lab” and collect more data the next time we’re out at a trade show.
One final note before I bring this story to a close: the instruction manual for the Cat-A-Pult actually tells us what the orange and green knobs are supposed to do. The orange knob controls the release point of the Cat-A-Pult, affecting the trajectory of the cat, and the green knob controls the spring tension, affecting the force with which the cat is launched.
I mentioned this to Mark, and it surprised us both! The intuitive assumption would be that the trajectory knob would primarily affect height, but the results showed that the orange knob’s settings didn’t significantly affect the height of the launch at all. “That,” Mark told me, “is why it’s good to run empirical studies and not assume anything!”
We hope to see you the next time we’re out and about. Our next planned conference is our 8th European DOE User Meeting in Amsterdam, the Netherlands on June 18-20, 2025. Learn more here, and happy experimenting!