By Ben Pellegrini, Intellegens CEO
Design of experiments (DOE) should mean you do fewer experiments and get better results. So why isn't everyone using this methodology all the time? We'll answer that question in this short blog series, beginning with an introduction to DOE and why you should consider it.
What is DOE?
DOE is a methodical way of planning, conducting, and analysing controlled tests to optimise processes or products. It applies statistical methods to explore the variables in a design space systematically but efficiently. It's often described as an alternative to 'trial and error' or to simply testing all possible combinations of factors in a system. In practice, of course, no sensible research team operates purely by 'trial and error', while testing all combinations is usually impossible (five factors at just five levels each would already mean 5⁵ = 3,125 experiments). But much R&D does proceed on what we might call 'informed trial-and-improvement' – applying domain knowledge to select the inputs to the experiment most likely to impact its outputs, then focusing testing on those factors.
Domain knowledge should be used to identify targets and place sensible boundaries on experimentation. But over-relying on human intuition can miss solutions. That's where DOE comes in. Systematic DOE approaches were first developed by Ronald Fisher in the 1920s and extended by statisticians such as George Box. The underlying mathematics guides selection of a subset of all possible experiments to ensure meaningful coverage of the chosen design space, while reducing the number of tests involved and avoiding problems that might invalidate the experiment (for example, through proper randomisation of sampling or treatment of uncertainty). This quickly gets complicated, particularly when we go beyond varying one factor at a time into multifactorial design.
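To make the 'subset of all possible experiments' idea concrete, here is a minimal Python sketch. It is purely illustrative: the factor names, ranges, and run counts are invented, and real DOE software offers far more sophisticated designs. It counts the runs a coarse full-factorial grid would need for five factors, then draws a 20-run Latin hypercube sample – one common space-filling design – over the same space:

```python
import itertools

import numpy as np

# Hypothetical design space: five process factors, each varied over a range.
# Factor names and ranges are purely illustrative.
factors = {
    "temperature_C": (60.0, 120.0),
    "pressure_bar":  (1.0, 5.0),
    "time_min":      (10.0, 90.0),
    "catalyst_pct":  (0.1, 2.0),
    "stir_rate_rpm": (100.0, 600.0),
}

# Exhaustive testing: even a coarse grid of 5 levels per factor explodes.
levels = 5
full_grid = list(itertools.product(range(levels), repeat=len(factors)))
print(f"Full factorial at {levels} levels: {len(full_grid)} runs")  # 3125

def latin_hypercube(n_runs, bounds, seed=0):
    """Simple Latin hypercube: split each factor's range into n_runs
    strata, sample each stratum exactly once, then shuffle the columns
    independently so the points spread through the whole space."""
    rng = np.random.default_rng(seed)
    n_factors = len(bounds)
    # One random point inside each of n_runs equal strata, per factor.
    u = (np.arange(n_runs)[:, None] + rng.random((n_runs, n_factors))) / n_runs
    for j in range(n_factors):
        u[:, j] = rng.permutation(u[:, j])  # decorrelate the factors
    lows = np.array([lo for lo, _ in bounds.values()])
    highs = np.array([hi for _, hi in bounds.values()])
    return lows + u * (highs - lows)  # scale to physical units

design = latin_hypercube(20, factors)
print(f"Latin hypercube design: {design.shape[0]} runs "
      f"over {design.shape[1]} factors")
```

Twenty well-spread runs versus 3,125 grid points is the kind of saving DOE aims at. Classical designs (fractional factorials, response-surface designs) add statistical structure well beyond simple space-filling, so treat this only as a flavour of the idea.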
Benefits and challenges
Since the 1980s, DOE has been enabled by a range of software tools, most of which were originally developed as generic statistical analysis programs. Effective application of such tools has much to offer R&D in areas like formulations, chemicals, and materials science:
- QUANTITY – systematically designed experimental programs should require fewer experiments to reach the best possible result, enabling you to deliver more product and process innovations.
- TIME – fewer experiments translate directly to faster time-to-market.
- COST – savings come not only through reducing the number of tests, but also via efficient use of resources within tests.
- QUALITY – a systematic approach identifies solutions that might otherwise not be found, improving product quality and reducing risk.
But there are challenges with conventional DOE. A lack of awareness of the approach, its perceived complexity, the requirement for statistical knowledge when implementing it, and the difficulty of changing whole experimental programs are all barriers to DOE being adopted at all. Where it is used, it can be hard to strike a balance between exploring the design space widely and limiting the number of experiments the design recommends. So DOE may still drive relatively high experimental costs and timescales, and, if not applied knowledgeably, it can even increase this burden.
So where do we go from here?
In the next blog, we’ll discuss how machine learning can help to address some of the drawbacks of conventional DOE. In the final blog of this mini-series, we’ll discuss adaptive experimental design with machine learning – think ‘super-powered DOE’ – and how this might quickly and easily deliver the promise of DOE.
If you'd like to take a shortcut, you can always read our white paper on that topic now!