Intellegens Blog – Stephen Warde, November 2021
Discussing applied machine learning for chemicals, materials and manufacturing.
Everyone wants to reduce the number of experiments. That was the clear message when we recently surveyed enterprises that develop materials, chemicals, or manufacturing processes and asked which factors mattered when they considered using machine learning. More effectively focusing experiments wasn’t just top of the list; it was selected by 100% of respondents.
This makes sense. Experiment is essential to product development – whether for testing hypotheses, for systematically exploring possible solutions, or as part of a ‘trial-and-error’ approach. This work is, of course, guided by the knowledge and intuition of scientists. Smart statistical methods (‘Design of Experiments’, or DOE) increase efficiency. Advances in simulation and high-throughput experimentation help, although they also add new costs and increase the amount of data that must be understood. The net result is that a typical experimental program still costs millions and soaks up time.
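To see why an exhaustive approach gets expensive so quickly, here is a minimal sketch of a conventional full-factorial design, in which every combination of factor levels is run. The factors and level values are purely hypothetical and are not drawn from any Intellegens project:

```python
# Hypothetical full-factorial design: every combination of factor levels is run.
# The factors and level values below are illustrative, not real project data.
from itertools import product

factors = {
    "binder_fraction": [0.2, 0.3, 0.4, 0.5],     # 4 levels
    "cure_temperature_C": [120, 140, 160, 180],  # 4 levels
    "cure_time_min": [10, 20, 30],               # 3 levels
    "additive_pct": [0.0, 0.5, 1.0, 1.5, 2.0],   # 5 levels
}

# Enumerate the full design: 4 x 4 x 3 x 5 = 240 experiments for just four factors.
design = list(product(*factors.values()))
print(f"Full-factorial runs required: {len(design)}")
```

Even with only four formulation and process variables, the run count reaches hundreds; add a few more factors, or finer levels, and the experimental burden grows multiplicatively.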
Writing recently on machine learning applied to science in the UK’s Observer newspaper, Professor John Naughton mused on DeepMind’s recent foray into molecular biology. He expressed scepticism, concluding that machine learning doesn’t provide insight into how an effect is caused by something else. For that, he thinks, “we need ye olde scientific method, not a fancy neural network.”
Well, at Intellegens, we do indeed develop some very “fancy” machine learning – an algorithm that can extract value from experimental data, even when that data is sparse or noisy – a scenario that causes most machine learning approaches to fail. But we still sympathise with Naughton’s opinion… to a limited extent!
Experiment and the scientific method remain central to the work we do with customers. But, for us, machine learning works alongside experiment, in a process of ‘adaptive experimental design’. Our Alchemite™ software builds a predictive model from an experimental dataset. Through its ‘explainable AI’ analysis tools it can, in fact, provide insight into causality – showing which input parameters drive which outcomes. Crucially, it also quantifies the uncertainty in its predictions and recommends which experiments to do next in order to reduce that uncertainty. It often suggests ways to cover the property space being investigated with dramatically fewer experiments than conventional DOE would propose.
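For readers who want a feel for the loop, here is a minimal sketch of adaptive experimental design using a generic Gaussian-process surrogate from scikit-learn. Alchemite™ itself uses a different, proprietary algorithm, and the response function and candidate space below are hypothetical; the sketch only illustrates the model → uncertainty estimate → next-experiment cycle:

```python
# A generic active-learning loop: fit a surrogate model, estimate prediction
# uncertainty, and run the next experiment where the model is least certain.
# This is NOT Alchemite; the surrogate, response function and data are illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def run_experiment(x):
    # Stand-in for a real (slow, costly) lab measurement.
    return float(np.sin(3 * x[0]) + 0.5 * x[1])

rng = np.random.default_rng(0)
candidates = rng.uniform(0.0, 1.0, size=(200, 2))  # unexplored formulation/process space
X = candidates[:5].copy()                          # a handful of initial experiments
y = np.array([run_experiment(x) for x in X])

for _ in range(10):                                # iterative experimental budget
    model = GaussianProcessRegressor().fit(X, y)
    mean, std = model.predict(candidates, return_std=True)
    next_point = candidates[np.argmax(std)]        # most uncertain = most informative
    X = np.vstack([X, next_point])
    y = np.append(y, run_experiment(next_point))

print(f"Experiments run: {len(y)} of {len(candidates)} candidate conditions")
```

In practice the acquisition step can balance uncertainty reduction against directly optimising the target properties, but the pattern is the same: each round of experiments is chosen because it teaches the model the most.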
Machine learning shouldn’t be positioned in opposition to experiment. Rather, it can make experimentation better, getting you off the experimental treadmill through a pragmatic, iterative process. A typical Alchemite™ project targets reductions in experimental time of 50-80%. In a recent webinar, one of our customers described how this freed up their chemists’ time to think more creatively about product development ideas. Experiments will remain at the heart of product development, but much of the time currently spent on them could be redeployed more usefully.
If you’ve really nothing better to do with that time, that’s OK.
Otherwise, why not take a look at machine learning for adaptive experimental design?