One of many interesting points that struck me following the recent Intellegens User Group Meeting (UGM) was the importance of being able to ‘play around’ with machine learning – and of having the right tools to enable that process. We’ve discussed the challenge of making ML less of a ‘black box’ on this blog previously. The user meeting provided some excellent examples of the benefits of interacting productively with the AI when designing experiments and formulations.
Blog post by Stephen Warde.

Our Head of Machine Learning, Tom Whitehead, opened the meeting with a rundown of recent developments in the Alchemite™ Suite software. Of course, he explored all of the headline new features that we’ve been talking about in recent months: the Excel add-on, the use of generative AI to guide users, and the introduction of chemical understanding via SMILES strings. But he also shared a series of smaller analytics enhancements designed to support Design of Experiments (DOE). These risk slipping under the radar because they arrive through a continual improvement process that steadily drip-feeds new capabilities into the software in response to user feedback.
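How Alchemite™ ingests SMILES is proprietary, but the general idea of giving a model ‘an understanding of chemistry’ via SMILES strings is easy to illustrate. The sketch below uses the open-source RDKit library (not Alchemite™) to turn a SMILES string into a handful of numeric descriptors that a model could learn from; the choice of descriptors here is purely illustrative.

```python
# Illustrative only: featurizing a SMILES string with RDKit so that
# an ML model has numeric inputs that reflect the underlying chemistry.
from rdkit import Chem
from rdkit.Chem import Descriptors

def featurize(smiles: str) -> dict:
    """Convert a SMILES string into a few simple molecular descriptors."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Could not parse SMILES: {smiles!r}")
    return {
        "mol_weight": Descriptors.MolWt(mol),     # molecular weight
        "logp": Descriptors.MolLogP(mol),         # lipophilicity estimate
        "tpsa": Descriptors.TPSA(mol),            # topological polar surface area
        "h_donors": Descriptors.NumHDonors(mol),  # hydrogen-bond donor count
    }

print(featurize("CCO"))  # ethanol
```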
The new DOE Profiler, for example, is a plot that helps users to understand why the ML has proposed particular experiments, and what difference it might make if assumptions were tweaked. A DOE radar plot explores tradeoffs between the factors influencing suggested experiments. And there were further examples. Such tools allow users to train an ML model and use it to propose new experiments, but then, vitally, to dig into why the ML is suggesting those experiments and to adjust their experimental plans before proceeding.
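The radar plot itself is built into the Alchemite™ apps, but the underlying idea is straightforward to sketch. Assuming a handful of tradeoff factors scored between 0 and 1 (the factor names and scores below are invented for illustration), a basic radar chart comparing two suggested experiments might look like this in matplotlib:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical tradeoff scores (0-1) for two suggested experiments.
factors = ["Predicted gain", "Model uncertainty", "Material cost",
           "Ease of synthesis", "Constraint slack"]
experiment_a = [0.8, 0.4, 0.6, 0.9, 0.5]
experiment_b = [0.6, 0.7, 0.8, 0.5, 0.7]

# One axis per factor; repeat the first point to close each polygon.
angles = np.linspace(0, 2 * np.pi, len(factors), endpoint=False).tolist()
angles += angles[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for label, scores in [("Experiment A", experiment_a),
                      ("Experiment B", experiment_b)]:
    values = scores + scores[:1]
    ax.plot(angles, values, label=label)
    ax.fill(angles, values, alpha=0.15)

ax.set_xticks(angles[:-1])
ax.set_xticklabels(factors)
ax.set_title("Tradeoffs between suggested experiments (illustrative)")
ax.legend(loc="lower right")
plt.show()
```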
These tools deliver the insights that users find themselves needing when working with the ML. Building them into the Alchemite™ apps, combined with the speed of the Alchemite™ algorithm, allows for fast iteration (‘playing around’) that enables users to hit their targets in less time. Tom shared that another new analytic, which tracks how the ML model’s suggestions evolve over time, had been added just that week, based on an analysis that one of the users at the meeting had developed for one of their projects.
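The details of that analytic are specific to Alchemite™, but the concept of tracking how suggestions evolve is easy to picture. A minimal sketch, using invented data in which the model’s suggested level of one ingredient settles down as measurements accumulate:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

# Invented data: the ingredient level the model suggests at each DOE
# iteration, converging as the model learns from new measurements.
iterations = np.arange(1, 11)
suggested_level = 30 + 10 * np.exp(-iterations / 3) + rng.normal(0, 0.5, 10)

plt.plot(iterations, suggested_level, marker="o")
plt.xlabel("DOE iteration")
plt.ylabel("Suggested ingredient level (%)")
plt.title("Evolution of model suggestions over iterations (illustrative)")
plt.show()
```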
In other sessions at the UGM, users shared case studies of applying Alchemite™ in R&D. One such story, which focused on optimizing the rheological properties of a formulation, provided another example of playing around with the ML. The user shared how initial predictions under a strict set of constraints had not identified candidate formulations that met their criteria. However, Alchemite™ allowed them to quickly and easily relax those constraints and explore the design space more flexibly, yielding a number of formulations that met the criteria, one of which has since been tested successfully.
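The specifics of that project aren’t public, but the tighten-or-relax pattern it describes is simple to sketch. Assuming a model has already predicted two rheological properties for a set of candidate formulations (the property names, thresholds, and synthetic predictions below are all invented), relaxing the constraints widens the pool of acceptable candidates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model predictions for 1,000 candidate formulations:
# columns are (viscosity, yield_stress); names and values are invented.
predictions = rng.normal(loc=[50.0, 10.0], scale=[5.0, 2.0], size=(1000, 2))

def passing(preds: np.ndarray, max_viscosity: float, min_yield: float) -> np.ndarray:
    """Return the candidates meeting both rheology constraints."""
    mask = (preds[:, 0] <= max_viscosity) & (preds[:, 1] >= min_yield)
    return preds[mask]

# Strict constraints may rule out every candidate...
strict = passing(predictions, max_viscosity=40.0, min_yield=15.0)
print(f"Strict constraints: {len(strict)} candidates")

# ...so relax them slightly to explore more of the design space.
relaxed = passing(predictions, max_viscosity=48.0, min_yield=12.0)
print(f"Relaxed constraints: {len(relaxed)} candidates")
```

In Alchemite™ this kind of adjustment happens through the app’s constraint controls rather than through code, but the effect on the candidate pool is the same.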
These examples illustrate the tradeoffs in making ML a productive R&D tool. Users want speed and ease when training ML models and using those models to make predictions and guide decisions. They don’t want that process to require significant data pre-processing, coding, or statistical analysis. But they also want the option to interrogate the results in order to focus in on the best pathways to success. The challenge is to give them the right tools to do this, quickly and easily, without overcomplicating their workflows. That challenge can only be met by working closely with users of the technology to give them the controls and analyses that they find they need in practical DOE projects.
This is the philosophy driving the ongoing development of Alchemite™ Suite at Intellegens – a series of task-centric apps focused on the needs of specific users for DOE, formulation design, or gaining R&D insights. To see what this looks like in practice, why not arrange to try out Alchemite™?