Technical Paper
Lessons from NOA, a schema-gated conversational research testbed
This Technical Paper shares experience from ongoing work to prototype an Agentic AI system that can plan and execute multi-step scientific workflows reliably. The goal is to guide the long-term AI roadmap at Intellegens, although insights from this work are already informing the development of generative AI tools in the Alchemite™ software. This paper is written for anyone interested in this long-term direction or in collaborating with us in this area.
Executive Summary
Agentic AI systems that can plan, select tools, and execute multi-step actions have accelerated many knowledge workflows. In scientific R&D, however, the bar is higher: results must be accurate, reproducible, and under explicit human control.
To explore what it takes to make agents trustworthy in this setting, we built NOA (No Ordinary Assistant), an internal research testbed. NOA combines conversational interaction with a schema-gated architecture: an agent can propose a workflow, but execution is restricted to schema-validated tools and deterministic workflows, with full provenance and researcher approval.
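The gating described above can be sketched in a few lines. This is a minimal illustration, not NOA's actual implementation (which is not public): all names here (`TOOLS`, `gate_and_run`, the `fit_model` tool) are hypothetical. The key idea it shows is that an agent may only *propose* a call; execution happens only after the proposal passes schema validation and explicit researcher approval, and every decision is appended to a provenance log.

```python
# Hypothetical sketch of a schema-gated tool-call pipeline.
# The agent proposes; the gate validates, asks for approval, executes,
# and records provenance. Names and schemas are illustrative only.
import time

TOOLS = {
    # Each tool declares the exact parameters it accepts and their types.
    "fit_model": {
        "schema": {"dataset": str, "target": str},
        "fn": lambda dataset, target: f"model fitted on {dataset} for {target}",
    },
}

PROVENANCE = []  # append-only audit trail of every proposal and outcome

def gate_and_run(proposal, approve):
    """Validate an agent proposal against the tool schema, request
    researcher approval, execute deterministically, log provenance."""
    record = {"proposal": proposal, "time": time.time()}
    tool = TOOLS.get(proposal.get("tool"))
    args = proposal.get("args", {})
    schema = tool["schema"] if tool else None
    schema_ok = (
        tool is not None
        and set(args) == set(schema)
        and all(isinstance(args[k], t) for k, t in schema.items())
    )
    if not schema_ok:
        record["status"] = "rejected:schema"      # never reaches execution
    elif not approve(proposal):
        record["status"] = "rejected:researcher"  # human retains control
    else:
        record["status"] = "executed"
        record["result"] = tool["fn"](**args)
    PROVENANCE.append(record)
    return record

# A well-formed proposal passes the gate only once the researcher approves.
good = gate_and_run(
    {"tool": "fit_model", "args": {"dataset": "alloys.csv", "target": "strength"}},
    approve=lambda p: True,
)
# A malformed proposal (unknown parameter name) is rejected before execution.
bad = gate_and_run(
    {"tool": "fit_model", "args": {"data": "alloys.csv"}},
    approve=lambda p: True,
)
```

In a real system the schema would be richer (e.g. JSON Schema with value constraints) and approval would be an interactive step, but the control flow is the point: the language model never calls a tool directly.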
This work is not theoretical. What we learn from NOA feeds directly into our AI capabilities roadmap and our Alchemite™ Insight software tools. We have completed Phases 1 and 2 of this roadmap (analytics explanations and user guidance) and are actively building Phase 3 (chat interface). NOA explores what comes after this five-phase roadmap: orchestrating across tools and platforms.