Generative AI and decision-making agentic workflows have moved from theory to deployment in less than two years. And while organisations are still debating the potential ROI of AI investments, the real question is simpler and more urgent: what conditions need to be in place for these systems to be GMP-compliant?
Good Manufacturing Practice is fundamentally about ensuring that systems affecting product quality are validated for their intended use, operate within defined boundaries, are monitored over time, and remain accountable to human oversight. It requires traceability. It requires change control. It requires that decisions can be reconstructed.
It does not require determinism. This distinction becomes critical as we move from predictive models to generative and agentic AI.
Generative AI in GMP
Generative AI makes people uncomfortable because it produces language that looks authoritative. It can draft deviation investigations, propose CAPAs, and suggest equipment maintenance or batch impact assessments.
The concern is not that it is probabilistic. The concern is that it can influence decisions through simple, human-readable, step-by-step instructions.
So can generative AI be used in a GMP environment?
Yes, but only under strict, explicit constraints.
If a generative model drafts a deviation report, the prompt structure must be defined. The data sources it can access must be bounded. Outputs must be logged and versioned. Human review must be mandatory and documented. The system must make clear that the model supports the decision rather than makes it.
The moment its output is treated as authoritative without oversight, you have stepped outside GMP expectations.
The issue is not variability in wording. It is whether the decision pathway is transparent and defensible.
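To make those constraints concrete, here is a minimal sketch of what a governed drafting step could look like in software. Everything in it is hypothetical: the names (DraftRecord, ALLOWED_SOURCES, draft_deviation_report), the version string, and the stubbed model call. The point is the shape of the controls, not a specific implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
import hashlib

ALLOWED_SOURCES = {"LIMS", "MES", "deviation_log"}  # bounded data universe
PROMPT_TEMPLATE = "Summarise deviation {dev_id} using only the supplied records."
MODEL_VERSION = "gen-model-1.4.2"  # fixed, change-controlled version (illustrative)

@dataclass
class DraftRecord:
    dev_id: str
    prompt: str
    sources_used: list
    model_version: str
    output: str
    output_hash: str
    created_at: str
    reviewed_by: str | None = None  # mandatory human sign-off before use
    approved: bool = False

def draft_deviation_report(dev_id: str, records: dict, call_model) -> DraftRecord:
    # Reject any data source outside the validated boundary.
    out_of_bounds = set(records) - ALLOWED_SOURCES
    if out_of_bounds:
        raise ValueError(f"Unbounded data sources: {out_of_bounds}")
    prompt = PROMPT_TEMPLATE.format(dev_id=dev_id)
    output = call_model(prompt, records)  # the model proposes; it does not decide
    return DraftRecord(
        dev_id=dev_id,
        prompt=prompt,
        sources_used=sorted(records),
        model_version=MODEL_VERSION,
        output=output,
        output_hash=hashlib.sha256(output.encode()).hexdigest(),
        created_at=datetime.now(timezone.utc).isoformat(),
    )

def approve(record: DraftRecord, reviewer: str) -> DraftRecord:
    # Nothing is releasable until a named human has signed it off.
    record.reviewed_by = reviewer
    record.approved = True
    return record
```

Every draft carries its prompt, its sources, its model version, and a hash of its output, so the record can be reconstructed later; and the approve step keeps the human, not the model, as the accountable decision-maker.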
Agentic AI and Data Selection
The more complex challenge arises when AI begins selecting data, prioritising evidence, or recommending actions.
If a system reviews validated process data and flags potential batch impact, that is manageable, provided the dataset is controlled, the model version is fixed, and outputs are traceable.
If a system dynamically decides which historical deviations are relevant to an investigation, the risk profile increases. Now the model is shaping the evidence base itself.
In GMP terms, this leads to one essential question: can you trace why the AI chose certain data and left out other data?
This depends on bounded data access, where the AI can only use information from a defined and controlled set of sources, supported by full logging and version control. When those elements are in place, the system can operate within GMP expectations.
If the answer is no, the problem is not artificial intelligence. It is a loss of data governance.
Autonomy is not the regulatory issue. Untraceable autonomy is.
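What traceable selection could look like is easy to sketch. In the hypothetical example below, every candidate record receives an audit entry with its score, its inclusion decision, and the reason, so exclusions are as reconstructable as inclusions. The names and the threshold are invented for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class EvidenceDecision:
    deviation_id: str
    relevance_score: float
    selected: bool
    reason: str
    scored_at: str

def select_evidence(candidates: dict, threshold: float = 0.75) -> list:
    """candidates maps deviation IDs to relevance scores produced by a
    fixed, version-controlled model. Every candidate, kept or dropped,
    gets an audit entry."""
    now = datetime.now(timezone.utc).isoformat()
    return [
        EvidenceDecision(
            deviation_id=dev_id,
            relevance_score=score,
            selected=score >= threshold,
            reason=f"relevance {score:.2f} against threshold {threshold}",
            scored_at=now,
        )
        for dev_id, score in sorted(candidates.items())
    ]

# Both the included and the excluded records remain answerable later.
for entry in select_evidence({"DEV-0412": 0.91, "DEV-0387": 0.42}):
    print(entry)
```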
It helps to compare this to Advanced Process Control (APC). For years, GMP environments have relied on APC systems that automatically adjust processes within validated limits. They observe conditions, calculate responses, and act without human intervention. They are fully accepted because their models, boundaries, and change controls are defined and governed.
Agentic AI follows a similar structural pattern: it observes, reasons, and acts. The difference is that APC is deterministic while agentic AI is probabilistic and adaptive.
GMP has already accepted controlled autonomy. AI now tests whether probabilistic autonomy can be governed to the same standard.
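The contrast shows up clearly in code. The sketch below is a toy proportional controller with invented limits: its output is fully determined by its input, and it can never leave the validated band. An agentic system would replace the fixed control law with a probabilistic model, which is exactly why the surrounding governance must do more of the work.

```python
VALIDATED_LOW, VALIDATED_HIGH = 68.0, 72.0  # illustrative validated band
SETPOINT, GAIN = 70.0, 0.5                  # fixed, change-controlled parameters

def apc_adjust(measured_temp: float) -> float:
    """Proportional correction, clamped so the target can never leave
    the validated band. The same input always yields the same output."""
    correction = GAIN * (SETPOINT - measured_temp)
    return min(max(measured_temp + correction, VALIDATED_LOW), VALIDATED_HIGH)

assert apc_adjust(69.0) == apc_adjust(69.0)  # deterministic by construction
```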
The Shift in Validation Thinking
You cannot validate a generative model by proving it gives the same answer every time.
You validate its intended use, the limits within which it operates, the data universe it can access, the controls around retraining or updating, the monitoring of performance and drift, and the accountability of the human who signs off.
This is not a workaround. It is classic GMP applied to a new class of tool.
The mistake organisations make is trying to validate the model itself as if it were a fixed algorithm. The correct approach is to validate the governed system around it.
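One element of that governed system, ongoing monitoring, can be sketched simply. The example below tracks a single invented metric, the human-review rejection rate, against a baseline assumed to have been established during qualification; a real programme would monitor several such indicators and route any drift signal into change control rather than auto-correcting.

```python
import statistics

BASELINE_REJECT_RATE = 0.05  # illustrative: established during qualification
DRIFT_LIMIT = 0.03           # illustrative: maximum tolerated excursion

def check_drift(recent_rejections: list) -> dict:
    """recent_rejections holds 1 where a human reviewer rejected an AI
    draft and 0 where it was accepted. Drift is flagged, not corrected:
    correction belongs to the change-control process."""
    rate = statistics.mean(recent_rejections)
    return {
        "observed_rate": rate,
        "drifted": abs(rate - BASELINE_REJECT_RATE) > DRIFT_LIMIT,
    }

print(check_drift([0, 0, 1, 0, 0, 0, 1, 0, 0, 0]))
# {'observed_rate': 0.2, 'drifted': True}
```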
In Conclusion
AI is not GMP.
But AI operating within a defined, controlled, and monitored architecture can absolutely exist inside a GMP environment.