AI-based systems think differently – can they be GxP validated?

“This development creates enormous opportunities for efficiency and improved quality, but also complex challenges regarding compliance and existing regulatory requirements,” says Industry Sales Lead Kasper Goth. He has followed the field for many years and, through his role in the Life Science industry, focuses on how AI logic can be embedded in a highly regulated environment.
AI-based systems break the logic
For decades, GxP validation of systems and processes in the Life Science industry has followed well-established standards. The foundation has been, and still is, predictability demonstrated through repeatable tests, screenshots and extensive documentation. When a system or process is validated, the expectation is clear: the same input produces the same output every time.
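Expressed in code, that expectation is simple. Below is a minimal sketch of the kind of repeatable test traditional validation relies on; the dose calculation is a hypothetical example, not taken from any specific system:

```python
# Traditional validation logic: a deterministic function can be locked down
# with a repeatable test. `calculate_dose` is a hypothetical example.

def calculate_dose(weight_kg: float, mg_per_kg: float) -> float:
    """Deterministic calculation: the same input always gives the same output."""
    return round(weight_kg * mg_per_kg, 2)

def test_calculate_dose_is_repeatable():
    # Run the same input many times; validation expects one identical result.
    results = {calculate_dose(70.0, 1.5) for _ in range(100)}
    assert results == {105.0}
```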
“AI breaks with this logic,” says Nicholai Stålung, Head of AI at Twoday. “The algorithms are more dynamic, and the output can vary depending on context, data and prompts. You could say that AI delivers answers that are less black and white: ‘the most correct’ rather than ‘the only correct’.”
AI-based systems challenge the traditional approach to system validation
Regulatory authorities such as the EMA (European Medicines Agency) and the FDA (U.S. Food and Drug Administration) continue to require the same fundamentals for GxP systems: documentation, traceability and predictability. These requirements do not change significantly just because the technology does. This creates a challenge we are only beginning to address: we must become capable of validating on AI’s terms rather than with today’s traditional logic, where an answer is either entirely right or entirely wrong.
According to Nicholai Stålung, several new aspects become important for understanding whether the system delivers the intended output:
- Which data has the model been trained on?
- Where does the system source its data for decision making?
- Will changes over time alter the logic, and thus the output?

In addition, the system’s logic itself becomes more important to understand and is a key input to the validation assessment.
Furthermore, controls must be established to address known AI risks such as bias, hallucinations and data drift. This means validation can no longer be viewed as a one-off activity at go-live but as a continuous process in which the model is monitored, reassessed and potentially revalidated on an ongoing basis. Compliance therefore becomes more closely tied to governance and operations, whereas validation today is primarily a project activity.
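What such a continuous control might look like is still taking shape. Below is a minimal sketch, assuming a single numeric input feature, of a data-drift check that compares live production data against the reference distribution captured at validation; the statistical test, threshold and alert action are illustrative assumptions, not a prescribed method:

```python
import numpy as np
from scipy.stats import ks_2samp

P_VALUE_THRESHOLD = 0.01  # assumed alert threshold; set via risk assessment

def drift_suspected(reference: np.ndarray, live: np.ndarray) -> bool:
    """Two-sample Kolmogorov-Smirnov test against the validated baseline."""
    _statistic, p_value = ks_2samp(reference, live)
    return p_value < P_VALUE_THRESHOLD

# Example: baseline captured at validation vs. a shifted live distribution.
rng = np.random.default_rng(seed=42)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)
live = rng.normal(loc=0.4, scale=1.0, size=5_000)

if drift_suspected(reference, live):
    print("Data drift suspected: trigger reassessment / revalidation workflow")
```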
A cautious view of the future
Looking ahead, AI in Life Science will move from experimentation to established practice. This will likely happen as regulatory frameworks become more specific and authorities clarify their expectations around validating AI-based systems.
An important trend may be hybrid models where AI assists with analysis, documentation and decision support while humans retain ultimate responsibility. Human-in-the-loop principles will be central to ensuring quality, compliance and trust.
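As one illustration of the principle, here is a minimal sketch of a human-in-the-loop gate in which the model only proposes an outcome and a qualified reviewer always has the final, recorded say; the names and the confidence threshold are hypothetical assumptions:

```python
from dataclasses import dataclass
from typing import Optional

CONFIDENCE_THRESHOLD = 0.90  # assumed value; set via risk assessment

@dataclass(frozen=True)
class ModelSuggestion:
    label: str         # the model's proposed outcome
    confidence: float  # the model's self-reported confidence
    rationale: str     # explanation shown to the human reviewer

def requires_human_review(suggestion: ModelSuggestion) -> bool:
    """Low-confidence output is always routed to a qualified reviewer."""
    return suggestion.confidence < CONFIDENCE_THRESHOLD

def final_decision(suggestion: ModelSuggestion,
                   reviewer_decision: Optional[str]) -> str:
    # A recorded human decision always overrides the model's suggestion.
    if reviewer_decision is not None:
        return reviewer_decision
    if requires_human_review(suggestion):
        raise RuntimeError("Human review required before this output may be used")
    return suggestion.label
```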
At the same time, governance, ethics and transparency will gain importance. Organisations will increasingly need to document how AI is used, which data is applied and how risks are managed organisationally and not only technically. Validation of AI logic therefore becomes not just an IT or quality issue but a strategic theme involving leadership, compliance, legal and the business as a whole.
“In short, AI is not just a technological upgrade. It is a new reality that requires us to rethink the entire compliance framework,” says Kasper Goth.
What do we do in the short term?
Even though the use of AI is still in its early stages, AI logic is already present in GxP environments, and we must consider how to perform GxP validation of systems with AI components. Our advice to customers preparing to validate logic that includes AI is to develop an internal guideline for handling AI elements in validation. Developing this guideline should, at a minimum, involve project teams, subject matter experts and QA. It should also address questions such as the following (see the sketch after this list):
- How do we document prompting, logic and code?
- How do we document the data on which the system has been trained?
- How do we document that AI is part of the system and which components are AI driven?
- How do we ensure additional rigour when documenting AI elements?
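To make this concrete, here is a minimal sketch of how the answers to these questions could be captured in a machine-readable record; the field names and example values are illustrative assumptions, not a standard:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass(frozen=True)
class AIComponentRecord:
    component_name: str         # which part of the system is AI driven
    model_id: str               # model name and version under change control
    prompt_template: str        # the exact prompt text, versioned like code
    training_data_ref: str      # pointer to documentation of the training data
    known_risks: List[str] = field(default_factory=list)

# Hypothetical example entry for a validation dossier.
record = AIComponentRecord(
    component_name="deviation-report-classifier",
    model_id="example-model-1.0",
    prompt_template="Classify the deviation report into one of: ...",
    training_data_ref="DOC-0042: training data description",
    known_risks=["bias", "hallucination", "data drift"],
)
```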
An entire industry and its regulators face a major cultural shift.
Kasper Goth believes that in the short term we should establish immediate guidelines to support the upcoming work of validating AI-based GxP systems. In the longer term, we will fundamentally change how we work with systems. We must develop entirely new standards and processes that can accommodate the more complex logic of future IT systems and make room for AI’s greatest strength: systems that can improve themselves over time, with new functionality arriving at a pace we have never seen before.