| Date | Event |
| --- | --- |
| 21 May 2026, 14:00-16:00 | GLP webinar |
| Q2 2026 (or Sep 2026) | GCP on-site event: Risk Management / Quality by Design |
| Oct 2026 | GCP webinar: Risk-based Audit Management |
| 4-6 Nov 2026 | RQA Global Conference |
| 20 Nov 2026 | GMP theme day 'Sustainability' |
| 4 Dec 2026 | GLP theme day |
| Jan 2027 | GCP webinar |
DARQA, the Dutch Association of Research Quality Assurance, used its Spring Event 2026 to try something more ambitious than a standard themed meeting. Instead of treating AI as a narrow technology topic, it built a full-day combined GxP programme around “AI in Practice”: a moderated opening keynote, a plenary session on work and labour-market effects, parallel healthcare case presentations, and a collaborative workshop track for GCP, GLP and GMP professionals. The day was chaired by Victor Broers, who linked the sessions, moderated Q&A, and acted as the in-room anchor for the remote keynote. The event was explicitly framed as practical, applicable and “free of hype,” aimed at DARQA members as well as QA professionals, auditors, clinical research specialists and quality managers more broadly.
The strongest thematic thread was that AI is no longer mainly a question of technical capability. Throughout the day, speakers kept returning to judgment, context, evidence quality and workflow fit. Fabrizio Maniglio argued that quality professionals are moving “from knowledge worker to wisdom worker,” because AI can process information at scale, but human beings still carry meaning, ethical reasoning and accountable judgment. Sabrina Genz then showed that the real labour-market effects of AI cannot be reduced to “jobs disappear” or “jobs survive”; the decisive level is the task, not the job title. In the afternoon, Myrthe Jager and Peter Prinsen showed what this looks like in practice: AI becomes valuable when it is narrowly purposed, validated against a real need, and embedded in a real process, whether that is intraoperative tumor classification or nationwide cancer data structuring.
The workshop track sharpened the governance side of the same message. Its central phrase — “digital sovereignty as continuity of GxP evidence” — turned a broad policy term into a concrete QA/QC question: when systems degrade, suppliers delay, formats lock you in, or records are challenged, can you still retrieve, understand and defend the evidence chain? The workshop succeeded because it translated AI readiness into evidence readiness. Participants repeatedly discovered that the weak point is often not the model, but the source record, the audit trail, the metadata, the export route, or the fallback path. That insight connected directly back to the rest of the day.
A final recurring theme was the human side of change. DARQA did not end the day with another technical talk, but with Nancy Beers’ serious-play session, which made a strong point in a different language: if organisations want meaningful human oversight, they have to preserve and practise the human capabilities that matter — communication, reflection, focus, experimentation, collaboration and the courage to question assumptions. That made the close more than entertainment. It completed the day’s argument. AI may be accelerating rapidly, but regulated practice still depends on people who can interpret, decide, connect and act responsibly together.
Fabrizio Maniglio opened the thematic programme by acknowledging the room’s uncertainty rather than dismissing it. AI, he argued, is moving faster than any technology wave before it, so confusion is understandable — but it is not too late. His starting point was that regulated industries are still at the beginning, provided they start learning now. From there he contrasted today’s extraordinary technical possibilities with the reality of paper-heavy, siloed, bureaucratic processes. His most memorable challenge was that many organisations are trying to attach AI to outdated workflows: “faster paperwork is still paperwork.” Instead of asking how to accelerate current processes, he urged the audience to ask what those processes are actually trying to achieve, and whether they would be designed in the same way from scratch today.
The keynote’s key idea was the wisdom gap. AI is becoming increasingly strong at converting data into information and knowledge, but human beings remain responsible for judgment, meaning and accountable decision-making. In Maniglio’s formulation, quality professionals are evolving “from knowledge worker to wisdom worker.” He reinforced that with two strong cautions: first, Chesterton’s Fence — do not remove a control before you understand why it exists; second, do not hide behind symbolic “human in the loop” language if the human lacks the understanding, authority or time to exercise real oversight. In Q&A, he argued that regulators are unlikely to define every acceptable AI use in time, so industry must stop waiting passively and instead build justified, safe, fit-for-purpose solutions itself. On the question of tasks humans cannot realistically re-perform, he was equally clear: assurance must then come from model design, validation and intended-use control, not from pretending that a nominal reviewer can meaningfully verify everything afterwards.
Sabrina Genz took the room from strategic reflection into labour economics, and she did so interactively. Using live polling, she asked participants first what opportunities AI offers and then what challenges it brings. The answers were revealing: opportunities clustered around efficiency, speed, focus and quality, while concerns clustered around validation, correctness, trust, safety, ethics, data security and acceptance. She used those responses as a bridge into her main theme: the real effect of AI on work is more complex than the public replacement narrative suggests. Her key line, in substance, was that we should think in tasks rather than whole jobs. Some tasks are automated, some are augmented, and the practical outcome depends heavily on which tasks move and in what context.
Genz showed that new technology can both remove work and create new work. Drawing on occupation research, she noted that only about 40% of current US occupations existed in 1940, which means many of today’s jobs are historically new. She then unpacked how AI can lower barriers to entry in some jobs, as in her taxi-driver example, while making other jobs more knowledge-intensive, as in her proofreader example where spell-checking disappears and conceptual review remains. She also presented evidence that AI can raise productivity substantially in specific settings: around 40% faster on writing tasks and about 15% faster in customer support, with especially large gains for less experienced workers. That, however, brought the room to one of the day’s sharpest open questions: if AI can substitute part of junior learning in the short term, how do organisations still develop future experts in the long term? Genz did not overstate the evidence. Her strongest conclusion was that AI should not be rolled out everywhere by default. Productivity gains appear when the tool fits the task, the environment and the user — and that remains a design, governance and societal choice, not just a technical one.
Myrthe Jager’s session showed AI at two very different points in the oncology pipeline, which is exactly what made it so effective. Her title, “From Molecules to Minutes,” captured the arc: from early-stage, data-intensive molecular discovery to near-real-time clinical decision support. In the first half, she discussed the MIRSA project on melanoma resistance to immune checkpoint inhibition. The biological promise is high, but the data are sparse, expensive and difficult. Spatial transcriptomics can preserve tissue context and reveal where cells sit and interact, but the method costs about €15,000 per slide, takes significant time, and captures only a tiny fraction of the underlying transcriptomic signal. That is where AI becomes necessary: for segmenting cells, improving signal quality and eventually identifying patterns and biomarkers in data that are too incomplete for straightforward human interpretation.
The second half of her session turned to Sturgeon, a much more mature and immediately tangible use case: ultra-fast brain tumor classification during surgery. Here the clinical problem is clear. Molecular diagnosis normally takes about a week, but surgical decisions have to be made immediately. The Sturgeon workflow combines nanopore sequencing with machine learning trained on simulated shallow sequence runs, allowing meaningful classification from extremely sparse data. Jager showed how practical this has become: a compact setup with a MinION sequencer, pipettes, heat blocks and even a gaming laptop can produce a diagnostic result within roughly 80 minutes, with the classification signal emerging earlier during the sequencing run. Most strikingly, she reported that the method is already in routine use at the Princess Máxima Center: more than 100 cases, a clear diagnosis in time in 85% of cases, and an adjusted surgical strategy in 14%. Her wider lesson was important for DARQA: useful medical AI is not generic AI, but carefully bounded, validated, translational work built on difficult data and real workflow discipline.
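To make the incremental idea concrete, the sketch below shows in plain Python how a run-time classifier might report a result as soon as confidence crosses a threshold. It is emphatically not the Sturgeon code: the class labels, the confidence threshold, the read-arrival rates and the way the signal grows are all invented for illustration.

```python
# Illustrative sketch only; NOT the actual Sturgeon implementation.
# It mimics the core idea described above: classify from sparse nanopore
# reads as they arrive, and report as soon as confidence is high enough.
import random

CLASSES = ["medulloblastoma", "ependymoma", "glioma"]  # hypothetical labels
CONFIDENCE_THRESHOLD = 0.95                            # assumed decision threshold

def classify(read_count: int) -> dict:
    """Stand-in for a model trained on simulated shallow runs.
    Here, confidence simply grows as more reads accumulate."""
    signal = min(1.0, read_count / 500)        # more reads -> stronger signal
    top = (1 + 2 * signal) / 3                 # score of the leading class
    rest = (1 - top) / (len(CLASSES) - 1)      # remainder spread over the others
    return {c: (top if c == CLASSES[0] else rest) for c in CLASSES}

reads = 0
for minute in range(1, 81):            # roughly the 80-minute window reported
    reads += random.randint(5, 15)     # simulated trickle of nanopore reads
    scores = classify(reads)
    best, conf = max(scores.items(), key=lambda kv: kv[1])
    if conf >= CONFIDENCE_THRESHOLD:
        print(f"minute {minute}: {best} at {conf:.2f} confidence ({reads} reads)")
        break
```

The point the sketch tries to capture is the one Jager made: with a model trained for sparse input, a usable answer can emerge well before the sequencing run is complete.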
Peter Prinsen shifted the focus from molecular medicine to national data infrastructure. As head of Clinical Data Science at IKNL, he showed that some of the most important AI questions in cancer care are not about flashy models at all, but about the sustainability of data collection and structuring at scale. He introduced the Netherlands Cancer Registry as a nationwide asset with more than 95% coverage, data going back to 1989, around 2.5 million patients and 2.9 million tumors, and a large manual registration effort involving about 125 FTE. Much of the relevant hospital information still arrives in unstructured text and is abstracted manually months after diagnosis. Against a background of ageing populations, rising cancer incidence, more demand for data and pressure for greater timeliness, his message was blunt: this way of working is becoming unsustainable.
Prinsen’s answer was practical rather than utopian. He outlined three broad routes for AI-supported structuring of oncology data: hospitals structure their own data; IKNL receives unstructured data and structures it locally; or IKNL brings the model to the hospital and structures data there in a federated way. What made his presentation especially relevant to DARQA was the emphasis on local control, privacy and digital sovereignty. For sensitive patient data, he argued, local models are essential — on premise, or at least on infrastructure not controlled by American cloud providers. He also described compute as a real bottleneck, with IKNL only now finalizing a two-GPU workstation for serious experimentation. His concluding line was a sober one: AI will probably not replace data managers, but it will change their work. Machines can collect the easier items; human experts then shift toward checking, resolving and interpreting the harder ones. In other words, AI here is not a replacement fantasy but an operational necessity that still depends on validation, secure deployment and well-governed data.
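As a schematic summary of those three routes, the following sketch labels them and makes the sovereignty point explicit: only the central route moves unstructured patient text off-site. The route names follow the talk; the enum, fields and helper function are invented for illustration.

```python
# Schematic sketch of the three structuring routes described above.
from enum import Enum

class Route(Enum):
    HOSPITAL_STRUCTURES = 1   # hospital runs its own structuring, shares results
    CENTRAL_STRUCTURING = 2   # raw text leaves the hospital, IKNL structures it
    FEDERATED_MODEL = 3       # the model travels to the hospital, data stays put

def data_leaves_hospital(route: Route) -> bool:
    """Only the central route moves unstructured patient text off-site,
    which is why routes 1 and 3 fit the local-control argument better."""
    return route is Route.CENTRAL_STRUCTURING
```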
The workshop track was the day’s most explicitly collaborative element, and also its most directly DARQA-shaped contribution. Instead of asking participants whether they preferred cloud or on-premise systems, it translated digital sovereignty into one operational question: can your organisation still produce fast, complete and defensible evidence when pressure hits? The workshop was designed around three cases — one each for GCP, GLP and GMP — and asked mixed groups to do what the facilitators called “data detective work”: map the evidence path, identify where the chain breaks, estimate the Mean Time To Evidence (MTTE), define a fallback route, and end with one sharp supplier or IT question and one 30-day action.
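For readers who want to try the MTTE exercise themselves, here is a minimal sketch, assuming MTTE is simply the mean elapsed time between an evidence request and a complete, defensible delivery; the workshop left the metric informal, and the sample records below are invented.

```python
# Minimal MTTE sketch: average (and worst-case) time from evidence
# request to complete delivery. Sample records are hypothetical.
from datetime import datetime
from statistics import mean

requests = [
    # (evidence item, requested, delivered complete)
    ("subject eCRF + audit trail", "2026-03-02 09:00", "2026-03-04 16:30"),
    ("raw HPLC data + parameters", "2026-03-02 09:00", "2026-03-09 11:00"),
    ("batch release audit trail",  "2026-03-02 09:00", "2026-03-03 10:15"),
]

def hours(start: str, end: str) -> float:
    """Elapsed hours between two 'YYYY-MM-DD HH:MM' timestamps."""
    fmt = "%Y-%m-%d %H:%M"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 3600

durations = [hours(req, done) for _, req, done in requests]
print(f"MTTE: {mean(durations):.1f} hours")       # average across items
print(f"Worst case: {max(durations):.1f} hours")  # the chain's weak link
```

The worst-case figure is arguably the more honest one for inspection readiness, since an evidence chain is only as fast as its slowest link.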
What actually happened in the room was even more interesting. The opening became more dialogic than planned, because participants immediately connected the topic to AI, data quality, lineage, duplicated documents and the weakness of static exports. That turned out to be productive. The workshop’s central lesson became clearer precisely because the room insisted on the AI link: before trustworthy automation can exist, the underlying evidence path has to be visible and defensible. The three domain groups then surfaced different versions of the same problem. In GCP, the issue was evidential completeness across multiple SaaS systems and whether one subject file can be reconstructed on time. In GLP, the issue was whether archived PDFs are enough when the real challenge is still the original electronic raw data, processing parameters and legacy readability. In GMP, the issue was whether QA can still make a defensible release-or-hold decision when systems degrade and raw audit trails are incomplete. The workshop’s strongest line was also its simplest: AI readiness begins with evidence readiness.
Just before the closing session, DARQA briefly turned the lens on itself. Based on a member survey with 73 responses — about 26% of the membership — the association showed that it has a strong, experienced base, but also a clear renewal challenge: 92% of respondents had worked in the industry for more than 15 years. The board presented four priorities: stronger cooperation with other societies, renewal of activities, more active member involvement, and better communication. To support that last part, DARQA announced a new committee, Connect and Grow, and issued an open call for members to help strengthen visibility, communication and member connection.
Nancy Beers closed the day by changing the mode without losing the seriousness. Her opening line was playful — she called herself the “infotainment” part of the programme — but her argument was practical: play in organisations is not a luxury and not “just fun.” It is a method for learning, reflection, innovation, collaboration and behavioural insight. Drawing on her background in game-based learning, IT and facilitation, she argued that innovative people are usually playful people: astronauts, hackers and builders all experiment, test boundaries and ask what else a system can do. She also made clear that play is not valuable because it is easy. One of her sharpest lines was that play is often “not about fun… it’s about frustration,” because frustration reveals habits, pressure points and team dynamics.
Her games were chosen carefully. The Start/Stop exercise reset the room physically and cognitively after a long day. The Living Organization game turned organisational growth, hierarchy and communication loss into something visible, as participants discovered how quickly direction becomes distorted through layers and how rarely people think to communicate explicitly. The Black Stories puzzle exposed bias and assumption-making in a format that felt very close to investigation logic. Finally, Brainshock dramatized multitasking and overload by forcing participants to combine movement, maths and personal questions at once. The group response was strong throughout: laughter, recognition, useful discomfort and thoughtful debriefing. That last part mattered most, because Beers’ key closing line was that “a serious game is a serious game when you do a proper debrief. Because otherwise it was just fun.” In the context of an AI-focused GxP day, her session made a fitting final point: if organisations want better judgment and better collaboration, they must deliberately cultivate the human capabilities that machines do not automatically strengthen for them.
Taken together, the DARQA Spring Event 2026 delivered what it set out to do: a practical, cross-GxP exploration of AI that avoided both hype and simplistic rejection. For attendees, it offered validation, challenge and useful language. For non-attendees, the main message is clear enough to carry forward: in regulated environments, the real question is no longer whether AI exists, but how to make its use understandable, defensible and genuinely useful in practice.
DARQA’s purpose is to provide its circa 300 members with a platform for exchanging current knowledge, new developments and regulatory interpretations, in order to facilitate continuous improvement.
We provide status and visibility for individuals concerned with quality within the life sciences industry (R&D and manufacturing), healthcare and food/feed sectors.
DARQA is dedicated to involving GxP and Healthcare Inspectors in its activities, striving for mutual understanding and stimulating a converging process that supports our motto: ‘Collaboration upfront rather than confrontation during an inspection’.
DARQA does this by:
- Organising events, workshops and/or network meetings where members can meet and share knowledge and information;
- Providing information through this website and the publication of newsletters;
- Assessing and commenting on new legislation and regulations;
- Maintaining external contacts with inspectorates and associations;
- Participating in (inter)national forums and symposiums;
- Promoting the expertise of DARQA’s members in the field of Quality Assurance and the recognition of DARQA as a centre of excellence;
- Maintaining and encouraging contact with governmental bodies and other organisations that have similar fields of interest to DARQA.