Ingo Baumann is a partner at Thescon. He holds a diploma in chemical engineering from the European University of Applied Sciences Fresenius. Before joining Thescon, Ingo worked at software and consulting companies (IBS, now Siemens Industry Software, and IDS Scheer). With almost 20 years of experience in the regulated industries, Ingo Baumann is an expert in Computerised Systems Validation (CSV), GMP (Good Manufacturing Practice), GDP (Good Distribution Practice), GLP (Good Laboratory Practice), and SOX implementations. Applying regulatory requirements to recent trends (e.g. applications based on artificial intelligence, big data and agile software development) is a current focus of his work.
Thescon is a recognised consulting partner for the pharmaceutical industry, wholesalers, medical devices, the chemical industry, biotechnology and food packaging, safety and distribution logistics. Its areas of proven expertise cover regulated-industry processes, compliance and quality management, IT system selection, and project management.
Self-Driving Cars Take Ethical Decisions – Let’s Talk About AI in Healthcare
Artificial intelligence (AI) is a technology that promises a wide range of disruptive possibilities for helping the life science industries face current and future challenges. Areas that may benefit substantially from AI-based applications include rapidly accelerated drug development (e.g. structure-based drug design), improved automated detection of diseases, analysis of digital biomarkers, and patient-specific treatments via precision medicine and advanced diagnostics. Early adopters tend to focus on the opportunities and proclaim potential benefits. Even when these early adopters are aware of potential flaws, they often neglect in-depth risk assessments. In recent months, however, sceptical observations and concerns have been raised more frequently.
Turning these dazzling possibilities into profits is not an easy task for the life science industry, since strict regulations must be followed. Conflicts inevitably arise, with technology-driven R&D and IT departments on one side and quality and risk management units on the other. ‘Traditional’ qualification, verification and validation methodologies do not match highly dynamic, agile development and machine learning. AI applications inherently require a certain degree of risk acceptance, so flexibility must be built into such methodologies. This kind of flexibility challenges the general concept of safe products and completely predictable processes.
This presentation will point out the positions that lead to these intra-organisational conflicts and will refer to analogous situations in other, faster-paced industries. It will also outline the basics of AI/machine learning technologies as well as potential risk control scenarios.