Date: June 5, 2025
Location: De Doelen, Rotterdam, The Netherlands
Spoken language: Dutch
At the Data Driven Healthcare event, you’ll hear all about the latest developments in the field of Healthcare & AI. The essential question of how to approach this responsibly will also be addressed.
Which challenges in healthcare can we solve with data and AI — and which ones can’t we? How do you approach this? And what are the dilemmas healthcare institutions need to take into account?
These topics will be explored in a panel discussion featuring, among others, INDICATE Co-Lead Michel van Genderen, bringing different perspectives and interests to the conversation.
Date: September 11–12, 2025
Location: Hilton Rotterdam, The Netherlands
Artificial Intelligence (AI) continues to impact society in profound ways. Health care will inevitably face significant changes through the introduction of AI systems intended to support diagnosis, treatment decision-making, hospital management, medical research and development, nursing care, and the broader health infrastructure (home care, insurance, etc.).
The conference ‘Responsible AI in Health Care’ will map how the introduction of AI in medicine is unfolding, which aspects of health care systems will be affected most, and what that impact will look like. First and foremost, it intends to discuss in depth the question of how to shape this transition in a responsible way.
The INDICATE Data Protection Workgroup has been established.
Its responsibilities and tasks include:
Data security and privacy – Oversee the design and implementation of the project’s data management processes, ensuring strong data protection and privacy protocols.
Regulatory compliance – Advise on compliance with the General Data Protection Regulation (GDPR) and other relevant national or European data protection laws.
Risk assessment – Identify potential data security risks, propose mitigation strategies, and support the development of safe data-sharing infrastructures across European ICUs.
Within the INDICATE consortium, we have established the Communication Network, a dedicated group that meets every three months for one hour to discuss and coordinate our communication efforts.
During these meetings, we cover:
Content planning and dissemination strategies
Updates on Work Package (WP) outcomes
How to engage with journalists and the media on our topics
Key themes from our communication strategy
If you’re involved in INDICATE and would like to contribute to the Communication Network, you can sign up by emailing maaike@indicate-europe.eu.
We will be scheduling the upcoming meetings for Q2, Q3, and Q4 soon. Stay tuned for updates!
With the support of the AI Ethics Lab at Erasmus MC – co-founded by internist-intensivist Michel van Genderen, Project Coordinator of INDICATE – the TU Delft Digital Ethics Centre has been accredited by the World Health Organization (WHO). From now on, the centre will advise the WHO on ethical aspects and regulations regarding AI in healthcare. Below you will find an interview with Michel van Genderen in Skipr, a Dutch national magazine on healthcare innovation.
TU Delft and Erasmus MC will advise WHO on responsible AI in healthcare
TU Delft’s Digital Ethics Centre will advise the World Health Organization (WHO) on ethical and legal aspects of AI in healthcare. The research centre, which works closely with Erasmus MC, has been accredited as an official cooperation partner.
Artificial intelligence (AI) has great potential for healthcare, but integration and implementation are lagging. Applications do not find their way to the workplace or suffer from bias. As a consultant on Ethics and Governance of AI in Healthcare, TU Delft will help the World Health Organization uphold ethical principles and healthcare standards and values, among other things by translating guidelines into practice. ‘Together with WHO, we have already drafted frameworks for the responsible use of AI and Generative AI in healthcare,’ says Stefan Buijsman, associate professor of Responsible AI at TU Delft. ‘Now they are approaching us to start making this concrete. How will it work in practice?’
[…]
Michel van Genderen: ‘AI can only improve healthcare if we have a good ethical foundation.’
“AI has great potential to transform healthcare, but that can only happen if what we do is done right,” said Michel van Genderen.
Quality of life
Healthcare systems worldwide are under pressure, and implementing AI in healthcare is one of the most frequently cited solutions. However, applying AI in healthcare is not without challenges. In fact, research shows that only two percent of all AI innovations are actually adopted in practice. Many innovations do not function well in real-world settings or fail to gain acceptance from healthcare professionals. Additionally, AI introduces various ethical dilemmas, such as the choice between initiating life-extending treatment for a patient and focusing on quality of life.
In practice
To successfully implement AI in healthcare, it is crucial to establish clear ethical standards. International guidelines have been developed, but they must still be translated into practical application. This is where the TU Delft Digital Ethics Centre will play a key role. The centre collaborates with Erasmus MC in the AI Ethics Lab (REAiHL).
Safe and responsible use
Van Genderen co-founded the AI Ethics Lab in 2023 together with intensivist and ICU head Diederik Gommers, Associate Professor Stefan Buijsman from TU Delft, and Distinguished Professor Jeroen van den Hoven from TU Delft. The lab brings together nurses, doctors, data scientists, data engineers, researchers, and ethicists to ensure that AI is deployed safely and responsibly in patient care.
The foundation
Van Genderen explains: “I am convinced that AI will transform the way we work in healthcare. But AI can only improve healthcare if we build on a strong ethical foundation. The foundation is key—what we do must be right. We must take the lead and define the standards in this emerging field.”
“AI is already helping us determine when a patient can be safely discharged after an oncological surgery.”
Ethical norms are crucial for the responsible use of AI in healthcare.
Earlier discharge
“The WHO accreditation confirms that this collaboration is unique and improves healthcare for everyone. We are already seeing this in an ongoing project at Erasmus MC, where AI helps us determine when a patient can be safely discharged after a major oncological surgery. On average, these patients are able to go home four days earlier.”
Potential for the future
This success lays the groundwork for further innovations. Ideas developed at TU Delft can be tested directly in a hospital environment, opening new possibilities for real-world applications. Stefan Buijsman, co-founder of the TU Delft Digital Ethics Centre, is enthusiastic about the potential: “It is essential to assess whether our ideas work in a hospital setting. We develop ethical guidelines and technological solutions, and Erasmus MC provides the environment to test their practical effectiveness and identify unmet needs.”
“By joining forces, we ensure that AI can actually be used in the workplace—because it is truly needed.”
A necessary step
Gommers also shares his enthusiasm: “This accreditation confirms that the collaboration between TU Delft, Erasmus MC, and Erasmus University is yielding significant results within Convergence. By combining our strengths, we ensure that AI can be effectively implemented in clinical practice—because it is truly needed. However, this must always be done safely and responsibly.”
The Responsible and Ethical AI in Healthcare Lab (REAiHL) is a partnership between Erasmus MC, TU Delft, and software company SAS. The Erasmus MC DataHub serves as the lab’s base, where scientists, physicians, and data scientists collaborate to develop guidelines for ethical and relevant AI implementation in healthcare. Erasmus MC consolidates all its AI expertise within the AI Accelerator, and REAiHL is part of the Convergence Centre for Responsible AI in Healthcare.
The WHO is a specialised agency of the United Nations (UN) dedicated to global health.