PhD Candidate

Contract: 48 months
Entry level: MSc degree in computer science, biomedical engineering, applied physics, etc.
Deadline: Feb 28, 2025
Application: Official vacancy link at TU/e


Abstract

Recent years have witnessed significant advances in deep learning and its widespread application to medical image analysis. However, the integration of these AI-based tools into clinical workflows remains limited. A major obstacle to clinical adoption is the lack of explainability in current models: clinicians are often reluctant to rely on algorithms with opaque decision-making processes. This underscores the pressing need for advanced machine learning techniques that not only deliver high performance but also provide interpretable insights into how conclusions are drawn from medical images. Addressing this need is essential for building clinical trust, facilitating the acceptance of AI-driven decision-support systems, and ultimately improving the efficiency and quality of patient care.

This PhD project aims to foster clinical trust by advancing explainable AI for image-based diagnosis and treatment decision support in neurovascular diseases, with a focus on stroke and intracranial aneurysms. You will develop and validate innovative methodologies that improve informed decision-making in medical image analysis, including detecting subtle changes, highlighting abnormalities in medical images, and quantifying feature contributions and associated uncertainties.
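To give a flavour of the kind of technique involved, the sketch below shows gradient-based saliency in PyTorch, one simple way to highlight which pixels contribute most to a prediction. The toy model and synthetic input are placeholders for illustration only and do not represent the project's actual models or data.

```python
import torch
import torch.nn as nn

# Hypothetical toy CNN standing in for a medical-image classifier.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),  # e.g. abnormality present vs. absent
)
model.eval()

# Synthetic single-channel "scan" (batch x channel x H x W), illustration only.
image = torch.randn(1, 1, 64, 64, requires_grad=True)

# Forward pass, then gradient of the predicted-class score w.r.t. the input.
logits = model(image)
predicted_class = logits.argmax(dim=1).item()
logits[0, predicted_class].backward()

# Vanilla gradient saliency: absolute input gradient, one value per pixel.
saliency = image.grad.abs().squeeze()
print(saliency.shape)  # torch.Size([64, 64])
```

More advanced attribution and uncertainty-quantification methods build on the same idea of relating a model's output back to its input features.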

Keywords

deep learning, XAI, trustworthy AI, Python, image processing, medical computer vision, neurovascular diseases, etc.

Requirements

  • Good knowledge of machine/deep learning and (medical) image processing.
  • Hands-on experience with Python programming (e.g., scikit-learn, PyTorch) and the development of novel machine/deep learning models.