Predicting depression risk in cancer patients with multimodal data
Thursday 25 May 2023, 11:30–11:35, G3
Speaker: Anne De Hond
Track: MIE: Natural Language Processing
When patients with cancer develop depression, it often goes untreated. We developed a model to predict depression risk within the first month after the start of cancer treatment, using machine learning and Natural Language Processing (NLP). The LASSO logistic regression model based on structured data performed well, whereas the NLP model based solely on clinician notes performed poorly. After further validation, prediction models for depression risk could enable earlier identification and treatment of vulnerable patients, ultimately improving cancer care and treatment adherence.
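As an illustration of the modelling approach named in the abstract, below is a minimal sketch of an L1-penalised (LASSO) logistic regression on structured data, written in Python with scikit-learn. The file name, column names, and hyperparameters are hypothetical; the authors' actual feature set, tuning, and validation procedure are not described in this listing.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical structured EHR features with a binary outcome column
# indicating depression within one month of starting cancer treatment.
df = pd.read_csv("structured_ehr_features.csv")
X = df.drop(columns=["depression_within_1m"])
y = df["depression_within_1m"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# The L1 penalty shrinks uninformative coefficients to exactly zero
# (the LASSO); the liblinear solver supports L1-regularised logistic
# regression. Features are standardised so the penalty treats them
# comparably.
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
)
model.fit(X_train, y_train)

risk = model.predict_proba(X_test)[:, 1]
print(f"Test AUROC: {roc_auc_score(y_test, risk):.3f}")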
Language
English
Seminar type
On-site only
Lecture purpose
Inspiration
Knowledge level
In-depth
Target audience
Technicians/IT/Developers
Researchers (including students)
Students
Care staff
Healthcare staff
Keywords
Innovative/research
Conference
MIE
Authors
Anne de Hond, Marieke van Buchem, Claudio Fanconi, Ilse Kant, Ewout W Steyerberg, Tina Hernandez-Boussard
Speaker
Anne De Hond
PhD Candidate
Leiden University Medical Center
Anne obtained her master's degree in Econometrics and Management Science from Erasmus University Rotterdam. Her econometrics studies piqued her interest in data modelling for the healthcare sector. Anne started her PhD research in 2018 at the Erasmus School of Health Policy and Management, where she studied adaptation to disability and quality-of-life assessment. After a year and a half, her research interests pivoted towards artificial intelligence for clinical prediction algorithms. She continued her PhD research in 2019 at the Leiden University Medical Center under the supervision of Prof. Ewout Steyerberg and Dr. Ilse Kant. During her PhD, she collaborated with Prof. Tina Hernandez-Boussard at Stanford University, where she studied multimodal prediction models and algorithmic fairness. Anne's research interests are fairness of AI models, validation of AI for healthcare practice, and explainable AI.