
Can Shared Models Keep Secrets Safe?

Tuesday May 20, 2025, 13:30–14:00, F1

Lecturers: Fazeleh Hoseini, Johan Östman

Track: Data / Information

Collaboration has been identified by the AI Commission as a key enabler for the successful adoption of AI in Sweden. However, collaborating on tasks involving sensitive data, such as data subject to GDPR, presents significant challenges. A critical issue is the difficulty in assessing the risk of revealing sensitive information when sharing trained machine learning models or synthetic data.

To address this, we are developing LeakPro, an open-source tool designed to evaluate the risk of sensitive data leakage when sharing models or synthetic data. LeakPro supports various healthcare-relevant scenarios, including API-based model sharing, federated learning, and synthetic data generation. It handles multiple data modalities, such as images, text, tabular data, and graphs.

In this talk, we will introduce LeakPro and demonstrate its capabilities through examples from healthcare applications.
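As a concrete illustration of the kind of risk such a tool quantifies, the sketch below runs a simple loss-threshold membership inference attack: it checks whether a trained model's per-record loss reveals which records were in its training set. This is a minimal, self-contained example and does not use LeakPro's actual API; the dataset, model, and attack choice are assumptions made purely for illustration.

# Illustrative only: a minimal loss-threshold membership inference audit.
# It does NOT use LeakPro's API; all choices below are made up for this sketch.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Train a model on "member" records; hold out "non-member" records.
X, y = load_breast_cancer(return_X_y=True)
X_mem, X_non, y_mem, y_non = train_test_split(X, y, test_size=0.5, random_state=0)
model = LogisticRegression(max_iter=5000).fit(X_mem, y_mem)

def per_record_loss(model, X, y):
    # Cross-entropy loss of the model on each individual record.
    p = np.clip(model.predict_proba(X)[np.arange(len(y)), y], 1e-12, 1.0)
    return -np.log(p)

# Attack signal: members tend to have lower loss than non-members.
loss_mem = per_record_loss(model, X_mem, y_mem)
loss_non = per_record_loss(model, X_non, y_non)

# Score the attack as a binary classifier (member vs. non-member).
scores = np.concatenate([-loss_mem, -loss_non])  # higher score = "member"
labels = np.concatenate([np.ones(len(loss_mem)), np.zeros(len(loss_non))])
print(f"Membership inference AUC: {roc_auc_score(labels, scores):.3f}")

An AUC near 0.5 means the attack cannot tell training records from held-out records, while values approaching 1.0 signal substantial leakage. Auditing tools of this kind automate and strengthen such tests across the sharing scenarios and data modalities described above.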

Language

English

Topic

Data and Information

Seminar type

Live + On site

Lecture type

Presentation

Objective of lecture

Inspiration

Level of knowledge

Intermediate

Target audience

Management/decision makers
Politicians
Organizational development
Technicians/IT/Developers
Researchers
Students

Keywords

Benefits/effects
Innovation/research
Test/validation
Information security

Lecturers


Fazeleh Hoseini, Lecturer

Research Engineer
AI Sweden

Machine learning research engineer


Johan Östman, Lecturer

AI Sweden