Artificial intelligence (AI) has become a key topic over the last few years, with much discussion of how it can assist human beings in safety-related industries such as the automotive and medical device sectors. However, there are many challenges in using AI in safety-related applications, none more important than assessing the level of risk AI can introduce.
In this presentation we look at AI used in the medical device domain and how companies should approach the tricky topic of risk management. One of the key questions in minimising risk is: do we actually need AI at all? We look at the classic usability technique of function analysis as a means of determining whether a given function is better suited to a human being or a device. Assuming AI is the better approach, we then describe best practice for requirements management, which is absolutely key whether or not AI is used. AI can be locked or adaptive. Locked AI is the easier of the two, as it behaves deterministically for a known input dataset; adaptive AI has many more challenges associated with it.
Thirdly, we consider risk management techniques to reduce the potential causes of harm associated with AI, particularly bias in models and bias in the datasets used to train them.
At the end of the day, AI adds real value to an industry such as the medical device sector, but its risks must be managed properly if we are to harness the true benefits of this approach.
Digital Assistive Technology
Apps, Patient safety, Information security, Usability