Creating interpretable models

It is desirable to construct prediction models which are both accurate and interpretable, to ensure that clinicians understand the basis for the predictions and recommendations of decision-support systems.

One way to increase the interpretability of the complex models produced by modern machine learning algorithms (e.g. deep learning, ensembles) is to identify which predictors/features are ‘important’ to the model’s predictions and to quantify this importance.
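As a concrete illustration of this first route, the sketch below computes permutation importance with scikit-learn: each feature is shuffled in turn and the drop in held-out accuracy is taken as that feature's importance. The synthetic dataset and the gradient-boosting model are placeholder assumptions, not a Volv pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; in practice this would be the clinical dataset.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A complex, hard-to-interpret model.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record the drop in held-out accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```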

Alternatively, one could trade off model performance for interpretability, adopting a less accurate but easy-to-understand model structure (e.g. linear regression or a decision tree).
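The second route looks like the sketch below: fit a deliberately small model, such as a depth-limited decision tree, directly to the data and read off its rules. Again, the data and the depth limit are illustrative assumptions only.

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in data.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# A shallow tree is easy to read, but usually less accurate than a complex model.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree))  # human-readable if/then rules
```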

Unfortunately, neither of these options is very useful in medical domains:

  • standard feature importance assessment methods are not appropriate for many medical informatics problems, such as modelling and analysing electronic health records (EHRs);
  • implementing sub-optimal prediction models in high-consequence medical settings is hard to justify.

Volv learns an interpretable model from the predictions of a good/robust model (which is proprietary to Volv), and then we assess the predictive importance of the features of this new, interpretable model. This delivers a model that can be utilised by a clinician, as it is developed in their language and terminology, and it retains the quantifiable predictive performance of the original model. Importantly, we, as humans, can learn genuinely new things from these models.
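The sketch below shows the general shape of this idea (a surrogate, or 'mimic', model), not Volv's proprietary method: a complex 'teacher' model is trained on the data, and a small, readable model is then trained on the teacher's predictions so that it retains as much of the teacher's behaviour as possible. The models, depth limit and data are placeholder assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in data.
X, y = make_classification(n_samples=2000, n_features=12, random_state=0)

# Complex "teacher" model (stands in for the proprietary model).
teacher = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
teacher_preds = teacher.predict(X)

# Interpretable surrogate trained to mimic the teacher's predictions.
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, teacher_preds)

# Fidelity: how closely the readable model reproduces the teacher's behaviour.
print("fidelity to teacher:", accuracy_score(teacher_preds, surrogate.predict(X)))
print(export_text(surrogate))
```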

One of the truly interesting things about this process is that there can sometimes be more than one interpretable model generated, and these may in fact mirror the different clinical settings in which patients find themselves. This matters because it makes the interpretable models more clinically relevant. Contact us to find out more.