Interpretability
The following notebooks demonstrate how to measure the interpretability of a model, how to test whether interpretability improves as Nh (the number of human annotations) increases, and how to plot the distribution of regression coefficients.
Interpretability Tests - This notebook demonstrates how to run the interpretability tests on human and enhanced data to determine whether the enhanced data adds value by augmenting the human data.
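The core comparison in that notebook can be summarized with a short sketch: fit the same regression on the human-annotated sample alone and on the enhanced sample (human plus machine annotations), then compare the coefficient estimates and their standard errors. The file paths and the "outcome"/"label" columns below are illustrative placeholders, not the notebook's actual schema.

```python
# A minimal sketch, assuming a regression of an outcome on an annotated label.
# File names and column names ("outcome", "label") are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

def fit_ols(df: pd.DataFrame):
    """Fit the regression of interest; returns a statsmodels results object."""
    return smf.ols("outcome ~ label", data=df).fit()

human_df = pd.read_csv("human_annotations.csv")        # Nh human-annotated rows
enhanced_df = pd.read_csv("enhanced_annotations.csv")  # Nh + Nm rows

human_fit = fit_ols(human_df)
enhanced_fit = fit_ols(enhanced_df)

# The enhanced data adds value if the coefficient stays close to the
# human-only estimate while its standard error shrinks.
print(f"human-only: beta={human_fit.params['label']:.3f}, se={human_fit.bse['label']:.3f}")
print(f"enhanced:   beta={enhanced_fit.params['label']:.3f}, se={enhanced_fit.bse['label']:.3f}")
```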
Interpretability with increasing Nh - This notebook demonstrates how the interpretability of ML-assisted enhanced data improves as Nh increases. It examines the effect of increasing Nh while holding N = Nh + Nm fixed; intuitively, this can be thought of as adding human annotations to some of the existing interviews that are currently machine annotated.
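One way to picture the fixed-N experiment is the sketch below: hold N = Nh + Nm constant and sweep Nh upward by swapping machine labels for human labels on a random subset of documents. It assumes a data frame with both a human label ("label_h") and a machine label ("label_m") for every document; those column names are stand-ins for whatever the notebook actually uses.

```python
# A hedged sketch of the fixed-N sweep. Column names ("outcome", "label_h",
# "label_m") and the CSV path are assumptions for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

def se_at_nh(df: pd.DataFrame, n_h: int) -> float:
    """Standard error of the coefficient when n_h documents carry human
    labels and the remaining N - n_h documents keep their machine labels."""
    human_idx = rng.choice(df.index, size=n_h, replace=False)
    mixed = df.copy()
    mixed["label"] = mixed["label_m"]
    mixed.loc[human_idx, "label"] = mixed.loc[human_idx, "label_h"]
    return smf.ols("outcome ~ label", data=mixed).fit().bse["label"]

df = pd.read_csv("annotations.csv")  # one row per document, N rows total
for n_h in (100, 250, 500, 1000):
    print(n_h, se_at_nh(df, n_h))
```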
Distribution of Regression Coefficients - This notebook demonstrates how to run the interpretability tests on a model and plot the distribution of regression coefficients. The sizes of both the human-annotated (Nh) and machine-annotated (Nm) samples are varied to evaluate how many documents should be annotated by humans to achieve a given level of interpretability.
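The distribution plot can be approximated along these lines: for each (Nh, Nm) pair, repeatedly subsample, refit the regression, and collect the coefficient, then histogram the draws. As above, the column names and the sample-size grid are hypothetical.

```python
# Illustrative sketch: sampling distribution of the regression coefficient
# for several (Nh, Nm) splits. Column names and sizes are assumptions.
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

def coef_draws(df: pd.DataFrame, n_h: int, n_m: int, n_reps: int = 200) -> np.ndarray:
    """Refit the regression n_reps times on fresh (Nh, Nm) subsamples."""
    coefs = []
    for _ in range(n_reps):
        h = df.sample(n=n_h, random_state=rng).assign(label=lambda d: d["label_h"])
        m = df.sample(n=n_m, random_state=rng).assign(label=lambda d: d["label_m"])
        fit = smf.ols("outcome ~ label", data=pd.concat([h, m])).fit()
        coefs.append(fit.params["label"])
    return np.asarray(coefs)

df = pd.read_csv("annotations.csv")
for n_h, n_m in [(100, 900), (500, 500), (900, 100)]:
    plt.hist(coef_draws(df, n_h, n_m), bins=30, alpha=0.5, label=f"Nh={n_h}, Nm={n_m}")
plt.xlabel("coefficient on label")
plt.legend()
plt.show()
```

Wider histograms at small Nh indicate a less stable (and so less interpretable) coefficient, which is how the notebook's plots guide the choice of how many documents to annotate by hand.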