
Probabilistic Confusion Matrices

prob_conf_mat is a Python package for performing statistical inference with confusion matrices. It quantifies the amount of uncertainty present, aggregates semantically related experiments into experiment groups, and compares experiments against each other for significance.

Installation#

Installation from PyPI can be done using pip:

pip install prob_conf_mat

Or, if you're using uv, simply run:

uv add prob_conf_mat
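
To quickly check that the installation worked, the package's main entry point can be imported (a minimal sanity check, nothing more):

from prob_conf_mat import Study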

The project currently depends on the following packages:

Dependency tree
prob_conf_mat
├── jaxtyping v0.3.2
├── matplotlib v3.10.3
├── numpy v2.3.0
├── scipy v1.15.3
├── seaborn v0.13.2
│   └── pandas v2.3.0
└── tabulate v0.9.0

Development Environment#

This project was developed using uv. To install the development environment, first clone this GitHub repo:

git clone https://github.com/ioverho/prob_conf_mat.git

And then run the uv sync --dev command:

uv sync --dev

The development dependencies should automatically install into the .venv folder.

Documentation#

For more information about the package, motivation, how-to guides and implementation, please see the documentation website. We try to use Daniele Procida's structure for Python documentation.

The documentation is broadly divided into 4 sections:

  1. Getting Started: a collection of small tutorials to help new users get started
  2. How To: more expansive guides on how to achieve specific things
  3. Reference: in-depth information about how to interface with the library
  4. Explanation: explanations about why things are the way they are
|             | Learning        | Coding        |
|-------------|-----------------|---------------|
| Practical   | Getting Started | How-To Guides |
| Theoretical | Explanation     | Reference     |

Quick Start#

In-depth tutorials taking you through all the basic steps are available on the documentation site. For the impatient, here's a standard use case.

First define a study, and set some sensible hyperparameters for the simulated confusion matrices.

from prob_conf_mat import Study

study = Study(
    seed=0,
    num_samples=10000,
    ci_probability=0.95,
)

Then add an experiment and its confusion matrix to the study:

study.add_experiment(
  experiment_name="model_1/fold_0",
  confusion_matrix=[
    [13, 0, 0],
    [0, 10, 6],
    [0,  0, 9],
  ],
  confusion_prior=0,
  prevalence_prior=1,
)

Finally, add some metrics to the study:

study.add_metric("acc")

We are now ready to start generating summary statistics about this experiment. For example:

study.report_metric_summaries(
  metric="acc",
  table_fmt="github"
)
| Group   | Experiment | Observed | Median | Mode   | 95.0% HDI        | MU     | Skew    | Kurt   |
|---------|------------|----------|--------|--------|------------------|--------|---------|--------|
| model_1 | fold_0     | 0.8421   | 0.8499 | 0.8673 | [0.7307, 0.9464] | 0.2157 | -0.5627 | 0.2720 |

So while this experiment achieves an observed accuracy of 84.21%, a more reasonable estimate (given the size of the test set) would be the median of 84.99%. There is a 95% probability that the true accuracy lies between 73.07% and 94.64%.
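
The observed value is simply the point estimate computed from the confusion matrix itself. As a sanity check, it can be reproduced with plain numpy (an illustrative snippet, not part of the library's API):

import numpy as np

confusion_matrix = np.array([
    [13, 0, 0],
    [0, 10, 6],
    [0, 0, 9],
])

# Observed accuracy: correct predictions (the diagonal) over all predictions
observed_acc = np.trace(confusion_matrix) / confusion_matrix.sum()
print(observed_acc)  # 0.8421...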

Visually that looks something like:

fig = study.plot_metric_summaries(metric="acc")

Metric distribution
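
Since matplotlib and seaborn are listed as dependencies, the returned figure can presumably be saved like any other matplotlib figure (the filename here is arbitrary, and the exact return type is an assumption):

fig.savefig("metric_summaries.png", dpi=300, bbox_inches="tight")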

Now let's add a confusion matrix for the same model, but estimated using a different fold. We want to know what the average performance is for that model across the different folds:

study.add_experiment(
  experiment_name="model_1/fold_1",
  confusion_matrix=[
      [12, 1, 0],
      [1, 8, 7],
      [0, 2, 7],
  ],
  confusion_prior=0,
  prevalence_prior=1,
)

We can equip each metric with an inter-experiment aggregation method, and then request summary statistics about the aggregate performance of the experiments in group 'model_1':

study.add_metric(
    metric="acc",
    aggregation="beta",
)

fig = study.plot_forest_plot(metric="acc")

Forest plot

Note that the estimated aggregate accuracy has much less uncertainty (a smaller HDI/MU).
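
To inspect the updated numbers in tabular form rather than as a plot, the earlier report call can simply be repeated (whether the aggregate appears as its own row depends on the library's reporting behaviour, and is an assumption here):

study.report_metric_summaries(
    metric="acc",
    table_fmt="github",
)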

These experiments seem pretty different. But is the difference significant? Let's assume that, for this example, a difference needs to be at least 0.05 to be considered significant. In that case, we can quickly request the probability that the difference is in fact significant:

fig = study.plot_pairwise_comparison(
    metric="acc",
    experiment_a="model_1/fold_0",
    experiment_b="model_1/fold_1",
    min_sig_diff=0.05,
)

Comparison plot

There's about an 82% probability that the difference is in fact significant. While likely, there isn't quite enough data to be sure.
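
Conceptually, this probability is just the fraction of posterior samples in which one experiment beats the other by more than the minimal significant difference. A rough numpy illustration of that idea (not the library's implementation; the sample arrays below are placeholders, and whether the comparison is one- or two-sided is glossed over):

import numpy as np

rng = np.random.default_rng(0)

# Placeholder posterior accuracy samples; in practice these come from the
# study's simulated confusion matrices for each experiment.
acc_fold_0 = rng.beta(33, 7, size=10_000)
acc_fold_1 = rng.beta(28, 12, size=10_000)

# Fraction of samples where the absolute difference exceeds the threshold
p_sig = np.mean(np.abs(acc_fold_0 - acc_fold_1) > 0.05)
print(p_sig)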

Credits#

The following packages and libraries served as inspiration for aspects of this project: arviz, bayestestR, BERTopic, jaxtyping, mici, python-ci, statsmodels.

A lot of the approaches and methods used in this project come from published works. Some especially important works include:

  1. Goutte, C., & Gaussier, E. (2005). A probabilistic interpretation of precision, recall and F-score, with implication for evaluation. In European conference on information retrieval (pp. 345-359). Berlin, Heidelberg: Springer Berlin Heidelberg.
  2. Tötsch, N., & Hoffmann, D. (2021). Classifier uncertainty: evidence, potential impact, and probabilistic treatment. PeerJ Computer Science, 7, e398.
  3. Kruschke, J. K. (2013). Bayesian estimation supersedes the t test. Journal of Experimental Psychology: General, 142(2), 573.
  4. Makowski, D., Ben-Shachar, M. S., Chen, S. A., & Lüdecke, D. (2019). Indices of effect existence and significance in the Bayesian framework. Frontiers in psychology, 10, 2767.
  5. Hill, T. (2011). Conflations of probability distributions. Transactions of the American Mathematical Society, 363(6), 3351-3372.
  6. Chandler, J., Cumpston, M., Li, T., Page, M. J., & Welch, V. J. H. W. (2019). Cochrane handbook for systematic reviews of interventions. Hoboken: Wiley, 4.

Citation#

@software{ioverho_prob_conf_mat,
    author = {Verhoeven, Ivo},
    license = {MIT},
    title = {{prob\_conf\_mat}},
    url = {https://github.com/ioverho/prob_conf_mat}
}