Glossary

black-box model

A machine learning model whose internal decision-making process is not accessible or interpretable by users.

classifier

A model that outputs a predicted label or category based on input features.

detector

A model that locates objects within an image by predicting a category and bounding box for each, based on input features.

explanation

A conceptual or visual interpretation of why a model produced a specific output.

MC-RISE

Multi-Color RISE — a variant of RISE whose random masks fill occluded regions with colors drawn from a palette rather than a single value, so the resulting saliency maps also reflect the influence of color on the prediction.

occlusion

A saliency technique that hides parts of the input data to evaluate changes in model predictions.
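
For illustration, a minimal occlusion sketch in Python. It assumes a hypothetical black-box callable `model_fn` that maps an H×W×3 image array to a vector of class scores; the patch size, stride, and fill value are arbitrary choices, not fixed by the technique.

```python
import numpy as np

def occlusion_saliency(image, model_fn, target_class, patch=16, stride=8, fill=0.5):
    """Slide a constant-valued patch over the image and record how much the
    target class score drops when each region is hidden."""
    h, w = image.shape[:2]
    base_score = model_fn(image)[target_class]
    saliency = np.zeros((h, w))
    counts = np.zeros((h, w))
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill  # hide this region
            drop = base_score - model_fn(occluded)[target_class]
            saliency[y:y + patch, x:x + patch] += drop
            counts[y:y + patch, x:x + patch] += 1
    return saliency / np.maximum(counts, 1)  # larger values = more influential regions
```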

perturbation

A small, intentional change applied to input data (e.g. noise, occlusion) to test the sensitivity of model outputs.
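
A short sketch of one kind of perturbation test, again assuming the hypothetical `model_fn` above and an image with values in [0, 1]; Gaussian noise is just one example of a perturbation.

```python
import numpy as np

def noise_sensitivity(image, model_fn, target_class, sigma=0.05, trials=20, seed=0):
    """Apply small Gaussian perturbations and report the mean absolute change
    in the target class score; larger values indicate a more sensitive output."""
    rng = np.random.default_rng(seed)
    base = model_fn(image)[target_class]
    deltas = []
    for _ in range(trials):
        noisy = np.clip(image + rng.normal(0.0, sigma, size=image.shape), 0.0, 1.0)
        deltas.append(abs(model_fn(noisy)[target_class] - base))
    return float(np.mean(deltas))
```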

RISE

Randomized Input Sampling for Explanation — a black-box saliency algorithm that generates saliency maps by sampling randomly masked inputs.
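
A simplified sketch of the RISE idea, assuming the same hypothetical `model_fn`. The published algorithm additionally uses smooth bilinear upsampling and random shifts of the masks, which are omitted here for brevity.

```python
import numpy as np

def rise_saliency(image, model_fn, target_class, n_masks=1000, grid=8, p_keep=0.5, seed=0):
    """Average random binary masks, weighting each by the black-box score
    obtained on the corresponding masked copy of the input."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    saliency = np.zeros((h, w))
    for _ in range(n_masks):
        coarse = (rng.random((grid, grid)) < p_keep).astype(float)
        # Nearest-neighbour upsampling of the coarse grid to full image resolution.
        block = (h // grid + 1, w // grid + 1)
        mask = np.kron(coarse, np.ones(block))[:h, :w]
        score = model_fn(image * mask[..., None])[target_class]
        saliency += score * mask
    return saliency / (n_masks * p_keep)
```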

saliency algorithm

A computational method for estimating which input regions are most influential in a model’s prediction.

saliency map

A visual representation that highlights input regions (e.g. parts of an image) most relevant to a model’s decision.

similarity scoring

The process of measuring how alike two inputs are, often used in retrieval, ranking, or tracking tasks.
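
A minimal example of one common similarity score, cosine similarity between two feature vectors:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors; 1.0 means identical direction."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

print(cosine_similarity([1.0, 0.0, 1.0], [0.5, 0.1, 0.9]))  # close to 1 for similar vectors
```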

superpixels

Groups of adjacent pixels with similar color and texture, used as coherent regions to perturb in some saliency techniques.
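
For illustration, superpixels can be computed with the SLIC implementation in scikit-image (an assumption about tooling; any segmentation that groups similar adjacent pixels would serve).

```python
import numpy as np
from skimage.segmentation import slic

image = np.random.random((64, 64, 3))            # stand-in RGB image with values in [0, 1]
segments = slic(image, n_segments=50, compactness=10.0)
print(segments.shape, len(np.unique(segments)))  # per-pixel labels, number of superpixels
```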

visualization

The display of data or model behavior (e.g. saliency maps) in a human-interpretable format to aid understanding.
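
A common visualization is to overlay a normalized saliency heatmap on the original image; a sketch using matplotlib (assumed available), with the colormap and transparency as arbitrary choices:

```python
import matplotlib.pyplot as plt

def overlay_saliency(image, saliency, alpha=0.5):
    """Show the image with a semi-transparent, normalized saliency heatmap on top."""
    s = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-12)
    plt.imshow(image)
    plt.imshow(s, cmap="jet", alpha=alpha)
    plt.axis("off")
    plt.show()
```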

white-box model

A machine learning model whose internal logic, parameters, and feature importance are transparent to and interpretable by users (e.g. decision trees, linear regression).

XAI

Explainable Artificial Intelligence — methods and tools that help humans understand, trust, and interpret machine learning outputs.