Implementations

Included with this toolkit are a number of implementations of the interfaces described in the previous section. Unlike the interfaces, which declare the operation and use case, implementations provide concrete variations on how to satisfy the interface-defined use case, each with its own trade-offs and implications for results.

Image Perturbation

Class: MCRISEGrid

class xaitk_saliency.impls.perturb_image.mc_rise.MCRISEGrid(n: int, s: int, p1: float, k: int, seed: int | None = None, threads: int | None = 4)

Based on Hatakeyama et al.: https://openaccess.thecvf.com/content/ACCV2020/papers/Hatakeyama_Visualizing_Color-wise_Saliency_of_Black-Box_Image_Classification_Models_ACCV_2020_paper.pdf

get_config() dict[str, Any]

Return a JSON-compliant dictionary that could be passed to this class’s from_config method to produce an instance with identical configuration.

In most cases, this involves naming the keys of the dictionary after the initialization arguments, as if the dictionary were to be passed to the constructor via dictionary expansion. In cases where it does not make sense to store certain constructor parameters as configuration values (i.e. they must be supplied at runtime), this method's returned dictionary may leave those parameters out. In such cases, the object's from_config class method would also take additional positional arguments to fill in for the parameters that the returned configuration lacks.

Returns:

JSON type compliant configuration dictionary.

Return type:

dict

perturb(ref_image: ndarray) ndarray

Warning: this implementation returns a different shape than typically expected by this interface. Instead of [nMasks x Height x Width], masks of shape [kColors x nMasks x Height x Width] are returned.

Parameters:

ref_image – np.ndarray Reference image to generate perturbations from.

Returns:

np.ndarray Mask matrix with shape [kColors x nMasks x Height x Width].
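For orientation, a minimal usage sketch follows; the image size and parameter values are illustrative placeholders, not recommendations.

    import numpy as np
    from xaitk_saliency.impls.perturb_image.mc_rise import MCRISEGrid

    # Placeholder RGB reference image; any real image array of shape
    # [Height x Width x Channels] would do.
    ref_image = np.zeros((224, 224, 3), dtype=np.uint8)

    # n masks over an s-by-s grid, with k fill colors (values illustrative).
    perturber = MCRISEGrid(n=200, s=8, p1=0.5, k=3, seed=0)
    masks = perturber.perturb(ref_image)

    # Note the extra leading color dimension documented above:
    # [kColors x nMasks x Height x Width] -> (3, 200, 224, 224) here.
    print(masks.shape)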

Class: RandomGrid

class xaitk_saliency.impls.perturb_image.random_grid.RandomGrid(n: int, s: Tuple[int, int], p1: float, seed: int | None = None, threads: int | None = None)

Generate masks using a random grid of set cell size. If the chosen cell size does not divide an image evenly, then the grid is over-sized and the resulting mask is centered and cropped. Each mask is also shifted randomly by a maximum of half the cell size in both x and y.

This method is based on RISE (http://bmvc2018.org/contents/papers/1064.pdf) but aims to address that implementation's changing cell size when given images of different sizes. This method keeps the cell size constant and instead adjusts the overall grid size for differently sized images.

Parameters:
  • n – Number of masks to generate.

  • s – Dimensions of the grid cells in pixels. E.g. (3, 4) would use a grid of 3x4 pixel cells.

  • p1 – Probability of a grid cell being set to 1 (not occluded). This should be a float value in the [0, 1] range.

  • seed – A seed to use for the random number generator, allowing for masks to be reproduced.

  • threads – Number of threads to use when generating masks. If this is <=0 or None, no threading is used and processing is performed in-line serially.

get_config() Dict[str, Any]

Return a JSON-compliant dictionary that could be passed to this class’s from_config method to produce an instance with identical configuration.

In most cases, this involves naming the keys of the dictionary after the initialization arguments, as if the dictionary were to be passed to the constructor via dictionary expansion. In cases where it does not make sense to store certain constructor parameters as configuration values (i.e. they must be supplied at runtime), this method's returned dictionary may leave those parameters out. In such cases, the object's from_config class method would also take additional positional arguments to fill in for the parameters that the returned configuration lacks.

Returns:

JSON type compliant configuration dictionary.

Return type:

dict

perturb(ref_img: ndarray) ndarray

Transform an input reference image into a number of mask matrices indicating the perturbed regions.

Output mask matrix should be three-dimensional with the format [nMasks x Height x Width], sharing the same height and width as the input reference image. The implementing algorithm may determine the quantity of output masks per input image. These masks should indicate the regions in the corresponding perturbed image that have been modified. Values should be in the [0, 1] range, where a value closer to 1.0 indicates areas of the image that are unperturbed. Note that output mask matrices may be of a floating-point type to allow for fractional perturbation.

Parameters:

ref_image – Reference image to generate perturbations from.

Returns:

Mask matrix with shape [nMasks x Height x Width].
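A brief usage sketch follows; the cell size, mask count, and image shape below are arbitrary examples.

    import numpy as np
    from xaitk_saliency.impls.perturb_image.random_grid import RandomGrid

    ref_image = np.zeros((192, 256, 3), dtype=np.uint8)  # placeholder image

    # 300 masks built from a grid of 16x16-pixel cells; ~50% of cells are
    # left unoccluded, and the seed makes the masks reproducible.
    perturber = RandomGrid(n=300, s=(16, 16), p1=0.5, seed=42)
    masks = perturber.perturb(ref_image)

    print(masks.shape)  # (300, 192, 256), i.e. [nMasks x Height x Width]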

Class: RISEGrid

class xaitk_saliency.impls.perturb_image.rise.RISEGrid(n: int, s: int, p1: float, seed: int | None = None, threads: int | None = 4)

Based on Petsiuk et al.: http://bmvc2018.org/contents/papers/1064.pdf

Implementation is borrowed from the original authors: https://github.com/eclique/RISE/blob/master/explanations.py

get_config() Dict[str, Any]

Return a JSON-compliant dictionary that could be passed to this class’s from_config method to produce an instance with identical configuration.

In most cases, this involves naming the keys of the dictionary after the initialization arguments, as if the dictionary were to be passed to the constructor via dictionary expansion. In cases where it does not make sense to store certain constructor parameters as configuration values (i.e. they must be supplied at runtime), this method's returned dictionary may leave those parameters out. In such cases, the object's from_config class method would also take additional positional arguments to fill in for the parameters that the returned configuration lacks.

Returns:

JSON type compliant configuration dictionary.

Return type:

dict

perturb(ref_image: ndarray) ndarray

Transform an input reference image into a number of mask matrices indicating the perturbed regions.

Output mask matrix should be three-dimensional with the format [nMasks x Height x Width], sharing the same height and width as the input reference image. The implementing algorithm may determine the quantity of output masks per input image. These masks should indicate the regions in the corresponding perturbed image that have been modified. Values should be in the [0, 1] range, where a value closer to 1.0 indicates areas of the image that are unperturbed. Note that output mask matrices may be of a floating-point type to allow for fractional perturbation.

Parameters:

ref_image – Reference image to generate perturbations from.

Returns:

Mask matrix with shape [nMasks x Height x Width].
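As with the other perturbers, usage follows the PerturbImage interface; a short sketch with illustrative parameter values:

    import numpy as np
    from xaitk_saliency.impls.perturb_image.rise import RISEGrid

    ref_image = np.zeros((224, 224, 3), dtype=np.uint8)  # placeholder image

    # 1000 masks from an 8x8 occlusion grid, keeping ~50% of cells visible.
    perturber = RISEGrid(n=1000, s=8, p1=0.5, seed=0)
    masks = perturber.perturb(ref_image)

    print(masks.shape)  # (1000, 224, 224), i.e. [nMasks x Height x Width]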

Class: SlidingRadial

class xaitk_saliency.impls.perturb_image.sliding_radial.SlidingRadial(radius: tuple[float, float] = (50, 50), stride: tuple[int, int] = (20, 20), sigma: tuple[float, float] | None = None)

Produce perturbation matrices generated by sliding a radial occlusion area with configured radius over the area of an image. When the two radius values are the same, circular masks are generated; otherwise, elliptical masks are generated. Passing sigma values will apply a Gaussian filter to the mask, blurring it. This results in a smooth transition from full occlusion in the center of the radial to no occlusion at the edge.

Due to the geometry of sliding radials, if the given stride does not evenly divide the radial size along the applicable axis, then the resulting plane of values obtained by summing the generated masks will not be even.

Relatedly, if the stride is set to be larger than the radial diameter, the resulting plane of summed values will also not be even, as there will be increasingly long valleys of unperturbed space between masked regions.

The generated masks are boolean if no blurring is used, otherwise the masks will be of floating-point type in the [0, 1] range.

get_config() dict[str, Any]

Get the configuration dictionary of the SlidingRadial instance.

Returns:

dict[str, Any]: Configuration dictionary.

classmethod get_default_config() dict[str, Any]

Returns the default configuration for the SlidingRadial.

This method provides a default configuration dictionary, specifying default values for key parameters in the factory. It can be used to create an instance of the factory with preset configurations.

Returns:

dict[str, Any]: A dictionary containing default configuration parameters.

perturb(ref_image: ndarray) ndarray

Produce mask matrices by sliding a radial occlusion area with the configured radius over the area of an image.

Parameters:

ref_image – Reference image to generate perturbations from.

Returns:

Mask matrix with shape [nMasks x Height x Width].
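The sketch below shows both the unblurred (boolean) and blurred (floating-point) variants described above; the radius, stride, and sigma values are illustrative.

    import numpy as np
    from xaitk_saliency.impls.perturb_image.sliding_radial import SlidingRadial

    ref_image = np.zeros((200, 200, 3), dtype=np.uint8)  # placeholder image

    # Equal radii produce circular occlusions; no sigma, so masks are boolean.
    hard = SlidingRadial(radius=(40, 40), stride=(20, 20)).perturb(ref_image)

    # Passing sigma blurs the mask edges, yielding float masks in [0, 1].
    soft = SlidingRadial(radius=(40, 40), stride=(20, 20), sigma=(10, 10)).perturb(ref_image)

    print(hard.shape, hard.dtype)  # [nMasks x 200 x 200], boolean
    print(soft.shape, soft.dtype)  # [nMasks x 200 x 200], floating-point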

Class: SlidingWindow

class xaitk_saliency.impls.perturb_image.sliding_window.SlidingWindow(window_size: tuple[int, int] = (50, 50), stride: tuple[int, int] = (20, 20))

Produce perturbation matrices based on hard, block-y occlusion areas as generated by sliding a window of a configured size over the area of an image.

Due to the geometry of sliding windows, if the given stride does not evenly divide the window size along the applicable axis, then the resulting plane of values obtained by summing the generated masks will not be even.

Relatedly, if the stride is set to be larger than the window size, the resulting plane of summed values will also not be even, as there will be increasingly long valleys of unperturbed space between masked regions.

get_config() dict[str, Any]

Get the configuration dictionary of the SlidingWindow instance.

Returns:

dict[str, Any]: Configuration dictionary.

classmethod get_default_config() dict[str, Any]

Returns the default configuration for the SlidingWindow.

This method provides a default configuration dictionary, specifying default values for key parameters in the factory. It can be used to create an instance of the factory with preset configurations.

Returns:

dict[str, Any]: A dictionary containing default configuration parameters.

perturb(ref_image: ndarray) ndarray

Produce mask matrices based on hard, block-y occlusion areas as generated by sliding a window of the configured size over the area of an image.

Parameters:

ref_image – Reference image to generate perturbations from.

Returns:

Mask matrix with shape [nMasks x Height x Width].
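A short usage sketch; note that the number of generated masks is determined by the image size, window size, and stride (the values below are illustrative).

    import numpy as np
    from xaitk_saliency.impls.perturb_image.sliding_window import SlidingWindow

    ref_image = np.zeros((100, 100, 3), dtype=np.uint8)  # placeholder image

    perturber = SlidingWindow(window_size=(20, 20), stride=(10, 10))
    masks = perturber.perturb(ref_image)

    # [nMasks x Height x Width]; nMasks depends on image, window, and stride sizes.
    print(masks.shape)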

Heatmap Generation

Class: DRISEScoring

class xaitk_saliency.impls.gen_detector_prop_sal.drise_scoring.DRISEScoring

This D-RISE implementation transforms black-box object detector predictions into visual saliency heatmaps. Specifically, we make use of perturbed detections generated using the RISEGrid image perturbation class and a similarity metric that captures both the localization and categorization aspects of object detection.

Object detection representations used here would need to encapsulate localization information (i.e. bounding box regions), class scores, and objectness scores (if applicable to the detector, such as YOLOv3). Object detections are converted into (4+1+nClasses) vectors (4 indices for bounding box locations, 1 index for objectness, and nClasses indices for different object classes).

If your detections consist of a single class prediction and confidence score instead of scores for each class, it is best practice to replace the objectness score with the confidence score and use a one-hot encoding of the prediction as the class scores.

Based on Petsiuk et al.: https://arxiv.org/abs/2006.03204
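The sketch below illustrates the (4+1+nClasses) detection-vector layout described above for a hypothetical detector that outputs a single class label and confidence; the names and values are illustrative only.

    import numpy as np

    n_classes = 3
    bbox = [12.0, 30.0, 96.0, 140.0]   # bounding box: x1, y1, x2, y2
    class_idx, confidence = 1, 0.87    # single-class prediction + confidence

    # One-hot encode the predicted class as the class scores.
    class_scores = np.zeros(n_classes)
    class_scores[class_idx] = 1.0

    # With no separate objectness score, the confidence fills the objectness slot.
    det_vector = np.concatenate([bbox, [confidence], class_scores])
    print(det_vector.shape)  # (8,) == 4 + 1 + nClasses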

generate(ref_dets: ndarray, perturbed_dets: ndarray, perturbed_masks: ndarray) ndarray

Generate visual saliency heatmaps from black-box object detector predictions

Parameters:
  • ref_dets – np.ndarray Reference detections from the reference image

  • perturbed_dets – np.ndarray Perturbed detections generated from the reference image

  • perturbed_masks – np.ndarray Perturbation masks over the reference image.

Returns:

np.ndarray Generated visual saliency heatmap.

get_config() dict

Get the configuration dictionary of the DRISEScoring instance.

Returns:

dict[str, Any]: Configuration dictionary.

iou(box_a: ndarray, box_b: ndarray) ndarray

Compute the intersection over union (IoU) of two sets of boxes.

E.g.:
IoU(A, B) = area(A ∩ B) / area(A ∪ B) = area(A ∩ B) / (area(A) + area(B) - area(A ∩ B))

Parameters:
  • box_a – (np.array) bounding boxes, Shape: [A,4]

  • box_b – (np.array) bounding boxes, Shape: [B,4]

Returns:

iou(np.array), Shape: [A,B].

Class: MCRISEScoring

class xaitk_saliency.impls.gen_classifier_conf_sal.mc_rise_scoring.MCRISEScoring(k: int, p1: float = 0.0)

Saliency map generation based on the MC-RISE implementation. This version utilizes only the input perturbed image confidence predictions and does not utilize reference image confidences. This implementation also takes influence from debiased RISE and may take an optional debias probability, p1 (0 by default). In the original paper this is paired with the same probability used in RISE perturbation mask generation (see the p1 parameter in xaitk_saliency.impls.perturb_image.mc_rise.MCRISEGrid).

Based on Hatakeyama et al.: https://openaccess.thecvf.com/content/ACCV2020/papers/Hatakeyama_Visualizing_Color-wise_Saliency_of_Black-Box_Image_Classification_Models_ACCV_2020_paper.pdf

generate(image_conf: ndarray, perturbed_conf: ndarray, perturbed_masks: ndarray) ndarray

Warning: this implementation returns a different shape than typically expected by this interface. Instead of [nClasses x H x W], saliency maps of shape [kColors x nClasses x H x W] are generated, one per color per class.

Parameters:
  • image_conf – np.ndarray Reference image predicted class-confidence vector for all classes that require saliency map generation. This should have a shape of [nClasses], be float-typed, and have values in the [0, 1] range.

  • perturbed_conf – np.ndarray Perturbed image predicted class-confidence matrix. Classes represented in this matrix should be congruent to classes represented in the image_conf vector. This should have a shape of [nMasks x nClasses], be float-typed, and have values in the [0, 1] range.

  • perturbed_masks – np.ndarray Perturbation masks over the reference image. These should be parallel in association to the classification results input into the perturbed_conf parameter. This should have a shape of [kColors x nMasks x H x W], with values in the [0, 1] range, where a value closer to 1 indicates areas of the image that are unperturbed.

Returns:

np.ndarray Generated visual saliency heatmap for each input class as a float-type numpy.ndarray of shape [kColors x nClasses x H x W].

Raises:

ValueError – If the number of perturbation masks and the respective confidence lengths do not match.

get_config() dict[str, Any]

Return a JSON-compliant dictionary that could be passed to this class’s from_config method to produce an instance with identical configuration.

In most cases, this involves naming the keys of the dictionary after the initialization arguments, as if the dictionary were to be passed to the constructor via dictionary expansion. In cases where it does not make sense to store certain constructor parameters as configuration values (i.e. they must be supplied at runtime), this method's returned dictionary may leave those parameters out. In such cases, the object's from_config class method would also take additional positional arguments to fill in for the parameters that the returned configuration lacks.

Returns:

JSON type compliant configuration dictionary.

Return type:

dict

Class: OcclusionScoring

class xaitk_saliency.impls.gen_classifier_conf_sal.occlusion_scoring.OcclusionScoring

This saliency implementation transforms black-box image classification scores into saliency heatmaps. This should require a sequence of per-class confidences predicted on the reference image, a number of per-class confidences as predicted on perturbed images, as well as the masks of the reference image perturbations (as would be output from a PerturbImage implementation).

The perturbation masks used by this implementation are expected to be of integer type. Masks containing floating-point values are rounded to the nearest value and then binarized: values greater than or equal to half of the maximum value in the rounded mask become 1, and the remaining values become 0.
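For intuition, the following is a rough numpy restatement of that binarization rule; it is not the library's internal code.

    import numpy as np

    float_masks = np.array([[0.1, 0.4, 0.6, 0.9]])
    rounded = np.round(float_masks)              # -> [[0., 0., 1., 1.]]
    threshold = rounded.max() / 2.0              # half of the post-rounding maximum
    binary = (rounded >= threshold).astype(int)  # -> [[0, 0, 1, 1]]
    print(binary)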

generate(image_conf: ndarray, perturbed_conf: ndarray, perturbed_masks: ndarray) ndarray

Generate saliency maps

Parameters:
  • image_conf – np.ndarray Per-class confidences predicted on the reference image

  • perturbed_conf – np.ndarray Per-class confidences predicted on the perturbed images

  • perturbed_masks – np.ndarray Perturbation masks over the reference image.

Returns:

np.ndarray Generated visual saliency heatmap.

get_config() dict

Get the configuration dictionary of the OcclusionScoring instance.

Returns:

dict[str, Any]: Configuration dictionary.

Class: RISEScoring

class xaitk_saliency.impls.gen_classifier_conf_sal.rise_scoring.RISEScoring(p1: float = 0.0)

Saliency map generation based on the original RISE implementation. This version utilizes only the input perturbed image confidence predictions and does not utilize reference image confidences. This implementation also takes influence from debiased RISE and may take an optional debias probability, p1 (0 by default). In the original paper this is paired with the same probability used in RISE perturbation mask generation (see the p1 parameter in xaitk_saliency.impls.perturb_image.rise.RISEGrid).

Based on Hatakeyama et al.: https://openaccess.thecvf.com/content/ACCV2020/papers/Hatakeyama_Visualizing_Color-wise_Saliency_of_Black-Box_Image_Classification_Models_ACCV_2020_paper.pdf

generate(_: ndarray, perturbed_conf: ndarray, perturbed_masks: ndarray) ndarray

Generate saliency maps

Parameters:
  • image_conf – np.ndarray Per-class confidences predicted on the reference image (unused by this implementation)

  • perturbed_conf – np.ndarray Per-class confidences predicted on the perturbed images

  • perturbed_masks – np.ndarray Perturbation masks over the reference image.

Returns:

np.ndarray Generated visual saliency heatmap.

get_config() dict[str, Any]

Get the configuration dictionary of the RISEScoring instance.

Returns:

dict[str, Any]: Configuration dictionary.

Class: SimilarityScoring

class xaitk_saliency.impls.gen_descriptor_sim_sal.similarity_scoring.SimilarityScoring(proximity_metric: str = 'euclidean')

This saliency implementation transforms proximity in feature space into saliency heatmaps. This should require feature vectors for the reference image, for each query image, and for perturbed versions of the reference image, as well as the masks of the reference image perturbations (as would be output from a PerturbImage implementation).

The resulting saliency maps are relative to the reference image. As such, each map denotes regions in the reference image that make it more or less similar to the corresponding query image.

generate(ref_descr: ndarray, query_descrs: ndarray, perturbed_descrs: ndarray, perturbed_masks: ndarray) ndarray

Generate visual saliency heatmaps for similarity from vectors

Parameters:
  • ref_descr – np.ndarray Feature vector from the reference image

  • query_descrs – np.ndarray Feature vectors from the query images

  • perturbed_descrs – np.ndarray Feature vectors from the perturbed versions of the reference image

  • perturbed_masks – np.ndarray Perturbation masks over the reference image.

Returns:

np.ndarray Generated visual saliency heatmap.

get_config() dict

Get the configuration dictionary of the SimilarityScoring instance.

Returns:

dict[str, Any]: Configuration dictionary.
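A minimal sketch of the expected inputs, using random feature vectors and sliding-window masks; the descriptor dimensionality and shapes are illustrative assumptions.

    import numpy as np
    from xaitk_saliency.impls.gen_descriptor_sim_sal.similarity_scoring import SimilarityScoring
    from xaitk_saliency.impls.perturb_image.sliding_window import SlidingWindow

    rng = np.random.default_rng(0)
    ref_image = np.zeros((64, 64, 3), dtype=np.uint8)  # placeholder image

    masks = SlidingWindow(window_size=(16, 16), stride=(8, 8)).perturb(ref_image)
    n_masks, dim, n_queries = masks.shape[0], 128, 2

    ref_descr = rng.random(dim)                    # reference image feature vector
    query_descrs = rng.random((n_queries, dim))    # one vector per query image
    perturbed_descrs = rng.random((n_masks, dim))  # vectors for perturbed images

    sal = SimilarityScoring(proximity_metric="euclidean").generate(
        ref_descr, query_descrs, perturbed_descrs, masks
    )
    print(sal.shape)  # one saliency map per query image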

Class: SquaredDifferenceScoring

class xaitk_saliency.impls.gen_classifier_conf_sal.squared_difference_scoring.SquaredDifferenceScoring

This saliency implementation transforms black-box confidence predictions from a classification-style network into saliency heatmaps. This should require a sequence of classification scores predicted on the reference image, a number of classification scores predicted on perturbed images, as well as the masks of the reference image perturbations (as would be output from a PerturbImage implementation).

This implementation uses the squared difference of the reference scores and the perturbed scores to compute the saliency maps. This gives an indication of general saliency without distinguishing between positive and negative. The resulting maps are normalized between the range [0,1].

Based on Greydanus et al.: https://arxiv.org/abs/1711.00138

generate(reference: ndarray, perturbed: ndarray, perturbed_masks: ndarray) ndarray

Generate saliency heatmaps from black-box confidence predictions

Parameters:
  • reference – np.ndarray Predictions from the reference image

  • perturbed – np.ndarray Predictions from the perturbed versions of the reference image

  • perturbed_masks – np.ndarray Perturbation masks over the reference image.

Returns:

np.ndarray Generated visual saliency heatmap.

get_config() dict

Get the configuration dictionary of the SquaredDifferenceScoring instance.

Returns:

dict[str, Any]: Configuration dictionary.
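A minimal sketch of the expected inputs; the class count, image size, and random scores are illustrative, with the scores normalized here to resemble classifier outputs.

    import numpy as np
    from xaitk_saliency.impls.gen_classifier_conf_sal.squared_difference_scoring import SquaredDifferenceScoring
    from xaitk_saliency.impls.perturb_image.sliding_window import SlidingWindow

    rng = np.random.default_rng(0)
    ref_image = np.zeros((64, 64, 3), dtype=np.uint8)  # placeholder image

    masks = SlidingWindow(window_size=(16, 16), stride=(8, 8)).perturb(ref_image)
    n_masks, n_classes = masks.shape[0], 5

    # Stand-in classification scores, normalized to sum to 1 per prediction.
    reference = rng.random(n_classes)
    reference /= reference.sum()
    perturbed = rng.random((n_masks, n_classes))
    perturbed /= perturbed.sum(axis=1, keepdims=True)

    sal = SquaredDifferenceScoring().generate(reference, perturbed, masks)
    print(sal.shape)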

End-to-End Saliency Generation

Image Classification

Class: MCRISEStack

class xaitk_saliency.impls.gen_image_classifier_blackbox_sal.mc_rise.MCRISEStack(n: int, s: int, p1: float, fill_colors: Sequence[Sequence[int]], seed: int | None = None, threads: int = 0)

Encapsulation of the perturbation-occlusion method using specifically the MC-RISE implementations of the component algorithms.

This more specifically encapsulates the MC-RISE method as presented in their paper and code. See references in the MCRISEGrid and MCRISEScoring documentation.

This implementation shares the p1 probability and the k number of colors with the internal MCRISEScoring instance in order to make use of the debiasing described in the MC-RISE paper. Debiasing is always on.

get_config() dict[str, Any]

Return a JSON-compliant dictionary that could be passed to this class’s from_config method to produce an instance with identical configuration.

In most cases, this involves naming the keys of the dictionary after the initialization arguments, as if the dictionary were to be passed to the constructor via dictionary expansion. In cases where it does not make sense to store certain constructor parameters as configuration values (i.e. they must be supplied at runtime), this method's returned dictionary may leave those parameters out. In such cases, the object's from_config class method would also take additional positional arguments to fill in for the parameters that the returned configuration lacks.

Returns:

JSON type compliant configuration dictionary.

Return type:

dict

Class: PerturbationOcclusion

class xaitk_saliency.impls.gen_image_classifier_blackbox_sal.occlusion_based.PerturbationOcclusion(perturber: PerturbImage, generator: GenerateClassifierConfidenceSaliency, threads: int = 0)

Generator composed of modular perturbation and occlusion-based algorithms.

This implementation exposes a public attribute fill. This may be set to a scalar or sequence value to indicate a color that should be used for filling occluded areas as determined by the given PerturbImage implementation. This parameter is intended to be set at runtime, as it is most often driven by the black-box algorithm used, if at all.
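As a composition sketch, the documented perturber and heatmap-generation implementations above can be paired like so; the window parameters and fill color are illustrative, and the subsequent call against a black-box classifier (defined by the interface) is omitted here.

    from xaitk_saliency.impls.gen_image_classifier_blackbox_sal.occlusion_based import PerturbationOcclusion
    from xaitk_saliency.impls.perturb_image.sliding_window import SlidingWindow
    from xaitk_saliency.impls.gen_classifier_conf_sal.occlusion_scoring import OcclusionScoring

    sal_generator = PerturbationOcclusion(
        perturber=SlidingWindow(window_size=(40, 40), stride=(15, 15)),
        generator=OcclusionScoring(),
        threads=0,
    )

    # `fill` is the public attribute noted above; set it at runtime if the
    # black-box model expects occluded regions filled with a particular color.
    sal_generator.fill = [127, 127, 127]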

classmethod from_config(config_dict: dict, merge_default: bool = True) C

Create a PerturbationOcclusion instance from a configuration dictionary.

Args:

config_dict (dict): Configuration dictionary with perturber details.

merge_default (bool): Whether to merge with the default configuration.

Returns:

PerturbationOcclusion: An instance of PerturbationOcclusion.

get_config() dict[str, Any]

Get the configuration dictionary of the PerturbationOcclusion instance.

Returns:

dict[str, Any]: Configuration dictionary.

classmethod get_default_config() dict[str, Any]

Returns the default configuration for the PerturbationOcclusion.

This method provides a default configuration dictionary, specifying default values for key parameters in the factory. It can be used to create an instance of the factory with preset configurations.

Returns:

dict[str, Any]: A dictionary containing default configuration parameters.

Class: RISEStack

class xaitk_saliency.impls.gen_image_classifier_blackbox_sal.rise.RISEStack(n: int, s: int, p1: float, seed: int | None = None, threads: int = 0, debiased: bool = True)

Encapsulation of the perturbation-occlusion method using specifically the RISE implementations of the component algorithms.

This more specifically encapsulates the original RISE method as presented in their paper and code. See references in the RISEGrid and RISEScoring documentation.

This implementation shares the p1 probability with the internal RISEScoring instance, effectively causing this implementation to utilize debiased RISE.

property fill: int | Sequence[int] | None

Gets the fill value

get_config() dict[str, Any]

Get the configuration dictionary of the RISEStack instance.

Returns:

dict[str, Any]: Configuration dictionary.
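A construction and configuration sketch with illustrative parameter values; the get_config output could in turn be fed to a from_config call, as described in the get_config docstrings above.

    from xaitk_saliency.impls.gen_image_classifier_blackbox_sal.rise import RISEStack

    stack = RISEStack(n=1000, s=8, p1=0.5, seed=0, threads=4, debiased=True)

    print(stack.fill)          # documented fill property
    print(stack.get_config())  # JSON-compliant configuration dictionary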

Class: SlidingWindowStack

class xaitk_saliency.impls.gen_image_classifier_blackbox_sal.slidingwindow.SlidingWindowStack(window_size: tuple[int, int] = (50, 50), stride: tuple[int, int] = (20, 20), threads: int = 0)

Encapsulation of the perturbation-occlusion method using specifically sliding windows and the occlusion-scoring method. See the SlidingWindow and OcclusionScoring documentation for more details.

property fill: int | Sequence[int] | None

Gets the fill value

get_config() dict[str, Any]

Get the configuration dictionary of the SlidingWindowStack instance.

Returns:

dict[str, Any]: Configuration dictionary.

classmethod get_default_config() dict[str, Any]

Returns the default configuration for the SlidingWindowStack.

This method provides a default configuration dictionary, specifying default values for key parameters in the factory. It can be used to create an instance of the factory with preset configurations.

Returns:

dict[str, Any]: A dictionary containing default configuration parameters.

Image Similarity

Class: PerturbationOcclusion

class xaitk_saliency.impls.gen_image_similarity_blackbox_sal.occlusion_based.PerturbationOcclusion(perturber: PerturbImage, generator: GenerateDescriptorSimilaritySaliency, fill: int | Sequence[int] | ndarray | None = None, threads: int | None = None)

Image similarity saliency generator composed of modular perturbation and occlusion-based algorithms.

This implementation exposes its fill attribute as public. This allows it to be set during runtime as this is most often driven by the black-box algorithm used, if at all.

classmethod from_config(config_dict: dict, merge_default: bool = True) C

Create a PerturbationOcclusion instance from a configuration dictionary.

Args:

config_dict (dict): Configuration dictionary with perturber details.

merge_default (bool): Whether to merge with the default configuration.

Returns:

PerturbationOcclusion: An instance of PerturbationOcclusion.

get_config() dict[str, Any]

Get the configuration dictionary of the PerturbationOcclusion instance.

Returns:

dict[str, Any]: Configuration dictionary.

classmethod get_default_config() dict[str, Any]

Returns the default configuration for the PerturbationOcclusion.

This method provides a default configuration dictionary, specifying default values for key parameters in the factory. It can be used to create an instance of the factory with preset configurations.

Returns:

dict[str, Any]: A dictionary containing default configuration parameters.

Class: SBSMStack

class xaitk_saliency.impls.gen_image_similarity_blackbox_sal.sbsm.SBSMStack(window_size: tuple[int, int] = (50, 50), stride: tuple[int, int] = (20, 20), proximity_metric: str = 'euclidean', fill: int | Sequence[int] | ndarray | None = None, threads: int | None = None)

Encapsulation of the perturbation-occlusion method using specifically the sliding window image perturbation and similarity scoring algorithms to generate similarity-based visual saliency maps. See the documentation of SlidingWindow and SimilarityScoring for details.

property fill: int | Sequence[int] | ndarray | None

Gets the fill value

get_config() dict[str, Any]

Get the configuration dictionary of the SBSMStack instance.

Returns:

dict[str, Any]: Configuration dictionary.

classmethod get_default_config() dict[str, Any]

Returns the default configuration for the SBSMStack.

This method provides a default configuration dictionary, specifying default values for key parameters in the factory. It can be used to create an instance of the factory with preset configurations.

Returns:

dict[str, Any]: A dictionary containing default configuration parameters.

Object Detection

Class: PerturbationOcclusion

class xaitk_saliency.impls.gen_object_detector_blackbox_sal.occlusion_based.PerturbationOcclusion(perturber: PerturbImage, generator: GenerateDetectorProposalSaliency, fill: int | Sequence[int] | ndarray | None = None, threads: int | None = 0)

Generator composed of modular perturbation and occlusion-based algorithms.

This implementation exposes its fill attribute as public. This allows it to be set during runtime as this is most often driven by the black-box algorithm used, if at all.

Class: DRISEStack

class xaitk_saliency.impls.gen_object_detector_blackbox_sal.drise.DRISEStack(n: int, s: int, p1: float, seed: int | None = None, fill: int | Sequence[int] | ndarray | None = None, threads: int | None = 0)

Encapsulation of the perturbation-occlusion method using the RISE image perturbation and DRISE scoring algorithms to generate visual saliency maps for object detections. See references in the RISEGrid and DRISEScoring documentation.
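A construction sketch using the documented constructor parameters (values illustrative); the saliency-generation call against a black-box detector is defined by the interface and omitted here, and get_config is assumed to behave as on the other implementations above.

    from xaitk_saliency.impls.gen_object_detector_blackbox_sal.drise import DRISEStack

    drise = DRISEStack(n=500, s=8, p1=0.5, seed=0, fill=[127, 127, 127], threads=4)
    print(drise.get_config())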