NoiseTunnel¶
- class captum.attr.NoiseTunnel(attribution_method)[source]¶
Adds Gaussian noise to each input in the batch nt_samples times and applies the given attribution algorithm to each of the noisy samples. The per-sample attributions are combined according to the given noise tunnel type (nt_type): if nt_type is smoothgrad, the mean of the sampled attributions is returned, which approximates smoothing the given attribution method with a Gaussian kernel; if nt_type is smoothgrad_sq, the mean of the squared sample attributions is returned; if nt_type is vargrad, the variance of the sample attributions is returned.
More details about adding noise can be found in the SmoothGrad paper, "SmoothGrad: removing noise by adding noise" (Smilkov et al., 2017), which introduced the technique that the smoothgrad option is named after.
This method also supports batches containing multiple examples; however, it can be computationally expensive depending on the model, the dimensionality of the data, and the execution environment. It is assumed that the batch size is the first dimension of the input tensors.
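The three nt_type aggregations described above can be sketched in plain Python on toy scalar attributions. This is an illustrative sketch of the arithmetic only, not Captum's internal implementation:

```python
# Sketch of how NoiseTunnel combines per-sample attributions, assuming
# `samples` holds the attribution of each noisy copy of a single input.
# Names and structure here are illustrative, not Captum internals.

def combine(samples, nt_type="smoothgrad"):
    n = len(samples)
    mean = sum(samples) / n
    mean_sq = sum(s * s for s in samples) / n
    if nt_type == "smoothgrad":
        return mean                # mean of sampled attributions
    if nt_type == "smoothgrad_sq":
        return mean_sq             # mean of squared attributions
    if nt_type == "vargrad":
        # population variance: E[s^2] - (E[s])^2
        return mean_sq - mean * mean
    raise ValueError(f"Unsupported nt_type: {nt_type}")

samples = [0.2, 0.4, 0.6]   # toy attributions from three noisy samples
combine(samples, "smoothgrad")   # -> 0.4
```

With real tensors the same reduction is applied elementwise over the nt_samples dimension.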
- Parameters:
attribution_method (Attribution) – An instance of any attribution algorithm of type Attribution. E.g. Integrated Gradients, Conductance or Saliency.
- attribute(inputs, nt_type='smoothgrad', nt_samples=5, nt_samples_batch_size=None, stdevs=1.0, draw_baseline_from_distrib=False, **kwargs)[source]¶
- Parameters:
inputs (Tensor or tuple[Tensor, ...]) – Input for which attributions are computed. If forward_func takes a single tensor as input, a single input tensor should be provided. If forward_func takes multiple tensors as input, a tuple of the input tensors should be provided. It is assumed that for all given input tensors, dimension 0 corresponds to the number of examples, and if multiple input tensors are provided, the examples must be aligned appropriately.
nt_type (str, optional) – Smoothing type of the attributions: smoothgrad, smoothgrad_sq, or vargrad. Default: smoothgrad if type is not provided.
nt_samples (int, optional) – The number of randomly generated examples per sample in the input batch. Random examples are generated by adding Gaussian noise to each sample. Default: 5 if nt_samples is not provided.
nt_samples_batch_size (int, optional) – The number of nt_samples processed together in a single pass. This parameter helps avoid out-of-memory situations by limiting how many randomly generated examples per sample are processed at once. Default: None if nt_samples_batch_size is not provided, in which case all nt_samples are processed together.
stdevs (float or tuple of float, optional) – The standard deviation of zero-mean Gaussian noise that is added to each input in the batch. If stdevs is a single float value, that same value is used for all inputs. If it is a tuple, it must have the same length as the inputs tuple; in that case, each stdev value in the stdevs tuple corresponds to the input with the same index in the inputs tuple. Default: 1.0 if stdevs is not provided.
draw_baseline_from_distrib (bool, optional) – Indicates whether to randomly draw baseline samples from the baselines distribution provided as an input tensor. Default: False
**kwargs (Any, optional) – Additional arguments that are passed through to the attribution_method's attribution algorithm. Any arguments required by the chosen attribution method should be included here, for instance additional_forward_args and baselines.
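The nt_samples_batch_size parameter above effectively splits the nt_samples noisy copies into chunks. A hypothetical sketch of that chunking (the function name is illustrative, not a Captum API):

```python
# Hypothetical sketch: how nt_samples could be split into processing
# chunks when nt_samples_batch_size is set, to bound peak memory.
def sample_batch_sizes(nt_samples, nt_samples_batch_size=None):
    if nt_samples_batch_size is None:
        # default behavior: all noisy samples processed together
        return [nt_samples]
    full, rem = divmod(nt_samples, nt_samples_batch_size)
    return [nt_samples_batch_size] * full + ([rem] if rem else [])

sample_batch_sizes(10, 4)   # -> [4, 4, 2]
sample_batch_sizes(5)       # -> [5]
```

Each chunk is attributed independently and the per-sample results are aggregated afterwards, so the choice of nt_samples_batch_size trades memory for the number of forward/backward passes.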
- Returns:
- attributions (Tensor or tuple[Tensor, …]):
Attribution with respect to each input feature. attributions will always be the same size as the provided inputs, with each value providing the attribution of the corresponding input index. If a single tensor is provided as inputs, a single tensor is returned. If a tuple is provided for inputs, a tuple of corresponding sized tensors is returned.
- delta (float, returned if return_convergence_delta=True):
Approximation error computed by the attribution algorithm. Not all attribution algorithms return delta value. It is computed only for some algorithms, e.g. integrated gradients. Delta is computed for each input in the batch and represents the arithmetic mean across all nt_samples perturbed tensors for that input.
- Return type:
attributions or 2-element tuple of attributions, delta
Examples:
>>> # ImageClassifier takes a single input tensor of images Nx3x32x32,
>>> # and returns an Nx10 tensor of class probabilities.
>>> net = ImageClassifier()
>>> ig = IntegratedGradients(net)
>>> input = torch.randn(2, 3, 32, 32, requires_grad=True)
>>> # Creates noise tunnel
>>> nt = NoiseTunnel(ig)
>>> # Generates 10 perturbed input tensors per image.
>>> # Computes integrated gradients for class 3 for each generated
>>> # input and averages attributions across all 10
>>> # perturbed inputs per image
>>> attribution = nt.attribute(input, nt_type='smoothgrad',
>>>                            nt_samples=10, target=3)
- has_convergence_delta()[source]¶
This method informs the user whether the attribution algorithm provides a convergence delta (i.e., an approximation error) or not. The convergence delta may serve as a proxy for the correctness of the attribution algorithm's approximation. If a derived attribution class provides a compute_convergence_delta method, it should override both compute_convergence_delta and has_convergence_delta.
- Returns:
Returns whether the attribution algorithm provides a convergence delta (aka approximation error) or not.
- Return type:
bool
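The description above suggests a delegation pattern: a wrapper such as NoiseTunnel can report a convergence delta only if the wrapped attribution method provides one. A minimal sketch of that pattern, with illustrative stand-in classes rather than Captum's actual implementation:

```python
# Illustrative sketch of has_convergence_delta delegation.
# These stub classes stand in for real attribution methods; only the
# delegation pattern is the point, not Captum internals.
class SaliencyStub:
    def has_convergence_delta(self):
        return False   # saliency has no approximation error to report

class IntegratedGradientsStub:
    def has_convergence_delta(self):
        return True    # IG can report a convergence delta

class NoiseTunnelSketch:
    def __init__(self, attribution_method):
        self.attribution_method = attribution_method

    def has_convergence_delta(self):
        # delegate the query to the wrapped attribution algorithm
        return self.attribution_method.has_convergence_delta()

NoiseTunnelSketch(IntegratedGradientsStub()).has_convergence_delta()  # -> True
NoiseTunnelSketch(SaliencyStub()).has_convergence_delta()             # -> False
```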