Preprocessing

Intensity

| Transform | Description |
| --- | --- |
| RescaleIntensity | Rescale intensity values to a certain range |
| ZNormalization | Subtract the mean and divide by the standard deviation |
| HistogramStandardization | Standardize the histogram of foreground intensities |
| Mask | Mask an image using a label map |
| Clamp | Clamp intensity values into a range |
| PCA | Reduce the number of channels using PCA |
| To | Change the dtype or device of image data |

NormalizationTransform

Bases: IntensityTransform

Base class for intensity preprocessing transforms.

Parameters:

- masking_method (TypeMaskingMethod, default: None)

  Defines the mask used to compute the normalization statistics. It can be one of:

  • None: the mask image is all ones, i.e., all values in the image are used.

  • A string: either the key of a torchio.LabelMap in the subject that is used as the mask, or an anatomical label ('Left', 'Right', 'Anterior', 'Posterior', 'Inferior', 'Superior') specifying the side of the volume where the mask is ones.

  • A function: the mask image is computed as a function of the intensity image. The function must receive and return a torch.Tensor.

- **kwargs

  See Transform for additional keyword arguments.

Examples:

>>> import torchio as tio
>>> subject = tio.datasets.Colin27()
>>> subject
Colin27(Keys: ('t1', 'head', 'brain'); images: 3)
>>> transform = tio.ZNormalization()  # ZNormalization is a subclass of NormalizationTransform
>>> transformed = transform(subject)  # use all values to compute mean and std
>>> transform = tio.ZNormalization(masking_method='brain')
>>> transformed = transform(subject)  # use only values within the brain
>>> transform = tio.ZNormalization(masking_method=lambda x: x > x.mean())
>>> transformed = transform(subject)  # use values above the image mean

__call__(data)

Transform data and return a result of the same type.

Parameters:

- data (InputType, required)

  Instance of torchio.Subject, 4D torch.Tensor or numpy.ndarray with dimensions \((C, W, H, D)\), where \(C\) is the number of channels and \(W, H, D\) are the spatial dimensions. If the input is a tensor, the affine matrix will be set to identity. Other valid input types are a SimpleITK image, a torchio.Image, a NiBabel Nifti1 image, or a dict. The output type is the same as the input type.

get_base_args()

Provides easy access to the arguments used to instantiate the base class (Transform) of any transform.

This method is particularly useful when a new transform can be represented as a variant of an existing transform (e.g. all random transforms), allowing for seamless instantiation of the existing transform with the same arguments as the new transform during apply_transform.

Note

The p argument (probability of applying the transform) is excluded to avoid compounding the probabilities of the existing and new transforms.

add_base_args(arguments, overwrite_on_existing=False)

Add the init args to an existing set of arguments.

validate_keys_sequence(keys, name) (static method)

Ensure that the input is not a string but a sequence of strings.

to_hydra_config()

Return a dictionary representation of the transform for Hydra instantiation.

arguments_are_dict()

Check whether the main arguments are dicts.

Return True if all of the attributes specified in args_names are of type dict.

Spatial

| Transform | Description |
| --- | --- |
| CropOrPad | Crop or pad an image to a target shape |
| Crop | Crop an image |
| Pad | Pad an image |
| Resize | Resize an image to a target shape |
| Resample | Resample an image to a different voxel spacing |
| ToCanonical | Reorder data to canonical orientation |
| ToOrientation | Reorder data to a given orientation |
| ToReferenceSpace | Resample to a reference image space |
| Transpose | Transpose spatial dimensions |
| EnsureShapeMultiple | Pad to ensure the shape is a multiple of a value |
| CopyAffine | Copy the affine matrix from one image to another |

Label

| Transform | Description |
| --- | --- |
| RemapLabels | Remap integer labels in a segmentation |
| RemoveLabels | Remove labels from a segmentation |
| SequentialLabels | Map labels to sequential integers |
| OneHot | Convert a label map to one-hot encoding |
| Contour | Create a binary image with the contour of each label |
| KeepLargestComponent | Keep the largest connected component |