
CopyAffine

Bases: SpatialTransform

Copy the spatial metadata from a reference image in the subject.

Small unexpected differences in spatial metadata across different images of a subject can arise due to rounding errors while converting formats.

If the images have the same shape and orientation and their affine attributes differ only slightly, this transform can be used to avoid errors during safety checks in other transforms and samplers.
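Conceptually, the transform overwrites every image's affine with the one from the target image. A minimal sketch of that idea using plain dicts and NumPy arrays (a simplified stand-in, not the actual torchio implementation):

```python
import numpy as np

def copy_affine(subject: dict, target: str) -> dict:
    # Copy the target image's affine onto every image in the subject.
    # `subject` maps image names to dicts with 'tensor' and 'affine' keys
    # (a hypothetical stand-in for tio.Subject, not the real API).
    reference = subject[target]['affine']
    for image in subject.values():
        image['affine'] = reference.copy()
    return subject

subject = {
    't1': {'tensor': np.zeros((1, 4, 4, 4)), 'affine': np.eye(4)},
    # Simulate a loss of precision in the second image's metadata
    't2': {'tensor': np.zeros((1, 4, 4, 4)), 'affine': np.eye(4, dtype=np.float16)},
}
fixed = copy_affine(subject, 't1')
```

After the call, both images share an identical affine, so downstream consistency checks comparing affines will pass.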

Parameters:

  • target (str, required): Name of the image within the subject whose affine matrix will be used.

Examples:

>>> import torch
>>> import torchio as tio
>>> import numpy as np
>>> np.random.seed(0)
>>> affine = np.diag((*(np.random.rand(3) + 0.5), 1))
>>> t1 = tio.ScalarImage(tensor=torch.rand(1, 100, 100, 100), affine=affine)
>>> # Let's simulate a loss of precision
>>> # (caused for example by NIfTI storing spatial metadata in single precision)
>>> bad_affine = affine.astype(np.float16)
>>> t2 = tio.ScalarImage(tensor=torch.rand(1, 100, 100, 100), affine=bad_affine)
>>> subject = tio.Subject(t1=t1, t2=t2)
>>> resample = tio.Resample(0.5)
>>> resample(subject).shape  # error as images are in different spaces
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/fernando/git/torchio/torchio/data/subject.py", line 101, in shape
    self.check_consistent_attribute('shape')
  File "/Users/fernando/git/torchio/torchio/data/subject.py", line 229, in check_consistent_attribute
    raise RuntimeError(message)
RuntimeError: More than one shape found in subject images:
{'t1': (1, 210, 244, 221), 't2': (1, 210, 243, 221)}
>>> transform = tio.CopyAffine('t1')
>>> fixed = transform(subject)
>>> resample(fixed).shape
(1, 210, 244, 221)
Warning

This transform should be used with caution. Modifying the spatial metadata of an image manually can lead to incorrect processing of the position of anatomical structures. For example, a machine learning algorithm might incorrectly predict that a lesion on the right lung is on the left lung.

Note

For more information, see some related discussions on GitHub:

  • https://github.com/TorchIO-project/torchio/issues/354
  • https://github.com/TorchIO-project/torchio/discussions/489
  • https://github.com/TorchIO-project/torchio/pull/584
  • https://github.com/TorchIO-project/torchio/issues/430
  • https://github.com/TorchIO-project/torchio/issues/382
  • https://github.com/TorchIO-project/torchio/pull/592

__call__(data)

Transform data and return a result of the same type.

Parameters:

  • data (InputType, required): Instance of torchio.Subject, 4D torch.Tensor or numpy.ndarray with dimensions \((C, W, H, D)\), where \(C\) is the number of channels and \(W, H, D\) are the spatial dimensions. If the input is a tensor, the affine matrix will be set to identity. Other valid input types are a SimpleITK image, a torchio.Image, a NiBabel Nifti1 image or a dict. The output type is the same as the input type.
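When the input is a bare array rather than a subject, it is treated as a single image with an identity affine. A hedged sketch of that normalization step, using plain NumPy dicts (a hypothetical illustration, not the real torchio internals):

```python
import numpy as np

def as_subject(data):
    # Normalize supported inputs into a dict of images (simplified sketch):
    # a bare 4D array becomes a single image with an identity affine,
    # and a dict of arrays becomes a multi-image subject.
    if isinstance(data, np.ndarray):
        return {'default': {'tensor': data, 'affine': np.eye(4)}}
    if isinstance(data, dict):
        return {k: {'tensor': v, 'affine': np.eye(4)} for k, v in data.items()}
    raise TypeError(f'Unsupported input type: {type(data)}')

sub = as_subject(np.zeros((1, 8, 8, 8)))
```

The real transform additionally accepts SimpleITK and NiBabel images; those branches are omitted here for brevity.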

get_base_args()

Provides easy access to the arguments used to instantiate the base class (Transform) of any transform.

This method is particularly useful when a new transform can be expressed as a variant of an existing transform (e.g. all random transforms), as it allows instantiating the existing transform with the same base arguments during apply_transform.

Note

The p argument (probability of applying the transform) is excluded to avoid compounding the probabilities of the existing and the new transform.
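The pattern described above can be sketched as follows: a random transform samples its parameters, then instantiates its deterministic counterpart, forwarding the base arguments but not p. All class names and attributes below are simplified placeholders, not the actual torchio classes:

```python
class Transform:
    # Minimal stand-in for the base class described above.
    def __init__(self, include=None, exclude=None, p=1.0):
        self.include, self.exclude, self.p = include, exclude, p

    def get_base_args(self):
        # `p` is deliberately excluded so probabilities do not compound.
        return {'include': self.include, 'exclude': self.exclude}

class Affine(Transform):
    # Deterministic transform taking explicit parameters.
    def __init__(self, scales, **kwargs):
        super().__init__(**kwargs)
        self.scales = scales

class RandomAffine(Transform):
    # Random variant: samples parameters, then delegates to Affine.
    def apply_transform(self, subject):
        scales = 1.1  # placeholder for a sampled parameter
        transform = Affine(scales, **self.get_base_args())
        return transform, subject

random_affine = RandomAffine(include=['t1'], p=0.5)
transform, _ = random_affine.apply_transform(None)
```

Because p is excluded from the forwarded arguments, the inner Affine keeps its default p=1.0 and the overall application probability stays 0.5.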

add_base_args(arguments, overwrite_on_existing=False)

Add the base-class init arguments to the given arguments, optionally overwriting keys that already exist.

validate_keys_sequence(keys, name) staticmethod

Ensure that the input is not a string but a sequence of strings.
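This guards against a common Python pitfall: a string is itself a sequence (of characters), so passing 't1' where ['t1'] was intended would silently iterate over letters. A hypothetical sketch of such a check, not the actual torchio implementation:

```python
def validate_keys_sequence(keys, name):
    # Reject a bare string, which would otherwise be iterated character by
    # character, and require a sequence of strings instead.
    if isinstance(keys, str):
        raise ValueError(f'"{name}" must be a sequence of strings, not a string')
    if not all(isinstance(key, str) for key in keys):
        raise ValueError(f'All elements of "{name}" must be strings')

validate_keys_sequence(['t1', 't2'], 'include')  # passes silently
try:
    validate_keys_sequence('t1', 'include')  # a bare string is rejected
    raise AssertionError('expected ValueError')
except ValueError:
    pass
```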

to_hydra_config()

Return a dictionary representation of the transform for Hydra instantiation.
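Hydra instantiates objects from dictionaries carrying a `_target_` key with the class's dotted import path plus the constructor arguments. For this transform, the returned dictionary would plausibly look like the sketch below; the exact dotted path is an assumption, not taken from the torchio source:

```python
# Hypothetical Hydra-style representation of CopyAffine('t1'); the
# '_target_' path is an assumption for illustration only.
transform_config = {
    '_target_': 'torchio.transforms.CopyAffine',
    'target': 't1',
}
```

Such a dictionary can be passed to hydra.utils.instantiate to reconstruct the transform from a configuration file.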