Subject

The Subject is a data structure used to store the images associated with a subject and any other metadata necessary for processing.

Subject objects can be sliced using the standard NumPy / PyTorch slicing syntax, returning a new subject with sliced images. This is only possible if all images in the subject have the same spatial shape.

All transforms applied to a Subject are saved in its `history` attribute.
Subject
Bases: dict
Class to store information about the images corresponding to a subject.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `*args` |  | If provided, a dictionary of items. | `()` |
| `**kwargs` | `dict[str, Any]` | Items that will be added to the subject sample. | `{}` |
Examples:
>>> import torchio as tio
>>> # One way:
>>> subject = tio.Subject(
... one_image=tio.ScalarImage('path_to_image.nii.gz'),
... a_segmentation=tio.LabelMap('path_to_seg.nii.gz'),
... age=45,
... name='John Doe',
... hospital='Hospital Juan Negrín',
... )
>>> # If you want to create the mapping beforehand, or the keys contain spaces:
>>> subject_dict = {
... 'one image': tio.ScalarImage('path_to_image.nii.gz'),
... 'a segmentation': tio.LabelMap('path_to_seg.nii.gz'),
... 'age': 45,
... 'name': 'John Doe',
... 'hospital': 'Hospital Juan Negrín',
... }
>>> subject = tio.Subject(subject_dict)
shape
property
Return the shape of the first image in the subject. Consistency of shapes across images in the subject is checked first.
spatial_shape
property
Return the spatial shape of the first image in the subject. Consistency of spatial shapes across images in the subject is checked first.
spacing
property
Return the spacing of the first image in the subject. Consistency of spacings across images in the subject is checked first.
get_inverse_transform(warn=True, ignore_intensity=False, image_interpolation=None)
Get a reversed list of the inverses of the applied transforms.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `warn` | `bool` | Issue a warning if some transforms are not invertible. | `True` |
| `ignore_intensity` | `bool` | If `True`, intensity transforms will be ignored. | `False` |
| `image_interpolation` | `str \| None` | Modify interpolation for scalar images inside transforms that perform resampling. | `None` |
apply_inverse_transform(**kwargs)
Apply the inverse of all applied transforms, in reverse order.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `**kwargs` |  | Keyword arguments passed on to get_inverse_transform(). | `{}` |
check_consistent_attribute(attribute, relative_tolerance=1e-06, absolute_tolerance=1e-06, message=None)
Check for consistency of an attribute across all images.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `attribute` | `str` | Name of the image attribute to check. | required |
| `relative_tolerance` | `float` | Relative tolerance for numpy.allclose(). | `1e-06` |
| `absolute_tolerance` | `float` | Absolute tolerance for numpy.allclose(). | `1e-06` |
Examples:
>>> import numpy as np
>>> import torch
>>> import torchio as tio
>>> scalars = torch.randn(1, 512, 512, 100)
>>> mask = (scalars > 0).type(torch.int16)
>>> af1 = np.diag([0.8, 0.8, 2.50000000000001, 1])
>>> af2 = np.diag([0.8, 0.8, 2.49999999999999, 1])  # small difference here (e.g. due to a different reader)
>>> subject = tio.Subject(
... image = tio.ScalarImage(tensor=scalars, affine=af1),
... mask = tio.LabelMap(tensor=mask, affine=af2)
... )
>>> subject.check_consistent_attribute('spacing') # no error as tolerances are > 0
Note
To check that all values for a specific attribute are close
between all images in the subject, numpy.allclose() is used.
This function returns True if
\(|a_i - b_i| \leq t_{abs} + t_{rel} * |b_i|\), where
\(a_i\) and \(b_i\) are the \(i\)-th element of the same
attribute of two images being compared,
\(t_{abs}\) is the absolute_tolerance and
\(t_{rel}\) is the relative_tolerance.
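The same tolerance check can be reproduced directly with numpy.allclose(), using the spacing values from the example above:

```python
import numpy as np

a = np.array([0.8, 0.8, 2.50000000000001])
b = np.array([0.8, 0.8, 2.49999999999999])

# numpy.allclose checks |a_i - b_i| <= atol + rtol * |b_i| elementwise
print(np.allclose(a, b, rtol=1e-06, atol=1e-06))  # True: within tolerance
print(np.allclose(a, b, rtol=0.0, atol=0.0))      # False: values differ exactly
```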
get_image(image_name)
Get a single image by its name.
load()
Load all images in the subject into RAM.
unload()
Unload all images in the subject from memory.
add_image(image, image_name)
Add an image to the subject instance.
remove_image(image_name)
Remove an image from the subject instance.