RandomLabelsToImage

Bases: RandomTransform, IntensityTransform

Randomly generate an image from a segmentation.

Based on the work by Billot et al.: A Learning Strategy for Contrast-agnostic MRI Segmentation and Partial Volume Segmentation of Brain MRI Scans of any Resolution and Contrast.

Parameters:

label_key (str | None, default: None)
    String designating the label map in the subject that will be used to generate the new image.

used_labels (Sequence[int] | None, default: None)
    Sequence of integers designating the labels used to generate the new image. If categorical encoding is used, used_labels refers to the values of the categorical encoding. If one-hot encoding or partial-volume label maps are used, used_labels refers to the channels of the label maps. By default, all labels are used. Missing voxels will be filled with zero or with voxels from an already existing volume; see image_key.

image_key (str, default: 'image_from_labels')
    String designating the key to which the new volume will be saved. If this key corresponds to an already existing volume, missing voxels will be filled with the corresponding values in the original volume.

mean (Sequence[TypeRangeFloat] | None, default: None)
    Sequence of means for each label. For each value \(v\), if a tuple \((a, b)\) is provided then \(v \sim \mathcal{U}(a, b)\). If None, the default_mean range will be used for every label. If not None and used_labels is not None, mean and used_labels must have the same length.

std (Sequence[TypeRangeFloat] | None, default: None)
    Sequence of standard deviations for each label. For each value \(v\), if a tuple \((a, b)\) is provided then \(v \sim \mathcal{U}(a, b)\). If None, the default_std range will be used for every label. If not None and used_labels is not None, std and used_labels must have the same length.

default_mean (TypeRangeFloat, default: (0.1, 0.9))
    Default mean range.

default_std (TypeRangeFloat, default: (0.01, 0.1))
    Default standard deviation range.

discretize (bool, default: False)
    If True, partial-volume label maps will be discretized. Has no effect if partial-volume label maps are not used. Discretization is done by taking the class with the highest value per voxel across the partial-volume label maps, using torch.argmax() on the channel dimension (i.e. 0).

ignore_background (bool, default: False)
    If True, input voxels labeled as 0 will not be modified.

**kwargs (default: {})
    See Transform for additional keyword arguments.
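As a rough illustration of the generation mechanism described above, here is a simplified sketch (not the library's actual implementation): for each label, a mean and a standard deviation are sampled uniformly from the default ranges, and that label's voxels are filled with Gaussian noise.

```python
import torch

# Simplified sketch: generate an intensity image from a toy label map.
# The real transform additionally handles per-label ranges, one-hot and
# partial-volume label maps, background handling, etc.
torch.manual_seed(0)
labels = torch.randint(0, 3, (8, 8, 8))  # toy label map with labels 0, 1, 2
default_mean, default_std = (0.1, 0.9), (0.01, 0.1)

image = torch.zeros(labels.shape)
for label in labels.unique():
    # Sample this label's mean and std from the default ranges
    mean = torch.empty(1).uniform_(*default_mean).item()
    std = torch.empty(1).uniform_(*default_std).item()
    mask = labels == label
    # Fill the label's voxels with Gaussian noise N(mean, std)
    image[mask] = mean + std * torch.randn(int(mask.sum()))
```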
Tip

It is recommended to blur the new images in order to simulate partial volume effects at the borders of the synthetic structures. See RandomBlur.

Examples:

>>> import torchio as tio
>>> subject = tio.datasets.ICBM2009CNonlinearSymmetric()
>>> # Using the default parameters
>>> transform = tio.RandomLabelsToImage(label_key='tissues')
>>> # Using custom mean and std
>>> transform = tio.RandomLabelsToImage(
...     label_key='tissues', mean=[0.33, 0.66, 1.], std=[0, 0, 0]
... )
>>> # Discretizing the partial volume maps and blurring the result
>>> simulation_transform = tio.RandomLabelsToImage(
...     label_key='tissues', mean=[0.33, 0.66, 1.], std=[0, 0, 0], discretize=True
... )
>>> blurring_transform = tio.RandomBlur(std=0.3)
>>> transform = tio.Compose([simulation_transform, blurring_transform])
>>> transformed = transform(subject)  # subject has a new key 'image_from_labels' with the simulated image
>>> # Filling holes of the simulated image with the original T1 image
>>> rescale_transform = tio.RescaleIntensity(
...     out_min_max=(0, 1), percentiles=(1, 99))   # Rescale intensity before filling holes
>>> simulation_transform = tio.RandomLabelsToImage(
...     label_key='tissues',
...     image_key='t1',
...     used_labels=[0, 1]
... )
>>> transform = tio.Compose([rescale_transform, simulation_transform])
>>> transformed = transform(subject)  # subject's key 't1' has been replaced with the simulated image
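The discretization applied with discretize=True can be sketched as a torch.argmax over the channel dimension. A minimal illustration on a made-up partial-volume map:

```python
import torch

# Toy partial-volume label map: 3 classes (channels) over a 4x4x4 volume,
# normalized so each voxel's channel values sum to 1.
pv = torch.rand(3, 4, 4, 4)
pv = pv / pv.sum(dim=0, keepdim=True)

# Discretization: assign each voxel the class with the highest
# partial-volume value, via argmax on the channel dimension (dim 0).
discrete = pv.argmax(dim=0)  # shape (4, 4, 4), values in {0, 1, 2}
```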

See also

RemapLabels.

__call__(data)

Transform data and return a result of the same type.

Parameters:

data (InputType, required)
    Instance of torchio.Subject, 4D torch.Tensor or numpy.ndarray with dimensions \((C, W, H, D)\), where \(C\) is the number of channels and \(W, H, D\) are the spatial dimensions. If the input is a tensor, the affine matrix will be set to identity. Other valid input types are a SimpleITK image, a torchio.Image, a NiBabel Nifti1 image or a dict. The output type is the same as the input type.

get_base_args()

Provides easy access to the arguments used to instantiate the base class (Transform) of any transform.

This method is particularly useful when a new transform can be represented as a variant of an existing transform (e.g. all random transforms), allowing for seamless instantiation of the existing transform with the same arguments as the new transform during apply_transform.

Note

The p argument (probability of applying the transform) is excluded to avoid multiplying the probabilities of the existing and the new transform.

add_base_args(arguments, overwrite_on_existing=False)

Add the init args to the existing arguments.

validate_keys_sequence(keys, name) staticmethod

Ensure that the input is not a string but a sequence of strings.

to_hydra_config()

Return a dictionary representation of the transform for Hydra instantiation.

arguments_are_dict()

Check if the main arguments are dicts.

Return True if all the attributes specified in args_names are of type dict.

plot

Source code
import torch
import torchio as tio

torch.manual_seed(42)

# Load the Colin27 (2008) template and keep only the tissue label map
colin = tio.datasets.Colin27(2008)
label_map = colin.cls
colin.remove_image('t1')
colin.remove_image('t2')
colin.remove_image('pd')

downsample = tio.Resample(1)  # resample to 1 mm isotropic
blurring_transform = tio.RandomBlur(std=0.6)  # simulate partial volume effects
create_synthetic_image = tio.RandomLabelsToImage(
    image_key='synthetic',
    ignore_background=True,
)
transform = tio.Compose((
    downsample,
    create_synthetic_image,
    blurring_transform,
))
colin_synth = transform(colin)
colin_synth.plot()