I have a 3D array holding voxels from an MRI dataset. The model could be stretched along one or more directions, e.g. the voxel size (x, y, z) could be 0.5x0.5x2 mm. Now I want to resample the array to equally sized, cubic voxels (say 1x1x1 mm) using interpolation.
You can also use PyTorch + TorchIO. The following resamples a 3D array with Gaussian interpolation (the zoom function is adapted from code that my supervisor wrote for me):
import numpy as np
import torchio as tio
from scipy.ndimage import gaussian_filter
from typing import Sequence, SupportsFloat, SupportsInt, Union

def zoom(
    label_stack: Sequence[Sequence[SupportsFloat]],
    spacing: Sequence[Union[SupportsInt, SupportsFloat]],
) -> np.ndarray:
    spacing = np.asanyarray(spacing)
    target = tuple(1 / spacing)
    # Resample the array with respect to the spacing parameters
    arr4d = np.expand_dims(label_stack, 0)
    transform = tio.Resample(target, image_interpolation="gaussian")
    transformed = transform(arr4d)
    result = np.squeeze(transformed)
    # Apply an arbitrary Gaussian to smooth the surface
    sigma = (round(spacing[0] * 0.25), 7, 7)
    gaussian_3d = gaussian_filter(result, sigma=sigma)
    # Binary thresholding
    highpass_threshold = np.median(gaussian_3d[gaussian_3d > 0.09])
    thresh = threshold_array(gaussian_3d, highpass=highpass_threshold)
    return thresh

def threshold_array(
    array: np.ndarray,
    lowpass: Union[SupportsInt, SupportsFloat, None] = None,
    highpass: Union[SupportsInt, SupportsFloat, None] = None,
) -> np.ndarray:
    if lowpass is not None and highpass is not None:
        msg = "Definition of both lowpass and highpass is illogical"
        raise ValueError(msg)
    if lowpass is None and highpass is None:
        msg = "Either lowpass or highpass has to be defined"
        raise ValueError(msg)
    array = np.asanyarray(array)
    filtered = np.zeros_like(array, dtype=np.int8)
    if lowpass is not None:
        filtered[array < lowpass] = 1
    if highpass is not None:
        filtered[array > highpass] = 1
    return filtered
The image array is z-x-y, and zoom has been designed accordingly.
Spacing is defined by the user; it has to be a sequence of the same length as the shape of the image array. The interpolated image will have shape spacing * old shape.
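A minimal usage sketch, assuming a hypothetical z-x-y binary label stack with 2 mm slices and 0.5 mm in-plane voxels (the shape and spacing values are illustrative, not from the original post):

import numpy as np

label_stack = np.random.randint(0, 2, size=(30, 128, 128)).astype(np.float32)  # hypothetical stack
isotropic = zoom(label_stack, spacing=(2.0, 0.5, 0.5))
print(isotropic.shape)  # roughly (60, 64, 64), i.e. spacing * old shape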
I wish there were a cubic method for interpolation in the Resample function. I tried Gaussian interpolation plus thresholding, but haven't gotten any great results. Better than whatever I could muster with SciPy, though.
Please credit when used.
You can use TorchIO for that:
import torchio as tio

image = tio.ScalarImage(sFileName)  # sFileName: path to the image file
resample = tio.Resample(1)          # target spacing: 1 mm isotropic
resampled = resample(image)
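A hedged end-to-end sketch of how this might look (the file names are placeholders I made up; spacing and save are standard TorchIO Image members):

import torchio as tio

image = tio.ScalarImage("t1.nii.gz")           # placeholder input path
resampled = tio.Resample(1)(image)             # resample to 1 mm isotropic
print(image.spacing, "->", resampled.spacing)  # e.g. (0.5, 0.5, 2.0) -> (1.0, 1.0, 1.0)
resampled.save("t1_isotropic.nii.gz")          # placeholder output path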
This is probably the best approach; the zoom method is designed for precisely this kind of task.
from scipy.ndimage import zoom
new_array = zoom(array, (0.5, 0.5, 2))
changes the size in each dimension by the specified factor. If the original shape of array was, say, (40, 50, 60), the new one will be (20, 25, 120).
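A self-contained sketch of the above (the random array is just a stand-in; order=3 is SciPy's default cubic spline, spelled out here since the first answer wished for cubic interpolation):

import numpy as np
from scipy.ndimage import zoom

array = np.random.rand(40, 50, 60)               # stand-in volume
new_array = zoom(array, (0.5, 0.5, 2), order=3)  # cubic spline interpolation (the default)
print(new_array.shape)                           # (20, 25, 120)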
SciPy has a large set of methods for signal processing. Most relevant here are decimate and resample_poly; I use the latter below:
from scipy.signal import resample_poly

factors = [(1, 2), (1, 2), (2, 1)]  # (up, down) per axis
for k in range(3):
    array = resample_poly(array, factors[k][0], factors[k][1], axis=k)
The factors (which must be integers) are of up- and down-sampling. That is, (1, 2) halves the size along that axis, while (2, 1) doubles it.
Possible downside: the process happens independently in each dimension, so the spatial structure may not be taken into account as well as by ndimage methods.
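Since decimate is mentioned above but not demonstrated, here is a hedged sketch for the two axes that only need downsampling; it cannot do the upsampling needed along z, and sticking with the default filter settings is my assumption:

import numpy as np
from scipy.signal import decimate

array = np.random.rand(40, 50, 60)  # stand-in volume
# Low-pass filter, then downsample by 2 along the first two axes
for axis in (0, 1):
    array = decimate(array, 2, axis=axis)
print(array.shape)                  # (20, 25, 60)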
The third option, scipy.interpolate.RegularGridInterpolator, is more hands-on, but also more laborious and without the benefit of filtering: it amounts to straightforward downsampling. We have to make a grid for the interpolator, using the original step sizes in each direction. After the interpolator is created, it needs to be evaluated on a new grid; its call method takes a different kind of grid format, prepared with mgrid.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

values = np.random.randint(0, 256, size=(40, 50, 60)).astype(np.uint8)  # example volume
steps = [0.5, 0.5, 2.0]  # original step sizes
x, y, z = [steps[k] * np.arange(values.shape[k]) for k in range(3)]  # original grid
f = RegularGridInterpolator((x, y, z), values)  # interpolator
dx, dy, dz = 1.0, 1.0, 1.0  # new step sizes
new_grid = np.mgrid[0:x[-1]:dx, 0:y[-1]:dy, 0:z[-1]:dz]  # new grid
new_grid = np.moveaxis(new_grid, (0, 1, 2, 3), (3, 0, 1, 2))  # reorder axes for evaluation
new_values = f(new_grid)
Downside: when a dimension is reduced by 2, for example, it will in effect drop every other value, which is simple downsampling. Ideally, one should average neighboring values in this case. In terms of signal processing, low-pass filtering precedes downsampling in decimation.
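One hedged mitigation along those lines: low-pass filter before evaluating the interpolator. The sigma values below are an untuned assumption, smoothing only the axes that are being downsampled (x and y in the example above):

from scipy.ndimage import gaussian_filter

# Smooth along x and y (downsampled by 2); leave z (upsampled) untouched
values_smooth = gaussian_filter(values.astype(np.float32), sigma=(1.0, 1.0, 0.0))
f = RegularGridInterpolator((x, y, z), values_smooth)  # then proceed as before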