In layer-fMRI, we spend a lot of time and effort to achieve high spatial resolutions and small voxel sizes during the acquisition. However, much of this hard-won spatial resolution can be lost again in the evaluation pipeline across multiple resampling steps.
In this post, I want to discuss sources of signal blurring during spatial resampling steps and potential strategies to account for them.
Resampling results in blurring
In conventional low-resolution fMRI analysis pipelines, there are usually multiple spatial resampling steps involved:
- Spatial resampling during motion correction of EPI time series within individual fMRI runs. This is the only resampling that you cannot avoid. No matter what kind of analysis you do, it should always involve motion correction.
- Spatial resampling during alignment of multiple fMRI runs within one scan session.
- Spatial resampling during distortion correction, e.g. with Topup or a B0 field map.
- Spatial resampling during alignment of the functional EPI data to a high-resolution “anatomical” reference scan.
- Spatial resampling to a subject-independent template (uncommon in layer-fMRI).
In each of these spatial resampling steps, the signal of any given voxel undergoes a spatial transformation to a new voxel at a different location. When the voxel signal is translated by an integer number of voxel sizes, the resampling does not result in signal blurring. In this case, the signal stays on the same 3D voxel grid and the signal values of the voxels are simply reassigned.
However, in practice, the signal of a voxel usually needs to be relocated by an arbitrary distance. For example, it can happen that the signal of a voxel needs to be shifted by half a voxel size. Then, the signal of one voxel needs to be redistributed across two new voxels. This means that the resulting image will look blurrier.
As a rule of thumb, every resampling step can lower the effective resolution by about one voxel size. This resampling-dependent blurring has been reported in the literature from time to time and is gaining extra attention in the context of layer-dependent fMRI. Ville Renvall showed this effective resolution loss due to spatial resampling in the context of distortion correction in the supplementary material of his paper. Later, Jonathan Polimeni also showed it for motion correction.
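To make this concrete, here is a minimal toy sketch in Python (scipy.ndimage with made-up numbers, not part of any actual pipeline): an integer shift merely reassigns a voxel value, whereas a half-voxel shift with linear interpolation splits it across two voxels.

```python
import numpy as np
from scipy import ndimage

# A point-like signal on a 1D "voxel" grid (toy example).
signal = np.zeros(9)
signal[4] = 1.0

# Integer shift: the value simply moves to another voxel, no blurring.
print(ndimage.shift(signal, 1, order=1))
# -> [0. 0. 0. 0. 0. 1. 0. 0. 0.]

# Half-voxel shift with linear interpolation: the value is split
# across two voxels, i.e. the image becomes blurrier.
print(ndimage.shift(signal, 0.5, order=1))
# -> [0. 0. 0. 0. 0.5 0.5 0. 0. 0.]
```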


Similar to the spatial blurring during spatial resampling, there can be an independent source of temporal blurring during slice-time correction (thanks to Remi Gau for pointing this out). In the presence of head motion during the readout, the spatial and temporal interpolations are no longer completely independent. Nipype allows motion correction and slice-time correction to be applied simultaneously.
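As a toy illustration of the temporal side (pure NumPy, made-up numbers): slice-time correction is just an interpolation along time, and resampling a slice that was acquired half a TR later spreads a single-volume signal change over two time points, analogous to the spatial case above.

```python
import numpy as np

# Hypothetical acquisition timing: one voxel's time series from a slice
# that was acquired half a TR after the reference slice.
tr = 2.0
n_vols = 10
vol_times = np.arange(n_vols) * tr        # reference slice times
slice_times = vol_times + 0.5 * tr        # this slice was acquired later

timeseries = np.zeros(n_vols)
timeseries[5] = 1.0                       # a single "event" in one volume

# Slice-time correction = temporal interpolation back onto the reference
# times. A half-TR shift with linear interpolation spreads the spike
# over two neighbouring time points.
corrected = np.interp(vol_times, slice_times, timeseries)
print(corrected)
```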
Strategies to reduce resolution loss in the evaluation
Strategy 1: Doing the layering in EPI space
The most efficient way of minimizing resolution losses due to spatial resampling is to refrain from unnecessary spatial resampling altogether.
In particular, steps 2-5 above can be avoided when all of the analysis is conducted directly in the distorted EPI space. In this case, layers are defined in EPI space, and the distortion correction and the registration to high-res “anatomical” scans are no longer necessary.

How to do the layering analysis in the distorted EPI space is explained in this blog post.
Of course, the motion correction cannot be skipped. So there is still this one motion correction resampling step that needs to be conducted.
Strategy 2: Aligning the anatomy to EPI and not the other way around
Aside from the motion correction, most of the spatial resampling steps can be avoided by distorting and aligning the “anatomical” reference data to the functional data, and not the other way around.
This step is explained in more detail in a recent blog post here.
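As a minimal sketch of the final regridding step with nibabel (hypothetical file names, and assuming the “anatomical” image has already been registered/distorted to match the EPI, e.g. with ANTs): only the anatomy is interpolated onto the (distorted) EPI grid, while the functional time series is never resampled beyond motion correction.

```python
import nibabel as nib
from nibabel.processing import resample_from_to

# Hypothetical file names; the anatomy is assumed to already be aligned
# to the EPI so that only this single interpolation is needed.
anat = nib.load('anat_registered_to_epi.nii.gz')
epi = nib.load('mean_epi.nii.gz')

# One spline interpolation of the anatomy onto the EPI voxel grid.
anat_on_epi_grid = resample_from_to(anat, epi, order=3)
nib.save(anat_on_epi_grid, 'anat_on_epi_grid.nii.gz')
```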

This approach can reduce the number of resampling steps. However, I found it challenging because of poor registration quality in many protocols and participants.
Strategy 3: Applying spatial resampling on a finer grid than the voxel size
Since the resolution loss is a direct result of redistributing signal between voxels of finite size, it can in theory be minimized by working on a voxel grid that is finer than the effective resolution.
Valentin Kemper came up with the following way of reconstructing EPI data on a finer grid directly at the scanner in the ICE-chain, before the individual coil data are combined. His recipe is given below:
- Open twix (Ctrl&Esc -> run).
- Select the dataset in the left panel (not the main one in the middle).
- Click the xBuilder symbol.
- Click edit. This will open the file with all the recon parameters. You can open the search field with the binocular icon.
- Search for:
- “imagesp” and multiply the number by 2
- “uc2dinterpolation” and set it to 1
- “roft” and multiply it by 2
- “peft” and multiply it by 2
- “3dft” and multiply it by 2
- “resolution” and divide the number by 2. This means that the original matrix size should be even-numbered
- Save the file and click the start button to retro-reconstruct the dataset on a finer grid.
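For completeness, here is an offline approximation in Python (nibabel, hypothetical file names). It is not the same as the scanner-side regridding before coil combination described in the recipe above, but upsampling the reconstructed data onto a finer grid before any further resampling serves a similar purpose of reducing the relative blur of later interpolations, at the cost of much larger files.

```python
import nibabel as nib
from nibabel.processing import resample_to_output

# Hypothetical single 3D EPI volume (apply per volume for a time series).
epi = nib.load('epi_vol.nii.gz')
orig_voxel_sizes = epi.header.get_zooms()[:3]
fine_voxel_sizes = [v / 2.0 for v in orig_voxel_sizes]

# Spline-interpolate onto a grid with half the voxel size in each
# dimension; this multiplies the data size by roughly 8.
epi_fine = resample_to_output(epi, voxel_sizes=fine_voxel_sizes, order=3)
nib.save(epi_fine, 'epi_vol_finegrid.nii.gz')
```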

For most layer-fMRI applications, small fields of view are used and images do not need to be saved for every RF channel individually. In those cases, this approach is very effective at minimizing resolution losses.
However, I did not find it practical for all applications. It increases the data size by a factor of 8. So, my usual data sizes of 32 GB (uncombined) would now be 256 GB! This is the size of the entire pixel database on the scanner host, and it takes about 14 hours to transfer.
Isn’t it embarrassing that a multi-million dollar MRI scanner is limited by such mundane data-size constraints?
Valentin Kemper describes this approach in one of his recent papers. Don’t forget to check out the fascinating GIFs that he was able to generate with it.
Strategy 4: Using an appropriate spatial interpolation algorithm
The spatial resampling can be done with multiple interpolation algorithms. The most popular are probably nearest neighbor, linear, and spline interpolation. To my surprise, resampling with linear interpolation is much too blurry for how popular it is.
My personal advice is to refrain from nearest-neighbor and “linear” interpolation and to use higher-order splines instead.
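A small numerical illustration of why (scipy.ndimage, toy numbers): shifting a high-frequency test pattern by half a voxel and back again (i.e., two resampling steps) shows how much fine structure each interpolator preserves.

```python
import numpy as np
from scipy import ndimage

# A high-frequency test pattern (period of 4 voxels), standing in for
# fine structure near the resolution limit.
x = np.sin(2 * np.pi * np.arange(64) / 4.0)

# Shift by half a voxel and back again, then measure how much of the
# original pattern survives for each interpolation order.
# Note: nearest neighbour keeps the amplitude but misplaces the signal
# by up to half a voxel, which is why it is also not recommended.
for order, name in [(0, 'nearest'), (1, 'linear'),
                    (3, 'cubic spline'), (5, 'quintic spline')]:
    back = ndimage.shift(ndimage.shift(x, 0.5, order=order), -0.5, order=order)
    retained = np.std(back[8:-8]) / np.std(x[8:-8])   # ignore edge effects
    print(f'{name:15s} amplitude retained: {retained:.2f}')
```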





Strategy 5: Doing all spatial resampling in one step
In the vast majority of layer-papers that I looked at, the various resampling steps (motion correction, alignment between runs, distortion correction, and registration to anatomy) are not all applied as separate consecutive steps; in most cases, a few of them are combined. E.g., motion correction and registration between runs are often applied in one step. Similarly, distortion correction and registration to anatomy are often applied together.
Programs like AFNI and FMRIPREP make it possible, at least in principle, to concatenate all spatial resampling steps into one warp field that is applied only once. This can minimize the corresponding resolution loss.
However, in the regime of the sub-millimeter voxels of layer-fMRI, I still found this approach challenging. At layer resolutions, most of the conventional alignment pipelines do not work well enough at the sub-voxel level out of the box. In fact, in most layer-papers, a specialized, custom-optimized alignment strategy with manual interventions is applied.
It is also not so straightforward to combine the motion correction of SPM (my favorite for MOCO), the distortion correction of FSL (my favorite for Topup), and ANTs (my favorite for alignment to “reference” data). Hence, I found it practically challenging to apply all resampling operations in one single step. And I don’t know of any layer-fMRI pipeline (compared here) that can apply motion correction, distortion correction, and registration in one step.
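For the affine part of the problem, the concatenation itself is conceptually simple. Below is a hypothetical sketch in Python (scipy.ndimage, made-up transforms, not tied to the file formats of SPM/FSL/ANTs) contrasting two consecutive interpolations with a single interpolation of the concatenated transform.

```python
import numpy as np
from scipy import ndimage

# Two toy 4x4 homogeneous voxel-to-voxel transforms (made-up numbers),
# standing in for motion correction and coregistration. In scipy's
# convention, affine_transform expects the mapping from OUTPUT voxel
# coordinates to INPUT voxel coordinates.
moco = np.eye(4);  moco[:3, 3] = [0.4, -0.2, 0.1]    # small head motion (voxels)
coreg = np.eye(4); coreg[:3, 3] = [1.3, 0.7, -0.5]   # alignment to a reference

combined = moco @ coreg          # one cumulative output->input mapping

vol = np.random.rand(64, 64, 32) # stand-in for one EPI volume

# Two consecutive interpolations (what we want to avoid) ...
twice = ndimage.affine_transform(
    ndimage.affine_transform(vol, moco, order=3), coreg, order=3)

# ... versus a single interpolation with the concatenated transform.
once = ndimage.affine_transform(vol, combined, order=3)
```

The two results differ only by the extra interpolation in the two-step version; the practical difficulty lies in obtaining and converting the actual (including non-linear) transforms from the different packages, not in the multiplication itself.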

I believe that future work in this direction will make it easier.
Hi. Someone sent this post to me (not sure why) so I’ll try to add something constructive.
Basically the underlying idea is correct: avoid resampling because it is equivalent to smoothing. This is well known.
So what?
Your initial setup here is a bit of a strawman. No one in their right mind would motion correct to a target, resample-and-write-out, then coregister to a T1 and resample-and-write-out, then normalize to a template and resample-and-write-out. That is just silly and not a reflection of reality. For example, as best I can remember, SPM motion correction only calculates an affine matrix for each volume and alters the header. It doesn’t resample anything (realign and unwarp is different, but you want FSL for unwarping, right?). So listing all the places one COULD resample isn’t the same as listing the places where one actually DOES resample.
So your strategy #5 there is really the only issue here, since it makes all the other discussion moot. 2 becomes roughly equivalent (though slightly worse), 4 should always be done, and 3 is stupid as an actual solution to the original problem. The AFNI people can speak for themselves, but 5 is not as involved as you make it out to be, nor is the combination across multiple platforms a big deal (with minor caveats). Motion correction and coregistration (and whatever it is you are doing with multiple runs) are done with matrices. Multiply the matrices and you have the cumulative movement. Can’t get much easier than that. AFNI, FSL, SPM, and ANTs affine transform matrices are interchangeable with minor formatting differences. Tools exist to convert some. At worst, you should be able to take the 6 translation+rotation parameters that they all spit out and create whichever type of matrix you need from that. It’s trivial. The non-linear part isn’t. FSL is different from ANTs and SPM (has AFNI moved out of the stone age and implemented a WORKING nonlinear warping yet?). However, there are ways to convert or recalculate if desired. Multiple groups have published on this one way or another.
So do 5. Spend the time to figure out how to do it with the combination of methods you want to use.
Thank you very much for your comment on this issue.
I fully agree that option 5 would be a desirable solution to the problem (while options 1-4 might work well enough too).
Unfortunately, in the realm of layer-resolutions, it’s a long way from the realization that something *should* be possible to the point where it *works* robustly. This is partly due to the high level of accuracy required in layer-fMRI and partly due to the fact that layer-fMRI data usually cover only a very small FOV (<1% of the cortex).
In case you achieve a stable way of doing it, your input would be highly appreciated. E.g., example data can be downloaded here: https://activecho.cit.nih.gov/t/i5d1hoj6. Given that none of the current layer-fMRI studies uses this option, your contribution could have a huge impact on the field.
→ SPM does in fact also allow resampling.
→ AFNI does in fact do non-linear warping.