The purpose of this blog post is to provide guidance on how to get started with the layer-fMRI analysis suite: LAYNII. This post is an extended version of the LAYNII README.
When you want to analyze functional magnetic resonance imaging (fMRI) signals across cortical depths, you need to know which voxel overlaps with which cortical depth. The relative cortical depth of each voxel is calculated from the geometry of the nearby cortical gray matter boundaries: the inner gray matter boundary, which typically faces the white matter, and the outer gray matter boundary, which typically faces the cerebrospinal fluid. Once the cortical depth of each voxel has been calculated from this geometry, layers can be assigned to cortical depths according to several principles.
One of the fundamental principles used for “assigning layers to cortical depths” (aka layering, or layerification) is the equi-volume principle. This layering principle was proposed by Bok in 1929, who subdivided the cortex into small layer chunks of equal volume. I.e., gyri and sulci exhibit any given layer at different cortical depths, depending on the local cortical folding and the resulting volume distribution (see figure below).
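To make the principle concrete, here is a toy sketch of equi-volume layering for a single idealized wedge of cortex, in which the cross-sectional area is assumed to grow linearly from the inner to the outer boundary (the function name and the linear-area assumption are mine for illustration, not LAYNII's implementation):

```python
import numpy as np

def equivolume_boundaries(area_wm, area_csf, n_layers):
    """Depths (0 = inner/WM border, 1 = outer/CSF border) of the boundaries
    that split a wedge-shaped cortical chunk into equal-volume layers.

    The cross-sectional area is assumed to vary linearly with depth, so the
    cumulative volume is quadratic in depth and each layer boundary is the
    root of a quadratic equation.
    """
    d_area = area_csf - area_wm
    total_volume = (area_wm + area_csf) / 2.0
    # Cumulative volume each boundary must reach: k/N of the total volume
    targets = np.arange(1, n_layers) / n_layers * total_volume
    if np.isclose(d_area, 0.0):
        # Flat cortex: equi-volume coincides with equi-distant layering
        return targets / total_volume
    # Solve (d_area/2) d^2 + area_wm d - target = 0 for each boundary depth d
    return (-area_wm + np.sqrt(area_wm**2 + 2.0 * d_area * targets)) / d_area
```

For a gyral crown (outer area larger than inner area), the equi-volume mid-boundary lands deeper toward the CSF than the equi-distant midpoint, which is exactly the gyrus/sulcus shift the figure illustrates.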
When it comes to applying the equi-volume principle in layer-fMRI, equi-volume layering has gone through quite a story: a plot with many parallels to Anakin Skywalker.
In this blog post, I evaluate the equi-volume layering approach and demonstrate how to use it in the LAYNII software.
Doing layer-fMRI sometimes feels like doing nothing more than noise management. One must have a fully grown masochistic personality trait to enjoy working with such messy data. Namely, layer-fMRI time series suffer from each and every one of the artifacts of conventional fMRI; they are just much worse, and there are a few extra artifacts to worry about on top. As such, layer-fMRI time series usually suffer from amplified ghosting, time-variable intermittent ghosting, non-Gaussian noise, noise coupling, motion artifacts, and signal blurring.
Thus, we need to have a set of metrics that tell us whether or not we can trust our specific data sets. We would like to have quality assessment (QA) tools that tell us when we need to stop wasting our time on artifact-infested data and throw them away. It would be extremely helpful to have tools that extract a basic set of QA metrics that are specifically optimized and suited for sub-millimeter resolution fMRI artifacts.
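As a flavor of what such a metric looks like, here is a minimal sketch of one of the most basic QA measures, voxel-wise temporal SNR (the function name is hypothetical; this is not LAYNII code):

```python
import numpy as np

def tsnr(timeseries):
    """Voxel-wise temporal SNR of a 4D fMRI array (x, y, z, time):
    mean over time divided by standard deviation over time.
    Voxels with zero temporal variance are set to 0."""
    std = timeseries.std(axis=-1)
    return np.divide(timeseries.mean(axis=-1), std,
                     out=np.zeros_like(std), where=std > 0)
```

Low tSNR in a region that should be well-shimmed, or tSNR maps with ghost-shaped stripes, are quick red flags before any layer analysis is attempted.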
This blog post discusses a number of these layer-fMRI specific QA metrics and describes how to generate them in LAYNII.
Did you acquire a layer-fMRI study without VASO? Did you even acquire your data with GE-BOLD EPI? Don’t you know that this contrast is dominated by unwanted signals from locally unspecific large draining veins?
That’s ok. Don’t be down in the mouth. Nobody is perfect. It happens to the best of us 😉 Luckily, there are several models out there that can help you tease out the tiny microvascular GE-BOLD signal that you care about and remove the dominating macrovascular venous signal. However, note that some of these vein-removal models work better than others. None of the models is perfect! But some of them are useful. The most relevant approaches are implemented voxel-wise in the LAYNII software suite.
In this blog post, I want to describe these de-veining models and how to use them to get rid of unwanted macrovascular venous signals in LAYNII.
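To give a flavor of the general idea behind model-based de-veining (without claiming to reproduce any specific LAYNII implementation), here is a toy sketch in which the measured GE-BOLD profile is modeled as the local microvascular signal plus a fixed fraction of signal drained upward from all deeper layers; de-veining then amounts to inverting the resulting lower-triangular system. The leakage fraction and all names are made up for illustration:

```python
import numpy as np

def deveined_profile(measured, leak=0.3):
    """Toy de-veining: assume the measured signal at each depth equals the
    local signal plus `leak` times the local signal of every deeper layer
    (blood drains from WM toward the pial surface). Index 0 = deepest layer.
    Solving the lower-triangular system recovers the local profile."""
    n = len(measured)
    # Ones on the diagonal (local signal), `leak` below it (drained signal)
    drainage = np.eye(n) + leak * np.tril(np.ones((n, n)), k=-1)
    return np.linalg.solve(drainage, np.asarray(measured, dtype=float))
```

In this picture, the characteristic superficial bias of GE-BOLD profiles is partly an accumulation effect, and inverting the drainage model flattens it back out.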
This post lists the background material for the hands-on tutorial on high-resolution EPI on SIEMENS scanners.
In this blog post, I want to share my thoughts on the number of layers that should be extracted from any given dataset. I will try to give an overview of how many layers are usually extracted in the field, I’ll describe my personal choices of layer numbers, and I will try to discuss the challenges of layer signal extraction along the way.
In this blog post Sri Kashyap and I describe how to deal with the registration of high-resolution datasets across days, across different resolutions, and across different sequences.
I am particularly fond of the following two tools: first, ITK-SNAP for visually guided manual alignment; and second, the ANTs programs antsRegistration and antsApplyTransforms.
In this blog post, I describe a quick example of how to analyze high-resolution data across layers and columns with LAYNII.
This is a step-by-step description of how to obtain layer profiles from any high-resolution fMRI dataset. It is based on manually delineated ROIs and does not require tricky analysis steps such as distortion correction, registration to whole-brain “anatomical” datasets, or automatic tissue-type segmentation. Hence, it is a very quick way to get a first glance at freshly acquired data.
The important steps are: 1.) upscaling, 2.) manual delineation of GM, 3.) calculation of cortical depths in the ROI, and 4.) extraction of functional data based on the calculated cortical depths.
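Steps 3.) and 4.) can be sketched as follows, assuming the distance of each voxel to the two GM boundaries is already available as an array; equi-distant binning is used here for brevity (the post above discusses why equi-volume layering can be preferable), and all names are hypothetical:

```python
import numpy as np

def layer_profile(func, d_wm, d_csf, roi, n_layers=3):
    """Average functional values per layer inside a manually drawn ROI.

    d_wm / d_csf: per-voxel distances to the inner (WM-facing) and outer
    (CSF-facing) GM boundaries; roi: boolean mask of GM voxels.
    Normalized depth = d_wm / (d_wm + d_csf) runs from 0 at the WM border
    to 1 at the CSF border and is binned into equi-distant layers.
    """
    depth = d_wm[roi] / (d_wm[roi] + d_csf[roi])
    layers = np.clip((depth * n_layers).astype(int), 0, n_layers - 1)
    values = func[roi]
    return np.array([values[layers == k].mean() for k in range(n_layers)])
```

The result is a profile of one mean value per layer, deep to superficial, ready for plotting.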
- Manual alignment of MP2RAGE with EPI (optional when the MP2RAGE is acquired in the same session)
- ANTs alignment of MP2RAGE and EPI (part of anatomical_maser.sh, see GitHub)
- Running FreeSurfer on MP2RAGE data in EPI space (part of anatomical_maser.sh, see GitHub)
- Using SUMA to get finely sampled tissue borders in EPI-voxel space (in oblique space) (part of anatomical_maser.sh, see GitHub)
- Manual correction of the FreeSurfer GM ribbon
- Calculating layers from the GM ribbon in NeuroDebian
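The last step can be sketched as follows, assuming a rim volume in which label 1 marks the CSF-facing border, 2 the WM-facing border, and 3 the gray matter (this label convention and the equi-distant depth metric are assumptions of this sketch, not a description of the actual pipeline):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def layers_from_rim(rim, n_layers=3):
    """Equi-distant layering from a labeled rim volume (assumed labels:
    1 = CSF-facing border, 2 = WM-facing border, 3 = gray matter).
    Returns an int volume with layer 1 (deep) .. n_layers (superficial),
    and 0 outside gray matter."""
    gm = rim == 3
    # Euclidean distance of every voxel to the nearest border voxel
    d_wm = distance_transform_edt(rim != 2)
    d_csf = distance_transform_edt(rim != 1)
    depth = np.zeros_like(d_wm)
    depth[gm] = d_wm[gm] / (d_wm[gm] + d_csf[gm])
    layers = np.zeros(rim.shape, dtype=int)
    layers[gm] = np.clip((depth[gm] * n_layers).astype(int),
                         0, n_layers - 1) + 1
    return layers
```

In a single straight cortical column, this assigns layer labels that increase monotonically from the WM border to the CSF border, as expected.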