Are you ever annoyed by how hard it is to get brain data off the scanner? Scanners usually contain private information about patients and are thus embedded in maximally restrictive clinical cyber-security environments, which makes it quite complicated to get access to the data, especially when visiting collaborating sites.
In this Hackathon project, we aim to develop a purely uni-directional (and thus safe) data streaming “hack” to transfer MRI data directly to the cloud by means of dynamic QR codes.
In the early days of the Internet, modems (modulator-demodulators) were used to (i) convert digital information into audio streams, (ii) transfer them across telephone lines, and (iii) convert them back into the digital domain. Here, we aim to do the same thing with the pixel data of MRI scans. However, instead of audio signals, we will use machine-readable visual information: QR codes.
Specific aims of the Brain QR modem
1.) We will develop an ICE-Functor that converts pixel data to QR codes in real time
2.) We will develop an Android app that converts the streamed QR codes into a series of PNG images that are directly streamed to the cloud (a Drive folder).
3.) We will develop a LayNii program that converts stacks of PNG images into nii files.
This project consists of many consecutive components of a modem and will likely take 2-3 rounds of Hackathons to complete.
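To give a flavor of what the streaming would look like, below is a minimal Python sketch of the chunking step behind aim 1.): slicing raw pixel data into payloads that each fit into a single QR code. The capacity number and the little frame header are assumptions for illustration, not the final protocol.

```python
import base64

# A version-40 QR code with low error correction holds up to ~2953 bytes of
# binary data; we stay a bit below that. The "index/total|" frame header is a
# made-up convention so the receiving app can detect dropped frames.
QR_CAPACITY_BYTES = 2900

def chunk_for_qr(pixel_bytes: bytes, capacity: int = QR_CAPACITY_BYTES):
    """Yield base64 text payloads, one per QR code frame."""
    # base64 inflates data by 4/3, so shrink the raw chunk size accordingly
    raw_chunk = (capacity // 4) * 3
    n_frames = (len(pixel_bytes) + raw_chunk - 1) // raw_chunk
    for i in range(n_frames):
        chunk = pixel_bytes[i * raw_chunk:(i + 1) * raw_chunk]
        header = f"{i:05d}/{n_frames:05d}|"
        yield header + base64.b64encode(chunk).decode("ascii")
```

Each payload string would then be rendered as one QR frame (with whatever QR library is available on the ICE side) and flashed on the screen for the phone to pick up.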
When you want to analyze functional magnetic resonance imaging (fMRI) signals across cortical depths, you need to know which voxel overlaps with which cortical depth. The relative cortical depth of each voxel is calculated based on the geometry of the proximal cortical gray matter boundaries. One of these boundaries is the inner gray matter boundary which often faces the white matter and the other boundary is the outer gray matter boundary which often faces the cerebrospinal fluid. Once the cortical depth of each voxel is calculated based on the cortical gray matter geometry, corresponding layers can be assigned to cortical depths based on several principles.
One of the fundamental principles used for “assigning layers to cortical depths” (aka layering, or layerification) is the equi-volume principle. This layering principle was proposed by Bok in 1929, who subdivided the cortex into little layer chunks that have the same volume. I.e., gyri and sulci exhibit any given layer at a different cortical depth, depending on the local cortical folding and the associated volumes (see figure below).
With respect to applying the equi-volume principle in layer-fMRI, equi-volume layering has gone through quite a story, a plot with many parallels to that of Anakin Skywalker.
In this blog post, I evaluate the equi-volume layering approach and demonstrate how to use it in the LAYNII software.
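For those who want to play with the math behind it, here is a 1-D toy version of the equi-volume principle for a single cortical column, modeled as a wedge whose cross-sectional area grows linearly between the two gray matter boundaries. This is only an illustration of the idea, not the LAYNII implementation.

```python
import math

def equivolume_depths(a_wm, a_csf, n_layers):
    """Toy 1-D equi-volume layering for one cortical column.

    The column is modeled as a wedge whose cross-sectional area grows
    linearly from a_wm (at the inner/WM boundary) to a_csf (at the
    outer/CSF boundary). Returns the normalized depths (0 = WM, 1 = CSF)
    of the layer boundaries such that every compartment has equal volume.
    """
    total = a_wm + (a_csf - a_wm) / 2.0   # V(1), total volume of the wedge
    half_slope = (a_csf - a_wm) / 2.0     # 'a' in  a*t^2 + a_wm*t = V(t)
    depths = []
    for k in range(n_layers + 1):
        target = total * k / n_layers     # cumulative volume wanted at boundary k
        if abs(half_slope) < 1e-12:       # flat cortex -> reduces to equidistant
            depths.append(k / n_layers)
        else:
            # solve half_slope*t^2 + a_wm*t - target = 0 for t in [0, 1]
            t = (-a_wm + math.sqrt(a_wm**2 + 4*half_slope*target)) / (2*half_slope)
            depths.append(t)
    return depths
```

On a gyral crown (outer area larger than inner), the boundaries shift toward the CSF; in a sulcal fundus they shift toward the WM, which is exactly Bok's observation.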
Doing layer-fMRI sometimes feels like doing nothing more than noise management. One must have a fully grown masochistic personality trait to enjoy working with such messy data. Namely, layer-fMRI time series suffer from each and every one of the artifacts of conventional fMRI; they are just much worse, and there are a few extra artifacts to worry about on top. As such, layer-fMRI time series usually suffer from amplified ghosting, time-variable intermittent ghosting, non-Gaussian noise, noise coupling, motion artifacts, and signal blurring.
Thus, we need to have a set of metrics that tell us whether or not we can trust our specific data sets. We would like to have quality assessment (QA) tools that tell us when we need to stop wasting our time on artifact-infested data and throw them away. It would be extremely helpful to have tools that extract a basic set of QA metrics that are specifically optimized and suited for sub-millimeter resolution fMRI artifacts.
This blog post discusses a number of these layer-fMRI specific QA metrics and describes how to generate them in LAYNII.
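As a taste of the simplest such metric, consider the temporal signal-to-noise ratio (tSNR): the temporal mean of a voxel's time course divided by its temporal standard deviation. A bare-bones Python sketch of the definition (LAYNII computes such metrics on whole nii time series, alongside higher-order ones):

```python
import statistics

def tsnr(timeseries):
    """Temporal signal-to-noise ratio of one voxel's time course.

    tSNR = temporal mean / temporal standard deviation. In practice the
    time series would be detrended first; here we keep the bare definition.
    """
    mu = statistics.fmean(timeseries)
    sd = statistics.stdev(timeseries)   # sample standard deviation (N-1)
    return mu / sd if sd > 0 else float("inf")
```

A tSNR map alone does not catch intermittent ghosting or non-Gaussian noise, which is why the post goes beyond it, but it is the first number to look at.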
Did you acquire a layer-fMRI study without VASO? Did you even acquire your data with GE-BOLD EPI? Don’t you know that this contrast is dominated by unwanted signals from locally unspecific large draining veins?
That’s OK. Don’t be down in the mouth. Nobody is perfect. It happens to the best of us 😉 Luckily, there are several models out there that should help you tease out the tiny microvascular GE-BOLD signal that you care about and remove the dominating macrovascular venous signal. However, note that some of these vein-removal models work better than others. None of the models is perfect! But some of them are useful. The most relevant approaches are implemented in the LAYNII software suite on a voxel-wise level.
In this blog post, I want to describe these de-veining models and how to use them to get rid of unwanted macrovascular venous signals in LAYNII.
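To illustrate the general flavor of these models: many of them treat the measured laminar GE-BOLD profile as the local microvascular signal plus an accumulating contribution that ascending veins drain toward the surface, and then invert that. Below is a deliberately simplified Python sketch of this idea with a single made-up drainage weight; it is a toy, not the exact model implemented in LAYNII.

```python
def deconvolve_draining_veins(profile, drain_weight=0.5):
    """Toy deconvolution of a laminar GE-BOLD profile.

    Simplified model: the signal measured at depth i is the local
    microvascular response plus a fraction of everything draining up
    from the deeper depths,
        measured[i] = local[i] + drain_weight * sum(local[0:i]),
    with index 0 = deepest (WM-adjacent) depth. Inverting this
    lower-triangular system by forward substitution recovers the local
    profile. drain_weight = 0.5 is an illustration value, not a
    physiologically calibrated number.
    """
    local = []
    running = 0.0                 # accumulated deep signal so far
    for m in profile:             # walk from deep to superficial
        v = m - drain_weight * running
        local.append(v)
        running += v
    return local
```

The more realistic versions replace the single weight with depth-dependent drainage terms derived from vascular models, but the triangular structure of the inversion stays the same.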
How can one assign layers to discrete voxels? Is it possible to perform topographical fMRI analyses across layers and columns directly in the original voxel space that raw data from the scanner come in?
The MP2RAGE sequence is very popular for 7T anatomical imaging and is very commonly used to acquire 0.7-1 mm resolution whole-brain anatomical reference data. Aside from this common application, it can also be very helpful for layer-fMRI studies to obtain even higher-resolution T1 maps in the range of 0.5 mm isotropic. However, when optimizing MP2RAGE sequence parameters for layer-fMRI studies, there are a few things that might be helpful to keep in mind.
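As a reminder of what the sequence actually computes: the two inversion-time images are combined voxel-wise into the uniform (UNI) image with the ratio from Marques et al. (NeuroImage, 2010), which is bounded to [-0.5, 0.5] and largely cancels receive bias and M0. A per-voxel Python sketch:

```python
def mp2rage_uni(s1: complex, s2: complex) -> float:
    """Combine the two MP2RAGE inversion images into one UNI voxel value.

    s1, s2 are the complex GRE signals at the two inversion times TI1/TI2.
    UNI = Re(s1 * conj(s2)) / (|s1|^2 + |s2|^2), bounded to [-0.5, 0.5]
    (Marques et al., NeuroImage 2010).
    """
    denom = abs(s1) ** 2 + abs(s2) ** 2
    if denom == 0.0:
        return 0.0          # pure-noise background voxel
    return (s1 * s2.conjugate()).real / denom
```

The bounded ratio is why the UNI background looks salt-and-pepper noisy: in air, the expression divides one noise value by another, which is one of the practical quirks discussed below.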
In this post, I would like to discuss the challenges of using the popular MP2RAGE sequence in layer-fMRI studies. Specifically, I will discuss challenges/features regarding:
In this blog post I want to go through the analysis pipeline of layer-dependent VASO.
I will go through all the analysis steps needed to get from raw scanner data to final layer profiles. The entire thing will take about 30 min (10 min of analysis and 20 min of explaining and browsing through the data).
Throughout the analysis pipeline, I use the following software packages: SPM, AFNI, LAYNII, and gnuplot (if you want fancier plotting tools).
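One step in this pipeline that is specific to VASO is the BOLD correction: the blood-nulled time course is dynamically divided by the interleaved not-nulled (BOLD) time course to cancel the BOLD weighting that both share. A per-voxel Python sketch of that step (assuming the two time courses have already been temporally aligned onto a common TR grid):

```python
def bold_correct_vaso(nulled, not_nulled, eps=1e-12):
    """Dynamic-division BOLD correction for SS-SI VASO, one voxel.

    nulled / not_nulled are the blood-nulled and BOLD-weighted time
    courses. Dividing them time point by time point cancels the shared
    BOLD weighting and leaves a CBV-weighted (negative-going) signal.
    eps guards against division by zero in background voxels.
    """
    return [n / max(b, eps) for n, b in zip(nulled, not_nulled)]
```

In the actual pipeline this is done on whole nii time series (e.g. with LAYNII's BOLD-correction program), but the voxel-wise arithmetic is just this division.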
In this blog post, I want to share my thoughts on the number of layers that should be extracted from any given dataset. I will try to give an overview of how many layers are usually extracted in the field, I’ll describe my personal choices of layer numbers, and I will try to discuss the challenges of layer signal extraction along the way.
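To make explicit what “choosing a number of layers” means computationally, here is a minimal Python sketch of binning normalized cortical depths into a chosen number of layer bins. The bin count is a free analysis parameter: asking for more bins than there are independent voxel depths does not add information, it only resamples the same profile.

```python
def assign_layer_bins(depths, n_layers):
    """Bin normalized cortical depths (0 = WM, 1 = CSF) into n_layers bins.

    Returns a 1-based layer index per voxel. Voxels at exactly depth 1.0
    are clamped into the topmost bin.
    """
    bins = []
    for d in depths:
        k = min(int(d * n_layers), n_layers - 1)
        bins.append(k + 1)
    return bins
```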
Often we would like to normalize depth-dependent fMRI signals and assign them to specific cytoarchitectonically defined cortical layers. However, we often only have access to cytoarchitectonic histology data in the form of figures in papers. And since we only have the web view or the PDF available, we cannot easily extract those data as a layer profile. Since most layering tools are designed for nii data only, paper figures (e.g. JPG or PNG) are not straightforwardly transformed into layer profiles.
In this blog post, I describe a set of steps to convert any paper figure into a nii file that allows the extraction of layer profiles.
This is a step-by-step description of how to obtain layer profiles from any high-resolution fMRI dataset. It is based on manually delineated ROIs and does not require tricky analysis steps such as distortion correction, registration to whole-brain “anatomical” datasets, or automatic tissue-type segmentation. Hence, this is a very quick way to get a first glance at freshly acquired data.
The important steps are: 1.) Upscaling, 2.) Manual delineation of GM, 3.) Calculation of cortical depths in ROI, 4.) Extracting functional data based on calculated cortical depths.
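Step 3.) can be sketched very compactly: once the two hand-drawn borders are available as point lists, the equi-distant depth of a voxel is its distance to the WM border normalized by the local thickness. A brute-force Python illustration (a real implementation would use a proper distance transform on the image grid):

```python
import math

def cortical_depth(voxel, wm_boundary, csf_boundary):
    """Normalized cortical depth of one voxel from two manual delineations.

    wm_boundary / csf_boundary are lists of (x, y) points on the hand-drawn
    inner and outer gray matter borders. The equi-distant depth is the
    distance to the WM border divided by the local thickness:
    0 at the WM border, 1 at the CSF border.
    """
    def dist_to(points):
        return min(math.dist(voxel, p) for p in points)
    d_wm = dist_to(wm_boundary)
    d_csf = dist_to(csf_boundary)
    return d_wm / (d_wm + d_csf)
```

Step 4.) is then nothing more than averaging the functional values of all ROI voxels that fall into the same depth bin.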
Almost every modern fMRI protocol (on SIEMENS scanners) uses GRAPPA. However, only very few people pay much attention to the optimal usage of the GRAPPA auto-calibration data. I realized the importance of optimizing GRAPPA parameters when doing high-resolution EPI. At high resolutions, GRAPPA-related noise can become an increasingly important limitation. This is especially true with the low bandwidth that the body gradient coils force us to use.
In this blog post I will explain how the GRAPPA kernel size affects fMRI data quality, how you can change it, how you can find out which kernel size was used, and I will describe simple software tools to identify regions that might benefit from adaptations of the GRAPPA kernel size.
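To make the kernel idea concrete: GRAPPA fills in the skipped k-space lines as weighted combinations of neighboring acquired lines, where the weights are fitted on the fully sampled auto-calibration (ACS) lines. The toy Python sketch below fits the smallest possible kernel (one coil, two ky neighbors) by least squares; real GRAPPA fits across all coils and larger kernels, and that kernel extent is exactly the parameter this post is about.

```python
def fit_grappa_weights(acs):
    """Fit a toy one-coil, two-point GRAPPA kernel from ACS lines.

    acs is a list of fully sampled k-space lines (each a list of kx
    samples). We fit weights (w1, w2) such that
        line[ky] ~= w1 * line[ky-1] + w2 * line[ky+1]
    by solving the 2x2 normal equations of the least-squares fit.
    """
    a11 = a12 = a22 = b1 = b2 = 0.0
    for ky in range(1, len(acs) - 1):
        for s1, s2, t in zip(acs[ky - 1], acs[ky + 1], acs[ky]):
            a11 += s1 * s1; a12 += s1 * s2; a22 += s2 * s2
            b1 += s1 * t;  b2 += s2 * t
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

At acceleration R=2 the fitted weights are then applied to every skipped line. Where the coil geometry makes this fit ill-conditioned, the reconstruction amplifies noise, and those are the regions that may benefit from a different kernel size.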