# Image pre-processing
Preparing slides before image acquisition can be a tedious task: some slices end up flipped (upside-down or left/right), placed too close to each other (so that part of a neighbouring slice is visible in the image), or too close to the slide edge... In such cases, one might need to clean the image so that only the actual slice of interest is visible.
## Pre-processing scripts
Two scripts are provided in `scripts/preprocessing` to this end. They first require the images to be exported from the microscope software to standard image files with metadata (e.g. OME-TIFF files).
The process is then:

1. Split each channel into single-channel images.
2. Automatically detect the brain contour in the specified target channel.
3. Save the resulting brain mask as an image.
4. Apply the mask to all channels and save the resulting cleaned images.
5. Manually review the masks; if not satisfied, manually edit the corresponding single-channel image in ImageJ.
6. Rerun the brain contour detection and re-apply the masks to all channels.
7. Merge the cleaned channels into a multi-channel, pyramidal OME-TIFF image ready to be used in QuPath.
The first script, `preprocess_split_channels.py`, handles steps 1-6; `preprocess_merge_channels.py` takes care of the last step.
Info: The reason we need to split channels is to get images that can easily be opened in a third-party software such as ImageJ for convenient editing.
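For illustration, the splitting step amounts to something like the sketch below, using tifffile. The file name and the assumption that channels are stored along the first axis (CYX order) are mine, not taken from the actual script.

```python
import tifffile

# Read a multi-channel OME-TIFF (channels assumed on the first axis, CYX)
# and write one single-channel TIFF per channel; plain single-channel
# TIFFs open directly in ImageJ for manual editing.
stack = tifffile.imread("animalid_001.ome.tiff")  # hypothetical file name
for idx, plane in enumerate(stack, start=1):
    tifffile.imwrite(f"ch{idx:02d}/animalid_001.ome.tiff", plane)
```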
## Usage
First and foremost, export the images from the microscope software to OME-TIFF. For Zeiss ZEN, have a look at this guide. Say the images were exported to a directory called `~/input_directory/`.
### Split channels and find brain mask
Copy the script `preprocess_split_channels.py`, located in `scripts/preprocessing`, to your computer. Read the options at the top of the script and edit them according to your needs.
In particular, the `TASKS` dictionary specifies which actions are to be performed.
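For instance, to run the full pipeline on a fresh export, `TASKS` might look like the sketch below. The key names mirror the flags mentioned in this guide (`move`/`reformat`, `split`, `clean`); check the top of your copy of the script for the exact keys and defaults.

```python
# Illustrative only: the exact keys are defined at the top of the script.
TASKS = dict(
    move=True,   # move and rename images from the input directory
    split=True,  # write single-channel images to the ch01, ch02, ... folders
    clean=True,  # detect the brain contour and apply the resulting mask
)
```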
The process is the following:

- (if `move=True`) Move images from `~/input_directory` to `~/images/merged_original/`. The files will be renamed depending on the options set in the script header. The `IN_PREFIX` parameter allows the slice number to be parsed, and `OUT_PREFIX` is the prefix used for the renamed image and all subsequent files.

    Example: ZEN exported images named `A1A4_s1.ome.tiff`, `A1A4_s2.ome.tiff`, ... Setting `IN_PREFIX` to `"_s"` and `OUT_PREFIX` to `animalid_` will result in images being moved from `~/input_directory/A1A4_s1.ome.tiff` to `~/images/animalid_001.ome.tiff`, and so on. The `images` folder name is customizable but will always be in the parent directory of `input_directory`.

- (if `split=True`) While moving and renaming the images, the script also reads the actual image data and splits each channel into separate single-channel images. These files have the same name and are stored in the `~/images/ch01`, `~/images/ch02`, ... folders.

- (if `clean=True`) The `DETECTION_CHANNEL` parameter sets which channel is used to find the brain contour. The corresponding single-channel file is read, brain detection is performed, and the resulting mask is saved in `~/images/masks`. Since the image is already loaded, the mask is also applied to it directly, and the cleaned, masked image is saved in `~/images/chXX_cleaned`, where `XX` corresponds to `DETECTION_CHANNEL`.

    Info: If the mask image file already exists, the image is skipped. Likewise, if `overwrite_cleaned` is turned off (i.e. set to `False`), any image with the same name that already exists in a `chXX_cleaned` folder will be skipped.

- The mask is subsequently applied to all other channels in the same manner: cleaned images have the same name as the renamed original file and are stored in their respective `chXX_cleaned` folders (see the sketch after this list).

- Visually assess the quality of the masks stored in `~/images/masks/`. Previews are generated in the `previews` folder. If they are satisfactory, skip to the next section.
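To make the masking step concrete, here is a minimal sketch of what applying a mask to the channels amounts to. It is not the actual script: the file names are hypothetical and the fill value used outside the mask may differ.

```python
import numpy as np
import tifffile

# Apply a binary brain mask to each single-channel image and save the
# cleaned result (hypothetical file names, layout as described above).
mask = tifffile.imread("images/masks/animalid_001.ome.tiff").astype(bool)
for channel in ("ch01", "ch02", "ch03"):
    img = tifffile.imread(f"images/{channel}/animalid_001.ome.tiff")
    cleaned = np.where(mask, img, 0)  # zero out pixels outside the mask
    tifffile.imwrite(f"images/{channel}_cleaned/animalid_001.ome.tiff", cleaned)
```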
If for some images the mask is not satisfactory, note down their names and:

- Delete the mask file (not the preview!).
- Delete the corresponding cleaned image in each channel (these deletions can also be scripted, see the sketch after this list).
- Open ImageJ and drag & drop the corresponding single-channel original image from the channel used for detection.
- Manually edit it so that the brain slice is easily detected. This means deleting the bits that are not part of the slice, usually when those bits are close to the slice itself. One could for instance use the Freehand selections tool, select the parts to remove and hit Del.
- Save the image (Ctrl+S), overwriting the original.
- Repeat for each unsatisfactory mask.
- Back in the script, turn off `reformat` and `split` in `TASKS`, since those steps are already done. Only the missing masks will be computed, and only the missing images in the `chXX_cleaned` folders will be written (unless `overwrite_cleaned` is set to `True`).
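If several masks need to be redone, deleting the stale files by hand is error-prone. A small, hypothetical helper along these lines could do it (the directory layout is the one described above):

```python
from pathlib import Path

# Remove the mask and the cleaned images of the slices whose automatic
# mask failed, so they are re-computed on the next run of the script.
images_dir = Path("~/images").expanduser()
for name in ("animalid_012.tiff",):  # slices to redo
    (images_dir / "masks" / name).unlink(missing_ok=True)
    for cleaned_dir in images_dir.glob("ch??_cleaned"):
        (cleaned_dir / name).unlink(missing_ok=True)
```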
Example: Automatic brain contour detection failed for `animalid_012.tiff`. I delete `~/images/masks/animalid_012.tiff`, as well as `~/images/ch01_cleaned/animalid_012.tiff`, `~/images/ch02_cleaned/animalid_012.tiff` and `~/images/ch03_cleaned/animalid_012.tiff`. I drag & drop `~/images/ch01/animalid_012.tiff` into ImageJ, draw the brain contour manually with the Freehand selections tool, invert the selection, hit Del and save the image, overwriting it. Finally, I edit the script, setting `reformat=False` and `split=False` in `TASKS`, and re-run it. Only one mask will be computed and applied.
Now, we only have to merge all the channels back into single pyramidal OME-TIFF images ready to be used in QuPath.
### Merge channels
Copy the `preprocess_merge_channels.py` script to your computer. This one is more straightforward:
- Fill in the input directory. This is where the script can find the `chXX_cleaned` folders, `~/images/` in the example above.
- Fill in the output directory. This could be, for instance, `~/images/merged_cleaned/`.
- Fill in the `CHANNELS` parameter. This is a dictionary setting the name and color of each channel. The order is important: it needs to match the order of the `chXX_cleaned` folders.

    Example: The first channel (`ch01_cleaned`) corresponds to the NISSL staining imaged in the CFP channel, and the second channel (`ch02_cleaned`) corresponds to the EGFP channel. `CHANNELS` would then look like: `{"CFP": (0, 0, 255), "EGFP": (0, 255, 0)}`.

- Fill in the pyramid and tile options. The default values should work fine for most use cases.
- Run the script. Images in `OUTPUT_DIRECTORY` are ready to be added to a QuPath project!
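For reference, merging essentially amounts to stacking the cleaned channels and writing a tiled, pyramidal OME-TIFF. The sketch below uses tifffile and is a simplification, not the actual script, which also writes the channel names, colors and pixel size; the file names, number of channels and number of pyramid levels are assumptions.

```python
import numpy as np
import tifffile

# Stack the cleaned channels of one slice (order must match CHANNELS).
channels = [
    tifffile.imread(f"images/ch{i:02d}_cleaned/animalid_001.ome.tiff") for i in (1, 2)
]
stack = np.stack(channels)  # shape (C, Y, X)

levels = 3  # number of downsampled pyramid levels
with tifffile.TiffWriter("images/merged_cleaned/animalid_001.ome.tiff", bigtiff=True) as tif:
    options = dict(tile=(512, 512), photometric="minisblack")
    # Full-resolution image, declaring `levels` sub-resolutions (SubIFDs).
    tif.write(stack, subifds=levels, metadata={"axes": "CYX"}, **options)
    # Downsampled pyramid levels stored as SubIFDs.
    for lvl in range(1, levels + 1):
        tif.write(stack[:, :: 2**lvl, :: 2**lvl], subfiletype=1, **options)
```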
Danger: The pixel size is read from the OME-TIFF files and propagated along the pre-processing steps until the final images, so make sure it is correct when exporting the files from the microscope software.
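If in doubt, the pixel size recorded in an exported file can be checked before running the pipeline, for example by inspecting the OME metadata with tifffile (the file name below is hypothetical):

```python
import tifffile

# Print the OME-XML metadata of an exported file; the PhysicalSizeX and
# PhysicalSizeY attributes of the Pixels element hold the pixel size.
with tifffile.TiffFile("input_directory/A1A4_s1.ome.tiff") as tif:
    print(tif.ome_metadata)
```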
## Brain contour detection
The algorithm to detect the brain contour is defined in the `find_brain_mask()` function of the `preprocess_split_channels.py` script. All the parameters are customizable in the `DETECTION_PARAMETERS` variable.
In a nutshell:

- Zeroes are replaced with a fixed background value (`bkg`). This accounts for parts manually removed in ImageJ: the image background is high compared to the zeros introduced by that operation, which would make edge detection sub-optimal.
- The image is downsampled (`downscale`) for performance; the full resolution is not needed.
- Edges are detected with the Canny filter (using `cannysigma` and `cannythresh`), as implemented in scikit-image.
- Morphological closing (dilation followed by erosion) is applied to keep only "big" objects, using `closeradius`.
- Holes are filled.
- Only the biggest remaining object is kept.
- The mask is resized back to the original image resolution.
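For illustration, the pipeline above roughly corresponds to the following scikit-image sketch. It is a hedged re-implementation, not the actual `find_brain_mask()`: the parameter names match the ones listed above, but the defaults and details are assumptions.

```python
import numpy as np
from scipy import ndimage
from skimage import feature, measure, morphology, transform

def find_brain_mask_sketch(img, bkg=100, downscale=4, cannysigma=3,
                           cannythresh=(None, None), closeradius=10):
    # Replace zeros (left by manual deletions in ImageJ) with a background value.
    img = np.where(img == 0, bkg, img).astype(float)
    # Work at reduced resolution; the contour does not need full detail.
    small = transform.rescale(img, 1 / downscale, anti_aliasing=True)
    # Edge detection (Canny), then morphological closing to bridge small gaps.
    edges = feature.canny(small, sigma=cannysigma,
                          low_threshold=cannythresh[0], high_threshold=cannythresh[1])
    closed = morphology.binary_closing(edges, morphology.disk(closeradius))
    # Fill the enclosed region and keep only the largest connected component.
    filled = ndimage.binary_fill_holes(closed)
    labels = measure.label(filled)
    if labels.max() == 0:
        return np.zeros(img.shape, dtype=bool)
    largest = np.argmax(np.bincount(labels.ravel())[1:]) + 1
    # Resize the mask back to the original image resolution.
    return transform.resize(labels == largest, img.shape, order=0).astype(bool)
```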