# Pipeline
While you can use QuPath and `cuisto` functionalities as you see fit, a pipeline version of those exists. It requires a specific structure to store files (so that the different scripts know where to look for data). It also requires that you have detections stored as geojson files, which can be achieved using a pixel classifier and further segmentation (see here), for example.
## Purpose
This is especially useful to perform quantification for several animals at once: you only need to specify the root directory and the animal identifiers that should be pooled together, instead of having to manually specify each detections and annotations file.
Three main scripts and functions are used within the pipeline:

- `exportPixelClassifierProbabilities.groovy` to create prediction maps of objects of interest
- `segment_images.py` to segment those maps and create geojson files to be imported back into QuPath as detections
- `pipelineImportExport.groovy` to:
    - clear all objects
    - import ABBA regions
    - mirror regions names
    - import geojson detections (from `$folderPrefix$segmentation/$segTag$/geojson`)
    - add measurements to detections
    - add atlas coordinates to detections
    - add hemisphere to detections' parents
    - add regions measurements:
        - count for punctual objects
        - cumulated length for line objects
    - export detections measurements:
        - as CSV for punctual objects
        - as JSON for lines
    - export annotations as CSV
## Directory structure
Following a specific directory structure ensures subsequent scripts and functions can find the required files. The good news is that this structure will mostly be created automatically by the segmentation scripts (from QuPath and Python), as long as you fill in the parameters of each script consistently.
The structure expected by the all-in-one groovy script and the `cuisto` batch-process function is the following:
```
some_directory/
├── AnimalID0/
│   ├── animalid0_qupath/
│   └── animalid0_segmentation/
│       └── segtag/
│           ├── annotations/
│           ├── detections/
│           ├── geojson/
│           └── probabilities/
├── AnimalID1/
│   ├── animalid1_qupath/
│   └── animalid1_segmentation/
│       └── segtag/
│           ├── annotations/
│           ├── detections/
│           ├── geojson/
│           └── probabilities/
```
Info
Except for the root directory and the QuPath project, the rest is created automatically based on the parameters provided in the different scripts. Here's the description of the structure and its requirements:
- The hierarchy must be followed.
- The experiment root directory, `AnimalID0`, can be named anything but should correspond to one and only one animal. `animalid0` should be a convenient animal identifier.
- Subsequent `animalid0` should be lower case.
- `animalid0_qupath` can be named as you wish in practice, but should be the QuPath project.
- `animalid0_segmentation` should be called exactly like this, replacing `animalid0` with the actual animal ID. It will be created automatically by the `exportPixelClassifierProbabilities.groovy` script.
- `segtag` corresponds to the type of segmentation (cells, fibers...). It is specified in the `exportPixelClassifierProbabilities` script. It could be anything, but to recognize whether the objects are polygons (which should be counted per region) or polylines (whose cumulated length should be measured), there are some hardcoded keywords in the `segment_images.py` and `pipelineImportExport.groovy` scripts:
    - Cells-like, when you need measurements related to their shape (area, circularity...): `cells`, `cell`, `polygons`, `polygon`
    - Cells-like, when you consider them as punctual: `synapto`, `synaptophysin`, `syngfp`, `boutons`, `points`
    - Fibers-like (polylines): `fibers`, `fiber`, `axons`, `axon`
- `annotations` contains the atlas regions measurements as TSV files.
- `detections` contains the objects' atlas coordinates and measurements as CSV files (for punctual objects) or JSON files (for polyline objects).
- `geojson` contains objects stored as geojson files. They can be generated with the pixel classifier prediction map segmentation.
- `probabilities` contains the prediction maps to be segmented by the `segment_images.py` script.
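The keyword matching described above can be sketched as a small Python helper. The actual keyword lists are hardcoded in `segment_images.py` and `pipelineImportExport.groovy`, so check those scripts before relying on this; `object_kind` is a hypothetical name used here for illustration only.

```python
# Keyword lists mirroring the hardcoded segtag keywords described above.
SHAPE_KEYWORDS = {"cells", "cell", "polygons", "polygon"}
POINT_KEYWORDS = {"synapto", "synaptophysin", "syngfp", "boutons", "points"}
LINE_KEYWORDS = {"fibers", "fiber", "axons", "axon"}


def object_kind(seg_tag: str) -> str:
    """Map a segmentation tag to how its objects are quantified."""
    tag = seg_tag.lower()
    if tag in SHAPE_KEYWORDS:
        return "polygons"   # shape measurements (area, circularity...)
    if tag in POINT_KEYWORDS:
        return "points"     # punctual objects, counted per region
    if tag in LINE_KEYWORDS:
        return "polylines"  # cumulated length measured per region
    raise ValueError(f"Unrecognized segmentation tag: {seg_tag!r}")
```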
Tip
You can see an example minimal directory structure, with only annotations stored, in `resources/multi`.
## Usage
Tip
Remember that this is merely an example pipeline; you can shortcut it at any point, as long as you end up with TSV files that follow the requirements for `cuisto`.
1. Create a QuPath project.
2. Register your images on an atlas with ABBA and export the registration back to QuPath.
3. Use a pixel classifier and export the prediction maps with the `exportPixelClassifierProbabilities.groovy` script. You need to get a pixel classifier or create one.
4. Segment those maps with the `segment_images.py` script to generate the geojson files containing the objects of interest.
5. Run the `pipelineImportExport.groovy` script on your QuPath project.
6. Set up your configuration files.
7. Then, analysing your data with any number of animals should be as easy as executing those lines in Python (either from IPython directly, or in a script so you can easily run it later):
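Those lines could look like the following sketch. The paths and animal IDs are placeholders, and the exact names (`cuisto.Config`, `cuisto.process.process_animals`, `cuisto.display.plot_regions`) should be checked against the `cuisto` API documentation before use.

```python
import cuisto

# Parameters -- placeholders, adapt to your own data
wdir = "/path/to/some_directory"           # root directory described above
animals = ["AnimalID0", "AnimalID1"]       # animals to pool together
config_file = "/path/to/your/config.toml"  # cuisto configuration file

# Processing: gather measurements for all listed animals
cfg = cuisto.Config(config_file)
df_regions, dfs_distributions, df_coordinates = cuisto.process.process_animals(
    wdir, animals, cfg
)

# Display the pooled results
cuisto.display.plot_regions(df_regions, cfg)
```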
Tip
You can see a live example in this demo notebook.