Last active October 15, 2024
Making Measurements in QuPath
A collection of scripts harvested mainly from Pete, but also picked up from the forums.
TOC
Accessing dynamic measurements.groovy - Most annotation measurements are dynamically created when you click on the annotation, and are not accessible through the standard getMeasurement function. This is a way around that.
Affine transformation.groovy - Access more accurate values for the affine transformation used in the image alignment (m5/m6+).
Alignment of local cells.groovy - Check the neighborhood for similarly aligned cells.
Angles for cells.groovy - Calculate angles relative to horizontal.
Area measurements per class to annotation.groovy - Summary measurements for tile/area based analyses. Should work for all classes present.
Area measurements to annotation.groovy - A somewhat clumsy holdover, but adds size measurements to annotations. Could be altered for detection tiles, which would likely be more useful.
Cell summary measurements to annotation.groovy - Go through the cells and add some sort of summary measurement to their parent annotation. Examples might be the mean area of all cells, or the min and max intensities of cells of a certain class. Get creative.
Chromaticity - cell measurement.groovy - Demonstration of how to calculate green chromaticity using Calculate Features.
Class cell counts,percentages and density to parent annotation.groovy - Mostly the same as above, but for cells.
Class percentages to TMA measurements.groovy - Checks all cells in each core for membership within a listed set of classes.
Colocalization v4.groovy - Actually v3, and works with 0.2.0m2. Calculates Pearson's and Manders' coefficients for detections. A version for 0.2.0m7 has been added.
Colocalization 0.1.2.groovy - Version of the above script that works for 0.1.2 and 0.1.3. Does not work for 0.2.0+.
Create detection measurements.groovy - Create new detection measurements as combinations of other detection measurements. For example, the ratio of the channel 2 nuclear intensity to the channel 3 nuclear intensity.
Density map measurements to objects.groovy - Pete's scripts to use density maps to apply measurements to objects.
Detections - add full intensity measurements.groovy - Adds intensity measurements to objects, including median and border/membrane measurements.
Distance between two annotations.groovy - Calculates the distance between two annotations, plus the overlap if there is any.
Distances between annotations.groovy - Calculates distances between the edges of all annotations and classes in the image.
Label cells by TMA core.groovy - Rename cells based on their parent core. Could probably be done better with getDescendantObjects().
Local Cell Density.groovy - Add a measurement to each cell based on the number of other cells within X microns - very slow.
Metadata by script in m5.groovy - Set pixel sizes by adjusting the metadata for an image.
metadata by script in m10.groovy - Same, for 0.2.0 M10.
Nearest Neighbors by class.groovy - Calculates nearest-neighbor distances.
Nuclear and cytoplasmic color vector means.groovy - Complicated script, but essentially allows you to create sets of color vectors and obtain cytoplasmic and nuclear mean values for them. Useful for complex brightfield stains; it has been used to differentiate cells in 5-stain-plus-hematoxylin images.
Points are in which annotations.groovy - Version 1. See this thread for the intended use: https://forum.image.sc/t/manual-annotation-and-measurements/25051/5?u=research_associate
RSquared all channels per annotation.groovy - Calculates R^2 between every possible pair of channels, per annotation.
RSquared calculation.groovy - Calculates R-squared values. Does not currently save them anywhere.
Tile summary measurements to parent Annotation.groovy - Creates measurements for the total area and percentage area of each class. Percentages are based on annotation area; a different calculation would be needed if you have a "whitespace" tile type.
Total subcellular intensity to cell value.groovy - Sums the total intensity of subcellular detections (area*mean intensity, summed).
Primary functions here include:
Using "hierarchy = getCurrentHierarchy()" to get access to the hierarchy, so that you can more easily access subsets of cells.
Using findAll{true/false statement here} to generate lists of objects you want to perform operations on.
The following gets all objects that are classified Positive within whatever precedes findAll:
.findAll{it.getPathClass() == getPathClass("Positive")}
The simplest way to access a measurement is measurement(object, "measurement name").
So if I wanted to print the nuclear area of each of my cells, for some reason:
getCellObjects().each{
    print measurement(it, "Nucleus: Area")
}
That cycles through each cell and prints "it"s nuclear area.
The following access the measurement list, which is the list you see in the lower right of the Hierarchy tab when selecting an object:
getMeasurementList()
getMeasurementValue(key)
putMeasurement(key, value)
Sometimes you may want to search an object's measurement list; use:
ml = object.getMeasurementList()
to generate a list called ml.
For any given list of objects, you could also use:
getCellObjects().each{ measurement(it, "Nucleus: Area")}
to access the nuclear area of each cell.
Other times, you may know exactly what you want to modify, and can just use:
object.getMeasurementList().putMeasurement(key, value)
For adding a micrometer symbol in 0.1.2, use " + qupath.lib.common.GeneralTools.micrometerSymbol() + " inside a string.
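Putting those pieces together, here is a minimal sketch of the read-filter-write pattern. The class name "Positive" and the derived measurement "Nucleus area x2" are only examples; substitute names that exist in your own data:

```groovy
// Sketch only: assumes cells are already detected and some are classified "Positive"
def positiveCells = getCellObjects().findAll {
    it.getPathClass() == getPathClass("Positive")
}
positiveCells.each {
    // read an existing measurement from the cell
    double area = measurement(it, "Nucleus: Area")
    // write a derived measurement back to the cell's measurement list
    it.getMeasurementList().putMeasurement("Nucleus area x2", area * 2)
}
// refresh the GUI so the new measurements show up
fireHierarchyUpdate()
```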
//Courtesy of Olivier Burri on QuPath Gitter
//For 0.1.2
//import qupath.lib.gui.models.ObservableMeasurementTableData
//For 0.2.0
import qupath.lib.gui.measure.ObservableMeasurementTableData
def ob = new ObservableMeasurementTableData();
def annotations = getAnnotationObjects()
// This line creates all the measurements
ob.setImageData(getCurrentImageData(), annotations);
annotations.each { annotation ->
    println( ob.getNumericValue(annotation, "H-score") )
}
/*
Using this script to access the X and Y coordinates per cell:
import qupath.lib.gui.measure.ObservableMeasurementTableData
cells = getCellObjects()
ob = new ObservableMeasurementTableData();
ob.setImageData(getCurrentImageData(), cells);
cells.each{
    print ob.getNumericValue(it, "Centroid X µm")
}
*/
/*
Using this script to calculate circularity for annotations (0.1.2 import shown):
import qupath.lib.gui.models.ObservableMeasurementTableData
def ob = new ObservableMeasurementTableData();
def annotations = getAnnotationObjects()
// This line creates all the measurements
ob.setImageData(getCurrentImageData(), annotations);
annotations.each {
    area = ob.getNumericValue(it, "Area µm^2")
    perimeter = ob.getNumericValue(it, "Perimeter µm")
    circularity = 4*Math.PI*area/(perimeter*perimeter)
    it.getMeasurementList().putMeasurement("Circularity", circularity)
}
*/
//0.2.0
import qupath.lib.gui.align.ImageServerOverlay
def overlay = getCurrentViewer().getCustomOverlayLayers().find {it instanceof ImageServerOverlay}
print overlay.getAffine()
//Uses Ben Pearson's script from https://groups.google.com/forum/#!searchin/qupath-users/rotate%7Csort:date/qupath-users/UvkNb54fYco/ri_4K6tiCwAJ
//Creates an area defined by the orthoDist and paraDist measurements (orthogonal to the cell's orientation, or parallel)
//Calculates how similarly the cells with centroids inside of that area are aligned.
//Requires the "Angles for cells.groovy" script below to be run first.
//0.2.0
//Edit below for 0.1.2
print "running, please wait, Very Slow Process."
def server = getCurrentImageData().getServer()
//For 0.1.2
//def sizePixels = server.getAveragedPixelSizeMicrons()
//For 0.2.0
sizePixels = server.getPixelCalibration().getAveragedPixelSizeMicrons()
//EDIT SHAPE OF REGION TO CHECK FOR ALIGNMENT HERE
def orthoDist = 40/sizePixels
def paraDist = 30/sizePixels
def triangleArea(double Ax, double Ay, double Bx, double By, double Cx, double Cy) {
    return (((Ax*(By-Cy) + Bx*(Cy-Ay) + Cx*(Ay-By))/2).abs())
}
//def DIST = 10
def cellList = getCellObjects()
//scan through all cells
getCellObjects().each{
    //get some values for the current cell
    def originalAngle = it.getMeasurementList().getMeasurementValue("Cell angle")
    def radians = Math.toRadians(-originalAngle)
    def cellX = it.getNucleusROI().getCentroidX()
    def cellY = it.getNucleusROI().getCentroidY()
    //create a list of nearby cells
    /* SIMPLE VERSION, A BOX
    def nearbyCells = cellList.findAll{ c->
        DIST > server.getAveragedPixelSizeMicrons()*Math.sqrt((c.getROI().getCentroidX() - cellX)*(c.getROI().getCentroidX() - cellX)+(c.getROI().getCentroidY() - cellY)*(c.getROI().getCentroidY() - cellY));
    } */
    /*
    def roi = new RectangleROI(cellX-orthoDist/2, cellY-paraDist/2, orthoDist, paraDist)
    def points = roi.getPolygonPoints()
    def roiPointsArx = points.x.toArray()
    def roiPointsAry = points.y.toArray()
    */
    def roiPointsArx = [cellX-paraDist/2, cellX+paraDist/2, cellX+paraDist/2, cellX-paraDist/2]
    def roiPointsAry = [cellY+orthoDist/2, cellY+orthoDist/2, cellY-orthoDist/2, cellY-orthoDist/2]
    for (i = 0; i < roiPointsAry.size(); i++) {
        // shift the center to the origin
        roiPointsArx[i] = roiPointsArx[i] - cellX
        roiPointsAry[i] = roiPointsAry[i] - cellY
        //Placeholders so that x'=x*cos(theta)-y*sin(theta), y'=y*cos(theta)+x*sin(theta) use the original values
        double newPointX = roiPointsArx[i]
        double newPointY = roiPointsAry[i]
        // then rotate
        roiPointsArx[i] = (newPointX * Math.cos(radians)) - (newPointY * Math.sin(radians))
        roiPointsAry[i] = (newPointY * Math.cos(radians)) + (newPointX * Math.sin(radians))
        // then move it back
        roiPointsArx[i] = roiPointsArx[i] + cellX
        roiPointsAry[i] = roiPointsAry[i] + cellY
    }
    //addObject(new PathAnnotationObject(new PolygonROI(roiPointsArx as float[], roiPointsAry as float[], -1, 0, 0)))
    //A centroid is inside the rotated rectangle when the four triangles it forms with the corners sum (within tolerance) to the rectangle's area
    def nearbyCells = cellList.findAll{ orthoDist*paraDist-5 < ( triangleArea(roiPointsArx[0], roiPointsAry[0], roiPointsArx[1], roiPointsAry[1], it.getNucleusROI().getCentroidX(), it.getNucleusROI().getCentroidY())
        +triangleArea(roiPointsArx[1], roiPointsAry[1], roiPointsArx[2], roiPointsAry[2], it.getNucleusROI().getCentroidX(), it.getNucleusROI().getCentroidY())
        +triangleArea(roiPointsArx[2], roiPointsAry[2], roiPointsArx[3], roiPointsAry[3], it.getNucleusROI().getCentroidX(), it.getNucleusROI().getCentroidY())
        +triangleArea(roiPointsArx[3], roiPointsAry[3], roiPointsArx[0], roiPointsAry[0], it.getNucleusROI().getCentroidX(), it.getNucleusROI().getCentroidY())) &&
        orthoDist*paraDist+5 > ( triangleArea(roiPointsArx[0], roiPointsAry[0], roiPointsArx[1], roiPointsAry[1], it.getNucleusROI().getCentroidX(), it.getNucleusROI().getCentroidY())
        +triangleArea(roiPointsArx[1], roiPointsAry[1], roiPointsArx[2], roiPointsAry[2], it.getNucleusROI().getCentroidX(), it.getNucleusROI().getCentroidY())
        +triangleArea(roiPointsArx[2], roiPointsAry[2], roiPointsArx[3], roiPointsAry[3], it.getNucleusROI().getCentroidX(), it.getNucleusROI().getCentroidY())
        +triangleArea(roiPointsArx[3], roiPointsAry[3], roiPointsArx[0], roiPointsAry[0], it.getNucleusROI().getCentroidX(), it.getNucleusROI().getCentroidY()))
    }
    //print(nearbyCells)
    //prevent divide by zero errors
    if (nearbyCells.size() < 2){ it.getMeasurementList().putMeasurement("LMADSD", 90); return;}
    def angleList = []
    //within the local cells, find the differences in angle
    for (cell in nearbyCells){
        def currentAngle = cell.getMeasurementList().getMeasurementValue("Cell angle")
        def angleDifference = (currentAngle - originalAngle).abs()
        //angles between two objects should be at most 90 degrees, or perpendicular
        if (angleDifference > 90){
            angleList << (180 - Math.max(currentAngle, originalAngle) + Math.min(currentAngle, originalAngle))
        } else {angleList << angleDifference}
    }
    //complete the list with the original data point
    //angleList << 0
    //calculate the standard deviation of the angular differences
    def localAngleDifferenceMean = angleList.sum()/angleList.size()
    def variance = 0
    angleList.each{v-> variance += (v-localAngleDifferenceMean)*(v-localAngleDifferenceMean)}
    def stdDev = Math.sqrt(variance/(angleList.size()))
    // add the measurement: local mean angle difference standard deviation
    //println("stddev "+stdDev)
    it.getMeasurementList().putMeasurement("LMADSD", stdDev)
}
print "done"
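A note on the wrap-around step in the script above: it maps every pairwise angle difference into the 0-90 degree range (two orientations can differ by at most a right angle) before the standard deviation is computed. A standalone illustration in plain Groovy, with made-up angle values:

```groovy
// Normalize pairwise angle differences to the 0-90 degree range,
// then take the population standard deviation, as the script above does.
def originalAngle = 5.0
def neighborAngles = [10.0, 170.0, 85.0]   // made-up example cell angles, in degrees
def diffs = neighborAngles.collect { a ->
    def d = (a - originalAngle).abs()
    d > 90 ? 180 - d : d                   // e.g. a 165 degree difference becomes 15
}
def mean = diffs.sum() / diffs.size()
def variance = diffs.collect { (it - mean) * (it - mean) }.sum() / diffs.size()
println Math.sqrt(variance)
```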
//from Pete
//0.2.0
//Change for 0.1.2 shown below
import qupath.imagej.objects.*
getCellObjects().each{
    def ml = it.getMeasurementList()
    def roi = it.getNucleusROI()
    //for 0.1.2
    //def roiIJ = ROIConverterIJ.convertToIJRoi(roi, 0, 0, 1)
    roiIJ = IJTools.convertToIJRoi(roi, 0, 0, 1)
    def angle = roiIJ.getFeretValues()[1]
    ml.putMeasurement('Nucleus angle', angle)
    ml.close()
}
fireHierarchyUpdate()
print "done"
//Checks for all detections within a given annotation; DOES NOT EXCLUDE DETECTIONS WITHIN SUB-ANNOTATIONS.
//That last bit should make it compatible with trained classifiers.
//Result is the percentage area for all detections of a given class, applied as a measurement to the parent annotation.
//0.1.2
import qupath.lib.objects.PathDetectionObject
def imageData = getCurrentImageData()
def server = imageData.getServer()
def pixelSize = server.getPixelHeightMicrons()
Set classList = []
for (object in getAllObjects().findAll{it.isDetection() /*|| it.isAnnotation()*/}) {
    classList << object.getPathClass()
}
println(classList)
hierarchy = getCurrentHierarchy()
for (annotation in getAnnotationObjects()){
    def annotationArea = annotation.getROI().getArea()
    for (aClass in classList){
        if (aClass){
            def tiles = hierarchy.getDescendantObjects(annotation, null, PathDetectionObject).findAll{it.getPathClass() == aClass}
            double totalArea = 0
            for (def tile in tiles){
                totalArea += tile.getROI().getArea()
            }
            annotation.getMeasurementList().putMeasurement(aClass.getName()+" area px", totalArea)
            annotation.getMeasurementList().putMeasurement(aClass.getName()+" area um^2", totalArea*pixelSize*pixelSize)
            annotation.getMeasurementList().putMeasurement(aClass.getName()+" area %", totalArea/annotationArea*100)
        }
    }
}
println("done")
//Useful when using detection objects returned from ImageJ macros. Note that areas are in pixels and would need to be converted to microns.
//0.1.2
import qupath.lib.objects.PathDetectionObject
hierarchy = getCurrentHierarchy()
for (annotation in getAnnotationObjects()){
    //Block 1
    def tiles = hierarchy.getDescendantObjects(annotation, null, PathDetectionObject)
    double totalArea = 0
    for (def tile in tiles){
        totalArea += tile.getROI().getArea()
    }
    annotation.getMeasurementList().putMeasurement("Marked area px", totalArea)
    def annotationArea = annotation.getROI().getArea()
    annotation.getMeasurementList().putMeasurement("Marked area %", totalArea/annotationArea*100)
}
println("done")
//Sometimes you may want to add a summary measurement from the cells within each annotation to the annotation itself.
//This will allow you to see that measurement in the "Show annotation measurements" list.
//In this case, it will add the total area taken up by Positive class cells within each annotation to their parent
//annotation as "Positive Area".
//0.1.2
import qupath.lib.objects.PathCellObject
hierarchy = getCurrentHierarchy()
for (annotation in getAnnotationObjects()){
    //Block 1
    def positiveCells = hierarchy.getDescendantObjects(annotation, null, PathCellObject).findAll{it.getPathClass() == getPathClass("Positive")}
    double totalArea = 0
    for (def cell in positiveCells){
        totalArea += cell.getMeasurementList().getMeasurementValue("Cell: Area")
    }
    //Comment the following in or out depending on whether you want to see the output
    //println("Mean area for Positive is: " + totalArea/positiveCells.size())
    //println("Total Positive Area is: " + totalArea)
    //Add the total as "Positive Area" to each annotation.
    annotation.getMeasurementList().putMeasurement("Positive Area", totalArea)
    //Add the percentage positive area to the annotation's measurement list
    def annotationArea = annotation.getROI().getArea()
    annotation.getMeasurementList().putMeasurement("Positive Area %", totalArea/annotationArea*100)
    //Block 2 - add as many blocks as you have classes
    //...
}
// https://forum.image.sc/t/detecting-purple-chromogen-classifying-cells-based-on-green-chromaticity/35576/5
//0.2.0
// Add intensity features (cells already detected)
selectCells();
runPlugin('qupath.lib.algorithms.IntensityFeaturesPlugin', '{"pixelSizeMicrons": 1.0, "region": "Cell nucleus", "tileSizeMicrons": 25.0, "colorOD": false, "colorStain1": false, "colorStain2": false, "colorStain3": false, "colorRed": true, "colorGreen": true, "colorBlue": true, "colorHue": false, "colorSaturation": false, "colorBrightness": false, "doMean": true, "doStdDev": false, "doMinMax": false, "doMedian": false, "doHaralick": false, "haralickDistance": 1, "haralickBins": 32}');
runPlugin('qupath.lib.algorithms.IntensityFeaturesPlugin', '{"pixelSizeMicrons": 1.0, "region": "ROI", "tileSizeMicrons": 25.0, "colorOD": false, "colorStain1": false, "colorStain2": false, "colorStain3": false, "colorRed": true, "colorGreen": true, "colorBlue": true, "colorHue": false, "colorSaturation": false, "colorBrightness": false, "doMean": true, "doStdDev": false, "doMinMax": false, "doMedian": false, "doHaralick": false, "haralickDistance": 1, "haralickBins": 32}');
// Add chromaticity measurements
def nucleusMeasurement = "Nucleus: 1.00 µm per pixel: %s: Mean"
def cellMeasurement = "ROI: 1.00 µm per pixel: %s: Mean"
for (cell in getCellObjects()) {
    def measurementList = cell.getMeasurementList()
    addGreenChromaticity(measurementList, nucleusMeasurement)
    addGreenChromaticity(measurementList, cellMeasurement)
    measurementList.close()
}
fireHierarchyUpdate()
def addGreenChromaticity(measurementList, measurement) {
    double r = measurementList.getMeasurementValue(String.format(measurement, "Red"))
    double g = measurementList.getMeasurementValue(String.format(measurement, "Green"))
    double b = measurementList.getMeasurementValue(String.format(measurement, "Blue"))
    def name = String.format(measurement, "Green chromaticity")
    // green chromaticity = G / (R + G + B), with the denominator clamped to avoid division by zero
    measurementList.putMeasurement(name, g/Math.max(1, r+g+b))
}
//0.2.0
//Update to the M5 measurements calculator for multiplex analysis
// Initial script by Mike Nelson @Mike_Nelson on image.sc
// Project checking from Sara McArdle @smcardle on image.sc
// Additional information at the link below.
// https://forum.image.sc/t/m9-multiplex-classifier-script-updates-cell-summary-measurements-visualization/34663
import qupath.lib.plugins.parameters.ParameterList
import qupath.lib.objects.PathCellObject
import qupath.lib.objects.PathObjectTools
imageData = getCurrentImageData()
server = imageData.getServer()
def cal = getCurrentServer().getPixelCalibration()
hierarchy = getCurrentHierarchy()
separatorsForBaseClass = "[.-_,:]+" //add an extra symbol between the brackets if you need to split on a different character
boolean fullClassesB = true
boolean baseClassesB = true
boolean percentages = true
boolean densities = false
boolean removeOld = false
boolean checkProject = true
/*************************************************************************/
//////////// REMOVE BELOW THIS SECTION IF RUNNING FOR PROJECT ////////////////
//EDIT THE CORRECT BOOLEAN VALUES ABOVE MANUALLY
def params = new ParameterList()
//Yes, I am bad at dialog boxes.
    .addBooleanParameter("fullClassesB", "Measurements for full classes ", fullClassesB, "E.g. CD68:PDL1 cells would be calculated on their own")
    .addBooleanParameter("baseClassesB", "Measurements for individual base classes ", baseClassesB, "Measurements show up as 'All' plus the base class name")
    .addBooleanParameter("percentages", "Percentages? ", percentages, "")
    .addBooleanParameter("densities", "Cell densities? ", densities, "Not recommended without pixel size metadata or some sort of annotation outline")
    .addBooleanParameter("removeOld", "Clear old measurements, can be slow ", removeOld, "Slow, but needed when the classifier changes. Otherwise classes that no longer exist will not be overwritten with 0, resulting in extra incorrect measurements")
    .addBooleanParameter("checkProject", "Use class names from entire project, can be slow! ", checkProject, "Needed for projects where all images are analyzed and compared")
if (!Dialogs.showParameterDialog("Cell summary measurements: M9 VALUES DO NOT AUTOMATICALLY UPDATE", params))
    return
fullClassesB = params.getBooleanParameterValue("fullClassesB")
baseClassesB = params.getBooleanParameterValue("baseClassesB")
percentages = params.getBooleanParameterValue("percentages")
densities = params.getBooleanParameterValue("densities")
removeOld = params.getBooleanParameterValue("removeOld")
checkProject = params.getBooleanParameterValue("checkProject")
//////////// REMOVE ABOVE THIS SECTION IF RUNNING FOR PROJECT ///////////////////
/*************************************************************************/
if (removeOld){
    Set annotationMeasurements = []
    getAnnotationObjects().each{it.getMeasurementList().getMeasurementNames().each{annotationMeasurements << it}}
    annotationMeasurements.each{ if(it.contains("%") || it.contains("^")) {removeMeasurements(qupath.lib.objects.PathAnnotationObject, it);}}
}
Set baseClasses = []
Set classNames = []
if (checkProject){
    getProject().getImageList().each{
        def objs = it.readHierarchy().getDetectionObjects()
        classes = objs.collect{it?.getPathClass()?.toString()}
        classNames.addAll(classes)
    }
} else {
    classNames.addAll(getDetectionObjects().collect{it?.getPathClass()?.toString()} as Set)
}
classNames.each{
    it?.tokenize(separatorsForBaseClass).each{str->
        baseClasses << str.trim()
    }
}
println("Classifications: "+classNames)
println("Base Classes: "+baseClasses)
//This section calculates measurements for the full classes (all combinations of base classes)
if (fullClassesB){
    for (annotation in getAnnotationObjects()){
        totalCells = []
        qupath.lib.objects.PathObjectTools.getDescendantObjects(annotation, totalCells, PathCellObject)
        for (aClass in classNames){
            if (aClass){
                if (totalCells.size() > 0){
                    cells = totalCells.findAll{it.getPathClass().toString() == aClass}
                    print cells.size()
                    if (percentages) {annotation.getMeasurementList().putMeasurement(aClass.toString()+" %", cells.size()*100/totalCells.size())}
                    annotationArea = annotation.getROI().getScaledArea(cal.pixelWidth, cal.pixelHeight)
                    if (densities) {annotation.getMeasurementList().putMeasurement(aClass.toString()+" cells/mm^2", cells.size()/(annotationArea/1000000))}
                } else {
                    if (percentages) {annotation.getMeasurementList().putMeasurement(aClass.toString()+" %", 0)}
                    if (densities) {annotation.getMeasurementList().putMeasurement(aClass.toString()+" cells/mm^2", 0)}
                }
            }
        }
    }
}
//This section only calculates measurements for the base class types, regardless of other class combinations.
//So all PDL1-positive cells would be counted for a PDL1 base class, even if the cells had a variety of other sub classes.
if (baseClassesB){
    for (annotation in getAnnotationObjects()){
        totalCells = []
        qupath.lib.objects.PathObjectTools.getDescendantObjects(annotation, totalCells, PathCellObject)
        for (aClass in baseClasses){
            if (totalCells.size() > 0){
                cells = totalCells.findAll{it.getPathClass().toString().contains(aClass)}
                if (percentages) {annotation.getMeasurementList().putMeasurement("All "+aClass+" %", cells.size()*100/totalCells.size())}
                annotationArea = annotation.getROI().getScaledArea(cal.pixelWidth, cal.pixelHeight)
                if (densities) {annotation.getMeasurementList().putMeasurement("All "+aClass+" cells/mm^2", cells.size()/(annotationArea/1000000))}
            } else {
                if (percentages) {annotation.getMeasurementList().putMeasurement("All "+aClass+" %", 0)}
                if (densities) {annotation.getMeasurementList().putMeasurement("All "+aClass+" cells/mm^2", 0)}
            }
        }
    }
}
//Checks for all detections within a given annotation; DOES NOT EXCLUDE DETECTIONS WITHIN SUB-ANNOTATIONS.
//That last bit should make it compatible with trained classifiers.
//0.1.2
//Do you want to include cell counts? If so, set this to true. This can cause duplicate measurements in 0.1.3 and beyond.
COUNTS = false
import qupath.lib.objects.PathCellObject
imageData = getCurrentImageData()
server = imageData.getServer()
pixelSize = server.getPixelHeightMicrons()
Set classList = []
for (object in getAllObjects().findAll{it.isDetection() /*|| it.isAnnotation()*/}) {
    classList << object.getPathClass()
}
println(classList)
hierarchy = getCurrentHierarchy()
for (annotation in getAnnotationObjects()){
    totalCells = hierarchy.getDescendantObjects(annotation, null, PathCellObject)
    for (aClass in classList){
        if (aClass){
            if (totalCells.size() > 0){
                cells = totalCells.findAll{it.getPathClass() == aClass}
                if(COUNTS){annotation.getMeasurementList().putMeasurement(aClass.getName()+" cells", cells.size())}
                annotation.getMeasurementList().putMeasurement(aClass.getName()+" %", cells.size()*100/totalCells.size())
                annotationArea = annotation.getROI().getArea()
                annotation.getMeasurementList().putMeasurement(aClass.getName()+" cells/mm^2", cells.size()/(annotationArea*pixelSize*pixelSize/1000000))
            } else {
                if(COUNTS){annotation.getMeasurementList().putMeasurement(aClass.getName()+" cells", 0)}
                annotation.getMeasurementList().putMeasurement(aClass.getName()+" %", 0)
                annotation.getMeasurementList().putMeasurement(aClass.getName()+" cells/mm^2", 0)
            }
        }
    }
}
println("done")
// Add percentages by cell class to each TMA core
//Tested for 0.1.2
import qupath.lib.objects.PathCellObject
hierarchy = getCurrentHierarchy()
cores = hierarchy.getTMAGrid().getTMACoreList()
Set list = []
for (object in getAllObjects().findAll{it.isDetection() /*|| it.isAnnotation()*/}) {
    list << object.getPathClass().toString()
}
cores.each {
    //Find the cell count in this core
    total = hierarchy.getDescendantObjects(it, null, PathCellObject).size()
    //Prevent divide by zero errors in empty TMA cores
    if (total != 0){
        for (className in list) {
            cellType = hierarchy.getDescendantObjects(it, null, PathCellObject).findAll{it.getPathClass() == getPathClass(className)}.size()
            it.getMeasurementList().putMeasurement(className+" cell %", cellType/total*100)
        }
    } else {
        for (className in list) {
            it.getMeasurementList().putMeasurement(className+" cell %", 0)
        }
    }
}
//Version 3.1. Should work for 0.1.2 and 0.1.3
//****************VALUES TO EDIT***********//
//Channel numbers are based on cell measurement/channel order in Brightness/contrast menu, starting with 1
int FIRST_CHANNEL = 9
int SECOND_CHANNEL = 9
//CHOOSE ONE: "cell", "nucleus", "cytoplasm", "tile", "detection", "subcell"
//"detection" should be the equivalent of everything
String objectType = "cell"
//These should be figured out for a given sample to eliminate background signal
//Pixels below this value will not be considered for a given channel.
//Used for Manders coefficients only.
ch1Background = 100000
ch2Background = 100000
//***************NO TOUCHEE past here************//
import qupath.lib.regions.RegionRequest
import qupath.imagej.images.servers.ImagePlusServer
import qupath.imagej.images.servers.ImagePlusServerBuilder
import ij.process.ByteProcessor
import ij.process.ImageProcessor
import java.awt.image.BufferedImage
import ij.ImagePlus
import qupath.imagej.objects.ROIConverterIJ
import qupath.lib.roi.RectangleROI
import qupath.lib.images.servers.ImageServer
import qupath.lib.objects.PathObject
import qupath.imagej.helpers.IJTools
import qupath.lib.gui.ImageWriterTools

def imageData = getCurrentImageData()
def hierarchy = imageData.getHierarchy()
ImageServer<BufferedImage> serverOriginal = imageData.getServer()
String path = serverOriginal.getPath()
double downsample = 1.0
def server = ImagePlusServerBuilder.ensureImagePlusWholeSlideServer(serverOriginal)
println("Running, please wait...")
//target the objects you want to analyze
if(objectType == "cell" || objectType == "nucleus" || objectType == "cytoplasm"){detections = getCellObjects()}
if(objectType == "tile"){detections = getDetectionObjects().findAll{it.isTile()}}
if(objectType == "detection"){detections = getDetectionObjects()}
if(objectType == "subcell"){detections = getObjects({p-> p.class == qupath.imagej.detect.cells.SubcellularDetection.SubcellularObject.class})}
println("Count = "+ detections.size())
detections.each{
    //Get the bounding box region around the target detection
    roi = it.getROI()
    region = RegionRequest.createInstance(path, downsample, roi)
    imp = server.readImagePlusRegion(region).getImage()
    //Extract the first channel as a list of pixel values
    imp.setC(FIRST_CHANNEL)
    firstChanImage = imp.getProcessor()
    firstChanImage = firstChanImage.convertToFloatProcessor() //Needed to handle big numbers
    ch1Pixels = firstChanImage.getPixels()
    //Create a mask so that only the pixels we want from the bounding box area are used in calculations
    bpSLICs = createObjectMask(firstChanImage, downsample, it, objectType).getPixels()
    size = ch1Pixels.size()
    imp.setC(SECOND_CHANNEL)
    secondChanImage = imp.getProcessor()
    secondChanImage = secondChanImage.convertToFloatProcessor()
    ch2Pixels = secondChanImage.getPixels()
    //use mask to extract only the useful pixels into new lists
    //Maybe it would be faster to remove undesirable pixels instead?
    ch1 = []
    ch2 = []
    for (i=0; i<size; i++){
        if(bpSLICs[i]){
            ch1 << ch1Pixels[i]
            ch2 << ch2Pixels[i]
        }
    }
    //Skip objects with no masked pixels to prevent divide by zero errors
    if(ch1.size() == 0 || ch2.size() == 0){return}
    //Calculating the mean for Pearson's
    double ch1Mean = ch1.sum()/ch1.size()
    double ch2Mean = ch2.sum()/ch2.size()
    //get the new number of pixels to be analyzed
    size = ch1.size()
    //Create the sum for the top half of the Pearson's correlation coefficient
    top = []
    for (i=0; i<size; i++){top << (ch1[i]-ch1Mean)*(ch2[i]-ch2Mean)}
    pearsonTop = top.sum()
    //Sums for the two bottom parts
    botCh1 = []
    for (i=0; i<size; i++){botCh1 << (ch1[i]-ch1Mean)*(ch1[i]-ch1Mean)}
    rootCh1 = Math.sqrt(botCh1.sum())
    botCh2 = []
    for (i=0; i<size; i++){botCh2 << (ch2[i]-ch2Mean)*(ch2[i]-ch2Mean)}
    rootCh2 = Math.sqrt(botCh2.sum())
    pearsonBot = rootCh2*rootCh1
    double pearson = pearsonTop/pearsonBot
    String name = "Pearson Corr "+FIRST_CHANNEL+"+"+SECOND_CHANNEL
    it.getMeasurementList().putMeasurement(name, pearson)
    //Start Manders calculations
    double m1Top = 0
    for (i=0; i<size; i++){if (ch2[i] > ch2Background){m1Top += Math.max(ch1[i]-ch1Background,0)}}
    double m1Bottom = 0
    for (i=0; i<size; i++){m1Bottom += Math.max(ch1[i]-ch1Background,0)}
    double m2Top = 0
    for (i=0; i<size; i++){if (ch1[i] > ch1Background){m2Top += Math.max(ch2[i]-ch2Background,0)}}
    double m2Bottom = 0
    for (i=0; i<size; i++){m2Bottom += Math.max(ch2[i]-ch2Background,0)}
    //Check for divide by zero and add measurements
    name = "M1 "+objectType+": ratio of Ch"+FIRST_CHANNEL+" intensity in Ch"+SECOND_CHANNEL+" areas"
    double M1 = m1Top/m1Bottom
    if (M1.isNaN()){M1 = 0}
    it.getMeasurementList().putMeasurement(name, M1)
    double M2 = m2Top/m2Bottom
    if (M2.isNaN()){M2 = 0}
    name = "M2 "+objectType+": ratio of Ch"+SECOND_CHANNEL+" intensity in Ch"+FIRST_CHANNEL+" areas"
    it.getMeasurementList().putMeasurement(name, M2)
}
println("Done!")

//Making a mask. Phantom of the Opera style.
def createObjectMask(ImageProcessor ip, double downsample, PathObject object, String objectType) {
    //create a ByteProcessor that is the same size as the region we are analyzing
    def bp = new ByteProcessor(ip.getWidth(), ip.getHeight())
    //value to fill into the "good" area; everything else stays 0
    bp.setValue(1.0)
    //Extract the ROI and shift its position so that it lies within the stand-alone image region;
    //otherwise the coordinates are based on the original image, not the small subsection we are analyzing
    if (objectType == "nucleus"){
        def roi = object.getNucleusROI()
        shift = roi.translate(ip.getWidth()/2-roi.getCentroidX(), ip.getHeight()/2-roi.getCentroidY())
        def roiIJ = ROIConverterIJ.convertToIJRoi(shift, 0, 0, downsample)
        bp.fill(roiIJ)
    } else if (objectType == "cytoplasm"){
        //Shift the nucleus by the same offset as the cell ROI, which defines the bounding box
        def roi = object.getROI()
        def nucleus = object.getNucleusROI()
        shiftNuc = nucleus.translate(ip.getWidth()/2-roi.getCentroidX(), ip.getHeight()/2-roi.getCentroidY())
        roiIJNuc = ROIConverterIJ.convertToIJRoi(shiftNuc, 0, 0, downsample)
        shift = roi.translate(ip.getWidth()/2-roi.getCentroidX(), ip.getHeight()/2-roi.getCentroidY())
        def roiIJ = ROIConverterIJ.convertToIJRoi(shift, 0, 0, downsample)
        //fill in the whole cell area, then remove the nucleus
        bp.fill(roiIJ)
        bp.setValue(0)
        bp.fill(roiIJNuc)
    } else {
        def roi = object.getROI()
        shift = roi.translate(ip.getWidth()/2-roi.getCentroidX(), ip.getHeight()/2-roi.getCentroidY())
        roiIJ = ROIConverterIJ.convertToIJRoi(shift, 0, 0, downsample)
        bp.fill(roiIJ)
    }
    return bp
}
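For reference, the statistics both colocalization scripts compute reduce to a few lines each. This is a standalone Python sketch of the same formulas — not QuPath code, and the pixel lists and background values are invented:

```python
import math

def pearson(ch1, ch2):
    """Pearson correlation over the masked pixel lists, matching the loops in the script above."""
    n = len(ch1)
    mean1, mean2 = sum(ch1) / n, sum(ch2) / n
    top = sum((a - mean1) * (b - mean2) for a, b in zip(ch1, ch2))
    bot = math.sqrt(sum((a - mean1) ** 2 for a in ch1)) * math.sqrt(sum((b - mean2) ** 2 for b in ch2))
    return top / bot

def manders_m1(ch1, ch2, ch1_background, ch2_background):
    """M1: background-subtracted ch1 intensity, restricted to pixels where ch2 exceeds its background."""
    top = sum(max(a - ch1_background, 0) for a, b in zip(ch1, ch2) if b > ch2_background)
    bottom = sum(max(a - ch1_background, 0) for a in ch1)
    return top / bottom if bottom else 0.0  # mirrors the script's isNaN guard

ch1 = [1.0, 2.0, 3.0, 4.0]
ch2 = [2.0, 4.0, 6.0, 8.0]
print(round(pearson(ch1, ch2), 6))  # 1.0 for perfectly correlated channels
print(manders_m1([5.0, 10.0], [0.0, 20.0], 2.0, 5.0))  # 8/11: only the second pixel passes the ch2 threshold
```

M2 is the same as M1 with the channels swapped, exactly as in the scripts.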
//0.2.0, but null pointer exception in certain image types. Have not been able to track it down.
//****************VALUES TO EDIT***********//
//Channel numbers are based on cell measurement/channel order in Brightness/contrast menu, starting with 1
int FIRST_CHANNEL = 2
int SECOND_CHANNEL = 3
//CHOOSE ONE: "cell", "nucleus", "cytoplasm", "tile", "detection", "subcell"
//"detection" should be the equivalent of everything
String objectType = "cell"
//These should be figured out for a given sample to eliminate background signal
//Pixels below this value will not be considered for a given channel.
//Used for Manders coefficients only.
ch1Background = 1000
ch2Background = 10000
//***************No touchee past here************//
import qupath.lib.regions.RegionRequest
import ij.process.ByteProcessor
import ij.process.ImageProcessor
import java.awt.image.BufferedImage
import qupath.imagej.tools.IJTools
import qupath.lib.images.servers.ImageServer
import qupath.lib.objects.PathObject
import qupath.lib.images.PathImage

def imageData = getCurrentImageData()
def hierarchy = imageData.getHierarchy()
def serverOriginal = imageData.getServer()
String path = serverOriginal.getPath()
double downsample = 1.0
ImageServer<BufferedImage> server = serverOriginal
println("Running, please wait...")
//target the objects you want to analyze
if(objectType == "cell" || objectType == "nucleus" || objectType == "cytoplasm"){detections = getCellObjects()}
if(objectType == "tile"){detections = getDetectionObjects().findAll{it.isTile()}}
if(objectType == "detection"){detections = getDetectionObjects()}
if(objectType == "subcell"){detections = getObjects({p-> p.class == qupath.lib.objects.PathDetectionObject.class})}
println("Count = "+ detections.size())
detections.each{
    //Get the bounding box region around the target detection
    roi = it.getROI()
    request = RegionRequest.createInstance(path, downsample, roi)
    pathImage = IJTools.convertToImagePlus(server, request)
    imp = pathImage.getImage()
    //Extract the first channel as a list of pixel values
    firstChanImage = imp.getStack().getProcessor(FIRST_CHANNEL)
    firstChanImage = firstChanImage.convertToFloatProcessor() //Needed to handle big numbers
    ch1Pixels = firstChanImage.getPixels()
    //Create a mask so that only the pixels we want from the bounding box area are used in calculations
    bpSLICs = createObjectMask(pathImage, it, objectType).getPixels()
    size = ch1Pixels.size()
    secondChanImage = imp.getStack().getProcessor(SECOND_CHANNEL)
    secondChanImage = secondChanImage.convertToFloatProcessor()
    ch2Pixels = secondChanImage.getPixels()
    //use mask to extract only the useful pixels into new lists
    //Maybe it would be faster to remove undesirable pixels instead?
    ch1 = []
    ch2 = []
    for (i=0; i<size; i++){
        if(bpSLICs[i]){
            ch1 << ch1Pixels[i]
            ch2 << ch2Pixels[i]
        }
    }
    //Check for div by zero errors
    if(ch1.size() == 0 || ch2.size() == 0){return}
    //Calculating the mean for Pearson's
    double ch1Mean = ch1.sum()/ch1.size()
    double ch2Mean = ch2.sum()/ch2.size()
    //get the new number of pixels to be analyzed
    size = ch1.size()
    //Create the sum for the top half of the Pearson's correlation coefficient
    top = []
    for (i=0; i<size; i++){top << (ch1[i]-ch1Mean)*(ch2[i]-ch2Mean)}
    pearsonTop = top.sum()
    //Sums for the two bottom parts
    botCh1 = []
    for (i=0; i<size; i++){botCh1 << (ch1[i]-ch1Mean)*(ch1[i]-ch1Mean)}
    rootCh1 = Math.sqrt(botCh1.sum())
    botCh2 = []
    for (i=0; i<size; i++){botCh2 << (ch2[i]-ch2Mean)*(ch2[i]-ch2Mean)}
    rootCh2 = Math.sqrt(botCh2.sum())
    pearsonBot = rootCh2*rootCh1
    double pearson = pearsonTop/pearsonBot
    String name = "Pearson Corr "+objectType+":"+FIRST_CHANNEL+"+"+SECOND_CHANNEL
    it.getMeasurementList().putMeasurement(name, pearson)
    //Start Manders calculations
    double m1Top = 0
    for (i=0; i<size; i++){if (ch2[i] > ch2Background){m1Top += Math.max(ch1[i]-ch1Background,0)}}
    double m1Bottom = 0
    for (i=0; i<size; i++){m1Bottom += Math.max(ch1[i]-ch1Background,0)}
    double m2Top = 0
    for (i=0; i<size; i++){if (ch1[i] > ch1Background){m2Top += Math.max(ch2[i]-ch2Background,0)}}
    double m2Bottom = 0
    for (i=0; i<size; i++){m2Bottom += Math.max(ch2[i]-ch2Background,0)}
    //Check for divide by zero and add measurements
    name = "M1 "+objectType+": ratio of Ch"+FIRST_CHANNEL+" intensity in Ch"+SECOND_CHANNEL+" areas"
    double M1 = m1Top/m1Bottom
    if (M1.isNaN()){M1 = 0}
    it.getMeasurementList().putMeasurement(name, M1)
    double M2 = m2Top/m2Bottom
    if (M2.isNaN()){M2 = 0}
    name = "M2 "+objectType+": ratio of Ch"+SECOND_CHANNEL+" intensity in Ch"+FIRST_CHANNEL+" areas"
    it.getMeasurementList().putMeasurement(name, M2)
}
println("Done!")

//Making a mask. Phantom of the Opera style.
def createObjectMask(PathImage pathImage, PathObject object, String objectType) {
    //create a ByteProcessor that is the same size as the region we are analyzing
    def bp = new ByteProcessor(pathImage.getImage().getWidth(), pathImage.getImage().getHeight())
    //value to fill into the "good" area; everything else stays 0
    bp.setValue(1.0)
    if (objectType == "nucleus"){
        def roi = object.getNucleusROI()
        def roiIJ = IJTools.convertToIJRoi(roi, pathImage)
        bp.fill(roiIJ)
    } else if (objectType == "cytoplasm"){
        def nucleus = object.getNucleusROI()
        roiIJNuc = IJTools.convertToIJRoi(nucleus, pathImage)
        def roi = object.getROI()
        //fill in the whole cell area
        def roiIJ = IJTools.convertToIJRoi(roi, pathImage)
        bp.fill(roiIJ)
        //remove the nucleus
        bp.setValue(0)
        bp.fill(roiIJNuc)
    } else {
        def roi = object.getROI()
        roiIJ = IJTools.convertToIJRoi(roi, pathImage)
        bp.fill(roiIJ)
    }
    return bp
}
//Generating measurements in detections from other measurements created in QuPath
//0.1.2 and 0.2.0
detections = getDetectionObjects()
detections.each{
    relativeDistribution2 = measurement(it, "ROI: 2.00 µm per pixel: Channel 2: Mean")/measurement(it, "ROI: 2.00 µm per pixel: Channel 2: Median")
    it.getMeasurementList().putMeasurement("RelativeCh2", relativeDistribution2)
}
println("done")
//0.3.2
//TWO scripts, make sure to only take the one you want.
//https://gist.github.com/petebankhead/2e7325d8c560677bba9b867f68070300
/**
 * Script to add density map values to detection centroids in QuPath v0.3.
 *
 * Note that this hasn't been tested very much... and assumes a 2D image.
 * At the very least, you should use 'Measure -> Show measurement maps' as a sanity check.
 *
 * Written for https://forum.image.sc/t/qupath-number-of-detections-per-tile/64603/10
 *
 * @author Pete Bankhead
 */
String densityMapName = 'Tumor density map' // You'll need a saved density map with this name in the project
String densityMeasurementName = densityMapName // Make this more meaningful if needed

// Get the current image
def imageData = getCurrentImageData()

// Load a density map builder & create an ImageServer from it
def builder = loadDensityMap(densityMapName)
def server = builder.buildServer(imageData)

// Read the entire density map (we assume it's 2D, and low enough resolution for this to work!)
def request = RegionRequest.createInstance(server)
def img = server.readBufferedImage(request)

// Get all the objects to which we want to add measurements
def pathObjects = getDetectionObjects()
double downsample = request.getDownsample()

// Select the band (channel) of the density map to use
// (there might only be 1... counting starts at 0)
int band = 0

// Add centroid measurement to all objects
pathObjects.parallelStream().forEach { p ->
    int x = (int)(p.getROI().getCentroidX() / downsample)
    int y = (int)(p.getROI().getCentroidY() / downsample)
    float val = img.getRaster().getSampleFloat(x, y, band)
    try (def ml = p.getMeasurementList()) {
        ml.putMeasurement(densityMeasurementName, val)
    }
}

// Finish up
fireHierarchyUpdate()
println 'Done!'
//https://gist.github.com/petebankhead/6286adcea24dd73af83e822bdb7a2132
/**
 * Script to add density map values to *some* detection centroids in QuPath v0.3,
 * limited to only use a subset of the detections on the image.
 *
 * It does this by copying the relevant objects and adding them to a temporary ImageData.
 *
 * Note that this hasn't been tested very much... and assumes a 2D image.
 * At the very least, you should use 'Measure -> Show measurement maps' as a sanity check.
 *
 * Written for https://forum.image.sc/t/qupath-number-of-detections-per-tile/64603/14
 *
 * @see https://gist.github.com/petebankhead/2e7325d8c560677bba9b867f68070300
 *
 * @author Pete Bankhead
 */
String densityMapName = 'Tumor density map' // You'll need a saved density map with this name in the project
String densityMeasurementName = 'Some useful name' // Make this more meaningful if needed

// Get the current image
def imageData = getCurrentImageData()
def hierarchy = imageData.getHierarchy()

// Get the parent objects we care about
def parentAnnotations = getSelectedObjects()
// Alternatively, define parent objects using all annotations with a specified class
// String parentClass = 'Tumor'
//def parentAnnotations = getAnnotationObjects().findAll { p ->
//    return p.isAnnotation() && p.getPathClass() == getPathClass(parentClass)
//}

// Get all the detections that fall inside the parent annotations
def pathObjects = new HashSet<>()
for (def parent in parentAnnotations) {
    def contained = hierarchy.getObjectsForROI(null, parent.getROI()).findAll {p -> p.isDetection()}
    pathObjects.addAll(contained)
}

// Add a clone of the detections to a new, temporary object hierarchy
// (and a new, temporary ImageData)
def imageDataTemp = new qupath.lib.images.ImageData(imageData.getServer())
def hierarchyTemp = imageDataTemp.getHierarchy()
def clonedObjects = pathObjects.collect { p -> PathObjectTools.transformObject(p, null, false) }
hierarchyTemp.addPathObjects(clonedObjects)

// Load a density map builder & create an ImageServer from it
def builder = loadDensityMap(densityMapName)
def server = builder.buildServer(imageDataTemp)

// Read the entire density map (we assume it's 2D, and low enough resolution for this to work!)
def request = RegionRequest.createInstance(server)
def img = server.readBufferedImage(request)
double downsample = request.getDownsample()

// Select the band (channel) of the density map to use
// (there might only be 1... counting starts at 0)
int band = 0

// Add centroid measurement to all objects
pathObjects.parallelStream().forEach { p ->
    int x = (int)(p.getROI().getCentroidX() / downsample)
    int y = (int)(p.getROI().getCentroidY() / downsample)
    float val = img.getRaster().getSampleFloat(x, y, band)
    try (def ml = p.getMeasurementList()) {
        ml.putMeasurement(densityMeasurementName, val)
    }
}

// Finish up
fireHierarchyUpdate()
println 'Done!'
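The sampling step shared by both density-map scripts — scale a full-resolution centroid down by the request's downsample, then do a nearest-pixel lookup in the low-resolution map — can be illustrated standalone. This is plain Python, not QuPath code; the 2×2 grid and downsample of 8 are made up:

```python
def sample_density(density, downsample, cx, cy):
    """Map a full-resolution centroid into the low-resolution density map, as the scripts above do."""
    x = int(cx / downsample)
    y = int(cy / downsample)
    return density[y][x]  # row-major: y selects the row

density = [
    [0.0, 0.1],
    [0.2, 0.9],
]
print(sample_density(density, 8.0, 12.0, 9.0))  # 0.9: centroid (12, 9) falls in map pixel (1, 1)
```

Note the truncation toward zero, matching the `(int)` casts in the scripts; an out-of-bounds centroid would need clamping, which the scripts do not attempt.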
// Generate new cell intensity measurements (this may be quite slow)
// 0.2.0
//Original source https://forum.image.sc/t/stardist-qupath-cell-segmentation-transfer-on-aligned-image-set/45362/5
//With substitutions from @petebankhead and @mesencephalon
import qupath.lib.analysis.features.ObjectMeasurements
import qupath.lib.images.ImageData
import qupath.lib.images.servers.ImageServerMetadata
import qupath.lib.images.servers.TransformedServerBuilder

def imageData = getCurrentImageData()
def server = new TransformedServerBuilder(imageData.getServer())
    .deconvolveStains(imageData.getColorDeconvolutionStains())
    .build()
// or server = getCurrentServer() //for IF
def measurements = ObjectMeasurements.Measurements.values() as List
def compartments = ObjectMeasurements.Compartments.values() as List // Won't mean much if they aren't cells...
def downsample = 1.0
for (detection in getDetectionObjects()) {
    ObjectMeasurements.addIntensityMeasurements(
        server, detection, downsample, measurements, compartments
    )
}
//QuPath 0.2.3+
// https://forum.image.sc/t/qupath-distance-between-annotations/47960/2

// Get the objects to compare
// This assumes that just *one* annotation has each specified name
def hifPos = getAnnotationObjects().find {it.getName() == 'HIF-positive area 2'}
def pimoPos = getAnnotationObjects().find {it.getName() == 'PIMO-positive area 2'}

// Make sure we're on the same plane (only really relevant for z-stacks, time series)
def plane = hifPos.getROI().getImagePlane()
if (plane != pimoPos.getROI().getImagePlane()) {
    println 'Annotations are on different planes!'
    return
}

// Convert to geometries & compute distance
// Note: see https://locationtech.github.io/jts/javadoc/org/locationtech/jts/geom/Geometry.html#distance-org.locationtech.jts.geom.Geometry-
def g1 = hifPos.getROI().getGeometry()
def g2 = pimoPos.getROI().getGeometry()
double distancePixels = g1.distance(g2)
println "Distance between annotations: ${distancePixels} pixels"

// Attempt conversion to calibrated units
def cal = getCurrentServer().getPixelCalibration()
if (cal.pixelWidth != cal.pixelHeight) {
    println "Pixel width != pixel height ($cal.pixelWidth vs. $cal.pixelHeight)"
    println "Distance measurements will be calibrated using the average of these"
}
double distanceCalibrated = distancePixels * cal.getAveragedPixelSize()
println "Distance between annotations: ${distanceCalibrated} ${cal.pixelWidthUnit}"

// Check intersection as well
def intersection = g1.intersection(g2)
if (intersection.isEmpty())
    println "No intersection between areas"
else {
    def roi = GeometryTools.geometryToROI(intersection, plane)
    def annotation = PathObjects.createAnnotationObject(roi, getPathClass('Intersection'))
    addObject(annotation)
    selectObjects(annotation)
    println "Annotation created for intersection"
}
// Script further modified from Pete Bankhead's post https://forum.image.sc/t/qupath-distance-between-annotations/47960/2
// Calculate the distances from each object to the nearest object of each other class. Saved to the object's measurement list.
// Distances recorded should stay 0 if there are no other objects, but it should find a non-zero distance if there are multiple of the same class.
// Does NOT work *between* multiple points in a Points object, though Points objects will count as a single object.
/////////////////////////////////////////////////////
// DOES NOT WORK/CHECK FOR ZSTACKS AND TIME SERIES //
/////////////////////////////////////////////////////
import org.locationtech.jts.precision.GeometryPrecisionReducer
import org.locationtech.jts.geom.PrecisionModel

print "Caution, this may take some time for large numbers of objects."
print "If the time is excessive for your project, you may want to consider size thresholding some objects, or adjusting the objectsToCheck"
objectsToCheck = getAllObjects().findAll{it.isDetection() || it.isAnnotation()}
PrecisionModel PM = new PrecisionModel(PrecisionModel.FIXED)
classList = objectsToCheck.collect{it.getPathClass()} as Set
def cal = getCurrentServer().getPixelCalibration()
if (cal.pixelWidth != cal.pixelHeight) {
    println "Pixel width != pixel height ($cal.pixelWidth vs. $cal.pixelHeight)"
    println "Distance measurements will be calibrated using the average of these"
}
//Merge all objects of each class into a single geometry
Map combinedClassObjects = [:]
classList.each{c->
    currentClassObjects = getAllObjects().findAll{it.getPathClass() == c}
    geom = null
    currentClassObjects.eachWithIndex{o, i->
        if(i==0){geom = o.getROI().getGeometry()} else {
            geom = GeometryPrecisionReducer.reduce(geom.union(o.getROI().getGeometry()), PM)
        }
    }
    combinedClassObjects[c] = geom
}
objectsToCheck.each{ o ->
    //Store the shortest non-zero distance between an annotation and another class of annotation
    def g1 = o.getROI().getGeometry().buffer(0)
    combinedClassObjects.each{cco->
        combinedGeometry = cco.value
        //If there are multiple annotations of the same type, prevent checking distances against itself
        if (o.getPathClass() == cco.key){
            combinedGeometry = combinedGeometry.difference(GeometryPrecisionReducer.reduce(g1, PM))
        }
        double distancePixels = g1.distance(combinedGeometry)
        double distanceCalibrated = distancePixels * cal.getAveragedPixelSize()
        o.getMeasurementList().putMeasurement("Distance in um to nearest "+cco.key, distanceCalibrated)
    }
}
print "Done! Distances saved to each object's measurement list"
//better way to label cells by TMA core
//0.1.2
hierarchy = getCurrentHierarchy()
hierarchy.getTMAGrid().getTMACoreList().each{
    coreName = it.getName()
    hierarchy.getDescendantObjects(it, null, qupath.lib.objects.PathCellObject).each{ c->
        c.setName(coreName)
    }
}

/* Version to specifically rename objects in annotations one level below the TMA.
hierarchy = getCurrentHierarchy()
hierarchy.getTMAGrid().getTMACoreList().each{
    coreName = it.getName()
    hierarchy.getDescendantObjects(it, null, qupath.lib.objects.PathCellObject).each{ c->
        if (c.getLevel() == 3){
            cellName = c.getPathClass().toString()
            print cellName
            c.setName(coreName+" - "+cellName)
        }
    }
}
*/
// label cells within an annotation within a TMA core by the TMA core, not the annotation.
// Remove one getParent() if there is no tissue annotation.
// 0.1.2 and 0.2.0
getDetectionObjects().each {detection -> detection.setName(detection.getParent().getParent().getName())}
fireHierarchyUpdate()
//Objective: Find the number of neighbors within distance X
//Problem: Delaunay clusters exclude nearby cells if another cluster obscures those cells
//0.2.0 - VERY SLOW
/**********Detection radius*************/
distanceMicrons = 25
/***************************************/
//To look at only one class (Tumor, in this example), replace the next coding line with
//totalCells = getCellObjects().findAll{it.getPathClass() == getPathClass("Tumor")}
totalCells = getCellObjects()
print "please wait, this may take a long time"
totalCells.each{
    originalClass = it.getPathClass()
    //Temporarily give the cell a unique class so detectionCentroidDistances measures distances to it
    it.setPathClass(getPathClass("DjdofiSdflKFj"))
    detectionCentroidDistances(false)
    closeCells = totalCells.findAll{measurement(it, "Distance to detection DjdofiSdflKFj µm") <= distanceMicrons && measurement(it, "Distance to detection DjdofiSdflKFj µm") != 0}
    it.getMeasurementList().putMeasurement("Cells within "+distanceMicrons+" microns", closeCells.size())
    it.setPathClass(originalClass)
}
removeMeasurements(qupath.lib.objects.PathCellObject, "Distance to detection DjdofiSdflKFj µm")
println("Done")
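Stripped of the class-swapping trick the script uses to reuse `detectionCentroidDistances`, the underlying idea is a brute-force radius count. A standalone Python sketch (not QuPath code; the coordinates are arbitrary):

```python
import math

def neighbors_within(points, radius):
    """For each point, count the OTHER points within `radius` (distance 0 to itself is excluded)."""
    counts = []
    for i, (x1, y1) in enumerate(points):
        n = 0
        for j, (x2, y2) in enumerate(points):
            if i != j and math.hypot(x1 - x2, y1 - y2) <= radius:
                n += 1
        counts.append(n)
    return counts

pts = [(0, 0), (10, 0), (30, 0)]
print(neighbors_within(pts, 25))  # [1, 2, 1]
```

Like the script, this is O(n²); for large cell counts a spatial index would be the usual fix, which is why the script warns it is VERY SLOW.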
//Sometimes you need to set the metadata for a group of images, like TIFF files.
//0.2.0
//Other script is shorter!
import static qupath.lib.gui.scripting.QPEx.*
import qupath.lib.images.servers.ImageServerMetadata

def imageData = getCurrentImageData()
def server = imageData.getServer()
def oldMetadata = server.getMetadata()
def newMetadata = new ImageServerMetadata.Builder(oldMetadata)
    .magnification(10.0)
    .pixelSizeMicrons(1.25, 1.25)
    .build()
imageData.updateServerMetadata(newMetadata)
//https://forum.image.sc/t/script-for-sum-of-nucleaus-area-of-a-specific-annotation/36913/22
// Choose the actual values, not always 0.5!
setPixelSizeMicrons(0.5, 0.5)
//Nearest neighbor between full classes. 0.2.0. | |
//Essentially replaced by "Detect centroid distances 2D" command. | |
//Would need modifications for base classes. | |
//Note, summary measurements are by default turned off. Uncomment the bottom section. | |
//Reason: with 27 classes this leads to over 700 annotation level summary measurements, YMMV | |
imageData = getCurrentImageData() | |
server = imageData.getServer() | |
def metadata = getCurrentImageData().getServer().getOriginalMetadata() | |
def pixelSize = metadata.pixelCalibration.pixelWidth.value | |
maxDist = Math.sqrt(server.getHeight()*server.getHeight()+server.getWidth()*server.getWidth()) | |
classes = new ArrayList<>(getDetectionObjects().collect {it.getPathClass()?.getBaseClass()} as Set) | |
print "Classes found: " + classes.size() | |
cellsByClass = [] | |
classes.each{c-> | |
cellsByClass << getCellObjects().findAll{it.getPathClass() == c} | |
} | |
print "Beginning calculations: This can be slow for large data sets, wait for 'Done' message to prevent errors." | |
def near = 0.0 | |
for (i=0; i<classes.size(); i++){ | |
cellsByClass[i].each{c-> | |
nearest = [] | |
for (k=0; k<classes.size(); k++){ | |
near = maxDist | |
//cycle through all cells of k Class finding the min distance | |
cellsByClass[k].each{d-> | |
dist = Math.sqrt(( c.getNucleusROI().getCentroidX() - d.getNucleusROI().getCentroidX())*(c.getNucleusROI().getCentroidX() - d.getNucleusROI().getCentroidX())+( c.getNucleusROI().getCentroidY() - d.getNucleusROI().getCentroidY())*(c.getNucleusROI().getCentroidY() - d.getNucleusROI().getCentroidY())) | |
if (dist > 0){ | |
near = Math.min(near,dist) | |
} | |
} | |
c.getMeasurementList().putMeasurement("Nearest "+ classes[k].toString(), near*pixelSize) | |
} | |
} | |
} | |
//Make measurements for Annotations | |
//This generates a MASSIVE list if you have many classes. Not recommended for export if there are more than 3-4 classes. | |
/* | |
getAnnotationObjects().each{anno-> | |
//Uses the same "classes" list built above; adapt it here if you only want distances between base classes
classes.each{c-> | |
cellsOfOneType = anno.getChildObjects().findAll{it.getPathClass() == c} | |
if (cellsOfOneType.size()>0){ | |
classes.each{s-> | |
currentTotal = 0 | |
cellsOfOneType.each{ | |
currentTotal += measurement(it, "Nearest "+ s.toString()) | |
} | |
anno.getMeasurementList().putMeasurement("Mean distance in µm from "+c.toString()+" to nearest "+s.toString(),currentTotal/cellsOfOneType.size())
} | |
}} | |
} | |
*/ | |
print "Done" |
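The nested loops above reduce to a per-cell minimum over pairwise Euclidean centroid distances. A standalone sketch of that core calculation, with made-up centroids and no QuPath objects involved:

```groovy
// Hypothetical centroids, already in calibrated units
def classA = [[0.0, 0.0], [10.0, 0.0]]
def classB = [[3.0, 4.0], [100.0, 100.0]]

// For each cell in classA, the distance to its nearest classB neighbour
def nearest = classA.collect { a ->
    classB.collect { b ->
        double dx = a[0] - b[0]
        double dy = a[1] - b[1]
        Math.sqrt(dx*dx + dy*dy)
    }.min()
}
println nearest   // first cell: a 3-4-5 triangle, so 5.0
```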
//Calculate the mean OD values in the nucleus and cytoplasm for any number of sets of color vectors | |
//Intended for 0.1.2; in 0.2.0 this is easier, since Add intensity features can use the Nucleus as its ROI.
import qupath.lib.objects.* | |
//This function holds a list of color vectors and their Add Intensity Features command that will add the desired measurements | |
//to your cells. Make sure you name the stains (for example in the first example, Stain 1 is called "Blue") differently | |
//so that their Measurements will end up labeled differently. Notice that the Add Intensity Features command includes | |
//"Colorstain":true, etc. which needs to be true for the measurements you wish to add. | |
def addColors(){ | |
setColorDeconvolutionStains('{"Name" : "DAB Yellow", "Stain 1" : "Blue", "Values 1" : "0.56477 0.65032 0.50806 ", "Stain 2" : "Yellow", "Values 2" : "0.0091 0.01316 0.99987 ", "Background" : " 255 255 255 "}'); | |
runPlugin('qupath.lib.algorithms.IntensityFeaturesPlugin', '{"pixelSizeMicrons": 0.25, "region": "ROI", "tileSizeMicrons": 25.0, "colorOD": true, "colorStain1": true, "colorStain2": true, "colorStain3": false, "colorRed": false, "colorGreen": false, "colorBlue": false, "colorHue": false, "colorSaturation": false, "colorBrightness": false, "doMean": true, "doStdDev": false, "doMinMax": false, "doMedian": false, "doHaralick": false, "haralickDistance": 1, "haralickBins": 32}'); | |
setColorDeconvolutionStains('{"Name" : "Background1", "Stain 1" : "Blue Background1", "Values 1" : "0.56195 0.77393 0.29197 ", "Stain 2" : "Beige Background1", "Values 2" : "0.34398 0.59797 0.72396 ", "Background" : " 255 255 255 "}'); | |
runPlugin('qupath.lib.algorithms.IntensityFeaturesPlugin', '{"pixelSizeMicrons": 0.25, "region": "ROI", "tileSizeMicrons": 25.0, "colorOD": false, "colorStain1": true, "colorStain2": true, "colorStain3": false, "colorRed": false, "colorGreen": false, "colorBlue": false, "colorHue": false, "colorSaturation": false, "colorBrightness": false, "doMean": true, "doStdDev": false, "doMinMax": false, "doMedian": false, "doHaralick": false, "haralickDistance": 1, "haralickBins": 32}'); | |
} | |
//The only thing beyond this point that should need to be modified is the removalList command at the end, which you can disable | |
//if you wish to keep whole cell measurements | |
// Get cells & create temporary nucleus objects - storing link to cell in a map | |
def cells = getCellObjects() | |
def map = [:] | |
for (cell in cells) { | |
def detection = new PathDetectionObject(cell.getNucleusROI()) | |
map[detection] = cell | |
} | |
// Get the nuclei as a list | |
def nuclei = map.keySet() as List | |
// and then select the nuclei | |
getCurrentHierarchy().getSelectionModel().setSelectedObjects(nuclei, null) | |
// Add as many sets of color deconvolution stains and Intensity features plugins as you want here | |
//This section ONLY adds measurements to the temporary nucleus objects, not the cell | |
addColors() | |
//etc etc. make sure each set has different names for the stains or else they will overwrite | |
// Don't need selection now | |
clearSelectedObjects() | |
// Can update measurements generated for the nucleus to the parent cell's measurement list | |
for (nucleus in nuclei) { | |
def cell = map[nucleus] | |
def cellMeasurements = cell.getMeasurementList() | |
for (key in nucleus.getMeasurementList().getMeasurementNames()) { | |
double value = nucleus.getMeasurementList().getMeasurementValue(key) | |
def listOfStrings = key.tokenize(':') | |
def baseValueName = listOfStrings[-2]+listOfStrings[-1] | |
nuclearName = "Nuclear" + baseValueName | |
cellMeasurements.putMeasurement(nuclearName, value) | |
} | |
cellMeasurements.closeList() | |
} | |
//I want to remove the original whole cell measurements which contain the mu symbol | |
// Not yet sure I will find the whole cell useful so not adding it back in yet. | |
def removalList = [] | |
//Create whole cell measurements for all of the above stains | |
selectDetections() | |
addColors() | |
//Create cytoplasmic measurements by subtracting the nuclear measurements from the whole cell, based on total intensity (mean value*area)
for (cell in cells) { | |
//A mess of things I could probably call within functions | |
def cellMeasurements = cell.getMeasurementList() | |
double cellArea = cell.getMeasurementList().getMeasurementValue("Cell: Area") | |
double nuclearArea = cell.getMeasurementList().getMeasurementValue("Nucleus: Area") | |
double cytoplasmicArea = cellArea-nuclearArea | |
for (key in cell.getMeasurementList().getMeasurementNames()) { | |
//check if the value is one of the added intensity measurements | |
if (key.contains("per pixel")){ | |
//check if we already have this value in the list. | |
//probably an easier way to do this outside of every cycle of the for loop | |
if (!removalList.contains(key)) removalList<<key | |
double value = cell.getMeasurementList().getMeasurementValue(key) | |
//calculate the sum of the OD measurements | |
cellOD = value * cellArea | |
//break each measurement into component parts, then take the last two | |
// which will usually contain the color vector and "mean" | |
def listOfStrings = key.tokenize(':') | |
def baseValueName = listOfStrings[-2]+listOfStrings[-1] | |
//access the nuclear value version of the base name, and use it and the whole cell value to
//calculate the rough cytoplasmic value
def cytoplasmicKey = "Cytoplasmic" + baseValueName
def nuclearKey = "Nuclear" + baseValueName | |
def nuclearOD = nuclearArea * cell.getMeasurementList().getMeasurementValue(nuclearKey) | |
def cytoplasmicValue = (cellOD - nuclearOD)/cytoplasmicArea | |
cellMeasurements.putMeasurement(cytoplasmicKey, cytoplasmicValue) | |
} | |
} | |
cellMeasurements.closeList() | |
} | |
removalList.each {println(it)} | |
//comment out this line if you want the whole cell measurements. | |
removalList.each {removeMeasurements(qupath.lib.objects.PathCellObject, it)} | |
fireHierarchyUpdate() | |
println "Done!" |
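The cytoplasmic arithmetic above is just a weighted subtraction of means: total stain is mean × area, and the cytoplasm gets whatever the nucleus leaves behind. A toy check of the formula, using invented numbers:

```groovy
// Hypothetical measurements for one cell
double cellArea = 100.0, cellMean = 0.5   // whole-cell mean OD
double nucArea  = 40.0,  nucMean  = 0.8   // nuclear mean OD

// Total stain = mean * area
double cellTotal = cellMean * cellArea    // 50.0
double nucTotal  = nucMean * nucArea      // 32.0

// Cytoplasmic mean = leftover stain / leftover area
double cytoMean  = (cellTotal - nucTotal) / (cellArea - nucArea)
println cytoMean   // (50 - 32) / 60 = 0.3
```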
//0.1.2 | |
//Overall purpose: Groups of points are a single point object, and are not recorded as measurements within annotation objects. | |
//This script takes a group of created points, and counts which are within certain annotation regions. | |
//https://forum.image.sc/t/manual-annotation-and-measurements/25051/5?u=research_associate | |
//Main script start | |
//Assumes Tumor and Peri-tumor regions have been created and classified. | |
//Assumes Nerve Cell objects per area have been created | |
//Assumes no unclassified annotations prior to creating script | |
pixelSize = getCurrentImageData().getServer().getPixelHeightMicrons() | |
stroma = getAnnotationObjects().findAll{it.getPathClass() == getPathClass("Stroma") && it.getROI().isArea()} | |
totalArea = 0 | |
stroma.each{ | |
totalArea += it.getROI().getArea() | |
} | |
totalArea = totalArea*pixelSize*pixelSize | |
println("total stroma "+totalArea) | |
periTumorArea = 0 | |
periTumor = getAnnotationObjects().findAll{it.getPathClass() == getPathClass("periTumor")&& it.getROI().isArea()} | |
periTumor.each{ | |
periTumorArea += it.getROI().getArea() | |
} | |
periTumorArea = periTumorArea*pixelSize*pixelSize | |
println("peritumor area "+periTumorArea) | |
tumorArea = 0 | |
tumor = getAnnotationObjects().findAll{it.getPathClass() == getPathClass("Tumor")&& it.getROI().isArea()} | |
tumor.each{ | |
tumorArea += it.getROI().getArea() | |
} | |
tumorArea = tumorArea*pixelSize*pixelSize | |
println("tumor area "+tumorArea) | |
totalPeriTumorArea = periTumorArea - tumorArea | |
println("adjusted peritumor area "+totalPeriTumorArea) | |
totalStromalArea = totalArea - periTumorArea | |
println("adjusted stroma area "+ totalStromalArea)
points = getAnnotationObjects().findAll{it.isPoint() } | |
createSelectAllObject(true); | |
resultsSummary = getAnnotationObjects().findAll{it.getPathClass() == null} | |
resultsSummary[0].setPathClass(getPathClass("Results")) | |
resultsSummary[0].getMeasurementList().putMeasurement("Stroma Area um^2", totalStromalArea) | |
resultsSummary[0].getMeasurementList().putMeasurement("Tumor Area um^2", tumorArea) | |
resultsSummary[0].getMeasurementList().putMeasurement("Peri-Tumor Area um^2",totalPeriTumorArea) | |
tumorPoints = points.findAll{it.getPathClass() == getPathClass("Tumor")} | |
totalTumorPoints = 0 | |
tumorPoints.each{totalTumorPoints += it.getROI().getPointList().size()} | |
println("tumor nerves "+totalTumorPoints)
stromaPoints = points.findAll{it.getPathClass() == getPathClass("Stroma")} | |
totalStromaPoints = 0 | |
stromaPoints.each{totalStromaPoints += it.getROI().getPointList().size()} | |
println("stroma nerves "+totalStromaPoints)
periTumorPoints = points.findAll{it.getPathClass() == getPathClass("periTumor")} | |
totalPeriTumorPoints = 0 | |
periTumorPoints.each{totalPeriTumorPoints += it.getROI().getPointList().size()} | |
println("peritumor nerves "+totalPeriTumorPoints)
resultsSummary[0].getMeasurementList().putMeasurement("Stroma Nerves per mm^2",1000000*totalStromaPoints/totalStromalArea) | |
resultsSummary[0].getMeasurementList().putMeasurement("Tumor Nerves per mm^2",1000000*totalTumorPoints/tumorArea) | |
resultsSummary[0].getMeasurementList().putMeasurement("Peri-Tumor Nerves per mm^2",1000000*totalPeriTumorPoints/totalPeriTumorArea) | |
getAnnotationObjects().each{it.setLocked(true)} | |
print "Done!" |
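The density measurements at the end rely on 1 mm² = 1,000,000 µm². As a quick standalone check of that scaling, with hypothetical counts:

```groovy
int nervePoints = 12
double areaUm2 = 250000.0    // hypothetical region area in µm^2

// Scale a per-µm^2 density up to per-mm^2, as the script does
double perMm2 = 1_000_000 * nervePoints / areaUm2
println perMm2   // 48.0
```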
//Tested and working in 0.2.3 | |
//Intended to be used on small area annotations to check for particularly high R-Sq | |
//Measurements below the cutoff (between 0-1) will be ignored - not added to the measurement list | |
cutoff = 0.5 | |
//Adds R^2 value between two chosen channels within all annotations. | |
//Look in the measurement list of any given annotation | |
//MAY FAIL IF THE ANNOTATIONS ARE TOO LARGE.
def imageData = getCurrentImageData() | |
def hierarchy = imageData.getHierarchy() | |
def serverOriginal = imageData.getServer() | |
nchannels = getCurrentImageData().getServer().nChannels() | |
String path = serverOriginal.getPath() | |
double downsample = 1.0 | |
ImageServer<BufferedImage> server = serverOriginal | |
println("Channel count: "+nchannels)
getAnnotationObjects().each{ | |
//Get the bounding box region around the target detection | |
roi = it.getROI() | |
request = RegionRequest.createInstance(path, downsample, roi) | |
pathImage = IJTools.convertToImagePlus(server, request) | |
imp = IJTools.convertToImagePlus(server, request).getImage() | |
//println(imp.getClass()) | |
//Extract the first channel as a list of pixel values | |
for (c = 1; c<=nchannels; c++){ | |
firstChanImage = imp.getStack().getProcessor(c) | |
firstChanImage = firstChanImage.convertToFloatProcessor() //Needed to handle big numbers | |
ch1Pixels = firstChanImage.getPixels() | |
//Create a mask so that only the pixels we want from the bounding box area are used in calculations | |
bpSLICs = createObjectMask(pathImage, it).getPixels() | |
int size = ch1Pixels.size() | |
//Cycle through all remaining channels to compare them to channel i | |
for (k = c+1;k<=nchannels;k++){ | |
secondChanImage= imp.getStack().getProcessor(k) | |
secondChanImage=secondChanImage.convertToFloatProcessor() | |
ch2Pixels = secondChanImage.getPixels() | |
ch1 = [] | |
ch2 = [] | |
for (i=0; i<size; i++){
if(bpSLICs[i]){ | |
ch1<<ch1Pixels[i] | |
ch2<<ch2Pixels[i] | |
} | |
} | |
def points = new double [ch1.size()][2] | |
for(i=0;i < ch1.size(); i++){
points[i][0] = ch1[i] | |
points[i][1] = ch2[i] | |
} | |
def regression = new org.apache.commons.math3.stat.regression.SimpleRegression() | |
regression.addData(points) | |
double r2 = regression.getRSquare() | |
double slope = regression.getSlope() | |
String name = c+"+"+k+" R^2" | |
String slopeName = c+"+"+k+" slope" | |
if (r2 > cutoff){ | |
it.getMeasurementList().putMeasurement(name, r2) | |
it.getMeasurementList().putMeasurement(slopeName, slope) | |
} | |
} | |
} | |
} | |
def createObjectMask(PathImage pathImage, PathObject object) { | |
//create a byteprocessor that is the same size as the region we are analyzing | |
def bp = new ByteProcessor(pathImage.getImage().getWidth(), pathImage.getImage().getHeight()) | |
//create a value to fill into the "good" area | |
bp.setValue(1.0) | |
def roi = object.getROI() | |
roiIJ = IJTools.convertToIJRoi(roi, pathImage) | |
bp.fill(roiIJ) | |
//fill the ROI with the setValue to create the mask, the other values should be 0 | |
return bp | |
} | |
import java.awt.image.BufferedImage
import ij.process.ByteProcessor
import ij.process.ImageProcessor
import qupath.imagej.tools.IJTools
import qupath.lib.images.PathImage
import qupath.lib.images.servers.ImageServer
import qupath.lib.objects.PathObject
import qupath.lib.regions.RegionRequest
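The mask logic in createObjectMask works because nonzero byte values are truthy in Groovy, so pairing up pixels for the regression reduces to filtering two flattened buffers by a third. A sketch with hypothetical buffers:

```groovy
// Hypothetical flattened channel buffers and ROI mask (1 = inside, 0 = outside)
float[] ch1  = [1f, 2f, 3f, 4f]
float[] ch2  = [10f, 20f, 30f, 40f]
byte[]  mask = [1, 0, 1, 1]

def pairs = []
for (int i = 0; i < mask.length; i++) {
    // Groovy treats the nonzero mask bytes as true, as in the script above
    if (mask[i])
        pairs << [ch1[i], ch2[i]]
}
println pairs   // pixel 1 is excluded by the mask
```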
//Calculate the Rsquared value to look for linear relationships between two measurements. | |
//See complex scripts for a GUI and plots | |
//0.1.2 and 0.2.0 | |
//Use the findAll statement to select specific classes of cells | |
//it.getPathClass() == getPathClass("Tumor") | |
cells = getCellObjects().findAll{it} | |
def points = new double [cells.size()][2]
for(i=0;i < cells.size(); i++){
points[i][0] = measurement(cells[i], "Nucleus: Area"); | |
points[i][1] = measurement(cells[i], "Nucleus: Perimeter") | |
} | |
line = bestFit(points) | |
//bestFit snagged from | |
//https://blog.kenweiner.com/2008/12/groovy-best-fit-line.html | |
def bestFit(pts) { | |
// Find sums of x, y, xy, x^2 | |
n = pts.size() | |
xSum = pts.collect() {p -> p[0]}.sum() | |
ySum = pts.collect() {p -> p[1]}.sum() | |
xySum = pts.collect() {p -> p[0]*p[1]}.sum() | |
xSqSum = pts.collect() {p -> p[0]*p[0]}.sum() | |
// Find m and b such that y = mx + b | |
// m is the slope of the line and b is the y-intercept | |
m = (n*xySum - xSum*ySum) / (n*xSqSum - xSum*xSum) | |
b = (ySum - m*xSum) / n | |
// Find start and end points based on the left-most and right-most points | |
x1 = pts.collect() {p -> p[0]}.min() | |
y1 = m*x1 + b | |
x2 = pts.collect() {p -> p[0]}.max() | |
y2 = m*x2 + b | |
// endpoints of the fitted line, useful for plotting: [[x1, y1], [x2, y2]]
println("slope :"+m+" intercept :"+b) | |
line = [m,b,ySum] | |
return (line) | |
} | |
meanY = line[2]/points.size() | |
pointError = [] | |
lineError = [] | |
for (i=0; i<points.size(); i++){
pointError << (points[i][1]-meanY)*(points[i][1]-meanY) | |
lineError << (line[0]*points[i][0]+line[1] - meanY)*(line[0]*points[i][0]+line[1] - meanY) | |
} | |
println("R^2 = "+ lineError.sum()/pointError.sum()) | |
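The bestFit sums implement ordinary least squares, and the R² step compares regression variance against total variance. A compact sketch of both on perfectly linear toy data, where R² should come out as exactly 1:

```groovy
// Hypothetical points lying on the line y = 2x + 1
def pts = [[1.0, 3.0], [2.0, 5.0], [3.0, 7.0], [4.0, 9.0]]
int n = pts.size()
double xSum   = pts.sum { it[0] }
double ySum   = pts.sum { it[1] }
double xySum  = pts.sum { it[0] * it[1] }
double xSqSum = pts.sum { it[0] * it[0] }

// Ordinary least squares: y = mx + b
double m = (n*xySum - xSum*ySum) / (n*xSqSum - xSum*xSum)
double b = (ySum - m*xSum) / n

// R^2 = regression sum of squares / total sum of squares
double meanY = ySum / n
double ssTot = pts.sum { (it[1] - meanY) * (it[1] - meanY) }
double ssReg = pts.sum { double yHat = m*it[0] + b; (yHat - meanY) * (yHat - meanY) }
double r2 = ssReg / ssTot
println "m=$m b=$b r2=$r2"
```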
//Checks for all detections within a given annotation, DOES NOT EXCLUDE DETECTIONS WITHIN SUB-ANNOTATIONS. | |
//That last bit should make it compatible with trained classifiers. | |
//0.1.2 | |
import qupath.lib.objects.PathDetectionObject | |
def imageData = getCurrentImageData() | |
def server = imageData.getServer() | |
def pixelSize = server.getPixelHeightMicrons() | |
Set classList = [] | |
for (object in getAllObjects().findAll{it.isDetection() /*|| it.isAnnotation()*/}) { | |
classList << object.getPathClass() | |
} | |
println(classList) | |
hierarchy = getCurrentHierarchy() | |
for (annotation in getAnnotationObjects()){ | |
for (aClass in classList){ | |
if (aClass){ | |
def tiles = hierarchy.getDescendantObjects(annotation,null, PathDetectionObject).findAll{it.getPathClass() == aClass} | |
double totalArea = 0 | |
for (def tile in tiles){ | |
totalArea += tile.getROI().getArea() | |
} | |
annotation.getMeasurementList().putMeasurement(aClass.getName()+" area px", totalArea) | |
annotation.getMeasurementList().putMeasurement(aClass.getName()+" area um^2", totalArea*pixelSize*pixelSize) | |
def annotationArea = annotation.getROI().getArea() | |
annotation.getMeasurementList().putMeasurement(aClass.getName()+" area %", totalArea/annotationArea*100) | |
} | |
} | |
} | |
println("done") |
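The per-class loop above boils down to grouping tile areas by class and dividing each sum by the annotation area. A sketch of that aggregation with invented numbers:

```groovy
// Hypothetical tile areas (px^2) grouped by class within one annotation
def tileAreas = ["Tumor": [100.0, 150.0], "Stroma": [250.0]]
double annotationArea = 1000.0

// Percent coverage of the annotation per class
def percentByClass = tileAreas.collectEntries { cls, areas ->
    [(cls): (double) (areas.sum() / annotationArea * 100)]
}
println percentByClass
```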
// Save the total value of your subcellular detection intensities to the cell measurement list so that it may be exported | |
// with the cell, or used for classification | |
//0.1.2 and 0.2.0 | |
// This value could then be divided by the total area of subcellular detection (Num spots, if Expected spot size is left as 1) | |
// for the mean intensity | |
// Create the name of the new measurement, in this case Channel 3 of a fluorescent image. | |
// ONLY the "Channel 3" should change to the name of the stain you are measuring, for example "DAB" in a brightfield image | |
def subcellularDetectionChannel = "Subcellular cluster: Channel 3: " | |
def newKey = subcellularDetectionChannel+"Mean Intensity" | |
//This step ensures that there is at least a measurement value of 0 in each cell | |
for (def cell : getCellObjects()) { | |
def ml = cell.getMeasurementList() | |
ml.putMeasurement(newKey, 0) | |
} | |
//Create a list of all subcellular objects | |
def subCells = getObjects({p -> p.class == qupath.imagej.detect.cells.SubcellularDetection.SubcellularObject.class}) | |
// Loop through all subcellular detections | |
for (c in subCells) { | |
// Find the containing cell | |
def cell = c.getParent() | |
def ml = cell.getMeasurementList() | |
double area = c.getMeasurementList().getMeasurementValue( subcellularDetectionChannel+"Area") | |
double intensity = c.getMeasurementList().getMeasurementValue( subcellularDetectionChannel+"Mean channel intensity") | |
//calculate the total intensity of stain in this subcellular object, and add it to the total | |
double stain = area*intensity | |
double x = cell.getMeasurementList().getMeasurementValue(newKey); | |
x = x+stain | |
ml.putMeasurement(newKey, x) | |
} | |
println("Total subcellular stain intensity added to cell measurement list as " + newKey) |
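The accumulation above treats each spot's total stain as area × mean; dividing the grand total by the summed area then recovers an overall mean, as the header comment suggests. A toy illustration:

```groovy
// Hypothetical subcellular spots for one cell: [area, mean intensity]
def spots = [[2.0, 10.0], [3.0, 4.0]]

// Total stain per spot is area * mean; sum them as the loop above does
double total = spots.sum { it[0] * it[1] }         // 2*10 + 3*4 = 32
// Mean intensity over all spot area
double meanIntensity = total / spots.sum { it[0] } // 32 / 5 = 6.4
println meanIntensity
```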
//0.4.x | |
// from https://forum.image.sc/t/importing-cell-types-back-into-qupath-by-object-id/76718/8?u=research_associate | |
// Path to CSV file | |
def path = "/path/to/cellmeta.csv" | |
// Color separator | |
def delim = "," | |
// Get a map from cell ID -> cell | |
def cells = getCellObjects() | |
def cellsById = cells.groupBy(c -> c.getID().toString()) | |
// Read lines | |
def lines = new File(path).readLines() | |
def header = lines.pop().split(delim) | |
// Handle each line | |
for (def line in lines) { | |
def map = lineToMap(header, line.split(delim)) | |
def id = map['Object ID'] | |
def cell = cellsById[id] | |
if (cell == null) { | |
println "WARN: No cell found for $id" | |
continue | |
} | |
// Can set a list of classifications like this (will be auto-converted to PathClass in v0.4) | |
cell.classifications = [map['PhenoGraph_clusters']] | |
} | |
// Helper function to create a map from column headings -> values | |
Map lineToMap(String[] header, String[] content) { | |
def map = [:] | |
if (header.size() != content.size()) { | |
throw new IllegalArgumentException("Header length doesn't match the content length!") | |
} | |
for (int i = 0; i < header.size(); i++) | |
map[header[i]] = content[i] | |
return map | |
} |
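lineToMap just zips the header row against each content row. An equivalent standalone sketch, using hypothetical CSV content in place of a real file:

```groovy
def delim = ","
// Hypothetical header and data rows from a cell-metadata CSV
def header  = "Object ID,PhenoGraph_clusters".split(delim) as List
def content = "abc-123,Cluster 7".split(delim) as List

// Pair each heading with its value, which is exactly what lineToMap does
def map = [header, content].transpose().collectEntries { h, v -> [(h): v] }
println map
```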
Oops, only saw this now. Sorry, Gists don't generate any warnings about comments and I don't check for them that often. The best place to get help is the image.sc forum. If you include a complete description of your project with some images about what you want to try to accomplish, I might be able to help. Otherwise, stick with it, I was in pretty much the same boat a few years ago.
What I think you are asking about though is using the Subcellular option. You can generate subcellular objects in the Analyze menu, which serves as your threshold within each cell. It should also be covered in the colocalization guide on the forum.
Hi, very amateur/non-coder here. I am currently trying to analyse 11-channel images. I use cell detection to classify cells on a nuclear marker, then grow out to identify the cytoplasm. I have tried to incorporate the colocalisation script, but I am struggling to get it to run.
Is it possible to then create a mask off one channel to only measure parameters in those pixels that are part of the mask, classified by each individual cell?
I'm using QuPath 0.2, and it is the 5th channel of the image I want to mask to; 10 and 11 are DNA markers. Any help would be greatly appreciated.
Thanks