Making Measurements in QuPath
Collections of scripts harvested mainly from Pete, but also picked up from the forums.
TOC
Accessing dynamic measurements.groovy - Most annotation measurements are dynamically created when you click on the annotation, and are not accessible through the standard getMeasurement function. This is a way around that.
Affine transformation.groovy - Access more accurate measurements for the affine transformation used in the image alignment (m5/m6+).
Alignment of local cells.groovy - Check the neighborhood around each cell for similarly aligned cells.
Angles for cells.groovy - Calculate angles relative to horizontal.
Area measurements per class to annotation.groovy - Summary measurements for tile/area-based analyses. Should work for all classes present.
Area measurements to annotation.groovy - Kind of a terrible holdover, but adds size measurements to annotations. Could be altered for detection tiles, which would likely be more useful.
Cell summary measurements to annotation.groovy - Go through the cells and add some sort of summary measurement to their parent annotation. Examples might be the mean area of all cells, or the min and max intensities of cells of a certain class. Get creative.
Chromaticity - cell measurement.groovy - Demonstration of how to calculate the green chromaticity using Calculate Features.
Class cell counts, percentages and density to parent annotation.groovy - Mostly the same as above, but for cells.
Class percentages to TMA measurements.groovy - Checks all cells in each core for membership within a listed set of classes.
Colocalization v4.groovy - Actually v3, and works with 0.2.0m2. Calculates Pearson's and Manders' coefficients for detections. A version for 0.2.0m7 has been added.
Colocalization 0.1.2.groovy - Version of the above script that works for 0.1.2 and 0.1.3. Does not work for 0.2.0+.
Create detection measurements.groovy - Create new detection measurements as combinations of other detection measurements; for example, the ratio of the channel 2 nuclear intensity to the channel 3 nuclear intensity.
Label cells by TMA core.groovy - Rename cells based on their parent core. Could probably be done better with getDescendantObjects().
Metadata by script in m5.groovy - Set pixel sizes by adjusting the metadata for an image.
metadata by script in m10.groovy - The same, for 0.2.0 M10.
Nearest Neighbors by class.groovy - Calculates nearest-neighbor distances.
Nuclear and cytoplasmic color vector means.groovy - Complicated script, but essentially allows you to create sets of color vectors and obtain cytoplasmic and nuclear mean values for them. Useful in complex brightfield stains; it has been used to differentiate cells in 5-stain plus hematoxylin images.
Points are in which annotations.groovy - version 1. See this thread for intended use: https://forum.image.sc/t/manual-annotation-and-measurements/25051/5?u=research_associate
RSquared calculation.groovy - Calculates R-squared values. Does not currently save them anywhere.
Tile summary measurements to parent Annotation.groovy - Creates measurements for the total area and percentage area for each class. Percentages are based on the annotation area; a different calculation would be needed if you have a "whitespace" tile type.
Total subcellular intensity to cell value.groovy - Sums the total intensity of subcellular detections (area * mean intensity, summed).
Primary functions here include:
Using "hierarchy = getCurrentHierarchy()" to get access to the hierarchy, so that you can more easily access subsets of cells.
Using findAll{true/false statement here} to generate lists of objects you want to perform operations on.
The following keeps all objects classified as Positive within whatever collection precedes findAll (see the sketch just below):
.findAll{it.getPathClass() == getPathClass("Positive")}
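For example, a minimal sketch (assuming cells have already been detected and some carry the Positive class) that collects just the Positive cells and counts them:
def positiveCells = getCellObjects().findAll{it.getPathClass() == getPathClass("Positive")}
print "Positive cells: " + positiveCells.size()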
The simplest way to access a measurement is measurement(object, "measurement name").
So if I wanted to print the nuclear area of each of my cells, for some reason:
getCellObjects().each{
print measurement(it, "Nucleus: Area")
}
That cycles through each cell and prints the nuclear area of "it" (the current cell).
The following methods access the measurement list, which is the list you see in the lower right of the Hierarchy tab when selecting an object:
getMeasurementList()
getMeasurementValue(key)
putMeasurement(key, value)
Sometimes you may want to search an object's list using:
ml = object.getMeasurementList()
to generate a list called ml.
For any given list of objects, you could also use
getCellObjects().each{ measurement(it, "Nucleus: Area")}
to access the nuclear area of each cell.
Other times, you may know exactly what you want to modify, and can just use:
object.getMeasurementList().putMeasurement(key, value)
For adding a micrometer symbol in 0.1.2, use " + qupath.lib.common.GeneralTools.micrometerSymbol() + " inside your measurement name string.
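Putting those pieces together, here is a minimal sketch that adds a derived measurement to every Positive cell. The measurement names "Nucleus: Area" and "Cell: Area" and the new key "Nucleus/Cell area ratio" are only examples; check your own measurement lists for the exact strings.
getCellObjects().findAll{it.getPathClass() == getPathClass("Positive")}.each{
    //read two existing measurements (names are examples)
    double nucleusArea = measurement(it, "Nucleus: Area")
    double cellArea = measurement(it, "Cell: Area")
    //write the derived value back to the cell's measurement list
    it.getMeasurementList().putMeasurement("Nucleus/Cell area ratio", nucleusArea/cellArea)
}
fireHierarchyUpdate()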
//Courtesy of Olivier Burri on QuPath Gitter | |
//For 0.1.2 | |
//import qupath.lib.gui.models.ObservableMeasurementTableData | |
//For 0.2.0 | |
import qupath.lib.gui.measure.ObservableMeasurementTableData | |
def ob = new ObservableMeasurementTableData(); | |
def annotations = getAnnotationObjects() | |
// This line creates all the measurements | |
ob.setImageData(getCurrentImageData(), annotations); | |
annotations.each { annotation->println( ob.getNumericValue(annotation, "H-score") ) | |
} | |
/* | |
Using this script to calculate circularity for annotations | |
import qupath.lib.gui.models.ObservableMeasurementTableData | |
def ob = new ObservableMeasurementTableData(); | |
def annotations = getAnnotationObjects() | |
// This line creates all the measurements | |
ob.setImageData(getCurrentImageData(), annotations); | |
annotations.each { | |
area=ob.getNumericValue(it, "Area µm^2") | |
perimeter=ob.getNumericValue(it, "Perimeter µm") | |
circularity = 4*3.14159*area/(perimeter*perimeter) | |
it.getMeasurementList().putMeasurement("Circularity", circularity) | |
} | |
*/ |
//0.2.0 | |
import qupath.lib.gui.align.ImageServerOverlay | |
def overlay = getCurrentViewer().getCustomOverlayLayers().find {it instanceof ImageServerOverlay} | |
print overlay.getAffine()
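If you also need to apply the transform to coordinates, the JavaFX Affine returned above can map points directly; a minimal sketch (the coordinates are placeholders, and whether the mapping runs from the moving image to the base image or the reverse depends on how the alignment was set up):
//map an example point with the alignment transform
def affine = overlay.getAffine()
print affine.transform(1000, 2000)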
//Uses Ben Pearson's script from https://groups.google.com/forum/#!searchin/qupath-users/rotate%7Csort:date/qupath-users/UvkNb54fYco/ri_4K6tiCwAJ | |
//Creates an area defined by the ortho and paradist measurements (orthogonal to the cell's orientation or parallel) | |
//Calculates how similarly the cells with centroids inside of that area are aligned. | |
//Requires "Angles for cells.groovy" script below to be run first. | |
//0.2.0 | |
//Edit below for 0.1.2 | |
print "running, please wait, Very Slow Process." | |
def server = getCurrentImageData().getServer() | |
//For 0.1.2 | |
//def sizePixels = server.getAveragedPixelSizeMicrons() | |
//For 0.2.0 | |
sizePixels = server.getPixelCalibration().getAveragedPixelSizeMicrons() | |
//EDIT SHAPE OF REGION TO CHECK FOR ALIGNMENT HERE | |
def orthoDist = 40/sizePixels | |
def paraDist = 30/sizePixels | |
def triangleArea(double Ax,double Ay,double Bx,double By,double Cx,double Cy) { | |
return (((Ax*(By- Cy) + Bx*(Cy-Ay) + Cx*(Ay-By))/2).abs()) | |
} | |
//def DIST = 10 | |
def cellList = getCellObjects() | |
//scan through all cells | |
getCellObjects().each{ | |
//get some values for the current cell
def originalAngle = it.getMeasurementList().getMeasurementValue("Cell angle") | |
def radians = Math.toRadians(-originalAngle) | |
def cellX = it.getNucleusROI().getCentroidX() | |
def cellY = it.getNucleusROI().getCentroidY() | |
//create a list of nearby cells | |
/* SIMPLE VERSION, A BOX | |
def nearbyCells = cellList.findAll{ c-> | |
DIST > server.getAveragedPixelSizeMicrons()*Math.sqrt(( c.getROI().getCentroidX() - cellX)*(c.getROI().getCentroidX() - cellX)+( c.getROI().getCentroidY() - cellY)*(c.getROI().getCentroidY() - cellY)); | |
} */ | |
/* | |
def roi = new RectangleROI(cellX-orthoDist/2, cellY-paraDist/2, orthoDist, paraDist) | |
def points = roi.getPolygonPoints() | |
def roiPointsArx = points.x.toArray() | |
def roiPointsAry = points.y.toArray() | |
*/ | |
def roiPointsArx = [cellX-paraDist/2, cellX+paraDist/2, cellX+paraDist/2, cellX-paraDist/2 ] | |
def roiPointsAry = [cellY+orthoDist/2, cellY+orthoDist/2, cellY-orthoDist/2, cellY-orthoDist/2 ] | |
for (i= 0; i< roiPointsAry.size(); i++) | |
{ | |
// correct the center to 0 | |
roiPointsArx[i] = roiPointsArx[i] - cellX | |
roiPointsAry[i] = roiPointsAry[i] - cellY | |
//Makes prime placeholders, which allows the calculations x'=xcos(theta)-ysin(theta), y'=ycos(theta)+xsin(theta) to be performed | |
double newPointX = roiPointsArx[i] | |
double newPointY = roiPointsAry[i] | |
// then rotate | |
roiPointsArx[i] = (newPointX * Math.cos(radians)) - (newPointY * Math.sin(radians)) | |
roiPointsAry[i] = (newPointY * Math.cos(radians)) + (newPointX * Math.sin(radians)) | |
// then move it back | |
roiPointsArx[i] = roiPointsArx[i] + cellX | |
roiPointsAry[i] = roiPointsAry[i] + cellY | |
} | |
//addObject(new PathAnnotationObject(new PolygonROI(roiPointsArx as float[], roiPointsAry as float[], -1, 0, 0))) | |
def nearbyCells = cellList.findAll{ orthoDist*paraDist-5 < ( triangleArea(roiPointsArx[0], roiPointsAry[0],roiPointsArx[1] ,roiPointsAry[1], it.getNucleusROI().getCentroidX(),it.getNucleusROI().getCentroidY()) | |
+triangleArea(roiPointsArx[1], roiPointsAry[1],roiPointsArx[2] ,roiPointsAry[2], it.getNucleusROI().getCentroidX(),it.getNucleusROI().getCentroidY()) | |
+triangleArea(roiPointsArx[2], roiPointsAry[2],roiPointsArx[3] ,roiPointsAry[3], it.getNucleusROI().getCentroidX(),it.getNucleusROI().getCentroidY()) | |
+triangleArea(roiPointsArx[3], roiPointsAry[3],roiPointsArx[0] ,roiPointsAry[0], it.getNucleusROI().getCentroidX(),it.getNucleusROI().getCentroidY())) && | |
orthoDist*paraDist+5 >( triangleArea(roiPointsArx[0], roiPointsAry[0],roiPointsArx[1] ,roiPointsAry[1], it.getNucleusROI().getCentroidX(),it.getNucleusROI().getCentroidY()) | |
+triangleArea(roiPointsArx[1], roiPointsAry[1],roiPointsArx[2] ,roiPointsAry[2], it.getNucleusROI().getCentroidX(),it.getNucleusROI().getCentroidY()) | |
+triangleArea(roiPointsArx[2], roiPointsAry[2],roiPointsArx[3] ,roiPointsAry[3], it.getNucleusROI().getCentroidX(),it.getNucleusROI().getCentroidY()) | |
+triangleArea(roiPointsArx[3], roiPointsAry[3],roiPointsArx[0] ,roiPointsAry[0], it.getNucleusROI().getCentroidX(),it.getNucleusROI().getCentroidY())) | |
} | |
//print(nearbyCells) | |
//prevent divide by zero errors | |
if (nearbyCells.size() < 2){ it.getMeasurementList().putMeasurement("LMADSD", 90); return;} | |
def angleList = [] | |
//within the local cells, find the differences in angle | |
for (cell in nearbyCells){ | |
def currentAngle = cell.getMeasurementList().getMeasurementValue("Cell angle") | |
def angleDifference = (currentAngle - originalAngle).abs() | |
//angles between two objects should be at most 90 degrees, or perpendicular | |
if (angleDifference > 90){ | |
angleList << (180 - Math.max(currentAngle, originalAngle)+Math.min(currentAngle,originalAngle)) | |
} else {angleList << angleDifference} | |
} | |
//complete the list with the original data point | |
//angleList << 0 | |
//calculate the standard deviation of the angular differences | |
def localAngleDifferenceMean = angleList.sum()/angleList.size() | |
def variance = 0 | |
angleList.each{v-> variance += (v-localAngleDifferenceMean)*(v-localAngleDifferenceMean)} | |
def stdDev = Math.sqrt(variance/(angleList.size())) | |
// add measurement for local, mean angle difference, standard deviation | |
//println("stddev "+stdDev) | |
it.getMeasurementList().putMeasurement("LMADSD", stdDev) | |
} | |
print "done" |
//from Pete | |
//0.2.0 | |
//Change for 0.1.2 shown below | |
import qupath.imagej.objects.* | |
getCellObjects().each{ | |
def ml = it.getMeasurementList() | |
def roi = it.getNucleusROI() | |
//for 0.1.2 | |
//def roiIJ = ROIConverterIJ.convertToIJRoi(roi, 0, 0, 1) | |
roiIJ = IJTools.convertToIJRoi(roi, 0, 0, 1) | |
def angle = roiIJ.getFeretValues()[1] | |
ml.putMeasurement('Nucleus angle', angle) | |
ml.close() | |
} | |
fireHierarchyUpdate() | |
print "done" |
//Checks for all detections within a given annotation, DOES NOT EXCLUDE DETECTIONS WITHIN SUB-ANNOTATIONS. | |
//That last bit should make it compatible with trained classifiers. | |
//Result is the percentage area for all detections of a given class being applied as a measurement to the parent annotation. | |
//0.1.2 | |
import qupath.lib.objects.PathDetectionObject | |
def imageData = getCurrentImageData() | |
def server = imageData.getServer() | |
def pixelSize = server.getPixelHeightMicrons() | |
Set classList = [] | |
for (object in getAllObjects().findAll{it.isDetection() /*|| it.isAnnotation()*/}) { | |
classList << object.getPathClass() | |
} | |
println(classList) | |
hierarchy = getCurrentHierarchy() | |
for (annotation in getAnnotationObjects()){ | |
def annotationArea = annotation.getROI().getArea() | |
for (aClass in classList){ | |
if (aClass){ | |
def tiles = hierarchy.getDescendantObjects(annotation,null, PathDetectionObject).findAll{it.getPathClass() == aClass} | |
double totalArea = 0 | |
for (def tile in tiles){ | |
totalArea += tile.getROI().getArea() | |
} | |
annotation.getMeasurementList().putMeasurement(aClass.getName()+" area px", totalArea) | |
annotation.getMeasurementList().putMeasurement(aClass.getName()+" area um^2", totalArea*pixelSize*pixelSize) | |
annotation.getMeasurementList().putMeasurement(aClass.getName()+" area %", totalArea/annotationArea*100) | |
} | |
} | |
} | |
println("done") |
//Useful when using detection objects returned from ImageJ macros. Note that areas are in pixels and would need to be converted to microns | |
//0.1.2 | |
import qupath.lib.objects.PathDetectionObject | |
hierarchy = getCurrentHierarchy() | |
for (annotation in getAnnotationObjects()){ | |
//Block 1 | |
def tiles = hierarchy.getDescendantObjects(annotation,null, PathDetectionObject) | |
double totalArea = 0 | |
for (def tile in tiles){ | |
totalArea += tile.getROI().getArea() | |
} | |
annotation.getMeasurementList().putMeasurement("Marked area px", totalArea) | |
def annotationArea = annotation.getROI().getArea() | |
annotation.getMeasurementList().putMeasurement("Marked area %", totalArea/annotationArea*100) | |
} | |
println("done") |
//Sometimes you may want to add a summary measurement from cells within each annotation to the annotation itself. | |
//This will allow you to see that measurement in the "Show Annotation Measurements" list. | |
//In this case, it will add the total area taken up by Positive class cells within each annotation to their parent | |
//annotation as "Positive Area" | |
//0.1.2 | |
import qupath.lib.objects.PathCellObject | |
hierarchy = getCurrentHierarchy() | |
for (annotation in getAnnotationObjects()){ | |
//Block 1 | |
def positiveCells = hierarchy.getDescendantObjects(annotation,null, PathCellObject).findAll{it.getPathClass() == getPathClass("Positive")} | |
double totalArea = 0 | |
for (def cell in positiveCells){ | |
totalArea += cell.getMeasurementList().getMeasurementValue("Cell: Area") | |
} | |
//Comment the following in or out depending on whether you want to see the output | |
//println("Mean area for Positive is: " + totalArea/positiveCells.size) | |
//println("Total Positive Area is: " + totalArea) | |
//Add the total as "Positive Area" to each annotation. | |
annotation.getMeasurementList().putMeasurement("Positive Area", totalArea) | |
//Add the percentage positive area to the annotations measurement list | |
def annotationArea = annotation.getROI().getArea() | |
annotation.getMeasurementList().putMeasurement("Positive Area %", totalArea/annotationArea*100) | |
//Block 2 - add as many blocks as you have classes | |
//... | |
} | |
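As a concrete example of the "summary measurement" idea described in the header comments, a sketch that adds the mean nuclear area of Positive cells to each annotation (the name "Positive mean nuclear area" is just an example):
import qupath.lib.objects.PathCellObject
hierarchy = getCurrentHierarchy()
for (annotation in getAnnotationObjects()){
    def positiveCells = hierarchy.getDescendantObjects(annotation, null, PathCellObject).findAll{it.getPathClass() == getPathClass("Positive")}
    if (positiveCells.size() > 0){
        double meanArea = positiveCells.collect{measurement(it, "Nucleus: Area")}.sum()/positiveCells.size()
        annotation.getMeasurementList().putMeasurement("Positive mean nuclear area", meanArea)
    }
}
fireHierarchyUpdate()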
// https://forum.image.sc/t/detecting-purple-chromogen-classifying-cells-based-on-green-chromaticity/35576/5 | |
//0.2.0 | |
// Add intensity features (cells already detected) | |
selectCells(); | |
runPlugin('qupath.lib.algorithms.IntensityFeaturesPlugin', '{"pixelSizeMicrons": 1.0, "region": "Cell nucleus", "tileSizeMicrons": 25.0, "colorOD": false, "colorStain1": false, "colorStain2": false, "colorStain3": false, "colorRed": true, "colorGreen": true, "colorBlue": true, "colorHue": false, "colorSaturation": false, "colorBrightness": false, "doMean": true, "doStdDev": false, "doMinMax": false, "doMedian": false, "doHaralick": false, "haralickDistance": 1, "haralickBins": 32}'); | |
runPlugin('qupath.lib.algorithms.IntensityFeaturesPlugin', '{"pixelSizeMicrons": 1.0, "region": "ROI", "tileSizeMicrons": 25.0, "colorOD": false, "colorStain1": false, "colorStain2": false, "colorStain3": false, "colorRed": true, "colorGreen": true, "colorBlue": true, "colorHue": false, "colorSaturation": false, "colorBrightness": false, "doMean": true, "doStdDev": false, "doMinMax": false, "doMedian": false, "doHaralick": false, "haralickDistance": 1, "haralickBins": 32}'); | |
// Add chromaticity measurements | |
def nucleusMeasurement = "Nucleus: 1.00 µm per pixel: %s: Mean" | |
def cellMeasurement = "ROI: 1.00 µm per pixel: %s: Mean" | |
for (cell in getCellObjects()) { | |
def measurementList = cell.getMeasurementList() | |
addGreenChromaticity(measurementList, nucleusMeasurement) | |
addGreenChromaticity(measurementList, cellMeasurement) | |
measurementList.close() | |
} | |
fireHierarchyUpdate() | |
def addGreenChromaticity(measurementList, measurement) { | |
double r = measurementList.getMeasurementValue(String.format(measurement, "Red")) | |
double g = measurementList.getMeasurementValue(String.format(measurement, "Green")) | |
double b = measurementList.getMeasurementValue(String.format(measurement, "Blue")) | |
def name = String.format(measurement, "Green chromaticity") | |
measurementList.putMeasurement(name, g/Math.max(1, r+g+b)) | |
} |
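One way to use the new measurement is QuPath's built-in intensity classification; a minimal sketch (the 0.4 threshold is arbitrary and should be tuned for your images):
//classify cells as Positive/Negative using the cell-level green chromaticity added above
setCellIntensityClassifications("ROI: 1.00 µm per pixel: Green chromaticity: Mean", 0.4)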
//Checks for all detections within a given annotation, DOES NOT EXCLUDE DETECTIONS WITHIN SUB-ANNOTATIONS. | |
//That last bit should make it compatible with trained classifiers. | |
//0.2.0 | |
import qupath.lib.objects.PathCellObject | |
imageData = getCurrentImageData() | |
server = imageData.getServer() | |
def metadata = getCurrentImageData().getServer().getOriginalMetadata() | |
def pixelSize = metadata.pixelCalibration.pixelWidth.value | |
Set classList = [] | |
for (object in getAllObjects().findAll{it.isDetection() /*|| it.isAnnotation()*/}) { | |
classList << object.getPathClass() | |
} | |
println(classList) | |
hierarchy = getCurrentHierarchy() | |
def totalCells = [] | |
for (annotation in getAnnotationObjects()){ | |
totalCells = [] | |
//qupath.lib.objects.helpers.PathObjectTools.getDescendantObjects(annotation,totalCells, PathCellObject) | |
qupath.lib.objects.PathObjectTools.getDescendantObjects(annotation,totalCells, PathCellObject) | |
for (aClass in classList){ | |
if (aClass){ | |
if (totalCells.size() > 0){ | |
cells = totalCells.findAll{it.getPathClass() == aClass} | |
//annotation.getMeasurementList().putMeasurement(aClass.getName()+" cells", cells.size()) | |
annotation.getMeasurementList().putMeasurement(aClass.getName()+" %", cells.size()*100/totalCells.size()) | |
annotationArea = annotation.getROI().getArea() | |
annotation.getMeasurementList().putMeasurement(aClass.getName()+" cells/mm^2", cells.size()/(annotationArea*pixelSize*pixelSize/1000000)) | |
} else { | |
//annotation.getMeasurementList().putMeasurement(aClass.getName()+" cells", 0) | |
annotation.getMeasurementList().putMeasurement(aClass.getName()+" %", 0) | |
annotation.getMeasurementList().putMeasurement(aClass.getName()+" cells/mm^2", 0) | |
} | |
} | |
} | |
} | |
println("done") |
//Checks for all detections within a given annotation, DOES NOT EXCLUDE DETECTIONS WITHIN SUB-ANNOTATIONS. | |
//That last bit should make it compatible with trained classifiers. | |
//0.1.2 | |
//Do you want to include cell counts? If so, make True. This can cause duplicate measurements in 1.3 and beyond | |
COUNTS = false | |
import qupath.lib.objects.PathCellObject | |
imageData = getCurrentImageData() | |
server = imageData.getServer() | |
pixelSize = server.getPixelHeightMicrons() | |
Set classList = [] | |
for (object in getAllObjects().findAll{it.isDetection() /*|| it.isAnnotation()*/}) { | |
classList << object.getPathClass() | |
} | |
println(classList) | |
hierarchy = getCurrentHierarchy() | |
for (annotation in getAnnotationObjects()){ | |
totalCells = [] | |
totalCells = hierarchy.getDescendantObjects(annotation,null, PathCellObject) | |
for (aClass in classList){ | |
if (aClass){ | |
if (totalCells.size() > 0){ | |
cells = hierarchy.getDescendantObjects(annotation,null, PathCellObject).findAll{it.getPathClass() == aClass} | |
if(COUNTS){annotation.getMeasurementList().putMeasurement(aClass.getName()+" cells", cells.size())} | |
annotation.getMeasurementList().putMeasurement(aClass.getName()+" %", cells.size()*100/totalCells.size()) | |
annotationArea = annotation.getROI().getArea() | |
annotation.getMeasurementList().putMeasurement(aClass.getName()+" cells/mm^2", cells.size()/(annotationArea*pixelSize*pixelSize/1000000)) | |
} else { | |
if(COUNTS){annotation.getMeasurementList().putMeasurement(aClass.getName()+" cells", 0)} | |
annotation.getMeasurementList().putMeasurement(aClass.getName()+" %", 0) | |
annotation.getMeasurementList().putMeasurement(aClass.getName()+" cells/mm^2", 0) | |
} | |
} | |
} | |
} | |
println("done") |
// Add percentages by cell class to each TMA core, tested for | |
//0.1.2 | |
import qupath.lib.objects.PathCellObject | |
hierarchy = getCurrentHierarchy() | |
cores = hierarchy.getTMAGrid().getTMACoreList() | |
Set list = [] | |
for (object in getAllObjects().findAll{it.isDetection() /*|| it.isAnnotation()*/}) { | |
list << object.getPathClass().toString() | |
} | |
cores.each { | |
//Find the cell count in this core | |
total = hierarchy.getDescendantObjects(it, null, PathCellObject).size() | |
//Prevent divide by zero errors in empty TMA cores | |
if (total != 0){ | |
for (className in list) { | |
cellType = hierarchy.getDescendantObjects(it,null, PathCellObject).findAll{it.getPathClass() == getPathClass(className)}.size() | |
it.getMeasurementList().putMeasurement(className+" cell %", cellType/(total)*100) | |
} | |
} | |
else { | |
for (className in list) { | |
it.getMeasurementList().putMeasurement(className+" cell %", 0) | |
} | |
} | |
} |
//Version 3.1 Should work for 0.1.2 and 0.1.3 | |
//****************VALUES TO EDIT***********// | |
//Channel numbers are based on cell measurement/channel order in Brightness/contrast menu, starting with 1 | |
int FIRST_CHANNEL = 9 | |
int SECOND_CHANNEL = 9 | |
//CHOOSE ONE: "cell", "nucleus", "cytoplasm", "tile", "detection", "subcell" | |
//"detection" should be the equivalent of everything | |
String objectType = "cell" | |
//These should be figured out for a given sample to eliminate background signal | |
//Pixels below this value will not be considered for a given channel. | |
//Used for Manders coefficients only. | |
ch1Background = 100000 | |
ch2Background = 100000 | |
//***************NO TOUCHEE past here************// | |
import qupath.lib.regions.RegionRequest | |
import qupath.imagej.images.servers.ImagePlusServer | |
import qupath.imagej.images.servers.ImagePlusServerBuilder | |
import ij.process.ByteProcessor; | |
import ij.process.ImageProcessor; | |
import java.awt.image.BufferedImage | |
import ij.ImagePlus | |
import qupath.imagej.objects.ROIConverterIJ | |
import ij.process.ImageProcessor | |
import qupath.lib.roi.RectangleROI | |
import qupath.lib.images.servers.ImageServer | |
import qupath.lib.objects.PathObject | |
import qupath.imagej.helpers.IJTools | |
import qupath.lib.gui.ImageWriterTools | |
def imageData = getCurrentImageData() | |
def hierarchy = imageData.getHierarchy() | |
ImageServer<BufferedImage> serverOriginal = imageData.getServer() | |
String path = serverOriginal.getPath() | |
double downsample = 1.0 | |
def server = ImagePlusServerBuilder.ensureImagePlusWholeSlideServer(serverOriginal) | |
println("Running, please wait...") | |
//target the objects you want to analyze | |
if(objectType == "cell" || objectType == "nucleus" || objectType == "cytoplasm" ){detections = getCellObjects()} | |
if(objectType == "tile"){detections = getDetectionObjects().findAll{it.isTile()}} | |
if(objectType == "detection"){detections = getDetectionObjects()} | |
if(objectType == "subcell") {detections = getObjects({p-> p.class == qupath.imagej.detect.cells.SubcellularDetection.SubcellularObject.class})} | |
println("Count = "+ detections.size()) | |
detections.each{ | |
//Get the bounding box region around the target detection | |
roi = it.getROI() | |
region = RegionRequest.createInstance(path, downsample, roi) | |
imp = server.readImagePlusRegion(region).getImage() | |
//Extract the first channel as a list of pixel values | |
imp.setC(FIRST_CHANNEL) | |
firstChanImage = imp.getProcessor() | |
firstChanImage = firstChanImage.convertToFloatProcessor() //Needed to handle big numbers | |
ch1Pixels = firstChanImage.getPixels() | |
//Create a mask so that only the pixels we want from the bounding box area are used in calculations | |
bpSLICs = createObjectMask(firstChanImage, downsample, it, objectType).getPixels() | |
//println(bpSLICs) | |
//println(bpSLICs.getPixels()) | |
//println("ch1 size"+ch1.size()) | |
size = ch1Pixels.size() | |
imp.setC(SECOND_CHANNEL) | |
secondChanImage= imp.getProcessor() | |
secondChanImage=secondChanImage.convertToFloatProcessor() | |
ch2Pixels = secondChanImage.getPixels() | |
//use mask to extract only the useful pixels into new lists | |
//Maybe it would be faster to remove undesirable pixels instead? | |
ch1 = [] | |
ch2 = [] | |
for (i=0; i<size; i++){ | |
if(bpSLICs[i]){ | |
ch1<<ch1Pixels[i] | |
ch2<<ch2Pixels[i] | |
} | |
} | |
//Calculating the mean for Pearson's | |
ch1Premean = [] | |
ch2Premean = [] | |
for (x in ch1) ch1Premean<<(x/ch1.size()) | |
for (x in ch2) ch2Premean<<(x/ch2.size()) | |
double ch1Mean = ch1Premean.sum() | |
double ch2Mean = ch2Premean.sum() | |
//get the new number of pixels to be analyzed | |
size = ch1.size() | |
//Create the sum for the top half of the pearson's correlation coefficient | |
top = [] | |
for (i=0; i<size;i++){top << (ch1[i]-ch1Mean)*(ch2[i]-ch2Mean)} | |
pearsonTop = top.sum() | |
//Sums for the two bottom parts | |
botCh1 = [] | |
for (i=0; i<size;i++){botCh1<< (ch1[i]-ch1Mean)*(ch1[i]-ch1Mean)} | |
rootCh1 = Math.sqrt(botCh1.sum()) | |
botCh2 = [] | |
for (i=0; i<size;i++){botCh2 << (ch2[i]-ch2Mean)*(ch2[i]-ch2Mean)} | |
rootCh2 = Math.sqrt(botCh2.sum()) | |
pearsonBot = rootCh2*rootCh1 | |
double pearson = pearsonTop/pearsonBot | |
String name = "Pearson Corr "+FIRST_CHANNEL+"+"+SECOND_CHANNEL | |
it.getMeasurementList().putMeasurement(name, pearson) | |
//Start Manders calculations | |
double m1Top = 0 | |
for (i=0; i<size;i++){if (ch2[i] > ch2Background){m1Top += Math.max(ch1[i]-ch1Background,0)}} | |
double m1Bottom = 0 | |
for (i=0; i<size;i++){m1Bottom += Math.max(ch1[i]-ch1Background,0)} | |
double m2Top = 0 | |
for (i=0; i<size;i++){if (ch1[i] > ch1Background){m2Top += Math.max(ch2[i]-ch2Background,0)}} | |
double m2Bottom = 0 | |
for (i=0; i<size;i++){m2Bottom += Math.max(ch2[i]-ch2Background,0)} | |
//Check for divide by zero and add measurements | |
name = "M1 "+objectType+": ratio of Ch"+FIRST_CHANNEL+" intensity in Ch"+SECOND_CHANNEL+" areas" | |
double M1 = m1Top/m1Bottom | |
if (M1.isNaN()){M1 = 0} | |
it.getMeasurementList().putMeasurement(name, M1) | |
double M2 = m2Top/m2Bottom | |
if (M2.isNaN()){M2 = 0} | |
name = "M2 "+objectType+": ratio of Ch"+SECOND_CHANNEL+" intensity in Ch"+FIRST_CHANNEL+" areas" | |
it.getMeasurementList().putMeasurement(name, M2) | |
} | |
println("Done!") | |
//Making a mask. Phantom of the Opera style. | |
def createObjectMask(ImageProcessor ip, double downsample, PathObject object, String objectType) { | |
//create a byteprocessor that is the same size as the region we are analyzing | |
def bp = new ByteProcessor(ip.getWidth(), ip.getHeight()) | |
//create a value to fill into the "good" area | |
bp.setValue(1.0) | |
//extract the ROI and shift the position so that it is within the stand-alone image region | |
//Otherwise the coordinates are based off of the original image, and not just the small subsection we are analyzing | |
if (objectType == "nucleus"){ | |
def roi = object.getNucleusROI() | |
shift = roi.translate(ip.getWidth()/2-roi.getCentroidX(), ip.getHeight()/2-roi.getCentroidY()) | |
def roiIJ = ROIConverterIJ.convertToIJRoi(shift, 0, 0, downsample) | |
bp.fill(roiIJ) | |
}else if (objectType == "cytoplasm"){
def roi = object.getROI()
def nucleus = object.getNucleusROI()
//shift both ROIs relative to the cell ROI so the nucleus stays registered with the cell within the cropped region
shiftNuc = nucleus.translate(ip.getWidth()/2-roi.getCentroidX(), ip.getHeight()/2-roi.getCentroidY())
roiIJNuc = ROIConverterIJ.convertToIJRoi(shiftNuc, 0, 0, downsample)
shift = roi.translate(ip.getWidth()/2-roi.getCentroidX(), ip.getHeight()/2-roi.getCentroidY())
def roiIJ = ROIConverterIJ.convertToIJRoi(shift, 0, 0, downsample)
bp.fill(roiIJ) | |
bp.setValue(0) | |
bp.fill(roiIJNuc) | |
} else { | |
def roi = object.getROI() | |
shift = roi.translate(ip.getWidth()/2-roi.getCentroidX(), ip.getHeight()/2-roi.getCentroidY()) | |
roiIJ = ROIConverterIJ.convertToIJRoi(shift, 0, 0, downsample) | |
bp.fill(roiIJ) | |
} | |
//fill the ROI with the setValue to create the mask, the other values should be 0 | |
return bp | |
} |
//0.2.0, but null pointer exception in certain image types. Have not been able to track it down. | |
//****************VALUES TO EDIT***********// | |
//Channel numbers are based on cell measurement/channel order in Brightness/contrast menu, starting with 1 | |
int FIRST_CHANNEL = 2 | |
int SECOND_CHANNEL = 3 | |
//CHOOSE ONE: "cell", "nucleus", "cytoplasm", "tile", "detection", "subcell" | |
//"detection" should be the equivalent of everything | |
String objectType = "cell" | |
//These should be figured out for a given sample to eliminate background signal | |
//Pixels below this value will not be considered for a given channel. | |
//Used for Manders coefficients only. | |
ch1Background = 1000 | |
ch2Background = 10000 | |
//***************No touchee past here************// | |
import qupath.lib.regions.RegionRequest | |
import ij.process.ByteProcessor; | |
import ij.process.ImageProcessor; | |
import java.awt.image.BufferedImage | |
import qupath.imagej.tools.IJTools | |
import ij.process.ImageProcessor | |
import qupath.lib.images.servers.ImageServer | |
import qupath.lib.objects.PathObject | |
import qupath.lib.images.PathImage | |
import qupath.imagej.tools.PathImagePlus | |
def imageData = getCurrentImageData() | |
def hierarchy = imageData.getHierarchy() | |
def serverOriginal = imageData.getServer() | |
String path = serverOriginal.getPath() | |
double downsample = 1.0 | |
ImageServer<BufferedImage> server = serverOriginal | |
println("Running, please wait...") | |
//target the objects you want to analyze | |
if(objectType == "cell" || objectType == "nucleus" || objectType == "cytoplasm" ){detections = getCellObjects()} | |
if(objectType == "tile"){detections = getDetectionObjects().findAll{it.isTile()}} | |
if(objectType == "detection"){detections = getDetectionObjects()} | |
if(objectType == "subcell") {detections = getObjects({p-> p.class == qupath.lib.objects.PathDetectionObject.class})} | |
println("Count = "+ detections.size()) | |
detections.each{ | |
//Get the bounding box region around the target detection | |
roi = it.getROI() | |
request = RegionRequest.createInstance(path, downsample, roi) | |
pathImage = IJTools.convertToImagePlus(server, request) | |
imp = pathImage.getImage() | |
//pathImage = PathImagePlus.createPathImage(imp, request) | |
//imp.show() | |
//imps = ij.plugin.ChannelSplitter.split(imp) | |
//println(imp.getClass()) | |
//Extract the first channel as a list of pixel values | |
//firstChanImage = imps[FIRST_CHANNEL-1] | |
firstChanImage = imp.getProcessor(FIRST_CHANNEL) | |
firstChanImage = firstChanImage.convertToFloatProcessor() //Needed to handle big numbers | |
ch1Pixels = firstChanImage.getPixels() | |
//Create a mask so that only the pixels we want from the bounding box area are used in calculations | |
bpSLICs = createObjectMask(pathImage, it, objectType).getPixels() | |
//println(bpSLICs) | |
//println(bpSLICs.getPixels()) | |
//println("ch1 size"+ch1.size()) | |
size = ch1Pixels.size() | |
secondChanImage= imp.getProcessor(SECOND_CHANNEL) | |
secondChanImage=secondChanImage.convertToFloatProcessor() | |
ch2Pixels = secondChanImage.getPixels() | |
//use mask to extract only the useful pixels into new lists | |
//Maybe it would be faster to remove undesirable pixels instead? | |
ch1 = [] | |
ch2 = [] | |
for (i=0; i<size; i++){ | |
if(bpSLICs[i]){ | |
ch1<<ch1Pixels[i] | |
ch2<<ch2Pixels[i] | |
} | |
} | |
/* | |
println(ch1) | |
println(ch2) | |
println("ch1 size"+ch1.size()) | |
println("ch2 size"+ch2.size()) | |
println("ch1mean "+ch1Mean) | |
println("ch2sum "+ch2.sum()) | |
println("ch2mean "+ch2Mean) | |
*/ | |
//Calculating the mean for Pearson's | |
double ch1Mean = ch1.sum()/ch1.size() | |
double ch2Mean = ch2.sum()/ch2.size() | |
//get the new number of pixels to be analyzed | |
size = ch1.size() | |
//Create the sum for the top half of the pearson's correlation coefficient | |
top = [] | |
for (i=0; i<size;i++){top << (ch1[i]-ch1Mean)*(ch2[i]-ch2Mean)} | |
pearsonTop = top.sum() | |
//Sums for the two bottom parts | |
botCh1 = [] | |
for (i=0; i<size;i++){botCh1<< (ch1[i]-ch1Mean)*(ch1[i]-ch1Mean)} | |
rootCh1 = Math.sqrt(botCh1.sum()) | |
botCh2 = [] | |
for (i=0; i<size;i++){botCh2 << (ch2[i]-ch2Mean)*(ch2[i]-ch2Mean)} | |
rootCh2 = Math.sqrt(botCh2.sum()) | |
pearsonBot = rootCh2*rootCh1 | |
double pearson = pearsonTop/pearsonBot | |
String name = "Pearson Corr "+objectType+":"+FIRST_CHANNEL+"+"+SECOND_CHANNEL | |
it.getMeasurementList().putMeasurement(name, pearson) | |
//Start Manders calculations | |
double m1Top = 0 | |
for (i=0; i<size;i++){if (ch2[i] > ch2Background){m1Top += Math.max(ch1[i]-ch1Background,0)}} | |
double m1Bottom = 0 | |
for (i=0; i<size;i++){m1Bottom += Math.max(ch1[i]-ch1Background,0)} | |
double m2Top = 0 | |
for (i=0; i<size;i++){if (ch1[i] > ch1Background){m2Top += Math.max(ch2[i]-ch2Background,0)}} | |
double m2Bottom = 0 | |
for (i=0; i<size;i++){m2Bottom += Math.max(ch2[i]-ch2Background,0)} | |
//Check for divide by zero and add measurements | |
name = "M1 "+objectType+": ratio of Ch"+FIRST_CHANNEL+" intensity in Ch"+SECOND_CHANNEL+" areas" | |
double M1 = m1Top/m1Bottom | |
if (M1.isNaN()){M1 = 0} | |
it.getMeasurementList().putMeasurement(name, M1) | |
double M2 = m2Top/m2Bottom | |
if (M2.isNaN()){M2 = 0} | |
name = "M2 "+objectType+": ratio of Ch"+SECOND_CHANNEL+" intensity in Ch"+FIRST_CHANNEL+" areas" | |
it.getMeasurementList().putMeasurement(name, M2) | |
} | |
println("Done!") | |
//Making a mask. Phantom of the Opera style. | |
def createObjectMask(PathImage pathImage, PathObject object, String objectType) { | |
//create a byteprocessor that is the same size as the region we are analyzing | |
def bp = new ByteProcessor(pathImage.getImage().getWidth(), pathImage.getImage().getHeight()) | |
//create a value to fill into the "good" area | |
bp.setValue(1.0) | |
if (objectType == "nucleus"){ | |
def roi = object.getNucleusROI() | |
def roiIJ = IJTools.convertToIJRoi(roi, pathImage) | |
bp.fill(roiIJ) | |
}else if (objectType == "cytoplasm"){ | |
def nucleus = object.getNucleusROI() | |
roiIJNuc = IJTools.convertToIJRoi(nucleus, pathImage) | |
def roi = object.getROI() | |
//fill in the whole cell area | |
def roiIJ = IJTools.convertToIJRoi(roi, pathImage) | |
bp.fill(roiIJ) | |
//remove the nucleus | |
bp.setValue(0) | |
bp.fill(roiIJNuc) | |
} else { | |
def roi = object.getROI() | |
roiIJ = IJTools.convertToIJRoi(roi, pathImage) | |
bp.fill(roiIJ) | |
} | |
//fill the ROI with the setValue to create the mask, the other values should be 0 | |
return bp | |
} |
//Generating measurements in detections from other measurements created in QuPath | |
//0.1.2 and 0.2.0 | |
detections = getDetectionObjects() | |
detections.each{ | |
relativeDistribution2 = measurement(it, "ROI: 2.00 µm per pixel: Channel 2: Mean")/measurement(it, "ROI: 2.00 µm per pixel: Channel 2: Median") | |
it.getMeasurementList().putMeasurement("RelativeCh2", relativeDistribution2) | |
} | |
println("done") |
//better way to label cells by TMA core | |
//0.1.2 | |
hierarchy = getCurrentHierarchy() | |
hierarchy.getTMAGrid().getTMACoreList().each{ | |
coreName = it.getName() | |
hierarchy.getDescendantObjects(it, null, qupath.lib.objects.PathCellObject).each{ c-> | |
c.setName(coreName) | |
} | |
} | |
/* Version to specifically rename objects in annotations one level below the TMA. | |
hierarchy = getCurrentHierarchy() | |
hierarchy.getTMAGrid().getTMACoreList().each{ | |
coreName = it.getName() | |
hierarchy.getDescendantObjects(it, null, qupath.lib.objects.PathCellObject).each{ c-> | |
if (c.getLevel() == 3){ | |
cellName = c.getPathClass().toString() | |
print cellName | |
c.setName(coreName+" - "+cellName) | |
} | |
} | |
} | |
*/ |
// label cells within an annotation within a TMA core by the TMA core, not the annotation. | |
// Remove one getParent if there is no tissue annotation. | |
// 0.1.2 and 0.2.0 | |
getDetectionObjects().each {detection -> detection.setName(detection.getParent().getParent().getName())}
fireHierarchyUpdate() |
//Sometimes you need to set the metadata for a group of images, like TIFF files. | |
//0.2.0 | |
//Other script is shorter! | |
import static qupath.lib.gui.scripting.QPEx.* | |
import qupath.lib.images.servers.ImageServerMetadata | |
def imageData = getCurrentImageData() | |
def server = imageData.getServer() | |
def oldMetadata = server.getMetadata() | |
def newMetadata = new ImageServerMetadata.Builder(oldMetadata) | |
.magnification(10.0) | |
.pixelSizeMicrons(1.25, 1.25) | |
.build() | |
imageData.updateServerMetadata(newMetadata) |
//https://forum.image.sc/t/script-for-sum-of-nucleaus-area-of-a-specific-annotation/36913/22 | |
// Choose the actual values, not always 0.5! | |
setPixelSizeMicrons(0.5, 0.5) |
//Nearest neighbor between full classes. 0.2.0. | |
//Essentially replaced by "Detect centroid distances 2D" command. | |
//Would need modifications for base classes. | |
//Note, summary measurements are by default turned off. Uncomment the bottom section. | |
//Reason: with 27 classes this leads to over 700 annotation level summary measurements, YMMV | |
imageData = getCurrentImageData() | |
server = imageData.getServer() | |
def metadata = getCurrentImageData().getServer().getOriginalMetadata() | |
def pixelSize = metadata.pixelCalibration.pixelWidth.value | |
maxDist = Math.sqrt(server.getHeight()*server.getHeight()+server.getWidth()*server.getWidth()) | |
classes = new ArrayList<>(getDetectionObjects().collect {it.getPathClass()?.getBaseClass()} as Set) | |
print "Classes found: " + classes.size() | |
cellsByClass = [] | |
classes.each{c-> | |
cellsByClass << getCellObjects().findAll{it.getPathClass() == c} | |
} | |
print "Beginning calculations: This can be slow for large data sets, wait for 'Done' message to prevent errors." | |
def near = 0.0 | |
for (i=0; i<classes.size(); i++){ | |
cellsByClass[i].each{c-> | |
nearest = [] | |
for (k=0; k<classes.size(); k++){ | |
near = maxDist | |
//cycle through all cells of k Class finding the min distance | |
cellsByClass[k].each{d-> | |
dist = Math.sqrt(( c.getNucleusROI().getCentroidX() - d.getNucleusROI().getCentroidX())*(c.getNucleusROI().getCentroidX() - d.getNucleusROI().getCentroidX())+( c.getNucleusROI().getCentroidY() - d.getNucleusROI().getCentroidY())*(c.getNucleusROI().getCentroidY() - d.getNucleusROI().getCentroidY())) | |
if (dist > 0){ | |
near = Math.min(near,dist) | |
} | |
} | |
c.getMeasurementList().putMeasurement("Nearest "+ classes[k].toString(), near*pixelSize) | |
} | |
} | |
} | |
//Make measurements for Annotations | |
//This generates a MASSIVE list if you have many classes. Not recommended for export if there are more than 3-4 classes. | |
/* | |
getAnnotationObjects().each{anno-> | |
//Swap the below "classList" with "baseClasses" to get distances between all base classes | |
classes.each{c-> | |
cellsOfOneType = anno.getChildObjects().findAll{it.getPathClass() == c} | |
if (cellsOfOneType.size()>0){ | |
classes.each{s-> | |
currentTotal = 0 | |
cellsOfOneType.each{ | |
currentTotal += measurement(it, "Nearest "+ s.toString()) | |
} | |
anno.getMeasurementList().putMeasurement("Mean distance in µm from "+s.toString()+" to nearest "+c.toString(),currentTotal/cellsOfOneType.size()) | |
} | |
}} | |
} | |
*/ | |
print "Done" |
//Calculate the mean OD values in the nucleus and cytoplasm for any number of sets of color vectors | |
//Intended for 0.1.2, there are easier ways to do this in 0.2.0 with the ability to choose Nucleus as the ROI for Add intensity features. | |
import qupath.lib.objects.* | |
//This function holds a list of color vectors and their Add Intensity Features command that will add the desired measurements | |
//to your cells. Make sure you name the stains (for example in the first example, Stain 1 is called "Blue") differently | |
//so that their Measurements will end up labeled differently. Notice that the Add Intensity Features command includes | |
//"Colorstain":true, etc. which needs to be true for the measurements you wish to add. | |
def addColors(){ | |
setColorDeconvolutionStains('{"Name" : "DAB Yellow", "Stain 1" : "Blue", "Values 1" : "0.56477 0.65032 0.50806 ", "Stain 2" : "Yellow", "Values 2" : "0.0091 0.01316 0.99987 ", "Background" : " 255 255 255 "}'); | |
runPlugin('qupath.lib.algorithms.IntensityFeaturesPlugin', '{"pixelSizeMicrons": 0.25, "region": "ROI", "tileSizeMicrons": 25.0, "colorOD": true, "colorStain1": true, "colorStain2": true, "colorStain3": false, "colorRed": false, "colorGreen": false, "colorBlue": false, "colorHue": false, "colorSaturation": false, "colorBrightness": false, "doMean": true, "doStdDev": false, "doMinMax": false, "doMedian": false, "doHaralick": false, "haralickDistance": 1, "haralickBins": 32}'); | |
setColorDeconvolutionStains('{"Name" : "Background1", "Stain 1" : "Blue Background1", "Values 1" : "0.56195 0.77393 0.29197 ", "Stain 2" : "Beige Background1", "Values 2" : "0.34398 0.59797 0.72396 ", "Background" : " 255 255 255 "}'); | |
runPlugin('qupath.lib.algorithms.IntensityFeaturesPlugin', '{"pixelSizeMicrons": 0.25, "region": "ROI", "tileSizeMicrons": 25.0, "colorOD": false, "colorStain1": true, "colorStain2": true, "colorStain3": false, "colorRed": false, "colorGreen": false, "colorBlue": false, "colorHue": false, "colorSaturation": false, "colorBrightness": false, "doMean": true, "doStdDev": false, "doMinMax": false, "doMedian": false, "doHaralick": false, "haralickDistance": 1, "haralickBins": 32}'); | |
} | |
//The only thing beyond this point that should need to be modified is the removalList command at the end, which you can disable | |
//if you wish to keep whole cell measurements | |
// Get cells & create temporary nucleus objects - storing link to cell in a map | |
def cells = getCellObjects() | |
def map = [:] | |
for (cell in cells) { | |
def detection = new PathDetectionObject(cell.getNucleusROI()) | |
map[detection] = cell | |
} | |
// Get the nuclei as a list | |
def nuclei = map.keySet() as List | |
// and then select the nuclei | |
getCurrentHierarchy().getSelectionModel().setSelectedObjects(nuclei, null) | |
// Add as many sets of color deconvolution stains and Intensity features plugins as you want here | |
//This section ONLY adds measurements to the temporary nucleus objects, not the cell | |
addColors() | |
//etc etc. make sure each set has different names for the stains or else they will overwrite | |
// Don't need selection now | |
clearSelectedObjects() | |
// Can update measurements generated for the nucleus to the parent cell's measurement list | |
for (nucleus in nuclei) { | |
def cell = map[nucleus] | |
def cellMeasurements = cell.getMeasurementList() | |
for (key in nucleus.getMeasurementList().getMeasurementNames()) { | |
double value = nucleus.getMeasurementList().getMeasurementValue(key) | |
def listOfStrings = key.tokenize(':') | |
def baseValueName = listOfStrings[-2]+listOfStrings[-1] | |
nuclearName = "Nuclear" + baseValueName | |
cellMeasurements.putMeasurement(nuclearName, value) | |
} | |
cellMeasurements.closeList() | |
} | |
//I want to remove the original whole cell measurements which contain the mu symbol | |
// Not yet sure I will find the whole cell useful so not adding it back in yet. | |
def removalList = [] | |
//Create whole cell measurements for all of the above stains | |
selectDetections() | |
addColors() | |
//Create cytoplasmic measurements by subtracting the nuclear measurements from the whole cell, based total intensity (mean value*area) | |
for (cell in cells) { | |
//A mess of things I could probably call within functions | |
def cellMeasurements = cell.getMeasurementList() | |
double cellArea = cell.getMeasurementList().getMeasurementValue("Cell: Area") | |
double nuclearArea = cell.getMeasurementList().getMeasurementValue("Nucleus: Area") | |
double cytoplasmicArea = cellArea-nuclearArea | |
for (key in cell.getMeasurementList().getMeasurementNames()) { | |
//check if the value is one of the added intensity measurements | |
if (key.contains("per pixel")){ | |
//check if we already have this value in the list. | |
//probably an easier way to do this outside of every cycle of the for loop | |
if (!removalList.contains(key)) removalList<<key | |
double value = cell.getMeasurementList().getMeasurementValue(key) | |
//calculate the sum of the OD measurements | |
cellOD = value * cellArea | |
//break each measurement into component parts, then take the last two | |
// which will usually contain the color vector and "mean" | |
def listOfStrings = key.tokenize(':') | |
def baseValueName = listOfStrings[-2]+listOfStrings[-1] | |
//access the nuclear value version of the base name, and use it and the whole cell value to
//calculate the rough cytoplasmic value
def cytoplasmicKey = "Cytoplasmic" + baseValueName
def nuclearKey = "Nuclear" + baseValueName
def nuclearOD = nuclearArea * cell.getMeasurementList().getMeasurementValue(nuclearKey) | |
def cytoplasmicValue = (cellOD - nuclearOD)/cytoplasmicArea | |
cellMeasurements.putMeasurement(cytoplasmicKey, cytoplasmicValue) | |
} | |
} | |
cellMeasurements.closeList() | |
} | |
removalList.each {println(it)} | |
//comment out this line if you want the whole cell measurements. | |
removalList.each {removeMeasurements(qupath.lib.objects.PathCellObject, it)} | |
fireHierarchyUpdate() | |
println "Done!" |
//0.1.2 | |
//Overall purpose: Groups of points are a single point object, and are not recorded as measurements within annotation objects. | |
//This script takes a group of created points, and counts which are within certain annotation regions. | |
//https://forum.image.sc/t/manual-annotation-and-measurements/25051/5?u=research_associate | |
//Main script start | |
//Assumes Tumor and Peri-tumor regions have been created and classified. | |
//Assumes Nerve Cell objects per area have been created | |
//Assumes no unclassified annotations prior to creating script | |
pixelSize = getCurrentImageData().getServer().getPixelHeightMicrons() | |
stroma = getAnnotationObjects().findAll{it.getPathClass() == getPathClass("Stroma") && it.getROI().isArea()} | |
totalArea = 0 | |
stroma.each{ | |
totalArea += it.getROI().getArea() | |
} | |
totalArea = totalArea*pixelSize*pixelSize | |
println("total stroma "+totalArea) | |
periTumorArea = 0 | |
periTumor = getAnnotationObjects().findAll{it.getPathClass() == getPathClass("periTumor")&& it.getROI().isArea()} | |
periTumor.each{ | |
periTumorArea += it.getROI().getArea() | |
} | |
periTumorArea = periTumorArea*pixelSize*pixelSize | |
println("peritumor area "+periTumorArea) | |
tumorArea = 0 | |
tumor = getAnnotationObjects().findAll{it.getPathClass() == getPathClass("Tumor")&& it.getROI().isArea()} | |
tumor.each{ | |
tumorArea += it.getROI().getArea() | |
} | |
tumorArea = tumorArea*pixelSize*pixelSize | |
println("tumor area "+tumorArea) | |
totalPeriTumorArea = periTumorArea - tumorArea | |
println("adjusted peritumor area "+totalPeriTumorArea) | |
totalStromalArea = totalArea - periTumorArea | |
println("adjusted stroma area"+ totalStromalArea) | |
points = getAnnotationObjects().findAll{it.isPoint() } | |
createSelectAllObject(true); | |
resultsSummary = getAnnotationObjects().findAll{it.getPathClass() == null} | |
resultsSummary[0].setPathClass(getPathClass("Results")) | |
resultsSummary[0].getMeasurementList().putMeasurement("Stroma Area um^2", totalStromalArea) | |
resultsSummary[0].getMeasurementList().putMeasurement("Tumor Area um^2", tumorArea) | |
resultsSummary[0].getMeasurementList().putMeasurement("Peri-Tumor Area um^2",totalPeriTumorArea) | |
tumorPoints = points.findAll{it.getPathClass() == getPathClass("Tumor")} | |
totalTumorPoints = 0 | |
tumorPoints.each{totalTumorPoints += it.getROI().getPointList().size()} | |
println("tumor nerves"+totalTumorPoints) | |
stromaPoints = points.findAll{it.getPathClass() == getPathClass("Stroma")} | |
totalStromaPoints = 0 | |
stromaPoints.each{totalStromaPoints += it.getROI().getPointList().size()} | |
println("stroma nerves"+totalStromaPoints) | |
periTumorPoints = points.findAll{it.getPathClass() == getPathClass("periTumor")} | |
totalPeriTumorPoints = 0 | |
periTumorPoints.each{totalPeriTumorPoints += it.getROI().getPointList().size()} | |
println("peritumor nerves"+totalPeriTumorPoints) | |
resultsSummary[0].getMeasurementList().putMeasurement("Stroma Nerves per mm^2",1000000*totalStromaPoints/totalStromalArea) | |
resultsSummary[0].getMeasurementList().putMeasurement("Tumor Nerves per mm^2",1000000*totalTumorPoints/tumorArea) | |
resultsSummary[0].getMeasurementList().putMeasurement("Peri-Tumor Nerves per mm^2",1000000*totalPeriTumorPoints/totalPeriTumorArea) | |
getAnnotationObjects().each{it.setLocked(true)} | |
print "Done!" |
//Calculate the Rsquared value to look for linear relationships between two measurements. | |
//See complex scripts for a GUI and plots | |
//0.1.2 and 0.2.0 | |
//Use the findAll statement to select specific classes of cells | |
//it.getPathClass() == getPathClass("Tumor") | |
cells = getCellObjects().findAll{it} | |
def points = new double[cells.size()][2]
for(i=0; i < cells.size(); i++){
points[i][0] = measurement(cells[i], "Nucleus: Area"); | |
points[i][1] = measurement(cells[i], "Nucleus: Perimeter") | |
} | |
line = bestFit(points) | |
//bestFit snagged from | |
//https://blog.kenweiner.com/2008/12/groovy-best-fit-line.html | |
def bestFit(pts) { | |
// Find sums of x, y, xy, x^2 | |
n = pts.size() | |
xSum = pts.collect() {p -> p[0]}.sum() | |
ySum = pts.collect() {p -> p[1]}.sum() | |
xySum = pts.collect() {p -> p[0]*p[1]}.sum() | |
xSqSum = pts.collect() {p -> p[0]*p[0]}.sum() | |
// Find m and b such that y = mx + b | |
// m is the slope of the line and b is the y-intercept | |
m = (n*xySum - xSum*ySum) / (n*xSqSum - xSum*xSum) | |
b = (ySum - m*xSum) / n | |
// Find start and end points based on the left-most and right-most points | |
x1 = pts.collect() {p -> p[0]}.min() | |
y1 = m*x1 + b | |
x2 = pts.collect() {p -> p[0]}.max() | |
y2 = m*x2 + b | |
[[x1, y1], [x2, y2]] | |
println("slope :"+m+" intercept :"+b) | |
line = [m,b,ySum] | |
return (line) | |
} | |
meanY = line[2]/points.size() | |
pointError = [] | |
lineError = [] | |
for (i=0; i<cells.size(); i++){
pointError << (points[i][1]-meanY)*(points[i][1]-meanY) | |
lineError << (line[0]*points[i][0]+line[1] - meanY)*(line[0]*points[i][0]+line[1] - meanY) | |
} | |
println("R^2 = "+ lineError.sum()/pointError.sum()) | |
//Checks for all detections within a given annotation, DOES NOT EXCLUDE DETECTIONS WITHIN SUB-ANNOTATIONS. | |
//That last bit should make it compatible with trained classifiers. | |
//0.1.2 | |
import qupath.lib.objects.PathDetectionObject | |
def imageData = getCurrentImageData() | |
def server = imageData.getServer() | |
def pixelSize = server.getPixelHeightMicrons() | |
Set classList = [] | |
for (object in getAllObjects().findAll{it.isDetection() /*|| it.isAnnotation()*/}) { | |
classList << object.getPathClass() | |
} | |
println(classList) | |
hierarchy = getCurrentHierarchy() | |
for (annotation in getAnnotationObjects()){ | |
for (aClass in classList){ | |
if (aClass){ | |
def tiles = hierarchy.getDescendantObjects(annotation,null, PathDetectionObject).findAll{it.getPathClass() == aClass} | |
double totalArea = 0 | |
for (def tile in tiles){ | |
totalArea += tile.getROI().getArea() | |
} | |
annotation.getMeasurementList().putMeasurement(aClass.getName()+" area px", totalArea) | |
annotation.getMeasurementList().putMeasurement(aClass.getName()+" area um^2", totalArea*pixelSize*pixelSize) | |
def annotationArea = annotation.getROI().getArea() | |
annotation.getMeasurementList().putMeasurement(aClass.getName()+" area %", totalArea/annotationArea*100) | |
} | |
} | |
} | |
println("done") |
// Save the total value of your subcellular detection intensities to the cell measurement list so that it may be exported | |
// with the cell, or used for classification | |
//0.1.2 and 0.2.0 | |
// This value could then be divided by the total area of subcellular detection (Num spots, if Expected spot size is left as 1) | |
// for the mean intensity | |
// Create the name of the new measurement, in this case Channel 3 of a fluorescent image. | |
// ONLY the "Channel 3" should change to the name of the stain you are measuring, for example "DAB" in a brightfield image | |
def subcellularDetectionChannel = "Subcellular cluster: Channel 3: " | |
def newKey = subcellularDetectionChannel+"Mean Intensity" | |
//This step ensures that there is at least a measurement value of 0 in each cell | |
for (def cell : getCellObjects()) { | |
def ml = cell.getMeasurementList() | |
ml.putMeasurement(newKey, 0) | |
} | |
//Create a list of all subcellular objects | |
def subCells = getObjects({p -> p.class == qupath.imagej.detect.cells.SubcellularDetection.SubcellularObject.class}) | |
// Loop through all subcellular detections | |
for (c in subCells) { | |
// Find the containing cell | |
def cell = c.getParent() | |
def ml = cell.getMeasurementList() | |
double area = c.getMeasurementList().getMeasurementValue( subcellularDetectionChannel+"Area") | |
double intensity = c.getMeasurementList().getMeasurementValue( subcellularDetectionChannel+"Mean channel intensity") | |
//calculate the total intensity of stain in this subcellular object, and add it to the total | |
double stain = area*intensity | |
double x = cell.getMeasurementList().getMeasurementValue(newKey); | |
x = x+stain | |
ml.putMeasurement(newKey, x) | |
} | |
println("Total subcellular stain intensity added to cell measurement list as " + newKey) |