Agreement calculation
This section of the documentation covers how RedBrick AI calculates inter-annotator agreement between two users.
For two sets of labels, annotation instances are first matched up by category. Within the same category, instances are paired by selecting the pairs that maximize the overall agreement score. For two instances of the same category, RedBrick AI uses the similarity functions described below.
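Selecting the pairs that maximize the overall agreement score is an assignment problem. The following is a minimal sketch using SciPy's Hungarian-algorithm solver; the `similarity` callback and the choice of algorithm are illustrative assumptions, not RedBrick AI's confirmed implementation.

```python
# Sketch: pairing instances of one category so that total agreement is maximized.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_instances(instances_a, instances_b, similarity):
    """Return index pairs (i, j) that maximize the summed similarity score."""
    scores = np.array([[similarity(a, b) for b in instances_b] for a in instances_a])
    # linear_sum_assignment minimizes cost, so negate the scores to maximize similarity.
    rows, cols = linear_sum_assignment(-scores)
    return list(zip(rows, cols)), scores[rows, cols]
```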
IoU
RedBrick AI uses IoU (Intersection over Union) for region-based annotation types such as segmentations, bounding boxes, and polygons. For two annotations A and B, IoU is defined by:

$$\mathrm{IoU}(A, B) = \frac{|A \cap B|}{|A \cup B|}$$
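As an illustration, here is a minimal IoU sketch for two binary segmentation masks; representing annotations as boolean NumPy arrays is an assumption made for this example.

```python
# Sketch: IoU for two binary segmentation masks of the same shape.
import numpy as np

def iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection over Union of two boolean masks."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    # Two empty annotations are treated as fully agreeing to avoid dividing by zero.
    return float(intersection / union) if union else 1.0
```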
Landmarks
For landmarks/keypoints, RedBrick AI uses a normalized Root Mean Squared Error (RMSE) to compute similarity, where similarity is $1 - \mathrm{RMSE}$:

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(a_i - b_i\right)^2}$$

Where $n$ is the number of components of the point (2 for 2D, 3 for 3D), and $a_i$, $b_i$ are the components of the two points, normalized by the width, height, and depth of the image.
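The sketch below illustrates this computation; the point and image-dimension tuples are assumptions about the input representation.

```python
# Sketch: landmark similarity as 1 - normalized RMSE.
import math

def landmark_similarity(point_a, point_b, image_dims):
    """point_a, point_b: (x, y) or (x, y, z); image_dims: (width, height[, depth])."""
    normalized_a = [c / d for c, d in zip(point_a, image_dims)]
    normalized_b = [c / d for c, d in zip(point_b, image_dims)]
    n = len(normalized_a)
    rmse = math.sqrt(sum((a - b) ** 2 for a, b in zip(normalized_a, normalized_b)) / n)
    return 1.0 - rmse
```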
Length and angle measurements
Length measurements are compared by comparing the two sets of endpoints that define each length line, using the landmark technique covered above.
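The sketch below shows one way this comparison could work; averaging the two endpoint similarities and trying both endpoint orderings are assumptions made purely for illustration.

```python
# Sketch: comparing two length measurements via their endpoints.
import math

def point_similarity(p, q, image_dims):
    """1 - normalized RMSE between two points (same idea as the landmark case)."""
    diffs = [(a / d) - (b / d) for a, b, d in zip(p, q, image_dims)]
    return 1.0 - math.sqrt(sum(x * x for x in diffs) / len(diffs))

def length_similarity(line_a, line_b, image_dims):
    """line_a, line_b: (start_point, end_point) tuples defining each length line."""
    def paired(a, b):
        return sum(point_similarity(p, q, image_dims) for p, q in zip(a, b)) / 2
    # Endpoints have no canonical order, so take the better of the two pairings.
    return max(paired(line_a, line_b), paired(line_a, tuple(reversed(line_b))))
```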
For angle measurements, the vectors forming the arms of each angle measurement are compared. The two angles between the corresponding pairs of measurement arms are computed, and the similarity score is then defined by:

$$\text{similarity} = 1 - \frac{\theta_1 + \theta_2}{2 \times 180^{\circ}}$$

Where $\theta_1$ and $\theta_2$ are the angles between the two sets of measurement arms.
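A minimal sketch follows; representing each measurement as a vertex plus two arm endpoints, pairing the arms by order, and normalizing by 180 degrees are assumptions made for illustration.

```python
# Sketch: angle-measurement similarity from the angles between corresponding arm vectors.
import math

def _angle_between(u, v):
    """Angle in degrees between two 2D/3D vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def angle_similarity(measure_a, measure_b):
    """measure_*: (vertex, arm_end_1, arm_end_2) tuples of points."""
    def arm_vectors(measure):
        vertex, end1, end2 = measure
        return ([e - v for e, v in zip(end1, vertex)],
                [e - v for e, v in zip(end2, vertex)])
    arms_a, arms_b = arm_vectors(measure_a), arm_vectors(measure_b)
    theta_1 = _angle_between(arms_a[0], arms_b[0])
    theta_2 = _angle_between(arms_a[1], arms_b[1])
    return 1.0 - (theta_1 + theta_2) / (2 * 180.0)
```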
Classifications
For classification labels, the agreement is binary: if the chosen category and attributes match, the consensus score will be 100%; otherwise, it will be 0%.
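The dict-shaped labels in the sketch below are an assumption about the input representation, made only to illustrate the binary rule.

```python
# Sketch: binary agreement for classification labels.
def classification_agreement(label_a: dict, label_b: dict) -> float:
    """Return 1.0 when both category and attributes match exactly, else 0.0."""
    same_category = label_a.get("category") == label_b.get("category")
    same_attributes = label_a.get("attributes") == label_b.get("attributes")
    return 1.0 if (same_category and same_attributes) else 0.0
```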
To generate a single score between two sets of labels, a series of averages is computed.
Scores of matching annotation instances of the same category are averaged to generate a single score per category.
Category scores are then averaged within each label type to generate a single score per label type.
Label-type scores are then averaged to generate the single score between the two sets of labels.
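The sketch below mirrors this averaging hierarchy (instance scores, then category scores, then label-type scores, then one overall score); the nested-dict input shape is an assumption made for illustration.

```python
# Sketch: hierarchical averaging of agreement scores.
from statistics import mean

def overall_agreement(instance_scores):
    """instance_scores: {label_type: {category: [instance scores]}}."""
    label_type_scores = []
    for categories in instance_scores.values():
        category_scores = [mean(scores) for scores in categories.values()]
        label_type_scores.append(mean(category_scores))
    return mean(label_type_scores)

# Example usage with made-up scores:
# overall_agreement({"polygon": {"tumor": [0.9, 0.7], "organ": [0.8]},
#                    "classification": {"study_quality": [1.0]}})
```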
For videos, scores are calculated per frame and averaged to generate a single score per sequence.
For multi-series studies, per-volume scores are averaged to generate a single score per study.