Consensus
On RedBrick AI, the term Consensus represents the two types of multi-reader, single output Project flows available on the platform.
Both of these flows provide you with both a quantitative measure of annotation quality (by means of an Inter-Annotator Agreement Score) and the opportunity to create higher-quality annotations by combining the opinions of multiple annotators.
In Consensus Projects, multiple annotators are required to label each Task in the Label Stage.
By default, all annotation is done "in the blind", so each annotator will only be able to see the images in the Task, and not the annotations done by their peers.
Once all the annotators have completed the Task, RedBrick AI will calculate an Inter-Annotator Agreement Score between the annotations. Please reference the following documentation for more information on how we calculate these scores.
The Inter-Annotator Agreement Score is a quantitative measure of quality that can help you select the best set of annotations created by your annotators. It also gives reviewers the ability to arbitrate between the opinions of multiple annotators before generating a single, high-quality Ground Truth.
For all Projects on RedBrick AI, the decision to enable Multiple Labeling must be made at Project creation.
For Consensus Projects (not Task Duplication Projects), you can take the following steps:
Enable the Multiple Labeling toggle in Step (3) of Project creation;
Select Single output;
Select a Consensus subtype: either Best Annotator or Manual Merge;
Select a minimum number of labelers that will be required to annotate each Task;
Proceed with Project creation as needed;
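For reference, these choices can be summarized as a small configuration sketch. This is purely illustrative: the options are set in the project creation UI, and the field names below are hypothetical rather than an actual RedBrick SDK or API payload.

```python
# Hypothetical summary of the Consensus choices made during Project creation.
# These field names are illustrative only; they are not a RedBrick SDK/API payload.
consensus_settings = {
    "multiple_labeling": True,              # Step (3): Multiple Labeling toggle enabled
    "output": "single",                     # Single output -> Consensus Project
    "consensus_subtype": "best_annotator",  # or "manual_merge"
    "min_labelers": 3,                      # minimum annotators required per Task
}
```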
As of version 1.2.0, RedBrick AI supports two types of Consensus Projects:
Best Annotator Projects, which function identically to pre-1.2.0 Consensus Projects;
Manual Merge Projects, which allow you to build a ground truth label set by adding individual annotations and classifications to it;
While the end result of each Project type is the same, i.e. a single set of curated Ground Truth labels generated from multiple readers, Best Annotator and Manual Merge Projects approach the process of building Ground Truth labels in fundamentally different ways.
For RedBrick veterans, Best Annotator Projects function identically to pre-1.2.0 Consensus Projects.
In other words, the annotation experience for labelers is unchanged, but reviewers will be presented with a flow that slightly differs from that of the standard RedBrick Review Stage.
In the Consensus_Best Review Stage, the annotation sets of all labelers will be displayed in the Annotation Tool.
The reviewer’s primary task is to analyze the multiple sets of annotations generated by the labelers and produce a single set that will be saved and pushed to the next Stage. In RedBrick AI's Editor, this single set of annotations is referred to as the Best Version.
Note: by default, RedBrick AI selects the set of annotations with the highest Inter-Annotator Agreement Score as the Best Version.
Reviewers can view, show, and hide all sets of annotations in a Task. This allows the reviewer to analyze the work done by the labelers and select the set of annotations that they consider to be of the highest quality.
The users that annotated the Task will be listed in the lefthand toolbar. By default, annotations are color-coded by user, but the Consensus Reviewer can also control the visibility of each Object Label Entity individually.
If a reviewer is satisfied with an existing annotation set, they can push the label set to the Ground Truth Stage by:
Clicking on the user's name in the lefthand toolbar;
Clicking on Mark as best. This will designate the user's labels as Best Version;
Accepting the Task by clicking on Accept in the top-right corner;
If a reviewer wishes to make changes to an existing set of annotations, they can do so by:
Clicking on the user's name in the lefthand toolbar;
Clicking on Copy & edit. This will duplicate the user's annotations into a new, working Best Version that the reviewer can modify as necessary;
Once the reviewer is satisfied with their edits, they can accept the Task by clicking on Accept in the top righthand corner;
If there is no clear Best Annotator, the reviewer can create their own Best Version by clicking on Create a new version in the lefthand toolbar.
From there, the reviewer can annotate and accept or reject the Task as necessary.
The video below contains a brief walkthrough of how you can use Consensus in both your Project and the Editor.
Manual Merge Projects were launched with RedBrick v 1.2.0 and allow for a much more granular approach to building a single Ground Truth label set from the work of many annotators.
In Best Annotator Projects, Consensus_Best Review presumes that the reviewer is starting their work from one "best" set of labels.
However, the Consensus_Merge review flow adopts a more "pick and choose" approach in which the reviewer builds their Ground Truth labels and classifications one at a time.
Therefore, when a Consensus_Merge reviewer opens a Task, they are presented with an empty Ground Truth label set:
All labelers' annotations are visible by default, but reviewers can show/hide annotations as desired;
The Ground Truth label set, which must be built by the Consensus_Merge reviewer;
All annotations are visible to the reviewer, differentiated by color;
To add annotations to the Ground Truth label set, navigate to the lefthand sidebar and click on the Copy to Ground Truth button next to either the individual annotation or a user's annotation set (see below).
Clicking on Copy to Ground Truth will duplicate the annotation(s) in question to your Ground Truth Label Set, creating a new annotation that the reviewer can then modify as necessary.
To add Classifications to your Ground Truth set, select the type of Classification you'd like to review and click on it in the lefthand toolbar. This will cause a window to appear in the center of the screen that allows you to either:
Fill in your own Classifications based on the input of the annotators;
Copy and paste an annotator's Classifications to the Ground Truth set;
When you are finished, click outside the box to return to the Editor.
For both Best Annotator and Manual Merge Projects, rejecting a Task will cause the Task to be sent back to the original annotators along with the labels they generated.
All labelers will be required to re-annotate the Task and finalize it in order for the Task to be returned to the corresponding Review Stage.
Note: Send to Stage operations are disabled for all Consensus Projects.
RedBrick AI has an Automatic Assignment Protocol that will automatically assign multiple users to a Task. This protocol is enabled by default on Project creation and can be configured either when creating your Project or any time afterward in your Project's General Settings.
As annotators request Tasks by clicking on the Label button in the top right of the Dashboard, RedBrick AI will automatically assign available Tasks by prioritizing those that are already in progress or assigned to other users.
Alternatively, you can manually override any Task assignment on the Data page. When Consensus is enabled, the Assign dropdown will allow you to select multiple users.
Optionally, you can manually assign any number of labelers to a Consensus Task, including a number that is greater than the minimum number of required labelers. However, the Automatic Assignment Protocol will only assign up to the minimum value.
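The prioritization behavior described above can be sketched as follows. This is an illustrative model, not RedBrick AI's actual implementation, and the Task fields used here are hypothetical:

```python
from typing import Optional

def pick_task_for(user: str, tasks: list[dict], min_labelers: int) -> Optional[dict]:
    """Sketch of the Automatic Assignment Protocol: prefer Tasks that are
    already assigned to other users but still need more annotators before
    opening fresh, unassigned Tasks."""
    candidates = [
        t for t in tasks
        if user not in t["assignees"] and len(t["assignees"]) < min_labelers
    ]
    # Tasks with the most existing assignees (already in progress) come first.
    candidates.sort(key=lambda t: len(t["assignees"]), reverse=True)
    if not candidates:
        return None
    chosen = candidates[0]
    chosen["assignees"].append(user)
    return chosen

# Example: with a minimum of 2 labelers, the partially assigned Task is filled first.
tasks = [
    {"id": "task-1", "assignees": ["alice"]},
    {"id": "task-2", "assignees": []},
]
print(pick_task_for("bob", tasks, min_labelers=2)["id"])  # task-1
```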
Once all assigned annotators have completed a Task, RedBrick AI will generate an Inter-Annotator Agreement Score, which is calculated by comparing each labeler's annotations with those of every other labeler and averaging the resulting pairwise scores.
|        | User 1        | User 2        | User 3        |
| ------ | ------------- | ------------- | ------------- |
| User 1 |               | Score(U1, U2) | Score(U1, U3) |
| User 2 | Score(U2, U1) |               | Score(U2, U3) |
| User 3 | Score(U3, U1) | Score(U3, U2) |               |

Agreement = Average(Score(U1, U2), Score(U1, U3), Score(U2, U3)).
The type of comparison function used to calculate the Score depends on the type of data and annotations you and your team are working with. Please reference the following documentation to read more about how RedBrick calculates Inter-Annotator Agreement.
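As a worked example, the sketch below averages the unique pairwise scores into a single Agreement value. It also derives a per-user score, assuming that a user's individual Inter-Annotator Agreement Score is the average of their own pairwise comparisons (the value used to pre-select the Best Version); the comparison function itself is just a placeholder:

```python
from itertools import combinations
from statistics import mean

def score(a: str, b: str) -> float:
    """Placeholder for the pairwise comparison function (e.g. an overlap
    metric), which depends on the data and annotation types in use."""
    fixed = {
        frozenset({"U1", "U2"}): 0.91,
        frozenset({"U1", "U3"}): 0.84,
        frozenset({"U2", "U3"}): 0.88,
    }
    return fixed[frozenset({a, b})]

users = ["U1", "U2", "U3"]

# Overall agreement: average over every unique pair of annotators.
pairwise = [score(a, b) for a, b in combinations(users, 2)]
agreement = mean(pairwise)
print(f"Agreement = {agreement:.3f}")  # Average(0.91, 0.84, 0.88) ≈ 0.877

# Assumed per-user score: average of that user's pairwise comparisons.
per_user = {u: mean(score(u, v) for v in users if v != u) for u in users}
print(max(per_user, key=per_user.get))  # U2 has the highest score in this example
```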
If a Task has gone through Consensus, you will have access to all versions of the annotations done by all users, as well as additional metadata like the annotation similarity scores. You can export the data using the following CLI command inside your project directory:
Please view the format reference for an overview of the exported format.
If you want to export only a single version of the annotations (i.e. the labeler with the best annotations or the base annotations qualified in Review), you can run the following command:
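As a rough Python equivalent of the CLI exports mentioned above, the sketch below uses the redbrick Python SDK's get_project and export_tasks entry points; the exact parameters and the shape of the consensus metadata should be confirmed against the SDK reference for your version.

```python
import redbrick

# Placeholders: substitute your own organization ID, project ID, and API key.
project = redbrick.get_project(
    org_id="<org-id>",
    project_id="<project-id>",
    api_key="<api-key>",
)

# Exports the project's tasks; per the documentation above, Consensus Projects
# are expected to include every annotator's version plus agreement metadata.
tasks = list(project.export.export_tasks())
print(f"Exported {len(tasks)} tasks")
```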