On RedBrick AI, the term Consensus refers to the two types of multi-reader, single-output Project flows available on the platform.

Both flows provide a quantitative measure of annotation quality (by means of an Inter-Annotator Agreement Score), as well as the opportunity to create higher-quality annotations by combining the opinions of multiple annotators.

What are Consensus Projects?

In Consensus Projects, multiple annotators are required to label each Task in the Label Stage.

By default, all annotation is done “in the blind”: each annotator can only see the images in the Task, not the annotations created by their peers.

Comparison between standard flow and Consensus flow with 3 labelers

Once all the annotators have completed the Task, RedBrick AI will calculate an Inter-Annotator Agreement Score between the annotations. Please reference the following documentation for more information on how we calculate these scores.

The Inter-Annotator Agreement Score is a quantitative measure of quality that can help you select the best set of annotations created by your annotators. It also gives reviewers the ability to arbitrate between the opinions of multiple annotators before generating a single, high-quality Ground Truth.

Creating Consensus Projects

For all Projects on RedBrick AI, the decision to enable Multiple Labeling must be made at Project creation.

To create a Consensus Project (as opposed to a Task Duplication Project), take the following steps:

  1. Enable the Multiple Labeling toggle in Step (3) of Project creation;
  2. Select Single output;
  3. Select a Consensus subtype: either Best Annotator or Manual Merge;
  4. Select the minimum number of labelers required to annotate each Task;
  5. Proceed with Project creation as needed;

Project creation with Consensus

Types of Consensus Projects

As of version 1.2.0, RedBrick AI supports two types of Consensus Projects:

  • Best Annotator Projects, which function identically to a pre-1.2.0 Consensus Project;
  • Manual Merge Projects, which allow you to build a Ground Truth label set by adding individual annotations and classifications to it;

While the end result of each Project type is the same, i.e. a single set of curated Ground Truth labels generated from multiple readers, Best Annotator and Manual Merge Projects approach the process of building Ground Truth labels in fundamentally different ways.


Best Annotator Projects

For RedBrick veterans, Best Annotator Projects function identically to pre-1.2.0 Consensus Projects.

In other words, the annotation experience for labelers is unchanged, but reviewers will be presented with a flow that slightly differs from that of the standard RedBrick Review Stage.

Best Annotator Review Stage

In the Consensus_Best Review Stage, the annotation sets of all labelers will be displayed in the Annotation Tool.

The reviewer’s primary task is to analyze the multiple sets of annotations generated by the labelers and produce a single set that will be saved and pushed to the next Stage. In RedBrick AI’s Editor, this single set of annotations is referred to as the Best Version.

Note: by default, RedBrick AI selects the set of annotations with the highest Inter-Annotator Agreement Score as the Best Version.
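To illustrate that default, the snippet below picks a Best Version from per-labeler scores, assuming (purely for illustration) that each labeler has a score reflecting their agreement with the other labelers; the names and values are hypothetical.

```python
# Hypothetical per-labeler agreement scores; names and values are illustrative only.
labeler_scores = {"annotator_a": 0.82, "annotator_b": 0.74, "annotator_c": 0.79}

# The default Best Version corresponds to the highest-scoring labeler.
default_best_version = max(labeler_scores, key=labeler_scores.get)
print(default_best_version)  # -> annotator_a
```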

Reviewers can view, show/hide, and otherwise inspect every set of annotations in a Task. This allows the reviewer to analyze the work done by the labelers and select the set of annotations they consider to be of the highest quality.

The users who annotated the Task are listed in the lefthand toolbar. By default, annotations are color-coded by user, but the Consensus Reviewer can also control the visibility of each Object Label Entity individually.

The lefthand toolbar of Best Annotator Review


Accepting a labeler’s annotations without edits

If a reviewer is satisfied with an existing annotation set, they can push the label set to the Ground Truth Stage by:

  1. Clicking on the user’s name in the lefthand toolbar;
  2. Clicking on Mark as best. This will designate the user’s labels as Best Version;
  3. Accepting the Task by clicking on Accept in the top-right corner;

Clicking into a labeler's annotations


Accepting a labeler’s annotations with edits

If a reviewer wishes to make changes to an existing set of annotations, they can do so by:

  1. Clicking on the user’s name in the lefthand toolbar;
  2. Clicking on Copy & edit. This will duplicate the user’s annotations, turning them into a new, working Best Version that the reviewer can modify as necessary;
  3. Once the reviewer is satisfied with their edits, they can accept the Task by clicking on Accept in the top-right corner;

Creating a new Best Version from scratch

If there is no clear Best Annotator, the reviewer can create their own Best Version by clicking on Create a new version in the lefthand toolbar.

From there, the reviewer can annotate and accept or reject the Task as necessary.

Creating a Best Version


Best Annotator Video Walkthrough

The video below contains a brief walkthrough of how you can use Consensus in both your Project and the Editor.

A video walkthrough of Best Annotator Consensus Projects


Manual Merge Projects

Manual Merge Projects were launched with RedBrick AI v1.2.0 and allow for a much more granular approach to building a single Ground Truth label set from the work of many annotators.

Manual Merge Review Stage

In Best Annotator Projects, Consensus_Best Review presumes that the reviewer is starting their work from one “best” set of labels.

However, the Consensus_Merge review flow adopts a more “pick and choose” approach in which the reviewer builds their Ground Truth labels and classifications one at a time.

Therefore, when a Consensus_Merge reviewer opens a Task, they are presented with an empty Ground Truth label set:

Consensus_Merge Review

  1. All labelers’ annotations are visible by default, but reviewers can show/hide annotations as desired;
  2. The Ground Truth label set, which must be built by the Consensus_Merge reviewer;
  3. All annotations are visible to the reviewer, differentiated by color;

Adding Annotations to Ground Truth

To add annotations to the Ground Truth label set, navigate to the lefthand sidebar and click on the Copy to Ground Truth button next to either the individual annotation or a user’s annotation set (see below).

Before - adding an annotation to Ground Truth

Clicking on Copy to Ground Truth will duplicate the annotation(s) in question to your Ground Truth Label Set, creating a new annotation that the reviewer can then modify as necessary.

After - adding an annotation to Ground Truth


Adding Classifications to Ground Truth

To add Classifications to your Ground Truth set, select the type of Classification you’d like to review and click on it in the lefthand toolbar. This will cause a window to appear in the center of the screen that allows you to either:

  1. Fill in your own Classifications based on the input of the annotators;
  2. Copy and paste an annotator’s Classifications to the Ground Truth set;

Adding Classifications to Ground Truth

When you are finished, click outside the box to return to the Editor.


Rejecting a Task in Consensus Review

For both Best Annotator and Manual Merge Projects, rejecting a Task will cause the Task to be sent back to the original annotators along with the labels they generated.

All labelers will be required to re-annotate the Task and finalize it in order for the Task to be returned to the corresponding Review Stage.

Note: Send to Stage operations are disabled for all Consensus Projects.


Assigning Tasks to Multiple Users

RedBrick AI has an Automatic Assignment Protocol that will automatically assign multiple users to a Task. This protocol is enabled by default on Project creation and can be configured either when creating your Project or any time afterward in your Project’s General Settings.

As annotators request Tasks by clicking on the Label button in the top right of the Dashboard, RedBrick AI will automatically assign available Tasks, prioritizing those that are already in progress or assigned to other users.

Alternatively, you can manually override any Task assignment on the Data page. When Consensus is enabled, the Assign dropdown will allow you to select multiple users.

Task assignment in a Consensus Project

Optionally, you can manually assign any number of labelers to a Consensus Task, including more than the required minimum. However, the Automatic Assignment Protocol will only assign up to the minimum value.
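As a rough illustration of this prioritization, the sketch below shows how a Task might be picked for a labeler requesting work. The task record, its fields, and the selection logic are assumptions made for illustration and do not reflect RedBrick AI's internal implementation.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ConsensusTask:
    """Hypothetical task record for illustration only."""
    task_id: str
    min_labelers: int  # minimum labelers required by the Project
    assignees: List[str] = field(default_factory=list)  # users already assigned

def pick_task_for(user: str, tasks: List[ConsensusTask]) -> Optional[ConsensusTask]:
    """Prefer Tasks that are already partially assigned, and never assign
    beyond the Project's minimum labeler count (illustrative logic)."""
    candidates = [
        t for t in tasks
        if user not in t.assignees and len(t.assignees) < t.min_labelers
    ]
    # Tasks that are already in progress come first, so each Task reaches its
    # required labeler count as quickly as possible.
    candidates.sort(key=lambda t: len(t.assignees), reverse=True)
    return candidates[0] if candidates else None
```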


Inter-Annotator Agreement

Once all assigned annotators have completed a Task, RedBrick AI will generate an Inter-Annotator Agreement Score, which is calculated by comparing each labeler’s annotations with those of every other labeler and averaging the pairs of scores.

|        | User 1       | User 2       | User 3       |
| ------ | ------------ | ------------ | ------------ |
| User 1 |              | Score(U1,U2) | Score(U1,U3) |
| User 2 | Score(U2,U1) |              | Score(U2,U3) |
| User 3 | Score(U3,U1) | Score(U3,U2) |              |

Agreement = Average(Score(U1,U2), Score(U1,U3), Score(U2,U3))
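The sketch below illustrates this pairwise-averaging step. The helper is hypothetical: the pairwise comparison function is supplied by the caller and stands in for whichever similarity metric applies to your data (see the note on comparison functions below).

```python
from itertools import combinations
from typing import Any, Callable, Dict

def inter_annotator_agreement(
    annotations: Dict[str, Any],
    pairwise_score: Callable[[Any, Any], float],
) -> float:
    """Average the pairwise similarity over every unique pair of labelers.

    `pairwise_score` is a stand-in for whichever comparison function applies
    to your annotation type; at least two labelers are assumed.
    """
    pairs = combinations(sorted(annotations), 2)
    scores = [pairwise_score(annotations[a], annotations[b]) for a, b in pairs]
    return sum(scores) / len(scores)

# Worked example with three labelers and illustrative pairwise scores:
#   Score(U1,U2) = 0.80, Score(U1,U3) = 0.70, Score(U2,U3) = 0.90
#   Agreement    = (0.80 + 0.70 + 0.90) / 3 = 0.80
```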

The type of comparison function used to calculate the Score depends on the type of data and annotations you and your team are working with. Please reference the following documentation to read more about how RedBrick calculates Inter-Annotator Agreement.

Agreement Calculation

Inter-Annotator Agreement for Tasks queued in Review


Exporting Consensus Annotations

If a Task has gone through Consensus, you will have access to all versions of the annotations created by each user, as well as additional metadata such as the annotation similarity scores. You can export the data by running the following CLI command inside your project directory:

redbrick export

Please view the format reference for an overview of the exported format.

If you want to export only a single version of the annotations (i.e. the labeler with the best annotations or the base annotations qualified in Review), you can run the following command:

redbrick export --no-consensus
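As a rough sketch of post-processing such an export, the snippet below reads an exported JSON file and prints a per-Task agreement value. The file name (tasks.json) and field names (name, consensusScore) are assumptions made for illustration; see the format reference above for the actual schema.

```python
import json
from pathlib import Path

# Illustrative only: "tasks.json", "name", and "consensusScore" are assumed
# names, not the documented export schema; consult the format reference.
tasks = json.loads(Path("tasks.json").read_text())

for task in tasks:
    print(task.get("name"), task.get("consensusScore"))
```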