Consensus

Consensus provides you with both a quantitative measure of annotation quality (by means of an Inter-Annotator Agreement Score) and the opportunity to create higher-quality annotations by combining the opinions of multiple annotators.

How Does Consensus Work?

With Consensus enabled, multiple annotators are required to label each Task in the Label Stage. Each annotator sees only an empty Task and cannot view the annotations created by the other annotators.

Once all the annotators have completed the Task, RedBrick AI will calculate an Inter-Annotator Agreement Score between the annotations. Please reference the following documentation for more information on how we calculate these scores.

The Inter-Annotator Agreement Score is a quantitative measure of quality that can help you select the best set of annotations created by your annotators. It also gives reviewers the ability to arbitrate between the opinions of multiple annotators before generating a single, high-quality Ground Truth.

Enabling Consensus

You can enable Consensus by navigating to Project Settings. Once enabled, you must specify the minimum number of labelers required to annotate each Task. If your project has a Review Stage, you can also enable auto-acceptance to automatically accept Tasks whose agreement scores are higher than a specified threshold.
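The snippet below is a minimal sketch of the routing rule described above, assuming an agreement score on a 0-100 scale; the constant name and threshold value are hypothetical, and the real values are configured in Project Settings.

# Illustrative sketch of Consensus routing; names and thresholds are assumptions.
AUTO_ACCEPT_THRESHOLD = 90  # agreement score (0-100) a Task must exceed to skip manual review

def route_task(agreement_score: float, has_review_stage: bool) -> str:
    """Decide where a completed Consensus Task goes next."""
    if has_review_stage and agreement_score > AUTO_ACCEPT_THRESHOLD:
        return "ground_truth"   # auto-accepted, bypasses the Review Stage
    if has_review_stage:
        return "review_stage"   # a reviewer arbitrates between the annotators
    return "ground_truth"       # no Review Stage: the best set is promoted directly

print(route_task(agreement_score=94.0, has_review_stage=True))  # ground_truth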

Assigning Tasks to Multiple Users

RedBrick AI has an automatic assignment protocol that assigns multiple users to each Task. As annotators request Tasks by clicking on the Label button in the top right of the Dashboard, RedBrick AI automatically assigns available Tasks, prioritizing those that are already in progress or assigned to other users.

Alternatively, you can manually override assignments for any Task on the Data page. When Consensus is enabled, the Assign dropdown allows you to select multiple users.

You can manually assign more than the required number of labelers. The automatic assignment protocol will only assign up to the required number, but you can manually assign as many users as you'd like.
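The sketch below illustrates the prioritization behavior described above; the data structures and field names are hypothetical and stand in for RedBrick AI's internal task queue.

from typing import Dict, List, Optional

def pick_task(tasks: List[Dict], requester: str, required_labelers: int) -> Optional[Dict]:
    """Return the Task to assign to `requester`, or None if nothing is available."""
    available = [
        t for t in tasks
        if requester not in t["assignees"] and len(t["assignees"]) < required_labelers
    ]
    if not available:
        return None
    # Prefer Tasks that other users are already working on over untouched ones.
    available.sort(key=lambda t: len(t["assignees"]), reverse=True)
    chosen = available[0]
    chosen["assignees"].append(requester)
    return chosen

tasks = [
    {"id": "task-1", "assignees": ["alice"]},
    {"id": "task-2", "assignees": []},
]
print(pick_task(tasks, "bob", required_labelers=3)["id"])  # task-1 (already in progress)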

Inter-Annotator Agreement

Once all assigned annotators have completed a Task, RedBrick AI will generate an Inter-Annotator Agreement Score, which is calculated by comparing each labeler's annotations with those of every other labeler and averaging the resulting pairwise scores.

          User 1           User 2           User 3
User 1    -                Score(U1,U2)     Score(U1,U3)
User 2    Score(U2,U1)     -                Score(U2,U3)
User 3    Score(U3,U1)     Score(U3,U2)     -

Agreement = Average(Score(U1,U2),Score(U1,U3),Score(U2,U3)).
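As a minimal sketch of the averaging step above, where pairwise_score stands in for whichever comparison function applies to your annotation type (see Agreement calculation below):

from itertools import combinations
from statistics import mean

def task_agreement(annotations_by_user: dict, pairwise_score) -> float:
    """Average the comparison score over every unique pair of annotators."""
    return mean(
        pairwise_score(annotations_by_user[a], annotations_by_user[b])
        for a, b in combinations(annotations_by_user, 2)
    )

# With three users this averages Score(U1,U2), Score(U1,U3), and Score(U2,U3).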

The type of comparison function used to calculate the Score depends on the type of data and annotations you and your team are working with. Please reference the following documentation to read more about how RedBrick AI calculates Inter-Annotator Agreement.

Agreement calculation

Review Stage Absent

If there is no Review Stage after the Label Stage, the set of annotations with the highest Agreement Score (with respect to other annotations) will be selected and stored in Ground Truth. This is the set of annotations that will be exported by default, but you can also export all versions of the annotations.
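As an illustration of this selection step, the sketch below scores each labeler's annotations against everyone else's and picks the labeler with the highest average agreement; the function names are hypothetical and reuse the generic pairwise_score comparison from the earlier sketch.

from statistics import mean

def best_labeler(annotations_by_user: dict, pairwise_score) -> str:
    """Return the user whose annotations agree most with all other users' annotations."""
    def average_agreement(user: str) -> float:
        others = [u for u in annotations_by_user if u != user]
        return mean(
            pairwise_score(annotations_by_user[user], annotations_by_user[other])
            for other in others
        )
    return max(annotations_by_user, key=average_agreement)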

Review Stage Present

When a Review Stage is present, all annotations will be displayed in the Editor. The list of all users who have annotated the Task is located in the right-hand Consensus Panel. By default, annotations are color-coded by user, but they can also be grouped by category.

Best Annotations and Super Truth

The reviewer’s primary task is to analyze the multiple sets of annotations generated by the labelers and produce a single set that will be saved and pushed to the next Stage. In RedBrick AI's Editor, this single set of annotations is referred to as the Best Annotations.

By default, RedBrick AI selects the set of annotations with the highest Inter-Annotator Agreement Score as the Best Annotations.

Reviewers can view and show or hide all sets of annotations in a Task. This allows the reviewer to analyze the work done by the labelers and select the set of annotations that they consider to be of the highest quality.

If a reviewer is satisfied with an existing annotation set, they can simply designate it as Best Annotations and accept the Task.

If a reviewer wishes to make changes to an existing set of annotations or start completely from scratch, they can either click on the Edit button under a user in the right-hand panel or click on Create New under Super Truth.

Doing so will create a new set of annotations known as a Super Truth and automatically designate that set as the Best Annotations. The reviewer can then annotate the Task as they see fit.

Only Super Truth Annotations can be edited!

All other annotations in the Review Stage are View Only.

Once a reviewer is satisfied with the current Best Annotations, they can accept the Task. This saves the Best Annotations and ascribes only that set to the Task; all other annotations are also saved and remain available on export. If the reviewer rejects the Task, all labelers will be required to re-annotate the Task.

The video below contains a brief walkthrough of how you can use Consensus in both your Project and the Editor.

Exporting Consensus Annotations

If a Task has gone through Consensus, you will have access to all versions of the annotations created by each user, along with additional metadata such as the annotation similarity scores. You can export the data using the following CLI command inside your project directory:

redbrick export 

Please view the format reference for an overview of the exported format.

If you want to export only a single version of the annotations (i.e., the annotations of the labeler with the best agreement score, or the Best Annotations finalized in the Review Stage), you can run the following command:

redbrick export --no-consensus
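As a rough sketch of working with a Consensus export, the snippet below reads an exported task file and prints per-task agreement information; the file name and field names ("tasks.json", "consensus", "score", "name") are assumptions, so consult the format reference for the actual structure.

import json

# Hypothetical file and field names; check the format reference for the real schema.
with open("tasks.json") as f:
    tasks = json.load(f)

for task in tasks:
    entries = task.get("consensus", [])  # assumed: one entry per annotator
    scores = [e["score"] for e in entries if e.get("score") is not None]
    if scores:
        print(task.get("name"), "average agreement:", sum(scores) / len(scores))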
