Comparison between standard flow and Consensus flow with 3 labelers
Project creation with Consensus
In a Consensus_Best Review Stage, the annotation sets of all labelers will be displayed in the Annotation Tool.
The reviewer’s primary task is to analyze the multiple sets of annotations generated by the labelers and produce a single set that will be saved and pushed to the next Stage. In RedBrick AI’s Editor, this single set of annotations is referred to as the Best Version.
Note: by default, RedBrick AI selects the set of annotations with the highest Inter-Annotator Score as the Best Version.
Reviewers can view, hide, or show any of the annotation sets in a Task, which allows them to analyze the work done by each labeler and select the set of annotations they consider to be of the highest quality.
The users who annotated the Task are listed in the lefthand toolbar. By default, annotations are color-coded by user, but the Consensus Reviewer can also control the visibility of each Object Label Entity individually.
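To make the default selection rule from the Note above concrete, here is a minimal sketch in plain Python (not the RedBrick AI SDK) of one way to pick the labeler whose annotation set has the highest average pairwise Inter-Annotator Score; the `pair_scores` structure and the example numbers are illustrative assumptions.

```python
# Illustrative sketch only -- not the RedBrick AI SDK.
# `pair_scores` maps an unordered pair of labelers to their pairwise
# Inter-Annotator Score (see the Agreement table further below).

def default_best_version(pair_scores: dict[tuple[str, str], float]) -> str:
    """Return the labeler whose set has the highest average pairwise score."""
    labelers = {user for pair in pair_scores for user in pair}

    def avg_score(user: str) -> float:
        scores = [s for pair, s in pair_scores.items() if user in pair]
        return sum(scores) / len(scores)

    return max(labelers, key=avg_score)

# Hypothetical scores for three labelers:
scores = {
    ("User 1", "User 2"): 0.82,
    ("User 1", "User 3"): 0.78,
    ("User 2", "User 3"): 0.64,
}
print(default_best_version(scores))  # -> "User 1"
```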
The lefthand toolbar of Best Annotator Review
Clicking into a labeler's annotations
Creating a Best Version
A video walkthrough of Best Annotator Consensus Projects
Consensus_Best Review presumes that the reviewer is starting their work from one “best” set of labels. However, the Consensus_Merge review flow adopts a more “pick and choose” approach in which the reviewer builds their Ground Truth labels and classifications one at a time.
Therefore, when a Consensus_Merge reviewer opens a Task, they are presented with an empty Ground Truth label set:
Consensus_Merge Review
The empty label set presented to a Consensus_Merge reviewer
Before - adding an annotation to Ground Truth
After - adding an annotation to Ground Truth
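As a loose illustration of the “pick and choose” flow described above (plain Python, not the RedBrick AI Editor or SDK), the sketch below models a Ground Truth set that starts empty and is filled one annotation at a time with entities copied from different labelers; all names and data structures are hypothetical.

```python
# Illustrative sketch only -- not the RedBrick AI Editor or SDK. It models the
# "pick and choose" idea: the reviewer starts from an empty Ground Truth set
# and copies in individual annotations from any labeler's set.
from dataclasses import dataclass, field

@dataclass
class Annotation:
    labeler: str   # who drew it
    entity: str    # e.g. a hypothetical category name
    data: object   # contour / mask / bounding box, etc.

@dataclass
class GroundTruth:
    annotations: list[Annotation] = field(default_factory=list)

    def add(self, annotation: Annotation) -> None:
        """Copy one chosen annotation into the Ground Truth set."""
        self.annotations.append(annotation)

# The reviewer picks one entity from User 1 and another from User 3:
ground_truth = GroundTruth()
ground_truth.add(Annotation("User 1", "Left Kidney", data=...))
ground_truth.add(Annotation("User 3", "Liver", data=...))
```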
Adding Classifications to Ground Truth
Task assignment in a Consensus Project
| | User 1 | User 2 | User 3 |
| --- | --- | --- | --- |
| User 1 | | Score(U1,U2) | Score(U1,U3) |
| User 2 | Score(U2,U1) | | Score(U2,U3) |
| User 3 | Score(U3,U1) | Score(U3,U2) | |
Agreement = Average(Score(U1,U2), Score(U1,U3), Score(U2,U3))
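As a rough illustration of this formula, the sketch below (plain Python with NumPy, not the RedBrick AI SDK) computes every pairwise Score and averages them into a Task-level Agreement, assuming binary segmentation masks and a Dice overlap as the comparison function; as noted below, the actual comparison function depends on the annotation type.

```python
# Illustrative sketch only -- not the RedBrick AI SDK. Assumes each labeler's
# annotation is a binary segmentation mask and uses Dice overlap as the
# pairwise comparison function.
from itertools import combinations

import numpy as np

def dice_score(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice overlap between two binary masks (1.0 = identical)."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    total = mask_a.sum() + mask_b.sum()
    return 1.0 if total == 0 else 2.0 * intersection / total

def pairwise_scores(masks: dict[str, np.ndarray]) -> dict[tuple[str, str], float]:
    """Score(Ui, Uj) for every unordered pair of labelers."""
    return {
        (u1, u2): dice_score(masks[u1], masks[u2])
        for u1, u2 in combinations(masks, 2)
    }

def agreement(scores: dict[tuple[str, str], float]) -> float:
    """Task-level Agreement = average of all pairwise Scores."""
    return sum(scores.values()) / len(scores)
```

With three labelers, `agreement(pairwise_scores(masks))` averages Score(U1,U2), Score(U1,U3), and Score(U2,U3), matching the formula above.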
The type of comparison function used to calculate the Score depends on the type of data and annotations you and your team are working with. Please reference the following documentation to read more about how RedBrick calculates Inter-Annotator Agreement.
Inter-Annotator Agreement for Tasks queued in Review