Project Settings
Any Project on RedBrick can be highly customized to support the specifics of your desired workflow.
The Project Settings tabs allow you to adjust your Project's configuration and workflows, export your Project and its metadata, execute Bulk Actions, customize your Project's toolkit, and much more.
Project Settings are broken down into the following subcategories:
General Settings
Consensus (for Consensus Projects)
Export Labels
Label Validation Script
Hanging Protocol
Data Uploads
Export Metadata
Annotation Storage
Bulk Actions
Tool Settings
Webhooks
You can also quickly navigate to several of these tabs from anywhere within a Project by clicking on the corresponding Settings Shortcut in the top-right corner of the screen.
The General Settings tab contains basic information about your Project and allows you to modify:
your Project's name, description, and labeling instructions URL;
your caching settings;
your labeler evaluation settings;
your Task assignment settings and the size of your Labeling/Review Queue;
your Review Stage settings, including your pseudo-random review percentage;
your permissions settings for Read-only Labels.
You can also add Review Stages to a Project's workflow. However, this is a permanent action that cannot be undone.
Oops! If a Review Stage has been added to your Project in error, you can reduce its review percentage to 0% so that all Tasks "pass through" it.
If you are working inside a multi-reader Project, you can find Consensus settings in the corresponding tab. For a full overview of how to set up and configure multi-reader Projects on RedBrick AI, please see the following documentation:
The Export Labels tab contains pre-filled CLI commands that allow you to easily clone your Project to a directory on your local machine and export it.
For a more comprehensive overview of using the CLI to export a RedBrick Project, relevant tags, and common variants, please see our CLI export documentation.
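If you prefer to script the export instead of copying the pre-filled commands, a minimal sketch with the redbrick Python SDK might look like the following. The IDs, API key, and the export_tasks call shown here are assumptions for illustration; the CLI export documentation linked above describes the authoritative interface.

```python
# Minimal export sketch using the redbrick Python SDK.
# The org/project IDs, API key, and exact method names are placeholders and
# assumptions -- consult the CLI/SDK export documentation for the real interface.
import json

import redbrick

project = redbrick.get_project(
    org_id="YOUR_ORG_ID",          # placeholder
    project_id="YOUR_PROJECT_ID",  # placeholder
    api_key="YOUR_API_KEY",        # placeholder
)

# Fetch all Tasks in the Project as a list of dictionaries.
tasks = list(project.export.export_tasks())

# Persist the task metadata locally; segmentation files can be downloaded
# separately depending on the export options you choose.
with open("export.json", "w") as f:
    json.dump(tasks, f, indent=2)
```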
The Label Validation Script tab allows you to use JavaScript to enforce specific labeling behaviors and implement automated QA flows in your Project.
A more comprehensive overview and example scripts can be found on the following page:
Custom hanging protocols can also be added to a Project's configuration to give admins greater control over the default annotation environment, specifically:
windowing settings;
the contents of a viewport, including viewing planes, synchronization with other viewports, flipping, etc.;
Task layout (i.e. the viewport grid, the number of Layout tabs available and their contents, etc.);
thresholding settings.
The Data Uploads tab provides a summary of all of the upload operations that have been carried out within your Project.
If something is erroneously uploaded to your Project, you can "undo" the upload by deleting it in the Data Uploads tab.
The delete operation removes all images, labels, and Tasks associated with the upload - use caution!
As of v1.1.2, the Export Metadata tab has been removed; its functionality now lives on the Project Data Page (Comment Export) and the Project Overview Page (Productivity Data Export).
You can now export all of a Project's Comments by clicking on the corresponding button on the Project Data Page.
Productivity data, which can be accessed on the Project's Overview Page, includes each labeler's or reviewer's active work time (measured in milliseconds) and the number of Tasks completed per day.
You can also customize the date range for your export to retrieve more specific data.
By default, all annotations generated on RedBrick AI are stored on our servers in NIfTI format.
The Annotation Storage tab allows you to designate any Storage Method that you have integrated with the platform and store your annotations there.
The Bulk Actions tab allows you to execute Stage-level operations for your workflow. Common operations include:
sending all Ground Truth Tasks back to a Label or Review Stage;
pushing Label Stage Tasks with pre-uploaded annotations to a later Stage;
accepting or rejecting Tasks that have been pseudo-randomly retained in a Review Stage.
Bulk Actions are also reflected in Task History as a System operation. For example, if user "Ben Stewart" bulk rejected all Tasks in the Review_1 Stage, the Task History would display as follows:
The Tool Settings page allows admins to configure the exact scope of the Segmentation Toolkit available to labelers and reviewers.
For more information about our Segmentation Toolkit and the Tool Settings page, please see the following pages:
If you'd like to integrate webhooks into your Project, you can do so here.
For a more detailed overview of the available webhooks and how to integrate them, please see the following page:
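On the receiving end, a webhook is simply an HTTPS endpoint that accepts POST requests from RedBrick AI. The sketch below is a rough, non-authoritative illustration of such an endpoint; the route name and payload handling are assumptions, and the page linked above documents the actual event types and schema.

```python
# Minimal webhook receiver sketch using Flask.
# The endpoint path and payload fields are assumptions for illustration;
# see the webhooks documentation for the actual event names and schema.
from flask import Flask, request

app = Flask(__name__)

@app.route("/redbrick-webhook", methods=["POST"])
def handle_webhook():
    event = request.get_json(silent=True) or {}
    # Log the raw payload; a real handler would typically branch on an
    # event/type field and update downstream systems accordingly.
    app.logger.info("Received RedBrick webhook: %s", event)
    return "", 204

if __name__ == "__main__":
    app.run(port=8000)
```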