Programmatic Label & Review
It may be useful to programmatically add labels to your uploaded data or perform a review on queued tasks. This scenario may arise if you have an automated way of reviewing data or if you want to bulk-process tasks.
Please see the detailed reference documentation for put_tasks here.
You can only use put_tasks on Tasks assigned to your API key. Please consult our documentation to learn more about how to assign Tasks to your API key.
First, perform the standard RedBrick AI SDK set-up to create a project object.
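The set-up sketched below follows the standard SDK pattern; the org ID, project ID, and API key are placeholders you must replace with your own values.

```python
# Standard RedBrick AI SDK set-up (requires: pip install redbrick-sdk).
# All three values below are placeholders.
import redbrick

project = redbrick.get_project(
    org_id="<org_id>",
    project_id="<project_id>",
    api_key="<api_key>",
)
```

The returned project object exposes the labeling and export methods used in the rest of this page.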
Next, you need to get a list of the Tasks you want to label/review. You can do this by:
- Searching for the task_id through the RedBrick AI UI.
- Retrieving the task_id for your file name/custom name from the Items List using search_tasks.
- Retrieving Tasks assigned to your API key using list_tasks.
Programmatically Label Tasks
Add your annotations within the series field, along with the task_id. Please refer to the reference documentation for the format of the annotations in Series.
The corresponding Task must be queued in the Label Stage and assigned to your API key.
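A minimal sketch of a labeling payload is shown below. The series contents are placeholders (see the reference documentation for the full annotation format), and the put_tasks call is commented out because it requires a configured project object and a Task queued in Label and assigned to your API key.

```python
# Sketch of a put_tasks payload for labeling.
# The series entry is a placeholder; see the Series reference docs
# for the actual annotation format.
task = {
    "taskId": "<task_id>",
    "series": [
        {
            # segmentations, classifications, measurements, ...
        }
    ],
}
tasks = [task]

# With a configured project object:
# project.labeling.put_tasks(stage_name="Label", tasks=tasks)
```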
Programmatically Review Tasks
Add your review decision in the review_result argument, along with the task_id. The corresponding Task must be queued in the Review stage that you specify in stage_name and must be assigned to your API key.
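The review case can be sketched as follows. The stage name "Review_1" is an assumption (use the name of your own Review stage), and the call itself is commented out because it requires a configured project object.

```python
# Sketch of a programmatic review decision.
tasks = [{"taskId": "<task_id>"}]

# With a configured project object, review_result=True accepts the Task
# and review_result=False rejects it. "Review_1" is a placeholder
# stage name — use your workflow's actual Review stage name.
# project.labeling.put_tasks(
#     stage_name="Review_1", tasks=tasks, review_result=True
# )
```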
Re-annotate Ground Truth Tasks
Once your Task goes through all of the stages in your workflow, it will be stored in the Ground Truth Stage. If you notice issues with one or more of your Ground Truth Tasks, you can either modify them manually within the UI while the Tasks are still in the Ground Truth Stage or send them back to the Label Stage for correction.
First, get a list of the task_ids you want to send back to Label. You can do this by exporting only Ground Truth Tasks and filtering them. Then, use move_tasks_to_start to send them back to Label.
All corresponding Tasks need to be in the Ground Truth Stage. This function will not work for Tasks queued in Review.
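The steps above can be sketched as follows; the task IDs are placeholders, and the move_tasks_to_start call is commented out because it requires a configured project object with all of the listed Tasks in Ground Truth.

```python
# Sketch: send Ground Truth Tasks back to the Label stage.
# Placeholder IDs — in practice these come from your filtered
# Ground Truth export.
task_ids = ["<task_id_1>", "<task_id_2>"]

# With a configured project object (every Task must be in Ground Truth;
# this will not work for Tasks queued in Review):
# project.labeling.move_tasks_to_start(task_ids)
```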