RedBrick

Storage Methods

class redbrick.StorageMethod[source]

Storage method integrations for RedBrick Organizations.

  • PUBLIC - Access files from a public cloud storage service using their absolute URLs (i.e. files available publicly).

  • REDBRICK - Access files stored on RedBrick AI’s servers (i.e. files uploaded directly to RBAI from a local machine)

  • AWSS3 - Access files stored in an external S3 bucket that has been integrated with RedBrick AI

  • GoogleCloud - Access files stored in an external Google Cloud bucket that has been integrated with RedBrick AI

  • AzureBlob - Access files stored in an external Azure Blob that has been integrated with RedBrick AI

  • AltaDB - Access files stored in an AltaDB dataset


class Public[source]

Public storage provider (Subclass of StorageProvider).

class Details[source]

Public storage provider details.

property key: str

Public storage provider details key.

classmethod from_entity(entity=None)[source]

Get object from entity

Return type:

Details

abstract to_entity()[source]

Get entity from object.

Return type:

Dict[str, Any]

validate(check_secrets=False)[source]

Validate Public storage provider details.

Return type:

None

classmethod from_entity(entity)[source]

Get object from entity

Return type:

Public

class RedBrick[source]

RedBrick storage provider (Subclass of StorageProvider).

class Details[source]

RedBrick storage provider details.

property key: str

RedBrick storage provider details key.

classmethod from_entity(entity=None)[source]

Get object from entity

Return type:

Details

abstract to_entity()[source]

Get entity from object.

Return type:

Dict[str, Any]

validate(check_secrets=False)[source]

Validate RedBrick storage provider details.

Return type:

None

classmethod from_entity(entity)[source]

Get object from entity

Return type:

RedBrick

class AWSS3(storage_id, name, details)[source]

AWS S3 storage provider (Subclass of StorageProvider).

class Details(bucket, region, transfer_acceleration=False, endpoint=None, access_key_id=None, secret_access_key=None, role_arn=None, role_external_id=None, session_duration=3600)[source]

AWS S3 storage provider details.

Variables:

  • bucket (str) – AWS S3 bucket.

  • region (str) – AWS S3 region.

  • transfer_acceleration (bool) – AWS S3 transfer acceleration.

  • endpoint (str) – Custom endpoint (For S3 compatible storage, e.g. MinIO).

  • access_key_id (str) – AWS access key id.

  • secret_access_key (str) – AWS secret access key. (Will be None in output for security reasons)

  • role_arn (str) – AWS assume_role ARN. (For short-lived credentials instead of access keys)

  • role_external_id (str) – AWS assume_role external id. (Will be None in output for security reasons)

  • session_duration (int) – AWS S3 assume_role session duration.
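
For orientation, the variables above map to keyword arguments on Details. A minimal sketch (every value below is a placeholder, not a working credential):

```python
# Sketch: keyword arguments for redbrick.StorageMethod.AWSS3.Details,
# mirroring the variables documented above. All values are placeholders.
s3_details_kwargs = {
    "bucket": "my-annotations-bucket",
    "region": "us-east-1",
    "transfer_acceleration": False,
    # assume_role (short-lived credentials) instead of static access keys:
    "role_arn": "arn:aws:iam::123456789012:role/redbrick-access",
    "session_duration": 3600,
}
```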

property key: str

AWS S3 storage provider details key.

classmethod from_entity(entity=None)[source]

Get object from entity

Return type:

Details

abstract to_entity()[source]

Get entity from object.

Return type:

Dict[str, Any]

validate(check_secrets=False)[source]

Validate AWS S3 storage provider details.

Return type:

None

class GoogleCloud(storage_id, name, details)[source]

Google cloud storage provider (Subclass of StorageProvider).

class Details(bucket, service_account_json=None)[source]

Google cloud storage provider details.

Variables:

  • bucket (str) – GCS bucket.

  • service_account_json (str) – GCS service account JSON. (Will be None in output for security reasons)

property key: str

Google cloud storage provider details key.

classmethod from_entity(entity=None)[source]

Get object from entity

Return type:

Details

abstract to_entity()[source]

Get entity from object.

Return type:

Dict[str, Any]

validate(check_secrets=False)[source]

Validate Google cloud storage provider details.

Return type:

None

class AzureBlob(storage_id, name, details)[source]

Azure blob storage provider (Subclass of StorageProvider).

class Details(connection_string=None, sas_url=None)[source]

Azure blob storage provider details.

Variables:

  • connection_string (str) – Azure connection string. (Will be None in output for security reasons)

  • sas_url (str) – Azure Shared Access Signature URL for granular blob access. (Will be None in output for security reasons)

property key: str

Azure blob storage provider details key.

classmethod from_entity(entity=None)[source]

Get object from entity

Return type:

Details

abstract to_entity()[source]

Get entity from object.

Return type:

Dict[str, Any]

validate(check_secrets=False)[source]

Validate Azure blob storage provider details.

Return type:

None

class AltaDB(storage_id, name, details)[source]

AltaDB storage provider (Subclass of StorageProvider).

class Details(access_key_id, endpoint=None, secret_access_key=None)[source]

AltaDB storage provider details.

Variables:

  • access_key_id (str) – AltaDB access key id.

  • secret_access_key (str) – AltaDB secret access key. (Will be None in output for security reasons)

  • endpoint (str) – Custom endpoint.

property key: str

AltaDB storage provider details key.

classmethod from_entity(entity=None)[source]

Get object from entity

Return type:

Details

abstract to_entity()[source]

Get entity from object.

Return type:

Dict[str, Any]

validate(check_secrets=False)[source]

Validate AltaDB storage provider details.

Return type:

None

class redbrick.StorageProvider(storage_id, name, details)[source]

Base storage provider.

Sub-classes:

  • redbrick.StorageMethod.Public (Public)

  • redbrick.StorageMethod.RedBrick (RedBrick)

  • redbrick.StorageMethod.AWSS3 (AWSS3)

  • redbrick.StorageMethod.GoogleCloud (GoogleCloud)

  • redbrick.StorageMethod.AzureBlob (AzureBlob)

  • redbrick.StorageMethod.AltaDB (AltaDB)

class Details[source]

Storage details.

abstract property key: str

Storage provider details key.

abstract classmethod from_entity(entity=None)[source]

Get object from entity

Return type:

Details

abstract to_entity()[source]

Get entity from object.

Return type:

Dict[str, Any]

abstract validate(check_secrets=False)[source]

Validate storage provider details.

Return type:

None

classmethod from_entity(entity)[source]

Get object from entity

Return type:

StorageProvider

class redbrick.ImportTypes(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]

Enumerates the supported data import types.

Please see the supported data types and file extensions in our documentation.

class redbrick.TaskEventTypes(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]

Enumerate the different types of task events.

  • TASK_CREATED - A new task has been created.

  • TASK_SUBMITTED - A task has been submitted for review.

  • TASK_ACCEPTED - A submitted task has been accepted in review.

  • TASK_REJECTED - A submitted task has been rejected in review.

  • TASK_CORRECTED - A submitted task has been corrected in review.

  • TASK_ASSIGNED - A task has been assigned to a worker.

  • TASK_REASSIGNED - A task has been reassigned to another worker.

  • TASK_UNASSIGNED - A task has been unassigned from a worker.

  • TASK_SKIPPED - A task has been skipped by a worker.

  • TASK_SAVED - A task has been saved but not yet submitted.

  • GROUNDTRUTH_TASK_EDITED - A ground truth task has been edited.

  • CONSENSUS_COMPUTED - The consensus for a task has been computed.

  • COMMENT_ADDED - A comment has been added to a task.

  • CONSENSUS_TASK_EDITED - A consensus task has been edited.

class redbrick.TaskFilters(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]

Enumerate the different task filters.

  • ALL - All tasks.

  • GROUNDTRUTH - Ground truth tasks only.

  • UNASSIGNED - Tasks that have not yet been assigned to a worker.

  • QUEUED - Tasks that are queued for labeling/review.

  • DRAFT - Tasks that have been saved as draft.

  • SKIPPED - Tasks that have been skipped by a worker.

  • COMPLETED - Tasks that have been completed successfully.

  • FAILED - Tasks that have been rejected in review.

  • ISSUES - Tasks that have issues raised and cannot be completed.

class redbrick.TaskStates(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]

Task Status.

  • UNASSIGNED - The Task has not been assigned to a Project Admin or Member.

  • ASSIGNED - The Task has been assigned to a Project Admin or Member, but work has not begun on it.

  • IN_PROGRESS - The Task is currently being worked on by a Project Admin or Member.

  • COMPLETED - The Task has been completed successfully.

  • PROBLEM - A Project Admin or Member has raised an Issue regarding the Task, and work cannot continue until the Issue is resolved by a Project Admin.

  • SKIPPED - The Task has been skipped.

  • STAGED - The Task has been saved as a Draft.

class redbrick.Stage(stage_name, config)[source]

Base stage.

class Config[source]

Stage config.

abstract classmethod from_entity(entity=None, taxonomy=None)[source]

Get object from entity

Return type:

Config

abstract to_entity(taxonomy=None)[source]

Get entity from object.

Return type:

Dict

abstract classmethod from_entity(entity, taxonomy=None)[source]

Get object from entity

Return type:

Stage

abstract to_entity(taxonomy=None)[source]

Get entity from object.

Return type:

Dict

class redbrick.LabelStage(stage_name, config=<factory>, on_submit=True)[source]

Label Stage.

Parameters:

  • stage_name (str) – Stage name.

  • on_submit (Union[bool, str] = True) – The next stage for the task when submitted in current stage. If True, the task will go to ground truth. If False, the task will be archived.

  • config (Config = Config()) – Stage config.

class Config(auto_assignment=None, auto_assignment_queue_size=None, show_uploaded_annotations=None, read_only_labels_edit_access=None, is_pre_label=None, is_consensus_label=None)[source]

Label Stage Config.

Parameters:

  • auto_assignment (Optional[bool]) – Enable task auto assignment. (Default: True)

  • auto_assignment_queue_size (Optional[int]) – Task auto-assignment queue size. (Default: 5)

  • show_uploaded_annotations (Optional[bool]) – Show uploaded annotations to users. (Default: True)

  • read_only_labels_edit_access (Optional[ProjectMember.Role]) – Access level to change the read only labels. (Default: None)

  • is_pre_label (Optional[bool]) – Is pre-labeling stage. (Default: False)

  • is_consensus_label (Optional[bool]) – Is consensus-labeling stage. (Default: False)

classmethod from_entity(entity=None, taxonomy=None)[source]

Get object from entity.

Return type:

Config

to_entity(taxonomy=None)[source]

Get entity from object.

Return type:

Dict

classmethod from_entity(entity, taxonomy=None)[source]

Get object from entity

Return type:

LabelStage

to_entity(taxonomy=None)[source]

Get entity from object.

Return type:

Dict

class redbrick.ReviewStage(stage_name, config=<factory>, on_accept=True, on_reject=False)[source]

Review Stage.

Parameters:

  • stage_name (str) – Stage name.

  • on_accept (Union[bool, str] = True) – The next stage for the task when accepted in current stage. If True, the task will go to ground truth. If False, the task will be archived.

  • on_reject (Union[bool, str] = False) – The next stage for the task when rejected in current stage. If True, the task will go to ground truth. If False, the task will be archived.

  • config (Config = Config()) – Stage config.
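
The on_submit, on_accept, and on_reject values above all share one routing rule. A small sketch of that rule (illustrative only, not part of the SDK):

```python
# Sketch (not part of the SDK): how a stage's on_submit / on_accept /
# on_reject value routes a task, per the parameter docs above.
def next_stage(outcome):
    """True -> ground truth, False -> archived, str -> that named stage."""
    if outcome is True:
        return "GROUND_TRUTH"
    if outcome is False:
        return "ARCHIVED"
    return outcome  # an explicit stage name, e.g. "Label"
```

So a review stage configured with on_reject="Label" would send rejected tasks back to the stage named "Label", while on_accept=True sends accepted tasks to ground truth.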

class Config(review_percentage=None, auto_assignment=None, auto_assignment_queue_size=None, read_only_labels_edit_access=None, is_pre_review=None, is_consensus_merge=None)[source]

Review Stage Config.

Parameters:

  • review_percentage (Optional[float]) – Percentage of tasks in [0, 1] that will be sampled for review. (Default: 1)

  • auto_assignment (Optional[bool]) – Enable task auto assignment. (Default: True)

  • auto_assignment_queue_size (Optional[int]) – Task auto-assignment queue size. (Default: 5)

  • read_only_labels_edit_access (Optional[ProjectMember.Role]) – Access level to change the read only labels. (Default: None)

  • is_pre_review (Optional[bool]) – Is pre-review stage. (Default: False)

  • is_consensus_merge (Optional[bool]) – Is consensus-merge (V2) stage. (Default: False)

classmethod from_entity(entity=None, taxonomy=None)[source]

Get object from entity.

Return type:

Config

to_entity(taxonomy=None)[source]

Get entity from object.

Return type:

Dict

classmethod from_entity(entity, taxonomy=None)[source]

Get object from entity

Return type:

ReviewStage

to_entity(taxonomy=None)[source]

Get entity from object.

Return type:

Dict

class redbrick.ModelStage(stage_name, config=<factory>, on_submit=True)[source]

Model Stage.

Parameters:

  • stage_name (str) – Stage name.

  • on_submit (Union[bool, str] = True) – The next stage for the task when submitted in current stage. If True, the task will go to ground truth. If False, the task will be archived.

  • config (Config = Config()) – Stage config.

class ModelTaxonomyMap[source]

Model taxonomy map.

Parameters:

  • modelCategory (str) – Model category name.

  • rbCategory (str) – Category name as it appears in the RedBrick project’s taxonomy.

class Config(name, sub_type=None, url=None, taxonomy_objects=None)[source]

Model Stage Config.

Parameters:

  • name (str) – Model name.

  • sub_type (str) – Model sub type.

  • url (Optional[str]) – URL for self-hosted model.

  • taxonomy_objects (Optional[List[ModelStage.ModelTaxonomyMap]]) – Mapping of model classes to project’s taxonomy objects.

classmethod from_entity(entity=None, taxonomy=None)[source]

Get object from entity.

Return type:

Config

to_entity(taxonomy=None)[source]

Get entity from object.

Return type:

Dict

classmethod from_entity(entity, taxonomy=None)[source]

Get object from entity

Return type:

ModelStage

to_entity(taxonomy=None)[source]

Get entity from object.

Return type:

Dict

class redbrick.OrgMember(user_id, email, given_name, family_name, role, tags, is_2fa_enabled, last_active=None, sso_provider=None)[source]

Organization Member.

Parameters:

  • user_id (str) – User ID.

  • email (str) – User email.

  • given_name (str) – User given name.

  • family_name (str) – User family name.

  • role (OrgMember.Role) – User role in organization.

  • tags (List[str]) – Tags associated with the user.

  • is_2fa_enabled (bool) – Whether 2FA is enabled for the user.

  • last_active (Optional[datetime] = None) – Last time the user was active.

  • sso_provider (Optional[str] = None) – User identity SSO provider.

class Role(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]

Enumerate access levels for Organization.

  • OWNER - Organization Owner

  • ADMIN - Organization Admin

  • MEMBER - Organization Member

classmethod from_entity(member)[source]

Get object from entity.

Return type:

OrgMember

class redbrick.OrgInvite(email, role, sso_provider=None, status=Status.PENDING)[source]

Organization Invite.

Parameters:

  • email (str) – User email.

  • role (OrgMember.Role) – User role in organization.

  • sso_provider (Optional[str] = None) – User identity SSO provider.

  • status (OrgInvite.Status = OrgInvite.Status.PENDING) – Invite status.

class Status(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]

Enumerate invite status.

  • PENDING - Pending invitation

  • ACCEPTED - Accepted invitation

  • DECLINED - Declined invitation

classmethod from_entity(invite)[source]

Get object from entity.

Return type:

OrgInvite

to_entity()[source]

Get entity from object.

Return type:

Dict

class redbrick.ProjectMember(member_id, role, stages=None, org_membership=None)[source]

Project Member.

Parameters:

  • member_id (str) – Unique user ID or email.

  • role (ProjectMember.Role) – User role in project.

  • stages (Optional[List[str]] = None) – Stages that the member has access to (Applicable for MEMBER role).

  • org_membership (Optional[OrgMember] = None) – Organization membership. This is not required when adding/updating a member.

class Role(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]

Enumerate access levels for Project.

  • ADMIN - Project Admin

  • MANAGER - Project Manager

  • MEMBER - Project Member (Labeler/Reviewer)

classmethod from_entity(member)[source]

Get object from entity.

Return type:

ProjectMember

redbrick.get_org(org_id, api_key, *, url='https://api.redbrickai.com')[source]

Get an existing RedBrick organization object.

Organization object allows you to interact with your organization and perform high level actions like creating a project.

>>> org = redbrick.get_org(org_id, api_key)

Parameters:

  • org_id (str) – Your organization's unique id https://app.redbrickai.com/<org_id>/

  • api_key (str) – Your secret api_key, can be created from the RedBrick AI platform.

  • url (str = DEFAULT_URL) – Should default to https://api.redbrickai.com

Return type:

RBOrganization

redbrick.get_workspace(org_id, workspace_id, api_key, *, url='https://api.redbrickai.com')[source]

Get an existing RedBrick workspace object.

Workspace objects allow you to interact with your RedBrick AI workspaces, and perform actions like importing data, exporting data etc.

>>> workspace = redbrick.get_workspace(org_id, workspace_id, api_key)

Parameters:

  • org_id (str) – Your organization's unique id https://app.redbrickai.com/<org_id>/

  • workspace_id (str) – Your workspace's unique id.

  • api_key (str) – Your secret api_key, can be created from the RedBrick AI platform.

  • url (str = DEFAULT_URL) – Should default to https://api.redbrickai.com

Return type:

RBWorkspace

redbrick.get_project(org_id, project_id, api_key, *, url='https://api.redbrickai.com')[source]

Get an existing RedBrick project object.

Project objects allow you to interact with your RedBrick AI projects, and perform actions like importing data, exporting data etc.

>>> project = redbrick.get_project(org_id, project_id, api_key)

Parameters:

  • org_id (str) – Your organization's unique id https://app.redbrickai.com/<org_id>/

  • project_id (str) – Your project's unique id.

  • api_key (str) – Your secret api_key, can be created from the RedBrick AI platform.

  • url (str = DEFAULT_URL) – Should default to https://api.redbrickai.com

Return type:

RBProject

redbrick.get_org_from_profile(profile_name=None)[source]

Get the org from the profile name in the credentials file.

>>> org = get_org_from_profile()

Parameters:

profile_name (str) – Name of the profile stored in the credentials file

Return type:

RBOrganization

redbrick.get_project_from_profile(project_id=None, profile_name=None)[source]

Get the RBProject object using the credentials file.

>>> project = get_project_from_profile()

Parameters:

  • project_id (Optional[str] = None) – Project id to fetch. None is valid only when called within a project directory.

  • profile_name (str) – Name of the profile stored in the credentials file

Return type:

RBProject

Organization

class redbrick.RBOrganization[source]

Bases: ABC

Representation of RedBrick organization.

The redbrick.RBOrganization object allows you to programmatically interact with your RedBrick organization. This class provides methods for querying your organization and doing other high level actions. Retrieve the organization object in the following way:

>>> org = redbrick.get_org(org_id="", api_key="")

abstract property org_id: str

Retrieve the unique org_id of this organization.

abstract property name: str

Retrieve unique name of this organization.

abstract taxonomies(only_name=True)[source]

Get a list of taxonomy names/objects in the organization.

Return type:

Union[List[str], List[Taxonomy]]

abstract workspaces_raw()[source]

Get a list of active workspaces as raw objects in the organization.

Return type:

List[Dict]

abstract projects_raw()[source]

Get a list of active projects as raw objects in the organization.

Return type:

List[Dict]

abstract projects()[source]

Get a list of active projects in the organization.

Return type:

List[RBProject]

abstract create_workspace(name, exists_okay=False)[source]

Create a workspace within the organization.

This method creates a workspace in a similar fashion to the quickstart on the RedBrick AI create workspace page.

Parameters:

  • name (str) – A unique name for your workspace

  • exists_okay (bool = False) – Allow workspaces with the same name to be returned instead of trying to create a new workspace. Useful for when running the same script repeatedly when you do not want to keep creating new workspaces.

Returns:

A RedBrick Workspace object.

Return type:

redbrick.RBWorkspace

abstract create_project_advanced(name, taxonomy_name, stages, exists_okay=False, workspace_id=None, sibling_tasks=None, consensus_settings=None)[source]

Create a project within the organization.

This method creates a project in a similar fashion to the quickstart on the RedBrick AI create project page.

Parameters:

  • name (str) – A unique name for your project

  • taxonomy_name (str) – The name of the taxonomy you want to use for this project. Taxonomies can be found on the left side bar of the platform.

  • stages (List[Stage]) – List of stage configs.

  • exists_okay (bool = False) – Allow projects with the same name to be returned instead of trying to create a new project. Useful for when running the same script repeatedly when you do not want to keep creating new projects.

  • workspace_id (Optional[str] = None) – The id of the workspace that you want to add this project to.

  • sibling_tasks (Optional[int] = None) – Number of tasks created for each uploaded datapoint.

  • consensus_settings (Optional[Dict[str, Any]] = None) – Consensus settings for the project. It has keys:

    • minAnnotations: int

    • autoAcceptThreshold?: float (range [0, 1])

Returns:

A RedBrick Project object.

Return type:

redbrick.RBProject

Raises:

ValueError: – If a project with the same name exists but has a different type or taxonomy.
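
A sketch of the consensus_settings mapping described above (the key names follow the documentation; the values are arbitrary placeholders):

```python
# Sketch: consensus settings for create_project_advanced / create_project.
# Keys follow the documentation above; values are placeholders.
consensus_settings = {
    "minAnnotations": 3,         # annotators required per task
    "autoAcceptThreshold": 0.8,  # optional; must lie in [0, 1]
}
assert 0.0 <= consensus_settings["autoAcceptThreshold"] <= 1.0
```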

abstract create_project(name, taxonomy_name, reviews=0, exists_okay=False, workspace_id=None, sibling_tasks=None, consensus_settings=None)[source]

Create a project within the organization.

This method creates a project in a similar fashion to the quickstart on the RedBrick AI create project page.

Parameters:

  • name (str) – A unique name for your project

  • taxonomy_name (str) – The name of the taxonomy you want to use for this project. Taxonomies can be found on the left side bar of the platform.

  • reviews (int = 0) – The number of review stages that you want to add after the label stage.

  • exists_okay (bool = False) – Allow projects with the same name to be returned instead of trying to create a new project. Useful for when running the same script repeatedly when you do not want to keep creating new projects.

  • workspace_id (Optional[str] = None) – The id of the workspace that you want to add this project to.

  • sibling_tasks (Optional[int] = None) – Number of tasks created for each uploaded datapoint.

  • consensus_settings (Optional[Dict[str, Any]] = None) – Consensus settings for the project. It has keys:

    • minAnnotations: int

    • autoAcceptThreshold?: float (range [0, 1])

Returns:

A RedBrick Project object.

Return type:

redbrick.RBProject

Raises:

ValueError: – If a project with the same name exists but has a different type or taxonomy.
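
As a hedged sketch of the simpler call (the project name and taxonomy below are placeholders; org is an RBOrganization obtained via redbrick.get_org):

```python
# Sketch: create (or fetch, thanks to exists_okay=True) a project with one
# label stage followed by two review stages. Name/taxonomy are placeholders.
def make_project(org):
    return org.create_project(
        name="Chest X-ray QA",
        taxonomy_name="DICOM Taxonomy",
        reviews=2,           # two review stages after the label stage
        exists_okay=True,    # safe to re-run the same script
    )
```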

abstract get_project(project_id=None, name=None)[source]

Get project by id/name.

Return type:

RBProject

abstract delete_project(project_id)[source]

Delete a project by ID.

Return type:

bool

abstract labeling_time(start_date, end_date, concurrency=50)[source]

Get information of tasks labeled between two dates (both inclusive).

Return type:

List[Dict]

abstract create_taxonomy(name, study_classify=None, series_classify=None, instance_classify=None, object_types=None)[source]

Create a Taxonomy.

Parameters:

  • name (str) – Unique identifier for the taxonomy.

  • study_classify (Optional[List[Attribute]]) – Study level classification applies to the task.

  • series_classify (Optional[List[Attribute]]) – Series level classification applies to a single series within a task.

  • instance_classify (Optional[List[Attribute]]) – Instance classification applies to a single frame (video) or slice (3D volume).

  • object_types (Optional[List[ObjectType]]) – Object types are used to annotate features/objects in tasks, for example, segmentation or bounding boxes.

Raises:

ValueError: – If there are validation errors.

Return type:

None

abstract get_taxonomy(name=None, tax_id=None)[source]

Get (fetch, export) a Taxonomy associated with your Organization, based on id or name. Useful for reviewing a Taxonomy in RedBrick-proprietary format or for modifying a Taxonomy (with update_taxonomy()).

Format reference for categories and attributes objects: https://sdk.redbrickai.com/formats/taxonomy.html

Return type:

Taxonomy

abstract update_taxonomy(tax_id, study_classify=None, series_classify=None, instance_classify=None, object_types=None)[source]

Update the categories/attributes of Taxonomy (V2) in the organization.

Format reference for categories and attributes objects: https://sdk.redbrickai.com/formats/taxonomy.html

Raises:

ValueError: – If there are validation errors.

Return type:

None

abstract delete_taxonomy(name=None, tax_id=None)[source]

Delete a taxonomy by name or ID.

Return type:

bool

Team

class redbrick.common.member.Team[source]

Bases: ABC

Abstract interface to Team module.

abstract get_member(member_id)[source]

Get a team member.

>>> org = redbrick.get_org(org_id, api_key)
>>> member = org.team.get_member(member_id)

Parameters:

member_id (str) – Unique member userId or email.

Return type:

OrgMember

abstract list_members()[source]

Get a list of all organization members.

>>> org = redbrick.get_org(org_id, api_key)
>>> members = org.team.list_members()

Return type:

List[OrgMember]

abstract remove_members(member_ids)[source]

Remove members from the organization.

>>> org = redbrick.get_org(org_id, api_key)
>>> org.team.remove_members(member_ids)

Parameters:

member_ids (List[str]) – Unique member ids (userId or email).

Return type:

None

abstract list_invites()[source]

Get a list of all pending or declined invites.

>>> org = redbrick.get_org(org_id, api_key)
>>> members = org.team.list_invites()

Return type:

List[OrgInvite]

abstract invite_user(invitation)[source]

Invite a user to the organization.

>>> org = redbrick.get_org(org_id, api_key)
>>> invitation = org.team.invite_user(OrgInvite(email="...", role=OrgMember.Role.MEMBER))

Parameters:

invitation (OrgInvite) – Organization invite

Return type:

OrgInvite

abstract revoke_invitation(invitation)[source]

Revoke org user invitation.

>>> org = redbrick.get_org(org_id, api_key)
>>> org.team.revoke_invitation(OrgInvite(email="..."))

Parameters:

invitation (OrgInvite) – Organization invite

Return type:

None

Storage

class redbrick.common.storage.Storage[source]

Bases: ABC

Storage Method Controller.

abstract get_storage(storage_id)[source]

Get a storage method by ID.

Return type:

StorageProvider

abstract list_storages()[source]

Get a list of storage methods in the organization.

Return type:

List[StorageProvider]

abstract create_storage(storage)[source]

Create a storage method.

Return type:

StorageProvider

abstract update_storage(storage_id, details)[source]

Update a storage method.

Return type:

StorageProvider

abstract delete_storage(storage_id)[source]

Delete a storage method.

Return type:

bool

abstract verify_storage(storage_id, path)[source]

Verify a storage method by ID.

Return type:

bool
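
Tying the controller methods together, a sketch that verifies every registered storage method (assumes org comes from redbrick.get_org, that each provider exposes its storage_id, and that path names a file reachable in each storage):

```python
# Sketch: verify every storage method registered in the organization.
# `org.storage` is the Storage controller documented above.
def verify_all_storages(org, path):
    """Map each storage_id to the result of verify_storage."""
    return {
        storage.storage_id: org.storage.verify_storage(storage.storage_id, path)
        for storage in org.storage.list_storages()
    }
```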

Workspace

class redbrick.RBWorkspace[source]

Bases: ABC

Interface for interacting with your RedBrick AI Workspaces.

abstract property org_id: str

Read only property.

Retrieves the unique Organization UUID that this workspace belongs to.

abstract property workspace_id: str

Read only property.

Retrieves the unique Workspace ID UUID.

abstract property name: str

Read only name property.

Retrieves the workspace name.

abstract property metadata_schema: List[Dict]

Retrieves the workspace metadata schema.

abstract property classification_schema: List[Dict]

Retrieves the workspace classification schema.

abstract property cohorts: List[Dict]

Retrieves the workspace cohorts.

abstract update_schema(metadata_schema=None, classification_schema=None)[source]

Update workspace metadata and classification schema.

Return type:

None

abstract update_cohorts(cohorts)[source]

Update workspace cohorts.

Return type:

None

abstract get_datapoints(*, concurrency=10)[source]

Get datapoints in a workspace.

Return type:

Iterator[Dict]

abstract archive_datapoints(dp_ids)[source]

Archive datapoints.

Return type:

None

abstract unarchive_datapoints(dp_ids)[source]

Unarchive datapoints.

Return type:

None

abstract add_datapoints_to_cohort(cohort_name, dp_ids)[source]

Add datapoints to a cohort.

Return type:

None

abstract remove_datapoints_from_cohort(cohort_name, dp_ids)[source]

Remove datapoints from a cohort.

Return type:

None

abstract update_datapoint_attributes(dp_id, attributes)[source]

Update datapoint attributes.

Return type:

None

abstract add_datapoints_to_projects(project_ids, dp_ids, is_ground_truth=False)[source]

Add datapoints to project.

Parameters:

  • project_ids (List[str]) – The projects in which you’d like to add the given datapoints.

  • dp_ids (List[str]) – List of datapoints that need to be added to projects.

  • is_ground_truth (bool = False) – Whether to create tasks directly in ground truth stage.

Return type:

None

abstract create_datapoints(storage_id, points, *, concurrency=50)[source]

Create datapoints in workspace.

Upload data to your workspace (without annotations). Please visit our documentation to understand the format for points.

>>> workspace = redbrick.get_workspace(org_id, workspace_id, api_key, url)
>>> points = [{"name": "...", "series": [{"items": "..."}]}]
>>> workspace.create_datapoints(storage_id, points)

Parameters:

  • storage_id (str) – Your RedBrick AI external storage_id. This can be found under the Storage Tab on the RedBrick AI platform. To directly upload images to RedBrick AI, use redbrick.StorageMethod.REDBRICK.

  • points (List[InputTask]) – Please see the RedBrick AI reference documentation for overview of the format. https://sdk.redbrickai.com/formats/index.html#import. Fields with annotation information are not supported in workspace.

  • concurrency (int = 50) –

Returns:

List of datapoint objects with key response if successful, else error

Return type:

List[Dict]

Note

1. If doing direct upload, please use redbrick.StorageMethod.REDBRICK as the storage id. Your items path must be a valid path to a locally stored image.

2. When doing direct upload i.e. redbrick.StorageMethod.REDBRICK, if you didn’t specify a “name” field in your datapoints object, we will assign the “items” path to it.
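
A concrete minimal points payload might look like this (a sketch; the name and items path are placeholders, and the full schema is in the import format documentation linked above):

```python
# Sketch: one datapoint with a single series, per the import format docs.
# The name and items path below are placeholders.
points = [
    {
        "name": "study-001",
        "series": [
            {"items": "scans/study-001/ct.nii.gz"},
        ],
    }
]
```

Per the notes above, when doing direct upload with redbrick.StorageMethod.REDBRICK the items path must point to a valid local file.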

abstract update_datapoints_metadata(storage_id, points)[source]

Update datapoints metadata.

Update metadata for datapoints in workspace.

>>> workspace = redbrick.get_workspace(org_id, workspace_id, api_key, url)
>>> points = [{"dpId": "...", "metaData": {"property": "value"}}]
>>> workspace.update_datapoints_metadata(storage_id, points)

Parameters:

  • storage_id (str) – Storage method where the datapoints are stored.

  • points (List[InputTask]) – List of datapoints with dpId and metaData values.

Return type:

None

abstract delete_datapoints(dp_ids, concurrency=50)[source]

Delete workspace datapoints based on ids.

>>> workspace = redbrick.get_workspace(org_id, workspace_id, api_key, url)
>>> workspace.delete_datapoints([...])

Parameters:

  • dp_ids (List[str]) – List of datapoint ids to delete.

  • concurrency (int = 50) – The number of datapoints to delete at a time. We recommend keeping this <= 50.

Returns:

True if successful, else False.

Return type:

bool

Project

class redbrick.RBProject[source]

Bases: ABC

Abstract interface to RBProject.

>>> project = redbrick.get_project(org_id="", project_id="", api_key="")

abstract property org_id: str

Read only property.

Retrieves the unique Organization UUID that this project belongs to.

abstract property project_id: str

Read only property.

Retrieves the unique Project ID UUID.

abstract property name: str

Read only name property.

Retrieves the project name.

abstract property url: str

Read only property.

Retrieves the project URL.

abstract property taxonomy_name: str

Read only taxonomy_name property.

Retrieves the taxonomy name.

abstract property taxonomy: Taxonomy

Retrieves the project taxonomy.

abstract property workspace_id: str | None

Read only workspace_id property.

Retrieves the workspace id.

abstract property label_storage: Tuple[str, str]

Read only label_storage property.

Retrieves the label storage id and path.

abstract property stages: List[Stage]

Get list of stages.

abstract set_label_storage(storage_id, path)[source]

Set label storage method for a project.

By default, all annotations are stored in RedBrick AI’s storage, i.e. redbrick.StorageMethod.REDBRICK. Set a custom external storage, within which RedBrick AI will write all annotations.

>>> project = redbrick.get_project(org_id, project_id, api_key)
>>> project.set_label_storage(storage_id)

Parameters:

  • storage_id (str) – The unique ID of your RedBrick AI storage method integration. Found on the storage method tab on the left sidebar.

  • path (str) – A prefix path within which the annotations will be written.

Returns:

Returns [storage_id, path]

Return type:

Tuple[str, str]

Important

You only need to run this command once per project.

Raises:

ValueError: – If there are validation errors.

abstract update_stage(stage)[source]

Update stage.

Return type:

None

Export

class redbrick.common.export.Export[source]

Bases: ABC

Primary interface for various export methods.

The export module has many functions for exporting annotations and meta-data from projects. The export module is available from the redbrick.RBProject module.

>>> project = redbrick.get_project(api_key="", org_id="", project_id="")
>>> project.export  # Export

abstract export_tasks(*, concurrency=10, only_ground_truth=False, stage_name=None, task_id=None, from_timestamp=None, old_format=False, without_masks=False, without_json=False, semantic_mask=False, binary_mask=None, no_consensus=None, with_files=False, dicom_to_nifti=False, png=False, rt_struct=False, mhd=False, destination=None)[source]

Export annotation data.

Meta-data and category information are returned as an object. Segmentations are written to your disk in NIfTI-1 format. Please visit our documentation for more information on the format.

>>> project = redbrick.get_project(org_id, project_id, api_key, url)
>>> project.export.export_tasks()

Parameters:

  • concurrency (int = 10) –

  • only_ground_truth (bool = False) – If set to True, will only return data that has been completed in your workflow. If False, will export latest state.

  • stage_name (Optional[str] = None) – If set, will only export tasks that are currently in the given stage.

  • task_id (Optional[str] = None) – If the unique task_id is mentioned, only a single datapoint will be exported.

  • from_timestamp (Optional[float] = None) – If the timestamp is mentioned, will only export tasks that were labeled/updated since the given timestamp. Format - output from datetime.timestamp()

  • old_format (bool = False) – Whether to export tasks in old format.

  • without_masks (bool = False) – Exports only tasks JSON without downloading any segmentation masks. Note: This is not recommended for tasks with overlapping labels.

  • without_json (bool = False) – Doesn’t create the tasks JSON file.

  • semantic_mask (bool = False) – Whether to export all segmentations as semantic_mask. This will create one instance per class. If this is set to True and a task has multiple instances per class, then attributes belonging to each instance will not be exported.

  • binary_mask (Optional[bool] = None) – Whether to export all segmentations as binary masks. This will create one segmentation file per instance. If this is set to None and a task has overlapping labels, then binary_mask option will be True for that particular task.

  • no_consensus (Optional[bool] = None) – Whether to export tasks without consensus info. If None, will default to export with consensus info, if it is enabled for the given project. (Applicable only for new format export)

  • with_files (bool = False) – Export with files (e.g. images/video frames)

  • dicom_to_nifti (bool = False) – Convert DICOM images to NIfTI. Applicable when with_files is True.

  • png (bool = False) – Export labels as PNG masks.

  • rt_struct (bool = False) – Export labels as DICOM RT-Struct. (Only for DICOM images)

  • mhd (bool = False) – Export segmentation masks in MHD format.

  • destination (Optional[str] = None) – Destination directory (Default: current directory)

Returns:

Datapoint and labels in RedBrick AI format. See https://sdk.redbrickai.com/formats/index.html#export

Return type:

Iterator[OutputTask]

Note

If both semantic_mask and binary_mask options are True, then one binary mask will be generated per class.
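The from_timestamp parameter expects the float produced by datetime.timestamp(). A minimal sketch of building it for an incremental export; the export call itself is shown commented out, since it requires valid project credentials:

```python
from datetime import datetime, timedelta

# Export only tasks labeled/updated in the last 7 days.
since = (datetime.now() - timedelta(days=7)).timestamp()

# for task in project.export.export_tasks(
#     only_ground_truth=True, from_timestamp=since
# ):
#     ...  # process each OutputTask
```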

abstract list_tasks(search=TaskFilters.ALL, concurrency=10, limit=50, *, stage_name=None, user_id=None, task_id=None, task_name=None, exact_match=False, completed_at=None)[source]

Search tasks based on multiple queries for a project. This function returns minimal meta-data about the queried tasks.

>>> project = redbrick.get_project(org_id, project_id, api_key, url)
>>> result = project.export.list_tasks()

Parameters:

  • search (TaskFilters = TaskFilters.ALL) – Task filter type.

  • concurrency (int = 10) – The number of requests that will be made in parallel.

  • limit (Optional[int] = 50) – The number of tasks to return. Use None to return all tasks matching the search query.

  • stage_name (Optional[str] = None) – If present, will return tasks that are:

    1. Available in stage_name: If search == TaskFilters.QUEUED

    2. Completed in stage_name: If search == TaskFilters.COMPLETED

  • user_id (Optional[str] = None) – User id/email. If present, will return tasks that are:

    1. Assigned to user_id: If search == TaskFilters.QUEUED

    2. Completed by user_id: If search == TaskFilters.COMPLETED

  • task_id (Optional[str] = None) – If present, will return data for the given task id.

  • task_name (Optional[str] = None) – If present, will return data for the given task name. This will do a prefix search with the given task name.

  • exact_match (bool = False) – Applicable when searching for tasks by task_name. If True, will do a full match instead of partial match.

  • completed_at (Optional[Tuple[Optional[float], Optional[float]]] = None) – If present, will return tasks that were completed in the given time range. The tuple contains the from and to timestamps respectively.

Returns:

>>> [{
        "taskId": str,
        "name": str,
        "createdAt": str,
        "updatedAt": str,
        "currentStageName": str,
        "createdBy"?: {"userId": str, "email": str},
        "priority"?: float([0, 1]),
        "metaData"?: dict,
        "series"?: [{"name"?: str, "metaData"?: dict}],
        "assignees"?: [{
            "user": str,
            "status": TaskStates,
            "assignedAt": datetime,
            "lastSavedAt"?: datetime,
            "completedAt"?: datetime,
            "timeSpentMs"?: float,
        }]
    }]

Return type:

Iterator[Dict]
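The completed_at parameter takes a (from, to) tuple in the same datetime.timestamp() format. A sketch of querying tasks completed in a fixed window (the list_tasks call requires credentials, so it is commented out; the dates are arbitrary):

```python
from datetime import datetime

# Tasks completed during March 2024 (both bounds are float timestamps).
window = (
    datetime(2024, 3, 1).timestamp(),
    datetime(2024, 4, 1).timestamp(),
)

# for task in project.export.list_tasks(
#     search=TaskFilters.COMPLETED, limit=None, completed_at=window
# ):
#     print(task["taskId"], task["name"])
```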

abstract get_task_events(*, task_id=None, only_ground_truth=True, concurrency=10, from_timestamp=None, with_labels=False)[source]

Generate an audit log of all actions performed on tasks.

Use this method to get a detailed summary of all the actions performed on your tasks, including:

  • Who uploaded the data

  • Who annotated your tasks

  • Who reviewed your tasks

  • and more.

This can be particularly useful to present to auditors who are interested in your quality control workflows.

Parameters:

  • task_id (Optional[str] = None) – If set, returns events only for the given task.

  • only_ground_truth (bool = True) – If set to True, will return events for tasks that have been completed in your workflow.

  • concurrency (int = 10) – The number of requests that will be made in parallel.

  • from_timestamp (Optional[float] = None) – If the timestamp is mentioned, will only export tasks that were labeled/updated since the given timestamp. Format - output from datetime.timestamp()

  • with_labels (bool = False) – Get metadata of labels submitted in each stage.

Returns:

>>> [{
        "taskId": string,
        "currentStageName": string,
        "events": List[Dict]
    }]

Return type:

Iterator[Dict]

abstract get_active_time(*, stage_name, task_id=None, concurrency=100)[source]

Get active time spent on tasks for labeling/reviewing.

Parameters:

  • stage_name (str) – Stage for which to return the time info.

  • task_id (Optional[str] = None) – If set, will return info for the given task in the given stage.

  • concurrency (int = 100) – Request batch size.

Returns:

>>> [{
        "orgId": string,
        "projectId": string,
        "stageName": string,
        "taskId": string,
        "completedBy": string,
        "timeSpent": number,  # In milliseconds
        "completedAt": datetime,
        "cycle": number  # Task cycle
    }]

Return type:

Iterator[Dict]
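Since timeSpent is reported in milliseconds per completed cycle, totals usually need to be aggregated per user. A sketch over records shaped like the return value above; the sample records are fabricated for illustration, standing in for project.export.get_active_time(stage_name="Label"):

```python
from collections import defaultdict

# Stand-in for the iterator returned by get_active_time().
records = [
    {"completedBy": "ann@example.com", "timeSpent": 90_000},
    {"completedBy": "ann@example.com", "timeSpent": 30_000},
    {"completedBy": "bob@example.com", "timeSpent": 60_000},
]

# Total active minutes per annotator/reviewer.
minutes_by_user = defaultdict(float)
for rec in records:
    minutes_by_user[rec["completedBy"]] += rec["timeSpent"] / 60_000  # ms -> minutes
```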

Upload

class redbrick.common.upload.Upload[source]

Bases: ABC

Primary interface for uploading to a project.

>>> project = redbrick.get_project(api_key="", org_id="", project_id="")
>>> project.upload

abstract create_datapoints(storage_id, points, *, is_ground_truth=False, segmentation_mapping=None, rt_struct=False, mhd=False, label_storage_id=None, label_validate=False, prune_segmentations=False, concurrency=50)[source]

Create datapoints in project.

Upload data, and optionally annotations, to your project. Please visit our documentation to understand the format for points.

project = redbrick.get_project(org_id, project_id, api_key, url)
points = [
    {
        "name": "...",
        "series": [
            {
                "items": "...",
                # These fields are needed for importing segmentations.
                "segmentations": "...",
                "segmentMap": {...}
            }
        ]
    }
]
project.upload.create_datapoints(storage_id, points)

Parameters:

  • storage_id (str) – Your RedBrick AI external storage_id. This can be found under the Storage Tab on the RedBrick AI platform. To directly upload images to rbai, use redbrick.StorageMethod.REDBRICK.

  • points (List[InputTask]) – Please see the RedBrick AI reference documentation for an overview of the format: https://sdk.redbrickai.com/formats/index.html#import. All the fields with annotation information are optional.

  • is_ground_truth (bool = False) – If labels are provided in points, and this parameter is set to True, the labels will be added to the Ground Truth stage.

  • segmentation_mapping (Optional[Dict] = None) – Optional mapping of semantic_mask segmentation class ids and RedBrick categories.

  • rt_struct (bool = False) – Upload segmentations from DICOM RT-Struct files.

  • mhd (bool = False) – Upload segmentations from MHD files.

  • label_storage_id (Optional[str] = None) – Optional label storage id to reference nifti segmentations. Defaults to items storage_id if not specified.

  • label_validate (bool = False) – Validate label nifti instances and segment map.

  • prune_segmentations (bool = False) – Prune segmentations that are not part of the series.

  • concurrency (int = 50) –

Returns:

List of task objects with key response if successful, else error

Return type:

List[Dict]

Note

1. If doing direct upload, please use redbrick.StorageMethod.REDBRICK as the storage id. Your items path must be a valid path to a locally stored image.

2. When doing direct upload i.e. redbrick.StorageMethod.REDBRICK, if you didn’t specify a “name” field in your datapoints object, we will assign the “items” path to it.
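For direct upload, the points payload can be built from local file paths; per the note above, a missing "name" field defaults to the "items" path. A sketch with hypothetical local files, setting the name explicitly (the create_datapoints call needs credentials, so it is commented out):

```python
import os

image_paths = ["scans/case01.nii.gz", "scans/case02.nii.gz"]  # hypothetical local files

points = [
    {
        # Explicit name; if omitted, the "items" path would be used as the name.
        "name": os.path.basename(path),
        "series": [{"items": path}],
    }
    for path in image_paths
]
# project.upload.create_datapoints(redbrick.StorageMethod.REDBRICK, points)
```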

abstract delete_tasks(task_ids, concurrency=50)[source]

Delete project tasks based on task ids.

>>> project = redbrick.get_project(org_id, project_id, api_key, url)
>>> project.upload.delete_tasks([...])

Parameters:

  • task_ids (List[str]) – List of task ids to delete.

  • concurrency (int = 50) – The number of tasks to delete at a time. We recommend keeping this <= 50.

Returns:

True if successful, else False.

Return type:

bool

abstract delete_tasks_by_name(task_names, concurrency=50)[source]

Delete project tasks based on task names.

>>> project = redbrick.get_project(org_id, project_id, api_key, url)
>>> project.upload.delete_tasks_by_name([...])

Parameters:

  • task_names (List[str]) – List of task names to delete.

  • concurrency (int = 50) – The number of tasks to delete at a time. We recommend keeping this <= 50.

Returns:

True if successful, else False.

Return type:

bool

abstract update_task_items(storage_id, points, concurrency=50)[source]

Update task items, meta data, heat maps, transforms, etc. for the mentioned task ids.

project = redbrick.get_project(org_id, project_id, api_key, url)
points = [
    {
        "taskId": "...",
        "series": [
            {
                "items": "...",
            }
        ]
    }
]
project.upload.update_task_items(storage_id, points)

Parameters:

  • storage_id (str) – Your RedBrick AI external storage_id. This can be found under the Storage Tab on the RedBrick AI platform. To directly upload images to rbai, use redbrick.StorageMethod.REDBRICK.

  • points (List[InputTask]) – List of objects with taskId and series, where series contains a list of items paths to be updated for the task.

  • concurrency (int = 50) –

Returns:

List of task objects with key response if successful, else error

Return type:

List[Dict]

Note

1. If doing direct upload, please use redbrick.StorageMethod.REDBRICK as the storage id. Your items path must be a valid path to a locally stored image.

abstract import_tasks_from_workspace(source_project_id, task_ids, with_labels=False)[source]

Import tasks from another project in the same workspace.

project = redbrick.get_project(org_id, project_id, api_key, url)
project.upload.import_tasks_from_workspace(source_project_id, task_ids)

Parameters:

  • source_project_id (str) – The source project id from which tasks are to be imported.

  • task_ids (List[str]) – List of task ids to be imported.

  • with_labels (bool = False) – If True, the labels will also be imported.

Return type:

None

abstract update_tasks_priority(tasks, concurrency=50)[source]

Update tasks’ priorities. Used to determine how the tasks get assigned to annotators/reviewers in auto-assignment.

Parameters:

  • tasks (List[Dict]) – List of taskIds and their priorities. - [{“taskId”: str, “priority”: float([0, 1]), “user”?: str}]

  • concurrency (int = 50) – The number of tasks to update at a time. We recommend keeping this <= 50.

Return type:

None
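A sketch that builds the priority payload from an ordered list of hypothetical task ids, assuming higher values mean higher priority in auto-assignment:

```python
task_ids = ["task-a", "task-b", "task-c"]  # hypothetical ids, most urgent first

# Spread priorities evenly across [0, 1], highest first.
tasks = [
    {"taskId": tid, "priority": round(1.0 - i / len(task_ids), 2)}
    for i, tid in enumerate(task_ids)
]
# project.upload.update_tasks_priority(tasks)
```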

abstract update_tasks_labels(tasks, *, rt_struct=False, mhd=False, label_storage_id='22222222-2222-2222-2222-222222222222', label_validate=False, prune_segmentations=False, concurrency=50, finalize=False, time_spent_ms=None, extra_data=None)[source]

Update tasks labels at any point in project pipeline.

project = redbrick.get_project(...)
tasks = [
    {
        "taskId": "...",
        "series": [{...}]
    },
]

# Overwrite labels in tasks
project.upload.update_tasks_labels(tasks)

Parameters:

  • tasks (List[OutputTask]) – Please see the RedBrick AI reference documentation for an overview of the format: https://sdk.redbrickai.com/formats/index.html#export. All the fields with annotation information are optional.

  • rt_struct (bool = False) – Upload segmentations from DICOM RT-Struct files.

  • mhd (bool = False) – Upload segmentations from MHD files.

  • label_storage_id (Optional[str] = None) – Optional label storage id to reference nifti segmentations. Defaults to project annotation storage_id if not specified.

  • label_validate (bool = False) – Validate label nifti instances and segment map.

  • prune_segmentations (bool = False) – Prune segmentations that are not part of the series.

  • concurrency (int = 50) –

  • finalize (bool = False) – Submit the task in current stage.

  • time_spent_ms (Optional[int] = None) – Time spent on the task in milliseconds.

  • extra_data (Optional[Dict] = None) – Extra data to be stored along with the task.

Return type:

None

abstract send_tasks_to_stage(task_ids, stage_name, concurrency=50)[source]

Send tasks to different stage.

Parameters:

  • task_ids (List[str]) – List of tasks to move.

  • stage_name (str) – The stage to which you want to move the tasks. Use “END” to move tasks to ground truth.

  • concurrency (int = 50) – Batch size per request.

Return type:

None
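A common pattern is to collect task ids (e.g. from list_tasks) and move them straight to ground truth with the special "END" stage name. A sketch with stand-in data; the send_tasks_to_stage call requires credentials, so it is commented out:

```python
# Stand-in for output of project.export.list_tasks(...)
queued = [{"taskId": "t-1"}, {"taskId": "t-2"}]

task_ids = [task["taskId"] for task in queued]
# project.upload.send_tasks_to_stage(task_ids, "END")
```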

Labeling

class redbrick.common.labeling.Labeling[source]

Bases: ABC

Perform programmatic labeling and review tasks.

The Labeling class allows you to programmatically submit tasks. This can be useful when you want to perform bulk actions, e.g. accepting several tasks, or automated actions, like using automated methods for review.

Information

The Labeling module provides several methods to query tasks and assign tasks to different users. Refer to this section for guidance on when to use each method:

  • assign_tasks. Use this method when you already have the task_ids you want to assign to a particular user. If you don’t have the task_ids, you can query the tasks using list_tasks.

abstract put_tasks(stage_name, tasks, *, finalize=True, existing_labels=False, rt_struct=False, mhd=False, review_result=None, label_storage_id='22222222-2222-2222-2222-222222222222', label_validate=False, prune_segmentations=False, concurrency=50)[source]

Put tasks with new labels or a review result.

Use this method to programmatically submit tasks with labels in a Label stage, or to programmatically accept/reject/correct tasks in a Review stage. If you don’t already have a list of task_ids, you can use list_tasks to get a filtered list of the tasks in your project that you want to work on.

Label

project = redbrick.get_project(...)
tasks = [
    {
        "taskId": "...",
        "series": [{...}]
    },
]

# Submit tasks with new labels
project.labeling.put_tasks("Label", tasks)

# Save tasks with new labels, without submitting
project.labeling.put_tasks("Label", tasks, finalize=False)

# Submit tasks with existing labels
project.labeling.put_tasks("Label", [{"taskId": "..."}], existing_labels=True)

Review
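For Review stages, pass review_result instead of new labels. A hedged sketch; the stage name "Review_1" is hypothetical, and the put_tasks calls require credentials, so they are commented out:

```python
# Accept tasks in a review stage
accepted = [{"taskId": "..."}]
# project.labeling.put_tasks("Review_1", accepted, review_result=True)

# Reject tasks in a review stage
rejected = [{"taskId": "..."}]
# project.labeling.put_tasks("Review_1", rejected, review_result=False)
```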

Parameters:

  • stage_name (str) – The stage to which you want to submit the tasks. This must be the same stage on which you called get_tasks.

  • tasks (List[OutputTask]) – Tasks with new labels or review result.

  • finalize (bool = True) – Finalize the task. If you want to save the task without submitting, set this to False.

  • existing_labels (bool = False) – If True, the tasks will be submitted with their existing labels. Applies only to Label stage.

  • rt_struct (bool = False) – Upload segmentations from DICOM RT-Struct files.

  • mhd (bool = False) – Upload segmentations from MHD files.

  • review_result (Optional[bool] = None) – Accepts or rejects the task based on the boolean value. Applies only to Review stage.

  • label_storage_id (Optional[str] = None) – Optional label storage id to reference external nifti segmentations. Defaults to project settings’ annotation storage_id if not specified.

  • label_validate (bool = False) – Validate label nifti instances and segment map.

  • prune_segmentations (bool = False) – Prune segmentations that are not part of the series.

  • concurrency (int = 50) –

Returns:

A list of tasks that failed.

Return type:

List[OutputTask]

abstract assign_tasks(task_ids, *, email=None, emails=None, refresh=True)[source]

Assign tasks to specified email or current API key.

Unassigns all users from the tasks if neither email nor current_user is set.

>>> project = redbrick.get_project(org_id, project_id, api_key)
>>> project.labeling.assign_tasks([task_id], email=email)

Parameters:

  • task_ids (List[str]) – List of unique task_id of the tasks you want to assign.

  • email (Optional[str] = None) – The email of the user you want to assign this task to. Make sure the user has adequate permissions to be assigned this task in the project.

  • emails (Optional[List[str]] = None) – Used for projects with Consensus activated. The emails of the users you want to assign this task to. Make sure the users have adequate permissions to be assigned this task in the project.

  • refresh (bool = True) – Used for projects with Consensus activated. If True, will overwrite the assignment to the current users.

Returns:

List of affected tasks.

>>> [{"taskId", "name", "stageName"}]

Return type:

List[Dict]

abstract move_tasks_to_start(task_ids)[source]

Move ground truth tasks back to the start.

Return type:

None

Settings

class redbrick.common.settings.Settings[source]

Bases: ABC

Abstract interface to Settings module.

abstract property label_validation: LabelValidation

Label Validation.

Use custom label validation to prevent annotation errors in real-time. Please visit label validation for more info.

Format: {“enabled”: bool, “enforce”: bool, “script”: str}

Get

project = redbrick.get_project(org_id, project_id, api_key, url)
label_validation = project.settings.label_validation

Set
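Assuming the property supports assignment (the Set heading suggests a symmetric setter), a sketch that builds the settings value in the documented format; the script body is a placeholder and the assignment is commented out:

```python
label_validation = {
    "enabled": True,
    "enforce": False,
    "script": "function validateLabels(labels) { return true; }",  # placeholder script
}
# project.settings.label_validation = label_validation
```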

abstract property hanging_protocol: HangingProtocol

Hanging Protocol.

Use hanging protocol to define the visual layout of the tool. Please visit hanging protocol for more info.

Format: {“enabled”: bool, “script”: str}

Get

project = redbrick.get_project(org_id, project_id, api_key, url)
hanging_protocol = project.settings.hanging_protocol

Set

abstract property webhook: Webhook

Project webhook.

Use webhooks to receive custom events like tasks entering stages, and many more.

Format: {“enabled”: bool, “url”: str}

Get

project = redbrick.get_project(org_id, project_id, api_key, url)
webhook = project.settings.webhook

Set
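Assuming a symmetric setter (per the Set heading), a sketch in the documented format; the endpoint URL is hypothetical and the assignment is commented out:

```python
webhook = {"enabled": True, "url": "https://example.com/redbrick-webhook"}
# project.settings.webhook = webhook
```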

abstract toggle_reference_standard_task(task_id, enable)[source]

Toggle reference standard task.

Return type:

None

abstract property task_duplication: int | None

Sibling task count.

Use task duplication to create multiple tasks for a single uploaded datapoint. Please visit task duplication for more info.

Format: Optional[int]

Get

project = redbrick.get_project(org_id, project_id, api_key, url)
count = project.settings.task_duplication

Set
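Symmetrically with the getter, the setter presumably takes Optional[int]; a sketch with an arbitrary sibling count, assignments commented out:

```python
sibling_count = 3  # create 3 sibling tasks per uploaded datapoint
# project.settings.task_duplication = sibling_count

# Disable duplication:
# project.settings.task_duplication = None
```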

Workforce

class redbrick.common.member.Workforce[source]

Bases: ABC

Abstract interface to Workforce module.

abstract get_member(member_id)[source]

Get a project member.

project = redbrick.get_project(org_id, project_id, api_key)
member = project.workforce.get_member(member_id)

Parameters:

member_id (str) – Unique member userId or email.

Return type:

ProjectMember

abstract list_members()[source]

Get a list of all project members.

project = redbrick.get_project(org_id, project_id, api_key)
members = project.workforce.list_members()

Return type:

List[ProjectMember]

abstract add_members(members)[source]

Add project members.

project = redbrick.get_project(org_id, project_id, api_key)
members = project.workforce.add_members([{"member_id": "...", "role": "...", "stages": ["..."]}, ...])

Parameters:

members (List[ProjectMember]) – List of members to add.

Returns:

List of added project members.

Return type:

List[ProjectMember]

abstract update_members(members)[source]

Update project members.

project = redbrick.get_project(org_id, project_id, api_key)
members = project.workforce.update_members([{"member_id": "...", "role": "...", "stages": ["..."]}, ...])

Parameters:

members (List[ProjectMember]) – List of members to update.

Returns:

List of updated project members.

Return type:

List[ProjectMember]

abstract remove_members(member_ids)[source]

Remove project members.

project = redbrick.get_project(org_id, project_id, api_key)
project.workforce.remove_members([...])

Parameters:

member_ids (List[str]) – List of member ids (user_id/email) to remove from the project.

Return type:

None

Copyright © 2023, RedBrick AI