tasks.json
tasks.json is a file generated upon export that contains a record of the annotation work completed within a Project. Upon export, the tasks.json file will contain a single entry for each Task.
Task
The Task object represents a single task on RedBrick AI. It contains task-level metadata and information about all the series within the task. A task can contain a single series or multiple series (e.g. a full MRI study).
name: string
name is meant to be a human-readable string that can help identify tasks, e.g. you can set the name of a task to patient/study01.
taskId?: string
currentStageName?: string
createdBy?: string
createdAt?: string
updatedBy?: string
updatedAt?: string
preAssign?: { [ stageName : string ] : string }
{"Label": "name1@redbrickai.com", "Review": "name2@redbrickai.com"}
.
classification: { attributes : [ string : boolean ] }
metaData?: { [ key: string ]: string }
A list of key value pairs that can be affixed to a Task. This information is visible in the Annotation Tool.
priority?: number
Assign a priority value to a specific Task, which will influence the order in which the Task displays on the Data Page. The Automatic Assignment protocol will also auto-assign Tasks with a priority value to a user’s Labeling / Review Queue before moving on to Tasks without priority values.
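For orientation, here is a minimal sketch of a single Task entry combining the fields above. All values (names, emails, timestamps, IDs) are hypothetical, the series contents are elided, and your actual export may include or omit fields depending on project configuration.

```typescript
// Hypothetical Task entry; values are illustrative, not from a real export.
const exampleTask = {
  name: "patient/study01",
  taskId: "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",  // placeholder ID
  currentStageName: "Review",
  createdBy: "name1@redbrickai.com",
  createdAt: "2024-01-01T00:00:00Z",
  updatedBy: "name2@redbrickai.com",
  updatedAt: "2024-01-02T00:00:00Z",
  priority: 10,
  preAssign: { Label: "name1@redbrickai.com", Review: "name2@redbrickai.com" },
  metaData: { site: "Hospital A" },
  series: [],  // Series objects, described below
};
```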
Series
The Series object has metadata and annotations for a single Series within a Task. A Series can represent anything from a single MRI/CT series to a video to a single 2D image.
If a Series contains annotations, you can expect one or more of the label entries to be present (e.g. segmentations, polygons, etc.).
items: string | string[]
name: string
classifications: { attributes: [ string : boolean ] }
instanceClassifications: fileIndex | fileName | [ values: { string : boolean } ]
The instanceClassifications object defines a series of boolean values that can be assigned to individual instances (e.g. frames in a video).
metaData?: { [ key: string ]: string }
A list of key value pairs that can be affixed to a Series. This information is visible in the Annotation Tool.
binaryMask?: boolean
Reflects the user’s choice of optionally exporting annotations as a binary mask.
semanticMask?: boolean
Reflects the user’s choice of optionally using semantic export.
pngMask?: boolean
Reflects the user’s choice of optionally exporting annotations as a PNG mask.
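As a rough sketch of how the Series-level fields above might appear (values are hypothetical, the attribute shape is illustrative, and label entries such as segmentations or polygons are omitted):

```typescript
// Hypothetical Series-level fields; label entries are omitted for brevity.
const exampleSeries = {
  name: "T1-axial",  // illustrative series name
  items: ["study01/series1/img001.dcm", "study01/series1/img002.dcm"],
  classifications: { attributes: { motionArtifact: false } },  // attribute names are illustrative
  metaData: { scanner: "1.5T" },
  binaryMask: false,
  semanticMask: false,
  pngMask: false,
};
```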
category: string | string[]
The Taxonomy category assigned to the label. For categories nested within a hierarchical Taxonomy, category will be string[].
attributes: { [ attributeName: string ]: string | boolean }
attributeName is defined when creating your Taxonomy.
voxelPoint: { i: number, j: number, k: number }
VoxelPoint represents a three-dimensional point in image space, where i and j are columns and rows, and k is the slice number.
worldPoint: { x: number, y: number, z: number }
WorldPoint represents a three-dimensional point in physical space/world coordinates. The world coordinates are calculated using VoxelPoint and the Image Plane Module.
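As background, a WorldPoint can be derived from a VoxelPoint with the standard DICOM Image Plane Module equation (Image Position Patient, Image Orientation Patient, Pixel Spacing). The sketch below is illustrative only, not RedBrick AI code; the function name and parameter names are assumptions.

```typescript
type Vec3 = [number, number, number];

// Illustrative mapping of a VoxelPoint (i = column, j = row, on a given slice)
// to a WorldPoint, using that slice's DICOM Image Plane Module attributes.
function voxelToWorld(
  i: number,                      // VoxelPoint column index
  j: number,                      // VoxelPoint row index
  imagePositionPatient: Vec3,     // origin of the slice in patient space (mm)
  rowCosines: Vec3,               // first three values of ImageOrientationPatient
  colCosines: Vec3,               // last three values of ImageOrientationPatient
  pixelSpacing: [number, number]  // [row spacing, column spacing] in mm
): Vec3 {
  const [rowSpacing, colSpacing] = pixelSpacing;
  const world = (axis: number) =>
    imagePositionPatient[axis] +
    rowCosines[axis] * colSpacing * i +  // stepping along a row changes the column index i
    colCosines[axis] * rowSpacing * j;   // stepping down a column changes the row index j
  return [world(0), world(1), world(2)];
}
```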
point2D: { xnorm: number, ynorm: number }
Point2D represents a two-dimensional point, used to define annotation types on 2D data. xnorm is normalized by image width, and ynorm is normalized by image height.
fileIndex: number
fileIndex is an integer that corresponds to a specific frame in a video series.
fileName: string
fileName represents the name given to an image or a specific frame in a video series.
group?: string
measurementStats: Dict
A dictionary (measurementStats) containing geometric information about certain Object Labels.
average: number
The average pixel intensity value inside of a structure.
area?: number
The area of a 2D Object Label (e.g. Bounding Box, Ellipse), measured in square millimeters.
volume?: number
The volume of a 3D structure (e.g. Cuboid), measured in cubic millimeters.
minimum: number
The lowest pixel intensity value present in the structure.
maximum: number
The highest pixel intensity value present in the structure.
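A hedged sketch of a measurementStats dictionary for a 2D Object Label; all numbers are made up.

```typescript
// Hypothetical values for illustration only.
const exampleMeasurementStats = {
  average: 128.4,  // mean pixel intensity inside the structure
  area: 245.7,     // square millimeters (2D labels)
  minimum: 12,
  maximum: 391,
};
```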
videoMetaData: Dict
A dictionary containing video-specific information for the label: frameIndex, trackId, keyFrame, and endFrame.
frameIndex: number (video)
trackId: string (video)
keyFrame: boolean (video)
endFrame: boolean (video)
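For a video label, the videoMetaData fields above might look like the following sketch (values hypothetical):

```typescript
// Hypothetical videoMetaData; field meanings follow the signatures listed above.
const exampleVideoMetaData = {
  frameIndex: 42,
  trackId: "a1b2c3d4",  // placeholder identifier
  keyFrame: true,
  endFrame: false,
};
```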
segmentMap
segmentations?: string | string[]
The segmentation file(s) for this Series: either a single .nii file, or multiple .nii files containing different instances.
segmentMap?: { [ instanceId: number ]: { category: string | string[]; attributes?: Attributes; overlappingGroups?: number[]; group?: string; } };
A mapping between a segmentation’s instance ID, your Taxonomy category name, and any accompanying attributes. The mapping will apply only to the current series, and instance IDs must be unique across all series in a task (this is useful for instance segmentation).
Please note that the segmentMap’s instanceId is generated incrementally based on the order in which annotations were created by the labeler. You can find an example JSON output below.
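A minimal sketch of a segmentMap; the category names, attributes, and group are hypothetical.

```typescript
// Instance IDs 1 and 2 reflect the order in which the labeler created the annotations.
const exampleSegmentMap = {
  1: { category: "Lesion", attributes: { suspicious: true } },
  2: { category: ["Anatomy", "Liver"], group: "abdomen" },  // nested Taxonomy category
};
```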
mask?: string
pointTopLeft: Point2D
wNorm, hNorm: number
points: Point2D[]
point1, point2: VoxelPoint
absolutePoint1, absolutePoint2: WorldPoint
Similar to point1 and point2, but these are points in physical space.
normal: [number, number, number]
normal defines the normal unit vector to the slice on which this annotation was made. For annotations made on non-oblique planes, the normal will be [0,0,1].
length: number
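Putting the length-measurement fields together, one such label might look like the following sketch; the coordinates and length value are hypothetical, and the millimeter unit is an assumption.

```typescript
// Hypothetical length measurement; coordinates and units are illustrative only.
const exampleLengthLabel = {
  point1: { i: 120, j: 85, k: 10 },
  point2: { i: 160, j: 85, k: 10 },
  absolutePoint1: { x: -12.5, y: 40.2, z: 88.0 },
  absolutePoint2: { x: 12.5, y: 40.2, z: 88.0 },
  normal: [0, 0, 1],
  length: 25.0,  // assumed to be in millimeters
};
```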
point1, point2, vertex: VoxelPoint
absolutePoint1, absolutePoint2: WorldPoint
Similar to point1, point2, and vertex, these values are coordinates in the DICOM world coordinate system, i.e. physical space.
normal: [number, number, number]
normal defines the normal unit vector to the slice on which this annotation was made. For annotations made on non-oblique planes, the normal will be [0,0,1].
angle: number
pointCenter: point2D
Information regarding the exact center of the Ellipse Object Label.
xRadiusNorm: number
A numeric value equivalent to half the length of the Ellipse Object Label’s major axis.
yRadiusNorm: number
A numeric value equivalent to half the length of the Ellipse Object Label’s minor axis.
rotationRad: number
The rotation angle of the Ellipse Object Label, expressed in radians.
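A sketch of the Ellipse fields above with hypothetical values:

```typescript
// Hypothetical Ellipse geometry; values are illustrative only.
const exampleEllipse = {
  pointCenter: { xnorm: 0.5, ynorm: 0.5 },
  xRadiusNorm: 0.12,
  yRadiusNorm: 0.08,
  rotationRad: 0.3,
};
```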
point: point2D
The point in physical space on a 2D image where the Landmark is located.
point: voxelPoint
The point in physical space on a 3D volume where the Landmark is located.
point1, point2: voxelPoint
Information about the initial point of the Cuboid (point1) and the final point (point2, the opposite diagonal corner).
absolutePoint1, absolutePoint2: worldPoint
The position of VoxelPoints point1 and point2 in physical space (world coordinates), computed using the Image Plane Module.
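A sketch of the Cuboid point fields with hypothetical coordinates:

```typescript
// Hypothetical Cuboid corners; world coordinates would come from the Image Plane Module.
const exampleCuboid = {
  point1: { i: 40, j: 60, k: 5 },                  // initial corner in image space
  point2: { i: 90, j: 110, k: 25 },                // opposite diagonal corner
  absolutePoint1: { x: -30.1, y: 12.4, z: 40.0 },
  absolutePoint2: { x: 20.5, y: 62.9, z: 100.0 },
};
```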
ConsensusTask Object
If Consensus is enabled on your Project, your tasks.json file will contain a ConsensusTask entry for each user who annotated a given Task; for example, if 3 users annotated the task, the length of the consensusTasks array will be 3.
updatedBy: string
updatedAt: string
scores: {secondaryUserEmail: string, score: number}[]
Each scores entry compares the current user's annotations with those of every other user. The scores array will be of length n-1, where n is the number of users who annotated this task. score is the similarity score between the current user and secondaryUserEmail; see the sketch after this field list.
series: Series[]
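To illustrate the scores structure, suppose 3 users annotated a task; each user's entry then carries 2 scores, one per other annotator. The emails and score values below are hypothetical.

```typescript
// Hypothetical scores array for one annotator when 3 users labeled the task.
const exampleScores = [
  { secondaryUserEmail: "name2@redbrickai.com", score: 0.87 },
  { secondaryUserEmail: "name3@redbrickai.com", score: 0.91 },
];
```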