Video labeling on the RedBrick AI platform is done by parsing a video into individual frames and generating labels on those frames. When you create a video dataset and import data into the platform, you will need to create an items list where each entry has a name key. Each entry in the items list will be created into an independent video labeling task, and the frames will be ordered in the same order they appear in the items list.
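For illustration, an items list entry might be constructed like the sketch below. The "name" key comes from the description above; the "items" key and the frame paths are assumptions for illustration and should be checked against the platform's import format documentation.

```python
import json

# Hypothetical items list: each entry has a "name" key and (assumed here)
# an "items" array listing the video's frames in playback order.
# Frame paths are placeholders, not a real dataset.
items_list = [
    {
        "name": "video-01",  # this entry becomes one independent video labeling task
        "items": [
            "frames/video-01/frame_000.png",
            "frames/video-01/frame_001.png",
            "frames/video-01/frame_002.png",
        ],
    }
]

print(json.dumps(items_list, indent=2))
```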
As mentioned earlier, video labeling is done by parsing the video into frames and then labeling individual frames. The RedBrick AI platform offers tools to assist in labeling these frames in the context of a video. The video labeling interface includes play controls, a slider, and a frame selector to help users navigate through a video.
To describe the functionality of the video labeling interface, let's define a few terms that fully specify a single label object in a video.
Frame index: The index of a particular frame in the video. Every label begins and ends at a particular frame index.
Key frame: Any frame where the user manually adds or edits labels is considered a key frame.
Track ID: Each object that you label on the interface will get a unique track ID, which identifies that object across frames.
End frame: The last frame of an object with a particular track ID is its end frame.
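Putting these terms together, a single label object across a video could be modeled roughly as follows. This is an illustrative sketch, not the platform's actual data model; all names here are made up.

```python
from dataclasses import dataclass, field

@dataclass
class Track:
    """One labeled object across a video (illustrative model only)."""
    track_id: str                                    # unique ID for the labeled object
    key_frames: dict = field(default_factory=dict)   # frame index -> manually set geometry
    end_frame: int = 0                               # last frame where the object is labeled

    def set_key_frame(self, frame_index: int, geometry: dict) -> None:
        """Record a manual add/edit at a frame, making it a key frame."""
        self.key_frames[frame_index] = geometry
        self.end_frame = max(self.end_frame, frame_index)

# A box labeled manually at frames 0 and 30; frames in between
# would be filled in by interpolation.
track = Track(track_id="obj-1")
track.set_key_frame(0, {"x": 10, "y": 10, "w": 50, "h": 40})
track.set_key_frame(30, {"x": 60, "y": 20, "w": 50, "h": 40})
print(track.end_frame)  # 30
```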
As described in the section above, the labeling interface will linearly interpolate labels between key frames (shown in the animation below).
The interpolation feature is available for both bounding box and polygon labels:
Bounding box: Bounding box vertices are linearly interpolated between key frames; you can adjust the position and dimensions of the bounding box between frames.
Polygon: Polygon vertices are linearly interpolated between key frames; you can adjust the position of each node between frames.
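The linear interpolation described above can be sketched as a simple coordinate-wise blend between two key frames. This is a simplified illustration of the behavior, not RedBrick AI's implementation:

```python
def interpolate(a, b, frame, frame_a, frame_b):
    """Linearly interpolate each coordinate between two key-frame labels.

    a, b: coordinate lists at key frames frame_a and frame_b.
    frame: the in-between frame to compute a label for.
    """
    t = (frame - frame_a) / (frame_b - frame_a)
    return [av + t * (bv - av) for av, bv in zip(a, b)]

# Bounding box as (x, y, width, height) at key frames 0 and 10:
box_a = [0.0, 0.0, 50.0, 40.0]
box_b = [100.0, 20.0, 70.0, 40.0]
print(interpolate(box_a, box_b, 5, 0, 10))  # [50.0, 10.0, 60.0, 40.0]
```

The same function applies unchanged to a polygon: pass the flattened list of vertex coordinates instead of box parameters, and each node is interpolated independently.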