About Us

CVAT was designed to provide users with a set of convenient instruments for annotating digital images and videos.
CVAT supports supervised machine learning tasks pertaining to object detection, image classification,
image segmentation, and 3D data annotation. It allows users to annotate images with four types of shapes:
boxes, polygons (both generally and for segmentation tasks), polylines (e.g., for annotating markings on roads),
and points (e.g., for annotating face landmarks or pose estimation).
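To make the four shape types concrete, here is a minimal sketch of how such annotations could be represented as plain Python data. The field names and label values are hypothetical and chosen for readability; they are not CVAT’s actual annotation or export schema.

```python
# A hypothetical, simplified representation of the four shape types.
# The field names ("type", "label", "points") are illustrative only and
# do not reflect CVAT's actual export formats.

annotations = [
    # Bounding box: top-left and bottom-right corners, in pixels.
    {"type": "box", "label": "car",
     "points": [(150, 220), (310, 340)]},

    # Polygon: closed contour, e.g. for a segmentation mask.
    {"type": "polygon", "label": "person",
     "points": [(400, 100), (460, 120), (450, 300), (395, 290)]},

    # Polyline: open curve, e.g. a lane marking on a road.
    {"type": "polyline", "label": "road_marking",
     "points": [(0, 480), (200, 450), (420, 430), (640, 425)]},

    # Points: individual keypoints, e.g. face landmarks.
    {"type": "points", "label": "face_landmarks",
     "points": [(510, 140), (540, 140), (525, 165), (525, 185)]},
]

for shape in annotations:
    print(f'{shape["type"]:>8}: {shape["label"]} with {len(shape["points"])} point(s)')
```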

Data scientists need annotated data (and lots of it) to train the deep neural networks (DNNs) at the core of AI workflows. Obtaining annotated data or annotating data yourself is a challenging and time-consuming process.
For example, it took about 3,100 total hours for members of Intel’s own data annotation team to annotate more than 769,000 objects for just one of our algorithms. To help solve this challenge, Intel is conducting research to find better methods of data annotation and deliver tools that help developers do the same.

2016

Vatic as a web-based annotation solution.

2017

Internal version with support for images and attributes.

2018

First public release on GitHub.

2020

UI based on React and AntD.
app.cvat.ai as a data platform.

2021

Dataset as a first-class citizen.

202X

Data platform.

Contact Us:

Russia, Nizhny Novgorod, Turgeneva street 30 (campus TGV)

Feedback from users helps Intel determine the future direction of CVAT’s development. We hope to improve the tool’s user experience, feature set, stability, automation capabilities, and ability to integrate with other services, and we encourage members of the community to take an active part in CVAT’s development.