A Shared Scorecard To Evaluate Data Annotation Vendors

Evaluating and choosing an annotation partner is not an easy task. There are a lot of options, and it’s not straightforward to know who will be the best fit for a project.
We recently came across a paper by Andrew Greene, "Towards a Shared Rubric for Dataset Annotation", which proposes a set of metrics for quantitatively evaluating data annotation vendors, so we decided to turn it into an online tool.
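To give an intuition for what such a scorecard enables, here is a minimal sketch of a weighted vendor evaluation. The dimensions, scoring scale, and weights below are our own illustrative assumptions, not the paper's actual metrics or the tool's implementation.

```python
from dataclasses import dataclass

@dataclass
class VendorScore:
    name: str
    # Each dimension scored 0-5 by the evaluating ML team (assumed scale).
    annotation_quality: float
    turnaround_time: float
    tooling_and_qa: float
    annotator_welfare: float      # pay, working conditions, transparency
    price_competitiveness: float

# Illustrative weights: an ethical evaluation weights welfare alongside price.
WEIGHTS = {
    "annotation_quality": 0.30,
    "turnaround_time": 0.15,
    "tooling_and_qa": 0.15,
    "annotator_welfare": 0.25,
    "price_competitiveness": 0.15,
}

def weighted_total(v: VendorScore) -> float:
    """Collapse per-dimension scores into one comparable number."""
    return sum(getattr(v, dim) * w for dim, w in WEIGHTS.items())

vendors = [
    VendorScore("Vendor A", 4.5, 3.0, 4.0, 2.0, 4.5),
    VendorScore("Vendor B", 4.0, 4.0, 3.5, 4.5, 3.0),
]
for v in sorted(vendors, key=weighted_total, reverse=True):
    print(f"{v.name}: {weighted_total(v):.2f}")
```

The point of making the weights explicit is that "cheapest" and "best overall" stop being the same question.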
A big reason for building this tool is also to bring the welfare of annotators to the attention of all stakeholders.
Until end users start asking for their data to be labeled ethically, labelers will remain underpaid and treated unfairly, because the competition boils down solely to price. Not only does this race to the bottom lead to lower-quality annotations, it also pushes vendors to cut corners to protect their margins.
Our hope is that this tool gives ML teams a clear picture of what to look for when evaluating data annotation service providers, leading to better-quality data as well as better treatment of the unsung heroes of AI: the data labelers.
Access the tool here: https://mindkosh.com/annotation-services/annotation-service-provider-evaluation.html

