
Reproduce Crowdsourced Annotations

This is a simple reproducibility idea. Take an NLP paper that uses crowd-sourced annotations (i.e., collected from annotators on Amazon Mechanical Turk or a similar service) and try to reproduce its results by re-doing some of the annotations.

If the authors have not included the exact instructions given to the annotators (that is a mistake; they should be available in an appendix), corresponding with them, obtaining the instructions, and making them available for others is already a contribution in itself. With the exact instructions, you can annotate some of the data and see whether the results hold using your annotations. You can also compare your annotations against those published by the authors and check whether you reach reasonable agreement.
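To make the agreement check concrete, here is a minimal sketch of how you might compute chance-corrected agreement between your labels and the published ones using Cohen's kappa from scikit-learn. The file names and label format are assumptions; adapt them to however the original annotations are distributed.

```python
# Minimal sketch, assuming your labels and the published labels are
# aligned item-by-item, one label per line, in two (hypothetical) files.
from sklearn.metrics import cohen_kappa_score

with open("my_annotations.txt") as f:
    mine = [line.strip() for line in f]
with open("published_annotations.txt") as f:
    theirs = [line.strip() for line in f]

assert len(mine) == len(theirs), "the two label lists must be aligned"

kappa = cohen_kappa_score(mine, theirs)
print(f"Cohen's kappa vs. published labels: {kappa:.3f}")
```

Cohen's kappa is a reasonable default for two annotators; if several people re-annotate, a multi-rater measure such as Krippendorff's alpha or Fleiss' kappa would be the analogous choice.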

This matters because the behavior of crowd-workers may not be stable over time, the instructions may have issues, or the original work may simply not be reproducible. If everything goes well, this effort will strengthen our confidence in the phenomena reported by the original paper.