| Field | Value |
| --- | --- |
| Title | AbuseEval v1.0 |
| Link to publication | http://www.lrec-conf.org/proceedings/lrec2020/pdf/2020.lrec-1.760.pdf |
| Link to data | https://github.com/tommasoc80/AbuseEval |
| Task description | Explicitness annotation of offensive and abusive content |
| Details of task | Enriched version of the OffensEval/OLID dataset that distinguishes explicit from implicit offensive messages and adds a new dimension for abusive messages. Labels for offensive language: EXPLICIT, IMPLICIT, NOT; labels for abusive language: EXPLICIT, IMPLICIT, NOTABU |
| Size of dataset | 14,100 tweets |
| Percentage abusive | 20.75% |
| Language | English |
| Level of annotation | Tweets |
| Platform | Twitter |
| Medium | Text |
| Reference | Caselli, T., Basile, V., Mitrović, J., Kartoziya, I., and Granitzer, M. 2020. "I Feel Offended, Don't Be Abusive! Implicit/Explicit Messages in Offensive and Abusive Language". In Proceedings of the 12th Language Resources and Evaluation Conference (pp. 6193-6202). European Language Resources Association. |
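The label scheme above can be checked directly against the distributed data. Below is a minimal Python sketch that tallies the abusive-language labels (EXPLICIT, IMPLICIT, NOTABU) and reports their shares, which should let you reproduce figures like the 20.75% abusive rate. The file name `abuseval_offenseval_train.tsv` and the `abuse` column name are assumptions about the repository's TSV layout, not confirmed from the source; adjust them to match the files in https://github.com/tommasoc80/AbuseEval.

```python
import csv
from collections import Counter

def label_distribution(path: str, label_column: str = "abuse") -> Counter:
    """Count the values of one label column in a tab-separated annotation file."""
    counts: Counter = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        # ASSUMPTION: the file is a TSV with a header row containing `label_column`.
        for row in csv.DictReader(f, delimiter="\t"):
            counts[row[label_column]] += 1
    return counts

if __name__ == "__main__":
    # ASSUMPTION: hypothetical file name; check the repository for the actual one.
    dist = label_distribution("abuseval_offenseval_train.tsv")
    total = sum(dist.values())
    for label, n in dist.most_common():
        print(f"{label}: {n} ({100 * n / total:.2f}%)")
```

The EXPLICIT and IMPLICIT shares summed together should approximate the "Percentage abusive" field in the table; running the same tally over the offensive-language column (EXPLICIT, IMPLICIT, NOT) gives the corresponding breakdown for offensiveness.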