---
title: AbuseEval v1.0
link-to-publication: http://www.lrec-conf.org/proceedings/lrec2020/pdf/2020.lrec-1.760.pdf
link-to-data: https://github.com/tommasoc80/AbuseEval
task-description: Explicitness annotation of offensive and abusive content
details-of-task: "Enriched version of the OffensEval/OLID dataset with the distinction between explicit and implicit offensive messages and a new dimension for abusive messages. Labels for offensive language: EXPLICIT, IMPLICIT, NOT; labels for abusive language: EXPLICIT, IMPLICIT, NOTABU"
size-of-dataset: 14100
percentage-abusive: 20.75
language: English
level-of-annotation: ["Tweets"]
platform: ["Twitter"]
medium: ["Text"]
reference: "Caselli, T., Basile, V., Mitrović, J., Kartoziya, I., and Granitzer, M. 2020. \"I feel offended, don’t be abusive! Implicit/explicit messages in offensive and abusive language\". Proceedings of the 12th Language Resources and Evaluation Conference (pp. 6193-6202). European Language Resources Association."
---