Benchmarking of semantic annotation systems

Thesis, posted on 2021-05-24, 09:45, authored by Minal Patel
This research evaluates semantic annotators using a systematic subjective evaluation technique. Most previous evaluation efforts have relied on creating gold standards and analysing annotator performance through basic metrics. In this work, by contrast, a subjective evaluation technique is applied to several publicly available semantic annotation systems. Sixty participants took part in the evaluation: a survey collected their judgements of how well each annotator performs on different types of text (e.g. long texts, short texts and tweets), and the responses were analysed with standard statistical tests. The results indicate that Wikipedia Miner outperforms the other systems on long texts, while Tag Me performs best on short texts and tweets.
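The abstract does not name the statistical tests used; the sketch below is only one plausible shape such a per-text-type comparison could take, assuming participant ratings on a 1-5 scale and a Mann-Whitney U test. The rating values and the choice of test are illustrative assumptions, not taken from the thesis.

    # Minimal sketch of a subjective-rating comparison between two annotators
    # on one text type. Ratings and test choice are illustrative assumptions.
    from scipy import stats

    # Hypothetical participant ratings (1 = poor, 5 = excellent) for short texts.
    wikipedia_miner = [3, 4, 3, 2, 4, 3, 3, 2, 4, 3]
    tag_me = [4, 5, 4, 4, 5, 3, 4, 5, 4, 4]

    # One-sided Mann-Whitney U test: are Tag Me's ratings higher on short texts?
    u_stat, p_value = stats.mannwhitneyu(tag_me, wikipedia_miner, alternative="greater")
    print(f"U = {u_stat:.1f}, p = {p_value:.4f}")

    if p_value < 0.05:
        print("Tag Me is rated significantly higher than Wikipedia Miner on short texts.")
    else:
        print("No significant difference detected on short texts.")

In practice the same comparison would be repeated for each text type (long texts, short texts, tweets) and each pair of systems under study.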

History

Language

English

Degree

  • Master of Science

Program

  • Computer Science

Granting Institution

Ryerson University

LAC Thesis Type

  • Thesis

Year

2014
