In this research, semantic annotators have been evaluated using a systematic subjective evaluation technique. Most previous evaluation efforts have relied on the creation of gold standards and have analysed annotator performance by measuring basic metrics. In this work, by contrast, a subjective evaluation technique has been applied to several publicly available semantic annotation systems. Sixty participants took part in the evaluation: a survey was carried out to collect their judgements of how well the annotators perform on different types of text (e.g. long texts, short texts and tweets), and their responses were analysed using standard statistical tests. On this basis, it has been concluded that Wikipedia Miner outperforms the other systems on long texts, while Tag Me outperforms them on short texts and tweets.
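The section does not specify which statistical tests were applied. As a minimal illustrative sketch only, assuming the survey produced 5-point Likert ratings from each participant for every annotator, a non-parametric repeated-measures comparison such as the Friedman test could be used; the annotator names here match the systems mentioned above, but the ratings are synthetic placeholders and the use of SciPy is an assumption, not the authors' method.

    import numpy as np
    from scipy.stats import friedmanchisquare

    # Hypothetical 5-point Likert ratings: one array per annotator,
    # one entry per participant (n = 60), for a single text type
    # (e.g. long texts). Synthetic placeholder data only.
    rng = np.random.default_rng(0)
    ratings = {
        "Wikipedia Miner": rng.integers(1, 6, size=60),
        "Tag Me": rng.integers(1, 6, size=60),
        "Other system": rng.integers(1, 6, size=60),
    }

    # Friedman test: compares the annotators across the same 60
    # participants without assuming normally distributed ratings.
    stat, p = friedmanchisquare(*ratings.values())
    print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")

A significant result from such a test would indicate that at least one annotator is rated differently from the others for that text type; pairwise post-hoc comparisons would then be needed to single out the best-rated system.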