Abstract
The objective of this paper is to investigate the effectiveness of tags in facilitating resource discovery through machine learning and user-centric approaches. Drawing our dataset from Delicious, a popular social tagging system, we conducted six text categorization experiments using the 100 most frequently occurring tags. We also conducted a human evaluation experiment in which some 2000 documents associated with these tags were manually assessed for relevance. The results of the text categorization experiments suggest that not all tags are useful for content discovery, regardless of the tag weighting scheme used. Moreover, there were cases where the human evaluators did not perform as well as the classifiers, especially when the documents lacked cues from which to ascertain their relationship with the assigned tag. This paper discusses three implications arising from the findings and suggests a number of directions for further research.
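As a rough illustration of the kind of experiment the abstract describes, the sketch below treats a single Delicious tag as a binary class label and trains a standard text classifier over TF-IDF-weighted document terms. The toy corpus, the target tag ("photography"), and the choice of a linear SVM are placeholders only and are not drawn from the paper's actual setup or its tag weighting schemes.

```python
# Minimal sketch (not the paper's exact setup): treat one social tag as a
# binary class label and categorize page texts with a linear classifier.
# The documents, labels, and TF-IDF weighting below are illustrative
# assumptions, not the study's data or weighting schemes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import classification_report

# Hypothetical corpus: page texts and whether users tagged them "photography".
documents = [
    "digital camera lens reviews and exposure tips",
    "javascript framework tutorial for web developers",
    "black and white portrait photography techniques",
    "recipes for quick weeknight dinners",
]
labels = [1, 0, 1, 0]  # 1 = page carries the target tag, 0 = it does not

# Weight terms with TF-IDF (one possible term weighting scheme).
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(documents)

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.5, random_state=42, stratify=labels
)

# A linear SVM is a common text-categorization baseline.
classifier = LinearSVC()
classifier.fit(X_train, y_train)

# Report precision/recall/F1 for the held-out pages.
print(classification_report(y_test, classifier.predict(X_test)))
```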
Original language | English |
---|---|
Pages (from-to) | 391-404 |
Number of pages | 14 |
Journal | Journal of Information Science |
Volume | 37 |
Issue number | 4 |
DOIs | |
Publication status | Published - Aug 2011 |
Externally published | Yes |
ASJC Scopus Subject Areas
- Information Systems
- Library and Information Sciences
Keywords
- human evaluation
- social computing
- social tagging
- text categorization
- Web 2.0