PhD thesis: Trust in Online Information

On March 1, 2013, I will defend my PhD thesis. Below, you’ll find the short abstract.

The introduction of the Internet has made it very easy to find information on any topic imaginable. The downside of this development is that it has become much harder to evaluate the credibility of this information, as it is often unclear who the author is. This dissertation studies how Internet users evaluate the credibility of online information, with a focus on the relationship between user characteristics and information features. The online encyclopedia Wikipedia is used as a case study.

In the newly developed 3S-model of information trust, three evaluation strategies are defined. First, the user may consider semantic features of the information (e.g., factual accuracy). However, this requires a level of domain expertise on the topic at hand. Alternatively, the user may look at surface features (i.e., the way the information is presented). While this strategy does not require domain expertise, more generic information skills are needed, since the user needs to know how certain information features are related to credibility. Finally, the user may consider the source of the information; this requires prior experience with that source.
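The core of the 3S-model is a mapping from user characteristics to the strategies a user is able to apply. As an illustration only, that mapping can be sketched as a toy helper (the function and strategy names below are mine, not part of the model):

```python
from enum import Enum

class Strategy(Enum):
    SEMANTIC = "semantic features (e.g., factual accuracy)"
    SURFACE = "surface features (i.e., presentation)"
    SOURCE = "source reputation"

def applicable_strategies(domain_expertise: bool,
                          information_skills: bool,
                          knows_source: bool) -> list:
    """Return the 3S strategies this user is able to apply."""
    strategies = []
    if domain_expertise:    # needed to judge factual accuracy
        strategies.append(Strategy.SEMANTIC)
    if information_skills:  # needed to relate surface features to credibility
        strategies.append(Strategy.SURFACE)
    if knows_source:        # prior experience with the source is required
        strategies.append(Strategy.SOURCE)
    return strategies
```

Note that a user may qualify for several strategies at once; the model describes which strategies are available, not which one is chosen.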

Multiple studies have provided validation for each of the strategies defined in the 3S-model. The model was placed in a broader perspective by also considering trust in a medium (as a collection of similar sources, such as websites), and a general propensity to trust.

The models not only provide insight into how credibility is evaluated; they also reveal many situations in which people have difficulty with this task. A potential solution for these problems is to provide users with a decision support system. In the second part of this dissertation, the application of such systems was studied. The choice between a user-based and a fully automated system was considered, as well as the choice between a simple (but understandable) and a complex (but hard to understand) system. Moreover, the relationship between the strategies of the 3S-model and decision support was investigated.

(Full-text)

The influence of source cues and topic familiarity on credibility evaluation

Lucassen, T. & Schraagen, J. M. (2013). The influence of source cues and topic familiarity on credibility evaluation. Computers in Human Behavior, 29, 1387-1392.

An important cue in the evaluation of the credibility of online information is the source from which the information comes. It has previously been hypothesized that the source of information is less important when one is familiar with the topic at hand. However, no conclusive results have confirmed this hypothesis. In this study, we re-examine the relationship between the source of information and topic familiarity. In an experiment with Wikipedia articles with and without the standard Wikipedia layout, we showed that, contrary to our expectations, familiar users have less trust in the information when they know it comes from Wikipedia than when its source is unknown. For unfamiliar users, no differences were found. Moreover, source cues only influenced trust when the credibility of the information itself was ambiguous. These results are interpreted in the 3S-model of information trust (Lucassen & Schraagen, 2011).

(Full-text)

Propensity to trust and the influence of source and medium cues in credibility evaluation

Lucassen, T. & Schraagen, J. M. (2012). Propensity to trust and the influence of source and medium cues in credibility evaluation. Journal of Information Science, 38, 564-575.

Credibility evaluation has become a daily task in the current world of online information that varies in quality. The way this task is performed has been a topic of research for some time now. In this study, we aim to extend this research by proposing an integrated layer model of trust. According to this model, trust in information is influenced by trust in its source. Moreover, source trust is influenced by trust in the medium, which in turn is influenced by a more general propensity to trust. We provide an initial validation of the proposed model by means of an online quasi-experiment (n = 152) in which participants rated the credibility of Wikipedia articles. Additionally, the results suggest that the participants were more likely to have too little trust in Wikipedia than too much trust.

(Full-text)

Readability of Wikipedia

Lucassen, T., Dijkstra, R. L., & Schraagen, J. M. (2012). Readability of Wikipedia. First Monday, 17.

Wikipedia is becoming widely acknowledged as a reliable source of encyclopedic information. However, concerns have been expressed about its readability: Wikipedia articles might be written in language too difficult for most of its visitors to understand. In this study, we apply the Flesch reading ease test to all available articles from the English Wikipedia to investigate these concerns. The results show that overall readability is poor, with 75 percent of all articles scoring below the desired readability score. The ‘Simple English’ Wikipedia scores better, but its readability is still insufficient for its target audience. A demo of our methodology is available at www.readabilityofwikipedia.com.
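The Flesch reading ease test combines average sentence length and average syllables per word: score = 206.835 − 1.015 × (words/sentences) − 84.6 × (syllables/words). Higher scores mean easier text. A minimal sketch in Python (the vowel-group syllable counter is a rough approximation; production tools use pronunciation dictionaries):

```python
import re

def count_syllables(word: str) -> int:
    """Crude vowel-group heuristic for English syllable counting."""
    word = word.lower()
    groups = re.findall(r"[aeiouy]+", word)
    n = len(groups)
    if word.endswith("e") and n > 1:
        n -= 1  # discount a likely silent final 'e'
    return max(n, 1)

def flesch_reading_ease(text: str) -> float:
    """Flesch reading ease score for a plain-text passage."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

For example, a short sentence of one-syllable words such as "The cat sat on the mat." scores well above 100, while dense encyclopedic prose typically scores far lower.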

(Full-text)

Topic familiarity and information skills in online credibility evaluation

Lucassen, T., Muilwijk, R., Noordzij, M. L., & Schraagen, J. M. (2013). Topic familiarity and information skills in online credibility evaluation. Journal of the American Society for Information Science and Technology, 64, 254-264.

With the rise of user-generated content, evaluating the credibility of information has become increasingly important. It is already known that various user characteristics influence the way credibility evaluation is performed. Domain experts on the topic at hand primarily focus on semantic features of information (e.g., factual accuracy), whereas novices focus more on surface features (e.g., the length of a text). In this study, we further explore two key influences on credibility evaluation, namely topic familiarity and information skills. Participants with varying expected levels of information skills (i.e., high school students, undergraduates, and postgraduates) evaluated Wikipedia articles of varying quality on familiar and unfamiliar topics while thinking aloud. When familiar with the topic, participants indeed focused primarily on semantic features of the information, whereas participants unfamiliar with the topic paid more attention to surface features. The utilization of surface features increased with information skills. Moreover, participants with better information skills calibrated their trust to the quality of the information, whereas the trust of participants with poorer information skills did not vary with quality. This study confirms the enabling character of domain expertise and information skills in credibility evaluation as predicted by the updated 3S-model of credibility evaluation.

(Full-text)

The role of topic familiarity in online credibility evaluation support

Lucassen, T. & Schraagen, J.M. (2012). The role of topic familiarity in online credibility evaluation support. In Proceedings of the Human Factors and Ergonomics Society 56th Annual Meeting. Boston, MA, USA.

Evaluating the credibility of information is a difficult yet essential task in a society where the Internet plays a large role. Familiarity with the topic at hand has been shown to have a large influence on the way credibility is evaluated; ‘familiar users’ tend to focus on semantic features, while ‘unfamiliar users’ focus more on surface features. In this study, we attempt to find out whether these differences have consequences for the development of credibility evaluation support systems. Two simulated support systems were evaluated: one utilizing semantic features (aimed at familiar users), the other utilizing surface features (aimed at unfamiliar users). The results suggest that unfamiliar users prefer a surface support system. Familiar users have no clear preference, have less trust in the support, and report being less influenced by a support system. We recommend focusing on unfamiliar users when developing credibility evaluation support.

(Full-text)

Improving credibility evaluations on Wikipedia

Lucassen, T. & Schmettow, M. (2011). Improving credibility evaluations on Wikipedia. In Wiering, C. H., Pieters, J. M., & Boer, H. (Eds.), Intervention Design and Evaluation in Psychology.

In this chapter, ongoing research on trust in Wikipedia is used as a case study to illustrate the design process of a support tool for Wikipedia, following the ASCE-model. This research is performed from a cognitive perspective and aims at users actively evaluating the credibility of information on Wikipedia on an article basis rather than passively relying on their trust in the source of the information (Wikipedia as a whole).

Adaptive attention allocation support: Effects of system conservativeness and human competence

Van Maanen, P.-P., Lucassen, T., & van Dongen, K. (2011). Adaptive attention allocation support: Effects of system conservativeness and human competence. In Schmorrow, D. and Fidopiastis, C., editors, Foundations of Augmented Cognition. Directing the Future of Adaptive Systems, volume 6780 of Lecture Notes in Computer Science, chapter 74, pages 647-656. Springer Berlin / Heidelberg, Berlin, Heidelberg.

Naval tactical picture compilation is a task for which allocating attention to the right information at the right time is crucial. Performance on this task can be improved if a support system assists the human operator. However, there is evidence that the benefits of support systems depend heavily on the system’s tendency to support. This paper presents a study into the effects of different levels of support conservativeness (i.e., tendency to support) and human competence on performance and on the human’s trust in the support system. Three types of support are distinguished: fixed, liberal, and conservative. In fixed support, the system calculates an estimated optimal decision and suggests it to the human. In liberal and conservative support, the system estimates which information in the problem space is important for making a correct decision and directs the human’s attention to it. In liberal support, the system directs the human’s attention based on the assessed task requirements alone, whereas in conservative support, it does so only when it estimates that the human is not already paying attention. Overall, the results do not confirm our hypothesis that adaptive conservative support leads to the best performance. Furthermore, highly competent humans in particular showed more trust in a system when the delivered support was adapted to their specific needs.

(Full text)

Researching trust in Wikipedia

Lucassen, T. & Schraagen, J. M. (2011). Researching trust in Wikipedia. In Chi Sparks, Arnhem, The Netherlands.

As the use of collaborative online encyclopedias such as Wikipedia grows, so does the need for research on how users evaluate their credibility. In this paper we compare three experimental approaches to studying trust in Wikipedia, namely think-aloud, eye-tracking, and online questionnaires. The advantages and disadvantages of each method are discussed. We conclude that it is best to use multiple methods when researching information trust, as no single method on its own provides a complete picture.

(Full text / Poster)

Reference blindness: The influence of references on trust in Wikipedia

Lucassen, T., Noordzij, M. L., & Schraagen, J. M. (2011). Reference blindness: The influence of references on trust in Wikipedia. In ACM WebSci ’11.

In this study we show the influence of references on trust in information. We changed the contents of the reference lists of Wikipedia articles in such a way that the new references were no longer in any sense related to the topic of the article. Furthermore, the length of the reference list was varied. College students were asked to evaluate the credibility of these articles. Only 6 out of 23 students noticed the manipulation of the references; 9 out of 23 noticed the variations in length. These numbers are remarkably low, as 17 students indicated that they considered references an important indicator of credibility. The findings suggest a highly heuristic manner of credibility evaluation. Systematic evaluation behavior was also observed in the experiment, but only among participants with low trust in Wikipedia in general.

(Full text / Poster)