Learner’s Trust in Online Information

This study had three aims:

• to ‘provide an overview of the ways in which trust is either assessed or asserted in relation to the use and provision of resources in the Web environment for research and learning’;
• to ‘assess what solutions might be worth further investigation and whether establishing ways to assert trust in academic information resources could assist the development of information literacy’;
• to ‘help increase understanding of how perceptions of trust influence the behaviour of information users.’

The project proposed a model of trust in online learning environments and identified three important variables in deciding what information to trust: external design cues, cues internal to the content of the information, and the user’s cognitive state.

Author: James Nicholson

James is a Lecturer in the School of Computer and Information Sciences. He is interested in inclusive cybersecurity and leads the CyberGuardians research project; he also works on usable security, social engineering, and everyday surveillance.

Previously, James was a senior researcher in PaCT Lab on the Cybersecurity Across the Lifespan (cSALSA) project, which explores how cyber-security is understood and people’s attitudes and behaviours towards cyber-security and risk. During his time in PaCT Lab, he also worked on Choice Architecture for Information Security (ChAISe), the Digital Economy Research Centre (DERC), and the Horizon 2020 project CYBECO. Before PaCT Lab, James worked at Open Lab, Newcastle University, on the TEDDI and SiDE projects.

James’ work has focused on improving user authentication, both by repurposing existing graphical authentication systems and by evaluating novel ones. He is also interested in user privacy and in how groups of users (children, parents, older adults) experience location-tracking technologies, as well as how CCTV video can be crowdsourced to decentralise the surveillance landscape. More recently, he has developed tools and methodologies for uncovering and understanding employees’ mental models of security threats, with the aim of improving training programmes and organisational policies, alongside practical means of improving users’ protection against these threats (e.g. phishing).