
Trust, suspicion and tech: considerations for knowledge institutions  

Opinions expressed in this blog are those of the author, and not of Scotland’s Futures Forum, members of its staff, or the Scottish Parliament.


Knowledge institutions – including universities, libraries, museums, scientific offices in governmental and non-governmental settings, and the press – play a central role in creating the conditions for democratic and informed deliberation and decision-making in society. As institutions, they are varied, but they share the purpose of collecting and disseminating information and knowledge, and generating knowledge from information, using some form of disciplinary criteria (Jackson, 2021, 2025). In a time of information abundance, they play an important part in helping the public scrutinise and distinguish between genuine and false knowledge (UNESCO, 2026, p. 25). This role is coming under increasing pressure from multiple directions, including political, technological and social, as information and knowledge landscapes in society become more fragmented.  

Photo of a long pedestrian tunnel in Glasgow at night
Image source: Unsplash / Robert Greinacher

My colleague Huw Davies wrote about one significant implication of fragmentation in his recent blog post for this series: the lack of shared understanding of reality (for example between teachers and their pupils) that arises from vastly different, social-media driven views of the world that people develop as they navigate through personalised and algorithmically-curated digital spaces.  

Another implication of fragmentation is that it erodes trust. In the absence of means to verify the words, sounds, images and video that surround us, our responses become characterised by suspicion. This is a major issue for knowledge institutions, because they rely on trust for their authority and effectiveness. Traditional mechanisms for establishing and preserving trust include claims to objectivity; reliance on signals of expertise; and social, economic and political influence to help create the realities they envision and predict. In our current circumstances, however, these mechanisms are falling short, because we are faced with technology that can mimic them. Confident-sounding, objective-sounding materials are generated by AI processes that lack understanding, procedures for sorting truth from fiction, disciplinary norms, or the ability to be interrogated or held accountable. This is a disaster for trust in knowledge institutions, not helped when such institutions make ill-judged attempts to harness the technologies themselves (see for example the controversy over the British Museum’s use of AI-generated images in an early 2026 advertising campaign). 

In my own sector, higher education, the reliability of knowledge and authorship claims is coming under increasing suspicion. For example, academic journal editors and reviewers describe being deluged by “AI slop” submitted as scientific papers (Clarke, 2026; Gunitsky, 2026). In teaching contexts, doubt and suspicion about student authorship were already a problem, evidenced by the near-ubiquity of plagiarism detection software and processes integrated into university systems, with mixed effectiveness (Tight, 2024) and a negative impact on trust (Ross & Macleod, 2018; Wittenberg, 2024). As ‘shortcuts’ to content creation proliferate and confidence in others to behave with integrity declines, technology companies jump to offer “solutions” to what are framed as verification problems but are, in fact, trust problems. This makes things worse, leading to a situation where universities “normalise suspicion and outsource judgment to tools that cannot be fair arbiters of integrity” (Steyn, 2026).  

Different approaches will be needed for knowledge institutions to address a societal reduction in trust. Trust-preserving mechanisms that are relational and interpersonal hold the promise of better outcomes not only for individuals, but for society as a whole. These mechanisms take into account what trust really is – the willingness to be vulnerable to possible, but not expected, ill-will on the part of others (Baier, 1986). A tech arms-race to replace trust with surveillance has never done anything but increase suspicion and unfairness. Let’s instead imagine a future where our technology-building capacity is used in the service of deliberation, consensus-building and dialogue. We can see glimpses of this future in emerging models of digital services designed for communities to share and develop knowledge (see for example the journalist-run Newsmast Foundation in the UK); and ways of using AI and other technologies to scale democratic deliberation (see for example the Netherlands-based DemocracyNext foundation’s recent work). 

Returning to UNESCO’s roadmap for higher education, trust is built at a small scale through engagement in social dialogue, supporting local science, culture and the arts, and work with schools (2026, p. 42), and this work must continue. We should also ask: what technological possibilities and governance mechanisms can we imagine now to complement, rather than undermine, the slow, interpersonal, and committed work that it takes to build and share collective knowledge? 


Professor Jen Ross

Jen Ross is Professor of Digital Culture and Education Futures, and co-director of the Centre for Research in Digital Education, at the University of Edinburgh. She’s based in the Moray House School of Education and Sport and the Edinburgh Futures Institute. Her research interests include education and cultural heritage futures, speculative methods, online distance education, digital cultural heritage learning, open education and digital cultures. https://www.de.ed.ac.uk/people/professor-jen-ross  


References

Baier, A. (1986). Trust and Antitrust. Ethics, 96(2), 231–260. https://doi.org/10.2307/2381376 

Clarke, P. (2026, March 10). AI is inventing academic articles – and scholars are citing them. The Observer. https://observer.co.uk/news/science-technology/article/ai-is-inventing-academic-articles-and-scholars-are-citing-them 

Gunitsky, S. (2026, January 13). The Age of Academic Slop is Upon Us [Substack newsletter]. Hegemon. https://hegemon.substack.com/p/the-age-of-academic-slop-is-upon 

Jackson, V. C. (2021). Knowledge Institutions in Constitutional Democracies: Preliminary Reflections. Democratic Decay: Challenges for Constitutionalism and the Rule of Law. Canadian Journal of Comparative and Contemporary Law, 7, 156–221. 

Jackson, V. C. (2025). Knowledge Institutions and Resisting ‘Truth Decay’. In A. Koltay, C. Garden, & R. J. Krotoszynski Jr. (Eds.), Disinformation, Misinformation, and Democracy: Legal Approaches in Comparative Context (pp. 345–374). Cambridge University Press. https://doi.org/10.1017/9781009373272.020 

Ross, J., & Macleod, H. (2018). Surveillance, (dis)trust and teaching with plagiarism detection technology. Proceedings of the 11th International Conference on Networked Learning 2018. Networked Learning. http://www.networkedlearningconference.org.uk/abstracts/ross.html 

Steyn, J. (2026, February 11). The AI detector arms race is breaking trust between students and educators [Substack newsletter]. Johan’s Substack. https://johanosteyn.substack.com/p/the-ai-detector-arms-race-is-breaking 

Tight, M. (2024). Challenging cheating in higher education: A review of research and practice. Assessment & Evaluation in Higher Education, 49(7), 911–923. https://doi.org/10.1080/02602938.2023.2300104 

UNESCO. (2026). Transforming higher education: Global collaboration on visioning and action. UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000397582 

Wittenberg, H. (2024). Turn-It-Off: Reflections on Technology, the Knowledge Commons and the Academic Plagiarism Industry. Scrutiny2, 28(1), 52–67. https://doi.org/10.1080/18125441.2024.2320892 
