Institute of Art Design + Technology
Dún Laoghaire

Etaoin Corcoran 



Do transparent artificial intelligence processes affect trust levels amongst end-users?

This research drew upon the literature on trust in technology, specifically trust in Artificial Intelligence (AI). Industry experts predict that AI may pass the Turing test within this decade. If AI achieves the imitation game, it will be important to understand how humans trust anthropomorphically designed systems. Competence, dispositional, situational and learned trust are prominent constructs in human-machine interaction. The lack of interpretability in black-box AI presents a challenge for trust dynamics. This study examines the effects of a transparency video intervention on trust in AI. Digital competency, age, gender, job-seeking status, ethnicity and country of residence were measured as covariates. Participants were assigned to an intervention or control group and completed measures of trust and digital competency, followed by a vignette task relating to ChatGPT. The intervention group received an explainer video on algorithmic processing, whilst the control group did not. Transparency did not have a significant effect on trust levels; likewise, digital competency, age, gender, job-seeking status and country of residence did not affect trust levels. Significant differences in pre-test trust levels were found for ethnicity and for the 65+ age group. Qualitative analysis identified transparency, human skills, oversight, competence and goal congruence as prominent themes in participant awareness. The findings suggest that competence and goal congruence outweigh explainability: consistent with prior research, algorithmic output impacts trust in AI more than process transparency.