Trust and trustworthy artificial intelligence: A research agenda for AI in the environmental sciences
Abstract
Demands to manage the risks of artificial intelligence (AI) are growing. These demands and the government standards arising from them both call for trustworthy AI. In response, we adopt a convergent approach to review, evaluate, and synthesize research on the trust and trustworthiness of AI in the environmental sciences and propose a research agenda. Evidential and conceptual histories of research on trust and trustworthiness reveal persisting ambiguities and measurement shortcomings related to inconsistent attention to the contextual and social dependencies and dynamics of trust. Potentially underappreciated in the development of trustworthy AI for environmental sciences is the importance of engaging AI users and other stakeholders, which human-AI teaming perspectives on AI development similarly underscore. Co-development strategies may also help reconcile efforts to develop performance-based trustworthiness standards with dynamic and contextual notions of trust. We illustrate the importance of these themes with applied examples and show how insights from research on trust and the communication of risk and uncertainty can help advance the understanding of trust and trustworthiness of AI in the environmental sciences.