AI Impermanence: Achilles’ Heel for AI Assessment?
Kathrin Brecker, S. Lins, Nicola Bena, Claudio A. Ardagna, Marco Anisetti, Ali Sunyaev
Published 2025 in IEEE Access

ABSTRACT
Scandals have shown that existing assessment methods (e.g., certifications) cannot accommodate the impermanent nature of Artificial Intelligence (AI) systems, which stems from their inherent learning capabilities and adaptability. Current AI assessment methods are therefore of limited trustworthiness and cannot fulfill their purpose of demonstrating system safety. Our interviews with AI experts from industry and academia help us understand why and how AI impermanence limits assessment in practice. We reveal eight impermanence-related implications that threaten the reliability of AI assessment, including challenges for assessment methods, the validity of assessment results, and AI’s self-learning nature, which requires ongoing reassessment. Our study contributes to a critical reflection on current AI assessment ideas, illustrating where their validity is at risk owing to AI impermanence. We provide a foundation for developing assessment methods that account for these impermanence-related implications and are suited to fully leveraging AI capabilities for the benefit of society.
PUBLICATION RECORD
- Publication year: 2025
- Venue: IEEE Access
- Fields of study: Computer Science
- Source metadata: Semantic Scholar