Improving decision transparency in autonomous maritime collision avoidance
Abstract
Recent advances in artificial intelligence (AI) have laid the foundation for sophisticated collision avoidance systems for maritime autonomous surface ships, potentially enhancing maritime safety and reducing navigators' workload. However, the reasoning behind an AI system's decisions is inherently difficult to understand. To help human operators understand what the AI system is doing and why, we employed a human-centered design approach to develop transparency layers that visualize different aspects of an operation, displaying labels, diagrams, and simulations intended to improve the user's situation awareness (SA). The effectiveness and usability of the different layers were investigated through simulator-based experiments involving nautical students and licensed navigators. The Situation Awareness Global Assessment Technique (SAGAT) was used to measure navigators' SA; user satisfaction was also measured, and the most effective layers were identified. The results indicate that participants preferred the transparency layers that enhance SA Level 3 (projection of future status), suggesting potential for improving human-AI compatibility. However, introducing transparency layers did not uniformly enhance SA across all levels, and a tendency toward passive decision-making was observed. These findings highlight the importance of balancing information presentation with the user's cognitive capacity and suggest that further research is needed to refine transparency layers for optimal human-AI compatibility in maritime navigation.