That’s the idea behind explainable AI, which aims to help human operators place greater trust in autonomous systems. Researchers from Osaka Metropolitan University’s Graduate School of Engineering have developed an explainable AI model for ships that quantifies the collision risk for all vessels in a given area, an important capability as key sea lanes become increasingly congested.
Graduate student Hitoshi Yoshioka and Professor Hirotada Hashimoto designed the AI model so that it explains the basis for its decisions and the intentions behind its actions, expressing both in terms of numerical collision-risk values.
“By being able to explain the basis for the judgments and behavioral intentions of AI-based autonomous ship navigation, I think we can earn the trust of maritime workers,” Professor Hashimoto stated. “I also believe that this research can contribute to the realization of unmanned ships.”
The findings were published in Applied Ocean Research.