A program based on artificial intelligence (AI) needs dialog with humans to evolve. A research team from TU Darmstadt, hessian.AI and the German Research Center for Artificial Intelligence (DFKI), including researchers from the LOEWE priority WhiteBox, have presented a method that significantly simplifies feedback from humans to learning software. The results have been published in the journal Nature Machine Intelligence.
Darmstadt researchers led by Professor Kristian Kersting, spokesperson of the LOEWE priority WhiteBox, have shown in recent years that AI algorithms benefit from human feedback. But to communicate with machines, you need to understand them. That is often difficult with the most widely used form of AI, deep learning, because deep learning is based on billions of connections between virtual neurons, making it hard to trace which connections lead to an AI decision. In addition, such systems are prone to errors.

The researchers therefore drew on methods from the field of explainable AI to let users report errors back to the learning system, an approach they call Explanatory Interactive Learning (XIL). In one example, the AI was given the task of recognizing a handwritten "1." However, the training image also showed a square. If the AI mistakenly treated the square as part of the "1," the square could be marked as "not belonging to the object" and the AI retrained accordingly.
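The feedback loop described above can be illustrated in code. One published way to act on "this region does not belong to the object" feedback is a "right for the right reasons"-style penalty on the model's input gradients inside the user-marked region; for a linear classifier the input gradient is simply the weight vector, so the penalty reduces to shrinking the weights on the marked pixels. The sketch below is a minimal illustration under that simplification, not the authors' implementation; the toy data and all names are invented for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, mask, lam=0.0, lr=0.5, steps=2000):
    """Logistic regression with an XIL-style explanation penalty.

    `mask` flags input positions the user marked as "not belonging
    to the object". For a linear model the input gradient equals the
    weight vector, so penalizing explanations on masked positions
    reduces to an L2 penalty on the corresponding weights.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / n        # cross-entropy gradient
        grad += 2.0 * lam * mask * w    # explanation feedback term
        w -= lr * grad
    return w

# Toy setup: feature 0 is the real "stroke", feature 1 stands in for
# the square that spuriously co-occurs with the label during training.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200).astype(float)
X = np.stack([2 * y - 1 + 0.5 * rng.normal(size=200),  # true signal
              2 * y - 1],                               # confounder
             axis=1)

mask = np.array([0.0, 1.0])             # user: "ignore the square"
w_plain = train(X, y, mask, lam=0.0)    # no feedback
w_xil = train(X, y, mask, lam=1.0)      # with feedback
```

With the penalty active, the weight on the confounding "square" feature shrinks and the model leans on the genuine signal instead, which is the intended effect of the user's correction.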
"Marking what is wrong in this way is much more effective than specifying everything that is right, because it is generally difficult to define what distinguishes a kingfisher, for example," explains Felix Friedrich, who is doing his doctorate under Professor Kersting.
The Darmstadt researchers have distilled a strategy ("typology") from the XIL methods they studied that can be used to efficiently remedy the shortcut behavior described above.