Integration of LIME Explainable AI to Enhance Interpretability of Deep Learning Models in Box Palette Classification

  • Evan Raditya, Institut Teknologi Sepuluh Nopember
  • Rarasmaya Indraswari, Institut Teknologi Sepuluh Nopember
Keywords: Box Palette, Convolutional Neural Network, Detection System, Explainable AI, Image Classification

Abstract

In food production, manual box palette arrangement is prone to errors such as incorrect stacking patterns, mismatched box quantities, and improper mixing of product variants. This issue occurs in a soy sauce production company with limited infrastructure, where two product variants with the same box size are run on a single production line, disrupting the production flow and creating the potential for significant losses. This study proposes a deep learning solution for detecting box palette arrangement patterns. The Convolutional Neural Network (CNN) method is chosen because it has proven effective in image classification. In addition, this study implements Explainable AI (XAI) to explain the classification results and increase user confidence in the system, using the Local Interpretable Model-agnostic Explanations (LIME) technique to generate the interpretations. The research produces a deep learning model that classifies box palette arrangements, and the LIME implementation successfully provides interpretations of the model's predictions. The results show that MobileNetV2 achieves an F1-score of 100%, with a LIME fidelity score of 0.2 and a stability score of 0.2.
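As an illustration of the approach described above, the following is a minimal sketch of applying LIME to a MobileNetV2 image classifier with the standard `lime` and TensorFlow/Keras libraries. The model file, image path, and parameter values are placeholders and assumptions for illustration only; they are not the paper's actual configuration.

```python
# Minimal sketch: explaining one prediction of a (hypothetical) fine-tuned
# MobileNetV2 box-palette classifier with LIME.
import numpy as np
import tensorflow as tf
from lime import lime_image
from skimage.segmentation import mark_boundaries

IMG_SIZE = (224, 224)  # standard MobileNetV2 input size (assumed)

# Hypothetical fine-tuned model; the paper's actual weights are not shown here.
model = tf.keras.models.load_model("box_palette_mobilenetv2.h5")

def predict_fn(images):
    """LIME passes a batch of perturbed images; return class probabilities."""
    batch = tf.keras.applications.mobilenet_v2.preprocess_input(
        np.array(images, dtype=np.float32))
    return model.predict(batch, verbose=0)

# Load and resize one palette image (path is a placeholder).
img = np.array(tf.keras.utils.load_img("palette_sample.jpg", target_size=IMG_SIZE))

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    img,               # image to explain, (H, W, 3) array
    predict_fn,        # black-box classifier
    top_labels=2,      # explain the most probable classes
    hide_color=0,      # fill color for hidden superpixels
    num_samples=1000)  # number of perturbed samples

# Highlight the superpixels that most support the top predicted class.
temp, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True,
    num_features=5, hide_rest=False)
overlay = mark_boundaries(temp / 255.0, mask)
```

The resulting `overlay` can be plotted (e.g. with matplotlib) to show which image regions drove the classification, which is the kind of interpretation the abstract refers to.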

Published
2024-08-30