H-BRS Bibliography
Due to expected positive impacts on business, the application of artificial intelligence has expanded rapidly. The decision-making procedures of these models are often complex and not easily understandable to a company's stakeholders, i.e. the people who have to follow up on recommendations or make sense of a system's automated decisions. This opaqueness and black-box nature might hinder adoption, as users struggle to understand and trust the predictions of AI models. Recent research on eXplainable Artificial Intelligence (XAI) has focused mainly on explaining models to AI experts for the purpose of debugging and improving model performance. In this article, we explore how such systems could be made explainable to the stakeholders. To this end, we propose a new convolutional neural network (CNN)-based explainable predictive model for product backorder prediction in inventory management. Backorders are orders that customers place for products that are currently not in stock. The company then takes the risk of producing or acquiring the backordered products, while in the meantime customers can cancel their orders if fulfilment takes too long, leaving the company with unsold items in its inventory. Hence, for strategic inventory management, companies need to make decisions based on assumptions. Our argument is that these tasks can be improved by offering explanations for AI recommendations. Accordingly, our research investigates how such explanations could be provided, employing Shapley additive explanations to explain the model's overall priorities in decision-making. In addition, we introduce locally interpretable surrogate models that can explain any individual prediction of a model. The experimental results demonstrate effectiveness in predicting backorders in terms of standard evaluation metrics, outperforming known related work with an AUC of 0.9489. Our approach demonstrates how current limitations of predictive technologies can be addressed in the business domain.
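As an illustration of the explanation pipeline described above, the following sketch combines global Shapley additive explanations with a local surrogate explainer, assuming the `shap` and `lime` packages. A random forest on synthetic data stands in for the paper's CNN and inventory dataset; all feature names and parameters are illustrative, not the authors' actual pipeline.

```python
# Illustrative sketch: global SHAP explanations plus a LIME local surrogate.
# A random forest on synthetic data replaces the paper's CNN for brevity.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an inventory dataset (stock level, lead time, ...).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Global view: Shapley additive explanations of the model's overall feature
# priorities. KernelExplainer is model-agnostic; a small background sample
# keeps the estimation tractable.
background = shap.sample(X_tr, 100, random_state=0)
explainer = shap.KernelExplainer(model.predict_proba, background)
shap_values = explainer.shap_values(X_te[:50])

# Local view: an interpretable surrogate fitted around a single prediction.
lime_explainer = LimeTabularExplainer(
    X_tr, feature_names=feature_names,
    class_names=["no backorder", "backorder"], mode="classification")
exp = lime_explainer.explain_instance(X_te[0], model.predict_proba,
                                      num_features=5)
print(exp.as_list())  # top local feature contributions for this order
```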
Focus on what matters: improved feature selection techniques for personal thermal comfort modelling
(2022)
Occupants' personal thermal comfort (PTC) is indispensable to their well-being, physical and mental health, and work efficiency. Predicting PTC preferences in a smart home can be a prerequisite for adjusting the indoor temperature to provide a comfortable environment. In this research, we focus on identifying the features relevant to predicting PTC preferences. We propose a machine learning-based predictive framework employing supervised feature selection techniques, applying two such techniques to select optimal sets of features and improve thermal preference prediction performance. The experimental results on a public PTC dataset demonstrate the effectiveness of the applied feature selection techniques: our PTC prediction framework achieved state-of-the-art performance in terms of accuracy, Cohen's kappa, and area under the curve (AUC), outperforming conventional methods.
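A minimal sketch of the kind of supervised feature selection the abstract describes, assuming scikit-learn: a mutual-information filter and recursive feature elimination, each scored by cross-validated AUC. The dataset and settings are illustrative, not the paper's PTC data, and which two techniques the authors actually used is not specified here.

```python
# Compare two supervised feature selection techniques by cross-validated AUC.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE, SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-in: 30 candidate features, few of them informative.
X, y = make_classification(n_samples=1000, n_features=30, n_informative=8,
                           random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)

selectors = {
    "mutual_info": SelectKBest(mutual_info_classif, k=8),
    "rfe": RFE(RandomForestClassifier(n_estimators=50, random_state=0),
               n_features_to_select=8),
}
for name, selector in selectors.items():
    # Wrapping selection in a pipeline keeps it inside each CV fold,
    # so the selected features never leak from the validation split.
    pipe = make_pipeline(selector, clf)
    auc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: AUC = {auc:.3f}")
```

For a multi-class thermal preference target, the scorer would change to a one-vs-rest AUC (e.g. `scoring="roc_auc_ovr"`); the binary setup above is only for brevity.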
The rapid progress in sensor technology has empowered smart home systems to efficiently monitor and control household appliances. AI-enabled smart home systems can forecast future household energy demand, so that occupants can revise their energy consumption plans and become aware of optimal consumption practices. However, deep learning (DL)-based demand forecasting models are complex, and decisions from such black-box models are often considered opaque. Recently, eXplainable Artificial Intelligence (XAI) has garnered substantial attention for explaining the decisions of complex DL models, with the primary objective of enhancing the acceptance, trust, and transparency of AI models by offering explanations for the decisions they provide. We propose ForecastExplainer, an explainable deep energy demand forecasting framework that leverages Deep Learning Important Features (DeepLIFT) to approximate Shapley values, mapping the contributions of different appliances and features over time. The generated explanations shed light on each prediction by highlighting how energy consumption attributes contribute over time: the responsible appliances, consumption by household areas and activities, and seasonal effects. Experiments on household datasets demonstrated the effectiveness of our method in accurate forecasting. We also designed a new metric to evaluate the generated explanations, and the experimental results indicate their comprehensibility. These insights might empower users to optimize energy consumption practices, fostering AI adoption in smart applications.
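The DeepLIFT-based Shapley approximation the abstract describes corresponds to SHAP's DeepExplainer ("Deep SHAP"). A hedged sketch on a toy Keras regressor over lagged consumption features follows; the architecture, feature layout, and data are assumptions, not the ForecastExplainer implementation, and it presumes a `shap` version compatible with the installed TensorFlow/Keras.

```python
# Hedged sketch: Deep SHAP (DeepLIFT-style backprop rules plus a background
# distribution) approximating Shapley values for a toy demand regressor.
import numpy as np
import shap
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 24)).astype("float32")  # e.g. 24 lagged readings
y = X[:, :6].sum(axis=1, keepdims=True)           # toy demand target

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(24,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)

# DeepExplainer propagates DeepLIFT-like multipliers against the background
# sample to approximate per-feature Shapley values for each forecast.
explainer = shap.DeepExplainer(model, X[:100])
shap_values = explainer.shap_values(X[:10])
# Each attribution maps a lagged input (appliance/time feature) to its
# contribution to the predicted demand for that sample.
```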
Explaining AI Decisions: Towards Achieving Human-Centered Explainability in Smart Home Environments
(2024)
Improved Thermal Comfort Model Leveraging Conditional Tabular GAN Focusing on Feature Selection
(2024)
Indoor thermal comfort in both homes and workplaces significantly influences the health and productivity of inhabitants. A heating system controlled by Artificial Intelligence (AI) can automatically calibrate indoor thermal conditions by analyzing various physiological and environmental variables. To ensure a comfortable indoor environment, smart home systems can adjust thermal-comfort-related parameters based on accurate predictions of inhabitants' preferences. Modeling personal thermal comfort preferences poses two significant challenges: the inadequacy of data and its high dimensionality. An adequate amount of data is a prerequisite for training efficient machine learning (ML) models, while high-dimensional data tends to contain multiple irrelevant and noisy features that can hinder ML models' performance. To address these challenges, we propose a framework for predicting personal thermal comfort preferences that combines the conditional tabular generative adversarial network (CTGAN) with multiple feature selection techniques. We first address the data inadequacy challenge by applying CTGAN to generate synthetic data samples, accounting for the challenges posed by multimodal distributions and categorical features. Then, multiple feature selection techniques are employed to identify the best possible sets of features. Experimental results across a wide range of settings on a standard dataset demonstrated state-of-the-art performance in predicting personal thermal comfort preferences, and indicated that ML models trained on synthetic data achieved significantly better performance than models trained on real data. Overall, our method, combining CTGAN and feature selection techniques, outperformed known related work in thermal comfort prediction in terms of multiple evaluation metrics, including area under the curve (AUC), Cohen's kappa, and accuracy. Additionally, we present a global, model-agnostic explanation of the thermal preference prediction system, providing an avenue for thermal comfort experiment designers to consciously select the data to be collected.
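A minimal sketch of the CTGAN-plus-feature-selection pipeline, assuming the `ctgan` package and scikit-learn. The comfort dataset, column names, and sample sizes are invented for illustration; the paper's actual data and chosen selection techniques may differ.

```python
# Augment a small mixed-type comfort dataset with CTGAN, then select
# features and fit a classifier on the synthetic samples.
import numpy as np
import pandas as pd
from ctgan import CTGAN
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif

rng = np.random.default_rng(0)
real = pd.DataFrame({  # invented stand-in for a PTC dataset
    "air_temp": rng.normal(24, 2, 300),
    "humidity": rng.uniform(30, 70, 300),
    "clothing": rng.choice(["light", "medium", "heavy"], 300),
    "preference": rng.choice(["cooler", "no change", "warmer"], 300),
})

# CTGAN handles mixed continuous/categorical columns and multimodal
# distributions; categorical columns must be declared explicitly.
gan = CTGAN(epochs=100)
gan.fit(real, discrete_columns=["clothing", "preference"])
synthetic = gan.sample(3000)  # augment the scarce real data

# Downstream: encode, keep the most informative features, fit the model.
X = pd.get_dummies(synthetic.drop(columns="preference"))
y = synthetic["preference"]
X_sel = SelectKBest(mutual_info_classif, k=3).fit_transform(X, y)
model = RandomForestClassifier(random_state=0).fit(X_sel, y)
```

In a setting like the paper's, the model trained on synthetic samples would still be evaluated on held-out real data, so that any artifacts of the generator do not inflate the reported performance.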