Local Interpretability Methods for Time Series Modeling
Interpretability aims to improve our understanding of model behavior. Local interpretability methods can explain the specific predictions of a model and build trust between the model and its users, providing practitioners with new insights. The temporal nature and high dimensionality of time series data pose challenges to interpretability that are specific to this domain of machine learning. A better understanding of time series interpretability methods, together with suitable evaluation metrics for measuring the accuracy of explanations, can contribute to further progress in time series modeling. This PhD thesis includes four research directions that focus on the local interpretability of time series models. The first research direction introduces two novel evaluation metrics for comparing local interpretability methods on generic time series regression problems. We evaluate the proposed metrics through an extensive numerical study, and find that the SHAP method provides the most accurate explanations among the tested methods. Our second research problem is an application of interpretability in the sales forecasting and finance domains: we propose a unified framework to predict financial commentaries from the financial data generated by a company. We evaluate multiple time series classification models for the prediction task, and use local interpretability methods to explain the predictions. We find that the proposed framework, supported by machine learning and local interpretability methods, offers new opportunities to leverage management information systems, providing insights to management on key financial issues, including sales forecasting and inventory management. As the third research problem, we study how local interpretability methods can be used to explain time series clustering models. We provide explanations for the clustering algorithms by using classification models as intermediate models to predict the cluster labels. We perform a detailed numerical study, comparing multiple datasets, clustering models, and classification models. Through a careful analysis of the results, we discuss how and when the proposed methodology can be used to obtain insights into the corresponding model behavior. Finally, the fourth research problem involves developing a locally interpretable deep neural network model for time series forecasting. We evaluate the model's accuracy and explanations using multiple datasets and methods, and find that it achieves performance similar to that of its non-interpretable counterparts, while remaining interpretable.
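The surrogate-classifier idea from the third research direction can be sketched in a few lines: cluster the series, train an intermediate classifier on the cluster labels, and explain individual cluster assignments with a local attribution method. The sketch below is purely illustrative, not the thesis implementation; the synthetic data, the choice of KMeans, RandomForestClassifier, and SHAP's TreeExplainer are all assumptions made for the example.

```python
# Illustrative sketch (assumed setup, not the thesis code): explain a time series
# clustering by training a classifier on the cluster labels and applying SHAP.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
import shap

rng = np.random.default_rng(0)
# Synthetic univariate series: 200 series of length 50, from two distinct shapes
X = np.concatenate([
    np.sin(np.linspace(0, 6, 50)) + 0.1 * rng.standard_normal((100, 50)),
    np.cos(np.linspace(0, 6, 50)) + 0.1 * rng.standard_normal((100, 50)),
])

# Step 1: cluster the series (each time step treated as a feature)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Step 2: fit an intermediate classifier that predicts the cluster labels
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

# Step 3: explain individual cluster assignments with SHAP; the attributions are
# per time step, indicating which parts of a series drove its cluster assignment
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X[:5])
print(np.asarray(shap_values).shape)
```

In this toy setting the attributions highlight the time steps where the sine- and cosine-shaped clusters differ most; in practice any sufficiently accurate classifier and any local attribution method could play these roles.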
Language
- eng
Degree
- Doctor of Philosophy
Program
- Mechanical and Industrial Engineering
Granting Institution
- Ryerson University
LAC Thesis Type
- Dissertation