A Reference Process for Assessing the Reliability of Predictive Analytics Results
- Authors
- S. Staudinger, C. Schütz, M. Schrefl
- Paper
- Stau24b (2024)
- Citation
Journal: Springer Nature Computer Science (SNCS), Editors: Slimane Hammoudi and Christoph Quix, Volume 5, Article No. 563, Springer Verlag, 27 pages, ISSN (electronic): 2661-8907, DOI: https://doi.org/10.1007/s42979-024-02892-4, 2024.
Abstract (English)
Organizations employ data mining to discover patterns in historic data in order to learn predictive models. Depending on the predictive model, the predictions may be more or less accurate, raising the question of the reliability of individual predictions. This paper proposes a reference process, aligned with CRISP-DM, that enables the assessment of the reliability of individual predictions obtained from a predictive model. The reference process describes the activities along the stages of the development process that are required to establish a reliability assessment approach for a predictive model. The paper then presents two specific approaches for reliability assessment in more detail: perturbation of input cases and local quality measures. Furthermore, the paper describes elements of a knowledge graph that captures important metadata about the development process and the training data. The knowledge graph serves to properly configure and employ the reliability assessment approaches.
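The perturbation of input cases mentioned in the abstract can be sketched roughly as follows: small random perturbations are applied to an input case, and the stability of the model's prediction under those perturbations serves as a reliability indicator. This is a minimal illustrative sketch, not the paper's actual procedure; the function name, the Gaussian noise model, and the toy threshold classifier are assumptions for demonstration only.

```python
import random

def perturbation_reliability(model, case, noise_scale=0.1, n_samples=100, seed=42):
    """Estimate the reliability of a single prediction by perturbing the input.

    A prediction is treated as reliable if the model's output is stable
    under small random perturbations of the input features. Returns the
    fraction of perturbed cases receiving the same label as the original
    case (1.0 = fully stable).
    """
    rng = random.Random(seed)
    original = model(case)
    agree = 0
    for _ in range(n_samples):
        # Hypothetical noise model: independent Gaussian noise per feature.
        perturbed = [x + rng.gauss(0, noise_scale) for x in case]
        if model(perturbed) == original:
            agree += 1
    return agree / n_samples

# Toy threshold classifier standing in for a trained predictive model.
model = lambda c: int(sum(c) > 1.0)

# A case far from the decision boundary yields a high stability score,
# while a case on the boundary yields a score near 0.5.
stable = perturbation_reliability(model, [2.0, 1.5])
borderline = perturbation_reliability(model, [0.5, 0.5])
```

In this sketch, a low score flags an individual prediction as unreliable even when the model's overall accuracy is high, which matches the abstract's distinction between model-level accuracy and the reliability of individual predictions.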