Machine-in-the-Loop Process in Project Risk Management
Keywords:
Artificial Intelligence, Machine Learning, Project Risk Management, Cognitive Bias, Dual System Theory
Abstract
Despite the widespread recognition of the necessity and importance of project risk management, its effective implementation is often difficult. Much of this difficulty can be explained by the effects of cognitive biases related to heuristics when humans judge probability and frequency. In this study, we propose a machine-in-the-loop process in which a human and an AI model cooperate in a complementary manner: the AI model mitigates the influence of cognitive bias on human decision-making, while the human compensates for the lack of domain knowledge in the AI model's predictions. In a case study in which the machine-in-the-loop process was applied to project risk management, we conducted an interview survey and confirmed the effectiveness of the process through positive comments supporting the reduction of uncertainty and cognitive bias.
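To make the division of labor concrete, the following is a minimal, hypothetical sketch of one review step in such a process, not the authors' implementation. It assumes a Naive Bayes-style risk classifier trained on a binary risk-factor checklist of past projects (here scikit-learn's BernoulliNB stands in for the AI model), and the names `machine_in_the_loop_review` and `expert_adjustment` are illustrative: the model supplies a statistically grounded risk estimate and per-factor evidence, and the human reviewer applies domain knowledge the model lacks before the final judgment.

```python
# Minimal, hypothetical sketch of a machine-in-the-loop risk review step.
# Assumes a binary risk-factor checklist per project; all names are illustrative.
import numpy as np
from sklearn.naive_bayes import BernoulliNB

# Toy training data: rows = past projects, columns = risk-factor answers (1 = factor present).
X_train = np.array([
    [1, 0, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 0],
    [0, 1, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 1],
])
y_train = np.array([1, 0, 1, 0, 1, 0])  # 1 = project ran into trouble

model = BernoulliNB()
model.fit(X_train, y_train)

def machine_in_the_loop_review(checklist, expert_adjustment=0.0):
    """Combine the model's bias-free statistical estimate with a human
    expert's domain-knowledge adjustment (in probability points)."""
    p_model = model.predict_proba([checklist])[0, 1]
    # Show per-factor evidence so the reviewer can inspect it before adjusting.
    log_odds_per_factor = model.feature_log_prob_[1] - model.feature_log_prob_[0]
    for i, answered in enumerate(checklist):
        if answered:
            print(f"factor {i}: log-odds contribution {log_odds_per_factor[i]:+.2f}")
    p_final = min(max(p_model + expert_adjustment, 0.0), 1.0)
    return p_model, p_final

# Example: the model sees moderate risk; the expert, aware of an unmodeled
# contractual issue, nudges the estimate upward.
p_model, p_final = machine_in_the_loop_review([1, 0, 1, 0], expert_adjustment=+0.10)
print(f"model estimate: {p_model:.2f}, reviewed estimate: {p_final:.2f}")
```

The additive adjustment is only one possible way to fold expert judgment back in; the point of the sketch is the complementary split between the model's estimate and the human's review.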