Focus: Computer Science, Systems Engineering, Robotics, Electrical Engineering, Software Engineering
The course on Dependability Evaluation and Safe AI has the potential to make a significant contribution to the research excellence framework. It provides a comprehensive overview of state-of-the-art techniques for dependability evaluation and analysis, as well as for safe and interpretable machine learning, equipping participants with the skills needed to tackle complex, critical problems in a variety of fields. The focus on runtime monitoring and dynamic adaptation of models reflects the current trend towards flexible, resilient systems that can adapt to changing conditions, while the emphasis on Safe AI underlines the importance of building machine learning models that are not only accurate but also robust, secure, and interpretable. By teaching participants how to apply these techniques to real-world problems, the course can foster research that advances the state of the art in dependability evaluation and Safe AI, leading to new discoveries and innovative solutions with a positive impact on society.
Part I: Design Time and Runtime Dependability Evaluation and Analysis (15 hours)
Day 1:
Introduction to dependability evaluation and analysis
Introduction to Markov models
Constructing Markov models for fault diagnosis and analysis
Example: Markov model for a simple system
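As a minimal illustration of the Day 1 example, the sketch below models a single repairable component as a two-state continuous-time Markov chain and computes its availability; the failure rate `lam` and repair rate `mu` are assumed, purely illustrative values.

```python
import numpy as np
from scipy.linalg import expm

# Two-state repairable system: state 0 = operational, state 1 = failed.
lam = 1e-3  # assumed failure rate (per hour), illustrative only
mu = 1e-1   # assumed repair rate (per hour), illustrative only

# Generator (transition-rate) matrix of the continuous-time Markov chain.
Q = np.array([[-lam, lam],
              [  mu, -mu]])

p0 = np.array([1.0, 0.0])   # start in the operational state
for t in (10, 100, 1000):
    pt = p0 @ expm(Q * t)   # transient state probabilities P(t) = P(0) e^{Qt}
    print(f"t = {t:4d} h  availability = {pt[0]:.4f}")

# Closed-form steady-state availability for comparison: mu / (lam + mu).
print("steady-state availability:", mu / (lam + mu))
```

The same construction generalises to the fault diagnosis models of Day 1 by enlarging the state space and the generator matrix.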
Day 2:
Introduction to Petri nets
Constructing Petri nets for dependability evaluation
Example: Petri nets for a manufacturing system (see the sketch below)
Petri net simulation and analysis
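The following is a minimal, library-free sketch of the token game for a toy manufacturing cell, in the spirit of the Day 2 example; the places, transitions, and initial marking are assumed for illustration only.

```python
import random

# Initial marking of a tiny manufacturing cell (place -> number of tokens).
marking = {"waiting": 3, "idle": 1, "busy": 0, "done": 0}

# Transitions as (input places, output places) with arc weights.
transitions = {
    "start_job":  ({"waiting": 1, "idle": 1}, {"busy": 1}),
    "finish_job": ({"busy": 1},               {"idle": 1, "done": 1}),
}

def enabled(name):
    ins, _ = transitions[name]
    return all(marking[p] >= w for p, w in ins.items())

def fire(name):
    ins, outs = transitions[name]
    for p, w in ins.items():
        marking[p] -= w
    for p, w in outs.items():
        marking[p] += w

# Play the token game until no transition is enabled (all parts processed).
random.seed(0)
while True:
    choices = [t for t in transitions if enabled(t)]
    if not choices:
        break
    t = random.choice(choices)
    fire(t)
    print(f"fired {t:10s} -> {marking}")
```

Adding timing, for example exponentially distributed firing delays, turns the same structure into a stochastic Petri net suitable for dependability evaluation.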
Day 3:
Introduction to Dynamic Fault Trees (DFTs)
Constructing DFTs for dependability analysis
Example: DFT for a safety-critical system (see the sketch below)
DFT simulation and analysis
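As a small illustration of the kind of dynamic gate discussed on Day 3, the sketch below estimates by Monte Carlo simulation the failure probability of a priority-AND (PAND) gate over two basic events; the failure rates and mission time are assumed, illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000
mission_time = 1_000.0        # hours (assumed)
lam_a, lam_b = 1e-3, 5e-4     # assumed exponential failure rates (per hour)

# Sample the failure times of basic events A and B.
t_a = rng.exponential(1.0 / lam_a, n_samples)
t_b = rng.exponential(1.0 / lam_b, n_samples)

# PAND gate semantics: the gate fails only if A fails before B
# and both failures occur within the mission time.
gate_fails = (t_a < t_b) & (t_b <= mission_time)
print("estimated PAND failure probability:", gate_fails.mean())
```

Analytical treatments of the same gates via (semi-)Markov processes and modular approaches are covered in the reference list.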
Day 4:
Runtime dependability evaluation and analysis
Monitoring techniques for fault detection and diagnosis
Dynamic adaptation of models
Example: Runtime monitoring of a distributed system
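A minimal sketch of one ingredient of the Day 4 example, a heartbeat-based failure detector for nodes of a distributed system, is shown below; the node names and the timeout value are hypothetical.

```python
import time

TIMEOUT = 2.0          # assumed heartbeat timeout in seconds
last_heartbeat = {}    # node id -> time of last heartbeat

def record_heartbeat(node_id):
    """Call whenever a heartbeat message arrives from a node."""
    last_heartbeat[node_id] = time.monotonic()

def suspected_failures():
    """Nodes whose last heartbeat is older than TIMEOUT are suspected failed."""
    now = time.monotonic()
    return [n for n, t in last_heartbeat.items() if now - t > TIMEOUT]

# Toy run: node "B" stops sending heartbeats and is eventually suspected.
record_heartbeat("A")
record_heartbeat("B")
time.sleep(2.5)
record_heartbeat("A")
print("suspected nodes:", suspected_failures())   # expected: ['B']
```

In a full runtime monitor, such suspicion events would feed a dependability model (for example, the Markov models of Day 1) that is re-evaluated as operating conditions change.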
Part II: Safety Assurance of Artificial Intelligence (15 hours)
Day 5:
Introduction to Safe AI
Statistical analysis for SafeML
Explainability methods for machine learning
Example: Interpretation of a machine learning model using LIME
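For the Day 5 example, a minimal sketch using the open-source `lime` package on a scikit-learn classifier is given below; the dataset and classifier are assumed for illustration.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer   # pip install lime

# Train an ordinary classifier on an illustrative dataset.
data = load_iris()
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(data.data, data.target)

# Build a tabular explainer from the training data.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Explain a single prediction: which features drove the model's output?
exp = explainer.explain_instance(data.data[0], clf.predict_proba, num_features=4)
for feature, weight in exp.as_list():
    print(f"{feature:35s} {weight:+.3f}")
```

LIME fits a local linear surrogate around the chosen instance, so the printed weights describe the model's behaviour only in that neighbourhood.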
Day 6:
Safe Machine Learning (SafeML)
Robustness and Security in machine learning
Defense against adversarial attacks
Example: Building a SafeML model using differential privacy
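Differential privacy can be illustrated in a few lines. The sketch below applies the Laplace mechanism to a simple statistic rather than to a full SafeML pipeline; the bounds, epsilon values, and data are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def dp_mean(values, lower, upper, epsilon):
    """Epsilon-differentially-private mean via the Laplace mechanism.
    Clipping to [lower, upper] bounds each record's influence, so the
    L1 sensitivity of the mean is (upper - lower) / n."""
    clipped = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(clipped)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return clipped.mean() + noise

ages = rng.integers(18, 90, size=1_000)          # illustrative data
print("true mean:             ", ages.mean())
print("private mean (eps=0.5):", dp_mean(ages, 18, 90, epsilon=0.5))
print("private mean (eps=5.0):", dp_mean(ages, 18, 90, epsilon=5.0))
```

Training-time variants such as DP-SGD follow the same pattern of bounding individual influence and adding calibrated noise, at the cost of some accuracy, which is one of the trade-offs discussed on Day 8.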
Day 7:
Introduction to SMILE (Safe Machine Learning Interpretable and Learning Enabled)
SMILE: A novel approach to safe and interpretable machine learning
Overview of SMILE’s framework and algorithms
Example: Implementing SMILE on a real-world dataset
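SMILE's actual framework and algorithms are introduced in the lectures. As a purely hypothetical warm-up, the sketch below shows one way a statistical distance (here the empirical Wasserstein distance) can be used to score local feature influence by perturbation; it is not the SMILE implementation, and the dataset, model, and sampling scheme are assumptions.

```python
import numpy as np
from scipy.stats import wasserstein_distance
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def local_influence(model, X_train, x, n_samples=500, seed=0):
    """For each feature, replace its value in copies of x with random draws
    from the training data and measure (by Wasserstein distance) how far the
    predicted class-1 probabilities move from the original prediction."""
    rng = np.random.default_rng(seed)
    base = np.full(n_samples, model.predict_proba(x.reshape(1, -1))[0, 1])
    scores = []
    for j in range(X_train.shape[1]):
        perturbed = np.tile(x, (n_samples, 1))
        perturbed[:, j] = rng.choice(X_train[:, j], size=n_samples)
        probs = model.predict_proba(perturbed)[:, 1]
        scores.append(wasserstein_distance(base, probs))
    return np.array(scores)

scores = local_influence(clf, X, X[0])
top = np.argsort(scores)[::-1][:5]
print("locally most influential feature indices:", top)
print("influence scores:", np.round(scores[top], 4))
```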
Day 8:
Case studies and open discussions
Analysis and comparison of different methods for dependability evaluation and Safe AI
Evaluation of trade-offs between accuracy, interpretability, and robustness
Open discussion and future directions for research in dependability evaluation of AI
The teaching hours of this course will be distributed as follows: 20 hours to Professor Koorosh Aslansefat and 10 hours to Professor André Luiz de Oliveira.
References:
A. Avizienis, J. C. Laprie, B. Randell, and C. Landwehr, "Basic concepts and taxonomy of dependable and secure computing," IEEE Transactions on Dependable and Secure Computing, vol. 1, no. 1, pp. 11-33, Jan.-March 2004. DOI: https://doi.org/10.1109/TDSC.2004.2.
B. Gallina, L. Montecchi, A. L. de Oliveira, and L. Bressan, "Multiconcern, Dependability-Centered Assurance Via a Qualitative and Quantitative Coanalysis," IEEE Software, vol. 39, no. 4, pp. 39-47, July-Aug. 2022. DOI: https://doi.org/10.1109/MS.2022.3167370.
C. Walker, C. Rothon, K. Aslansefat, Y. Papadopoulos, and N. Dethlefs, "A Deep Learning Framework for Wind Turbine Repair Action Prediction Using Alarm Sequences and Long Short Term Memory Algorithms," in C. Seguin, M. Zeller, and T. Prosvirnova (Eds.), Model-Based Safety and Assessment (IMBSA 2022), Lecture Notes in Computer Science, vol. 13525, Springer, Cham, 2022. DOI: https://doi.org/10.1007/978-3-031-15842-1_14.
G. Ciardo, R. German, and C. Lindemann, “A characterization of the stochastic process underlying a stochastic Petri net,” IEEE Trans. Softw. Eng., vol. 20, no. 7, pp. 506–515, Jul. 1994. DOI: https://doi.org/10.1109/32.297939.
K. Aslansefat, P. Nikolaou, M. Walker, M. N. Akram, I. Sorokos, J. Reich, P. Kolios, M. K. Michael, T. Theocharides, G. Ellinas, D. Schneider, and Y. Papadopoulos, "SafeDrones: Real-Time Reliability Evaluation of UAVs Using Executable Digital Dependable Identities," in Model-Based Safety and Assessment: 8th International Symposium, IMBSA 2022, Munich, Germany, September 5-7, 2022, Proceedings, Springer-Verlag, Berlin, Heidelberg, pp. 252-266. DOI: https://doi.org/10.1007/978-3-031-15842-1_18.
K. Aslansefat, S. Kabir, A. Abdullatif, V. Vasudevan, and Y. Papadopoulos, "Toward Improving Confidence in Autonomous Vehicle Software: A Study on Traffic Sign Recognition Systems," Computer, vol. 54, no. 8, pp. 66-76, Aug. 2021. DOI: https://doi.org/10.1109/MC.2021.3075054.
K. Aslansefat, I. Sorokos, D. Whiting, R. T. Kolagari, and Y. Papadopoulos, "SafeML: Safety Monitoring of Machine Learning Classifiers through Statistical Difference Measures," in Proc. 7th Int. Symp. Model-Based Safety and Assessment, Lecture Notes in Computer Science, vol. 12297, Springer Nature, 2020, pp. 197-211. DOI: https://doi.org/10.1007/978-3-030-58920-2_13.
K. Aslansefat and G.-R. Latif-Shabgahi, "A Hierarchical Approach for Dynamic Fault Trees Solution Through Semi-Markov Process," IEEE Transactions on Reliability, vol. 69, no. 3, pp. 986-1003, Sept. 2020. DOI: https://doi.org/10.1109/TR.2019.2923893.
L. Bressan, A. L. de Oliveira, F. C. Campos, L. Montecchi, R. Capilla, D. Parker, K. Aslansefat, and Y. Papadopoulos, "Modeling the Variability of System Safety Analysis Using State-Machine Diagrams," in Model-Based Safety and Assessment: 8th International Symposium, IMBSA 2022, Munich, Germany, September 5-7, 2022, Proceedings, Springer-Verlag, Berlin, Heidelberg, pp. 43-59. DOI: https://doi.org/10.1007/978-3-031-15842-1_4.
L. Bressan, A. L. de Oliveira, F. C. Campos, and R. Capilla, "A variability modeling and transformation approach for safety-critical systems," in Proceedings of the 15th International Working Conference on Variability Modelling of Software-Intensive Systems (VaMoS '21), Association for Computing Machinery, New York, NY, USA, Article 6, pp. 1-7, 2021. DOI: https://doi.org/10.1145/3442391.3442398.
S. Kabir, K. Aslansefat, P. Gope, F. Campean, and Y. Papadopoulos, "Combining Drone-based Monitoring and Machine Learning for Online Reliability Evaluation of Wind Turbines," 2022 International Conference on Computing, Electronics & Communications Engineering (iCCECE), Southend, United Kingdom, 2022, pp. 53-58. DOI: https://doi.org/10.1109/iCCECE55162.2022.9875095.
S. Kabir, K. Aslansefat, I. Sorokos, Y. Papadopoulos, and S. Konur, "A Hybrid Modular Approach for Dynamic Fault Tree Analysis," IEEE Access, vol. 8, pp. 97175-97188, 2020. DOI: https://doi.org/10.1109/ACCESS.2020.2996643.
Professors: André Luiz de Oliveira (UFJF), Koorosh Aslansefat (University of Hull – UK)
Language: English
Place:
Course load: 30 hours
Date & Time: July 24 – August 4, from 9 am to 12 pm
Target audience: undergraduate and graduate students
Spots available:
Sustainable Development Goals (SDG): 4, 9, 11