Keynotes & Programme
Please refer to the MODELS 2025 programme for additional information.
Monday October 6
- 08:30 Welcome & Keynote
- Conference Opening
Erik Fredericks and Eugene Syriani
- Keynote: Multidisciplinary Model-Based Approaches to Assurance for Safety-Critical Learning-Enabled Autonomous Systems.
Betty H.C. Cheng

Trustworthy artificial intelligence (Trusted AI) is essential when autonomous, safety-critical systems use learning-enabled components (LECs) in uncertain environments. When reliant on deep learning, these learning-enabled autonomous systems (LEAS) must address the reliability, interpretability, and robustness (collectively, the assurance) of learning models. Three types of uncertainty most significantly affect assurance. First, uncertainty about the physical environment can cause suboptimal, and sometimes catastrophic, results as the system struggles to adapt to unanticipated or poorly understood environmental conditions. For example, when lane markings are occluded (on the camera, on the physical lanes, or both), lane management functionality can be critically compromised. Second, uncertainty in the cyber environment can create unexpected and adverse consequences, including not only performance impacts (network load, real-time responses, etc.) but also potential threats or overt (cybersecurity) attacks. Third, uncertainty associated with the data used to train and validate AI components has the potential not only to cause LECs to fail unexpectedly but also to instill a false sense of trust in interacting components and stakeholders. While learning-enabled technologies have made great strides in addressing uncertainty, challenges remain in assuring such systems when they encounter uncertainty not addressed in the training data. Furthermore, we need to consider LEAS as first-class software-based systems that should be rigorously developed, verified, and maintained (i.e., software engineered). In addition to developing specific strategies to address these concerns, appropriate software frameworks are needed to coordinate LECs and ensure they deliver acceptable behavior even under uncertain conditions.
We further posit that, given the increasing complexity of LEAS and the lack of code-based artifacts, it becomes imperative to take a model-based approach to LEAS assurance. To this end, this presentation overviews a number of our multidisciplinary research projects involving industrial collaborators, which collectively support a software engineering, model-based approach to Trusted AI and to providing assurance for learning-enabled autonomous systems. In addition to sharing lessons learned from more than two decades of research on assurance for autonomous systems, the presentation overviews near-term and longer-term research challenges for safety-critical, learning-enabled autonomous systems.
Betty H.C. Cheng is a Professor in the Department of Computer Science and Engineering at Michigan State University. Her research focuses on trusted AI, automated software engineering, self-adaptive systems, requirements engineering, model-driven engineering, and automotive cyber security, with applications to intelligent transportation and vehicle systems. She collaborates extensively with industry to facilitate technology transfer. Her work has been funded by NSF, ONR, DARPA, NASA, AFRL, ARO, and numerous industrial partners. She is an Associate Editor-in-Chief for IEEE Transactions on Software Engineering and serves on the editorial boards of Requirements Engineering Journal and Software and Systems Modeling. She was Technical Program Co-Chair of ICSE 2013, the flagship conference in software engineering. She received her BS from Northwestern University and her MS and PhD from the University of Illinois Urbana-Champaign, all in computer science. More details: https://www.cse.msu.edu/~chengb.
- 09:40 Vision for the Future of Engineering Systems
- Modeling: The Heart and Soul of Engineering Smart Ecosystems.
Antonio Bucchiarone, Benoit Combemale, Alfonso Pierantonio, Nelly Bencomo, Mark van den Brand, Jean-Michel Bruel, Antonio Cicchetti, Juri Di Rocco, Leen Lambers, Judith Michael, Bernhard Rumpe, Mikael Sjodin, Gabriele Taentzer, Matthias Tichy, Hans Vangheluwe, Manuel Wimmer and Steffen Zschaler
- 10:00 Coffee Break
- 10:30 Session: Traceability and Verification
- Using Concept Traceability to Investigate UML Class Diagram Evolution in Long-Existing FOSS Projects.
Zaki Pauzi and Andrea Capiluppi
- Fine-Grained Confidentiality and Authenticity Modeling and Verification for Embedded Systems.
Jawher Jerray, Bastien Sultan and Ludovic Apvrille
- Bridging the V-Model: Early Pre-Verification of Digital System Architectures via Estimation and Back-Annotation.
Christian Seifert, Christian Steger and Tiberio Fanti
- 12:00 Lunch
- 13:30 Session: Systems Engineering
- Service-oriented Modeling of Mixed-Fleet Systems in SysML v2 in a Harbor Logistics Scenario.
Hamza Haoui, Bianca Wiesmayr, David Hastbacka and Kari Systa
- DarTwin made precise by SysML v2 – An Experiment.
Oystein Haugen, Stefan Klikovits, Martin Arthur Andersen, Jonathan Beaulieu, Francis Bordeleau, Joachim Denil and Joost Mertens
- Model-Based Systems Engineering Perspectives: A Survey of Practitioner Experiences and Challenges.
Maged Elaasar, Abdelwahab Hamou-Lhadj, Bentley Oakes and Mohammad Hamdaqa
- 15:00 Coffee Break
- 15:30 Session: Formalization
- Mind the Leak: Formalizing Confidentiality Preservation Assessment of Multi-Model Consistency Checking Systems.
Sebastian Bergemann, Andreas Bayha, Derui Zhu, Mohammad Sadeghi, Colin Atkinson and Alexander Pretschner
- Optimizing Industrial Operations through Business Process Formalization.
Mihal Brumbulli and Emmanuel Gaudin
Tuesday October 7
- 08:30 Session: LLMs for Model-Based Engineering
- Mitigating Hallucinations in SysML v2 Generation Using LLMs and a Tri-Layered Knowledge Graph Reasoning Framework.
Richard Qualis
- Towards LLM Agents for Model-Based Engineering: A Case in Transformation Selection.
Zakaria Hachm, Théo Le Calvar, Hugo Bruneliere and Massimo Tisi
- Automated AADL Architecture Modeling: Leveraging Large Language Models for Safety-Critical Software.
Yaxin Zou, Zhibin Yang, Hao Liu, Jiawei Liang, Zonghua Gu and Yong Zhou
- 10:00 Coffee Break
- 10:30 Session: Trustworthy AI Systems
- Model-Driven Root Cause Analysis for Trustworthy AI: A Data-and-Model-Centric Explanation Framework.
Emmanuel Charleson Dapaah and Jens Grabowski
- A Real-Time Multi-modal Framework for Human-Centric Requirements Engineering in Autonomous Vehicles.
Farzaneh Kargozari and Sanaa Alwidian
- 11:30 Closing ceremony
- Best paper award
- 12:00 Lunch