
Advancing Threat Modelling in the ENTRUST Project

In the ENTRUST project, Siemens has advanced cybersecurity for Connected Medical Devices (CMDs) by integrating Large Language Models (LLMs) into its threat modelling process. This initiative aims to enhance the identification and mitigation of vulnerabilities in CMD software, ensuring safer deployment in healthcare environments.


Siemens’ Innovative Approach to Software Verification and Threat Modelling 


Within the Trust Assessment Framework (TAF), Siemens developed a Software Verification module that utilises static analysis and fuzz testing to detect potential vulnerabilities in CMD software before deployment (Figure 1).


Figure 1: Software Verification Flow
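To give a flavour of the fuzz-testing stage, the following is a minimal, self-contained sketch of a mutation-based fuzzer. The toy `parse_packet` target and its length-handling bug are hypothetical illustrations; the ENTRUST module uses far more sophisticated tooling.

```python
import random

def parse_packet(data: bytes) -> int:
    """Toy firmware packet parser (hypothetical) with a length-trust bug."""
    if len(data) < 2:
        raise ValueError("packet too short")  # expected rejection
    declared_len = data[0]
    payload = data[1:]
    # Bug: trusts the declared length without bounds-checking the payload.
    return payload[declared_len - 1]

def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Apply one random mutation: bit-flip, byte insertion, or byte deletion."""
    data = bytearray(seed)
    op = rng.choice(("flip", "insert", "drop"))
    pos = rng.randrange(max(len(data), 1))
    if op == "flip" and data:
        data[pos] ^= 1 << rng.randrange(8)
    elif op == "insert":
        data.insert(pos, rng.randrange(256))
    elif op == "drop" and data:
        del data[pos]
    return bytes(data)

def fuzz(target, seeds, iterations=10_000, rng=None):
    """Tiny mutation-based fuzzer: collect inputs that crash the target."""
    rng = rng or random.Random(0)
    crashes = []
    for _ in range(iterations):
        candidate = mutate(rng.choice(seeds), rng)
        try:
            target(candidate)
        except ValueError:
            pass  # graceful rejection is not a crash
        except Exception:
            crashes.append(candidate)  # unexpected failure: potential vulnerability
    return crashes

crashes = fuzz(parse_packet, seeds=[bytes([3, 1, 2, 3, 4])])
print(f"found {len(crashes)} crashing inputs")
```

Real fuzzers add coverage feedback and corpus management; even this naive loop, though, quickly surfaces the bounds-check bug by mutating the length byte.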

Complementing this, they introduced a generative AI-driven Threat Modelling module (Figure 2). This module leverages LLMs to analyse external data sources, such as the National Institute of Standards and Technology (NIST) and Common Weakness Enumeration (CWE) databases, to identify device vulnerabilities and generate plausible attack scenarios. 


Figure 2: Threat Modelling Flow
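As a rough illustration of this flow, the sketch below folds public CWE entries for a device into a prompt asking a model for plausible attack scenarios. The device profile, the CWE selection, and the `query_llm` placeholder are all illustrative assumptions, not the ENTRUST implementation.

```python
# Sketch: ground an LLM prompt in public weakness data (CWE-style entries).

DEVICE_PROFILE = {
    "name": "infusion pump",  # hypothetical CMD
    "interfaces": ["Bluetooth LE", "USB service port"],
    "software": ["embedded Linux", "proprietary dosing app"],
}

# Illustrative entries in the style of the public CWE catalogue.
CWE_FINDINGS = [
    ("CWE-306", "Missing Authentication for Critical Function"),
    ("CWE-319", "Cleartext Transmission of Sensitive Information"),
]

def build_threat_prompt(profile: dict, findings: list) -> str:
    """Assemble a prompt that grounds the LLM in public vulnerability data."""
    lines = [
        f"Device: {profile['name']}",
        f"Interfaces: {', '.join(profile['interfaces'])}",
        f"Software: {', '.join(profile['software'])}",
        "Relevant public weaknesses:",
    ]
    lines += [f"- {cwe_id}: {title}" for cwe_id, title in findings]
    lines.append("For each weakness, describe one plausible attack scenario "
                 "and a mitigation, citing the CWE ID.")
    return "\n".join(lines)

prompt = build_threat_prompt(DEVICE_PROFILE, CWE_FINDINGS)
print(prompt)
# scenarios = query_llm(prompt)  # placeholder for the actual model call
```

Keeping the prompt anchored to catalogued weaknesses, rather than free-form generation, is one way to keep the resulting scenarios traceable back to public, reproducible sources.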

The effectiveness of vulnerability modelling is highly dependent on the context in which the vulnerable application operates. In the ENTRUST project, it is crucial to emphasise that only public data sources are utilised to ensure accurate and relevant vulnerability assessments. This approach is essential to uphold research ethics and guarantee transparency and reproducibility of the results. 


Advantages of LLM Integration 


The incorporation of LLMs into threat modelling offers several benefits: 


  • Enhanced Vulnerability Detection: LLMs can process and interpret vast amounts of data from various sources, enabling the identification of complex and previously overlooked vulnerabilities. 

  • Automated Scenario Generation: By generating potential attack scenarios, LLMs assist in anticipating and preparing for diverse security threats. 

  • Continuous Learning and Adaptation: LLMs can be updated with new data, allowing the threat modelling process to evolve alongside emerging threats and vulnerabilities. 


Challenges 


Despite its advantages, integrating LLMs into threat modelling and software verification comes with notable challenges, chief among them resource consumption and system optimisation. Running large models requires significant computational power, often involving high-memory GPUs or specialised infrastructure that may not be readily available in all environments. In ENTRUST, we addressed these challenges by strategically balancing on-device and cloud-based processing, which speeds up the generation of attack scenarios, as shown in Figure 3.


Figure 3: Threat Modelling Benchmark
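One simple way to balance on-device and cloud processing is to route each request to whichever backend has the lower estimated latency. The sketch below illustrates the idea; the thresholds, throughput figures, and model names are assumed for illustration and are not taken from the ENTRUST benchmark.

```python
# Illustrative latency-based router between on-device and cloud inference.
from dataclasses import dataclass

@dataclass
class Route:
    backend: str  # "on-device" or "cloud"
    model: str

LOCAL_CONTEXT_LIMIT = 4_000   # tokens a small local model handles well (assumed)
LOCAL_TOKENS_PER_SEC = 20     # assumed on-device generation throughput
CLOUD_TOKENS_PER_SEC = 80     # assumed cloud throughput
CLOUD_OVERHEAD_SEC = 1.5      # assumed network + queuing overhead per request

def choose_route(prompt_tokens: int, expected_output_tokens: int) -> Route:
    """Pick the backend with the lower estimated end-to-end latency."""
    if prompt_tokens > LOCAL_CONTEXT_LIMIT:
        return Route("cloud", "large-hosted-model")  # local model can't fit it
    local_latency = expected_output_tokens / LOCAL_TOKENS_PER_SEC
    cloud_latency = CLOUD_OVERHEAD_SEC + expected_output_tokens / CLOUD_TOKENS_PER_SEC
    if local_latency <= cloud_latency:
        return Route("on-device", "small-local-model")
    return Route("cloud", "large-hosted-model")

print(choose_route(800, 30))   # short answer: local avoids the network overhead
print(choose_route(800, 500))  # long generation: cloud wins despite overhead
```

The design choice here is that fixed cloud overhead dominates for short generations, while raw throughput dominates for long ones, so the crossover point, not a single backend, determines the faster path.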

Implications for Healthcare Cybersecurity 


The ENTRUST project's approach signifies a shift towards more proactive and intelligent cybersecurity measures in the healthcare sector. By automating parts of the threat modelling process and utilising AI to analyse extensive data sources, organisations can better anticipate potential security issues and implement effective countermeasures. This methodology not only enhances the security of CMDs but also contributes to the overall safety and trustworthiness of healthcare technologies.  


As the ENTRUST project progresses, the integration of LLMs into threat modelling sets a precedent for future cybersecurity strategies, particularly in sectors where device security is paramount.


This blog post was written by ENTRUST partner SIEMENS (Romania).


Don’t miss our next updates! Follow us on LinkedIn and Bluesky and be part of the conversation.


Funded by the European Commission under the Horizon Europe Programme (Grant Agreement No. 101095634). 

 

Views and opinions expressed are those of the ENTRUST consortium authors only and do not necessarily reflect those of the European Union or its delegated Agency DG HADEA. Neither the European Union nor the granting authority can be held responsible for them.


Coordinator Email: coordination@entrust-he.eu
