
Enhancing trust in detecting security threats using machine learning approaches and its application in the Internet of Things

Mahbooba, Basim

NUI Galway 2022

Full text available

  • Title:
    Enhancing trust in detecting security threats using machine learning approaches and its application in the Internet of Things
  • Author: Mahbooba, Basim
  • Subjects: Computer Science ; Information technology ; IoT security ; Malicious node detection ; Network security ; Science and Engineering ; Trust Machine Learning
  • Description: Identifying network attacks is a crucial task for network security. The increasing number of network devices is creating a massive amount of data and opening new security vulnerabilities that malicious users can exploit to gain access. Recently, the network security research community has been using data-driven approaches to detect anomalies, intrusions, and cyber attacks. However, obtaining accurate network attack data is time-consuming and expensive. On the other hand, evaluating complex security systems requires costly and sophisticated modeling practices involving expert security professionals. Conventionally, the well-known solutions for network security are firewalls, user authentication, access control, cryptography systems, etc. These systems may no longer be effective for today's needs in the cyber industry: they are typically handled statically by a few experienced security analysts, with data management done for a particular purpose. As an increasing number of cybersecurity incidents continues to appear over time, such traditional solutions have encountered limitations in reducing cyber threats. To tackle this problem, it is fundamental to take a data-driven approach to analyzing the massive amount of relevant cybersecurity data, which makes it possible to identify insights or proper security policies with minimal human intervention in an automated manner. Despite the growing popularity of machine learning models in cybersecurity applications (e.g., intrusion detection systems (IDS)), most of these models are perceived as black boxes. eXplainable Artificial Intelligence (XAI) has become increasingly important for interpreting machine learning models and enhancing trust management by allowing human experts to understand the underlying data evidence and causal reasoning. In an IDS, the critical role of trust management is to understand the impact of malicious data in order to detect any intrusion in the system. Previous studies focused more on the accuracy of various classification algorithms, including white-box and black-box models, for trust in IDS; they rarely provide insight into the behavior and reasoning of the underlying algorithms. Therefore, this thesis addresses the XAI concept to enhance trust management by exploring simple white-box models, which are interpretable by design, and how they can be leveraged in the context of IDS. (A minimal illustrative sketch of such a white-box model appears after this record.)
  • Publisher: NUI Galway
  • Creation/publication date: 2022
  • Language: English
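
The description above refers to white-box models that are interpretable by design in the context of an IDS. The following is a minimal sketch of that general idea, not the author's actual pipeline: a shallow decision tree (scikit-learn) is trained as a toy intrusion detector and its decision rules are exported as text so a human analyst can audit the reasoning. The feature names, the labeling rule, and the data are all hypothetical stand-ins for a real intrusion dataset such as NSL-KDD.

    # Minimal sketch: an interpretable (white-box) decision tree as a toy IDS.
    # Synthetic data and feature names are placeholders, not the thesis's dataset.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)

    # Hypothetical traffic features; label 1 marks a (synthetic) malicious flow.
    feature_names = ["duration", "src_bytes", "dst_bytes", "failed_logins"]
    X = rng.normal(size=(1000, 4))
    y = (X[:, 3] > 0.5).astype(int)  # toy rule standing in for real attack labels

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0
    )

    # A shallow tree is interpretable by design, at some cost in accuracy.
    clf = DecisionTreeClassifier(max_depth=3, random_state=0)
    clf.fit(X_train, y_train)

    print("held-out accuracy:", clf.score(X_test, y_test))
    # The exported rules are the "explanation": each path is a human-readable
    # condition an analyst can audit before trusting an alert.
    print(export_text(clf, feature_names=feature_names))

The point of the sketch is the last line: unlike a black-box classifier, the model's entire decision logic can be printed and reviewed, which is the kind of transparency the thesis associates with trust in an IDS.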
