ACM Transactions on Software Engineering and Methodology
Security Orchestration, Automation, and Response (SOAR) platforms integrate and orchestrate a wide variety of security tools to accelerate the operational activities of a Security Operations Center (SOC). Integration of security tools in a SOAR platform is mostly done manually using APIs, plugins, and scripts. SOC teams need to navigate the API calls of different security tools to find a suitable API for defining or updating an incident response action. Analyzing various types of API documentation, with diverse API formats and presentation structures, involves significant challenges, such as data availability, data heterogeneity, and semantic variation, for the automatic identification of security tool APIs specific to a particular task. Given that these challenges can negatively impact an SOC team's ability to handle security incidents effectively and efficiently, we consider it important to devise suitable automated support solutions to address them. We propose a novel learning-based f...
Proceedings of the ACM on Human-Computer Interaction
Numerous security attacks that resulted in devastating consequences can be traced back to a delay in applying a security patch. Despite the criticality of timely patch application, not much is known about why and how delays occur when applying security patches in practice, and how these delays can be mitigated. Based on longitudinal data collected from 132 delayed patching tasks over a period of four years, and observations of patch meetings involving eight teams from two organisations in the healthcare domain, and using quantitative and qualitative data analysis approaches, we identify a set of reasons relating to technology, people, and organisation as key explanations for delays in patching. Our findings also reveal that the most prominent cause of delays is coordination delays in the patch management process, and that a majority of delays occur during the patch deployment phase. Towards mitigating the delays, we describe a set of strategies employed by the studied p...
IEEE Transactions on Dependable and Secure Computing
ML-based Phishing URL (MLPU) detectors serve as the first level of defence to protect users and organisations from falling victim to phishing attacks. Lately, a few studies have launched successful adversarial attacks against specific MLPU detectors, raising questions about their practical reliability and usage. Nevertheless, the robustness of these systems has not been extensively investigated; their security vulnerabilities therefore remain largely unknown, which calls for testing the robustness of these systems. In this article, we propose a methodology to investigate the reliability and robustness of 50 representative state-of-the-art MLPU models. Firstly, we propose a cost-effective adversarial URL generator, URLBUG, which created an adversarial URL dataset (Adv data). Subsequently, we reproduced 50 MLPU (traditional ML and deep learning) systems and recorded their baseline performance. Lastly, we tested the considered MLPU systems on Adv data and analyzed their robustness and reliability using box plots and heat maps. Our results show that the generated adversarial URLs have valid syntax and can be registered at a median annual price of $11.99, and that of the 13% of adversarial URLs that were already registered, 63.94% were used for malicious purposes. Moreover, the median Matthews Correlation Coefficient (MCC) of the considered MLPU models dropped from 0.92 to 0.02 when tested against Adv data, indicating that the baseline MLPU models are unreliable in their current form. Further, our findings identify several security vulnerabilities of these systems and provide future directions for researchers to design dependable and secure MLPU systems.
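As a rough illustration of how an adversarial URL generator might mutate a benign URL into syntactically valid look-alikes, here is a minimal sketch; the perturbation functions and example URL are hypothetical and far simpler than URLBUG's actual approach:

```python
# Hypothetical typosquatting-style perturbations over a URL's domain.
# All function names and the example URL are illustrative, not from the paper.

def swap_adjacent(domain: str, i: int) -> str:
    """Swap characters i and i+1 (a common typosquatting trick)."""
    chars = list(domain)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def homoglyph(domain: str) -> str:
    """Replace visually similar characters (e.g., 'o' -> '0', 'l' -> '1')."""
    return domain.replace("o", "0").replace("l", "1")

def perturb(url: str) -> list[str]:
    """Return syntactically valid adversarial variants of a URL's domain."""
    scheme, _, rest = url.partition("://")
    domain, _, path = rest.partition("/")
    variants = {homoglyph(domain)}
    variants.update(swap_adjacent(domain, i) for i in range(len(domain) - 1))
    variants.discard(domain)  # keep only actual changes
    return sorted(f"{scheme}://{v}/{path}" for v in variants)

adv = perturb("https://example.com/login")
```

Each variant keeps a valid URL shape, which matches the paper's observation that such adversarial URLs remain registrable.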
Proceedings of the 19th International Conference on Mining Software Repositories
Many studies have developed Machine Learning (ML) approaches to detect Software Vulnerabilities (SVs) in functions and the fine-grained code statements that cause such SVs. However, there is little work on leveraging such detection outputs for data-driven SV assessment to provide information about the exploitability, impact, and severity of SVs. This information is important for understanding SVs and prioritizing their fixing. Using large-scale data from 1,782 functions of 429 SVs in 200 real-world projects, we investigate ML models for automating function-level SV assessment tasks, i.e., predicting seven Common Vulnerability Scoring System (CVSS) metrics. We particularly study the value and use of vulnerable statements as inputs for developing the assessment models because SVs in functions originate from these statements. We show that vulnerable statements are 5.8 times smaller in size, yet exhibit 7.5-114.5% stronger assessment performance (Matthews Correlation Coefficient (MCC)) than non-vulnerable statements. Incorporating the context of vulnerable statements further increases the performance by up to 8.9% (0.64 MCC and 0.75 F1-Score). Overall, we provide initial yet promising ML-based baselines for function-level SV assessment, paving the way for further research in this direction. CCS CONCEPTS • Security and privacy → Software security engineering.
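Assessment performance here is reported as the Matthews Correlation Coefficient (MCC). For reference, MCC for a binary task can be computed directly from confusion-matrix counts; the helper below is the standard textbook formulation, not code from the paper:

```python
import math

def mcc(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews Correlation Coefficient from binary confusion-matrix counts.

    Returns a value in [-1, 1]; 0.0 when the denominator is undefined.
    """
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

# A reasonably good classifier scores close to 0.8:
score = mcc(50, 40, 5, 5)
```

Unlike accuracy, MCC stays informative on the imbalanced vulnerable/non-vulnerable class distributions typical of SV data, which is why it is the headline metric here.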
Proceedings of the 19th International Conference on Mining Software Repositories
Data-driven software engineering processes, such as vulnerability prediction, rely heavily on the quality of the data used. In this paper, we observe that it is infeasible to obtain a noise-free security defect dataset in practice. Unlike the vulnerable class, the non-vulnerable modules are difficult to verify and determine as truly exploit-free given the limited manual effort available. This results in uncertainty, introduces labeling noise into the datasets, and affects conclusion validity. To address this issue, we propose novel learning methods that are robust to label impurities and can make the most of limited labeled data: noisy label learning. We investigate various noisy label learning methods applied to software vulnerability prediction. Specifically, we propose a two-stage learning method based on noise cleaning to identify and remediate the noisy samples, which improves the AUC and recall of baselines by up to 8.9% and 23.4%, respectively. Moreover, we discuss several hurdles in terms of achieving a performance upper bound with semi-omniscient knowledge of the label noise. Overall, the experimental results show that learning from noisy labels can be effective for data-driven software and security analytics.
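The two-stage idea of cleaning suspected noisy labels before retraining can be sketched as follows; the 1-D threshold classifier and toy data are purely illustrative stand-ins for the paper's actual models:

```python
# Illustrative two-stage noise cleaning (hypothetical names; the paper's
# method differs). Stage 1 fits a simple model and drops samples whose
# label disagrees with its prediction; stage 2 would retrain on the rest.

def train_threshold(xs, ys):
    """Fit a 1-D threshold classifier: predict 1 if x >= t."""
    best_t, best_acc = 0.0, -1.0
    for t in sorted(set(xs)):
        acc = sum((x >= t) == bool(y) for x, y in zip(xs, ys)) / len(xs)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def clean(xs, ys):
    """Stage 1: drop samples the fitted model disagrees with."""
    t = train_threshold(xs, ys)
    keep = [(x, y) for x, y in zip(xs, ys) if (x >= t) == bool(y)]
    return zip(*keep)

# Toy data: x correlates with the label; (0.9, 0) is the mislabeled sample.
xs = [0.1, 0.2, 0.8, 0.9, 0.95]
ys = [0, 0, 1, 0, 1]
cx, cy = clean(xs, ys)
```

The suspected noisy sample (0.9, 0) is removed, so the second-stage model trains on a cleaner set.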
2022 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER)
Software Vulnerability (SV) severity assessment is a vital task for informing SV remediation and triage. Rankings of SV severity scores are often used to advise the prioritization of patching efforts. However, severity assessment is a difficult and subjective manual task that relies on expertise, knowledge, and standardized reporting schemes. Consequently, different data sources that perform independent analysis may provide conflicting severity rankings. Inconsistency across these data sources affects the reliability of severity assessment data and can consequently impact SV prioritization and fixing. In this study, we investigate severity ranking inconsistencies over the SV reporting lifecycle. Our analysis helps characterize the nature of this problem, identify correlated factors, and determine the impacts of inconsistency on downstream tasks. We observe that SV severity often lacks consideration or is underestimated during initial reporting, and such SVs consequently receive lower prioritization. We identify six potential attributes that are correlated with this misjudgment, and show that inconsistency in severity reporting schemes can severely degrade the performance of downstream severity prediction by up to 77%. Our findings help raise awareness of SV severity data inconsistencies and draw attention to this data quality problem. These insights can help developers better consider SV severity data sources and improve the reliability of consequent SV prioritization. Furthermore, we encourage researchers to pay more attention to SV severity data selection.
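Cross-source ranking disagreement of the kind studied here can be quantified with a rank correlation such as Kendall's tau (one possible metric, not necessarily the paper's); a minimal sketch with invented rankings:

```python
from itertools import combinations

def kendall_tau(r1, r2):
    """Kendall rank correlation between two severity rankings of the same SVs."""
    concordant = discordant = 0
    for (a1, b1), (a2, b2) in combinations(zip(r1, r2), 2):
        s = (a1 - a2) * (b1 - b2)
        concordant += s > 0
        discordant += s < 0
    n = len(r1)
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical ranks for five SVs from two sources (1 = most severe).
source_a = [1, 2, 3, 4, 5]
source_b = [2, 1, 3, 5, 4]
tau = kendall_tau(source_a, source_b)
```

A tau of 1.0 means the two sources agree perfectly; values near 0 signal the kind of inconsistency that degrades downstream severity prediction.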
Evaluating the effectiveness of security awareness and training programs is critical for minimizing organizations' human security risk. Based on a literature review and industry interviews, we discuss current practices and devise guidelines for measuring the effectiveness of security training and awareness initiatives used by organizations.
The dynamics of cyber threats are increasingly complex, making it more challenging than ever for organizations to obtain in-depth insights into their cyber security status. Therefore, organizations rely on Cyber Situational Awareness (CSA) to support them in better understanding the threats and associated impacts of cyber events. Due to the heterogeneity and complexity of cyber security data, often with multidimensional attributes, sophisticated visualization techniques are needed to achieve CSA. However, there have been no previous attempts to systematically review and analyze the scientific literature on CSA visualizations. In this paper, we systematically select and review 54 publications that discuss visualizations to support CSA. We extract data from these papers to identify key stakeholders, information types, data sources, and visualization techniques. Furthermore, we analyze the level of CSA supported by the visualizations, alongside examining the maturity of the visualizations, challenges, and practices related to CSA visualizations to prepare a full analysis of the current state of CSA in an organizational context. Our results reveal certain gaps in CSA visualizations. For instance, the largest focus is on operational-level staff, and there is a clear lack of visualizations targeting other types of stakeholders such as managers, higher-level decision makers, and non-expert users. Most papers focus on threat information visualization, and there is a dearth of papers that visualize impact information, response plans, and information shared within teams. Interestingly, we find that only a few studies proposed visualizations to facilitate up to the projection level (i.e., the highest level of CSA), whereas most studies facilitated only the perception level (i.e., the lowest level of CSA).
Most of the studies provide evidence of the proposed visualizations through toy examples and demonstrations, while only a few visualizations are employed in industrial practice. Based on the results that highlight the important concerns in CSA visualizations, we recommend a list of future research directions.
Software Vulnerability Prediction (SVP) is a data-driven technique for software quality assurance that has recently gained considerable attention in the Software Engineering research community. However, the difficulty of preparing Software Vulnerability (SV) related data remains the main barrier to industrial adoption. Despite this problem, there have been no systematic efforts to analyse the existing SV data preparation techniques and challenges. Without such insights, we are unable to overcome the challenges and advance this research domain. Hence, we were motivated to conduct a Systematic Literature Review (SLR) of SVP research to synthesize and gain an understanding of the data considerations, challenges, and solutions that SVP researchers provide. From our set of primary studies, we identify the main practices for each data preparation step. We then present a taxonomy of 16 key data challenges relating to six themes, which we further map to six categories of solutions. However, solutions are far from complete, and there are several ill-considered issues. We also provide recommendations for future areas of SV data research. Our findings help illuminate the key SV data practices and considerations for SVP researchers and practitioners, as well as inform the validity of current SVP approaches.
An increasing number of mental health services are offered through mobile systems, a paradigm called mHealth. Although there is unprecedented growth in the adoption of mHealth systems, partly due to the COVID-19 pandemic, concerns about data privacy risks due to security breaches are also increasing. Whilst some studies have analyzed mHealth apps from different angles, including security, there is relatively little evidence of the data privacy issues that may exist in mHealth apps used for mental health services, whose recipients can be particularly vulnerable. This paper reports an empirical study aimed at systematically identifying and understanding the data privacy measures incorporated in mental health apps. We analyzed 27 top-ranked mental health apps from the Google Play Store. Our methodology enabled us to perform an in-depth privacy analysis of the apps, covering static and dynamic analysis, data sharing behaviour, server-side tests, privacy impact assessment requests, and privacy policy eva...
Current machine-learning based software vulnerability detection methods are primarily conducted at the function level. However, a key limitation of these methods is that they do not indicate the specific lines of code contributing to vulnerabilities. This limits the ability of developers to efficiently inspect and interpret the predictions from a learnt model, which is crucial for integrating machine-learning based tools into the software development workflow. Graph-based models have shown promising performance in function-level vulnerability detection, but their capability for statement-level vulnerability detection has not been extensively explored. While interpreting function-level predictions through explainable AI is one promising direction, we herein consider the statement-level software vulnerability detection task from a fully supervised learning perspective. We propose a novel deep learning framework, LineVD, which formulates statement-level vulnerability detection as a node classification task. LineVD leverages control and data dependencies between statements using graph neural networks, and a transformer-based model to encode the raw source code tokens. In particular, by addressing the conflicting outputs between function-level and statement-level information, LineVD significantly improves the prediction performance even without the vulnerability status of the function code. We have conducted extensive experiments against a large-scale collection of real-world C/C++ vulnerabilities obtained from multiple real-world projects, and demonstrate an increase of 105% in F1-score over the current state-of-the-art.
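The core intuition of treating statements as graph nodes can be sketched with one round of mean aggregation over dependency edges; LineVD's real model uses learned GNN layers and transformer embeddings, so this is only a toy illustration with invented feature vectors:

```python
# One round of neighbourhood aggregation over a statement dependency graph.
# Each statement node averages its own and its neighbours' feature vectors,
# mimicking (very loosely) a single message-passing step in a GNN.

def aggregate(features, edges):
    """Node vector <- mean of self + neighbours (undirected edges)."""
    neigh = {n: [n] for n in features}          # include a self-loop
    for a, b in edges:
        neigh[a].append(b)
        neigh[b].append(a)
    out = {}
    for n, ns in neigh.items():
        dim = len(features[n])
        out[n] = [sum(features[m][d] for m in ns) / len(ns) for d in range(dim)]
    return out

# Three statements; statement 2 has control/data dependencies on 0 and 1.
feats = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [0.0, 0.0]}
edges = [(0, 2), (1, 2)]
h = aggregate(feats, edges)
```

After aggregation, statement 2's vector mixes information from both statements it depends on; a classifier head over such node vectors yields statement-level predictions.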
2021 IEEE 26th Pacific Rim International Symposium on Dependable Computing (PRDC), 2021
Internet of Things (IoT) based applications face an increasing number of potential security risks, which need to be systematically assessed and addressed. Expert-based manual assessment of IoT security is the predominant approach, but it is usually inefficient. To address this problem, we propose an automated security assessment framework for IoT networks. Our framework first leverages machine learning and natural language processing to analyze vulnerability descriptions and predict vulnerability metrics. The predicted metrics are then input into a two-layered graphical security model, which consists of an attack graph at the upper layer to represent the network connectivity and an attack tree for each node in the network at the bottom layer to depict the vulnerability information. This security model automatically assesses the security of the IoT network by capturing potential attack paths. We evaluate the viability of our approach using a proof-of-concept smart building system model containing a variety of real-world IoT devices and potential vulnerabilities. Our evaluation demonstrates the framework's effectiveness in automatically predicting the vulnerability metrics of new vulnerabilities with more than 90% accuracy, on average, and in identifying the most vulnerable attack paths within an IoT network. The produced assessment results can serve as a guideline for cybersecurity professionals to take further actions and mitigate risks in a timely manner. Index Terms: Internet of Things, Vulnerability Assessment, Machine Learning, Natural Language Processing, Graphical Security Model. (Vulnerability assessment differs from security assessment: the former is applied to individual devices and is a step in performing the security assessment of an interconnected system containing multiple devices.)
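The idea of capturing potential attack paths over an attack graph can be sketched as a simple path enumeration with per-edge exploit likelihoods; the devices and probabilities below are invented, and the framework's two-layered model is considerably richer:

```python
# Toy attack-path search over an IoT attack graph. Nodes are devices,
# edge weights are assumed exploit likelihoods (hypothetical values).

def attack_paths(graph, src, dst, path=None):
    """Enumerate all simple paths from the attacker entry point to the target."""
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in graph.get(src, {}):
        if nxt not in path:                    # avoid revisiting nodes
            yield from attack_paths(graph, nxt, dst, path)

def path_likelihood(graph, path):
    """Multiply per-edge exploit likelihoods along a path."""
    p = 1.0
    for a, b in zip(path, path[1:]):
        p *= graph[a][b]
    return p

# attacker -> router -> {camera, thermostat} -> server
g = {
    "attacker": {"router": 0.9},
    "router": {"camera": 0.8, "thermostat": 0.5},
    "camera": {"server": 0.7},
    "thermostat": {"server": 0.9},
}
paths = list(attack_paths(g, "attacker", "server"))
worst = max(paths, key=lambda p: path_likelihood(g, p))
```

Ranking paths by likelihood surfaces the most vulnerable route (here via the camera), which is the kind of output a security analyst would act on first.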
Given that programming languages can provide different types and levels of security support, it is critically important to consider security aspects when selecting programming languages for developing software systems. Inadequate consideration of security in the choice of a programming language may lead to potential ramifications for secure development. Whilst theoretical analysis of the supposed security properties of different programming languages has been conducted, there has been relatively little effort to empirically explore the actual security challenges experienced by developers. We performed a large-scale study of the security challenges of 15 programming languages by quantitatively and qualitatively analysing developers' discussions from Stack Overflow and GitHub. By leveraging topic modelling, we derived a taxonomy of 18 major security challenges across 6 topic categories. We also conducted a comparative analysis to understand how the identified challenges vary across the different programming languages and data sources. Our findings suggest that the challenges and their characteristics differ substantially for different programming languages and data sources, i.e., Stack Overflow and GitHub. The findings provide evidence-based insights into, and understanding of, the security challenges related to different programming languages for software professionals (i.e., practitioners and researchers). The reported taxonomy of security challenges can assist both practitioners and researchers in better understanding and traversing the secure development landscape. This study highlights the importance of the choice of technology, e.g., programming language, in secure software engineering. Hence, the findings are expected to motivate practitioners to consider the potential impact of the choice of programming languages on software security.
2021 36th IEEE/ACM International Conference on Automated Software Engineering (ASE), 2021
It is increasingly suggested to identify Software Vulnerabilities (SVs) in code commits to give early warnings about potential security risks. However, there is a lack of effort to assess vulnerability-contributing commits right after they are detected, to provide timely information about the exploitability, impact, and severity of SVs. Such information is important to plan and prioritize the mitigation of the identified SVs. We propose a novel deep multi-task learning model, DeepCVA, to automate seven commit-level vulnerability assessment tasks simultaneously based on Common Vulnerability Scoring System (CVSS) metrics. We conduct large-scale experiments on 1,229 vulnerability-contributing commits containing 542 different SVs in 246 real-world software projects to evaluate the effectiveness and efficiency of our model. We show that DeepCVA is the best-performing model, with a 38% to 59.8% higher Matthews Correlation Coefficient than many supervised and unsupervised baseline models. DeepCVA also requires 6.3 times less training and validation time than seven cumulative assessment models, leading to significantly lower model maintenance cost as well. Overall, DeepCVA presents the first effective and efficient solution to automatically assess SVs early in software systems.
Context: Software security patch management purports to support the process of patching known software security vulnerabilities. Patching security vulnerabilities in large and complex systems is a hugely challenging process that involves multiple stakeholders making several interdependent technological and socio-technical decisions. Given the increasing recognition of the importance of software security patch management, it is important and timely to systematically review and synthesise the relevant literature on this topic. Objective: This paper aims to systematically review the state of the art of software security patch management to identify the socio-technical challenges in this regard, the reported solutions (i.e., approaches, tools, and practices), the rigour of the evaluation and the industrial relevance of the reported solutions, and the gaps for future research. Method: We conducted a systematic literature review of 72 studies published from 2002 to March 2020, with extended coverage until September 2020 through forward snowballing. Results: We identify 14 socio-technical challenges in software security patch management and 18 solution approaches, tools, and practices mapped onto the software security patch management process. We provide a mapping between the solutions and challenges to enable a reader to obtain a holistic overview of the gap areas. The findings also reveal that only 20.8% of the reported solutions have been rigorously evaluated in industrial settings. Conclusion: Our results reveal that 50% of the common challenges have not been directly addressed by the solutions and that most solutions (38.9%) address the challenges in one phase of the process, namely vulnerability scanning, assessment and prioritisation.
Based on the results that highlight the important concerns in software security patch management and the lack of solutions, we recommend a list of future research directions. This study also provides useful insights about different opportunities for practitioners to adopt new solutions and understand the variations of their practical utility.
Detection and mitigation of Security Vulnerabilities (SVs) are integral tasks in software development and maintenance. Software developers often explore developer Question and Answer (Q&A) websites to find solutions for securing their software. However, empirically little is known about the ongoing SV-related discussions and how Q&A sites support such discussions. To shed light on these questions, we conduct large-scale qualitative and quantitative experiments to study the characteristics of 67,864 SV-related posts on Stack Overflow (SO) and Security StackExchange (SSE). We first find that the existing SV categorizations of formal security sources are not frequently used on Q&A sites. Therefore, we use Latent Dirichlet Allocation topic modeling to extract a new taxonomy of thirteen SV discussion topics on Q&A sites. We then study the characteristics of these SV topics. Brute-force/Timing Attacks and Vulnerability Testing are found to be the most popular and difficult topics, res...
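Topic modeling pipelines like the LDA analysis described here start from term statistics over posts; as a stand-in for that preprocessing, here is a minimal TF-IDF scorer over invented post texts (not the study's actual pipeline):

```python
import math
from collections import Counter

# Score each term in each post by TF-IDF: frequent in the post, rare across
# the corpus. The three toy posts below are invented examples of SV-related
# Q&A discussions.

def tfidf(docs):
    """Return, per document, a dict of word -> TF-IDF score."""
    df = Counter(w for d in docs for w in set(d.split()))   # document frequency
    n = len(docs)
    scores = []
    for d in docs:
        tf = Counter(d.split())
        scores.append({w: tf[w] * math.log(n / df[w]) for w in tf})
    return scores

posts = [
    "sql injection in login form",
    "sql injection prevention with prepared statements",
    "csrf token missing in form",
]
s = tfidf(posts)
```

Terms that discriminate a post from the rest of the corpus (e.g., "csrf" in the third post) score highest; LDA builds on such term statistics to group posts into topics like the thirteen reported in the study.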
Context: Software Architecture (SA) and Source Code (SC) are two intertwined artefacts that represent the interdependent design decisions made at different levels of abstraction: High-Level (HL) and Low-Level (LL). An understanding of the relationships between SA and SC is expected to bridge the gap between them, supporting the maintenance and evolution of software systems. Objective: We aimed to explore practitioners' understanding of the relationships between SA and SC. Method: We used a mixed method that combines an online survey with 87 respondents and interviews with 8 participants to collect the views of practitioners from 37 countries about the relationships between SA and SC. Results: Our results reveal that practitioners mainly discuss five features of the relationships between SA and SC, and that few practitioners have adopted the dedicated approaches and tools in the literature for identifying and analyzing these relationships, despite recognizing the importance of such information for improving a system's quality attributes, especially maintainability and reliability. Cost and effort are felt to be the major impediments that prevent practitioners from identifying, analyzing, and using the relationships between SA and SC. Conclusions: The results empirically identify, from the perspective of practitioners, five features of the relationships between SA and SC reported in the literature; a systematic framework to manage these five features should be developed, with dedicated approaches and tools, considering the cost and benefit of maintaining the relationships.
Software traceability plays a critical role in software maintenance and evolution. We conducted a systematic mapping study with six research questions to understand the benefits, costs, and challenges of using traceability in maintenance and evolution. We systematically selected, analyzed, and synthesized 63 studies published between January 2000 and May 2020. The results show that: traceability supports 11 maintenance and evolution activities, among which change management is the most frequently supported activity; strong empirical evidence from industry is needed to validate the impact of traceability on maintenance and evolution; easing the process of change management is the main benefit of deploying traceability practices; establishing and maintaining traceability links is the main cost of deploying traceability practices; 13 approaches and 32 tools that support traceability in maintenance and evolution were identified; improving the quality of traceability links, the perfor...
Deep Learning (DL) techniques for Natural Language Processing have been evolving remarkably fast. Recently, the DL advances in language modeling, machine translation, and paragraph understanding have become so prominent that the potential of DL in Software Engineering cannot be overlooked, especially in the field of program learning. To facilitate further research and applications of DL in this field, we provide a comprehensive review to categorize and investigate existing DL methods for source code modeling and generation. To address the limitations of traditional source code models, we formulate common program learning tasks under an encoder-decoder framework. After that, we introduce recent DL mechanisms suitable for solving such problems. Then, we present the state-of-the-art practices and discuss their challenges, with some recommendations for practitioners and researchers as well.
Proceedings of the International Conference on Software and System Processes, 2020
Development and Operations (DevOps), a particular type of Continuous Software Engineering, has become a popular software system engineering paradigm. Software architecture is critical to succeeding with DevOps. However, there is little evidence-based knowledge of how software systems are architected in industry to enable and support DevOps. Since architectural decisions, along with their rationales and implications, are very important in the architecting process, we performed an industrial case study that empirically identified and synthesized the key architectural decisions considered essential to DevOps transformation by two software development teams. Our study also reveals that, apart from the chosen architecture style, DevOps works best with modular architectures. In addition, we found that the performance of the studied teams can improve in DevOps if operations specialists are added to the teams to perform the operations tasks that require advanced expertise. Finally, investment in testing is inevitable for the teams if they want to release software changes faster.
Papers by Ali Babar