26th ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA 2017),
July 10–14, 2017,
Santa Barbara, CA, USA
Doctoral Symposium
Analysis
Consistency Checking in Requirements Analysis
Jaroslav Bendík
(Masaryk University, Czech Republic)
In the last decade it has become common practice to formalise software requirements using a mathematical language such as temporal logic, e.g., LTL. The formalisation removes ambiguity and improves understanding. A formal description also enables various model-based techniques, such as formal verification. Moreover, it gives us the opportunity to check the requirements early, even before any system model is built. This so-called requirements sanity checking aims to ensure that a given set of requirements is consistent, i.e., that a product satisfying all the requirements can be developed. If inconsistencies are found, it is desirable to present them to the user in a minimal fashion, exposing the core problems among the requirements. Such cores are called minimal inconsistent subsets (MISes). In this work, we present a framework for online MIS enumeration in the domain of temporal logics.
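The notion of a MIS can be sketched in a few lines: a subset of requirements is a MIS if it is inconsistent but every proper subset is consistent. The toy sketch below (not the author's framework; the requirements and the brute-force consistency oracle are invented for illustration) enumerates MISes over propositional stand-ins for temporal-logic requirements.

```python
from itertools import combinations, product

# Hypothetical toy consistency oracle: a set of requirements is
# consistent iff some candidate "product" (an assignment of three
# boolean features) satisfies every requirement in the set.
requirements = {
    "R1": lambda a, b, c: a,          # feature a must be enabled
    "R2": lambda a, b, c: not a,      # feature a must be disabled
    "R3": lambda a, b, c: b or c,     # at least one of b, c
}

def consistent(subset):
    return any(all(requirements[r](*assign) for r in subset)
               for assign in product([False, True], repeat=3))

def enumerate_mises(names):
    # A MIS is inconsistent while all of its proper subsets are consistent.
    mises = []
    for k in range(1, len(names) + 1):
        for subset in combinations(names, k):
            if not consistent(subset) and \
               all(consistent(s) for s in combinations(subset, k - 1)):
                mises.append(set(subset))
    return mises

print(enumerate_mises(sorted(requirements)))  # [{'R1', 'R2'}]
```

Here the single core problem is the conflict between R1 and R2; R3 is consistent with either. The brute-force subset scan is exponential, which is precisely why dedicated online enumeration algorithms matter.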
@InProceedings{ISSTA17p408,
author = {Jaroslav Bendík},
title = {Consistency Checking in Requirements Analysis},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {408--411},
doi = {},
year = {2017},
}
Inferring Page Models for Web Application Analysis
Snigdha Athaiya
(IISc Bangalore, India)
Web applications are difficult to analyze using code-based tools because data flow and control flow through the application occur via both server-side code and client-side pages. Client-side pages are typically specified in a scripting language that is different from the main server-side language; moreover, the pages are generated dynamically from the scripts. To address these issues, we propose a static-analysis approach that automatically constructs a "model" of each page in a given application. A page model is a code fragment in the same language as the server-side code, which faithfully over-approximates the possible elements of the page as well as the control flows and data flows due to these elements. The server-side code in conjunction with the page models then becomes a standard (non-web) program, thus amenable to analysis using standard code-based tools.
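A minimal illustration of this idea (all names are invented; this is not the author's tool) replaces a dynamically generated page with a page model written in the server-side language, whose nondeterministic choices over-approximate what the user might submit back:

```python
import random

def render_page(user):
    # server-side script that dynamically generates the client page
    html = "<form action='/save'><input name='email'>"
    if user["admin"]:
        html += "<input name='role'>"
    return html + "</form>"

def nondet():
    # stand-in for the static analysis's nondeterministic choice
    return random.choice([True, False])

def page_model(user):
    # page model: a fragment in the server-side language that
    # over-approximates every element that *may* appear on the page
    # and the data that may flow back from it
    submitted = {}
    if nondet():
        submitted["email"] = "<any string>"
    if user["admin"] and nondet():
        submitted["role"] = "<any string>"
    return submitted  # flows into the '/save' handler

print("role" in page_model({"admin": False}))  # always False
```

The server-side handler composed with `page_model` is an ordinary program: a standard taint or data-flow analysis can now see, for example, that `role` can only flow to `/save` when the page was rendered for an admin.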
@InProceedings{ISSTA17p412,
author = {Snigdha Athaiya},
title = {Inferring Page Models for Web Application Analysis},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {412--415},
doi = {},
year = {2017},
}
Path Cost Analysis for Side Channel Detection
Tegan Brennan
(University of California at Santa Barbara, USA)
Side channels have been increasingly demonstrated as a practical threat to the confidentiality of private user information. Being able to statically detect these kinds of vulnerabilities is a key challenge in current computer security research. We introduce a new technique, path-cost analysis (PCA), for the detection of side channels. Given a cost model for a type of side channel, path-cost analysis assigns a symbolic cost expression to every node and every back edge of a method's control flow graph, giving an over-approximation of all possible observable values at that node or after traversing that cycle. Queries to a satisfiability solver on the maximum distance between specific pairs of nodes allow us to detect the presence of imbalanced paths through the control flow graph. When combined with taint analysis, we are able to answer the following question: does there exist a pair of paths in the method's control flow graph, differing only on branch conditions influenced by the secret, that differs in observable value by more than some given threshold? In fact, we are able to state specifically which sets of secret-sensitive conditional statements introduce a side channel detectable given some noise parameter. We extend this approach to an interprocedural analysis, resulting in an over-approximation of the number of true side channels in the program according to the given cost model. Greater precision can be obtained by combining our method with predicate abstraction or symbolic execution to eliminate a subset of the infeasible paths through the control flow graph. We propose evaluating our method on a set of sizeable Java server-client applications.
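The imbalanced-path question can be illustrated concretely (a toy sketch, not PCA itself: the control flow graph, the cost model, and the threshold below are all invented). Each edge carries an observable cost such as timing, and a secret-dependent branch is flagged when the cheapest and most expensive paths through it differ by more than a noise threshold:

```python
# Acyclic toy CFG: node -> [(successor, observable cost of the edge)]
edges = {
    "entry":  [("if_sec", 1)],
    "if_sec": [("fast", 1), ("slow", 10)],  # branch on the secret
    "fast":   [("exit", 1)],
    "slow":   [("exit", 1)],
    "exit":   [],
}

def path_costs(node, acc=0):
    # enumerate the total cost of every path from `node` to an exit
    if not edges[node]:
        yield acc
        return
    for succ, cost in edges[node]:
        yield from path_costs(succ, acc + cost)

def has_side_channel(threshold):
    costs = list(path_costs("entry"))
    # imbalanced paths: observable values differ by more than the noise
    return max(costs) - min(costs) > threshold

print(has_side_channel(threshold=5))  # True: costs 12 vs. 3 differ by 9
```

PCA replaces this explicit path enumeration with symbolic cost expressions and solver queries, which is what makes the approach scale past toy graphs.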
@InProceedings{ISSTA17p416,
author = {Tegan Brennan},
title = {Path Cost Analysis for Side Channel Detection},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {416--419},
doi = {},
year = {2017},
}
Modeling and Learning
Automatically Inferring and Enforcing User Expectations
Jenny Hotzkow
(Saarland University, Germany)
Can we automatically learn how users expect an application to behave? Yes, if we consider an application from the user's perspective. Whenever presented with an unfamiliar app, the user not only regards the context presented by this particular application, but also draws on previous experiences with other applications. This research presents an approach that reflects this procedure by automatically learning user expectations from the semantic contexts of multiple applications. Once the user expectations are established, this knowledge can be used as an oracle to test whether an application follows the user's expectations or exhibits surprising behavior, whether by error or deliberately.
@InProceedings{ISSTA17p420,
author = {Jenny Hotzkow},
title = {Automatically Inferring and Enforcing User Expectations},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {420--423},
doi = {},
year = {2017},
}
Understanding Intended Behavior using Models of Low-Level Signals
Deborah S. Katz
(Carnegie Mellon University, USA)
As software systems increase in complexity and operate with less human supervision, it becomes more difficult to use traditional techniques to detect when software is not behaving as intended. Furthermore, many systems operating today are nondeterministic and operate in unpredictable environments, making it difficult to even define what constitutes correct behavior. I propose a family of novel techniques to model the behavior of executing programs using low-level signals collected during executions. The models provide a basis for predicting whether an execution of the program or program unit under test represents intended behavior. I have demonstrated success with these techniques for detecting faulty and unexpected behavior on small programs. I propose to extend the work to smaller units of large, complex programs.
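One simple instance of such a model (a hedged sketch, not the author's actual techniques: the signal values and the three-sigma rule are invented for illustration) summarizes a low-level signal, such as a hardware counter, over known-good executions and flags a new execution whose value falls far outside that distribution:

```python
import statistics

# Values of one low-level signal (e.g., an instruction count)
# collected from executions known to behave as intended.
good_runs = [1002, 998, 1005, 997, 1001, 1003, 999, 1000]

mean = statistics.mean(good_runs)
std = statistics.stdev(good_runs)

def looks_intended(signal_value, k=3.0):
    # predict "intended behavior" when the signal lies within
    # k standard deviations of the known-good distribution
    return abs(signal_value - mean) <= k * std

print(looks_intended(1001))   # typical run: True
print(looks_intended(1850))   # anomalous run: False
```

Real executions would contribute many signals at once, so the per-signal threshold would be replaced by a richer multivariate model, but the prediction step has this same shape.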
@InProceedings{ISSTA17p424,
author = {Deborah S. Katz},
title = {Understanding Intended Behavior using Models of Low-Level Signals},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {424--427},
doi = {},
year = {2017},
}
Version Space Learning for Verification on Temporal Differentials
Mark Santolucito
(Yale University, USA)
Configuration files provide users with the ability to quickly alter the behavior of their software system. Ensuring that a configuration file does not induce errors in the software is a complex verification issue. The types of errors can be easy to measure, such as an initialization failure on system boot, or more insidious, such as performance degradation over time under heavy network loads. In order to warn a user of potential configuration errors ahead of time, we propose using version space learning to learn specifications for configuration languages. We frame an existing tool, ConfigC, in terms of version space learning. We extend that algorithm to leverage the temporal structure available in training sets scraped from version control systems. We plan to evaluate our system on a case study using TravisCI configuration files collected from GitHub.
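The version space idea can be sketched for a single numeric configuration key (a toy illustration under invented values, not the ConfigC algorithm): hypotheses are intervals, the specific boundary S is the tightest interval covering configurations known to work, and the general boundary G is the loosest interval excluding configurations known to fail.

```python
# Training data for one hypothetical key, e.g., a timeout in seconds.
working = [30, 45, 60, 90]        # configurations that passed CI
broken_low, broken_high = 0, 600  # configurations observed to fail

S = (min(working), max(working))       # most specific hypothesis
G = (broken_low + 1, broken_high - 1)  # most general hypothesis

def verdict(value):
    if S[0] <= value <= S[1]:
        return "accept"   # inside every hypothesis in the version space
    if value < G[0] or value > G[1]:
        return "reject"   # outside every hypothesis in the version space
    return "warn"         # the version space is undecided here

print(verdict(50), verdict(1200), verdict(200))  # accept reject warn
```

The "warn" band between S and G is what makes the approach useful for ahead-of-time feedback: temporal differentials from version control history can then shrink that band as more commits are observed.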
@InProceedings{ISSTA17p428,
author = {Mark Santolucito},
title = {Version Space Learning for Verification on Temporal Differentials},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {428--431},
doi = {},
year = {2017},
}
Testing
Data Flow Oriented UI Testing: Exploiting Data Flows and UI Elements to Test Android Applications
Nataniel P. Borges Jr.
(Saarland University, Germany)
Testing user interfaces (UIs) is a challenging task. Ideally, every sequence of UI elements should be tested to guarantee that the application works correctly. This is, however, infeasible due to the number of UI elements in an application. A better approach is to limit the evaluation to UI elements that affect a specific functionality. In this paper I present a novel technique to identify the relations between UI elements using statically extracted data flows. I also present a method to refine these relations using dynamic analysis, in order to ensure that relations extracted from unreachable data flows are removed. Using these relations it is possible to test a functionality more efficiently. Finally, I present an approach to evaluate how these UI-aware data flows can be used as a heuristic to measure test coverage.
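The relation between UI elements can be pictured as follows (a toy sketch with invented element and field names, not the proposed technique itself): two elements are related when a data flow connects them, e.g., one writes a field the other reads, so testing a target element only requires the elements it transitively depends on.

```python
# Statically extracted data flows: which fields each UI element
# writes and reads (all names are hypothetical).
writes = {"username_input": {"user"}, "password_input": {"pwd"},
          "login_button": {"session"}, "theme_toggle": {"theme"}}
reads = {"login_button": {"user", "pwd"}, "logout_button": {"session"}}

def depends_on(target, seen=None):
    # transitively collect every element whose output may flow
    # into the target element
    seen = seen or set()
    for field in reads.get(target, set()):
        for elem, written in writes.items():
            if field in written and elem not in seen:
                seen.add(elem)
                depends_on(elem, seen)
    return seen

print(sorted(depends_on("logout_button")))
# ['login_button', 'password_input', 'username_input']
```

Note that `theme_toggle` is excluded: it shares no data flow with the logout functionality, so a test of logout need not exercise it. Dynamic refinement would further prune elements whose flows turn out to be unreachable at runtime.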
@InProceedings{ISSTA17p432,
author = {Nataniel P. Borges Jr.},
title = {Data Flow Oriented UI Testing: Exploiting Data Flows and UI Elements to Test Android Applications},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {432--435},
doi = {},
year = {2017},
}
Dynamic Tainting for Automatic Test Case Generation
Björn Mathis
(Saarland University, Germany)
Dynamic tainting is an important part of modern software engineering research. State-of-the-art tools for debugging, bug detection, and program analysis make use of this technique. Nonetheless, the research area based on dynamic tainting still has open questions, among them the automatic generation of program inputs.
My proposed work concentrates on the use of dynamic tainting for test case generation. The goal is the generation of complex and valid test inputs from scratch. To this end, I use byte-level taint information enhanced with additional static and dynamic program analysis. This information is used in an evolutionary algorithm to create new offspring and mutations. Concretely, instead of crossing and mutating the whole input randomly, taint information can be used to define which parts of the input have to be mutated. Furthermore, the taint information may also be used to define evolutionary operators.
Eventually, the evolutionary algorithm is able to generate valid inputs for a program. Such inputs can be used together with the taint information for further program analysis, e.g., the generation of input grammars.
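A taint-guided mutation operator can be sketched as follows (a hedged toy example: the input, the taint map, and the operator are invented for illustration). Instead of flipping arbitrary bytes, the mutation touches only the positions that taint tracking reported as influencing the branch of interest:

```python
import random

random.seed(0)  # deterministic for the sake of the example

parent = bytearray(b"GET /index.html")
# Byte positions reported by (hypothetical) taint tracking as
# flowing into the parser branch we want to exercise.
tainted_positions = [4, 5, 6]

def taint_guided_mutation(inp, positions):
    # mutate exactly one tainted byte, leaving the rest of the
    # input untouched
    child = bytearray(inp)
    pos = random.choice(positions)
    child[pos] = random.randrange(32, 127)  # random printable byte
    return bytes(child)

child = taint_guided_mutation(parent, tainted_positions)
diff = [i for i in range(len(parent)) if parent[i] != child[i]]
print(all(i in tainted_positions for i in diff))  # True
```

Crossover can be restricted the same way, exchanging only taint-equivalent regions between parents, so that offspring keep the structure the program's parser expects.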
@InProceedings{ISSTA17p436,
author = {Björn Mathis},
title = {Dynamic Tainting for Automatic Test Case Generation},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {436--439},
doi = {},
year = {2017},
}
Mapping Hardness of Automated Software Testing
Carlos Oliveira
(Monash University, Australia)
Automated Test Case Generation (ATCG) is an important topic in software testing, with a wide range of techniques and tools being used in academia and industry. While their usefulness is widely recognized, due to the labor-intensive nature of the task, the effectiveness of the different techniques in automatically generating test cases for different software systems is not thoroughly understood. Although many studies have introduced various ATCG techniques, much remains to be learned about what makes a particular technique work well (or not) for a specific software system. Therefore, we propose a new methodology to evaluate and select the most effective ATCG technique using structure-based complexity measures. Empirical tests will be performed using two different techniques: Search-based Software Testing (SBST) and Random Testing (RT).
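The selection step can be pictured with a deliberately simple sketch (the measure, the thresholds, and the decision rule are all invented; the proposed methodology would learn such mappings empirically rather than hard-code them):

```python
def cyclomatic_complexity(decision_points):
    # one structure-based complexity measure: McCabe's cyclomatic
    # complexity for a single-entry, single-exit unit
    return decision_points + 1

def recommend_technique(decision_points, threshold=10):
    # toy decision rule: random testing is often competitive on
    # structurally simple units, while search-based testing tends
    # to pay off when branching is deep
    if cyclomatic_complexity(decision_points) <= threshold:
        return "RT"
    return "SBST"

print(recommend_technique(3), recommend_technique(40))  # RT SBST
```

The research question is precisely which measures and which cut-offs (if any) predict technique effectiveness, which is why the thresholds here should be read as placeholders.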
@InProceedings{ISSTA17p440,
author = {Carlos Oliveira},
title = {Mapping Hardness of Automated Software Testing},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {440--443},
doi = {},
year = {2017},
}
Oracle Problem in Software Testing
Gunel Jahangirova
(Fondazione Bruno Kessler, Italy; University College London, UK)
The oracle problem remains one of the key challenges in software testing, for which little automated support has been developed so far. In my thesis work, we introduce a technique for assessing and improving test oracles by reducing the incidence of both false positives and false negatives. Our technique combines test case generation to reveal false positives and mutation testing to reveal false negatives. The experimental results on five real-world subjects show that the fault detection rate of the oracles after improvement increases, on average, by 48.6% (86% over the implicit oracle). Three actual, exposed faults in the studied systems were subsequently confirmed and fixed by the developers. However, our technique contains a human in the loop, a role represented only by the author during the initial experiments. Our next goal is to conduct further experiments where the human in the loop will be represented by real developers. Our second future goal is to address the oracle placement problem. When testing software, developers can place oracles externally or internally to a method. Given a faulty execution state, i.e., one that differs from the expected one, an oracle might be unable to expose the fault if it is placed at a program point with no access to the incorrect program state or where the program state is no longer corrupted. In such a case, the oracle is subject to failed error propagation. Internal oracles are in principle less subject to failed error propagation than external oracles. However, they are also more difficult to define manually. Hence, a key research question is whether a more intrusive oracle placement is justified by its higher fault detection capability.
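The assess-and-improve loop can be illustrated with a toy sketch (the program, mutant, and oracle below are invented stand-ins, not the thesis's subjects): an oracle that rejects the correct program on some generated input is a false positive, and an oracle that accepts every output of a mutant is a false negative.

```python
def program(x):
    # the (assumed correct) implementation under test
    return abs(x)

def mutant(x):
    # seeded fault: the negation branch is dropped
    return x

def oracle(x, result):
    # candidate oracle under assessment; deliberately too weak,
    # it accepts any produced value
    return result is not None

inputs = [-5, -1, 0, 3]
false_positive = any(not oracle(x, program(x)) for x in inputs)
mutant_survives = all(oracle(x, mutant(x)) for x in inputs)

print(false_positive, mutant_survives)
# False True: no false positive, but the surviving mutant exposes a
# false negative, so the oracle should be strengthened (e.g., to
# assert result >= 0)
```

In the actual technique, the human in the loop inspects such surviving mutants and failing correct runs and edits the oracle accordingly, which is the step the future experiments hand over to real developers.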
@InProceedings{ISSTA17p444,
author = {Gunel Jahangirova},
title = {Oracle Problem in Software Testing},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {444--447},
doi = {},
year = {2017},
}