SPIN 2017

24th ACM SIGSOFT International SPIN Symposium on Model Checking of Software (SPIN 2017), July 13–14, 2017, Santa Barbara, CA, USA

SPIN 2017 – Proceedings


Frontmatter

Title Page

Message from the Chairs
This volume contains the proceedings of the 24th International SPIN Symposium on Model Checking of Software (SPIN 2017), which took place at the University of California, Santa Barbara on July 13 and 14, 2017. The SPIN symposia are a forum for researchers and practitioners working in automated, tool-based techniques for the analysis, verification, and validation of software systems, including models and programs. SPIN 2017 features three keynote addresses, 16 full papers, and 5 short papers. The volume also includes a contribution from the organizers of the sister event, the Rigorous Examination of Reactive Systems (RERS) Verification Challenge.
SPIN 2017 Organization
Committee listings
Sponsors

Invited Papers

Cobra: Fast Structural Code Checking (Keynote)
Gerard J. Holzmann
(Nimble Research, USA)
In software analysis, most research has traditionally focused on the development of tools and techniques that can be used to formally prove key correctness properties of a software design. Design errors can be hard to catch without the right tools, and even after decades of development, the right tools can still be frustratingly hard to use. Most software developers know, though, that the number of coding errors is usually much larger than the number of design errors. If something is wrong, the problem can usually be traced back to a coding mistake, and less frequently to an algorithmic error. Tools for chasing down those types of errors are indeed widely used. They are simple to use and considered effective, although they do not offer much more than sophisticated types of trial and error. The first commercial static source code analyzers hit the market a little over a decade ago, aiming to fill the gap between ease of use and more meaningful types of code analysis. Today these tools are mature, and their use is widespread in commercial software development, but they tend to be big and slow. There does not yet seem to be any tool of this type that can be used for interactive code analysis. We'll describe the design and use of a different type of static analysis tool that is designed to be simple, small, and fast enough to be used effectively for interactive code analysis, even for large code bases.
Automated Formal Reasoning about Amazon Web Services (Keynote)
Byron Cook
(Amazon Web Services, USA; University College London, UK)
Automatic and semi-automatic formal verification and model checking tools are now being used within AWS to prove or disprove desired properties of key AWS components. In this session, we outline these efforts and discuss how tools are used to find and then replay proofs of desired properties when software artifacts or networks are modified, thus helping provide security throughout the lifetime of the AWS system.
SunDew: Systematic Automated Security Testing (Keynote)
Domagoj Babic
(Google, USA)
SunDew is a new automated test generation framework developed at Google, focused on finding security bugs in C/C++ code. It combines the strengths of multiple test generation techniques under a single cohesive platform. It leverages the vast amount of computational resources available at Google to massively parallelize the automated test generation and triage. By using a portfolio of test generation techniques, SunDew aims to overcome the coverage saturation (or plateau) that occurs with any individual technique. This saturation manifests as the inability of the technique to discover unexplored parts of a program after a certain number of generated tests. A portfolio of techniques, on the other hand, provides a diversity of test generation strategies that complement each other. SunDew embeds the most recent advances in automated test case generation, which provide precision and thoroughness. For example, symbolic execution uses powerful constraint solvers to generate tests that precisely follow desired program branches. This approach allows symbolic execution to reach code executed under very specific input preconditions that would be difficult to discover randomly. At the same time, recent improvements to coverage-guided automated fuzzing, such as AFL or LibFuzzer, generate tests faster than symbolic execution. Thus, SunDew alternates these approaches by using coverage-guided fuzzing to quickly bring the coverage to a first saturation level, then using symbolic execution to refine the search for harder-to-reach code. This, in turn, may provide additional inputs for coverage-guided fuzzers, and so on. As part of SunDew, we also developed a number of format-aware fuzzers that rely on, amongst other things, machine learning to generate language-aware fuzzers. The SunDew architecture follows a distributed continuous pipeline pattern. It allows a performance-based dynamic resource allocation for the various test generation techniques. This allows us to maximize the combined output of the test suite generation and avoid long plateaus in the coverage growth of the test suite. We discuss the application of SunDew on a variety of fuzzing targets of interest.
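The saturate-then-rotate idea behind the portfolio can be illustrated with a toy loop (all names here are hypothetical; SunDew's real pipeline is distributed and allocates resources by measured performance):

```python
def portfolio_generate(techniques, coverage_of, rounds, plateau=2):
    """Portfolio test generation sketch (hypothetical interfaces).

    `techniques` is a list of input generators (called with a round
    number); `coverage_of(x)` returns the coverage points input `x`
    hits. After `plateau` rounds with no new coverage, rotate to the
    next technique, mimicking the fuzz-then-symbolic alternation.
    """
    covered, suite = set(), []
    tech, stale = 0, 0
    for r in range(rounds):
        x = techniques[tech](r)
        new = coverage_of(x) - covered
        if new:
            covered |= new
            suite.append(x)
            stale = 0
        else:
            stale += 1
            if stale >= plateau:          # saturation: switch technique
                tech = (tech + 1) % len(techniques)
                stale = 0
    return suite, covered

# Toy stand-ins: the "fuzzer" only ever hits branches 0-2, while the
# "symbolic" technique reaches the hard-to-hit branch 99.
fuzz = lambda r: r % 3
symbolic = lambda r: 99
cov = lambda x: {x}
suite, covered = portfolio_generate([fuzz, symbolic], cov, rounds=10)
print(sorted(covered))  # fuzzing saturates at {0, 1, 2}, then 99 is found
```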

Reports

The RERS 2017 Challenge and Workshop (Invited Paper)
Marc Jasper, Maximilian Fecke, Bernhard Steffen, Markus Schordan, Jeroen Meijer, Jaco van de Pol, Falk Howar, and Stephen F. Siegel
(Lawrence Livermore National Laboratory, USA; TU Dortmund, Germany; University of Twente, Netherlands; TU Clausthal, Germany; University of Delaware, USA)
RERS is an annual verification challenge that focuses on LTL and reachability properties of reactive systems. In 2017, RERS was extended to a one-day workshop that, in addition to the original challenge program, also featured an invited talk about possible future developments. As a satellite of ISSTA and SPIN, the 2017 RERS Challenge itself increased emphasis on the parallel benchmark problems which, like their sequential counterparts, were generated using property-preserving transformations in order to scale their level of difficulty. The first half of the RERS workshop focused on the 2017 benchmark profiles, the evaluation of the received contributions, and short presentations of each participating team. The second half comprised discussions about attractive problem scenarios for future benchmarks, like race detection, the topic of the invited talk, and about systematic ways to leverage a tool's performance based on competition benchmarks and machine learning.

Symbolic Verification

Distributed Binary Decision Diagrams for Symbolic Reachability
Wytse Oortwijn, Tom van Dijk, and Jaco van de Pol
(University of Twente, Netherlands; JKU Linz, Austria)
Decision diagrams are used in symbolic verification to concisely represent state spaces. A crucial symbolic verification algorithm is reachability: systematically exploring all reachable system states. Although both parallel and distributed reachability algorithms exist, a combined solution is relatively unexplored. This paper contributes BDD-based reachability algorithms targeting compute clusters: high-performance networks of multi-core machines. The proposed algorithms may use the entire memory of every machine, allowing larger models to be processed while increasing performance by using all available computational power. To do this effectively, a distributed hash table, cluster-based work stealing algorithms, and several caching structures have been designed that all utilise the newest networking technology. The approach is evaluated extensively on a large collection of models, thereby demonstrating speedups up to 51.1x with 32 machines. The proposed algorithms not only benefit from the large amounts of available memory on compute clusters, but also from all available computational resources.
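The fixpoint at the heart of symbolic reachability can be sketched with plain Python sets standing in for BDDs; this is a minimal sequential illustration, whereas the paper's contribution is the distributed, multi-core machinery around this loop:

```python
def symbolic_reachability(initial, successors):
    """Least-fixpoint reachability; sets stand in for BDDs here.

    `initial` is a set of states; `successors(frontier)` returns the
    set of one-step successors of all states in `frontier` (the image
    computation that a BDD package would perform symbolically).
    """
    reached = set(initial)
    frontier = set(initial)
    while frontier:                       # iterate until fixpoint
        image = successors(frontier)
        frontier = image - reached        # keep only genuinely new states
        reached |= frontier
    return reached

# Toy transition system: a counter that wraps around at 10.
def succ(states):
    return {(s + 1) % 10 for s in states}

print(sorted(symbolic_reachability({0}, succ)))  # all 10 counter values
```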

Model Checking I

Addressing Challenges in Obtaining High Coverage When Model Checking Android Applications
Heila Botha, Oksana Tkachuk, Brink van der Merwe, and Willem Visser
(Stellenbosch University, South Africa; NASA Ames Research Center, USA)
Current dynamic analysis tools for Android applications do not get good code coverage since they can only explore a subset of the behaviors of the applications and do not have full control over the environment in which they execute. In this work we use model checking to systematically explore application paths while reducing the analysis size using state matching and backtracking. In particular, we extend the Java PathFinder (JPF) model checking environment for Android. We describe the difficulties one needs to overcome to make this a reality as well as our current approaches to handling these issues. We obtain significantly higher coverage using shorter event sequences on a representative sample of Android apps, when compared to Dynodroid and Sapienz, the current state-of-the-art dynamic analysis tools for Android applications.
LeeTL: LTL with Quantifications over Model Objects
Pouria Mellati, Ehsan Khamespanah, and Ramtin Khosravi
(University of Tehran, Iran; Reykjavik University, Iceland)
Dynamic creation of objects and processes, one of the widely used techniques in developing models, is not supported by the majority of model checking tools. In addition, although there exist a few model checking tools that support dynamic creation of model elements, e.g. Spin, they do not provide a property language for expressing the behavioral specifications of dynamically created elements. In this paper, we address this shortcoming and provide proper support for the model checking of object-based models which contain dynamic object creation. To this aim, we propose LeeTL, a new temporal logic that supports quantifications over model objects. Using LeeTL, it is also possible to traverse objects to access their variables when defining property formulas. We propose an algorithm for transforming LeeTL formulas to Büchi automata, making it possible to use the existing model checking tools which support Büchi automata.
Explicit State Model Checking with Generalized Büchi and Rabin Automata
Vincent Bloemen, Alexandre Duret-Lutz, and Jaco van de Pol
(University of Twente, Netherlands; LRDE, France)
In the automata theoretic approach to explicit state LTL model checking, the synchronized product of the model and an automaton that represents the negated formula is checked for emptiness. In practice, a (transition-based generalized) Büchi automaton (TGBA) is used for this procedure. This paper investigates whether using a more general form of acceptance, namely transition-based generalized Rabin automata (TGRAs), improves the model checking procedure. TGRAs can have significantly fewer states than TGBAs; however, the corresponding emptiness checking procedure is more involved. With recent advances in probabilistic model checking and LTL to TGRA translators, it is only natural to ask whether checking a TGRA directly is more advantageous in practice. We designed a multi-core TGRA checking algorithm and performed experiments on a subset of the models and formulas from the 2015 Model Checking Contest. We observed that our algorithm can be used to replace a TGBA checking algorithm without losing performance. In general, we found little to no improvement by checking TGRAs directly.
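For plain (non-generalized) Büchi automata, the classic sequential emptiness check is nested depth-first search. The following is a minimal sketch of that baseline, not the paper's multi-core TGRA algorithm:

```python
def buchi_nonempty(initial, succ, accepting):
    """Nested DFS: does some accepting state lie on a reachable cycle?

    `succ(q)` yields the successors of state `q`; `accepting` is the
    acceptance set. Returns True iff an accepting lasso exists.
    """
    blue, red = set(), set()

    def red_dfs(seed, q):                 # inner search: cycle through seed
        for r in succ(q):
            if r == seed:
                return True
            if r not in red:
                red.add(r)
                if red_dfs(seed, r):
                    return True
        return False

    def blue_dfs(q):
        blue.add(q)
        for r in succ(q):
            if r not in blue and blue_dfs(r):
                return True
        # post-order: launch the inner search from accepting states
        return q in accepting and red_dfs(q, q)

    return blue_dfs(initial)

# Toy automaton: 0 -> 1 -> 2 -> 1, with state 2 accepting.
edges = {0: [1], 1: [2], 2: [1]}
print(buchi_nonempty(0, lambda q: edges[q], {2}))  # True: cycle 1-2 hits 2
```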

Code Verification

Increasing Usability of Spin-Based C Code Verification using a Harness Definition Language: Leveraging Model-Driven Code Checking to Practitioners
Daniel Ratiu and Andreas Ulrich
(Siemens, Germany)
Due to its capabilities to integrate well with C code, Spin has been used for C code verification based on environment models that describe the context in which the software under verification is expected to run. In practice this approach requires in-depth knowledge of Promela and the underlying technology. Moreover, environment models tend to be verbose and exhibit heavily intertwined statements of Promela and C code. As a result, writing and understanding such hybrid models is difficult and error-prone. To alleviate this problem, we develop a specialized language for expressing environment models used in verification harnesses. Our language harmonizes the use of Promela and C in a homogeneous way that is suitable for practitioners. We show how a small number of language concepts is sufficient to define environments for a wide variety of commonly encountered software components written in C. The approach is integrated in the development platform mbeddr, a technology stack for embedded programming and formal verification developed on top of JetBrains’ MPS language workbench.

Runtime Enforcement

Runtime Enforcement using Büchi Games
Matthieu Renard, Antoine Rollet, and Yliès Falcone
(LaBRI, France; Bordeaux INP, France; University of Bordeaux, France; Grenoble Alpes University, France; Inria, France; CNRS, France; Laboratoire d'Informatique de Grenoble, France)
We leverage Büchi games for the runtime enforcement of regular properties with uncontrollable events. Runtime enforcement consists in modifying the execution of a running system to have it satisfy a given regular property, modelled by an automaton. We revisit runtime enforcement with uncontrollable events and propose a framework where we model the runtime enforcement problem as a Büchi game and synthesise sound, compliant, and optimal enforcement mechanisms as strategies. We present algorithms and a tool implementing enforcement mechanisms. We reduce the complexity of the computations performed by enforcement mechanisms at runtime by pre-computing the decisions of enforcement mechanisms ahead of time.
Runtime Enforcement of Reactive Systems using Synchronous Enforcers
Srinivas Pinisetty, Partha S. Roop, Steven Smyth, Stavros Tripakis, and Reinhard von Hanxleden
(Aalto University, Finland; University of Gothenburg, Sweden; University of Auckland, New Zealand; University of Kiel, Germany; University of California at Berkeley, USA)
Synchronous programming is a paradigm of choice for the design of safety-critical reactive systems. Runtime enforcement is a technique to ensure that the output of a black-box system satisfies some desired properties. This paper deals with the problem of runtime enforcement in the context of synchronous programs. We propose a framework where an enforcer monitors both the inputs and the outputs of a synchronous program and (minimally) edits erroneous inputs/outputs in order to guarantee that a given property holds. We define enforceability conditions, develop an online enforcement algorithm, and prove its correctness. We also report on an implementation of the algorithm on top of the KIELER framework for the SCCharts synchronous language. Experimental results show that enforcement has minimal execution time overhead, which decreases proportionally with larger benchmarks.
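The tick-wise editing idea can be sketched as follows. All interfaces here are hypothetical stand-ins; the paper works with synchronous programs, formal enforceability conditions, and minimal edits:

```python
def make_enforcer(delta, safe, outputs):
    """A minimal tick-based output enforcer (hypothetical interfaces).

    `delta(state, inp, out)` is the property automaton's transition
    function, `safe` the set of non-violating states, and `outputs`
    the alphabet of possible outputs. Each tick the enforcer keeps the
    program's output if it is safe, otherwise edits it.
    """
    state = {'q': 0}

    def tick(inp, out):
        # prefer the program's own output; fall back to a safe edit
        for candidate in [out] + [o for o in outputs if o != out]:
            nxt = delta(state['q'], inp, candidate)
            if nxt in safe:
                state['q'] = nxt
                return candidate
        raise ValueError('property not enforceable from this state')

    return tick

# Property: the output must never be 1 twice in a row.
def delta(q, inp, out):
    return min(q + 1, 2) if out == 1 else 0   # state 2 is the violation

tick = make_enforcer(delta, safe={0, 1}, outputs=[0, 1])
print([tick(None, o) for o in [1, 1, 0, 1]])  # second 1 is edited to 0
```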

Model Checking - Short Papers

SIMPAL: A Compositional Reasoning Framework for Imperative Programs
Lucas Wagner, David Greve, and Andrew Gacek
(Iowa State University, USA; Rockwell Collins, USA)
The Static IMPerative AnaLyzer (SIMPAL) is a tool for performing compositional reasoning over software programs that utilize preexisting software components. SIMPAL features a specification language, called Limp, for modeling programs that utilize preexisting components. Limp is an extension of the Lustre synchronous data flow language. Limp extends Lustre by introducing control flow elements, global variables, and syntax specifying preconditions, postconditions, and global variable interactions of preexisting components. SIMPAL translates Limp programs to an equivalent Lustre representation which can be passed to the JKind model checking tool to perform assume-guarantee reasoning, reachability, and viability analyses. The feedback from these analyses can be used to refine the program to ensure the software functions as intended.
Verification-Driven Development of ICAROUS Based on Automatic Reachability Analysis: A Preliminary Case Study
Marco A. Feliú, Camilo Rocha, and Swee Balachandran
(National Institute of Aerospace, USA; Pontificia Universidad Javeriana, Colombia)
The Integrated and Configurable Algorithms for Reliable Operations of Unmanned Systems (ICAROUS) is a software architecture being developed for the robust integration of mission-specific software modules and highly assured core software modules. This paper reports on the use of automatic reachability analysis during the development of ICAROUS, as a first step towards a broader formal verification effort of the software architecture. It explains how simulation based on state-space exploration and LTL model checking has been performed on a formal executable specification of the system in rewriting logic. Overall, this effort has unveiled issues such as deadlocks and undesired behavior, and has helped improve the ICAROUS design and source code.
Formal Verification of Data-Intensive Applications through Model Checking Modulo Theories
Marcello M. Bersani, Francesco Marconi, Matteo Rossi, Madalina Erascu, and Silvio Ghilardi
(Politecnico di Milano, Italy; Institute e-Austria Timisoara, Romania; West University of Timisoara, Romania; University of Milan, Italy)
We present our efforts on the formalization and automated formal verification of data-intensive applications based on the Storm technology, a well-known and pioneering framework for developing streaming applications. The approach is based on the so-called array-based systems formalism, introduced by Ghilardi et al., a suitable abstraction of infinite-state systems that we used to model the runtime behavior of Storm-based applications. The formalization consists of quantified formulae, belonging to a suitable fragment of first-order logic, that symbolically represent array-based systems. The verification consists in checking whether some safety property holds or not for the system. Both formalization and verification are performed in the same framework, namely the state-of-the-art Cubicle model checker.

Program Synthesis

Practical Controller Synthesis for MTL0,∞
Guangyuan Li, Peter Gjøl Jensen, Kim Guldstrand Larsen, Axel Legay, and Danny Bøgsted Poulsen
(Institute of Software at Chinese Academy of Sciences, China; University at Chinese Academy of Sciences, China; Aalborg University, Denmark; Inria, France; University of Kiel, Germany)
Metric Temporal Logic MTL0,∞ is a timed extension of linear temporal logic, LTL, with time intervals whose left endpoints are zero or whose right endpoints are infinity. Whereas the satisfiability and model-checking problems for MTL0,∞ are both decidable, we note that the controller synthesis problem for MTL0,∞ is unfortunately undecidable. As a remedy, we propose an approximate method for the synthesis problem, which we demonstrate to be adequate and scalable to practical examples. We define a method for converting MTL0,∞ formulas into (nondeterministic) Timed Game Büchi Automata and furthermore show how to construct determinized over- and underapproximations of such an automaton. For the proposed method, we present a toolchain seamlessly integrating the components needed for practical MTL0,∞ synthesis. Lastly, we demonstrate the applicability and scalability of the proposed method on a pair of case studies.
An Ordered Approach to Solving Parity Games in Quasi Polynomial Time and Quasi Linear Space
John Fearnley, Sanjay Jain, Sven Schewe, Frank Stephan, and Dominik Wojtczak
(University of Liverpool, UK; National University of Singapore, Singapore)
Parity games play an important role in model checking and synthesis. In their paper, Calude et al. have recently shown that these games can be solved in quasi-polynomial time. We show that their algorithm can be implemented efficiently: we use their data structure as a progress measure, allowing for a backward implementation instead of a complete unravelling of the game. To achieve this, a number of changes have to be made to their techniques, where the main one is to add power to the antagonistic player that allows for determining her rational move without changing the outcome of the game. We provide a first implementation for a quasi-polynomial algorithm, test it on small examples, and provide a number of side results, including minor algorithmic improvements, a quasi bi-linear complexity in the number of states and edges for a fixed number of colours, and matching lower bounds for the algorithm of Calude et al.
A Hot Method for Synthesising Cool Controllers
Idress Husien, Nicolas Berthier, and Sven Schewe
(University of Liverpool, UK)
Several general search techniques, such as genetic programming and simulated annealing, have recently been investigated for synthesising programs from specifications of desired objective behaviours. In this context, these techniques explore the space of all candidate programs by performing local changes to candidates selected by means of a measure of their fitness w.r.t. the desired objectives. Previous performance results advocated the use of simulated annealing over genetic programming for such problems. In this paper, we investigate the application of these techniques to the computation of deterministic strategies solving symbolic Discrete Controller Synthesis (DCS) problems, where a model of the system to control is given along with desired objective behaviours. We experimentally confirm that relative performance results are similar to program synthesis, and give a complexity analysis of our simulated annealing algorithm for symbolic DCS.

Model Checking II

Backward Coverability with Pruning for Lossy Channel Systems
Thomas Geffroy, Jérôme Leroux, and Grégoire Sutre
(University of Bordeaux, France; CNRS, France; LaBRI, France)
Driven by the concurrency revolution, the study of the coverability problem for Petri nets has regained a lot of interest in the recent years. A promising approach, which was presented in two papers last year, leverages a downward-closed forward invariant to accelerate the classical backward coverability analysis for Petri nets. In this paper, we propose a generalization of this approach to the class of well-structured transition systems (WSTSs), which contains Petri nets. We then apply this generalized approach to lossy channel systems (LCSs), a well-known subclass of WSTSs. We propose three downward-closed forward invariants for LCSs. One of them counts the number of messages in each channel, and the other two keep track of the order of messages. An experimental evaluation demonstrates the benefits of our approach.
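For Petri nets, the pruned backward search can be sketched with minimal markings representing upward-closed sets. This is a toy instance of the idea with hypothetical names; the paper generalizes the technique to WSTSs and applies it to LCSs:

```python
def backward_coverability(init, target, transitions, invariant):
    """Backward coverability for a Petri net, pruned by an invariant.

    Markings are tuples. `transitions` is a list of (pre, post)
    vectors. `invariant(m)` over-approximates the (downward-closed)
    reachable markings; minimal predecessors outside it are pruned.
    Returns True iff some marking >= `target` is coverable from `init`.
    """
    def pred(m, pre, post):               # minimal t-predecessor of up(m)
        return tuple(max(x - q, 0) + p for x, p, q in zip(m, pre, post))

    def leq(a, b):
        return all(x <= y for x, y in zip(a, b))

    basis, frontier = {target}, {target}
    while frontier:
        new = set()
        for m in frontier:
            for pre, post in transitions:
                b = pred(m, pre, post)
                if invariant(b) and not any(leq(c, b) for c in basis):
                    new.add(b)             # not subsumed: keep it
        basis |= new
        frontier = new
    return any(leq(b, init) for b in basis)

# Net: t1 moves the token p0 -> p1, t2 moves it back. The token count
# is invariant, so marking (1, 1) is not coverable from (1, 0).
ts = [((1, 0), (0, 1)), ((0, 1), (1, 0))]
inv = lambda m: sum(m) <= 1               # pruning invariant: one token
print(backward_coverability((1, 0), (1, 1), ts, inv))  # False
```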
Model Learning and Model Checking of SSH Implementations
Paul Fiterău-Broştean, Toon Lenaerts, Erik Poll, Joeri de Ruiter, Frits Vaandrager, and Patrick Verleg
(Radboud University Nijmegen, Netherlands)
We apply model learning on three SSH implementations to infer state machine models, and then use model checking to verify that these models satisfy basic security properties and conform to the RFCs. Our analysis showed that all tested SSH server models satisfy the stated security properties, but uncovered several violations of the standard.
CARET Model Checking for Malware Detection
Huu-Vu Nguyen and Tayssir Touili
(University of Paris Diderot, France; LIPN, France; CNRS, France; University of Paris North, France)
The amount of malware is growing fast. Traditional malware detectors based on signature matching or code emulation are easy to circumvent. To overcome this problem, model checking has recently emerged as a technique that has been extensively applied for malware detection. Pushdown systems were proposed as a natural model for programs, since they allow keeping track of the stack, while extensions of LTL and CTL were considered for malicious behavior specification. However, LTL- and CTL-like formulas cannot express behaviors with matching calls and returns. In this paper, we propose to use CARET for malicious behavior specification. Since CARET formulas for malicious behaviors are huge, we propose to extend CARET with variables, quantifiers, and predicates over the stack. Our new logic is called SPCARET. We reduce the malware detection problem to the model checking problem of PDSs against SPCARET formulas, and we propose efficient algorithms to model check SPCARET formulas for PDSs. We implemented our algorithms in a tool for malware detection and obtained encouraging results.

Program Sketching

EdSketch: Execution-Driven Sketching for Java
Jinru Hua and Sarfraz Khurshid
(University of Texas at Austin, USA)
Sketching is a relatively recent approach to program synthesis, which has shown much promise. The key idea in sketching is to allow users to write partial programs that have ''holes'' and provide test harnesses or reference implementations, and let synthesis tools create program fragments that fill the holes such that the resulting complete program has the desired functionality. Traditional solutions to the sketching problem perform a translation to SAT and employ counterexample-guided inductive synthesis (CEGIS). While effective for a range of programs, when applied to real applications, such translation-based approaches have a key limitation: they require either translating all relevant libraries that are invoked directly or indirectly by the given sketch -- which can lead to impractical SAT problems -- or creating models of those libraries -- which can require much manual effort. This paper introduces execution-driven sketching, a novel approach for synthesis of Java programs with respect to the given test suite using a backtracking search that is commonly employed in software model checkers. The key novelty of our work is to introduce effective pruning strategies to efficiently explore the actual program behaviors in presence of libraries and to provide a practical solution to sketching small parts of real-world applications, which may use complex constructs of modern languages, such as reflection or native calls. Our tool EdSketch embodies our approach in two forms: a stateful search based on the Java PathFinder model checker; and a stateless search based on re-execution inspired by the VeriSoft model checker. Experimental results show that EdSketch's performance compares well with the well-known SAT-based Sketch system for a range of small but complex programs, and moreover, that EdSketch can complete some sketches that require handling complex constructs.
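The search underlying execution-driven sketching can be illustrated by brute-force hole filling validated against a test suite. The interfaces below are hypothetical; EdSketch's contribution is the pruning that makes such a search practical for real Java programs:

```python
import itertools

def complete_sketch(sketch, holes, candidates, tests):
    """Execution-driven sketching sketch: brute-force hole filling.

    `sketch(fills)` builds a program from one fragment per hole;
    `candidates` maps each hole to candidate fragments (plain Python
    callables here). A filling is accepted iff every test passes.
    """
    for fills in itertools.product(*(candidates[h] for h in holes)):
        program = sketch(dict(zip(holes, fills)))
        if all(t(program) for t in tests):   # execute the test suite
            return program
    return None

# Partial max(): the comparison operator is the hole.
def sketch(fills):
    return lambda a, b: a if fills['cmp'](a, b) else b

ops = {'cmp': [lambda a, b: a < b, lambda a, b: a > b]}
tests = [lambda f: f(2, 5) == 5, lambda f: f(7, 3) == 7]
best = complete_sketch(sketch, ['cmp'], ops, tests)
print(best(4, 9))  # 9: only the a > b comparison passes both tests
```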

Testing

Stateless Model Checking of the Linux Kernel's Hierarchical Read-Copy-Update (Tree RCU)
Michalis Kokologiannakis and Konstantinos Sagonas
(National Technical University of Athens, Greece; Uppsala University, Sweden)
Read-Copy-Update (RCU) is a synchronization mechanism used heavily in key components of the Linux kernel, such as the virtual filesystem (VFS), to achieve scalability by exploiting RCU's ability to allow concurrent reads and updates. RCU's design is non-trivial and requires significant effort to fully understand, let alone to become convinced that its implementation is faithful to its specification and provides its claimed properties. The fact that, as time goes by, Linux kernels are becoming increasingly more complex and are employed in machines with more and more cores and weak memory does not make the situation any easier. This paper presents an approach to systematically test the code of the main flavor of RCU used in the Linux kernel (Tree RCU) for concurrency errors, both under sequential consistency and weak memory. Our modeling allows Nidhugg, a stateless model checking tool, to reproduce, within seconds, safety and liveness bugs that have been reported for RCU. More importantly, we were able to verify the Grace-Period guarantee, the basic guarantee that RCU offers, on several Linux kernel versions (non-preemptible builds). Our approach is effective, both in dealing with the increased complexity of recent Linux kernels and in terms of the time that the process requires. We have good reasons to believe that our effort constitutes a big step towards making tools such as Nidhugg part of the standard testing infrastructure of the Linux kernel.
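The stateless exploration that tools like Nidhugg perform can be sketched by replaying every interleaving of a toy program from scratch; no states are stored between schedules. This sketch omits partial-order reduction and weak memory entirely:

```python
import itertools

def explore(threads, init, check):
    """Stateless model checking sketch: replay every interleaving.

    Each thread is a list of steps `step(state)` over a shared-state
    dict; every schedule is re-executed from scratch (nothing stored),
    and `check` is the safety property. Returns a violating schedule
    (a tuple of thread ids) or None.
    """
    ids = [i for i, t in enumerate(threads) for _ in t]
    for order in set(itertools.permutations(ids)):
        state = dict(init)
        pcs = [0] * len(threads)          # per-thread program counters
        for tid in order:                 # replay one schedule
            threads[tid][pcs[tid]](state)
            pcs[tid] += 1
        if not check(state):
            return order
    return None

# Two threads perform a non-atomic increment via a local 'register'.
def load(tid):
    return lambda s: s.__setitem__('r%d' % tid, s['x'])

def store(tid):
    return lambda s: s.__setitem__('x', s['r%d' % tid] + 1)

threads = [[load(0), store(0)], [load(1), store(1)]]
bad = explore(threads, {'x': 0}, lambda s: s['x'] == 2)
print(bad is not None)  # True: some interleaving loses an update
```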
Optimizing Parallel Korat Using Invalid Ranges
Nima Dini, Cagdas Yelen, and Sarfraz Khurshid
(University of Texas at Austin, USA)
Constraint-based input generation enables systematic testing for effective bug finding, but requires exploration of very large spaces of candidate inputs. This paper introduces a novel approach to optimize input generation using Korat – a solver for constraints written as imperative predicates in Java – when Korat is executed more than once for the same constraint solving problem. Our key insight is that in certain application scenarios the Korat search over the same state space and constraint is repeated across separate runs of Korat, and an earlier run can be summarized to optimize a later run. We introduce invalid ranges to represent parts of the exploration space that do not contain any valid inputs but must be explicitly explored by Korat. Our approach directly prunes these parts in a future run of Korat over the same search problem. We develop our approach for two settings: a sequential setting where the Korat search is run using one worker (i.e., processing unit), and a parallel setting where the search is distributed to several workers. In the parallel setting, we build on a previous technique for parallel Korat, namely SEQ-ON, and integrate invalid ranges with it. Experimental evaluation using 6 subjects shows that our approach achieves: in the sequential setting, a speedup of up to 2.82X over sequential Korat (in comparison, SEQ-ON does not provide any speedup in the sequential setting); and in the distributed setting, using up to 32 workers, a speedup of up to 38.84X over sequential Korat (using 1 worker), and up to 3.04X over SEQ-ON in terms of total execution time across the workers.
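The invalid-range idea can be sketched over a linearized candidate space: a first run records maximal runs of invalid candidates, and a repeated run over the same problem jumps over them. The encoding below is hypothetical; Korat's real search backtracks over field assignments rather than plain indices:

```python
def korat_like_search(num_candidates, decode, rep_ok, skip_ranges=()):
    """Korat-style enumeration with invalid-range pruning (a sketch).

    Candidates are indexed 0..num_candidates-1; `decode(i)` builds the
    i-th candidate and `rep_ok` is the imperative validity predicate.
    Returns (valid, invalid_ranges); pass the ranges back in to skip
    them in a repeated run over the same search problem.
    """
    valid, invalid, run_start = [], [], None
    i = 0
    while i < num_candidates:
        for lo, hi in skip_ranges:        # jump over known-invalid parts
            if lo <= i <= hi:
                i = hi + 1
                break
        else:
            if rep_ok(decode(i)):
                if run_start is not None:
                    invalid.append((run_start, i - 1))
                    run_start = None
                valid.append(i)
            elif run_start is None:
                run_start = i             # an invalid run begins here
            i += 1
    if run_start is not None:
        invalid.append((run_start, num_candidates - 1))
    return valid, invalid

# Candidates: pairs (i // 4, i % 4); valid iff strictly increasing.
decode = lambda i: (i // 4, i % 4)
rep_ok = lambda p: p[0] < p[1]
valid1, ranges = korat_like_search(16, decode, rep_ok)
valid2, _ = korat_like_search(16, decode, rep_ok, ranges)  # pruned rerun
print(valid1 == valid2, ranges)  # same valid inputs, ranges skipped
```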

Testing - Short Papers

Guided Test Case Generation for Mobile Apps in the TRIANGLE Project: Work in Progress
Laura Panizo, Alberto Salmerón, María-del-Mar Gallardo, and Pedro Merino
(University of Málaga, Spain)
The evolution of mobile networks and the increasing number of scenarios for mobile applications requires new approaches to ensure their quality and performance. The TRIANGLE project aims to develop an integrated testing framework that allows the evaluation of applications and devices in different network scenarios. This paper focuses on the generation of user interactions that will be part of the test cases for applications. We propose a method that combines model-based testing and guided search, based on the Key Performance Indicators to be measured, and we have evaluated our proposal with an example. Our ultimate goal is to integrate the guided generation of user flows into the TRIANGLE testing framework to automatically generate and execute test cases.
ExpoSE: Practical Symbolic Execution of Standalone JavaScript
Blake Loring, Duncan Mitchell, and Johannes Kinder
(Royal Holloway University of London, UK)
JavaScript has evolved into a versatile ecosystem for not just the web, but also a wide range of server-side and client-side applications. With this increased scope, the potential impact of bugs increases. We introduce ExpoSE, a dynamic symbolic execution engine for Node.js applications. ExpoSE automatically generates test cases to find bugs and cover as many paths in the target program as possible. We discuss the specific challenges for symbolic execution arising from the widespread use of regular expressions in such applications. In particular, we make explicit the issues of capture groups, backreferences, and greediness in JavaScript's flavor of regular expressions, and our models improve over previous work that only partially addressed these. We evaluate ExpoSE on three popular JavaScript libraries that make heavy use of regular expressions, and we report a previously unknown bug in the Minimist library.
