PLDI 2016 Workshops
37th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI 2016)

5th ACM SIGPLAN International Workshop on the State Of the Art in Program Analysis (SOAP 2016), June 14, 2016, Santa Barbara, CA, USA

SOAP 2016 – Proceedings



Title Page


Frontmatter

Towards Cross-Platform Cross-Language Analysis with Soot
Steven Arzt, Tobias Kussmaul, and Eric Bodden
(TU Darmstadt, Germany; Fraunhofer SIT, Germany; University of Paderborn, Germany; Fraunhofer IEM, Germany)
To assess the security and quality of the growing number of programs on desktop computers, mobile devices, and servers, companies often rely on static analysis techniques. While static analysis has been applied successfully to various problems, the academic literature has largely focused on a subset of programming languages and frameworks, and often on only a single language at a time; many tools, for instance, target only Java and Android. In this paper, we present a first step toward reusing the existing Soot framework and its analyses for other platforms. We implement a frontend that converts the CIL assembly code of the .NET Framework into Soot's Jimple representation and show that this is possible without modifying Jimple and without undue loss of semantic information. The frontend integrates Java/Android analysis with CIL analysis and scales to large programs. A case study demonstrates the detection of real-world malware that uses CIL code inside an Android app to hide its behavior.
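The key point is that analyses written against Soot's Jimple representation are independent of the frontend that produced the Jimple. As a rough illustration only (this is standard Soot usage, not the paper's CIL frontend, and the process directory is a hypothetical path), the following Java sketch registers a body transformer that runs over every Jimple method body, regardless of whether the body originated from Java bytecode, Android dex code, or a CIL converter:

    import java.util.Collections;
    import java.util.Map;

    import soot.Body;
    import soot.BodyTransformer;
    import soot.PackManager;
    import soot.Transform;
    import soot.options.Options;

    // Minimal sketch: a Jimple-level analysis that any Jimple-producing
    // frontend can reuse unchanged.
    public class CountUnits {
        public static void main(String[] args) {
            // Hypothetical input directory containing the classes to analyze.
            Options.v().set_process_dir(Collections.singletonList("build/classes"));
            Options.v().set_allow_phantom_refs(true);

            PackManager.v().getPack("jtp").add(
                new Transform("jtp.countUnits", new BodyTransformer() {
                    @Override
                    protected void internalTransform(Body b, String phase, Map<String, String> opts) {
                        System.out.println(b.getMethod().getSignature()
                            + ": " + b.getUnits().size() + " Jimple units");
                    }
                }));

            soot.Main.main(args);
        }
    }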

Iceberg: A Tool for Static Analysis of Java Critical Sections
Michael D. Shah and Samuel Z. Guyer
(Tufts University, USA)
In this paper we present a static analysis tool called Iceberg that helps programmers find potential performance bugs in concurrent Java programs. The focus of our work is on identifying critical sections with high variability in their latency: in most cases they execute quickly, but occasionally they stall, holding a lock for an unusually long time and preventing other threads from making progress. The end user experiences such behavior as transient program "hangs". These performance bugs are difficult to find because they are infrequent, transient, and hard to reproduce. This paper describes our initial results from running Iceberg on 24 real-world concurrent programs. Our current approach is to identify all of the code that could be executed inside each critical section, including all methods potentially called from it. We collect a number of code metrics that might indicate potential performance problems, including counts of variable-latency operations such as I/O and memory allocation, as well as overall measurements of critical section size. Using our tool we are able to find critical sections whose behavior is unusual compared to the other critical sections in the same program. Our future work includes a more detailed analysis of control flow through critical sections, as well as a dynamic analysis that measures critical section latencies directly.
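To make the target concrete, here is an illustrative Java example of the pattern Iceberg looks for (our own example, not taken from the 24 benchmark programs): a critical section that is usually cheap but occasionally performs file I/O while holding the lock, so its latency varies by orders of magnitude.

    import java.io.FileWriter;
    import java.io.IOException;

    // Illustrative variable-latency critical section: the lock is normally
    // held only for an in-memory append, but on the flush path it is held
    // across file I/O, stalling every other thread contending for it.
    public class EventLog {
        private final StringBuilder buffer = new StringBuilder();

        public synchronized void append(String event) throws IOException {
            buffer.append(event).append('\n');        // fast path: in-memory
            if (buffer.length() > (1 << 20)) {        // rare slow path
                try (FileWriter out = new FileWriter("events.log", true)) {
                    out.write(buffer.toString());     // I/O while holding the lock
                }
                buffer.setLength(0);
            }
        }
    }

Counting such variable-latency operations (I/O, allocation) reachable from inside a synchronized region is the kind of metric described above.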

Toward an Automated Benchmark Management System
Lisa Nguyen Quang Do, Michael Eichberg, and Eric Bodden
(Fraunhofer IEM, Germany; TU Darmstadt, Germany; University of Paderborn, Germany)
The systematic evaluation of program analyses as well as software-engineering tools requires benchmark suites that are representative of real-world projects in the domains for which the tools or analyses are designed. Such benchmarks currently exist for only a few research areas, and even where they exist, they are often not effectively maintained because of the manual effort required. This often makes it impossible to evaluate new analyses and tools on software that relies on current technologies. We describe ABM, a methodology for semi-automatically mining software repositories to extract up-to-date and representative sets of applications belonging to specific domains. The proposed methodology facilitates the creation of such collections and makes it easier to release updated versions of a benchmark suite. Resulting from an instantiation of the methodology, we present a collection of current real-world Java business web applications. The collection and methodology serve as a starting point for creating current, targeted benchmark suites, and thus help to better evaluate current program-analysis and software-engineering tools.

On the Unsoundness of Static Analysis for Android GUIs
Yan Wang, Hailong Zhang, and Atanas Rountev
(Ohio State University, USA)
Android software presents exciting new challenges for the static analysis community. However, static analyses for Android are typically unsound. This is due to the lack of a specification of the Android framework, the continuous evolution of framework features and behavior, and the absence of soundness arguments and studies by program analysis researchers. Our goal is to investigate one important aspect of this problem: the static modeling of control/data flow due to interactions of the user with the application's GUI. We compare the solutions of three existing static analyses (FlowDroid, IccTA, and Gator) with the actual run-time behavior. Specifically, we observe the run-time sequences of callbacks and their parameters, and match them against the static abstractions provided by these analyses. This study provides new insights into the unsoundness of existing analysis techniques. We conclude with open questions and action items for program analysis researchers working in this increasingly important area.
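As a small illustration of the callback flows in question (our own example, not drawn from the study), consider a dialog button handler: the onClick method below is never invoked from application code, so a static analysis must model the framework's dispatch to it, including the receiver object and its ordering relative to lifecycle callbacks such as onCreate.

    import android.app.Activity;
    import android.app.AlertDialog;
    import android.content.DialogInterface;
    import android.os.Bundle;

    public class ConfirmActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            new AlertDialog.Builder(this)
                .setMessage("Delete all data?")
                .setPositiveButton("OK", new DialogInterface.OnClickListener() {
                    @Override
                    public void onClick(DialogInterface dialog, int which) {
                        wipeData();  // reachable only via a framework-dispatched callback
                    }
                })
                .show();
        }

        private void wipeData() { /* hypothetical application logic */ }
    }

Comparing the callback sequences that analyses such as FlowDroid, IccTA, and Gator predict for code like this against the sequences observed at run time is the core of the study.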

LifeJacket: Verifying Precise Floating-Point Optimizations in LLVM
Andres Nötzli and Fraser Brown
(Stanford University, USA)
Users depend on correct compiler optimizations, but floating-point arithmetic is difficult to optimize transparently. Manually reasoning about all of floating-point arithmetic's esoteric properties is error-prone and increases the cost of adding new optimizations. We present an approach to automate reasoning about precise floating-point optimizations using satisfiability modulo theories (SMT) solvers. We implement the approach in LifeJacket, a system for automatically verifying precise floating-point optimizations for the LLVM assembly language. We have used LifeJacket to verify 43 LLVM optimizations and to discover eight incorrect ones, including three previously unreported problems. LifeJacket is an open-source extension of the Alive system for optimization verification.
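As an example of the subtleties involved (our illustration, not necessarily one of the eight incorrect optimizations found), consider rewriting fadd float %x, 0.0 to %x. It looks harmless, but under precise IEEE 754 semantics it is unsound because of signed zeros: for %x = -0.0, the sum (-0.0) + (+0.0) rounds to +0.0, which is not bit-identical to -0.0. The variant fadd float %x, -0.0, by contrast, returns %x for every input, including NaN and the infinities, and is therefore a valid rewrite. Encoding both sides in the SMT floating-point theory lets a solver either prove such an equivalence or produce the -0.0 counterexample automatically, which is the kind of check LifeJacket performs for each LLVM optimization.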

