
4th ACM SIGPLAN International Workshop on the State Of the Art in Program Analysis (SOAP 2015), June 14, 2015, Portland, OR, USA

SOAP 2015 – Proceedings



Title Page

The 4th ACM SIGPLAN International Workshop on the State Of the Art in Program Analysis (SOAP 2015) takes place on June 14, 2015, co-located with PLDI 2015.

Using Targeted Symbolic Execution for Reducing False-Positives in Dataflow Analysis
Steven Arzt, Siegfried Rasthofer, Robert Hahn, and Eric Bodden
(TU Darmstadt, Germany; Fraunhofer SIT, Germany)
Static data flow analysis is an indispensable tool for finding potentially malicious data leaks in software programs. Programs, nowadays often consisting of millions of lines of code, have grown much too large to allow for a complete manual inspection. Nevertheless, security experts need to judge whether an application is trustworthy, developers need to find bugs, and quality experts need to assess the maturity of software products. Thus, analysts take advantage of automated data flow analysis tools to find candidates for suspicious leaks, which are then further investigated. While much progress has been made in the area, with a broad variety of static data flow analysis tools proposed in academia and offered commercially, the number of false alarms raised by these tools is still a concern. Many of the false alarms are reported because the analysis tool detects data flows along paths that are not realizable at runtime, e.g., due to contradictory conditions on the path. Still, every single report is a potential issue and must be reviewed by an expert, which is labor-intensive and costly. In this work, we therefore propose TASMAN, a post-analysis based on symbolic execution that removes such false data leaks along unrealizable paths from the result set. Thus, it greatly improves the usefulness of the result presented to the human analyst. In our experiments on DroidBench examples, TASMAN reduces the number of false positives by about 80% without pruning any true positives. TASMAN also identified false positives in real-world examples, which we confirmed by hand. With an average execution time of 5.4 seconds per alleged leak to be checked on large real-world applications, TASMAN is fast enough for practical use.
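The kind of unrealizable-path false positive that TASMAN prunes can be illustrated with a small, hypothetical Java snippet (the method and class names below are ours, not TASMAN's):

```java
public class UnrealizablePath {
    // Returns true iff a "tainted" value reaches the sink for input x.
    static boolean leaks(int x) {
        boolean tainted = false;
        if (x > 10) {
            tainted = true;       // source: the secret taints the data
        }
        if (x < 5) {
            return tainted;       // sink: a leak would occur here if tainted
        }
        return false;
    }

    public static void main(String[] args) {
        // The guards x > 10 and x < 5 are contradictory, so no concrete input
        // can both taint the data and reach the sink. A path-insensitive
        // analysis that joins the two branches reports a leak anyway; a
        // symbolic executor sees the path condition (x > 10 && x < 5) is
        // unsatisfiable and can prune the report.
        boolean anyLeak = false;
        for (int x = -100; x <= 100; x++) {
            anyLeak |= leaks(x);
        }
        System.out.println("realizable leak: " + anyLeak); // false
    }
}
```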

Understanding Caller-Sensitive Method Vulnerabilities: A Class of Access Control Vulnerabilities in the Java Platform
Cristina Cifuentes, Andrew Gross, and Nathan Keynes
(Oracle Labs, Australia; Oracle, USA; Oracle, Australia)
Late 2012 and early 2013 saw a spike of new Java vulnerabilities, reported in 0-day attacks and used in the wild, that allowed attackers to bypass the Java sandbox. These vulnerabilities were of a variety of types: unguarded caller-sensitive methods, unsafe use of doPrivileged, invalid deserialisation, invalid serialisation, and more. Oracle reacted quickly by making patches available and has since increased the scheduled patch update cycle to four releases a year.
Unlike more traditional vulnerabilities such as buffer overflow and cross-site scripting that have been studied in the literature for many years, these new Java vulnerabilities lack a clear definition of what the corresponding security bug type is, and what rules apply to each bug type.
In this paper we give an overview of one type of access control vulnerability that affects the Java platform: unguarded caller-sensitive method calls. The aim of the paper is to explain to the practitioner what the vulnerability is, why it happens in the context of the Java security model, and how to fix it. For the program analysis community, the aim is to define the security bug type so that this type of vulnerability can be detected.
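A minimal sketch of the bug type: `Class.forName(String)` is a real caller-sensitive JDK method that resolves class names using the class loader of its immediate caller, while the wrapper below is a hypothetical example of ours, not code from the paper.

```java
public class CallerSensitiveDemo {
    // Unsafe pattern: wrapping a caller-sensitive method makes the trusted
    // wrapper the immediate caller. Untrusted code calling unguardedLookup
    // therefore borrows the wrapper's class loader for the resolution;
    // this is the "unguarded caller-sensitive method call" bug type.
    public static Class<?> unguardedLookup(String name)
            throws ClassNotFoundException {
        return Class.forName(name); // attacker-controlled name, trusted caller
    }

    public static void main(String[] args) throws Exception {
        // Under a sandbox, untrusted code could use the wrapper to resolve
        // classes in restricted packages that its own loader would deny.
        Class<?> c = unguardedLookup("java.lang.String");
        System.out.println(c.getName()); // java.lang.String
    }
}
```

A guarded version would first check the real caller's permissions (e.g., via a security-manager check) before delegating.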

Combining Type-Analysis with Points-To Analysis for Analyzing Java Library Source-Code
Nicholas Allen, Padmanabhan Krishnan, and Bernhard Scholz
(Oracle Labs, Australia)
The predominant work in static program analysis focuses on whole-program analysis, assuming that the whole program is present at analysis time and the only unknowns are program inputs. However, for library designers it is of paramount importance to perform semantic checks via static program analysis tools without the presence of an application. The literature offers little research on partial program analysis for object-oriented programming languages such as Java. Analyzing libraries statically requires novel abstractions for all possible applications, which are not known a priori. In this work, we present a static program analysis technique that reasons about the state of the library by approximating the behaviour of all possible applications. The key contributions are (1) the combination of type-analysis with points-to analysis and (2) the development of a most-general application (MGA) as a type, which represents the interaction of the library with all possible applications.
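A minimal sketch of the most-general-application idea, with illustrative names only (this is not the paper's actual abstraction): objects flowing into the library from an unknown application are modeled by a single abstract object whose possible runtime types are bounded by the declared type, which is where type information refines the points-to facts.

```java
import java.util.Set;

public class MgaSketch {
    // Abstract heap objects: concrete allocations seen inside the library,
    // plus one MGA object standing in for anything an unknown application
    // may pass across the library boundary.
    enum Kind { CONCRETE, MGA }

    static final class AbstractObject {
        final Kind kind;
        final String declaredType;
        AbstractObject(Kind kind, String declaredType) {
            this.kind = kind;
            this.declaredType = declaredType;
        }
    }

    // Type information refines points-to information: a virtual call on the
    // MGA object may dispatch to any application subtype of the declared
    // type, whereas a concrete object dispatches on its exact type.
    static Set<String> possibleReceiverTypes(AbstractObject o,
                                             Set<String> appSubtypes) {
        return o.kind == Kind.MGA ? appSubtypes : Set.of(o.declaredType);
    }

    public static void main(String[] args) {
        AbstractObject fromApp =
                new AbstractObject(Kind.MGA, "java.util.Comparator");
        Set<String> targets = possibleReceiverTypes(
                fromApp, Set.of("java.util.Comparator", "app.AppComparator"));
        System.out.println(targets.size() + " possible receiver types");
    }
}
```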

Droidel: A General Approach to Android Framework Modeling
Sam Blackshear, Alexandra Gendreau, and Bor-Yuh Evan Chang
(University of Colorado at Boulder, USA)
We present an approach and tool for general-purpose modeling of Android for static analysis. Our approach is to explicate the reflective bridge between the Android framework and an application to make the framework source amenable to static analysis. Our Droidel tool does this by automatically generating application-specific stubs that summarize the reflective behavior for a particular app. The result is a program with a single entry-point that can be processed by any existing Java analysis platform (e.g., Soot, WALA, Chord). We compared call graphs constructed using Droidel to call graphs constructed using a state-of-the-art Android model and found that Droidel captures more concrete behaviors.
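Droidel's generated stubs are application-specific; the following is only a schematic sketch of ours (class and method names are hypothetical) of what explicating the reflective framework-to-app bridge into a single analyzable entry point can look like:

```java
public class GeneratedEntryPoint {
    // Stand-in for an application component; a real Android app would extend
    // android.app.Activity, and the framework would create and drive it
    // reflectively based on the app's manifest.
    static class MainActivity {
        StringBuilder log = new StringBuilder();
        void onCreate() { log.append("create;"); }
        void onResume() { log.append("resume;"); }
    }

    // Single synthetic entry point: the reflective bridge is explicated as
    // ordinary allocations and calls, so any Java analysis platform (Soot,
    // WALA, Chord) can build a call graph starting from main alone.
    public static void main(String[] args) {
        MainActivity a = new MainActivity(); // was: reflective instantiation
        a.onCreate();                        // lifecycle callbacks, explicit
        a.onResume();
        System.out.println(a.log); // create;resume;
    }
}
```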

Design Your Analysis: A Case Study on Implementation Reusability of Data-Flow Functions
Johannes Lerch and Ben Hermann
(TU Darmstadt, Germany)
The development of efficient data flow analyses is a complicated task. As requirements change and special cases have to be considered, implementations can become hard to maintain, test, and reuse. We propose to design these analyses according to the principle of separation of concerns. To that end, we present a reference design for data flow analyses in the context of the IFDS/IDE algorithm. We conducted a case study to assess the level of reuse that can be achieved with our design and found it helpful for the efficient development of new analyses.
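One way to read the separation-of-concerns idea in an IFDS setting: flow functions map a data-flow fact to a set of successor facts, and small single-concern functions (gen, kill) can be composed rather than entangled in one implementation. This is a minimal sketch under that assumption; the interface and names are ours, not the paper's actual API.

```java
import java.util.HashSet;
import java.util.Set;

public class FlowFunctions {
    // IFDS-style flow function: one fact in, a set of facts out.
    interface FlowFunction<F> { Set<F> compute(F fact); }

    // Kill: drops a specific fact (e.g., a variable that is redefined).
    static <F> FlowFunction<F> kill(F killed) {
        return fact -> {
            Set<F> out = new HashSet<>();
            if (!fact.equals(killed)) out.add(fact);
            return out;
        };
    }

    // Gen: a trigger fact additionally generates a new fact
    // (e.g., "x = y" propagates taint from y to x).
    static <F> FlowFunction<F> gen(F trigger, F generated) {
        return fact -> {
            Set<F> out = new HashSet<>();
            out.add(fact);
            if (fact.equals(trigger)) out.add(generated);
            return out;
        };
    }

    // Sequential composition reuses the small functions unchanged.
    static <F> FlowFunction<F> compose(FlowFunction<F> f, FlowFunction<F> g) {
        return fact -> {
            Set<F> out = new HashSet<>();
            for (F mid : f.compute(fact)) out.addAll(g.compute(mid));
            return out;
        };
    }

    public static void main(String[] args) {
        // Statements "x = y" then "y = 0": gen taint for x, then kill y.
        FlowFunction<String> ff = compose(gen("y", "x"), kill("y"));
        System.out.println(ff.compute("y")); // [x]
    }
}
```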

