FSE 2016 Workshops
24th ACM SIGSOFT International Symposium on the Foundations of Software Engineering (FSE 2016)

International Workshop on App Market Analytics (WAMA 2016), November 14, 2016, Seattle, WA, USA

WAMA 2016 – Proceedings


International Workshop on App Market Analytics (WAMA 2016)

Frontmatter

Title Page

Message from the Chairs
We would like to welcome you to the 1st International Workshop on App Market Analytics (WAMA), where we seek to bring together researchers and practitioners to discuss research challenges, ideas, initiatives, and results that leverage app market data to answer pertinent software engineering questions.
Software applications (apps) are distributed very differently today than they once were: through centralized marketplaces, which have changed the way developers interact with users and the way software is released and consumed. These app markets, now standard for mobile apps, are gaining popularity for desktop apps, games, and even open source software. Such markets make it easier for developers to release new apps and update existing ones, and easier for users to search for, compare, and download new apps and to keep their existing apps up to date. Additionally, app markets provide useful guidance to developers so that end users receive the best-quality apps. Finally, markets are public-facing and expose unique data beyond the app itself, such as user comments, release notes, and app popularity; they can therefore be mined and the resulting data analyzed by researchers and analytics companies. Our goal was thus to seek original articles on studies related to app markets, with the aim of making concrete recommendations to app developers, app market operators, developers who provide libraries and frameworks for building apps, and end users.

Data, Metrics, and Tools

Checking App User Interfaces against App Descriptions
Konstantin Kuznetsov, Vitalii Avdiienko, Alessandra Gorla, and Andreas Zeller
(Saarland University, Germany; IMDEA Software Institute, Spain)
Does the advertised behavior of apps correlate with what a user sees on the screen? In this paper, we introduce a technique to statically extract the text from the user interface definitions of an Android app. We use this technique to compare the natural language topics of an app's user interface against the topics from its app store description. A mismatch indicates that some feature is exposed by the user interface but is not present in the description, or vice versa. In the popular Twitter app, for instance, our technique spots UI elements that allow users to make purchases; however, this feature is not mentioned in the app's description. Likewise, we identified a number of apps whose user interface asks users to access or supply sensitive data, yet this "feature" is not mentioned in the description. In the long run, analyzing user interface topics and comparing them against external descriptions opens the way for checking general mismatches between requirements and implementation.
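
The core comparison lends itself to a short sketch. The Python fragment below is not the authors' implementation; the file paths, the attribute list, and the five-topic LDA setup are illustrative assumptions. It extracts user-visible strings from an Android layout definition and compares their topic mixture against the store description:

    import xml.etree.ElementTree as ET
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation
    from scipy.spatial.distance import jensenshannon

    ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

    def ui_text(layout_file):
        """Collect user-visible strings (text, hint, ...) from a layout file."""
        words = []
        for elem in ET.parse(layout_file).iter():
            for attr in ("text", "hint", "contentDescription"):
                value = elem.get(ANDROID_NS + attr)
                if value and not value.startswith("@string/"):  # skip unresolved resource refs
                    words.append(value)
        return " ".join(words)

    docs = [ui_text("res/layout/activity_main.xml"),   # hypothetical layout path
            open("store_description.txt").read()]      # hypothetical description file

    # Fit one topic model over both documents and compare their topic mixtures.
    counts = CountVectorizer(stop_words="english").fit_transform(docs)
    topics = LatentDirichletAllocation(n_components=5, random_state=0).fit_transform(counts)

    # A large distance hints at a UI/description mismatch worth inspecting.
    print("Jensen-Shannon distance:", jensenshannon(topics[0], topics[1]))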

Examining the Relationship between Security Metrics and User Ratings of Mobile Apps: A Case Study
Daniel E. Krutz, Nuthan Munaiah, Andrew Meneely, and Samuel A. Malachowsky
(Rochester Institute of Technology, USA)
The success or failure of a mobile application ("app") is largely determined by user ratings. Users frequently make their app choices based on the ratings of apps in comparison with similar, often competing apps. Users also expect apps to continually provide new features while maintaining quality, or the ratings drop. At the same time, apps must also be secure; but is there a historical trade-off between security and ratings? Or are app store ratings a more all-encompassing measure of product maturity? We used static analysis tools to collect security-related metrics for 38,466 Android apps from the Google Play store. We compared each app's rate of permission misuse, number of requested permissions, and Androrisk score against its user rating.
We found that high-rated apps have statistically significantly higher security risk metrics than low-rated apps; however, the correlations are weak. This result supports the conventional wisdom that users do not factor security risks into their ratings in a meaningful way. This could be due to several reasons, including users not placing much emphasis on security, or the typical user being unable to gauge the security risk level of the apps they use every day.
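
The correlation analysis described here can be sketched in a few lines. In the fragment below, the CSV file and its column names are hypothetical placeholders for the per-app metrics named in the abstract:

    import pandas as pd
    from scipy.stats import spearmanr

    apps = pd.read_csv("app_metrics.csv")  # hypothetical file: one row per app

    # Rank correlation of each security metric against the app's user rating.
    for metric in ("permission_misuse_rate", "requested_permissions", "androrisk_score"):
        rho, p = spearmanr(apps[metric], apps["user_rating"])
        print(f"{metric}: rho={rho:+.3f} (p={p:.4f})")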

Feature-Based Evaluation of Competing Apps
Faiz Ali Shah, Yevhenii Sabanin, and Dietmar Pfahl
(University of Tartu, Estonia)
App marketplaces, e.g., the Google Play Store and the Apple App Store, comprise many competing apps offering largely similar sets of features. Users of competing apps can submit feedback in the form of ratings and textual comments. This feedback helps app developers understand where their app stands in the competition. However, app ratings provide no concrete information about users' perceptions of an app's features compared to those of similar apps. Studies have shown that users express sentiments on app features in app reviews; user reviews are therefore a valuable source for comparing competing apps based on users' sentiments regarding features. So far, researchers have analyzed app reviews to summarize users' sentiments on app features, but the existing approaches have not been used to compare competing apps. In this direction, we analyze the reviews of 25 apps to extract app features, determine competing apps based on feature commonality, and then compare competing apps based on users' sentiments regarding features. We developed a tool prototype that helps app developers identify features that users perceive negatively. The tool prototype is also useful for finding features loved by users in similar apps but missing from one's own app. We demonstrate the usefulness of the tool prototype and give pointers to future work.
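
To illustrate the kind of feature-level comparison the prototype performs, here is a minimal sketch; the feature list is assumed, and NLTK's VADER analyzer stands in for whatever sentiment technique the authors actually use:

    from collections import defaultdict
    from nltk.sentiment import SentimentIntensityAnalyzer  # requires nltk.download("vader_lexicon")

    FEATURES = ["sync", "dark theme", "offline mode"]  # assumed extracted features

    def feature_sentiment(reviews):
        """Mean compound sentiment of the reviews mentioning each feature."""
        sia = SentimentIntensityAnalyzer()
        scores = defaultdict(list)
        for review in reviews:
            for feature in FEATURES:
                if feature in review.lower():
                    scores[feature].append(sia.polarity_scores(review)["compound"])
        return {f: sum(s) / len(s) for f, s in scores.items()}

    # Compare the same feature across two competing apps.
    print(feature_sentiment(["Sync is broken again.", "Love the dark theme!"]))
    print(feature_sentiment(["Flawless sync across all my devices."]))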

CALAPPA: A Toolchain for Mining Android Applications
Vitalii Avdiienko, Konstantin Kuznetsov, Paolo Calciati, Juan Carlos Caiza Román, Alessandra Gorla, and Andreas Zeller
(Saarland University, Germany; IMDEA Software Institute, Spain)
Software engineering researchers and practitioners working on the Android ecosystem frequently have to perform the same tasks over and over: retrieve data from the Google Play store for analysis, decompile the Dalvik bytecode to understand an app's behavior, and analyze application metadata and user reviews. In this paper we present CALAPPA, a highly reusable and customizable toolchain that allows researchers to easily run common analysis tasks on large Android application datasets. CALAPPA includes components to retrieve data from different Android stores and comes with a predefined but extensible set of modules that can analyze app metadata and code.
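
The paper presents CALAPPA itself; the sketch below only illustrates the general shape of such a modular, extensible analysis pipeline. The module names and record fields are hypothetical, not CALAPPA's actual API:

    from typing import Callable

    Module = Callable[[dict], dict]  # each module enriches a per-app record

    def count_permissions(app: dict) -> dict:
        app["n_permissions"] = len(app.get("permissions", []))
        return app

    def flag_popular(app: dict) -> dict:
        app["popular"] = app.get("n_reviews", 0) > 10_000
        return app

    def run_pipeline(apps, modules):
        for module in modules:  # modules run in order over the whole dataset
            apps = [module(app) for app in apps]
        return apps

    dataset = [{"package": "com.example.app",
                "permissions": ["INTERNET", "CAMERA"],
                "n_reviews": 42}]
    print(run_pipeline(dataset, [count_permissions, flag_popular]))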

Darwin: A Static Analysis Dataset of Malicious and Benign Android Apps
Nuthan Munaiah, Casey Klimkowsky, Shannon McRae, Adam Blaine, Samuel A. Malachowsky, Cesar Perez, and Daniel E. Krutz
(Rochester Institute of Technology, USA)
The Android platform comprises the vast majority of the mobile market. Unfortunately, Android apps are not immune to the issues that plague conventional software, including security vulnerabilities, bugs, and permission-based problems. In order to address these issues, we need a better understanding of the apps we use every day. Over the course of more than a year, we collected and reverse engineered 64,868 Android apps from the Google Play store as well as 1,669 malware samples collected from several sources. Each app was analyzed using several static analysis tools to collect a variety of quality- and security-related information. The apps spanned 41 different categories and constituted a total of 576,174 permissions, 39,780 unique signing keys, and 125,159 over-permissions. We present the dataset of these apps, along with a sample set of analytics, on our website (http://darwin.rit.edu), with the option of downloading the dataset for offline evaluation.

Platforms and Releases

More Insight from Being More Focused: Analysis of Clustered Market Apps
Maleknaz Nayebi, Homayoon Farrahi, Ada Lee, Henry Cho, and Guenther Ruhe
(University of Calgary, Canada; University of Toronto, Canada)
The increasing attraction of mobile apps has inspired researchers to analyze apps from different perspectives. Like any software product, apps have attributes such as size, content maturity, rating, category, and number of downloads. Current research studies mostly sample across all apps, which often results in comparisons of apps that are quite different in nature and category (games compared with weather and calendar apps) as well as in size and complexity. As with proprietary software and web-based services, more specific results can be expected from looking at more homogeneous samples, such as those obtained by clustering.
In this paper, we target homogeneous samples of apps to increase the degree of insight gained from analytics. As a proof of concept, we applied the DBSCAN clustering technique and a subsequent correlation analysis between app attributes to a set of 940 open source mobile apps from F-Droid. We showed that (i) clusters of apps with similar characteristics provided more insight than applying the same analysis to the whole dataset, and (ii) defining the similarity of apps via the similarity of topics produced by the Latent Dirichlet Allocation topic modeling technique does not significantly improve clustering results.
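
The clustering step maps naturally to a few lines of scikit-learn; the attribute columns, the CSV source, and the DBSCAN parameters below are illustrative assumptions:

    import pandas as pd
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import DBSCAN
    from scipy.stats import spearmanr

    apps = pd.read_csv("fdroid_apps.csv")  # hypothetical export of app attributes
    features = apps[["size_kb", "rating", "downloads"]]  # assumed attribute columns

    # Cluster standardized attributes; DBSCAN labels noise points as -1.
    apps["cluster"] = DBSCAN(eps=0.5, min_samples=5).fit_predict(
        StandardScaler().fit_transform(features))

    # Correlation analysis within each homogeneous cluster.
    for cluster, group in apps[apps["cluster"] != -1].groupby("cluster"):
        rho, _ = spearmanr(group["rating"], group["downloads"])
        print(f"cluster {cluster} ({len(group)} apps): rating~downloads rho={rho:.2f}")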

To Upgrade or Not to Upgrade? The Release of New Versions to Survive in the Hypercompetitive App Market
Stefano Comino, Fabio M. Manenti, and Franco Mariuzzo
(University of Udine, Italy; University of Padua, Italy; University of East Anglia, UK)
Very low entry barriers and an exceptionally high degree of competition characterize the market for mobile applications. In such an environment, one of the critical issues is how to attract the attention of users. Practitioners and developers are well aware that managing app updates (i.e., releasing new versions of an existing app) is critical to increasing app visibility and keeping users engaged, disguising a hidden strategy to stimulate downloads. We use unbalanced panel data with characteristics of the top 1,000 apps on the iTunes and Google Play stores, for five European countries, to empirically investigate publishers' strategies concerning the release of updates. We find that, only in the case of iTunes, updates boost downloads and are more likely to be released when the app is experiencing poor performance. We interpret this finding as evidence that the lack of quality control on Google Play leads to excessive updating of Android apps.
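
As a rough illustration of this kind of panel analysis (not the authors' econometric specification), a fixed-effects regression of downloads on an update indicator can be sketched with statsmodels; the data file and column names are assumed:

    import pandas as pd
    import statsmodels.formula.api as smf

    panel = pd.read_csv("top_apps_panel.csv")  # hypothetical app-period observations

    # App and period fixed effects enter as dummy variables (C(...)).
    model = smf.ols("log_downloads ~ released_update + C(app_id) + C(period)",
                    data=panel).fit()
    print(model.params["released_update"])  # estimated update effect on log downloads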

The Impact of Cross-Platform Development Approaches for Mobile Applications from the User's Perspective
Iván Tactuk Mercado, Nuthan Munaiah, and Andrew Meneely
(Rochester Institute of Technology, USA)
Mobile app developers today face a hard decision: independently develop native apps for different operating systems, or develop a single cross-platform app. The availability of different tools and approaches to support cross-platform app development only makes the decision harder. In this study, we used user reviews of apps to empirically understand the relationship (if any) between the approach used in an app's development and its perceived quality. We used Natural Language Processing (NLP) models to classify 787,228 user reviews of the Android and iOS versions of 50 apps as complaints in one of four quality concerns: performance, usability, security, and reliability. We found that hybrid apps (on both the Android and iOS platforms) tend to be more prone to user complaints than interpreted/generated apps. In a study of Facebook, an app that underwent a change in development approach from hybrid to native, we found that the change was accompanied by a reduction in user complaints about performance and reliability.
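
The review-classification step can be sketched with a standard text classification pipeline; the toy training data and the scikit-learn model choice below are illustrative assumptions, not the authors' NLP models:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy labeled reviews; a real study would train on many labeled examples.
    train_reviews = ["crashes every time I open it",
                     "login page leaks my password",
                     "so slow on my phone",
                     "buttons are tiny and confusing"]
    train_labels = ["reliability", "security", "performance", "usability"]

    classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
    classifier.fit(train_reviews, train_labels)
    print(classifier.predict(["app freezes and loses my data"]))  # predicted quality concern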

Mining and Characterizing Hybrid Apps
Mohamed Ali and Ali Mesbah
(University of British Columbia, Canada)
Mobile apps have grown tremendously over the past few years. To capitalize on this growth and attract more users, implementing the same mobile app for different platforms has become a common industry practice. Building the same app natively for each platform is resource-intensive and time-consuming, since every platform has different environments, languages, and APIs. Cross-Platform Tools (CPTs) address this challenge by allowing developers to use a common codebase to simultaneously create apps for multiple platforms. Apps created using these CPTs are called hybrid apps. We mine 15,512 hybrid apps and present the first study of its kind on such apps. We identify which CPTs these apps use and how users perceive them. Further, we compare the user-perceived ratings of hybrid apps to native apps of the same category. Finally, we compare the user-perceived ratings of the same hybrid app on the Android and iOS platforms.
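
One simple way to identify which CPT built an app, sketched below, is to look for framework fingerprints inside the APK (which is a ZIP archive); the fingerprint table is an illustrative assumption, not the paper's detection method:

    import zipfile

    # Assumed fingerprints: a file each framework typically ships inside the APK.
    CPT_FINGERPRINTS = {
        "Cordova/PhoneGap": "assets/www/cordova.js",
        "Xamarin": "lib/armeabi-v7a/libmonodroid.so",
        "React Native": "assets/index.android.bundle",
    }

    def detect_cpt(apk_path):
        with zipfile.ZipFile(apk_path) as apk:
            names = set(apk.namelist())
            return [cpt for cpt, marker in CPT_FINGERPRINTS.items() if marker in names]

    print(detect_cpt("example.apk"))  # hypothetical APK; an empty list suggests a native app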
