ASE 2013 – Author Index
Acher, Mathieu
ASE '13-NEWIDEAS: "From Comparison Matrix to ..."
From Comparison Matrix to Variability Model: The Wikipedia Case Study
Nicolas Sannier, Mathieu Acher, and Benoit Baudry (University of Rennes 1, France; Inria, France; Irisa, France) Product comparison matrices (PCMs) provide a convenient way to document the discriminant features of a family of related products and now abound on the internet. Despite their apparent simplicity, the information present in existing PCMs can be very heterogeneous, partial, ambiguous, and hard to exploit by users who desire to choose an appropriate product. Variability Models (VMs) can be employed to formulate the semantics of PCMs more precisely and to enable automated reasoning such as assisted configuration. Yet, the gap between PCMs and VMs should be precisely understood, and automated techniques should support the transition between the two. In this paper, we propose variability patterns that describe PCM content and conduct an empirical analysis of 300+ PCMs mined from Wikipedia. Our findings are a first step toward better engineering techniques for maintaining and configuring PCMs. @InProceedings{ASE13p580, author = {Nicolas Sannier and Mathieu Acher and Benoit Baudry}, title = {From Comparison Matrix to Variability Model: The Wikipedia Case Study}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {580--585}, doi = {}, year = {2013}, }
Anish, Preethu Rose
ASE '13-NEWIDEAS: "Detecting System Use Cases ..."
Detecting System Use Cases and Validations from Documents
Smita Ghaisas, Manish Motwani, and Preethu Rose Anish (Tata Consultancy Services, India) Identifying system use cases and corresponding validations involves analyzing large requirement documents to understand the descriptions of business processes, rules and policies. This consumes a significant amount of effort and time. We discuss an approach to automate the detection of system use cases and corresponding validations from documents. We have devised a representation that allows for capturing the essence of rule statements as a composition of atomic ‘Rule intents’ and key phrases associated with the intents. Rule intents that co-occur frequently constitute ‘Rule acts’, analogous to Speech acts in Linguistics. Our approach is based on NLP techniques designed around this Rule Model. We employ syntactic and semantic NL analyses around the model to identify and classify rules and annotate them with Rule acts. We map the Rule acts to business process steps and highlight the combinations as potential system use cases and validations for human supervision. @InProceedings{ASE13p568, author = {Smita Ghaisas and Manish Motwani and Preethu Rose Anish}, title = {Detecting System Use Cases and Validations from Documents}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {568--573}, doi = {}, year = {2013}, }
Barringer, Howard
ASE '13-NEWIDEAS: "A Pattern-Based Approach to ..."
A Pattern-Based Approach to Parametric Specification Mining
Giles Reger, Howard Barringer, and David Rydeheard (University of Manchester, UK) This paper presents a technique for using execution traces to mine parametric temporal specifications in the form of quantified event automata (QEA), previously introduced as an expressive and efficient formalism for runtime verification. We consider a pattern-based mining approach that uses a pattern library to generate and check potential properties over given traces, and then combines successful patterns. By using predefined models to measure the tool’s precision and recall, we demonstrate that our approach can effectively and efficiently extract specifications in realistic scenarios. @InProceedings{ASE13p658, author = {Giles Reger and Howard Barringer and David Rydeheard}, title = {A Pattern-Based Approach to Parametric Specification Mining}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {658--663}, doi = {}, year = {2013}, }
Baudry, Benoit
ASE '13-NEWIDEAS: "From Comparison Matrix to ..."
From Comparison Matrix to Variability Model: The Wikipedia Case Study
Nicolas Sannier, Mathieu Acher, and Benoit Baudry (University of Rennes 1, France; Inria, France; Irisa, France) Product comparison matrices (PCMs) provide a convenient way to document the discriminant features of a family of related products and now abound on the internet. Despite their apparent simplicity, the information present in existing PCMs can be very heterogeneous, partial, ambiguous, and hard to exploit by users who desire to choose an appropriate product. Variability Models (VMs) can be employed to formulate the semantics of PCMs more precisely and to enable automated reasoning such as assisted configuration. Yet, the gap between PCMs and VMs should be precisely understood, and automated techniques should support the transition between the two. In this paper, we propose variability patterns that describe PCM content and conduct an empirical analysis of 300+ PCMs mined from Wikipedia. Our findings are a first step toward better engineering techniques for maintaining and configuring PCMs. @InProceedings{ASE13p580, author = {Nicolas Sannier and Mathieu Acher and Benoit Baudry}, title = {From Comparison Matrix to Variability Model: The Wikipedia Case Study}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {580--585}, doi = {}, year = {2013}, }
Cao, Chun
ASE '13-NEWIDEAS: "Environment Rematching: Toward ..."
Environment Rematching: Toward Dependability Improvement for Self-Adaptive Applications
Chang Xu, Wenhua Yang, Xiaoxing Ma, Chun Cao, and Jian Lü (Nanjing University, China) Self-adaptive applications can easily contain faults. Existing approaches detect faults, but can still leave some undetected, and these can manifest as failures at runtime. In this paper, we study the correlation between occurrences of application failure and those of consistency failure. We propose fixing consistency failure to reduce application failure at runtime. We name this environment rematching, which can systematically reconnect a self-adaptive application to its environment in a consistent way. We also propose enforcing atomicity for application semantics during the rematching to avoid its side effects. We evaluated our approach using 12 self-adaptive robot-car applications in both simulated and real experiments. The experimental results confirmed our approach’s effectiveness, improving dependability for all applications by 12.5–52.5%. @InProceedings{ASE13p592, author = {Chang Xu and Wenhua Yang and Xiaoxing Ma and Chun Cao and Jian Lü}, title = {Environment Rematching: Toward Dependability Improvement for Self-Adaptive Applications}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {592--597}, doi = {}, year = {2013}, }
Chaim, Marcos Lordello
ASE '13-NEWIDEAS: "Adding Context to Fault Localization ..."
Adding Context to Fault Localization with Integration Coverage
Higor Amario de Souza and Marcos Lordello Chaim (University of Sao Paulo, Brazil) Fault localization is a costly task in the debugging process. Several techniques to automate fault localization have been proposed with the aim of reducing the effort and time spent. Some techniques use heuristics based on code coverage data; the goal is to indicate the program code excerpts most likely to contain faults. The coverage data mostly used in automated debugging is based on white-box unit testing (e.g., statements, basic blocks, predicates). This paper presents a technique that uses integration coverage data to guide the fault localization process. By ranking the most suspicious pairs of method invocations, roadmaps (sorted lists of methods to be investigated) are created. At each method, unit coverage (e.g., basic blocks) is used to locate the fault site. Fifty-five bugs of four programs containing 2K to 80K lines of code (LOC) were analyzed. The results indicate that, by using the roadmaps, the effectiveness of the fault localization process is improved: 78% of all the faults are reached within a fixed number of basic blocks, 40% more than an approach based on the Tarantula technique. Furthermore, fewer blocks have to be investigated before reaching the fault. @InProceedings{ASE13p628, author = {Higor Amario de Souza and Marcos Lordello Chaim}, title = {Adding Context to Fault Localization with Integration Coverage}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {628--633}, doi = {}, year = {2013}, }
Cleland-Huang, Jane
ASE '13-NEWIDEAS: "Learning Effective Query Transformations ..."
Learning Effective Query Transformations for Enhanced Requirements Trace Retrieval
Timothy Dietrich, Jane Cleland-Huang, and Yonghee Shin (DePaul University, USA) In automated requirements traceability, significant improvements can be realized through incorporating user feedback into the trace retrieval process. However, existing feedback techniques are designed to improve results for individual queries. In this paper we present a novel technique designed to extend the benefits of user feedback across multiple trace queries. Our approach, named Trace Query Transformation (TQT), utilizes a novel form of Association Rule Mining to learn a set of query transformation rules which are used to improve the efficacy of future trace queries. We evaluate TQT using two different kinds of training sets. The first represents an initial set of queries directly modified by human analysts, while the second represents a set of queries generated by applying a query optimization process based on initial relevance feedback for trace links between a set of source and target documents. Both techniques are evaluated using requirements from the WorldVista Healthcare system, traced against certification requirements for the Commission for Healthcare Information Technology. Results show that the TQT technique returns significant improvements in the quality of generated trace links. @InProceedings{ASE13p586, author = {Timothy Dietrich and Jane Cleland-Huang and Yonghee Shin}, title = {Learning Effective Query Transformations for Enhanced Requirements Trace Retrieval}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {586--591}, doi = {}, year = {2013}, }
Davoodi, Mohammed
ASE '13-NEWIDEAS: "Cloud Twin: Native Execution ..."
Cloud Twin: Native Execution of Android Applications on the Windows Phone
Ethan Holder, Eeshan Shah, Mohammed Davoodi, and Eli Tilevich (Virginia Tech, USA) To successfully compete in the software marketplace, modern mobile applications must run on multiple competing platforms, such as Android, iOS, and Windows Phone. Companies producing mobile applications spend substantial amounts of time, effort, and money to port applications across platforms. Creating individual program versions for different platforms further exacerbates the maintenance burden. This paper presents Cloud Twin, a novel approach to natively executing the functionality of a mobile application written for another platform. The functionality is accessed by means of dynamic cross-platform replay, in which the source application’s execution in the cloud is mimicked natively on the target platform. The reference implementation of Cloud Twin natively emulates the behavior of Android applications on a Windows Phone. Specifically, Cloud Twin transmits, via web sockets, the UI actions performed on the Windows Phone to the cloud server, which then mimics the received actions on the Android emulator. The UI updates on the emulator are efficiently captured by means of Aspect Oriented Programming and sent back to be replayed on the Windows Phone. Our case studies with third-party applications indicate that the Cloud Twin approach can become a viable solution to the heterogeneity of the mobile application market. @InProceedings{ASE13p598, author = {Ethan Holder and Eeshan Shah and Mohammed Davoodi and Eli Tilevich}, title = {Cloud Twin: Native Execution of Android Applications on the Windows Phone}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {598--603}, doi = {}, year = {2013}, }
De Souza, Higor Amario
ASE '13-NEWIDEAS: "Adding Context to Fault Localization ..."
Adding Context to Fault Localization with Integration Coverage
Higor Amario de Souza and Marcos Lordello Chaim (University of Sao Paulo, Brazil) Fault localization is a costly task in the debugging process. Several techniques to automate fault localization have been proposed with the aim of reducing the effort and time spent. Some techniques use heuristics based on code coverage data; the goal is to indicate the program code excerpts most likely to contain faults. The coverage data mostly used in automated debugging is based on white-box unit testing (e.g., statements, basic blocks, predicates). This paper presents a technique that uses integration coverage data to guide the fault localization process. By ranking the most suspicious pairs of method invocations, roadmaps (sorted lists of methods to be investigated) are created. At each method, unit coverage (e.g., basic blocks) is used to locate the fault site. Fifty-five bugs of four programs containing 2K to 80K lines of code (LOC) were analyzed. The results indicate that, by using the roadmaps, the effectiveness of the fault localization process is improved: 78% of all the faults are reached within a fixed number of basic blocks, 40% more than an approach based on the Tarantula technique. Furthermore, fewer blocks have to be investigated before reaching the fault. @InProceedings{ASE13p628, author = {Higor Amario de Souza and Marcos Lordello Chaim}, title = {Adding Context to Fault Localization with Integration Coverage}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {628--633}, doi = {}, year = {2013}, }
Dietrich, Timothy
ASE '13-NEWIDEAS: "Learning Effective Query Transformations ..."
Learning Effective Query Transformations for Enhanced Requirements Trace Retrieval
Timothy Dietrich, Jane Cleland-Huang, and Yonghee Shin (DePaul University, USA) In automated requirements traceability, significant improvements can be realized through incorporating user feedback into the trace retrieval process. However, existing feedback techniques are designed to improve results for individual queries. In this paper we present a novel technique designed to extend the benefits of user feedback across multiple trace queries. Our approach, named Trace Query Transformation (TQT), utilizes a novel form of Association Rule Mining to learn a set of query transformation rules which are used to improve the efficacy of future trace queries. We evaluate TQT using two different kinds of training sets. The first represents an initial set of queries directly modified by human analysts, while the second represents a set of queries generated by applying a query optimization process based on initial relevance feedback for trace links between a set of source and target documents. Both techniques are evaluated using requirements from the WorldVista Healthcare system, traced against certification requirements for the Commission for Healthcare Information Technology. Results show that the TQT technique returns significant improvements in the quality of generated trace links. @InProceedings{ASE13p586, author = {Timothy Dietrich and Jane Cleland-Huang and Yonghee Shin}, title = {Learning Effective Query Transformations for Enhanced Requirements Trace Retrieval}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {586--591}, doi = {}, year = {2013}, }
Ewalt, Nicholas
ASE '13-NEWIDEAS: "Using Automatically Generated ..."
Using Automatically Generated Invariants for Regression Testing and Bug Localization
Parth Sagdeo, Nicholas Ewalt, Debjit Pal, and Shobha Vasudevan (University of Illinois at Urbana-Champaign, USA) We present PREAMBL, an approach that applies automatically generated invariants to regression testing and bug localization. Our invariant generation methodology is PRECIS, an automatic and scalable engine that uses program predicates to guide the clustering of dynamically obtained path information. In this paper, we apply it to regression testing and to capturing program predicate information to guide statistical-analysis-based bug localization. We present a technique to localize bugs in paths of variable lengths. We are able to map the localized post-deployment bugs on a path to pre-release invariants generated along that path. Our experimental results demonstrate the efficacy of using PRECIS for regression testing, as well as the ability of PREAMBL to zero in on relevant segments of program paths. @InProceedings{ASE13p634, author = {Parth Sagdeo and Nicholas Ewalt and Debjit Pal and Shobha Vasudevan}, title = {Using Automatically Generated Invariants for Regression Testing and Bug Localization}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {634--639}, doi = {}, year = {2013}, }
García-Galán, Jesús
ASE '13-NEWIDEAS: "Multi-user Variability Configuration: ..."
Multi-user Variability Configuration: A Game Theoretic Approach
Jesús García-Galán, Pablo Trinidad, and Antonio Ruiz-Cortés (University of Seville, Spain) Multi-user configuration is a neglected problem in the area of variability-intensive systems. The appearance of conflicts among user configurations is a major concern. Current approaches focus on avoiding such conflicts by applying the mutual exclusion principle. However, this perspective has a negative impact on the satisfaction of users, who cannot make decisions fairly. In this work, we propose an interpretation of multi-user configuration as a game-theoretic problem. Game theory is a well-known discipline that analyzes conflicts and cooperation among intelligent rational decision-makers. We present a taxonomy of multi-user configuration approaches and show how they can be interpreted as different problems of game theory. We focus on cooperative game theory to propose and automate a tradeoff-based bargaining approach, as a way to solve the conflicts and maximize user satisfaction at the same time. @InProceedings{ASE13p574, author = {Jesús García-Galán and Pablo Trinidad and Antonio Ruiz-Cortés}, title = {Multi-user Variability Configuration: A Game Theoretic Approach}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {574--579}, doi = {}, year = {2013}, }
Gethers, Malcom
ASE '13-NEWIDEAS: "ExPort: Detecting and Visualizing ..."
ExPort: Detecting and Visualizing API Usages in Large Source Code Repositories
Evan Moritz, Mario Linares-Vásquez, Denys Poshyvanyk, Mark Grechanik, Collin McMillan, and Malcom Gethers (College of William and Mary, USA; University of Illinois at Chicago, USA; University of Notre Dame, USA; University of Maryland in Baltimore County, USA) This paper presents a technique for automatically mining and visualizing API usage examples. In contrast to previous approaches, our technique is capable of finding examples of API usage that occur across several functions in a program. This distinction is important because of a gap between what current API learning tools provide and what programmers need: current tools extract relatively small examples from single files/functions, even though programmers use APIs to build large software. The small examples are helpful in the initial stages of API learning, but leave out details that are helpful in later stages. Our technique is intended to fill this gap. It works by representing software as a Relational Topic Model, where API calls and the functions that use them are modeled as a document network. Given a starting API, our approach can recommend complex API usage examples mined from a repository of over 14 million Java methods. @InProceedings{ASE13p646, author = {Evan Moritz and Mario Linares-Vásquez and Denys Poshyvanyk and Mark Grechanik and Collin McMillan and Malcom Gethers}, title = {ExPort: Detecting and Visualizing API Usages in Large Source Code Repositories}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {646--651}, doi = {}, year = {2013}, }
Ghaisas, Smita
ASE '13-NEWIDEAS: "Detecting System Use Cases ..."
Detecting System Use Cases and Validations from Documents
Smita Ghaisas, Manish Motwani, and Preethu Rose Anish (Tata Consultancy Services, India) Identifying system use cases and corresponding validations involves analyzing large requirement documents to understand the descriptions of business processes, rules and policies. This consumes a significant amount of effort and time. We discuss an approach to automate the detection of system use cases and corresponding validations from documents. We have devised a representation that allows for capturing the essence of rule statements as a composition of atomic ‘Rule intents’ and key phrases associated with the intents. Rule intents that co-occur frequently constitute ‘Rule acts’, analogous to Speech acts in Linguistics. Our approach is based on NLP techniques designed around this Rule Model. We employ syntactic and semantic NL analyses around the model to identify and classify rules and annotate them with Rule acts. We map the Rule acts to business process steps and highlight the combinations as potential system use cases and validations for human supervision. @InProceedings{ASE13p568, author = {Smita Ghaisas and Manish Motwani and Preethu Rose Anish}, title = {Detecting System Use Cases and Validations from Documents}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {568--573}, doi = {}, year = {2013}, }
Glinz, Martin
ASE '13-NEWIDEAS: "Semi-automatic Generation ..."
Semi-automatic Generation of Metamodels from Model Sketches
Dustin Wüest, Norbert Seyff, and Martin Glinz (University of Zurich, Switzerland) Traditionally, metamodeling is an upfront activity performed by experts for defining modeling languages. Modeling tools then typically restrict modelers to using only constructs defined in the metamodel. This is inappropriate when users want to sketch graphical models without any restrictions and only later assign meanings to the sketched elements. Upfront metamodeling also complicates the creation of domain-specific languages, as it requires experts with both domain and metamodeling expertise. In this paper we present a new approach that supports modelers in creating metamodels for diagrams they have sketched or are currently sketching. Metamodels are defined in a semi-automatic, interactive way through the annotation of diagram elements and automated model analysis. Our approach requires no metamodeling expertise and supports the co-evolution of models and metamodels. @InProceedings{ASE13p664, author = {Dustin Wüest and Norbert Seyff and Martin Glinz}, title = {Semi-automatic Generation of Metamodels from Model Sketches}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {664--669}, doi = {}, year = {2013}, }
Gravino, Carmine
ASE '13-NEWIDEAS: "Class Level Fault Prediction ..."
Class Level Fault Prediction using Software Clustering
Giuseppe Scanniello, Carmine Gravino, Andrian Marcus, and Tim Menzies (University of Basilicata, Italy; University of Salerno, Italy; Wayne State University, USA; West Virginia University, USA) Defect prediction approaches use software metrics and fault data to learn which software properties associate with faults in classes. Existing techniques predict fault-prone classes in the same release (intra) or in subsequent releases (inter) of a subject software system. We propose an intra-release fault prediction technique, which learns from clusters of related classes, rather than from the entire system. Classes are clustered using structural information, and fault prediction models are built using the properties of the classes in each cluster. We present an empirical investigation on data from 29 releases of eight open source software systems from the PROMISE repository, with predictors built using multivariate linear regression. The results indicate that the prediction models built on clusters outperform those built on all the classes of the system. @InProceedings{ASE13p640, author = {Giuseppe Scanniello and Carmine Gravino and Andrian Marcus and Tim Menzies}, title = {Class Level Fault Prediction using Software Clustering}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {640--645}, doi = {}, year = {2013}, }
Grechanik, Mark
ASE '13-NEWIDEAS: "ExPort: Detecting and Visualizing ..."
ExPort: Detecting and Visualizing API Usages in Large Source Code Repositories
Evan Moritz, Mario Linares-Vásquez, Denys Poshyvanyk, Mark Grechanik, Collin McMillan, and Malcom Gethers (College of William and Mary, USA; University of Illinois at Chicago, USA; University of Notre Dame, USA; University of Maryland in Baltimore County, USA) This paper presents a technique for automatically mining and visualizing API usage examples. In contrast to previous approaches, our technique is capable of finding examples of API usage that occur across several functions in a program. This distinction is important because of a gap between what current API learning tools provide and what programmers need: current tools extract relatively small examples from single files/functions, even though programmers use APIs to build large software. The small examples are helpful in the initial stages of API learning, but leave out details that are helpful in later stages. Our technique is intended to fill this gap. It works by representing software as a Relational Topic Model, where API calls and the functions that use them are modeled as a document network. Given a starting API, our approach can recommend complex API usage examples mined from a repository of over 14 million Java methods. @InProceedings{ASE13p646, author = {Evan Moritz and Mario Linares-Vásquez and Denys Poshyvanyk and Mark Grechanik and Collin McMillan and Malcom Gethers}, title = {ExPort: Detecting and Visualizing API Usages in Large Source Code Repositories}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {646--651}, doi = {}, year = {2013}, }
Halfond, William G. J.
ASE '13-NEWIDEAS: "Randomizing Regression Tests ..."
Randomizing Regression Tests using Game Theory
Nupul Kukreja, William G. J. Halfond, and Milind Tambe (University of Southern California, USA) As software evolves, the number of test cases in regression test suites continues to increase, requiring testers to prioritize their execution. Usually only a subset of the test cases is executed due to limited testing resources. This subset is often known to the developers, who may try to “game” the system by committing insufficiently tested code for parts of the software that will not be tested. In this new ideas paper, we propose a novel approach for randomizing regression test scheduling, based on Stackelberg games for deployment of scarce resources. We apply this approach to randomizing test cases in such a way as to maximize the testers’ expected payoff when executing the test cases. Our approach accounts for resource limitations (the number of testers) and provides a probabilistic distribution for scheduling test cases. We provide an example application of our approach, showcasing the idea of using Stackelberg games for randomized regression test scheduling. @InProceedings{ASE13p616, author = {Nupul Kukreja and William G. J. Halfond and Milind Tambe}, title = {Randomizing Regression Tests using Game Theory}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {616--621}, doi = {}, year = {2013}, }
Harrison, Rachel
ASE '13-NEWIDEAS: "Assessing the Maturity of ..."
Assessing the Maturity of Requirements through Argumentation: A Good Enough Approach
Varsha Veerappa and Rachel Harrison (Oxford Brookes University, UK) Requirements engineers need to be confident that enough requirements analysis has been done before a project can move forward. In the context of KAOS, this information can be derived from the soundness of the refinements: sound refinements indicate that the requirements in the goal graph are mature enough, or good enough, for implementation. We can estimate how close we are to ‘good enough’ requirements using the judgments of experts and other data from the goals. We apply Toulmin’s model of argumentation to evaluate how sound refinements are. We then implement the resulting argumentation model using Bayesian Belief Networks and provide a semi-automated way, aided by Natural Language Processing techniques, to carry out the proposed evaluation. We have performed an initial validation of our work using a small case study involving an electronic document management system. @InProceedings{ASE13p670, author = {Varsha Veerappa and Rachel Harrison}, title = {Assessing the Maturity of Requirements through Argumentation: A Good Enough Approach}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {670--675}, doi = {}, year = {2013}, }
Holavanalli, Shashank
ASE '13-NEWIDEAS: "Flow Permissions for Android ..."
Flow Permissions for Android
Shashank Holavanalli, Don Manuel, Vishwas Nanjundaswamy, Brian Rosenberg, Feng Shen, Steven Y. Ko, and Lukasz Ziarek (SUNY Buffalo, USA) This paper proposes Flow Permissions, an extension to the Android permission mechanism. Unlike the existing permission mechanism, our permission mechanism contains semantic information based on information flows. Flow Permissions allow users to examine and grant explicit information flows within an application (e.g., a permission for reading the phone number and sending it over the network) as well as implicit information flows across multiple applications (e.g., a permission for reading the phone number and sending it to another application already installed on the user's phone). Our goal with Flow Permissions is to provide visibility into the holistic behavior of the applications installed on a user's phone. Our evaluation compares our approach to dynamic flow tracking techniques; our results with 600 popular applications and 1,200 malicious applications show that our approach is practical and effective in deriving Flow Permissions statically. @InProceedings{ASE13p652, author = {Shashank Holavanalli and Don Manuel and Vishwas Nanjundaswamy and Brian Rosenberg and Feng Shen and Steven Y. Ko and Lukasz Ziarek}, title = {Flow Permissions for Android}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {652--657}, doi = {}, year = {2013}, }
Holder, Ethan
ASE '13-NEWIDEAS: "Cloud Twin: Native Execution ..."
Cloud Twin: Native Execution of Android Applications on the Windows Phone
Ethan Holder, Eeshan Shah, Mohammed Davoodi, and Eli Tilevich (Virginia Tech, USA) To successfully compete in the software marketplace, modern mobile applications must run on multiple competing platforms, such as Android, iOS, and Windows Phone. Companies producing mobile applications spend substantial amounts of time, effort, and money to port applications across platforms. Creating individual program versions for different platforms further exacerbates the maintenance burden. This paper presents Cloud Twin, a novel approach to natively executing the functionality of a mobile application written for another platform. The functionality is accessed by means of dynamic cross-platform replay, in which the source application’s execution in the cloud is mimicked natively on the target platform. The reference implementation of Cloud Twin natively emulates the behavior of Android applications on a Windows Phone. Specifically, Cloud Twin transmits, via web sockets, the UI actions performed on the Windows Phone to the cloud server, which then mimics the received actions on the Android emulator. The UI updates on the emulator are efficiently captured by means of Aspect Oriented Programming and sent back to be replayed on the Windows Phone. Our case studies with third-party applications indicate that the Cloud Twin approach can become a viable solution to the heterogeneity of the mobile application market. @InProceedings{ASE13p598, author = {Ethan Holder and Eeshan Shah and Mohammed Davoodi and Eli Tilevich}, title = {Cloud Twin: Native Execution of Android Applications on the Windows Phone}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {598--603}, doi = {}, year = {2013}, }
|
Huchard, Marianne |
ASE '13-NEWIDEAS: "Recovering Model Transformation ..."
Recovering Model Transformation Traces using Multi-Objective Optimization
Hajer Saada, Marianne Huchard, Clémentine Nebut, and Houari Sahraoui (Université Montpellier 2, France; CNRS, France; Université de Montréal, Canada) Model Driven Engineering (MDE) is based on a large set of models that are used and manipulated throughout the development cycle. These models are manually or automatically produced and/or exploited using model transformations. To allow engineers to maintain the models and track their changes, recovering transformation traces is essential. In this paper, we propose an automated approach, based on multi-objective optimization, to recover transformation traces between models. Our approach takes as input a source model in the form of a set of fragments (fragments are defined using the source meta-model cardinalities and OCL constraints), and a target model. The recovered transformation traces take the form of many-to-many mappings between the constructs of the two models. @InProceedings{ASE13p688, author = {Hajer Saada and Marianne Huchard and Clémentine Nebut and Houari Sahraoui}, title = {Recovering Model Transformation Traces using Multi-Objective Optimization}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {688--693}, doi = {}, year = {2013}, } |
|
Jin, Wei |
ASE '13-NEWIDEAS: "SBFR: A Search Based Approach ..."
SBFR: A Search Based Approach for Reproducing Failures of Programs with Grammar Based Input
Fitsum Meshesha Kifetew, Wei Jin, Roberto Tiella, Alessandro Orso, and Paolo Tonella (Fondazione Bruno Kessler, Italy; Georgia Institute of Technology, USA) Reproducing field failures in-house, a step developers must perform when assigned a bug report, is an arduous task. In most cases, developers must be able to reproduce a reported failure using only a stack trace and/or some informal description of the failure. The problem becomes even harder for the large class of programs whose input is highly structured and strictly specified by a grammar. To address this problem, we present SBFR, a search-based failure-reproduction technique for programs with structured input. SBFR formulates failure reproduction as a search problem. Starting from a reported failure and a limited amount of dynamic information about the failure, SBFR exploits the potential of genetic programming to iteratively find legal inputs that can trigger the failure. @InProceedings{ASE13p604, author = {Fitsum Meshesha Kifetew and Wei Jin and Roberto Tiella and Alessandro Orso and Paolo Tonella}, title = {SBFR: A Search Based Approach for Reproducing Failures of Programs with Grammar Based Input}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {604--609}, doi = {}, year = {2013}, } |
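A toy rendition of the search idea, not the authors' implementation: inputs are derived from a small arithmetic grammar, and a minimal genetic loop searches for one whose execution reproduces a target failure (here, a ZeroDivisionError). The grammar, fitness function, and system under test are all assumptions made for the example.

```python
# Sketch of grammar-based, search-driven failure reproduction: derive legal
# inputs from a toy grammar, score them by how close they come to the
# reported failure, and evolve the population until the failure is triggered.
import random

random.seed(7)

DIGITS = "0123456789"
OPS = "+-*/"

def random_expr(depth=2):
    """Derive a random arithmetic expression from a toy grammar."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(DIGITS)
    return random_expr(depth - 1) + random.choice(OPS) + random_expr(depth - 1)

def fitness(expr):
    """1.0 means the reported failure is reproduced; partial credit guides search."""
    try:
        eval(expr)  # stands in for running the system under test
    except ZeroDivisionError:
        return 1.0
    except Exception:
        return 0.0
    return 0.5 if "/" in expr else 0.0

def mutate(expr):
    """Replace one symbol with another terminal of the same grammar category."""
    i = random.randrange(len(expr))
    alphabet = OPS if expr[i] in OPS else DIGITS
    return expr[:i] + random.choice(alphabet) + expr[i + 1:]

def search(pop_size=20, generations=200):
    population = [random_expr() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == 1.0:
            return population[0]
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return None

failing_input = search()
print(failing_input)
```

Because every candidate is produced by the grammar, the search only ever proposes legal inputs, which is the point of combining grammar-based derivation with the genetic loop.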
|
Kaulgud, Vikrant |
ASE '13-NEWIDEAS: "Natural Language Requirements ..."
Natural Language Requirements Quality Analysis Based on Business Domain Models
Annervaz K.M., Vikrant Kaulgud, Shubhashis Sengupta, and Milind Savagaonkar (Accenture Technology Labs, India) Quality of requirements written in natural language has always been a critical concern in software engineering. Poorly written requirements lead to ambiguity and false interpretation in different phases of a software delivery project. Further, incomplete requirements lead to partial implementation of the desired system behavior. In this paper, we present a model for harvesting domain (functional or business) knowledge. Subsequently, we present natural language processing and ontology-based techniques for leveraging the model to analyze requirements quality and for requirements comprehension. The prototype also provides an advisory to business analysts so that the requirements can be aligned to the expected domain standard. The prototype developed is currently being used in practice, and the initial results are very encouraging. @InProceedings{ASE13p676, author = {Annervaz K.M. and Vikrant Kaulgud and Shubhashis Sengupta and Milind Savagaonkar}, title = {Natural Language Requirements Quality Analysis Based on Business Domain Models}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {676--681}, doi = {}, year = {2013}, } |
|
Kifetew, Fitsum Meshesha |
ASE '13-NEWIDEAS: "SBFR: A Search Based Approach ..."
SBFR: A Search Based Approach for Reproducing Failures of Programs with Grammar Based Input
Fitsum Meshesha Kifetew, Wei Jin, Roberto Tiella, Alessandro Orso, and Paolo Tonella (Fondazione Bruno Kessler, Italy; Georgia Institute of Technology, USA) Reproducing field failures in-house, a step developers must perform when assigned a bug report, is an arduous task. In most cases, developers must be able to reproduce a reported failure using only a stack trace and/or some informal description of the failure. The problem becomes even harder for the large class of programs whose input is highly structured and strictly specified by a grammar. To address this problem, we present SBFR, a search-based failure-reproduction technique for programs with structured input. SBFR formulates failure reproduction as a search problem. Starting from a reported failure and a limited amount of dynamic information about the failure, SBFR exploits the potential of genetic programming to iteratively find legal inputs that can trigger the failure. @InProceedings{ASE13p604, author = {Fitsum Meshesha Kifetew and Wei Jin and Roberto Tiella and Alessandro Orso and Paolo Tonella}, title = {SBFR: A Search Based Approach for Reproducing Failures of Programs with Grammar Based Input}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {604--609}, doi = {}, year = {2013}, } |
|
K.M., Annervaz |
ASE '13-NEWIDEAS: "Natural Language Requirements ..."
Natural Language Requirements Quality Analysis Based on Business Domain Models
Annervaz K.M., Vikrant Kaulgud, Shubhashis Sengupta, and Milind Savagaonkar (Accenture Technology Labs, India) Quality of requirements written in natural language has always been a critical concern in software engineering. Poorly written requirements lead to ambiguity and false interpretation in different phases of a software delivery project. Further, incomplete requirements lead to partial implementation of the desired system behavior. In this paper, we present a model for harvesting domain (functional or business) knowledge. Subsequently, we present natural language processing and ontology-based techniques for leveraging the model to analyze requirements quality and for requirements comprehension. The prototype also provides an advisory to business analysts so that the requirements can be aligned to the expected domain standard. The prototype developed is currently being used in practice, and the initial results are very encouraging. @InProceedings{ASE13p676, author = {Annervaz K.M. and Vikrant Kaulgud and Shubhashis Sengupta and Milind Savagaonkar}, title = {Natural Language Requirements Quality Analysis Based on Business Domain Models}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {676--681}, doi = {}, year = {2013}, } |
|
Ko, Steven Y. |
ASE '13-NEWIDEAS: "Flow Permissions for Android ..."
Flow Permissions for Android
Shashank Holavanalli, Don Manuel, Vishwas Nanjundaswamy, Brian Rosenberg, Feng Shen, Steven Y. Ko, and Lukasz Ziarek (SUNY Buffalo, USA) This paper proposes Flow Permissions, an extension to the Android permission mechanism. Unlike the existing permission mechanism, ours contains semantic information based on information flows. Flow Permissions allow users to examine and grant explicit information flows within an application (e.g., a permission for reading the phone number and sending it over the network) as well as implicit information flows across multiple applications (e.g., a permission for reading the phone number and sending it to another application already installed on the user's phone). Our goal with Flow Permissions is to provide visibility into the holistic behavior of the applications installed on a user's phone. Our evaluation compares our approach to dynamic flow tracking techniques; our results with 600 popular applications and 1,200 malicious applications show that our approach is practical and effective in deriving Flow Permissions statically. @InProceedings{ASE13p652, author = {Shashank Holavanalli and Don Manuel and Vishwas Nanjundaswamy and Brian Rosenberg and Feng Shen and Steven Y. Ko and Lukasz Ziarek}, title = {Flow Permissions for Android}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {652--657}, doi = {}, year = {2013}, } |
|
Kukreja, Nupul |
ASE '13-NEWIDEAS: "Randomizing Regression Tests ..."
Randomizing Regression Tests using Game Theory
Nupul Kukreja, William G. J. Halfond, and Milind Tambe (University of Southern California, USA) As software evolves, the number of test cases in the regression test suites continues to increase, requiring testers to prioritize their execution. Usually only a subset of the test cases is executed due to limited testing resources. This subset is often known to the developers, who may try to "game" the system by committing insufficiently tested code for parts of the software that will not be tested. In this new ideas paper, we propose a novel approach for randomizing regression test scheduling, based on Stackelberg games for deployment of scarce resources. We apply this approach to randomizing test cases in such a way as to maximize the testers' expected payoff when executing the test cases. Our approach accounts for resource limitations (number of testers) and provides a probabilistic distribution for scheduling test cases. We provide an example application of our approach showcasing the idea of using Stackelberg games for randomized regression test scheduling. @InProceedings{ASE13p616, author = {Nupul Kukreja and William G. J. Halfond and Milind Tambe}, title = {Randomizing Regression Tests using Game Theory}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {616--621}, doi = {}, year = {2013}, } |
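One way the Stackelberg commitment can play out, sketched under invented payoffs rather than the paper's model: the tester commits to a marginal probability p_i of executing each test target under a capacity of k tests per cycle; a gaming developer then best-responds by under-testing the component with the highest expected escape value v_i * (1 - p_i); the tester's optimal commitment minimizes that maximum. For this maximin form the solution is a standard water-filling computation, shown below with a binary search on the threshold.

```python
# Toy Stackelberg scheduling instance: choose marginal execution
# probabilities p_i (sum <= k, 0 <= p_i <= 1) that minimize the attacker's
# best response max_i v_i * (1 - p_i), via binary search on the threshold.
def schedule(values, k, iters=100):
    """Water-filling: return p_i minimizing max_i v_i * (1 - p_i)."""
    lo, hi = 0.0, max(values)
    for _ in range(iters):
        t = (lo + hi) / 2
        # probability needed on each target so that v_i * (1 - p_i) <= t
        need = [min(1.0, max(0.0, 1 - t / v)) for v in values]
        if sum(need) > k:
            lo = t  # budget exceeded; tolerate a higher escape value
        else:
            hi = t
    return [min(1.0, max(0.0, 1 - hi / v)) for v in values]

values = [8.0, 4.0, 2.0, 1.0]  # hypothetical severity of a bug escaping each target
probs = schedule(values, k=2)
print([round(p, 3) for p in probs])
```

Note the equilibrium commitment spreads probability so the high-value targets are equalized in expected escape value, which is exactly what makes the randomized schedule resistant to gaming: no single under-tested component is a safe place to hide a bug.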
|
Linares-Vásquez, Mario |
ASE '13-NEWIDEAS: "ExPort: Detecting and Visualizing ..."
ExPort: Detecting and Visualizing API Usages in Large Source Code Repositories
Evan Moritz, Mario Linares-Vásquez, Denys Poshyvanyk, Mark Grechanik, Collin McMillan, and Malcom Gethers (College of William and Mary, USA; University of Illinois at Chicago, USA; University of Notre Dame, USA; University of Maryland in Baltimore County, USA) This paper presents a technique for automatically mining and visualizing API usage examples. In contrast to previous approaches, our technique is capable of finding examples of API usage that occur across several functions in a program. This distinction is important because of a gap between what current API learning tools provide and what programmers need: current tools extract relatively small examples from single files/functions, even though programmers use APIs to build large software. The small examples are helpful in the initial stages of API learning, but leave out details that are helpful in later stages. Our technique is intended to fill this gap. It works by representing software as a Relational Topic Model, where API calls and the functions that use them are modeled as a document network. Given a starting API, our approach can recommend complex API usage examples mined from a repository of over 14 million Java methods. @InProceedings{ASE13p646, author = {Evan Moritz and Mario Linares-Vásquez and Denys Poshyvanyk and Mark Grechanik and Collin McMillan and Malcom Gethers}, title = {ExPort: Detecting and Visualizing API Usages in Large Source Code Repositories}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {646--651}, doi = {}, year = {2013}, } Info |
|
Lü, Jian |
ASE '13-NEWIDEAS: "Environment Rematching: Toward ..."
Environment Rematching: Toward Dependability Improvement for Self-Adaptive Applications
Chang Xu, Wenhua Yang, Xiaoxing Ma, Chun Cao, and Jian Lü (Nanjing University, China) Self-adaptive applications can easily contain faults. Existing approaches detect faults, but can still leave some undetected, and these manifest as failures at runtime. In this paper, we study the correlation between occurrences of application failure and those of consistency failure. We propose fixing consistency failure to reduce application failure at runtime. We name this environment rematching, which can systematically reconnect a self-adaptive application to its environment in a consistent way. We also propose enforcing atomicity for application semantics during the rematching to avoid its side effects. We evaluated our approach using 12 self-adaptive robot-car applications by both simulated and real experiments. The experimental results confirmed our approach’s effectiveness in improving dependability for all applications by 12.5-52.5%. @InProceedings{ASE13p592, author = {Chang Xu and Wenhua Yang and Xiaoxing Ma and Chun Cao and Jian Lü}, title = {Environment Rematching: Toward Dependability Improvement for Self-Adaptive Applications}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {592--597}, doi = {}, year = {2013}, } |
|
Ma, Xiaoxing |
ASE '13-NEWIDEAS: "Environment Rematching: Toward ..."
Environment Rematching: Toward Dependability Improvement for Self-Adaptive Applications
Chang Xu, Wenhua Yang, Xiaoxing Ma, Chun Cao, and Jian Lü (Nanjing University, China) Self-adaptive applications can easily contain faults. Existing approaches detect faults, but can still leave some undetected, and these manifest as failures at runtime. In this paper, we study the correlation between occurrences of application failure and those of consistency failure. We propose fixing consistency failure to reduce application failure at runtime. We name this environment rematching, which can systematically reconnect a self-adaptive application to its environment in a consistent way. We also propose enforcing atomicity for application semantics during the rematching to avoid its side effects. We evaluated our approach using 12 self-adaptive robot-car applications by both simulated and real experiments. The experimental results confirmed our approach’s effectiveness in improving dependability for all applications by 12.5-52.5%. @InProceedings{ASE13p592, author = {Chang Xu and Wenhua Yang and Xiaoxing Ma and Chun Cao and Jian Lü}, title = {Environment Rematching: Toward Dependability Improvement for Self-Adaptive Applications}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {592--597}, doi = {}, year = {2013}, } |
|
Manuel, Don |
ASE '13-NEWIDEAS: "Flow Permissions for Android ..."
Flow Permissions for Android
Shashank Holavanalli, Don Manuel, Vishwas Nanjundaswamy, Brian Rosenberg, Feng Shen, Steven Y. Ko, and Lukasz Ziarek (SUNY Buffalo, USA) This paper proposes Flow Permissions, an extension to the Android permission mechanism. Unlike the existing permission mechanism, ours contains semantic information based on information flows. Flow Permissions allow users to examine and grant explicit information flows within an application (e.g., a permission for reading the phone number and sending it over the network) as well as implicit information flows across multiple applications (e.g., a permission for reading the phone number and sending it to another application already installed on the user's phone). Our goal with Flow Permissions is to provide visibility into the holistic behavior of the applications installed on a user's phone. Our evaluation compares our approach to dynamic flow tracking techniques; our results with 600 popular applications and 1,200 malicious applications show that our approach is practical and effective in deriving Flow Permissions statically. @InProceedings{ASE13p652, author = {Shashank Holavanalli and Don Manuel and Vishwas Nanjundaswamy and Brian Rosenberg and Feng Shen and Steven Y. Ko and Lukasz Ziarek}, title = {Flow Permissions for Android}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {652--657}, doi = {}, year = {2013}, } |
|
Marcus, Andrian |
ASE '13-NEWIDEAS: "Class Level Fault Prediction ..."
Class Level Fault Prediction using Software Clustering
Giuseppe Scanniello, Carmine Gravino, Andrian Marcus, and Tim Menzies (University of Basilicata, Italy; University of Salerno, Italy; Wayne State University, USA; West Virginia University, USA) Defect prediction approaches use software metrics and fault data to learn which software properties associate with faults in classes. Existing techniques predict fault-prone classes in the same release (intra) or in subsequent releases (inter) of a subject software system. We propose an intra-release fault prediction technique, which learns from clusters of related classes, rather than from the entire system. Classes are clustered using structural information and fault prediction models are built using the properties of the classes in each cluster. We present an empirical investigation on data from 29 releases of eight open source software systems from the PROMISE repository, with predictors built using multivariate linear regression. The results indicate that the prediction models built on clusters outperform those built on all the classes of the system. @InProceedings{ASE13p640, author = {Giuseppe Scanniello and Carmine Gravino and Andrian Marcus and Tim Menzies}, title = {Class Level Fault Prediction using Software Clustering}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {640--645}, doi = {}, year = {2013}, } Info |
|
McMillan, Collin |
ASE '13-NEWIDEAS: "ExPort: Detecting and Visualizing ..."
ExPort: Detecting and Visualizing API Usages in Large Source Code Repositories
Evan Moritz, Mario Linares-Vásquez, Denys Poshyvanyk, Mark Grechanik, Collin McMillan, and Malcom Gethers (College of William and Mary, USA; University of Illinois at Chicago, USA; University of Notre Dame, USA; University of Maryland in Baltimore County, USA) This paper presents a technique for automatically mining and visualizing API usage examples. In contrast to previous approaches, our technique is capable of finding examples of API usage that occur across several functions in a program. This distinction is important because of a gap between what current API learning tools provide and what programmers need: current tools extract relatively small examples from single files/functions, even though programmers use APIs to build large software. The small examples are helpful in the initial stages of API learning, but leave out details that are helpful in later stages. Our technique is intended to fill this gap. It works by representing software as a Relational Topic Model, where API calls and the functions that use them are modeled as a document network. Given a starting API, our approach can recommend complex API usage examples mined from a repository of over 14 million Java methods. @InProceedings{ASE13p646, author = {Evan Moritz and Mario Linares-Vásquez and Denys Poshyvanyk and Mark Grechanik and Collin McMillan and Malcom Gethers}, title = {ExPort: Detecting and Visualizing API Usages in Large Source Code Repositories}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {646--651}, doi = {}, year = {2013}, } Info |
|
Menzies, Tim |
ASE '13-NEWIDEAS: "Class Level Fault Prediction ..."
Class Level Fault Prediction using Software Clustering
Giuseppe Scanniello, Carmine Gravino, Andrian Marcus, and Tim Menzies (University of Basilicata, Italy; University of Salerno, Italy; Wayne State University, USA; West Virginia University, USA) Defect prediction approaches use software metrics and fault data to learn which software properties associate with faults in classes. Existing techniques predict fault-prone classes in the same release (intra) or in subsequent releases (inter) of a subject software system. We propose an intra-release fault prediction technique, which learns from clusters of related classes, rather than from the entire system. Classes are clustered using structural information and fault prediction models are built using the properties of the classes in each cluster. We present an empirical investigation on data from 29 releases of eight open source software systems from the PROMISE repository, with predictors built using multivariate linear regression. The results indicate that the prediction models built on clusters outperform those built on all the classes of the system. @InProceedings{ASE13p640, author = {Giuseppe Scanniello and Carmine Gravino and Andrian Marcus and Tim Menzies}, title = {Class Level Fault Prediction using Software Clustering}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {640--645}, doi = {}, year = {2013}, } Info |
|
Mesbah, Ali |
ASE '13-NEWIDEAS: "Pythia: Generating Test Cases ..."
Pythia: Generating Test Cases with Oracles for JavaScript Applications
Shabnam Mirshokraie, Ali Mesbah, and Karthik Pattabiraman (University of British Columbia, Canada) Web developers often write test cases manually using testing frameworks such as Selenium. Testing JavaScript-based applications is challenging, as manually exploring various execution paths of the application is difficult. Also, JavaScript’s highly dynamic nature, as well as its complex interaction with the DOM, makes it difficult for the tester to achieve high coverage. We present a framework to automatically generate unit test cases for individual JavaScript functions. These test cases are strengthened by automatically generated test oracles capable of detecting faults in JavaScript code. Our approach is implemented in a tool called PYTHIA. Our preliminary evaluation results point to the efficacy of the approach in achieving high coverage and detecting faults. @InProceedings{ASE13p610, author = {Shabnam Mirshokraie and Ali Mesbah and Karthik Pattabiraman}, title = {Pythia: Generating Test Cases with Oracles for JavaScript Applications}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {610--615}, doi = {}, year = {2013}, } |
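The two steps the abstract names, generating unit-level test cases and strengthening them with generated oracles, can be sketched generically. The sketch is in Python rather than JavaScript, with a made-up function under test; it illustrates only the recorded-behavior style of oracle, not Pythia's actual oracle-generation technique.

```python
# Sketch: generate test inputs for a single function, record its observed
# outputs as oracles, and use those oracles to catch a later faulty version.
import random

def target(x, y):  # hypothetical stand-in for a JavaScript function under test
    return x * y + 1

def generate_tests(func, n=5, seed=1):
    """Record (input, output) pairs: the outputs become the test oracles."""
    rng = random.Random(seed)
    cases = []
    for _ in range(n):
        args = (rng.randint(-10, 10), rng.randint(-10, 10))
        cases.append((args, func(*args)))
    return cases

def run_oracles(func, cases):
    """Return True iff func still matches every recorded oracle."""
    return all(func(*args) == expected for args, expected in cases)

oracles = generate_tests(target)
print(run_oracles(target, oracles))                  # original version passes
print(run_oracles(lambda x, y: x * y - 1, oracles))  # injected fault is caught
```

The value of the generated oracles is visible in the last line: without them, a generated test would only check that the function runs without crashing, not that it computes the right result.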
|
Mirshokraie, Shabnam |
ASE '13-NEWIDEAS: "Pythia: Generating Test Cases ..."
Pythia: Generating Test Cases with Oracles for JavaScript Applications
Shabnam Mirshokraie, Ali Mesbah, and Karthik Pattabiraman (University of British Columbia, Canada) Web developers often write test cases manually using testing frameworks such as Selenium. Testing JavaScript-based applications is challenging, as manually exploring various execution paths of the application is difficult. Also, JavaScript’s highly dynamic nature, as well as its complex interaction with the DOM, makes it difficult for the tester to achieve high coverage. We present a framework to automatically generate unit test cases for individual JavaScript functions. These test cases are strengthened by automatically generated test oracles capable of detecting faults in JavaScript code. Our approach is implemented in a tool called PYTHIA. Our preliminary evaluation results point to the efficacy of the approach in achieving high coverage and detecting faults. @InProceedings{ASE13p610, author = {Shabnam Mirshokraie and Ali Mesbah and Karthik Pattabiraman}, title = {Pythia: Generating Test Cases with Oracles for JavaScript Applications}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {610--615}, doi = {}, year = {2013}, } |
|
Moritz, Evan |
ASE '13-NEWIDEAS: "ExPort: Detecting and Visualizing ..."
ExPort: Detecting and Visualizing API Usages in Large Source Code Repositories
Evan Moritz, Mario Linares-Vásquez, Denys Poshyvanyk, Mark Grechanik, Collin McMillan, and Malcom Gethers (College of William and Mary, USA; University of Illinois at Chicago, USA; University of Notre Dame, USA; University of Maryland in Baltimore County, USA) This paper presents a technique for automatically mining and visualizing API usage examples. In contrast to previous approaches, our technique is capable of finding examples of API usage that occur across several functions in a program. This distinction is important because of a gap between what current API learning tools provide and what programmers need: current tools extract relatively small examples from single files/functions, even though programmers use APIs to build large software. The small examples are helpful in the initial stages of API learning, but leave out details that are helpful in later stages. Our technique is intended to fill this gap. It works by representing software as a Relational Topic Model, where API calls and the functions that use them are modeled as a document network. Given a starting API, our approach can recommend complex API usage examples mined from a repository of over 14 million Java methods. @InProceedings{ASE13p646, author = {Evan Moritz and Mario Linares-Vásquez and Denys Poshyvanyk and Mark Grechanik and Collin McMillan and Malcom Gethers}, title = {ExPort: Detecting and Visualizing API Usages in Large Source Code Repositories}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {646--651}, doi = {}, year = {2013}, } Info |
|
Motwani, Manish |
ASE '13-NEWIDEAS: "Detecting System Use Cases ..."
Detecting System Use Cases and Validations from Documents
Smita Ghaisas, Manish Motwani, and Preethu Rose Anish (Tata Consultancy Services, India) Identifying system use cases and corresponding validations involves analyzing large requirement documents to understand the descriptions of business processes, rules and policies. This consumes a significant amount of effort and time. We discuss an approach to automate the detection of system use cases and corresponding validations from documents. We have devised a representation that allows for capturing the essence of rule statements as a composition of atomic ‘Rule intents’ and key phrases associated with the intents. Rule intents that co-occur frequently constitute 'Rule acts’ analogous to the Speech acts in Linguistics. Our approach is based on NLP techniques designed around this Rule Model. We employ syntactic and semantic NL analyses around the model to identify and classify rules and annotate them with Rule acts. We map the Rule acts to business process steps and highlight the combinations as potential system use cases and validations for human supervision. @InProceedings{ASE13p568, author = {Smita Ghaisas and Manish Motwani and Preethu Rose Anish}, title = {Detecting System Use Cases and Validations from Documents}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {568--573}, doi = {}, year = {2013}, } |
|
Nanjundaswamy, Vishwas |
ASE '13-NEWIDEAS: "Flow Permissions for Android ..."
Flow Permissions for Android
Shashank Holavanalli, Don Manuel, Vishwas Nanjundaswamy, Brian Rosenberg, Feng Shen, Steven Y. Ko, and Lukasz Ziarek (SUNY Buffalo, USA) This paper proposes Flow Permissions, an extension to the Android permission mechanism. Unlike the existing permission mechanism, ours contains semantic information based on information flows. Flow Permissions allow users to examine and grant explicit information flows within an application (e.g., a permission for reading the phone number and sending it over the network) as well as implicit information flows across multiple applications (e.g., a permission for reading the phone number and sending it to another application already installed on the user's phone). Our goal with Flow Permissions is to provide visibility into the holistic behavior of the applications installed on a user's phone. Our evaluation compares our approach to dynamic flow tracking techniques; our results with 600 popular applications and 1,200 malicious applications show that our approach is practical and effective in deriving Flow Permissions statically. @InProceedings{ASE13p652, author = {Shashank Holavanalli and Don Manuel and Vishwas Nanjundaswamy and Brian Rosenberg and Feng Shen and Steven Y. Ko and Lukasz Ziarek}, title = {Flow Permissions for Android}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {652--657}, doi = {}, year = {2013}, } |
|
Nebut, Clémentine |
ASE '13-NEWIDEAS: "Recovering Model Transformation ..."
Recovering Model Transformation Traces using Multi-Objective Optimization
Hajer Saada, Marianne Huchard, Clémentine Nebut, and Houari Sahraoui (Université Montpellier 2, France; CNRS, France; Université de Montréal, Canada) Model Driven Engineering (MDE) is based on a large set of models that are used and manipulated throughout the development cycle. These models are manually or automatically produced and/or exploited using model transformations. To allow engineers to maintain the models and track their changes, recovering transformation traces is essential. In this paper, we propose an automated approach, based on multi-objective optimization, to recover transformation traces between models. Our approach takes as input a source model in the form of a set of fragments (fragments are defined using the source meta-model cardinalities and OCL constraints), and a target model. The recovered transformation traces take the form of many-to-many mappings between the constructs of the two models. @InProceedings{ASE13p688, author = {Hajer Saada and Marianne Huchard and Clémentine Nebut and Houari Sahraoui}, title = {Recovering Model Transformation Traces using Multi-Objective Optimization}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {688--693}, doi = {}, year = {2013}, } |
|
Nguyen, Cu Duy |
ASE '13-NEWIDEAS: "Automated Inference of Classifications ..."
Automated Inference of Classifications and Dependencies for Combinatorial Testing
Cu Duy Nguyen and Paolo Tonella (Fondazione Bruno Kessler, Italy) Even for small programs, the input space is huge – often unbounded. Partition testing divides the input space into disjoint equivalence classes and combinatorial testing selects a subset of all possible input class combinations, according to criteria such as pairwise coverage. The downside of this approach is that the partitioning of the input space into equivalence classes (input classification) is done manually. It is expensive and requires deep domain and implementation understanding. In this paper, we propose a novel approach to classify test inputs and their dependencies automatically. Firstly, random (or automatically generated) input vectors are sent to the system under test (SUT). For each input vector, an observed “hit vector” is produced by monitoring the execution of the SUT. Secondly, hit vectors are grouped into clusters using machine learning. Each cluster contains similar hit vectors, i.e., similar behaviors, and from them we obtain corresponding clusters of input vectors. Input classes are then extracted for each input parameter straightforwardly. Our experiments with a number of subjects show good results as the automatically generated classifications are the same or very close to the expected ones. @InProceedings{ASE13p622, author = {Cu Duy Nguyen and Paolo Tonella}, title = {Automated Inference of Classifications and Dependencies for Combinatorial Testing}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {622--627}, doi = {}, year = {2013}, } |
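The pipeline the abstract describes can be sketched on a toy SUT. Grouping identical hit vectors stands in for the machine-learning clustering step, and the SUT, its branches, and the derived value ranges are invented for the example.

```python
# Sketch of automated input classification: execute random inputs while
# recording a "hit vector" of covered branches, cluster inputs by observed
# behavior, then read an input class (here a value range) off each cluster.
import random
from collections import defaultdict

def sut(age):
    """Toy system under test; monitored branches form the hit vector."""
    hits = []
    if age < 18:
        hits.append("minor-branch")
    elif age < 65:
        hits.append("adult-branch")
    else:
        hits.append("senior-branch")
    return tuple(hits)

rng = random.Random(42)
inputs = [rng.randint(0, 100) for _ in range(500)]

clusters = defaultdict(list)
for x in inputs:
    clusters[sut(x)].append(x)  # same observed behavior -> same cluster

# Extract an input class per cluster: the range of values that triggered it
classes = {hv: (min(xs), max(xs)) for hv, xs in clusters.items()}
for hv, value_range in sorted(classes.items()):
    print(hv, value_range)
```

With enough samples, the recovered ranges approach the true equivalence-class boundaries (18 and 65 here), which is the sense in which the inferred classification can match the expected one.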
|
Orso, Alessandro |
ASE '13-NEWIDEAS: "SBFR: A Search Based Approach ..."
SBFR: A Search Based Approach for Reproducing Failures of Programs with Grammar Based Input
Fitsum Meshesha Kifetew, Wei Jin, Roberto Tiella, Alessandro Orso, and Paolo Tonella (Fondazione Bruno Kessler, Italy; Georgia Institute of Technology, USA) Reproducing field failures in-house, a step developers must perform when assigned a bug report, is an arduous task. In most cases, developers must be able to reproduce a reported failure using only a stack trace and/or some informal description of the failure. The problem becomes even harder for the large class of programs whose input is highly structured and strictly specified by a grammar. To address this problem, we present SBFR, a search-based failure-reproduction technique for programs with structured input. SBFR formulates failure reproduction as a search problem. Starting from a reported failure and a limited amount of dynamic information about the failure, SBFR exploits the potential of genetic programming to iteratively find legal inputs that can trigger the failure. @InProceedings{ASE13p604, author = {Fitsum Meshesha Kifetew and Wei Jin and Roberto Tiella and Alessandro Orso and Paolo Tonella}, title = {SBFR: A Search Based Approach for Reproducing Failures of Programs with Grammar Based Input}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {604--609}, doi = {}, year = {2013}, } |
|
Pal, Debjit |
ASE '13-NEWIDEAS: "Using Automatically Generated ..."
Using Automatically Generated Invariants for Regression Testing and Bug Localization
Parth Sagdeo, Nicholas Ewalt, Debjit Pal, and Shobha Vasudevan (University of Illinois at Urbana-Champaign, USA) We present PREAMBL, an approach that applies automatically generated invariants to regression testing and bug localization. Our invariant generation methodology is PRECIS, an automatic and scalable engine that uses program predicates to guide clustering of dynamically obtained path information. In this paper, we apply it for regression testing and for capturing program predicate information to guide statistical analysis based bug localization. We present a technique to localize bugs in paths of variable lengths. We are able to map the localized post-deployment bugs on a path to pre-release invariants generated along that path. Our experimental results demonstrate the efficacy of the use of PRECIS for regression testing, as well as the ability of PREAMBL to zero in on relevant segments of program paths. @InProceedings{ASE13p634, author = {Parth Sagdeo and Nicholas Ewalt and Debjit Pal and Shobha Vasudevan}, title = {Using Automatically Generated Invariants for Regression Testing and Bug Localization}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {634--639}, doi = {}, year = {2013}, } |
|
Pattabiraman, Karthik |
ASE '13-NEWIDEAS: "Pythia: Generating Test Cases ..."
Pythia: Generating Test Cases with Oracles for JavaScript Applications
Shabnam Mirshokraie, Ali Mesbah, and Karthik Pattabiraman (University of British Columbia, Canada) Web developers often write test cases manually using testing frameworks such as Selenium. Testing JavaScript-based applications is challenging as manually exploring various execution paths of the application is difficult. Also, JavaScript’s highly dynamic nature and its complex interaction with the DOM make it difficult for the tester to achieve high coverage. We present a framework to automatically generate unit test cases for individual JavaScript functions. These test cases are strengthened by automatically generated test oracles capable of detecting faults in JavaScript code. Our approach is implemented in a tool called PYTHIA. Our preliminary evaluation results point to the efficacy of the approach in achieving high coverage and detecting faults. @InProceedings{ASE13p610, author = {Shabnam Mirshokraie and Ali Mesbah and Karthik Pattabiraman}, title = {Pythia: Generating Test Cases with Oracles for JavaScript Applications}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {610--615}, doi = {}, year = {2013}, } |
|
Pilgrim, Jens von |
ASE '13-NEWIDEAS: "Model/Code Co-Refactoring: ..."
Model/Code Co-Refactoring: An MDE Approach
Jens von Pilgrim, Bastian Ulke, Andreas Thies, and Friedrich Steimann (Fernuniversität in Hagen, Germany) Model-driven engineering suggests that models are the primary artefacts of software development. This means that models may be refactored even after code has been generated from them, in which case the code must be changed to reflect the refactoring. However, as we show, neither regenerating the code from the refactored model nor applying an equivalent refactoring to the generated code is sufficient to keep model and code in sync — rather, model and code need to be refactored jointly. To enable this, we investigate the technical requirements of model/code co-refactoring, and implement a model-driven solution that we evaluate using a set of open-source programs and their structural models. Results suggest that our approach is feasible. @InProceedings{ASE13p682, author = {Jens von Pilgrim and Bastian Ulke and Andreas Thies and Friedrich Steimann}, title = {Model/Code Co-Refactoring: An MDE Approach}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {682--687}, doi = {}, year = {2013}, } |
|
Poshyvanyk, Denys |
ASE '13-NEWIDEAS: "ExPort: Detecting and Visualizing ..."
ExPort: Detecting and Visualizing API Usages in Large Source Code Repositories
Evan Moritz, Mario Linares-Vásquez, Denys Poshyvanyk, Mark Grechanik, Collin McMillan, and Malcom Gethers (College of William and Mary, USA; University of Illinois at Chicago, USA; University of Notre Dame, USA; University of Maryland in Baltimore County, USA) This paper presents a technique for automatically mining and visualizing API usage examples. In contrast to previous approaches, our technique is capable of finding examples of API usage that occur across several functions in a program. This distinction is important because of a gap between what current API learning tools provide and what programmers need: current tools extract relatively small examples from single files/functions, even though programmers use APIs to build large software. The small examples are helpful in the initial stages of API learning, but leave out details that are helpful in later stages. Our technique is intended to fill this gap. It works by representing software as a Relational Topic Model, where API calls and the functions that use them are modeled as a document network. Given a starting API, our approach can recommend complex API usage examples mined from a repository of over 14 million Java methods. @InProceedings{ASE13p646, author = {Evan Moritz and Mario Linares-Vásquez and Denys Poshyvanyk and Mark Grechanik and Collin McMillan and Malcom Gethers}, title = {ExPort: Detecting and Visualizing API Usages in Large Source Code Repositories}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {646--651}, doi = {}, year = {2013}, } Info |
|
Reger, Giles |
ASE '13-NEWIDEAS: "A Pattern-Based Approach to ..."
A Pattern-Based Approach to Parametric Specification Mining
Giles Reger, Howard Barringer, and David Rydeheard (University of Manchester, UK) This paper presents a technique for using execution traces to mine parametric temporal specifications in the form of quantified event automata (QEA), previously introduced as an expressive and efficient formalism for runtime verification. We consider a pattern-based mining approach that uses a pattern library to generate and check potential properties over given traces, and then combines successful patterns. By using predefined models to measure the tool’s precision and recall, we demonstrate that our approach can effectively and efficiently extract specifications in realistic scenarios. @InProceedings{ASE13p658, author = {Giles Reger and Howard Barringer and David Rydeheard}, title = {A Pattern-Based Approach to Parametric Specification Mining}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {658--663}, doi = {}, year = {2013}, } |
|
Rosenberg, Brian |
ASE '13-NEWIDEAS: "Flow Permissions for Android ..."
Flow Permissions for Android
Shashank Holavanalli, Don Manuel, Vishwas Nanjundaswamy, Brian Rosenberg, Feng Shen, Steven Y. Ko, and Lukasz Ziarek (SUNY Buffalo, USA) This paper proposes Flow Permissions, an extension to the Android permission mechanism. Unlike the existing permission mechanism, our permission mechanism contains semantic information based on information flows. Flow Permissions allow users to examine and grant explicit information flows within an application (e.g., a permission for reading the phone number and sending it over the network) as well as implicit information flows across multiple applications (e.g., a permission for reading the phone number and sending it to another application already installed on the user's phone). Our goal with Flow Permissions is to provide visibility into the holistic behavior of the applications installed on a user's phone. Our evaluation compares our approach to dynamic flow tracking techniques; our results with 600 popular applications and 1,200 malicious applications show that our approach is practical and effective in deriving Flow Permissions statically. @InProceedings{ASE13p652, author = {Shashank Holavanalli and Don Manuel and Vishwas Nanjundaswamy and Brian Rosenberg and Feng Shen and Steven Y. Ko and Lukasz Ziarek}, title = {Flow Permissions for Android}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {652--657}, doi = {}, year = {2013}, } |
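The core idea of a Flow Permission (naming a source-to-sink pair explicitly rather than listing source and sink permissions separately) can be sketched as follows; the flow tuples and permission names here are invented for illustration and are not the paper's actual analysis output.

```python
def flow_permissions(static_flows):
    # static_flows: (source, sink) pairs that a static analysis reports,
    # e.g. ("READ_PHONE_STATE", "INTERNET") meaning "the phone number
    # can leave the device over the network". A Flow Permission surfaces
    # the pair itself, so the user sees the information flow, not just
    # that the app can read the phone state and use the network.
    return sorted({f"{src} -> {sink}" for src, sink in static_flows})

perms = flow_permissions([
    ("READ_PHONE_STATE", "INTERNET"),
    ("READ_CONTACTS", "SEND_SMS"),
    ("READ_PHONE_STATE", "INTERNET"),  # duplicate flows collapse
])
print(perms)
```

The set-based deduplication reflects that a flow permission describes a capability, not the number of code paths that exercise it.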
|
Ruiz-Cortés, Antonio |
ASE '13-NEWIDEAS: "Multi-user Variability Configuration: ..."
Multi-user Variability Configuration: A Game Theoretic Approach
Jesús García-Galán, Pablo Trinidad, and Antonio Ruiz-Cortés (University of Seville, Spain) Multi-user configuration is a neglected problem in the area of variability-intensive systems. The appearance of conflicts among user configurations is a major concern. Current approaches focus on avoiding such conflicts by applying the mutual exclusion principle. However, this perspective has a negative impact on the satisfaction of users, who cannot make decisions fairly. In this work, we propose an interpretation of multi-user configuration as a game theoretic problem. Game theory is a well-known discipline that analyzes conflicts and cooperation among intelligent rational decision-makers. We present a taxonomy of multi-user configuration approaches and show how they can be interpreted as different problems of game theory. We focus on cooperative game theory to propose and automate a tradeoff-based bargaining approach, as a way to resolve the conflicts and maximize user satisfaction at the same time. @InProceedings{ASE13p574, author = {Jesús García-Galán and Pablo Trinidad and Antonio Ruiz-Cortés}, title = {Multi-user Variability Configuration: A Game Theoretic Approach}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {574--579}, doi = {}, year = {2013}, } |
|
Rydeheard, David |
ASE '13-NEWIDEAS: "A Pattern-Based Approach to ..."
A Pattern-Based Approach to Parametric Specification Mining
Giles Reger, Howard Barringer, and David Rydeheard (University of Manchester, UK) This paper presents a technique for using execution traces to mine parametric temporal specifications in the form of quantified event automata (QEA), previously introduced as an expressive and efficient formalism for runtime verification. We consider a pattern-based mining approach that uses a pattern library to generate and check potential properties over given traces, and then combines successful patterns. By using predefined models to measure the tool’s precision and recall, we demonstrate that our approach can effectively and efficiently extract specifications in realistic scenarios. @InProceedings{ASE13p658, author = {Giles Reger and Howard Barringer and David Rydeheard}, title = {A Pattern-Based Approach to Parametric Specification Mining}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {658--663}, doi = {}, year = {2013}, } |
|
Saada, Hajer |
ASE '13-NEWIDEAS: "Recovering Model Transformation ..."
Recovering Model Transformation Traces using Multi-Objective Optimization
Hajer Saada, Marianne Huchard, Clémentine Nebut, and Houari Sahraoui (Université Montpellier 2, France; CNRS, France; Université de Montréal, Canada) Model Driven Engineering (MDE) is based on a large set of models that are used and manipulated throughout the development cycle. These models are manually or automatically produced and/or exploited using model transformations. To allow engineers to maintain the models and track their changes, recovering transformation traces is essential. In this paper, we propose an automated approach, based on multi-objective optimization, to recover transformation traces between models. Our approach takes as input a source model in the form of a set of fragments (fragments are defined using the source meta-model cardinalities and OCL constraints), and a target model. The recovered transformation traces take the form of many-to-many mappings between the constructs of the two models. @InProceedings{ASE13p688, author = {Hajer Saada and Marianne Huchard and Clémentine Nebut and Houari Sahraoui}, title = {Recovering Model Transformation Traces using Multi-Objective Optimization}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {688--693}, doi = {}, year = {2013}, } |
|
Sagdeo, Parth |
ASE '13-NEWIDEAS: "Using Automatically Generated ..."
Using Automatically Generated Invariants for Regression Testing and Bug Localization
Parth Sagdeo, Nicholas Ewalt, Debjit Pal, and Shobha Vasudevan (University of Illinois at Urbana-Champaign, USA) We present PREAMBL, an approach that applies automatically generated invariants to regression testing and bug localization. Our invariant generation methodology is PRECIS, an automatic and scalable engine that uses program predicates to guide clustering of dynamically obtained path information. In this paper, we apply it for regression testing and for capturing program predicate information to guide statistical analysis based bug localization. We present a technique to localize bugs in paths of variable lengths. We are able to map the localized post-deployment bugs on a path to pre-release invariants generated along that path. Our experimental results demonstrate the efficacy of the use of PRECIS for regression testing, as well as the ability of PREAMBL to zero in on relevant segments of program paths. @InProceedings{ASE13p634, author = {Parth Sagdeo and Nicholas Ewalt and Debjit Pal and Shobha Vasudevan}, title = {Using Automatically Generated Invariants for Regression Testing and Bug Localization}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {634--639}, doi = {}, year = {2013}, } |
|
Sahraoui, Houari |
ASE '13-NEWIDEAS: "Recovering Model Transformation ..."
Recovering Model Transformation Traces using Multi-Objective Optimization
Hajer Saada, Marianne Huchard, Clémentine Nebut, and Houari Sahraoui (Université Montpellier 2, France; CNRS, France; Université de Montréal, Canada) Model Driven Engineering (MDE) is based on a large set of models that are used and manipulated throughout the development cycle. These models are manually or automatically produced and/or exploited using model transformations. To allow engineers to maintain the models and track their changes, recovering transformation traces is essential. In this paper, we propose an automated approach, based on multi-objective optimization, to recover transformation traces between models. Our approach takes as input a source model in the form of a set of fragments (fragments are defined using the source meta-model cardinalities and OCL constraints), and a target model. The recovered transformation traces take the form of many-to-many mappings between the constructs of the two models. @InProceedings{ASE13p688, author = {Hajer Saada and Marianne Huchard and Clémentine Nebut and Houari Sahraoui}, title = {Recovering Model Transformation Traces using Multi-Objective Optimization}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {688--693}, doi = {}, year = {2013}, } |
|
Sannier, Nicolas |
ASE '13-NEWIDEAS: "From Comparison Matrix to ..."
From Comparison Matrix to Variability Model: The Wikipedia Case Study
Nicolas Sannier, Mathieu Acher, and Benoit Baudry (University of Rennes 1, France; Inria, France; Irisa, France) Product comparison matrices (PCMs) provide a convenient way to document the discriminant features of a family of related products and now abound on the internet. Despite their apparent simplicity, the information present in existing PCMs can be very heterogeneous, partial, ambiguous, hard to exploit by users who desire to choose an appropriate product. Variability Models (VMs) can be employed to formulate in a more precise way the semantics of PCMs and enable automated reasoning such as assisted configuration. Yet, the gap between PCMs and VMs should be precisely understood and automated techniques should support the transition between the two. In this paper, we propose variability patterns that describe PCMs content and conduct an empirical analysis of 300+ PCMs mined from Wikipedia. Our findings are a first step toward better engineering techniques for maintaining and configuring PCMs. @InProceedings{ASE13p580, author = {Nicolas Sannier and Mathieu Acher and Benoit Baudry}, title = {From Comparison Matrix to Variability Model: The Wikipedia Case Study}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {580--585}, doi = {}, year = {2013}, } |
|
Savagaonkar, Milind |
ASE '13-NEWIDEAS: "Natural Language Requirements ..."
Natural Language Requirements Quality Analysis Based on Business Domain Models
Annervaz K.M., Vikrant Kaulgud, Shubhashis Sengupta, and Milind Savagaonkar (Accenture Technology Labs, India) Quality of requirements written in natural language has always been a critical concern in software engineering. Poorly written requirements lead to ambiguity and false interpretation in different phases of a software delivery project. Further, incomplete requirements lead to partial implementation of the desired system behavior. In this paper, we present a model for harvesting domain (functional or business) knowledge. Subsequently, we present natural language processing and ontology based techniques for leveraging the model to analyze requirements quality and for requirements comprehension. The prototype also provides an advisory to business analysts so that the requirements can be aligned to the expected domain standard. The prototype developed is currently being used in practice, and the initial results are very encouraging. @InProceedings{ASE13p676, author = {Annervaz K.M. and Vikrant Kaulgud and Shubhashis Sengupta and Milind Savagaonkar}, title = {Natural Language Requirements Quality Analysis Based on Business Domain Models}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {676--681}, doi = {}, year = {2013}, } |
|
Scanniello, Giuseppe |
ASE '13-NEWIDEAS: "Class Level Fault Prediction ..."
Class Level Fault Prediction using Software Clustering
Giuseppe Scanniello, Carmine Gravino, Andrian Marcus, and Tim Menzies (University of Basilicata, Italy; University of Salerno, Italy; Wayne State University, USA; West Virginia University, USA) Defect prediction approaches use software metrics and fault data to learn which software properties associate with faults in classes. Existing techniques predict fault-prone classes in the same release (intra) or in subsequent releases (inter) of a subject software system. We propose an intra-release fault prediction technique, which learns from clusters of related classes, rather than from the entire system. Classes are clustered using structural information and fault prediction models are built using the properties of the classes in each cluster. We present an empirical investigation on data from 29 releases of eight open source software systems from the PROMISE repository, with predictors built using multivariate linear regression. The results indicate that the prediction models built on clusters outperform those built on all the classes of the system. @InProceedings{ASE13p640, author = {Giuseppe Scanniello and Carmine Gravino and Andrian Marcus and Tim Menzies}, title = {Class Level Fault Prediction using Software Clustering}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {640--645}, doi = {}, year = {2013}, } Info |
|
Sengupta, Shubhashis |
ASE '13-NEWIDEAS: "Natural Language Requirements ..."
Natural Language Requirements Quality Analysis Based on Business Domain Models
Annervaz K.M., Vikrant Kaulgud , Shubhashis Sengupta, and Milind Savagaonkar (Accenture Technology Labs, India) Quality of requirements written in natural language has always been a critical concern in software engineering. Poorly written requirements lead to ambiguity and false interpretation in different phases of a software delivery project. Further, incomplete requirements lead to partial implementation of the desired system behavior. In this paper, we present a model for harvesting domain (functional or business) knowledge. Subsequently we present natural language processing and ontology based techniques for leveraging the model to analyze requirements quality and for requirements comprehension. The prototype also provides an advisory to business analysts so that the requirements can be aligned to the expected domain standard. The prototype developed is currently being used in practice, and the initial results are very encouraging. @InProceedings{ASE13p676, author = {Annervaz K.M. and Vikrant Kaulgud and Shubhashis Sengupta and Milind Savagaonkar}, title = {Natural Language Requirements Quality Analysis Based on Business Domain Models}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {676--681}, doi = {}, year = {2013}, } |
|
Seyff, Norbert |
ASE '13-NEWIDEAS: "Semi-automatic Generation ..."
Semi-automatic Generation of Metamodels from Model Sketches
Dustin Wüest, Norbert Seyff, and Martin Glinz (University of Zurich, Switzerland) Traditionally, metamodeling is an upfront activity performed by experts for defining modeling languages. Modeling tools then typically restrict modelers to using only constructs defined in the metamodel. This is inappropriate when users want to sketch graphical models without any restrictions and only later assign meanings to the sketched elements. Upfront metamodeling also complicates the creation of domain-specific languages, as it requires experts with both domain and metamodeling expertise. In this paper we present a new approach that supports modelers in creating metamodels for diagrams they have sketched or are currently sketching. Metamodels are defined in a semi-automatic, interactive way by annotating diagram elements and by automated model analysis. Our approach requires no metamodeling expertise and supports the co-evolution of models and metamodels. @InProceedings{ASE13p664, author = {Dustin Wüest and Norbert Seyff and Martin Glinz}, title = {Semi-automatic Generation of Metamodels from Model Sketches}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {664--669}, doi = {}, year = {2013}, } |
|
Shah, Eeshan |
ASE '13-NEWIDEAS: "Cloud Twin: Native Execution ..."
Cloud Twin: Native Execution of Android Applications on the Windows Phone
Ethan Holder, Eeshan Shah, Mohammed Davoodi, and Eli Tilevich (Virginia Tech, USA) To successfully compete in the software marketplace, modern mobile applications must run on multiple competing platforms, such as Android, iOS, and Windows Phone. Companies producing mobile applications spend substantial amounts of time, effort, and money to port applications across platforms. Creating individual program versions for different platforms further exacerbates the maintenance burden. This paper presents Cloud Twin, a novel approach to natively executing the functionality of a mobile application written for another platform. The functionality is accessed by means of dynamic cross-platform replay, in which the source application’s execution in the cloud is mimicked natively on the target platform. The reference implementation of Cloud Twin natively emulates the behavior of Android applications on a Windows Phone. Specifically, Cloud Twin transmits, via web sockets, the UI actions performed on the Windows Phone to the cloud server, which then mimics the received actions on the Android emulator. The UI updates on the emulator are efficiently captured by means of Aspect Oriented Programming and sent back to be replayed on the Windows Phone. Our case studies with third-party applications indicate that the Cloud Twin approach can become a viable solution to the heterogeneity of the mobile application market. @InProceedings{ASE13p598, author = {Ethan Holder and Eeshan Shah and Mohammed Davoodi and Eli Tilevich}, title = {Cloud Twin: Native Execution of Android Applications on the Windows Phone}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {598--603}, doi = {}, year = {2013}, } |
|
Shen, Feng |
ASE '13-NEWIDEAS: "Flow Permissions for Android ..."
Flow Permissions for Android
Shashank Holavanalli, Don Manuel, Vishwas Nanjundaswamy, Brian Rosenberg, Feng Shen, Steven Y. Ko, and Lukasz Ziarek (SUNY Buffalo, USA) This paper proposes Flow Permissions, an extension to the Android permission mechanism. Unlike the existing permission mechanism, our permission mechanism contains semantic information based on information flows. Flow Permissions allow users to examine and grant explicit information flows within an application (e.g., a permission for reading the phone number and sending it over the network) as well as implicit information flows across multiple applications (e.g., a permission for reading the phone number and sending it to another application already installed on the user's phone). Our goal with Flow Permissions is to provide visibility into the holistic behavior of the applications installed on a user's phone. Our evaluation compares our approach to dynamic flow tracking techniques; our results with 600 popular applications and 1,200 malicious applications show that our approach is practical and effective in deriving Flow Permissions statically. @InProceedings{ASE13p652, author = {Shashank Holavanalli and Don Manuel and Vishwas Nanjundaswamy and Brian Rosenberg and Feng Shen and Steven Y. Ko and Lukasz Ziarek}, title = {Flow Permissions for Android}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {652--657}, doi = {}, year = {2013}, } |
|
Shin, Yonghee |
ASE '13-NEWIDEAS: "Learning Effective Query Transformations ..."
Learning Effective Query Transformations for Enhanced Requirements Trace Retrieval
Timothy Dietrich, Jane Cleland-Huang, and Yonghee Shin (DePaul University, USA) In automated requirements traceability, significant improvements can be realized through incorporating user feedback into the trace retrieval process. However, existing feedback techniques are designed to improve results for individual queries. In this paper we present a novel technique designed to extend the benefits of user feedback across multiple trace queries. Our approach, named Trace Query Transformation (TQT), utilizes a novel form of Association Rule Mining to learn a set of query transformation rules which are used to improve the efficacy of future trace queries. We evaluate TQT using two different kinds of training sets. The first represents an initial set of queries directly modified by human analysts, while the second represents a set of queries generated by applying a query optimization process based on initial relevance feedback for trace links between a set of source and target documents. Both techniques are evaluated using requirements from the WorldVista Healthcare system, traced against certification requirements for the Commission for Healthcare Information Technology. Results show that the TQT technique returns significant improvements in the quality of generated trace links. @InProceedings{ASE13p586, author = {Timothy Dietrich and Jane Cleland-Huang and Yonghee Shin}, title = {Learning Effective Query Transformations for Enhanced Requirements Trace Retrieval}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {586--591}, doi = {}, year = {2013}, } |
|
Steimann, Friedrich |
ASE '13-NEWIDEAS: "Model/Code Co-Refactoring: ..."
Model/Code Co-Refactoring: An MDE Approach
Jens von Pilgrim, Bastian Ulke, Andreas Thies, and Friedrich Steimann (Fernuniversität in Hagen, Germany) Model-driven engineering suggests that models are the primary artefacts of software development. This means that models may be refactored even after code has been generated from them, in which case the code must be changed to reflect the refactoring. However, as we show, neither regenerating the code from the refactored model nor applying an equivalent refactoring to the generated code is sufficient to keep model and code in sync — rather, model and code need to be refactored jointly. To enable this, we investigate the technical requirements of model/code co-refactoring, and implement a model-driven solution that we evaluate using a set of open-source programs and their structural models. Results suggest that our approach is feasible. @InProceedings{ASE13p682, author = {Jens von Pilgrim and Bastian Ulke and Andreas Thies and Friedrich Steimann}, title = {Model/Code Co-Refactoring: An MDE Approach}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {682--687}, doi = {}, year = {2013}, } |
|
Tambe, Milind |
ASE '13-NEWIDEAS: "Randomizing Regression Tests ..."
Randomizing Regression Tests using Game Theory
Nupul Kukreja, William G. J. Halfond, and Milind Tambe (University of Southern California, USA) As software evolves, the number of test cases in the regression test suites continues to increase, requiring testers to prioritize their execution. Usually only a subset of the test cases is executed due to limited testing resources. This subset is often known to the developers who may try to "game" the system by committing insufficiently tested code for parts of the software that will not be tested. In this new ideas paper, we propose a novel approach for randomizing regression test scheduling, based on Stackelberg games for deployment of scarce resources. We apply this approach to randomizing test cases in such a way as to maximize the testers' expected payoff when executing the test cases. Our approach accounts for resource limitations (number of testers) and provides a probabilistic distribution for scheduling test cases. We provide an example application of our approach showcasing the idea of using Stackelberg games for randomized regression test scheduling. @InProceedings{ASE13p616, author = {Nupul Kukreja and William G. J. Halfond and Milind Tambe}, title = {Randomizing Regression Tests using Game Theory}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {616--621}, doi = {}, year = {2013}, } |
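The key idea (turn test payoffs into a probability distribution and sample the schedule, so developers cannot predict which tests will run) can be sketched minimally. This is not the paper's Stackelberg solver; it is a simplified illustration of randomized, payoff-weighted scheduling, and the test names and payoff values are hypothetical.

```python
import random

def randomized_schedule(payoffs, num_testers, seed=None):
    # payoffs: test case name -> tester payoff if that test is executed.
    # Normalize payoffs into a probability distribution, then draw a
    # weighted sample without replacement of size num_testers.
    rng = random.Random(seed)
    total = sum(payoffs.values())
    probs = {t: v / total for t, v in payoffs.items()}
    chosen = []
    pool = dict(probs)
    for _ in range(min(num_testers, len(pool))):
        r = rng.random() * sum(pool.values())
        acc = 0.0
        for t, p in pool.items():
            acc += p
            if r <= acc:
                chosen.append(t)
                del pool[t]
                break
    return probs, chosen

probs, schedule = randomized_schedule({"t1": 5, "t2": 3, "t3": 2}, 2, seed=1)
```

Higher-payoff tests are executed more often in expectation, but every test has a nonzero chance of running, which is what removes the incentive to under-test "safe" code.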
|
Tan, Lin |
ASE '13-NEWIDEAS: "AutoComment: Mining Question ..."
AutoComment: Mining Question and Answer Sites for Automatic Comment Generation
Edmund Wong, Jinqiu Yang, and Lin Tan (University of Waterloo, Canada) Code comments improve software maintainability. To address the comment scarcity issue, we propose a new automatic comment generation approach, which mines comments from a large programming Question and Answer (Q&A) site. Q&A sites allow programmers to post questions and receive solutions, which contain code segments together with their descriptions, referred to as code-description mappings. We develop AutoComment to extract such mappings, and leverage them to generate description comments automatically for similar code segments matched in open-source projects. We apply AutoComment to analyze Java and Android tagged Q&A posts to extract 132,767 code-description mappings, which help AutoComment to generate 102 comments automatically for 23 Java and Android projects. The user study results show that the majority of the participants consider the generated comments accurate, adequate, concise, and useful in helping them understand the code. @InProceedings{ASE13p562, author = {Edmund Wong and Jinqiu Yang and Lin Tan}, title = {AutoComment: Mining Question and Answer Sites for Automatic Comment Generation}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {562--567}, doi = {}, year = {2013}, } |
|
Thies, Andreas |
ASE '13-NEWIDEAS: "Model/Code Co-Refactoring: ..."
Model/Code Co-Refactoring: An MDE Approach
Jens von Pilgrim, Bastian Ulke, Andreas Thies, and Friedrich Steimann (Fernuniversität in Hagen, Germany) Model-driven engineering suggests that models are the primary artefacts of software development. This means that models may be refactored even after code has been generated from them, in which case the code must be changed to reflect the refactoring. However, as we show, neither regenerating the code from the refactored model nor applying an equivalent refactoring to the generated code is sufficient to keep model and code in sync — rather, model and code need to be refactored jointly. To enable this, we investigate the technical requirements of model/code co-refactoring, and implement a model-driven solution that we evaluate using a set of open-source programs and their structural models. Results suggest that our approach is feasible. @InProceedings{ASE13p682, author = {Jens von Pilgrim and Bastian Ulke and Andreas Thies and Friedrich Steimann}, title = {Model/Code Co-Refactoring: An MDE Approach}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {682--687}, doi = {}, year = {2013}, } |
|
Tiella, Roberto |
ASE '13-NEWIDEAS: "SBFR: A Search Based Approach ..."
SBFR: A Search Based Approach for Reproducing Failures of Programs with Grammar Based Input
Fitsum Meshesha Kifetew, Wei Jin, Roberto Tiella, Alessandro Orso, and Paolo Tonella (Fondazione Bruno Kessler, Italy; Georgia Institute of Technology, USA) Reproducing field failures in-house, a step developers must perform when assigned a bug report, is an arduous task. In most cases, developers must be able to reproduce a reported failure using only a stack trace and/or some informal description of the failure. The problem becomes even harder for the large class of programs whose input is highly structured and strictly specified by a grammar. To address this problem, we present SBFR, a search-based failure-reproduction technique for programs with structured input. SBFR formulates failure reproduction as a search problem. Starting from a reported failure and a limited amount of dynamic information about the failure, SBFR exploits the potential of genetic programming to iteratively find legal inputs that can trigger the failure. @InProceedings{ASE13p604, author = {Fitsum Meshesha Kifetew and Wei Jin and Roberto Tiella and Alessandro Orso and Paolo Tonella}, title = {SBFR: A Search Based Approach for Reproducing Failures of Programs with Grammar Based Input}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {604--609}, doi = {}, year = {2013}, } |
|
Tilevich, Eli |
ASE '13-NEWIDEAS: "Cloud Twin: Native Execution ..."
Cloud Twin: Native Execution of Android Applications on the Windows Phone
Ethan Holder, Eeshan Shah, Mohammed Davoodi, and Eli Tilevich (Virginia Tech, USA) To successfully compete in the software marketplace, modern mobile applications must run on multiple competing platforms, such as Android, iOS, and Windows Phone. Companies producing mobile applications spend substantial amounts of time, effort, and money to port applications across platforms. Creating individual program versions for different platforms further exacerbates the maintenance burden. This paper presents Cloud Twin, a novel approach to natively executing the functionality of a mobile application written for another platform. The functionality is accessed by means of dynamic cross-platform replay, in which the source application’s execution in the cloud is mimicked natively on the target platform. The reference implementation of Cloud Twin natively emulates the behavior of Android applications on a Windows Phone. Specifically, Cloud Twin transmits, via web sockets, the UI actions performed on the Windows Phone to the cloud server, which then mimics the received actions on the Android emulator. The UI updates on the emulator are efficiently captured by means of Aspect Oriented Programming and sent back to be replayed on the Windows Phone. Our case studies with third-party applications indicate that the Cloud Twin approach can become a viable solution to the heterogeneity of the mobile application market. @InProceedings{ASE13p598, author = {Ethan Holder and Eeshan Shah and Mohammed Davoodi and Eli Tilevich}, title = {Cloud Twin: Native Execution of Android Applications on the Windows Phone}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {598--603}, doi = {}, year = {2013}, } |
|
Tonella, Paolo |
ASE '13-NEWIDEAS: "Automated Inference of Classifications ..."
Automated Inference of Classifications and Dependencies for Combinatorial Testing
Cu Duy Nguyen and Paolo Tonella (Fondazione Bruno Kessler, Italy) Even for small programs, the input space is huge – often unbounded. Partition testing divides the input space into disjoint equivalence classes and combinatorial testing selects a subset of all possible input class combinations, according to criteria such as pairwise coverage. The downside of this approach is that the partitioning of the input space into equivalence classes (input classification) is done manually. It is expensive and requires deep domain and implementation understanding. In this paper, we propose a novel approach to classify test inputs and their dependencies automatically. First, random (or automatically generated) input vectors are sent to the system under test (SUT). For each input vector, an observed “hit vector” is produced by monitoring the execution of the SUT. Second, hit vectors are grouped into clusters using machine learning. Each cluster contains similar hit vectors, i.e., similar behaviors, and from them we obtain corresponding clusters of input vectors. Input classes are then extracted for each input parameter straightforwardly. Our experiments with a number of subjects show good results, as the automatically generated classifications are the same as, or very close to, the expected ones. @InProceedings{ASE13p622, author = {Cu Duy Nguyen and Paolo Tonella}, title = {Automated Inference of Classifications and Dependencies for Combinatorial Testing}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {622--627}, doi = {}, year = {2013}, }
ASE '13-NEWIDEAS: "SBFR: A Search Based Approach ..."
SBFR: A Search Based Approach for Reproducing Failures of Programs with Grammar Based Input
Fitsum Meshesha Kifetew, Wei Jin, Roberto Tiella, Alessandro Orso, and Paolo Tonella (Fondazione Bruno Kessler, Italy; Georgia Institute of Technology, USA) Reproducing field failures in-house, a step developers must perform when assigned a bug report, is an arduous task. In most cases, developers must be able to reproduce a reported failure using only a stack trace and/or some informal description of the failure. The problem becomes even harder for the large class of programs whose input is highly structured and strictly specified by a grammar. To address this problem, we present SBFR, a search-based failure-reproduction technique for programs with structured input. SBFR formulates failure reproduction as a search problem. Starting from a reported failure and a limited amount of dynamic information about the failure, SBFR exploits the potential of genetic programming to iteratively find legal inputs that can trigger the failure. @InProceedings{ASE13p604, author = {Fitsum Meshesha Kifetew and Wei Jin and Roberto Tiella and Alessandro Orso and Paolo Tonella}, title = {SBFR: A Search Based Approach for Reproducing Failures of Programs with Grammar Based Input}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {604--609}, doi = {}, year = {2013}, } |
|
Trinidad, Pablo |
ASE '13-NEWIDEAS: "Multi-user Variability Configuration: ..."
Multi-user Variability Configuration: A Game Theoretic Approach
Jesús García-Galán, Pablo Trinidad, and Antonio Ruiz-Cortés (University of Seville, Spain) Multi-user configuration is a neglected problem in the area of variability-intensive systems. A major concern is the appearance of conflicts among user configurations. Current approaches focus on avoiding such conflicts by applying the mutual exclusion principle. However, this perspective has a negative impact on user satisfaction, since no user can make decisions fairly. In this work, we propose an interpretation of multi-user configuration as a game theoretic problem. Game theory is a well-known discipline which analyzes conflicts and cooperation among intelligent rational decision-makers. We present a taxonomy of multi-user configuration approaches, and show how they can be interpreted as different problems of game theory. We focus on cooperative game theory to propose and automate a tradeoff-based bargaining approach, as a way to solve the conflicts and maximize user satisfaction at the same time. @InProceedings{ASE13p574, author = {Jesús García-Galán and Pablo Trinidad and Antonio Ruiz-Cortés}, title = {Multi-user Variability Configuration: A Game Theoretic Approach}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {574--579}, doi = {}, year = {2013}, } |
|
Ulke, Bastian |
ASE '13-NEWIDEAS: "Model/Code Co-Refactoring: ..."
Model/Code Co-Refactoring: An MDE Approach
Jens von Pilgrim, Bastian Ulke, Andreas Thies, and Friedrich Steimann (Fernuniversität in Hagen, Germany) Model-driven engineering suggests that models are the primary artefacts of software development. This means that models may be refactored even after code has been generated from them, in which case the code must be changed to reflect the refactoring. However, as we show, neither regenerating the code from the refactored model nor applying an equivalent refactoring to the generated code is sufficient to keep model and code in sync; rather, model and code need to be refactored jointly. To enable this, we investigate the technical requirements of model/code co-refactoring, and implement a model-driven solution that we evaluate using a set of open-source programs and their structural models. Results suggest that our approach is feasible. @InProceedings{ASE13p682, author = {Jens von Pilgrim and Bastian Ulke and Andreas Thies and Friedrich Steimann}, title = {Model/Code Co-Refactoring: An MDE Approach}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {682--687}, doi = {}, year = {2013}, } |
|
Vasudevan, Shobha |
ASE '13-NEWIDEAS: "Using Automatically Generated ..."
Using Automatically Generated Invariants for Regression Testing and Bug Localization
Parth Sagdeo, Nicholas Ewalt, Debjit Pal, and Shobha Vasudevan (University of Illinois at Urbana-Champaign, USA) We present PREAMBL, an approach that applies automatically generated invariants to regression testing and bug localization. Our invariant generation methodology is PRECIS, an automatic and scalable engine that uses program predicates to guide clustering of dynamically obtained path information. In this paper, we apply it to regression testing and to capturing program predicate information that guides statistical bug localization. We present a technique to localize bugs in paths of variable lengths. We are able to map the localized post-deployment bugs on a path to pre-release invariants generated along that path. Our experimental results demonstrate the efficacy of the use of PRECIS for regression testing, as well as the ability of PREAMBL to zero in on relevant segments of program paths. @InProceedings{ASE13p634, author = {Parth Sagdeo and Nicholas Ewalt and Debjit Pal and Shobha Vasudevan}, title = {Using Automatically Generated Invariants for Regression Testing and Bug Localization}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {634--639}, doi = {}, year = {2013}, } |
|
Veerappa, Varsha |
ASE '13-NEWIDEAS: "Assessing the Maturity of ..."
Assessing the Maturity of Requirements through Argumentation: A Good Enough Approach
Varsha Veerappa and Rachel Harrison (Oxford Brookes University, UK) Requirements engineers need to be confident that enough requirements analysis has been done before a project can move forward. In the context of KAOS, this information can be derived from the soundness of the refinements: sound refinements indicate that the requirements in the goal-graph are mature enough or good enough for implementation. We can estimate how close we are to ‘good enough’ requirements using the judgments of experts and other data from the goals. We apply Toulmin’s model of argumentation to evaluate how sound refinements are. We then implement the resulting argumentation model using Bayesian Belief Networks and provide a semi-automated way, aided by Natural Language Processing techniques, to carry out the proposed evaluation. We have performed an initial validation of our work using a small case study involving an electronic document management system. @InProceedings{ASE13p670, author = {Varsha Veerappa and Rachel Harrison}, title = {Assessing the Maturity of Requirements through Argumentation: A Good Enough Approach}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {670--675}, doi = {}, year = {2013}, } |
|
Wong, Edmund |
ASE '13-NEWIDEAS: "AutoComment: Mining Question ..."
AutoComment: Mining Question and Answer Sites for Automatic Comment Generation
Edmund Wong, Jinqiu Yang, and Lin Tan (University of Waterloo, Canada) Code comments improve software maintainability. To address the comment scarcity issue, we propose a new automatic comment generation approach, which mines comments from a large programming Question and Answer (Q&A) site. Q&A sites allow programmers to post questions and receive solutions, which contain code segments together with their descriptions, referred to as code-description mappings. We develop AutoComment to extract such mappings, and leverage them to generate description comments automatically for similar code segments matched in open-source projects. We apply AutoComment to analyze Java- and Android-tagged Q&A posts to extract 132,767 code-description mappings, which help AutoComment to generate 102 comments automatically for 23 Java and Android projects. The user study results show that the majority of the participants consider the generated comments accurate, adequate, concise, and useful in helping them understand the code. @InProceedings{ASE13p562, author = {Edmund Wong and Jinqiu Yang and Lin Tan}, title = {AutoComment: Mining Question and Answer Sites for Automatic Comment Generation}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {562--567}, doi = {}, year = {2013}, } |
|
Wüest, Dustin |
ASE '13-NEWIDEAS: "Semi-automatic Generation ..."
Semi-automatic Generation of Metamodels from Model Sketches
Dustin Wüest, Norbert Seyff, and Martin Glinz (University of Zurich, Switzerland) Traditionally, metamodeling is an upfront activity performed by experts for defining modeling languages. Modeling tools then typically restrict modelers to using only constructs defined in the metamodel. This is inappropriate when users want to sketch graphical models without any restrictions and only later assign meanings to the sketched elements. Upfront metamodeling also complicates the creation of domain-specific languages, as it requires experts with both domain and metamodeling expertise. In this paper we present a new approach that supports modelers in creating metamodels for diagrams they have sketched or are currently sketching. Metamodels are defined in a semi-automatic, interactive way by annotating diagram elements and through automated model analysis. Our approach requires no metamodeling expertise and supports the co-evolution of models and metamodels. @InProceedings{ASE13p664, author = {Dustin Wüest and Norbert Seyff and Martin Glinz}, title = {Semi-automatic Generation of Metamodels from Model Sketches}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {664--669}, doi = {}, year = {2013}, } |
|
Xu, Chang |
ASE '13-NEWIDEAS: "Environment Rematching: Toward ..."
Environment Rematching: Toward Dependability Improvement for Self-Adaptive Applications
Chang Xu, Wenhua Yang, Xiaoxing Ma, Chun Cao, and Jian Lü (Nanjing University, China) Self-adaptive applications can easily contain faults. Existing approaches detect faults, but can still leave some undetected, which can manifest as failures at runtime. In this paper, we study the correlation between occurrences of application failure and those of consistency failure. We propose fixing consistency failures to reduce application failures at runtime. We name this environment rematching, which can systematically reconnect a self-adaptive application to its environment in a consistent way. We also propose enforcing atomicity for application semantics during the rematching to avoid side effects. We evaluated our approach using 12 self-adaptive robot-car applications in both simulated and real experiments. The experimental results confirmed our approach’s effectiveness in improving dependability for all applications by 12.5-52.5%. @InProceedings{ASE13p592, author = {Chang Xu and Wenhua Yang and Xiaoxing Ma and Chun Cao and Jian Lü}, title = {Environment Rematching: Toward Dependability Improvement for Self-Adaptive Applications}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {592--597}, doi = {}, year = {2013}, } |
|
Yang, Jinqiu |
ASE '13-NEWIDEAS: "AutoComment: Mining Question ..."
AutoComment: Mining Question and Answer Sites for Automatic Comment Generation
Edmund Wong, Jinqiu Yang, and Lin Tan (University of Waterloo, Canada) Code comments improve software maintainability. To address the comment scarcity issue, we propose a new automatic comment generation approach, which mines comments from a large programming Question and Answer (Q&A) site. Q&A sites allow programmers to post questions and receive solutions, which contain code segments together with their descriptions, referred to as code-description mappings. We develop AutoComment to extract such mappings, and leverage them to generate description comments automatically for similar code segments matched in open-source projects. We apply AutoComment to analyze Java- and Android-tagged Q&A posts to extract 132,767 code-description mappings, which help AutoComment to generate 102 comments automatically for 23 Java and Android projects. The user study results show that the majority of the participants consider the generated comments accurate, adequate, concise, and useful in helping them understand the code. @InProceedings{ASE13p562, author = {Edmund Wong and Jinqiu Yang and Lin Tan}, title = {AutoComment: Mining Question and Answer Sites for Automatic Comment Generation}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {562--567}, doi = {}, year = {2013}, } |
|
Yang, Wenhua |
ASE '13-NEWIDEAS: "Environment Rematching: Toward ..."
Environment Rematching: Toward Dependability Improvement for Self-Adaptive Applications
Chang Xu, Wenhua Yang, Xiaoxing Ma, Chun Cao, and Jian Lü (Nanjing University, China) Self-adaptive applications can easily contain faults. Existing approaches detect faults, but can still leave some undetected, which can manifest as failures at runtime. In this paper, we study the correlation between occurrences of application failure and those of consistency failure. We propose fixing consistency failures to reduce application failures at runtime. We name this environment rematching, which can systematically reconnect a self-adaptive application to its environment in a consistent way. We also propose enforcing atomicity for application semantics during the rematching to avoid side effects. We evaluated our approach using 12 self-adaptive robot-car applications in both simulated and real experiments. The experimental results confirmed our approach’s effectiveness in improving dependability for all applications by 12.5-52.5%. @InProceedings{ASE13p592, author = {Chang Xu and Wenhua Yang and Xiaoxing Ma and Chun Cao and Jian Lü}, title = {Environment Rematching: Toward Dependability Improvement for Self-Adaptive Applications}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {592--597}, doi = {}, year = {2013}, } |
|
Ziarek, Lukasz |
ASE '13-NEWIDEAS: "Flow Permissions for Android ..."
Flow Permissions for Android
Shashank Holavanalli, Don Manuel, Vishwas Nanjundaswamy, Brian Rosenberg, Feng Shen, Steven Y. Ko, and Lukasz Ziarek (SUNY Buffalo, USA) This paper proposes Flow Permissions, an extension to the Android permission mechanism. Unlike the existing permission mechanism our permission mechanism contains semantic information based on information flows. Flow Permissions allow users to examine and grant explicit information flows within an application (e.g., a permission for reading the phone number and sending it over the network) as well as implicit information flows across multiple applications (e.g., a permission for reading the phone number and sending it to another application already installed on the user's phone). Our goal with Flow Permissions is to provide visibility into the holistic behavior of the applications installed on a user's phone. Our evaluation compares our approach to dynamic flow tracking techniques; our results with 600 popular applications and 1,200 malicious applications show that our approach is practical and effective in deriving Flow Permissions statically. @InProceedings{ASE13p652, author = {Shashank Holavanalli and Don Manuel and Vishwas Nanjundaswamy and Brian Rosenberg and Feng Shen and Steven Y. Ko and Lukasz Ziarek}, title = {Flow Permissions for Android}, booktitle = {Proc.\ ASE}, publisher = {IEEE}, pages = {652--657}, doi = {}, year = {2013}, } |
79 authors