2013 35th International Conference on Software Engineering (ICSE),
May 18–26, 2013,
San Francisco, CA, USA
Tutorial Summaries
Automated Testing of GUI Applications: Models, Tools, and Controlling Flakiness
Atif M. Memon and Myra B. Cohen
(University of Maryland, USA; University of Nebraska-Lincoln, USA)
System testing of applications with graphical user interfaces (GUIs), such as web browsers and desktop or mobile apps, is more complex than testing from the command line. Specialized tools are needed to generate and run test cases, models are needed to quantify behavioral coverage, and changes in the environment (such as the operating system, virtual machine, or system load), as well as the starting states of executions, affect the repeatability of test outcomes, making tests appear flaky. In this tutorial, we present an overview of the state of the art in GUI testing, consisting of both lectures and demonstrations on various platforms (desktop, web, and mobile applications) using an open-source testing tool, GUITAR. We show how to set up a system under test, how to extract models without source code, and how to then use those models to generate and replay test cases. We then present a lecture on the various factors that may cause flakiness in the execution of GUI-centric software and hence affect the results of analyses and experiments based on such software. We end with a demonstration of a community resource for sharing GUI testing artifacts aimed at controlling these factors. This tutorial targets both researchers who develop techniques for testing GUI software and practitioners from industry who want to learn more about model-based GUI testing, or who run and rerun GUI tests and often find their runs are flaky.
@InProceedings{ICSE13p1478,
author = {Atif M. Memon and Myra B. Cohen},
title = {Automated Testing of GUI Applications: Models, Tools, and Controlling Flakiness},
booktitle = {Proc.\ ICSE},
publisher = {IEEE},
pages = {1478--1479},
doi = {},
year = {2013},
}
Build Your Own Model Checker in One Month
Jin Song Dong, Jun Sun, and Yang Liu
(National University of Singapore, Singapore; Singapore University of Technology and Design, Singapore; Nanyang Technological University, Singapore)
Model checking has established itself as an effective method for automatic system analysis and verification, and it is making its way into many domains and methodologies. Applying model checking techniques to a new domain (which probably has its own dedicated modeling language) is, however, far from trivial. A translation-based approach works by translating a domain-specific language into the input language of a model checker. Because the model checker is not designed for the domain (or, equivalently, the language), translation-based approaches are often ad hoc. Ideally, one would have an optimized model checker for each application domain. Implementing one with reasonable efficiency, however, requires years of dedicated effort. In this tutorial, we will briefly survey a variety of model checking techniques. We will then show, step by step, how to develop a model checker for a language combining real-time and probabilistic features using PAT (Process Analysis Toolkit), and show that developing your own model checker with reasonable efficiency can take as little as a few weeks. The PAT system is designed to facilitate the development of customized model checkers. It has an extensible and modularized architecture that supports new languages (and their operational semantics), new state reduction or abstraction techniques, new model checking algorithms, etc. Since its introduction 5 years ago, PAT has attracted more than 2500 registered users (from 500+ organisations in 60 countries) and has been applied to develop model checkers for 20 different languages.
@InProceedings{ICSE13p1480,
author = {Jin Song Dong and Jun Sun and Yang Liu},
title = {Build Your Own Model Checker in One Month},
booktitle = {Proc.\ ICSE},
publisher = {IEEE},
pages = {1480--1482},
doi = {},
year = {2013},
}
Data Science for Software Engineering
Tim Menzies, Ekrem Kocaguneli, Fayola Peters, Burak Turhan, and Leandro L. Minku
(West Virginia University, USA; University of Oulu, Finland; University of Birmingham, UK)
Target audience: Software practitioners and researchers wanting to understand the state of the art in using data science for software engineering (SE). Content: In the age of big data, data science (the knowledge of deriving meaningful outcomes from data) is an essential skill that software engineers should acquire. It can be used to predict useful information about new projects based on completed projects. This tutorial offers core insights about the state of the art in this important field. What participants will learn: Before data science: this tutorial discusses the tasks needed to deploy machine-learning algorithms in organizations (Part 1: Organization Issues). During data science: from discretization to clustering to dichotomization and statistical analysis. And the rest: When local data is scarce, we show how to adapt data from other organizations to local problems. When privacy concerns block access, we show how to privatize data while still being able to mine it. When working with data of dubious quality, we show how to prune spurious information. When data or models seem too complex, we show how to simplify data mining results. When data is too scarce to support intricate models, we show methods for generating predictions. When the world changes and old models need to be updated, we show how to handle those updates. When the effect is too complex for one model, we show how to reason across ensembles of models. Pre-requisites: This tutorial makes minimal use of maths or advanced algorithms and will be understandable by developers and technical managers.
@InProceedings{ICSE13p1483,
author = {Tim Menzies and Ekrem Kocaguneli and Fayola Peters and Burak Turhan and Leandro L. Minku},
title = {Data Science for Software Engineering},
booktitle = {Proc.\ ICSE},
publisher = {IEEE},
pages = {1483--1485},
doi = {},
year = {2013},
}
Software Analytics: Achievements and Challenges
Dongmei Zhang and Tao Xie
(Microsoft Research, China; North Carolina State University, USA)
A huge wealth of data exists in the practice of software development. Further rich data are produced by modern software and services in operation, many of which tend to be data-driven and/or data-producing in nature. Hidden in these data is information about the quality of software and services and the dynamics of software development. Software analytics uses a data-driven approach to enable software practitioners to perform data exploration and analysis in order to obtain insightful and actionable information; such information is used for completing various tasks around software systems, software users, and the software development process. This tutorial presents achievements and challenges of research and practice on the principles, techniques, and applications of software analytics, highlighting success stories in industry, research achievements that have been transferred to industrial practice, and future research and practice directions in software analytics.
@InProceedings{ICSE13p1486,
author = {Dongmei Zhang and Tao Xie},
title = {Software Analytics: Achievements and Challenges},
booktitle = {Proc.\ ICSE},
publisher = {IEEE},
pages = {1486--1486},
doi = {},
year = {2013},
}
Developing Verified Programs with Dafny
K. Rustan M. Leino
(Microsoft Research, USA)
Dafny is a programming language and program verifier. The language includes specification constructs and the verifier checks that the program lives up to its specifications. These tutorial notes give some Dafny programs used as examples in the tutorial.
@InProceedings{ICSE13p1487,
author = {K. Rustan M. Leino},
title = {Developing Verified Programs with Dafny},
booktitle = {Proc.\ ICSE},
publisher = {IEEE},
pages = {1487--1489},
doi = {},
year = {2013},
}
Software Metrics: Pitfalls and Best Practices
Eric Bouwers, Arie van Deursen, and Joost Visser
(Software Improvement Group, Netherlands; TU Delft, Netherlands; Radboud University Nijmegen, Netherlands)
Using software metrics to keep track of the progress and quality of products and processes is a common practice in industry. Additionally, designing, validating and improving metrics is an important research area. Although using software metrics can help in reaching goals, the effects of using metrics incorrectly can be devastating. In this tutorial we leverage 10 years of metrics-based risk assessment experience to illustrate the benefits of software metrics, discuss different types of metrics and explain typical usage scenarios. Additionally, we explore various ways in which metrics can be interpreted using examples solicited from participants and practical assignments based on industry cases. During this process we will present the four common pitfalls of using software metrics. In particular, we explain why metrics should be placed in a context in order to maximize their benefits. A methodology based on benchmarking to provide such a context is discussed and illustrated by a model designed to quantify the technical quality of a software system. Examples of applying this model in industry are given and challenges involved in interpreting such a model are discussed. This tutorial provides an in-depth overview of the benefits and challenges involved in applying software metrics. At the end you will have all the information you need to use, develop and evaluate metrics constructively.
@InProceedings{ICSE13p1490,
author = {Eric Bouwers and Arie van Deursen and Joost Visser},
title = {Software Metrics: Pitfalls and Best Practices},
booktitle = {Proc.\ ICSE},
publisher = {IEEE},
pages = {1490--1491},
doi = {},
year = {2013},
}
A Hands-On Java PathFinder Tutorial
Peter Mehlitz, Neha Rungta, and
Willem Visser
(NASA Ames Research Center, USA; Stellenbosch University, South Africa)
Java PathFinder (JPF) is an open-source analysis system that automatically verifies Java programs. The JPF tutorial provides an opportunity for software engineering researchers and practitioners to learn about JPF, to install and run it, and to understand the concepts required to extend it. The hands-on tutorial will expose attendees to the basic architecture of the JPF framework, demonstrate ways to use it for analyzing their artifacts, and illustrate how they can extend JPF to implement their own analyses. One of the defining qualities of JPF is its extensibility. JPF has been extended to support symbolic execution, directed automated random testing, different choice generation, configurable state abstractions, various heuristics for enabling bug detection, configurable search strategies, checking temporal properties, and many more. JPF supports these extensions at the design level through a set of stable, well-defined interfaces. The interfaces are designed to not require changes to the core, yet enable the development of various JPF extensions. In this tutorial, we give attendees hands-on experience of developing against these interfaces in order to extend JPF. The tutorial is targeted toward a general software engineering audience: software engineering researchers and practitioners. Attendees need to have a good understanding of the Java programming language and be fairly comfortable with Java program development. Attendees are not required to have any background in Java PathFinder, software model checking, or any other formal verification techniques. The tutorial will be self-contained.
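To give a flavor of the artifacts JPF analyzes (this example is not taken from the tutorial notes), the following is a minimal Java program with a data race. JPF systematically explores thread interleavings of ordinary Java bytecode, so it can report the rare schedule in which both threads read the counter before either writes it back:

```java
// A minimal, hypothetical example of the kind of Java program JPF explores.
// The increment below is a non-atomic read-modify-write, so under one
// interleaving both threads read 0 and the final value is 1, not 2.
// A plain JVM almost never hits this schedule; a model checker like JPF
// enumerates all interleavings and can expose it.
public class RacyCounter {
    public static int count = 0;

    public static void run() throws InterruptedException {
        Thread t1 = new Thread(() -> count++); // racy update
        Thread t2 = new Thread(() -> count++); // racy update
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // On most concrete runs count == 2; the lost-update outcome
        // count == 1 is the bug a model checker would flag.
    }

    public static void main(String[] args) throws InterruptedException {
        run();
        System.out.println("count = " + count);
    }
}
```

Running such a class under jpf-core (rather than a plain JVM) makes JPF drive the scheduler through every interleaving; the program itself needs no JPF-specific code, which is part of what makes the tool approachable.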
@InProceedings{ICSE13p1492,
author = {Peter Mehlitz and Neha Rungta and Willem Visser},
title = {A Hands-On Java PathFinder Tutorial},
booktitle = {Proc.\ ICSE},
publisher = {IEEE},
pages = {1492--1494},
doi = {},
year = {2013},
}
Efficient Quality Assurance of Variability-Intensive Systems
Patrick Heymans, Axel Legay, and Maxime Cordy
(University of Namur, Belgium; IRISA, France; INRIA, France)
Variability is becoming an increasingly important concern in software development, but techniques to cost-effectively verify and validate software in the presence of variability have yet to become widespread. This half-day tutorial offers an overview of the state of the art in an emerging discipline at the crossroads of formal methods and software engineering: quality assurance of variability-intensive systems. We will present the most significant results obtained during the last four years or so, ranging from conceptual foundations to readily usable tools. Among the various quality assurance techniques, we focus on model checking, but also extend the discussion to other techniques. With its lightweight use of mathematics and its balance between theory and practice, this tutorial is designed to be accessible to a broad audience. Researchers working in the area, those willing to join it, and the simply curious will get a comprehensive picture of the recent developments. Practitioners developing variability-intensive systems are invited to discover the capabilities of our techniques and tools, and to consider integrating them into their processes.
@InProceedings{ICSE13p1495,
author = {Patrick Heymans and Axel Legay and Maxime Cordy},
title = {Efficient Quality Assurance of Variability-Intensive Systems},
booktitle = {Proc.\ ICSE},
publisher = {IEEE},
pages = {1495--1497},
doi = {},
year = {2013},
}
Software Requirement Patterns
Xavier Franch
(Universitat Politècnica de Catalunya, Spain)
Software requirements reuse is becoming a fundamental activity for IT organizations that conduct requirements engineering processes in similar settings. One strategy for implementing this reuse is to exploit a catalogue of software requirement patterns (SRPs). In this tutorial, we provide an introduction to the concept of an SRP, summarise several existing approaches, and reflect on the consequences for several requirements engineering processes and activities. We take one of these approaches, the PABRE framework, as an exemplar for the tutorial and analyse in more depth the catalogue of SRPs it proposes. We apply these concepts in a practical exercise.
@InProceedings{ICSE13p1498,
author = {Xavier Franch},
title = {Software Requirement Patterns},
booktitle = {Proc.\ ICSE},
publisher = {IEEE},
pages = {1498--1500},
doi = {},
year = {2013},
}