ICSE 2013 Workshops
2013 35th International Conference on Software Engineering (ICSE)

2013 2nd International Workshop on User Evaluations for Software Engineering Researchers (USER), May 26, 2013, San Francisco, CA, USA

USER 2013 – Proceedings

Contents - Abstracts - Authors

2nd International Workshop on User Evaluations for Software Engineering Researchers (USER)

Title Page

In this highly interactive workshop, attendees will collaboratively design, develop, and pilot plans for conducting user evaluations of their own tools and/or software engineering research projects. Attendees will gain practical experience with various user evaluation methods through scaffolded group exercises, panel discussions, and mentoring by a panel of user-focused software engineering researchers. Together, we will establish a community of like-minded researchers and developers to help one another improve our research and practice through user evaluation.

Empirical Evaluation of Research Prototypes at Variable Stages of Maturity
Omar Badreddin
(University of Ottawa, Canada)
Empirical evaluation of research tools is growing, especially in the field of software engineering. A number of research techniques have been proposed and used in evaluating research prototypes. We take the view that evaluation of software engineering tools is best achieved in industrial settings, with real-life artifacts and tasks, and with professional software engineers. However, the feasibility of such evaluation is limited for many reasons. Some challenges are related to the prototypes under study; others are related to the industrial environments, where the need to meet business requirements takes precedence over experimenting with new tools and techniques. In this paper, we summarize our experiences in evaluating a research prototype tool using a grounded theory study, a questionnaire, and a controlled experiment. We discuss the challenges that hindered our industrial evaluation and share ideas on how to overcome these challenges. We propose an action research study in which the research tool is used by a small number of experienced professionals in an industrial project.

Surveying Developer Knowledge and Interest in Code Smells through Online Freelance Marketplaces
Aiko Yamashita and Leon Moonen
(Simula Research Laboratory, Norway)
This paper discusses the use of freelance marketplaces to conduct a survey among professional developers about specific software engineering phenomena, in our case their knowledge of and interest in code smells and their detection/removal. We present the context and motivation of our research, and the idea of using freelance marketplaces for conducting studies involving software professionals. Next, we describe the design of the survey and the specifics of the selected freelance marketplace (i.e., Freelancer.com). Finally, we discuss why freelance marketplaces constitute a feasible and advantageous approach for conducting user evaluations that involve large numbers of software professionals, and what challenges such an approach may entail.

How to Evaluate a Conflict Minimizing Task Scheduler through a User Study
Bakhtiar Khan Kasi and Anita Sarma
(University of Nebraska-Lincoln, USA)
Workspace awareness tools facilitate coordination among developers in a team by informing them of emerging conflicts due to parallel development. Several such tools have been introduced recently. However, evaluating such (collaborative) tools through user studies is nontrivial because the outcomes depend on the group's dynamics and development behavior. In this paper, we present the challenges in evaluating a collaboration tool geared towards minimizing conflicts by scheduling (independent) development tasks. We present the research questions that a user evaluation should answer, along with the foreseen challenges in answering these questions. We would like to use the workshop to exchange opinions and feedback to refine the design of our user study and to start a conversation on the challenges and methods involved in evaluating collaborative development tools.

On Planning an Evaluation of the Impact of Identifier Names on the Readability and Quality of Smalltalk Programs
Mircea Lungu and Jan Kurš
(University of Bern, Switzerland)
One of the long-running debates between programmers is whether camelCaseIdentifiers are better than underscore identifiers. This is ultimately a matter of programming language culture and personal taste, and to the best of our knowledge neither camp has won the argument yet. It is our intuition that there exists a solution superior to both of the previous ones from the point of view of usability: the solution we name sentence case identifiers allows phrases as names for program entities such as classes or methods. In this paper we propose a study to evaluate the impact of sentence case identifiers in practice.

A Proposed Recommender System for Eliciting Software Sustainability Requirements
Kristin Roher and Debra Richardson
(UC Irvine, USA)
Sustainability is not considered sufficiently in developing modern software systems. In spite of the looming threats of global climate change and environmental degradation [1], software companies are more concerned with product time-to-market than with long-term product impacts. The research goal of this project is to overcome the barriers to incorporating sustainability into the software engineering process through the use of a recommender system employed during requirements engineering. This system will recommend the kinds of sustainability requirements that should be considered in a given system, based on application domain, deployment locale, etc., and in so doing will lessen the workload of eliciting appropriate sustainability requirements. This research builds on an ongoing research project on Software Engineering for Sustainability.
