
6th International Workshop on Social Software Engineering (SSE), November 17, 2014, Hong Kong, China

SSE 2014 – Proceedings

6th International Workshop on Social Software Engineering (SSE)

Frontmatter

Title Page


Foreword
The Workshop on Social Software Engineering (SSE) focuses on the interplay between social computing and software engineering. The fundamental objective of SSE is to socialize the software engineering process and to find novel ways of engineering the social features of software. The SSE workshop brings together academic and industrial perspectives to provide models, methods, tools, and approaches that address these issues.

Collaboration
Mon, Nov 17, 11:00 - 12:20, Hall 7

Can Collaborative Tagging Improve User Feedback? A Case Study
Rana Alkadhi, Dennis Pagano, and Bernd Bruegge
(TU München, Germany; King Saud University, Saudi Arabia)
User feedback is a rich source of information that can help developers improve software quality and identify missing features. However, developers need to analyze user feedback to assess its relevance and potential impact, which poses several challenges related to its quantity, quality, structure, and content, particularly when feedback volume is high. In this paper we present the results of a case study exploring what role collaborative tagging can play in improving user feedback, in particular its impact on the navigation within, the understandability of, and the structure of user feedback. Our results indicate that collaborative tagging might help reduce the effort of analyzing and organizing user feedback.
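As a minimal sketch of what such tag-based organization can look like (the data model, tags, and feedback items below are invented for illustration; the paper does not prescribe a particular implementation), an inverted index from tags to feedback items is enough to support the navigation and structuring the abstract refers to:

    from collections import defaultdict

    # Hypothetical feedback items: (id, text, collaboratively assigned tags).
    # All ids, texts, and tags here are invented for this sketch.
    feedback = [
        (1, "App crashes when exporting a report", {"crash", "export"}),
        (2, "Please add a dark theme", {"feature-request", "ui"}),
        (3, "Export to PDF loses formatting", {"export", "bug"}),
    ]

    # Inverted index from tag to feedback ids: this is what enables tag-based
    # navigation and grouping of otherwise unstructured feedback.
    index = defaultdict(set)
    for item_id, _, tags in feedback:
        for tag in tags:
            index[tag].add(item_id)

    print(sorted(index["export"]))  # all feedback touching "export" -> [1, 3]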

Supporting Collaboration of Heterogeneous Teams in an Augmented Team Room
Markus Kleffmann, Matthias Book, and Volker Gruhn
(University of Duisburg-Essen, Germany)
It is often difficult for a team of stakeholders with heterogeneous backgrounds to maintain a common understanding of a system’s structure and the challenges in its implementation. Thus, especially in complex software projects, risks and inconsistencies are easily overlooked. In this paper, we present the concept of an Augmented Interaction Room (AugIR), i.e., a physical team room whose walls are outfitted with wall-sized touchscreens that visualize different aspects of a software system in the form of various model sketches. These sketches can be annotated by the stakeholders in order to explicitly mark important elements or indicate aspects that are critical for project success. The AugIR strives to support the collaborative work of heterogeneous teams and especially targets the inclusion of non-technical stakeholders in the communication process. To this end, the AugIR continuously monitors the stakeholders’ design and modeling activities and analyzes the relationships between annotated contents to automatically uncover inconsistencies, contradictions, and potential hidden project risks.
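The automatic analysis mentioned at the end of the abstract can be pictured along the following lines; the annotation model, rule set, and element names are assumptions made purely for this sketch, not the AugIR's actual implementation:

    # Assumed annotation model: (model element, sketch/wall, marker).
    annotations = [
        ("PaymentService", "architecture", "critical"),
        ("PaymentService", "process", "out-of-scope"),
        ("ReportModule", "architecture", "critical"),
    ]

    # Marker pairs that should not both be attached to the same element
    # (an invented rule standing in for the AugIR's analysis rules).
    CONTRADICTORY = {("critical", "out-of-scope")}

    by_element = {}
    for element, sketch, marker in annotations:
        by_element.setdefault(element, set()).add(marker)

    for element, markers in by_element.items():
        for a, b in CONTRADICTORY:
            if a in markers and b in markers:
                print(f"Potential inconsistency: {element} is marked both '{a}' and '{b}'")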

Human Factors
Mon, Nov 17, 14:00 - 15:30, Hall 7

Eliciting and Visualising Trust Expectations using Persona Trust Characteristics and Goal Models
Shamal Faily and Ivan Fléchais
(Bournemouth University, UK; University of Oxford, UK)
Developers and users rely on trust to simplify complexity when building and using software. Unfortunately, the invisibility of trust and the richness of a system's context of use mean that factors influencing trust are difficult to see, and assessing its implications before a system is built is complex and time-consuming. This paper presents an approach for eliciting and visualising differences between trust expectations using persona cases, goal models, and complementary tool support. We evaluate our approach by using it to identify trust expectations misplaced by the users and application developers of a software infrastructure.
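One way to picture the "differences between trust expectations" that the approach visualises is as a gap between the trust two stakeholder groups place in the same system components; the components and numeric levels below are invented for this sketch and are far simpler than the paper's persona cases and goal models:

    # Assumed trust levels (0 = no trust, 5 = full trust) per system component,
    # one dictionary per stakeholder group; all values are illustrative only.
    user_trust = {"data store": 5, "upload portal": 3, "analysis grid": 4}
    developer_trust = {"data store": 5, "upload portal": 5, "analysis grid": 2}

    # Flag components where the two groups' expectations diverge, i.e. where
    # trust may be misplaced by one group or the other.
    for component, u in user_trust.items():
        d = developer_trust.get(component)
        if d is not None and d != u:
            print(f"Mismatch at '{component}': users expect {u}, developers expect {d}")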

One Size Doesn't Fit All: Diversifying "The User" using Personas and Emotional Scenarios
Antonio A. Lopez-Lorca, Tim Miller, Sonja Pedell, Antonette Mendoza, Alen Keirnan, and Leon Sterling
(Swinburne University of Technology, Australia; University of Melbourne, Australia)
It is common practice in software engineering to develop a product for the "user". The concepts of users and actors typically oversimplify the variety of people that could use a system in a given scenario. By developing the system for actors, many software engineers effectively develop the system for themselves, embodying the abstract actors with their own personalities, i.e., how would I use the system if I were this actor? A single perspective may be sufficient for situations with a well-defined workflow; however, many systems in the social and domestic domains need to account for people's emotional responses, which impact product acceptance. To ensure that emotional desires are met and that a product appeals to the intended audience, we advocate the use of personas within emotional scenarios. Personas and scenarios can be used to explore the diversity of people's backgrounds, emotions, and motivations, and how they would react emotionally to design decisions. We describe our experience using personas and emotional scenarios in three projects related to people's health, in the domains of aged wellbeing and mental health, where emotions are ever present.

Towards Discovering the Role of Emotions in Stack Overflow
Nicole Novielli, Fabio Calefato, and Filippo Lanubile
(University of Bari, Italy)
Today, people increasingly try to solve domain-specific problems through interaction on online Question and Answer (Q&A) sites, such as Stack Overflow. The growing success of the Stack Overflow community largely depends on the willingness of its members to answer others' questions. Recent research has shown that the factors that push members of online communities to contribute encompass both social and technical aspects. Yet, we argue that the emotional style of a technical question also influences the probability of promptly obtaining a satisfying answer. In this paper, we describe the design of an empirical study aimed at investigating the role of the affective lexicon in questions posted on Stack Overflow.
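To give a rough picture of what lexicon-based affect analysis of question text can look like (the study's actual lexicon and scoring procedure are not described in the abstract; the word lists and scoring below are invented for this sketch):

    import re

    # Toy affective word lists; a real lexicon would be far larger and validated.
    POSITIVE = {"thanks", "great", "appreciate", "love"}
    NEGATIVE = {"frustrating", "annoying", "terrible", "stuck"}

    def affect_score(text: str) -> int:
        """Return (#positive - #negative) lexicon hits in a question body."""
        words = re.findall(r"[a-z']+", text.lower())
        return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

    # Two negative hits (stuck, frustrating) and one positive (thanks) -> -1.
    print(affect_score("I am stuck and this is really frustrating. Thanks for any help!"))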

Empirical Studies
Mon, Nov 17, 16:00 - 17:00, Hall 7

An Empirical Investigation of Socio-technical Code Review Metrics and Security Vulnerabilities
Andrew Meneely, Alberto C. Rodriguez Tejeda, Brian Spates, Shannon Trudeau, Danielle Neuberger, Katherine Whitlock, Christopher Ketant, and Kayla Davis
(Rochester Institute of Technology, USA)
One of the guiding principles of open source software development is to use crowds of developers to keep a watchful eye on source code. Eric Raymond declared Linus' Law as "many eyes make all bugs shallow", with the socio-technical argument that high-quality open source software emerges when developers combine their collective experience and expertise to review code collaboratively. Vulnerabilities are a particularly nasty set of bugs that can be rare, difficult to reproduce, and require specialized skills to recognize. Does Linus' Law apply to vulnerabilities empirically? In this study, we analyzed 159,254 code reviews, 185,948 Git commits, and 667 post-release vulnerabilities in the Chromium browser project. We formulated, collected, and analyzed various metrics related to Linus' Law to explore the connection between collaborative reviews and vulnerabilities that were missed by the review process. Our statistical association results showed that source code files reviewed by more developers are, counter-intuitively, more likely to be vulnerable (even after accounting for file size). However, files are less likely to be vulnerable if they were reviewed by developers who had experience participating in prior vulnerability-fixing reviews. The results indicate that lack of security experience and lack of collaborator familiarity are key risk factors when considering Linus' Law with respect to vulnerabilities.
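To make the flavor of such socio-technical review metrics concrete, here is a toy sketch with invented file paths, reviewer names, and a hypothetical "security-experienced" set; the paper's actual metrics and its Chromium data pipeline are of course far richer:

    from collections import defaultdict

    # Illustrative review records: (changed file, reviewer). Field names and
    # values are assumptions for this sketch, not the study's data schema.
    reviews = [
        ("net/socket.cc", "alice"), ("net/socket.cc", "bob"),
        ("net/socket.cc", "carol"), ("ui/button.cc", "alice"),
    ]

    # Reviewers assumed to have participated in prior vulnerability-fixing reviews.
    SECURITY_EXPERIENCED = {"carol"}

    reviewers_per_file = defaultdict(set)
    for path, reviewer in reviews:
        reviewers_per_file[path].add(reviewer)

    # Per-file metrics: total reviewers and security-experienced reviewers.
    for path, reviewers in reviewers_per_file.items():
        print(path,
              "reviewers:", len(reviewers),
              "security-experienced:", len(reviewers & SECURITY_EXPERIENCED))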

Developer Involvement Considered Harmful?: An Empirical Examination of Android Bug Resolution Times
Subhajit Datta, Proshanta Sarkar, and Subhashis Majumder
(Singapore University of Technology and Design, Singapore; Heritage Institute of Technology, India)
In large-scale software development ecosystems, there is a common perception that higher developer involvement leads to faster resolution of bugs. This is based on conjectures that more "eyeballs" make bugs "shallow", whose validity and applicability are not without dispute. In this paper, we posit that both the level of developer attention and the extent of its diversity influence how quickly bugs get resolved. We report results from a study of 1,000+ Android bugs. We find statistically significant evidence that attention and diversity have contrasting relationships with the resolution time of bugs, even after controlling for factors such as interest, importance, and dependency. Our results can offer helpful insights on team dynamics and project governance.
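Attention and diversity can be operationalized in several ways; the following sketch uses invented data, comment counts for attention, and Shannon entropy of participation as one possible diversity measure (not necessarily the paper's definitions), simply to illustrate the general idea:

    import math
    from collections import Counter

    # Hypothetical bug discussion: one entry per developer comment on a bug report.
    commenters = ["dev1", "dev2", "dev1", "dev3", "dev1"]

    attention = len(commenters)                          # total developer comments
    counts = Counter(commenters)
    probs = [c / attention for c in counts.values()]
    diversity = -sum(p * math.log2(p) for p in probs)    # entropy of participation

    print(f"attention={attention}, distinct developers={len(counts)}, "
          f"diversity={diversity:.2f}")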
