1st International Workshop on CrowdSourcing in Software Engineering (CSI-SE 2014),
June 2, 2014,
Hyderabad, India
Message from the Chairs
We would like to take this opportunity to welcome you to CSI-SE 2014, the first workshop on Crowdsourcing in Software Engineering, held on June 2nd, 2014 in Hyderabad, India, co-located with ICSE 2014.
Brazil Software Crowdsourcing: A First Step in a Multi-year Study
Rafael Prikladnicki, Leticia Machado, Erran Carmel, and Cleidson R. B. de Souza
(PUCRS, Brazil; American University, USA; Vale Institute of Technology, Brazil; Federal University of Pará, Brazil)
Crowdsourcing means outsourcing to a large network of people: a crowd. This form of managing work allocation has become much more sophisticated in recent years due to improvements in technology and changes in the work ecosystem. Crowdsourcing portends not only the disruption of outsourcing but the disruption of the entire global labor market. Small, atomized tasks that can be completed and paid for in small increments are unprecedented in the history of work. Software has been the pioneer in all the large mega-trends of the last generation: in computer technology, technological entrepreneurship, offshore outsourcing, and now in crowdsourcing. This paper describes the starting point of a research project that aims to investigate the Brazilian software labor and industry markets. These markets are being transformed and disrupted as a result of the new phenomenon of crowdsourcing. To be more specific, we aim to understand how the three elements of crowdsourcing are emerging in Brazil: the buyers, the platforms, and the crowd. The goal of our project is to identify the challenges faced by Brazilian software developers engaged in crowdsourcing platforms, as well as their best practices, in order to provide recommendations to the government and support for new developers interested in joining this market.
@InProceedings{CSI-SE14p1,
author = {Rafael Prikladnicki and Leticia Machado and Erran Carmel and Cleidson R. B. de Souza},
title = {Brazil Software Crowdsourcing: A First Step in a Multi-year Study},
booktitle = {Proc.\ CSI-SE},
publisher = {ACM},
pages = {1--4},
doi = {},
year = {2014},
}
Method-Call Recommendations from Implicit Developer Feedback
Sven Amann, Sebastian Proksch, and Mira Mezini
(TU Darmstadt, Germany)
When developers use the code completion in their Integrated Development Environment (IDE), they provide implicit feedback about the usage of the Application Programming Interfaces (APIs) they program against.
We demonstrate how to apply Collaborative Filtering techniques to compute context-sensitive completion recommendations from such feedback and discuss how the approach can be used to bring the knowledge of the crowd to every developer.
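The abstract above does not give implementation details, but the general idea of collaborative filtering over API usage can be sketched roughly as follows: treat each coding context as a "user", each API method call as an "item", and recommend unseen methods that similar contexts invoke. All names and data below are illustrative assumptions, not the authors' system.

```python
# Minimal sketch of collaborative filtering for method-call recommendation.
# Contexts ("users") are mapped to the sets of API methods ("items") they
# call; similarity between contexts is cosine similarity over binary vectors.
# The usage data and method names here are invented for illustration.
import math

usage = {
    "ctx1": {"list.add", "list.size", "list.get"},
    "ctx2": {"list.add", "list.size", "list.isEmpty"},
    "ctx3": {"map.put", "map.get"},
}

def cosine(a: set, b: set) -> float:
    """Cosine similarity of two binary (set-valued) usage vectors."""
    if not a or not b:
        return 0.0
    return len(a & b) / math.sqrt(len(a) * len(b))

def recommend(current_calls: set, k: int = 2) -> list:
    """Rank methods absent from the current context by
    similarity-weighted votes from other contexts."""
    scores = {}
    for calls in usage.values():
        sim = cosine(current_calls, calls)
        for method in calls - current_calls:
            scores[method] = scores.get(method, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend({"list.add", "list.size"}))
```

A real deployment would mine the usage table from the implicit feedback stream (completion events in the IDE) rather than hard-coding it, and would make the recommendation context-sensitive by encoding more of the surrounding code than a bare call set.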
@InProceedings{CSI-SE14p5,
author = {Sven Amann and Sebastian Proksch and Mira Mezini},
title = {Method-Call Recommendations from Implicit Developer Feedback},
booktitle = {Proc.\ CSI-SE},
publisher = {ACM},
pages = {5--6},
doi = {},
year = {2014},
}
Researching Crowdsourcing Software Development: Perspectives and Concerns
Klaas-Jan Stol and Brian Fitzgerald
(Lero, Ireland; University of Limerick, Ireland)
Crowdsourcing is an emerging form of 'outsourcing' software development. While there has been considerable research in the area of crowdsourcing in general, very little research has focused specifically on how crowdsourcing works in a software development context, and as far as we know, there have been no published studies of crowdsourcing software development from a customer perspective. Based on a review of the literature, we identified a number of key concerns related to crowdsourcing that are of particular importance in a software development context. Furthermore, we observed a number of recurring key stakeholders, or actors, each of whom has a unique perspective on crowdsourcing. This paper presents a research framework that consists of the various combinations of stakeholders and key concerns. The framework can be used to guide future research on the use of crowdsourcing as a 'sourcing' strategy, as well as a means to review and synthesize research findings so as to be able to compare studies on crowdsourcing in a software development context.
@InProceedings{CSI-SE14p7,
author = {Klaas-Jan Stol and Brian Fitzgerald},
title = {Researching Crowdsourcing Software Development: Perspectives and Concerns},
booktitle = {Proc.\ CSI-SE},
publisher = {ACM},
pages = {7--10},
doi = {},
year = {2014},
}
An Exploratory Study of Contribution Barriers Experienced by Newcomers to Open Source Software Projects
Christoph Hannebauer, Matthias Book, and Volker Gruhn
(University of Duisburg-Essen, Germany)
Contributing to a Free, Libre and Open Source Software (FLOSS) project is not a trivial task even for experienced developers: Beyond the effort required for understanding and editing a project's source code for one's own purposes, submitting the changes back to the community requires additional motivation, time, and social and technical effort. Although several surveys have examined the dynamics driving FLOSS contributors, most focus either on the motivations of core developers or indicators of potential long-term commitment, i.e. the small but quite involved and visible minority at the core of a project. Our survey in contrast examines the experiences of the much larger, but nearly invisible group of developers who are just making and submitting their first patch, and identifies barriers that hinder or even prevent them from making a valuable contribution.
@InProceedings{CSI-SE14p11,
author = {Christoph Hannebauer and Matthias Book and Volker Gruhn},
title = {An Exploratory Study of Contribution Barriers Experienced by Newcomers to Open Source Software Projects},
booktitle = {Proc.\ CSI-SE},
publisher = {ACM},
pages = {11--14},
doi = {},
year = {2014},
}
Utilization of Synergetic Human-Machine Clouds: A Big Data Cleaning Case
Deniz Iren, Gokhan Kul, and Semih Bilgen
(Middle East Technical University, Turkey)
Cloud computing and crowdsourcing are growing trends in IT. Combining the strengths of both machine and human clouds within a hybrid design enables us to overcome certain problems and achieve efficiencies. In this paper, we present a case in which we developed a hybrid, throw-away prototype software system to solve a big data cleaning problem in which we corrected and normalized a data set of 53,822 academic publication records. The first step in our solution consists of the utilization of external DOI query web services to label the records with matching DOIs. Then we used customized string similarity calculation algorithms based on Levenshtein distance and the Jaccard index to grade the similarity between records. Finally, we used crowdsourcing to identify duplicates among the residual record set consisting of similar yet not identical records. We consider this proof of concept to be successful and report that we achieved certain results that we could not have achieved by using either human or machine clouds alone.
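The similarity-grading step named in the abstract can be sketched as follows. The combination weight, tokenization, and normalization below are our own illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code) of grading record similarity
# with Levenshtein distance and the Jaccard index, as the abstract describes.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance, two rows at a time."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,        # deletion
                            curr[j - 1] + 1,    # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def jaccard(a: str, b: str) -> float:
    """Jaccard index over lower-cased word-token sets."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)

def similarity(a: str, b: str) -> float:
    """Combined grade in [0, 1]; the 50/50 weighting is an assumption."""
    lev = 1 - levenshtein(a, b) / max(len(a), len(b), 1)
    return 0.5 * lev + 0.5 * jaccard(a, b)
```

Records whose grade falls in an uncertain middle band would then be routed to the human cloud for manual duplicate judgment, while clear matches and clear non-matches are resolved by the machine cloud alone.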
@InProceedings{CSI-SE14p15,
author = {Deniz Iren and Gokhan Kul and Semih Bilgen},
title = {Utilization of Synergetic Human-Machine Clouds: A Big Data Cleaning Case},
booktitle = {Proc.\ CSI-SE},
publisher = {ACM},
pages = {15--18},
doi = {},
year = {2014},
}