4th International Workshop on Release Engineering (RELENG 2016)
November 18, 2016, Seattle, WA, USA
Frontmatter
Message from the Chairs
On behalf of the organizing committee (Bram Adams, Stephany Bellomo, Christian Bird, Foutse Khomh, Kim Moir, and John O'Duinn), we are pleased to present the proceedings of the 4th International Workshop on Release Engineering (RELENG 2016), held in Seattle on Friday, November 18, 2016, co-located with FSE 2016. With a practicing release engineer as program committee co-chair, 40% of the PC consisting of practitioners, and a separate abstract track for industrial reports (in addition to a research track), RELENG has been built from the ground up to bring together researchers and practitioners in release engineering to share experiences, tools, and techniques that help organizations release high-quality software products on time.
Integration and Release Processes
Analysis of Marketed versus Not-Marketed Mobile App Releases
Maleknaz Nayebi, Homayoon Farrahi, and Guenther Ruhe
(University of Calgary, Canada)
Market and user characteristics of mobile apps make their release management different from that of proprietary software products and web services. Despite the wealth of information in users' feedback on an app, in-depth analysis of app releases is difficult due to the inconsistency and uncertainty of the information. To better understand and potentially improve app release processes, we analyzed the major, minor, and patch releases of apps that follow semantic versioning. In particular, we were interested in the differences between marketed and not-marketed releases. Our results show that, in general, major, minor, and patch releases differ significantly in release cycle duration, nature, and change velocity. We also observed significant differences between marketed and not-marketed mobile app releases in cycle duration, the nature and extent of changes, and the number of opened and closed issues.
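To make the classification concrete (a minimal sketch, not the authors' code; it assumes well-formed MAJOR.MINOR.PATCH version strings):

def classify_release(prev: str, curr: str) -> str:
    """Label a release by comparing its semantic version with its predecessor's."""
    p = [int(x) for x in prev.split(".")]
    c = [int(x) for x in curr.split(".")]
    if c[0] != p[0]:
        return "major"
    if c[1] != p[1]:
        return "minor"
    return "patch"

assert classify_release("2.3.1", "3.0.0") == "major"
assert classify_release("2.3.1", "2.4.0") == "minor"
assert classify_release("2.3.1", "2.3.2") == "patch"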
@InProceedings{RELENG16p1,
author = {Maleknaz Nayebi and Homayoon Farrahi and Guenther Ruhe},
title = {Analysis of Marketed versus Not-Marketed Mobile App Releases},
booktitle = {Proc.\ RELENG},
publisher = {ACM},
pages = {1--4},
doi = {},
year = {2016},
}
Adopting Continuous Delivery in AAA Console Games
Jafar Soltani
(Microsoft, UK)
Introduction
Games are traditionally developed as a boxed product. There is a development phase, followed by a bug-fixing phase. Once the level of quality is acceptable, the game is released and the development team moves on to a new project; they rarely need to maintain the product and release updates after the first few months.
Games are architected as a monolithic application, developed in C++. The game package contains the executable and all the art content, which makes up most of the package.
During the development phase, the level of quality is generally low and the game crashes a lot. Developers mainly care about implementing their own features and do not think much about the stability and quality of the game as a whole. They spend very little time writing automated tests and rely on manual testers to verify features. It is common practice to develop features on feature branches. The perceived benefit is that developers are productive because they can submit their work to feature branches. All features come together in the bug-fixing phase, when the different parts are integrated. At this stage, many things are broken. This is a clear example of local optimisation: a feature submitted on a feature branch does not provide any value until it is integrated with the rest of the game and can be released. The number of bugs can reach several thousand, and everyone crunches whilst getting the game to an acceptable level of quality.
Rare’s Approach
At Rare, we decided to change our approach and adopt Continuous Delivery. The main advantages compared to the traditional approach are:
• Sustainably delivering new features that are useful to players over a long period of time.
• Minimising crunch and having happier, more productive developers.
• Applying a hypothesis-driven development mind-set and getting rapid feedback on whether a feature is achieving the intended outcome. This allows us to listen to user feedback and deliver a better quality game that is more fun and enjoyable for players.
• Reducing the cost of a large manual test team.
@InProceedings{RELENG16p5,
author = {Jafar Soltani},
title = {Adopting Continuous Delivery in AAA Console Games},
booktitle = {Proc.\ RELENG},
publisher = {ACM},
pages = {5--6},
doi = {},
year = {2016},
}
System for Meta-Data Analysis using Prediction Based Constraints for Detecting Inconsistencies in Release Process with Auto-Correction
Anant Bhushan and Pradeep R. Revankar
(Adobe Systems, India)
A software product's release build process usually involves posting many artifacts that are shipped or used as part of Quality Assurance or Quality Engineering. All the artifacts that are shared or posted together constitute a successful build that can be shipped. Occasionally, a few of the artifacts fail to be posted to the shared location and need immediate attention so that they can be reposted with manual intervention.
We implemented a system and process for analyzing metadata generated by an automated build process to detect inconsistencies in the generation of build artifacts. The system analyzes data retrieved from meta-data streams: once the start of an expected metadata stream is detected, the system generates the list of artifacts the build is expected to produce, based on a prediction model. Information attributes of the meta-data stream are used to decide on the anticipated behavior of the build, and events are generated based on whether the build data is consistent with the model's predictions. The system enables error detection and recovery in an automated build process, and it can adapt to a changing build environment by analyzing the data stream for historically relevant data properties.
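A minimal sketch of the described detection loop (names and data layout are assumptions, not the authors' implementation): predict the expected artifact set from historical builds, then emit an event for anything the current build failed to post.

from collections import Counter

def expected_artifacts(history: list, threshold: float = 0.9) -> set:
    """Predict the artifacts a build should produce: anything present in at least `threshold` of past builds."""
    counts = Counter(a for build in history for a in build)
    return {a for a, n in counts.items() if n / len(history) >= threshold}

def check_build(posted: set, history: list) -> bool:
    """Compare posted artifacts against the prediction; report anything missing for repost."""
    missing = expected_artifacts(history) - posted
    for artifact in sorted(missing):
        print(f"EVENT: expected artifact missing, scheduling repost: {artifact}")
    return not missing

history = [{"setup.exe", "app.msi", "symbols.zip"}, {"setup.exe", "app.msi", "symbols.zip"}]
check_build({"setup.exe", "app.msi"}, history)  # flags symbols.zip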
@InProceedings{RELENG16p7,
author = {Anant Bhushan and Pradeep R. Revankar},
title = {System for Meta-Data Analysis using Prediction Based Constraints for Detecting Inconsistencies in Release Process with Auto-Correction},
booktitle = {Proc.\ RELENG},
publisher = {ACM},
pages = {7--10},
doi = {},
year = {2016},
}
Build and Release Tooling
The SpudFarm: Converting Test Environments from Pets into Cattle
Benjamin Lau
(Renaissance Learning, USA)
About a year ago I was trying to improve our automated deployment and testing processes, but found that reliably getting access to a functioning environment just wasn't possible. At the time our test environments were pets. Each was built partially by script and then finished by hand, with a great expenditure of time, effort, and frustration for everyone involved. After some period of use, which varied depending on what you tested on the environment, it would break again, and you'd have to make a decision (frequently the wrong one) about whether to just start fresh (which could take up to a week) or try to debug the environment instead (which could take even longer, and often did).
Here's how we went about automating the creation and management of our test environments to increase developer productivity, reduce costs, and let us experiment with infrastructure configuration at reduced risk.
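The pets-to-cattle shift the title describes can be pictured in code (purely illustrative; all names are hypothetical, not the talk's implementation): environments are provisioned entirely by script, handed out from a pool, and destroyed rather than debugged.

import uuid

class SpudFarm:
    """A pool of disposable ('cattle') test environments."""

    def __init__(self, size: int):
        self.ready = [self._provision() for _ in range(size)]

    def _provision(self) -> str:
        # In reality: run the full, unattended environment-build scripts.
        return f"env-{uuid.uuid4().hex[:8]}"

    def acquire(self) -> str:
        """Hand out a known-good environment and start building its replacement."""
        self.ready.append(self._provision())
        return self.ready.pop(0)

    def destroy(self, env: str) -> None:
        """A broken environment is never debugged, only torn down."""
        print(f"tearing down {env}")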
@InProceedings{RELENG16p11,
author = {Benjamin Lau},
title = {The SpudFarm: Converting Test Environments from Pets into Cattle},
booktitle = {Proc.\ RELENG},
publisher = {ACM},
pages = {11--11},
doi = {},
year = {2016},
}
Escaping AutoHell: A Vision for Automated Analysis and Migration of Autotools Build Systems
Jafar Al-Kofahi, Tien N. Nguyen, and Christian Kästner
(Iowa State University, USA; University of Texas at Dallas, USA; Carnegie Mellon University, USA)
GNU Autotools is a widely used build tool in the open source community. As open source projects grow more complex, maintaining their build systems becomes more challenging due to the lack of tool support. In this paper, we propose a platform for building support tools for GNU Autotools build systems. The platform provides an abstraction of the build system that can be used by different analysis techniques.
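As a hint of what such an abstraction could expose (a crude sketch, not the proposed platform): even a regular-expression pass over configure.ac can recover the build's declared configuration options.

import re

def configure_options(path: str = "configure.ac") -> list:
    """Extract the --enable-*/--with-* options declared via AC_ARG_ENABLE/AC_ARG_WITH."""
    text = open(path, encoding="utf-8", errors="replace").read()
    return sorted(set(re.findall(r"AC_ARG_(?:ENABLE|WITH)\(\s*\[?([\w-]+)", text)))

A real platform would of course parse the M4/shell mixture properly; the point is that analyses consume the abstraction rather than raw Autotools input.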
@InProceedings{RELENG16p12,
author = {Jafar Al-Kofahi and Tien N. Nguyen and Christian Kästner},
title = {Escaping AutoHell: A Vision for Automated Analysis and Migration of Autotools Build Systems},
booktitle = {Proc.\ RELENG},
publisher = {ACM},
pages = {12--15},
doi = {},
year = {2016},
}
Building a Deploy System That Works at 40000 Feet
Kat Drobnjakovic
(Shopify, Canada)
Shopify is one of the largest Rails apps in the world, yet it remains massively scalable and reliable. The platform manages large unexpected spikes in traffic that accompany events such as new product releases, holiday shopping seasons, and flash sales, and has been benchmarked to process over 25,000 requests per second, all while powering more than 300,000 businesses. Even at such a large scale, all our developers continue to push to master, and we regularly deploy Shopify within 4 minutes. My talk will break down everything that happens when deploying Shopify, or any really big application.
@InProceedings{RELENG16p16,
author = {Kat Drobnjakovic},
title = {Building a Deploy System That Works at 40000 Feet},
booktitle = {Proc.\ RELENG},
publisher = {ACM},
pages = {16--16},
doi = {},
year = {2016},
}
GitWaterFlow: A Successful Branching Model and Tooling, for Achieving Continuous Delivery with Multiple Version Branches
Rayene Ben Rayana, Sylvain Killian, Nicolas Trangez, and Arnaud Calmettes
(Scality, France)
Collaborative software development presents organizations with a near-constant flow of day-to-day challenges, and there is no available off-the-shelf solution that covers all needs. This paper provides insight into the hurdles that Scality’s Engineering team faced in developing and extending a sophisticated storage solution, while coping with ever-growing development teams, challenging — and regularly shifting — business requirements, and non-trivial new feature development.
The authors present a novel combination of a Git-based Version Control and Branching model with a set of innovative tools dubbed GitWaterFlow to cope with the issues encountered, including the need to both support old product versions and to provide time-critical delivery of bug fixes.
In the spirit of Continuous Delivery, Scality Release Engineering aims to ensure high quality and stability, to provide short and predictable release cycles, and to minimize development disruption. The team's experience with the GitWaterFlow model suggests that the approach has been effective in meeting these goals in the given setting, with room for continued fine-tuning and improvement of processes and tools.
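One way to picture the multi-version-branch delivery problem (a sketch under assumptions; branch names are hypothetical and this is not Scality's tooling): a bug fix lands on the oldest version branch it applies to and is merged forward through every newer branch, so no supported version misses it.

import subprocess

def git(*args: str) -> None:
    subprocess.run(["git", *args], check=True)

def waterfall(fix_branch: str, version_branches: list) -> None:
    """Merge a fix forward through version branches, ordered oldest to newest."""
    source = fix_branch
    for branch in version_branches:
        git("checkout", branch)
        git("merge", "--no-ff", source)  # on conflict: resolve, commit, re-run
        source = branch

waterfall("bugfix/ticket-1234", ["stable/6.4", "stable/7.0", "development"])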
@InProceedings{RELENG16p17,
author = {Rayene Ben Rayana and Sylvain Killian and Nicolas Trangez and Arnaud Calmettes},
title = {GitWaterFlow: A Successful Branching Model and Tooling, for Achieving Continuous Delivery with Multiple Version Branches},
booktitle = {Proc.\ RELENG},
publisher = {ACM},
pages = {17--20},
doi = {},
year = {2016},
}
Posters
Your Build Data Is Precious, Don’t Waste It! Leverage It to Deliver Great Releases
Rishika Karira and Vinay Awasthi
(Adobe Systems, India)
Installers generate a huge amount of data, such as product files, registries, signature bits, and permissions. Product stakeholders need to be able to compare two builds. Usually this comparison is performed manually: the builds are deployed every time a comparison is required, a script or a tool like Beyond Compare is run to evaluate the differences or to verify signing, registry, or permission issues, and the data is then stored in XLS or CSV files for further action. The real problem occurs when a similar comparison needs to be performed for multiple builds in a release cycle; at that point the above process becomes extremely inefficient, as it requires an enormous amount of time and is also prone to error. To solve this problem efficiently, we have developed a system that allows users to view their product's structural changes and run comparisons across releases, builds, and versions.
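The core of such a system is a manifest diff rather than a manual deploy-and-compare (a sketch; the manifest layout is assumed for illustration):

def diff_builds(old: dict, new: dict) -> dict:
    """Compare two build manifests of the form {path: {"signed": ..., "perms": ...}}."""
    return {
        "added": sorted(new.keys() - old.keys()),
        "removed": sorted(old.keys() - new.keys()),
        "changed": sorted(p for p in old.keys() & new.keys() if old[p] != new[p]),
    }

report = diff_builds(
    {"app.exe": {"signed": True, "perms": "755"}},
    {"app.exe": {"signed": False, "perms": "755"}, "new.dll": {"signed": True, "perms": "644"}},
)
# report == {"added": ["new.dll"], "removed": [], "changed": ["app.exe"]}

Storing such manifests per build makes comparisons across releases, builds, and versions a query rather than a redeployment.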
@InProceedings{RELENG16p21,
author = {Rishika Karira and Vinay Awasthi},
title = {Your Build Data Is Precious, Don’t Waste It! Leverage It to Deliver Great Releases},
booktitle = {Proc.\ RELENG},
publisher = {ACM},
pages = {21--21},
doi = {},
year = {2016},
}
Get Out of Git Hell: Preventing Common Pitfalls of Git
David A. Lippa
(Amazon, USA)
At Amazon, Release Engineering falls under what we call Operational Excellence: designing, implementing, maintaining, and releasing a scalable product. There is an even more basic component that is often ignored: source control. Good source control practices are necessary but not sufficient for delivering good software.
Over the 25+ years source control has been used, each tool has come with its own set of pitfalls: CVS, Subversion, Mercurial, and most recently, git. For decades, the unwritten rule has been for each organization to identify and mitigate these pitfalls independently, with an expectation that the next innovation would remediate them. This approach scales neither for large organizations such as Amazon nor for the software engineering community at large. The real source of this dysfunction, remote collaboration between software engineers, must be examined and ultimately fixed. In the interim, it is up to the engineering community to share practices independent of software process to make up the difference.
At its core, source control is a fundamental tool of software engineers, expected to be easily understood and to “just work”; this assumption is invalid on a number of dimensions. Neither Software Configuration Management (SCM) nor the tools used are intuitive to new practitioners, and both must be taught. The changing landscape of newer tools misleads even expert users of past tools, who are not screened for this critical skill. And finally, success depends on synthesizing past experience and tuning a predetermined process to both the project goals and the team. Success, then, is stacked against the engineering team, so what happens when source control usage goes horribly wrong?
The baseline and team end up in “Git Hell”: slowed down, or even blocked, by the very tool that is meant to facilitate collaboration and parallel development. “Git Hell” originates from various sources: poor tool design, misuse or misconfiguration of the command line interface, and lack of understanding of the “nuts and bolts” of the tool. Poor interface design or configuration, even in the command line interface, has widespread impact. For example, a flaw in the mechanics of git push caused substantial pain at multiple engineering firms. The interface was straightforward: a push sends all branches with updates to the server, and adding the -f option forces the update. Combining them proved disastrous, as an engineer with minimal knowledge of git could harm the integrity of the baseline without even realizing it. That version of git required each developer to add local configuration to his workstation to avoid the behavior, ensuring others in the future would repeat the mistake.
These classes of issues are repeated at company after company, group after group, illustrating a systemic problem with git, its configuration, the instruction in its usage, and the interaction between collaborating engineers. To combat this, I generalized preventative measures as a workaround in a workshop entitled “Get out of Git Hell” that can be shared among engineers regardless of experience or process, at least until the root causes can be studied and remediated.
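As one concrete preventative measure of the kind such a workshop might cover (an illustration, not taken from the abstract): current git defaults to pushing only the current branch (push.default=simple, the default since Git 2.0), forced updates can be made safer with git push --force-with-lease, and a repository can install a pre-push hook that refuses accidental force pushes. A minimal Python sketch of such a hook; the protected branch name is an assumption:

#!/usr/bin/env python3
# .git/hooks/pre-push (sketch): refuse non-fast-forward pushes to protected branches.
# Git feeds one line per ref on stdin: "<local ref> <local sha> <remote ref> <remote sha>".
import subprocess
import sys

PROTECTED = {"refs/heads/master"}  # hypothetical: branches to guard
ZERO = "0" * 40                    # all-zero sha marks a ref being created or deleted

for line in sys.stdin:
    local_ref, local_sha, remote_ref, remote_sha = line.split()
    if remote_ref not in PROTECTED or remote_sha == ZERO:
        continue  # unprotected branch, or it does not exist on the remote yet
    if local_sha == ZERO:
        sys.exit(f"refusing to delete protected branch {remote_ref}")
    # A push is a fast-forward iff the remote tip is an ancestor of what we push;
    # merge-base exits non-zero otherwise (or if the remote sha was never fetched).
    check = subprocess.run(["git", "merge-base", "--is-ancestor", remote_sha, local_sha])
    if check.returncode != 0:
        sys.exit(f"refusing non-fast-forward (forced?) push to {remote_ref}")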
@InProceedings{RELENG16p22,
author = {David A. Lippa},
title = {Get Out of Git Hell: Preventing Common Pitfalls of Git},
booktitle = {Proc.\ RELENG},
publisher = {ACM},
pages = {22--22},
doi = {},
year = {2016},
}
A Model Driven Method to Deploy Auto-scaling Configuration for Cloud Services
Hanieh Alipour and Yan Liu
(Concordia University, Canada)
Vendor lock-in is an issue in auto-scaling configuration: the scaling configuration of a service cannot be transferred automatically when the service is migrated from one cloud to another. To facilitate fast service deployment, there is a need to automate the operations of auto-scaling configuration and deployment.
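A sketch of the model-driven idea (the model and both mappings are hypothetical illustrations, not the authors' method): express the scaling policy once in a cloud-neutral model, then generate provider-specific configuration from it, so migration does not mean rewriting the policy.

from dataclasses import dataclass

@dataclass
class ScalingRule:
    """Cloud-neutral auto-scaling model."""
    metric: str
    threshold: float
    min_instances: int
    max_instances: int

def to_aws(rule: ScalingRule) -> dict:  # hypothetical mapping
    return {"MetricName": rule.metric, "TargetValue": rule.threshold,
            "MinSize": rule.min_instances, "MaxSize": rule.max_instances}

def to_azure(rule: ScalingRule) -> dict:  # hypothetical mapping
    return {"metricTrigger": {"metricName": rule.metric, "threshold": rule.threshold},
            "capacity": {"minimum": rule.min_instances, "maximum": rule.max_instances}}

rule = ScalingRule("cpu_utilization", 70.0, 2, 10)
aws_cfg, azure_cfg = to_aws(rule), to_azure(rule)  # one model, two target clouds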
@InProceedings{RELENG16p23,
author = {Hanieh Alipour and Yan Liu},
title = {A Model Driven Method to Deploy Auto-scaling Configuration for Cloud Services},
booktitle = {Proc.\ RELENG},
publisher = {ACM},
pages = {23--23},
doi = {},
year = {2016},
}