Workshop JAMAICA 2014 – Author Index
Altinger, Harald

Harald Altinger, Franz Wotawa, and Markus Schurius (Audi Electronics Venture, Germany; Graz University of Technology, Austria)
About 90% of current car innovations are based on electronics and software, and the amount of software in a car has grown considerably, reaching 100 million lines of code. Testing automotive software has therefore become a critical task. In this paper we report on a questionnaire survey we carried out on testing activities within the automotive industry, covering the testing and test-automation methods and tools in current use. The survey distinguishes the different fields of the automotive industry where software development happens, i.e., research, pre-development, and series development. In addition, we discuss the participants' opinions on various outlook topics.

Altman, Erik

A. Omar Portillo-Dominguez, Miao Wang, John Murphy, Damien Magoni, Nick Mitchell, Peter F. Sweeney, and Erik Altman (Lero, Ireland; University College Dublin, Ireland; University of Bordeaux, France; IBM Research, USA)
Performance testing in distributed environments is challenging. In particular, identifying performance issues and their root causes is a time-consuming, complex task that relies heavily on expertise. To simplify these tasks, many researchers have developed tools with built-in expertise. However, limitations of these tools, such as difficulties in managing huge volumes of distributed data, prevent their efficient use for performance testing of highly distributed environments. To address these limitations, this paper presents an adaptive framework that automates the use of expert systems in performance testing. Our validation assessed the accuracy of the framework and the time savings it brings to testers. The results demonstrate the benefits of the framework: a significant decrease in the time invested in performance analysis and testing.

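The abstract above describes the framework only at a high level. As a rough illustration of the underlying idea (running monitoring data from distributed nodes through rules that encode expert knowledge), here is a minimal sketch; all names (PerfSample, ExpertRule) and thresholds are hypothetical and are not taken from the authors' framework.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of a rule-based performance analysis loop, in the spirit of
// the "tools with built-in expertise" the abstract mentions. All names and
// thresholds are hypothetical; the authors' framework is not shown here.
public class PerfAnalyzerSketch {

    // One aggregated measurement collected from a distributed node.
    record PerfSample(String node, double cpuUtil, double gcPauseMs) {}

    // An "expert rule": a condition over a sample plus a diagnosis to report.
    interface ExpertRule {
        boolean matches(PerfSample s);
        String diagnosis(PerfSample s);
    }

    public static void main(String[] args) {
        List<ExpertRule> rules = new ArrayList<>();
        // Illustrative fixed thresholds; an adaptive framework would tune these.
        rules.add(new ExpertRule() {
            public boolean matches(PerfSample s) { return s.gcPauseMs() > 200; }
            public String diagnosis(PerfSample s) {
                return s.node() + ": long GC pauses suggest memory pressure";
            }
        });
        rules.add(new ExpertRule() {
            public boolean matches(PerfSample s) { return s.cpuUtil() > 0.9; }
            public String diagnosis(PerfSample s) {
                return s.node() + ": CPU saturation may be the bottleneck";
            }
        });

        List<PerfSample> samples = List.of(
                new PerfSample("node-1", 0.95, 30),
                new PerfSample("node-2", 0.40, 450));

        // Run every rule over every sample; the diagnoses replace manual
        // inspection of the raw monitoring data by the tester.
        for (PerfSample s : samples) {
            for (ExpertRule r : rules) {
                if (r.matches(s)) {
                    System.out.println(r.diagnosis(s));
                }
            }
        }
    }
}
```
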
Beckmann, Kai

Marcus Thoss, Kai Beckmann, Reinhold Kroeger, Marco Muenchhof, and Christian Mellert (RheinMain University of Applied Sciences, Germany; Eckelmann, Germany)
The product- and technology-specific nature of embedded software components often results in high overhead for setting up and automating testing procedures. This paper presents the adaptation of the generic, modular Eclipse Test and Performance Tools Platform (TPTP) framework for managing and running automated tests and creating test reports for firmware releases of a computerized numerical control (CNC) device product. TPTP, built upon the development and user-interface context of the Eclipse platform, is extended with product-specific plugins and TPTP module extensions. The actual test execution is performed remotely on CNC devices, controlled and evaluated by the testing framework. CNC firmware releases, configuration setups, and test results become manageable as TPTP resources. To facilitate modelling of test oracles and complex algorithms for CNC track evaluation, dynamic MATLAB code execution in the course of test procedures is supported.

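As a minimal sketch of the remote-execution pattern the abstract describes (a test program is dispatched to a CNC device, and the recorded track is judged by a pluggable oracle, for which the paper uses dynamically executed MATLAB code), consider the following. Every type here is a hypothetical illustration, not a TPTP or product API.

```java
import java.util.List;

// Sketch of remote test execution with a pluggable track oracle. A
// MATLAB-backed oracle would implement the same TrackOracle interface by
// invoking generated MATLAB code; the stubs here are purely illustrative.
public class CncRemoteTestSketch {

    // Abstraction over the remote CNC device running the firmware under test.
    interface CncDevice {
        List<double[]> runProgram(String gCode); // returns sampled tool positions
    }

    // Test oracle judging the recorded track on the host.
    interface TrackOracle {
        boolean accept(List<double[]> track);
    }

    static boolean runTest(CncDevice device, String program, TrackOracle oracle) {
        List<double[]> track = device.runProgram(program); // executed on the device
        return oracle.accept(track);                       // evaluated on the host
    }

    public static void main(String[] args) {
        // Stub device and a trivial oracle: every sampled point must stay
        // within 0.01 mm of the straight line y = x (illustrative tolerance).
        CncDevice stub = gCode -> List.of(new double[]{0, 0}, new double[]{1, 1.005});
        TrackOracle nearDiagonal = track -> track.stream()
                .allMatch(p -> Math.abs(p[1] - p[0]) <= 0.01);
        System.out.println(runTest(stub, "G1 X1 Y1", nearDiagonal)); // prints: true
    }
}
```
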
Gao, Jerry

Chuanqi Tao and Jerry Gao (Nanjing University of Science and Technology, China; San Jose State University, USA)
With the rapid advance of mobile computing technology and wireless networking, there has been a significant increase in mobile subscriptions, which drives a strong demand for mobile application testing on mobile devices. Since mobile apps are native to mobile devices, the underlying mobile platform becomes the basic foundation of their test environments. To achieve effective test automation, test solutions must be compatible, deployable, and executable on different mobile platforms, devices, networks, and appliance APIs. This paper provides an approach to modeling mobile test environments based on a Mobile Test Environment Semantic Tree (MTE_ST). Based on this model, the paper discusses test complexity evaluation methods for test environments. Furthermore, case study results are reported to demonstrate and analyze the proposed testing models.

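The abstract does not define MTE_ST in detail, so the sketch below is only one plausible reading: a tree whose top-level branches are environment dimensions (platform, device, network), with complexity measured as the number of distinct leaf combinations a test environment can select. The node labels and the complexity measure are our assumptions, not the paper's.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical reading of a test-environment semantic tree: each child of the
// root is one dimension, a concrete environment picks one leaf per dimension,
// and complexity is the product of the dimensions' leaf counts.
public class MteStSketch {

    static class Node {
        final String label;
        final List<Node> children = new ArrayList<>();
        Node(String label) { this.label = label; }
        Node add(String child) { Node n = new Node(child); children.add(n); return n; }
    }

    // Number of distinct environments the tree encodes.
    static long combinations(Node root) {
        long product = 1;
        for (Node dimension : root.children) product *= countLeaves(dimension);
        return product;
    }

    static long countLeaves(Node n) {
        if (n.children.isEmpty()) return 1;
        long sum = 0;
        for (Node c : n.children) sum += countLeaves(c);
        return sum;
    }

    public static void main(String[] args) {
        Node env = new Node("mobile test environment");
        Node platform = env.add("platform");
        platform.add("Android"); platform.add("iOS");
        Node device = env.add("device");
        device.add("phone"); device.add("tablet");
        Node network = env.add("network");
        network.add("WiFi"); network.add("3G"); network.add("LTE");
        System.out.println(combinations(env)); // 2 * 2 * 3 = 12 environments
    }
}
```
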
Garn, Bernhard

Bernhard Garn, Ioannis Kapsalis, Dimitris E. Simos, and Severin Winkler (SBA Research, Austria; Security Research, Austria)
Case studies for evaluating tools in security testing are powerful. Although they cannot achieve the scientific rigor of formal experiments, their results can provide enough information to help professionals judge whether a specific technology will benefit their organization. This paper reports on a case study evaluating and revisiting a recently introduced combinatorial testing methodology for web application security. It further reports on the practical experiments undertaken, thereby strengthening the applicability of combinatorial testing to web application security testing.

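As a small illustration of the combinatorial flavor of such a methodology, the sketch below models XSS attack vectors as combinations of payload parts. The parameter names and values are illustrative only; a t-way covering array, as used in combinatorial testing, would select a small subset of the full product enumerated here while still covering all value pairs.

```java
import java.util.List;

// Illustrative combinatorial construction of XSS attack vectors: each list is
// one parameter of a simple attack grammar, and every value combination yields
// one candidate vector to submit against the web application's inputs.
public class XssVectorSketch {

    public static void main(String[] args) {
        List<String> openings = List.of("<script>", "<img src=x onerror=");
        List<String> payloads = List.of("alert(1)", "document.cookie");
        List<String> closings = List.of("</script>", ">");

        // Full cartesian product (2 * 2 * 2 = 8 vectors). A 2-way covering
        // array would cover all pairs of parameter values with fewer vectors.
        int id = 0;
        for (String o : openings) {
            for (String p : payloads) {
                for (String c : closings) {
                    System.out.printf("vector %d: %s%s%s%n", id++, o, p, c);
                }
            }
        }
        // Each vector would be injected into the application under test and
        // the response checked for unescaped reflection of the payload.
    }
}
```
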
Kapsalis, Ioannis

Bernhard Garn, Ioannis Kapsalis, Dimitris E. Simos, and Severin Winkler (SBA Research, Austria; Security Research, Austria)
Case studies for evaluating tools in security testing are powerful. Although they cannot achieve the scientific rigor of formal experiments, their results can provide enough information to help professionals judge whether a specific technology will benefit their organization. This paper reports on a case study evaluating and revisiting a recently introduced combinatorial testing methodology for web application security. It further reports on the practical experiments undertaken, thereby strengthening the applicability of combinatorial testing to web application security testing.

King, Tariq M.

Jorge Martinez, Troy Thomas, and Tariq M. King (Ultimate Software, USA)
Model-driven engineering (MDE) continues to raise the level of abstraction used in software development. Software testing researchers and practitioners have been adopting MDE principles and applying them to software testing activities; examples include the use of domain-specific languages for functional testing and test automation. In this paper we present the design of a layered middleware architecture that supports domain-specific, functional UI test automation. Building on experience gained implementing a Selenium-based framework for a large-scale agile project, we present design ideas that raise the abstraction level in UI test automation frameworks. Design considerations are discussed to provoke thoughts and ideas on automation frameworks.

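The layering idea in the abstract can be sketched as follows: the test reads in business terms, a middle layer translates those terms into page-level steps, and only the lowest layer touches Selenium. The page and field names are hypothetical; the Selenium calls (WebDriver, By, findElement, sendKeys, click) are the real WebDriver API.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Sketch of a layered UI automation design: raw Selenium access is confined
// to page objects, domain actions compose page steps, and tests speak only
// the domain language. Page and element names are hypothetical.
public class LayeredUiTestSketch {

    // Bottom layer: the only place that knows about locators and WebDriver.
    static class LoginPage {
        private final WebDriver driver;
        LoginPage(WebDriver driver) { this.driver = driver; }
        void typeUser(String u) { driver.findElement(By.id("user")).sendKeys(u); }
        void typePassword(String p) { driver.findElement(By.id("pass")).sendKeys(p); }
        void submit() { driver.findElement(By.id("login")).click(); }
    }

    // Middle layer: a domain-level action composed from page-level steps.
    static class UserActions {
        private final LoginPage login;
        UserActions(WebDriver driver) { this.login = new LoginPage(driver); }
        void signInAs(String user, String password) {
            login.typeUser(user);
            login.typePassword(password);
            login.submit();
        }
    }

    // Top layer: the test reads like the domain, with no UI detail in sight.
    static void payrollAdminCanSignIn(WebDriver driver) {
        new UserActions(driver).signInAs("payroll-admin", "secret");
        // assertions on the landing page would follow here
    }
}
```

Confining WebDriver calls to the bottom layer is what lets the upper layers, and the tests written against them, stay stable when the UI changes.
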
Kroeger, Reinhold

Marcus Thoss, Kai Beckmann, Reinhold Kroeger, Marco Muenchhof, and Christian Mellert (RheinMain University of Applied Sciences, Germany; Eckelmann, Germany)
The product- and technology-specific nature of embedded software components often results in high overhead for setting up and automating testing procedures. This paper presents the adaptation of the generic, modular Eclipse Test and Performance Tools Platform (TPTP) framework for managing and running automated tests and creating test reports for firmware releases of a computerized numerical control (CNC) device product. TPTP, built upon the development and user-interface context of the Eclipse platform, is extended with product-specific plugins and TPTP module extensions. The actual test execution is performed remotely on CNC devices, controlled and evaluated by the testing framework. CNC firmware releases, configuration setups, and test results become manageable as TPTP resources. To facilitate modelling of test oracles and complex algorithms for CNC track evaluation, dynamic MATLAB code execution in the course of test procedures is supported.

Magoni, Damien

A. Omar Portillo-Dominguez, Miao Wang, John Murphy, Damien Magoni, Nick Mitchell, Peter F. Sweeney, and Erik Altman (Lero, Ireland; University College Dublin, Ireland; University of Bordeaux, France; IBM Research, USA)
Performance testing in distributed environments is challenging. In particular, identifying performance issues and their root causes is a time-consuming, complex task that relies heavily on expertise. To simplify these tasks, many researchers have developed tools with built-in expertise. However, limitations of these tools, such as difficulties in managing huge volumes of distributed data, prevent their efficient use for performance testing of highly distributed environments. To address these limitations, this paper presents an adaptive framework that automates the use of expert systems in performance testing. Our validation assessed the accuracy of the framework and the time savings it brings to testers. The results demonstrate the benefits of the framework: a significant decrease in the time invested in performance analysis and testing.

Martinez, Jorge

Jorge Martinez, Troy Thomas, and Tariq M. King (Ultimate Software, USA)
Model-driven engineering (MDE) continues to raise the level of abstraction used in software development. Software testing researchers and practitioners have been adopting MDE principles and applying them to software testing activities; examples include the use of domain-specific languages for functional testing and test automation. In this paper we present the design of a layered middleware architecture that supports domain-specific, functional UI test automation. Building on experience gained implementing a Selenium-based framework for a large-scale agile project, we present design ideas that raise the abstraction level in UI test automation frameworks. Design considerations are discussed to provoke thoughts and ideas on automation frameworks.

Mellert, Christian

Marcus Thoss, Kai Beckmann, Reinhold Kroeger, Marco Muenchhof, and Christian Mellert (RheinMain University of Applied Sciences, Germany; Eckelmann, Germany)
The product- and technology-specific nature of embedded software components often results in high overhead for setting up and automating testing procedures. This paper presents the adaptation of the generic, modular Eclipse Test and Performance Tools Platform (TPTP) framework for managing and running automated tests and creating test reports for firmware releases of a computerized numerical control (CNC) device product. TPTP, built upon the development and user-interface context of the Eclipse platform, is extended with product-specific plugins and TPTP module extensions. The actual test execution is performed remotely on CNC devices, controlled and evaluated by the testing framework. CNC firmware releases, configuration setups, and test results become manageable as TPTP resources. To facilitate modelling of test oracles and complex algorithms for CNC track evaluation, dynamic MATLAB code execution in the course of test procedures is supported.

Mitchell, Nick

A. Omar Portillo-Dominguez, Miao Wang, John Murphy, Damien Magoni, Nick Mitchell, Peter F. Sweeney, and Erik Altman (Lero, Ireland; University College Dublin, Ireland; University of Bordeaux, France; IBM Research, USA)
Performance testing in distributed environments is challenging. In particular, identifying performance issues and their root causes is a time-consuming, complex task that relies heavily on expertise. To simplify these tasks, many researchers have developed tools with built-in expertise. However, limitations of these tools, such as difficulties in managing huge volumes of distributed data, prevent their efficient use for performance testing of highly distributed environments. To address these limitations, this paper presents an adaptive framework that automates the use of expert systems in performance testing. Our validation assessed the accuracy of the framework and the time savings it brings to testers. The results demonstrate the benefits of the framework: a significant decrease in the time invested in performance analysis and testing.

Muenchhof, Marco

Marcus Thoss, Kai Beckmann, Reinhold Kroeger, Marco Muenchhof, and Christian Mellert (RheinMain University of Applied Sciences, Germany; Eckelmann, Germany)
The product- and technology-specific nature of embedded software components often results in high overhead for setting up and automating testing procedures. This paper presents the adaptation of the generic, modular Eclipse Test and Performance Tools Platform (TPTP) framework for managing and running automated tests and creating test reports for firmware releases of a computerized numerical control (CNC) device product. TPTP, built upon the development and user-interface context of the Eclipse platform, is extended with product-specific plugins and TPTP module extensions. The actual test execution is performed remotely on CNC devices, controlled and evaluated by the testing framework. CNC firmware releases, configuration setups, and test results become manageable as TPTP resources. To facilitate modelling of test oracles and complex algorithms for CNC track evaluation, dynamic MATLAB code execution in the course of test procedures is supported.

Murphy, John

A. Omar Portillo-Dominguez, Miao Wang, John Murphy, Damien Magoni, Nick Mitchell, Peter F. Sweeney, and Erik Altman (Lero, Ireland; University College Dublin, Ireland; University of Bordeaux, France; IBM Research, USA)
Performance testing in distributed environments is challenging. In particular, identifying performance issues and their root causes is a time-consuming, complex task that relies heavily on expertise. To simplify these tasks, many researchers have developed tools with built-in expertise. However, limitations of these tools, such as difficulties in managing huge volumes of distributed data, prevent their efficient use for performance testing of highly distributed environments. To address these limitations, this paper presents an adaptive framework that automates the use of expert systems in performance testing. Our validation assessed the accuracy of the framework and the time savings it brings to testers. The results demonstrate the benefits of the framework: a significant decrease in the time invested in performance analysis and testing.

Portillo-Dominguez, A. Omar

A. Omar Portillo-Dominguez, Miao Wang, John Murphy, Damien Magoni, Nick Mitchell, Peter F. Sweeney, and Erik Altman (Lero, Ireland; University College Dublin, Ireland; University of Bordeaux, France; IBM Research, USA)
Performance testing in distributed environments is challenging. In particular, identifying performance issues and their root causes is a time-consuming, complex task that relies heavily on expertise. To simplify these tasks, many researchers have developed tools with built-in expertise. However, limitations of these tools, such as difficulties in managing huge volumes of distributed data, prevent their efficient use for performance testing of highly distributed environments. To address these limitations, this paper presents an adaptive framework that automates the use of expert systems in performance testing. Our validation assessed the accuracy of the framework and the time savings it brings to testers. The results demonstrate the benefits of the framework: a significant decrease in the time invested in performance analysis and testing.

Schurius, Markus

Harald Altinger, Franz Wotawa, and Markus Schurius (Audi Electronics Venture, Germany; Graz University of Technology, Austria)
About 90% of current car innovations are based on electronics and software, and the amount of software in a car has grown considerably, reaching 100 million lines of code. Testing automotive software has therefore become a critical task. In this paper we report on a questionnaire survey we carried out on testing activities within the automotive industry, covering the testing and test-automation methods and tools in current use. The survey distinguishes the different fields of the automotive industry where software development happens, i.e., research, pre-development, and series development. In addition, we discuss the participants' opinions on various outlook topics.

Simos, Dimitris E.

Bernhard Garn, Ioannis Kapsalis, Dimitris E. Simos, and Severin Winkler (SBA Research, Austria; Security Research, Austria)
Case studies for evaluating tools in security testing are powerful. Although they cannot achieve the scientific rigor of formal experiments, their results can provide enough information to help professionals judge whether a specific technology will benefit their organization. This paper reports on a case study evaluating and revisiting a recently introduced combinatorial testing methodology for web application security. It further reports on the practical experiments undertaken, thereby strengthening the applicability of combinatorial testing to web application security testing.

Sweeney, Peter F.

A. Omar Portillo-Dominguez, Miao Wang, John Murphy, Damien Magoni, Nick Mitchell, Peter F. Sweeney, and Erik Altman (Lero, Ireland; University College Dublin, Ireland; University of Bordeaux, France; IBM Research, USA)
Performance testing in distributed environments is challenging. In particular, identifying performance issues and their root causes is a time-consuming, complex task that relies heavily on expertise. To simplify these tasks, many researchers have developed tools with built-in expertise. However, limitations of these tools, such as difficulties in managing huge volumes of distributed data, prevent their efficient use for performance testing of highly distributed environments. To address these limitations, this paper presents an adaptive framework that automates the use of expert systems in performance testing. Our validation assessed the accuracy of the framework and the time savings it brings to testers. The results demonstrate the benefits of the framework: a significant decrease in the time invested in performance analysis and testing.

Tao, Chuanqi

Chuanqi Tao and Jerry Gao (Nanjing University of Science and Technology, China; San Jose State University, USA)
With the rapid advance of mobile computing technology and wireless networking, there has been a significant increase in mobile subscriptions, which drives a strong demand for mobile application testing on mobile devices. Since mobile apps are native to mobile devices, the underlying mobile platform becomes the basic foundation of their test environments. To achieve effective test automation, test solutions must be compatible, deployable, and executable on different mobile platforms, devices, networks, and appliance APIs. This paper provides an approach to modeling mobile test environments based on a Mobile Test Environment Semantic Tree (MTE_ST). Based on this model, the paper discusses test complexity evaluation methods for test environments. Furthermore, case study results are reported to demonstrate and analyze the proposed testing models.

Thomas, Troy

Jorge Martinez, Troy Thomas, and Tariq M. King (Ultimate Software, USA)
Model-driven engineering (MDE) continues to raise the level of abstraction used in software development. Software testing researchers and practitioners have been adopting MDE principles and applying them to software testing activities; examples include the use of domain-specific languages for functional testing and test automation. In this paper we present the design of a layered middleware architecture that supports domain-specific, functional UI test automation. Building on experience gained implementing a Selenium-based framework for a large-scale agile project, we present design ideas that raise the abstraction level in UI test automation frameworks. Design considerations are discussed to provoke thoughts and ideas on automation frameworks.

Thoss, Marcus

Marcus Thoss, Kai Beckmann, Reinhold Kroeger, Marco Muenchhof, and Christian Mellert (RheinMain University of Applied Sciences, Germany; Eckelmann, Germany)
The product- and technology-specific nature of embedded software components often results in high overhead for setting up and automating testing procedures. This paper presents the adaptation of the generic, modular Eclipse Test and Performance Tools Platform (TPTP) framework for managing and running automated tests and creating test reports for firmware releases of a computerized numerical control (CNC) device product. TPTP, built upon the development and user-interface context of the Eclipse platform, is extended with product-specific plugins and TPTP module extensions. The actual test execution is performed remotely on CNC devices, controlled and evaluated by the testing framework. CNC firmware releases, configuration setups, and test results become manageable as TPTP resources. To facilitate modelling of test oracles and complex algorithms for CNC track evaluation, dynamic MATLAB code execution in the course of test procedures is supported.

Wang, Miao

A. Omar Portillo-Dominguez, Miao Wang, John Murphy, Damien Magoni, Nick Mitchell, Peter F. Sweeney, and Erik Altman (Lero, Ireland; University College Dublin, Ireland; University of Bordeaux, France; IBM Research, USA)
Performance testing in distributed environments is challenging. In particular, identifying performance issues and their root causes is a time-consuming, complex task that relies heavily on expertise. To simplify these tasks, many researchers have developed tools with built-in expertise. However, limitations of these tools, such as difficulties in managing huge volumes of distributed data, prevent their efficient use for performance testing of highly distributed environments. To address these limitations, this paper presents an adaptive framework that automates the use of expert systems in performance testing. Our validation assessed the accuracy of the framework and the time savings it brings to testers. The results demonstrate the benefits of the framework: a significant decrease in the time invested in performance analysis and testing.

Winkler, Severin

Bernhard Garn, Ioannis Kapsalis, Dimitris E. Simos, and Severin Winkler (SBA Research, Austria; Security Research, Austria)
Case studies for evaluating tools in security testing are powerful. Although they cannot achieve the scientific rigor of formal experiments, their results can provide enough information to help professionals judge whether a specific technology will benefit their organization. This paper reports on a case study evaluating and revisiting a recently introduced combinatorial testing methodology for web application security. It further reports on the practical experiments undertaken, thereby strengthening the applicability of combinatorial testing to web application security testing.

Wotawa, Franz

Harald Altinger, Franz Wotawa, and Markus Schurius (Audi Electronics Venture, Germany; Graz University of Technology, Austria)
About 90% of current car innovations are based on electronics and software, and the amount of software in a car has grown considerably, reaching 100 million lines of code. Testing automotive software has therefore become a critical task. In this paper we report on a questionnaire survey we carried out on testing activities within the automotive industry, covering the testing and test-automation methods and tools in current use. The survey distinguishes the different fields of the automotive industry where software development happens, i.e., research, pre-development, and series development. In addition, we discuss the participants' opinions on various outlook topics.

24 authors