EAST 2014 – Author Index
Chen, Fangwei
EAST '14: "Predicting the Number of Forks ..."
Predicting the Number of Forks for Open Source Software Project
Fangwei Chen, Lei Li, Jing Jiang, and Li Zhang (Beihang University, China) GitHub is a successful open source software platform which attracts many developers. On GitHub, developers are allowed to fork and copy repositories without asking for permission, which makes contributing to projects much easier than it has ever been. It is therefore valuable to predict the number of forks of open source software projects: the prediction can help GitHub recommend popular projects and guide developers to projects which are likely to succeed and are worthy of their contribution. In this paper, we use stepwise regression to design a model that predicts the number of forks of open source software projects. We collect a dataset of 1,000 repositories through GitHub's APIs, use 700 repositories to compute the attribute weights and build the model, and use the remaining 300 repositories to verify its prediction accuracy. Advantages of our model include: (1) Some attributes used in our model are new, because GitHub differs from traditional open source software platforms and offers new features, which we use to build the model. (2) Our model uses project information from within t months after a project's creation and predicts the number of forks in month T (t < T), so users can set the combination of time parameters that satisfies their needs. (3) Our model predicts the exact number of forks, rather than a range. (4) Experiments show that our model has high prediction accuracy. For example, using project information from within 3 months to predict the number of forks in month 6 after creation, the correlation coefficient is as high as 0.992, and the median absolute difference between predicted and actual values is only 1.8, i.e., the predicted number of forks is very close to the actual number. Our model also has high prediction accuracy for other time-parameter settings. @InProceedings{EAST14p40, author = {Fangwei Chen and Lei Li and Jing Jiang and Li Zhang}, title = {Predicting the Number of Forks for Open Source Software Project}, booktitle = {Proc.\ EAST}, publisher = {ACM}, pages = {40--47}, doi = {}, year = {2014}, }
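The abstract does not spell out the stepwise procedure, so here is a rough forward-selection sketch of the technique it names, choosing attributes by AIC. The attribute names (stars, watchers, commits, issues) and the synthetic data are illustrative stand-ins, not the paper's actual feature set; only the 700-repository training size echoes the paper.

```python
# Hedged sketch: forward stepwise regression over hypothetical repository attributes.
import numpy as np
import statsmodels.api as sm

FEATURES = ["stars", "watchers", "commits", "issues"]  # assumed, not the paper's

def forward_stepwise(X, y):
    """Greedily add the feature that most lowers AIC; stop when none helps."""
    selected, remaining, best_aic = [], list(range(X.shape[1])), np.inf
    while remaining:
        aic, j = min((sm.OLS(y, sm.add_constant(X[:, selected + [k]])).fit().aic, k)
                     for k in remaining)
        if aic >= best_aic:              # no remaining attribute improves the fit
            break
        best_aic = aic
        selected.append(j)
        remaining.remove(j)
    return selected, sm.OLS(y, sm.add_constant(X[:, selected])).fit()

rng = np.random.default_rng(0)
X = rng.poisson(20, size=(700, 4)).astype(float)           # attribute values within t months
y = 2.0 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 3, 700)  # forks in month T (synthetic)
selected, model = forward_stepwise(X, y)
print([FEATURES[j] for j in selected])                     # expect stars and commits here
print(model.predict(sm.add_constant(X[:5, selected])))     # predicted fork counts
```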
Chen, Lu
EAST '14: "System-Level Testing of Cyber-Physical ..."
System-Level Testing of Cyber-Physical Systems Based on Problem Concerns
Zhi Li and Lu Chen (Guangxi Normal University, China) In this paper we propose a problem-oriented approach to system-level testing of cyber-physical systems based on Jackson's notion of problem concerns. We draw close associations between problem concerns and potential faults in the problem space, which motivates system-level testing. Finally, we put forward a research agenda with the goal of building a repository of system faults and mining particular problem concerns for system-level testing. @InProceedings{EAST14p60, author = {Zhi Li and Lu Chen}, title = {System-Level Testing of Cyber-Physical Systems Based on Problem Concerns}, booktitle = {Proc.\ EAST}, publisher = {ACM}, pages = {60--62}, doi = {}, year = {2014}, }
Chen, Mei-Hwa
EAST '14: "Online Reliability Prediction ..."
Online Reliability Prediction of Service Composition
Zuohua Ding, Ting Xu, and Mei-Hwa Chen (Zhejiang Sci-Tech University, China; SUNY Albany, USA) Reliability is an important quality attribute for service-oriented software. Existing approaches use static data collected during testing to predict software reliability, and so do not address the dynamism of service behavior after deployment. In this paper, we propose a method to predict, from any time moment, the reliability of a service composition in the near future. We first collect service runtime data and predict future failure data using the ARIMA model. We then predict the reliability of each port based on the Nelson model, and finally we compute the reliability of the composite services. An Online Shop example is used to demonstrate the effectiveness of our method. @InProceedings{EAST14p1, author = {Zuohua Ding and Ting Xu and Mei-Hwa Chen}, title = {Online Reliability Prediction of Service Composition}, booktitle = {Proc.\ EAST}, publisher = {ACM}, pages = {1--8}, doi = {}, year = {2014}, }
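To make the pipeline concrete, here is a minimal sketch of the sequence the abstract describes: ARIMA forecasts of near-future failure counts feed a Nelson-style reliability estimate (R = 1 - failures/requests) per port, and port reliabilities are then composed. The ARIMA order, window sizes, request rates, and the series composition at the end are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch: ARIMA failure forecast -> Nelson-model port reliability -> composition.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def port_reliability(failures_per_window, requests_per_window, steps=3):
    """Nelson model: R = 1 - (expected failures / expected requests)."""
    f_hat = ARIMA(failures_per_window, order=(1, 1, 1)).fit().forecast(steps)
    f_hat = np.clip(f_hat, 0, None)        # failure counts cannot be negative
    return 1.0 - f_hat.sum() / (requests_per_window * steps)

# Toy runtime data for two ports of an Online Shop-style composite service.
rng = np.random.default_rng(1)
search_failures = rng.poisson(4, 60).astype(float)   # failures per monitoring window
payment_failures = rng.poisson(2, 60).astype(float)
r_search = port_reliability(search_failures, requests_per_window=1000)
r_payment = port_reliability(payment_failures, requests_per_window=1000)
print(r_search, r_payment, r_search * r_payment)     # assumed series composition
```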
Dai, Meixi
EAST '14: "Impact of Consecutive Changes ..."
Impact of Consecutive Changes on Later File Versions
Meixi Dai, Beijun Shen, Tao Zhang, and Min Zhao (Shanghai Jiao Tong University, China; PLA University of Science and Technology, China) By analyzing histories of program versions, many studies have shown that software quality is associated with history-related metrics, such as code-related, commit-related, developer-related, process-related, and organizational metrics. It has also been revealed that consecutive changes at the commit level are strongly associated with software defects. In this paper, we introduce two novel concepts of consecutive changes: the CFC (chain of consecutive bug-fixing file versions) and the CAC (chain of consecutive file versions in which adjacent versions are submitted by different developers). We then conduct several experiments on three open-source projects from GitHub to explore the correlation between consecutive changes and software quality. Our main findings are: 1) CFCs and CACs widely exist in file version histories; 2) consecutive changes have a strong negative impact on later file versions in the short term, especially when the length of the consecutive change chain is 4 or 5. @InProceedings{EAST14p17, author = {Meixi Dai and Beijun Shen and Tao Zhang and Min Zhao}, title = {Impact of Consecutive Changes on Later File Versions}, booktitle = {Proc.\ EAST}, publisher = {ACM}, pages = {17--24}, doi = {}, year = {2014}, }
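The two chain definitions are precise enough to sketch in code. Below is one plausible detector over a toy per-file version history; the (developer, is_bugfix) tuple format and the minimum chain length of 2 are assumptions for illustration, not the paper's actual representation.

```python
# Hedged sketch: detecting CFC and CAC chains in one file's version history.
from itertools import groupby

# (developer, is_bugfix) per file version, oldest first -- toy history.
history = [("ann", True), ("ann", True), ("bob", True), ("bob", False),
           ("cal", False), ("ann", False), ("bob", True)]

def cfc_chains(history, min_len=2):
    """CFC: maximal runs of consecutive bug-fixing file versions."""
    runs = [list(g) for is_fix, g in groupby(history, key=lambda v: v[1]) if is_fix]
    return [r for r in runs if len(r) >= min_len]

def cac_chains(history, min_len=2):
    """CAC: maximal runs where every adjacent pair has different developers."""
    chains, cur = [], [history[0]]
    for prev, v in zip(history, history[1:]):
        if v[0] != prev[0]:
            cur.append(v)                 # chain continues: developer changed
        else:
            if len(cur) >= min_len:
                chains.append(cur)
            cur = [v]                     # same developer twice: chain breaks
    if len(cur) >= min_len:
        chains.append(cur)
    return chains

print(len(cfc_chains(history)), len(cac_chains(history)))   # 1 CFC, 2 CACs here
```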
Ding, Zuohua
EAST '14: "Online Reliability Prediction ..."
Online Reliability Prediction of Service Composition
Zuohua Ding, Ting Xu, and Mei-Hwa Chen (Zhejiang Sci-Tech University, China; SUNY Albany, USA) Reliability is an important quality attribute for service-oriented software. Existing approaches use static data collected during testing to predict software reliability, and so do not address the dynamism of service behavior after deployment. In this paper, we propose a method to predict, from any time moment, the reliability of a service composition in the near future. We first collect service runtime data and predict future failure data using the ARIMA model. We then predict the reliability of each port based on the Nelson model, and finally we compute the reliability of the composite services. An Online Shop example is used to demonstrate the effectiveness of our method. @InProceedings{EAST14p1, author = {Zuohua Ding and Ting Xu and Mei-Hwa Chen}, title = {Online Reliability Prediction of Service Composition}, booktitle = {Proc.\ EAST}, publisher = {ACM}, pages = {1--8}, doi = {}, year = {2014}, }
He, Yuyao
EAST '14: "Estimation of Distribution ..."
Estimation of Distribution Algorithm using Variety of Information
Juan Yu and Yuyao He (Northwestern Polytechnical University, China) In research on estimation of distribution algorithms, earlier probability-model information and inferior individuals are usually discarded, yet they may contain useful information. In this paper, the earlier probability information is introduced to avoid the premature convergence caused by continually selecting the superior individuals of the current population to build the probability model, and the individuals sampled from the superior probability model are filtered through an inferior probability model to avoid generating inferior individuals. The algorithm is evaluated on widely used knapsack examples; the results verify the validity of the proposed method, and simulation and analysis suggest how to choose its parameter. @InProceedings{EAST14p25, author = {Juan Yu and Yuyao He}, title = {Estimation of Distribution Algorithm using Variety of Information}, booktitle = {Proc.\ EAST}, publisher = {ACM}, pages = {25--31}, doi = {}, year = {2014}, }
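As a toy illustration of the abstract's two ideas, the PBIL-style sketch below blends the previous generation's model into the new one (retaining "former" probability information) and rejects samples that look more like the inferior-individual model. The learning rates, population sizes, and likelihood-based filter are illustrative guesses, not the authors' algorithm.

```python
# Hedged sketch: EDA for 0/1 knapsack with a superior and an inferior model.
import numpy as np

rng = np.random.default_rng(2)
values = rng.integers(10, 100, 20)     # toy 20-item knapsack instance
weights = rng.integers(5, 40, 20)
CAP = int(weights.sum()) // 2

def fitness(x):                        # infeasible packings score zero
    return int(values @ x) if weights @ x <= CAP else 0

def loglik(pop, p):                    # Bernoulli log-likelihood under a model
    return (pop * np.log(p + 1e-9) + (1 - pop) * np.log(1 - p + 1e-9)).sum(1)

p_sup = np.full(20, 0.5)               # superior (main) probability model
p_inf = np.full(20, 0.5)               # model built from inferior individuals
for gen in range(60):
    pop = (rng.random((100, 20)) < p_sup).astype(int)
    pop = pop[loglik(pop, p_sup) >= loglik(pop, p_inf)]   # reject inferior-looking samples
    if len(pop) < 40:                  # guard against over-aggressive filtering
        continue
    order = np.argsort([fitness(x) for x in pop])
    p_sup = 0.7 * p_sup + 0.3 * pop[order[-20:]].mean(0)  # blend in former model info
    p_inf = 0.7 * p_inf + 0.3 * pop[order[:20]].mean(0)
best = max((rng.random((200, 20)) < p_sup).astype(int), key=fitness)
print(fitness(best))
```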
Huang, Xin
EAST '14: "Does Pareto's Law Apply ..."
Does Pareto's Law Apply to Evidence Distribution in Software Engineering? An Initial Report
Hao Tang, You Zhou, Xin Huang, and Guoping Rong (Nanjing University, China) Data is the source as well as the raw format of evidence. As an important research methodology in evidence-based software engineering, systematic literature reviews (SLRs) are used for identifying and critically appraising the evidence, i.e., empirical studies that report data about specific research questions. The 80/20 Rule (or Pareto's Law) describes a 'vital few' phenomenon widely observed in many disciplines over the last century. However, the applicability of Pareto's Law to evidence distribution in software engineering (SE) has never been tested. The objective of this paper is to investigate the applicability of Pareto's Law to the evidence distribution in specific topic areas of software engineering (in the form of systematic reviews), which may help us better understand the possible distribution of evidence in software engineering and further improve the effectiveness and efficiency of literature search. We performed a tertiary study of SLRs in software engineering dated between 2004 and 2012, and tested Pareto's Law by collecting, analyzing, and interpreting the distribution (over publication venues) of the primary studies reported in the existing SLRs. Our search identified 255 SLRs, 107 of which were included according to the selection criteria. The analysis of the data extracted from these SLRs presents a preliminary view of the evidence (study) distribution in software engineering. The nonuniform distribution of evidence is supported by the data from the existing SLRs in SE; however, the observed 'vital few' relation between studies and venues is weaker than the 80/20 Rule states. We suggest top referenced venues for researchers searching for studies in software engineering. It is also worth noting that the primary studies are improperly or incompletely reported in many SLRs. @InProceedings{EAST14p9, author = {Hao Tang and You Zhou and Xin Huang and Guoping Rong}, title = {Does Pareto's Law Apply to Evidence Distribution in Software Engineering? An Initial Report}, booktitle = {Proc.\ EAST}, publisher = {ACM}, pages = {9--16}, doi = {}, year = {2014}, }
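The paper's central check reduces to a simple computation: rank venues by primary-study count and ask what share of the studies the top 20% of venues hold. The sketch below uses invented counts, not the study's data; with numbers like these the 'vital few' share comes out well under 80%, the kind of weaker relation the abstract reports.

```python
# Hedged sketch: an 80/20 check over hypothetical venue counts.
from collections import Counter

venue_counts = Counter({"ICSE": 120, "TSE": 95, "ESEM": 60, "IST": 55,
                        "JSS": 40, "FSE": 25, "EMSE": 20, "ASE": 10,
                        "SEKE": 5, "Other": 70})        # invented numbers
total = sum(venue_counts.values())
ranked = [n for _, n in venue_counts.most_common()]     # descending by count
top20 = ranked[: max(1, round(0.2 * len(ranked)))]      # the "vital few" venues
print(f"top 20% of venues hold {sum(top20) / total:.0%} of the studies")
```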
Hu, Jiajun
EAST '14: "Empirical Studies on the NLP ..."
Empirical Studies on the NLP Techniques for Source Code Data Preprocessing
Xiaobing Sun, Xiangyue Liu, Jiajun Hu, and Junwu Zhu (Yangzhou University, China; Nanjing University, China) Program comprehension usually focuses on the significance of textual information for capturing the programmers' intent and knowledge in the software, in particular the source code. In source code, most of the data is unstructured, such as the natural language text in comments and identifier names. Researchers in the software engineering community have developed many techniques for handling such unstructured data, such as natural language processing (NLP) and information retrieval (IR). Before applying IR techniques to unstructured source code, we must preprocess the identifiers and comments, since this text differs from that used in daily life. During this process, several operations, e.g., tokenization, splitting, and stemming, are usually applied to the unstructured source code. These preprocessing operations affect the quality of the data used in the IR process, but how they affect the results of IR is still an open problem. To the best of our knowledge, there are no studies focusing on this problem. This paper attempts to fill this gap and conducts empirical studies to show the differences before and after these preprocessing operations. The results show some interesting phenomena depending on whether these preprocessing operations are used. @InProceedings{EAST14p32, author = {Xiaobing Sun and Xiangyue Liu and Jiajun Hu and Junwu Zhu}, title = {Empirical Studies on the NLP Techniques for Source Code Data Preprocessing}, booktitle = {Proc.\ EAST}, publisher = {ACM}, pages = {32--39}, doi = {}, year = {2014}, }
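The three operations the abstract names are standard enough to sketch. Below is one plausible preprocessing chain (tokenization, camelCase/snake_case splitting, Porter stemming via NLTK); the regexes are illustrative choices, and real studies vary them, which is exactly the effect the paper examines.

```python
# Hedged sketch: tokenize -> split identifiers -> stem, over a line of code.
import re
from nltk.stem import PorterStemmer

stem = PorterStemmer().stem

def tokenize(code):
    """Pull word-like tokens out of code and comments."""
    return re.findall(r"[A-Za-z_][A-Za-z0-9_]*", code)

def split_identifier(tok):
    """Split snake_case and camelCase into natural-language words."""
    words = []
    for part in tok.split("_"):
        words += re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|[0-9]+", part)
    return [w.lower() for w in words if w]

code = "int maxRetryCount = parse_timeout(cfg); // retries before failing"
terms = [stem(w) for t in tokenize(code) for w in split_identifier(t)]
print(terms)   # e.g. ['int', 'max', 'retri', 'count', 'pars', 'timeout', ...]
```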
Jiang, Jing
EAST '14: "Predicting the Number of Forks ..."
Predicting the Number of Forks for Open Source Software Project
Fangwei Chen, Lei Li, Jing Jiang, and Li Zhang (Beihang University, China) GitHub is a successful open source software platform which attracts many developers. On GitHub, developers are allowed to fork and copy repositories without asking for permission, which makes contributing to projects much easier than it has ever been. It is therefore valuable to predict the number of forks of open source software projects: the prediction can help GitHub recommend popular projects and guide developers to projects which are likely to succeed and are worthy of their contribution. In this paper, we use stepwise regression to design a model that predicts the number of forks of open source software projects. We collect a dataset of 1,000 repositories through GitHub's APIs, use 700 repositories to compute the attribute weights and build the model, and use the remaining 300 repositories to verify its prediction accuracy. Advantages of our model include: (1) Some attributes used in our model are new, because GitHub differs from traditional open source software platforms and offers new features, which we use to build the model. (2) Our model uses project information from within t months after a project's creation and predicts the number of forks in month T (t < T), so users can set the combination of time parameters that satisfies their needs. (3) Our model predicts the exact number of forks, rather than a range. (4) Experiments show that our model has high prediction accuracy. For example, using project information from within 3 months to predict the number of forks in month 6 after creation, the correlation coefficient is as high as 0.992, and the median absolute difference between predicted and actual values is only 1.8, i.e., the predicted number of forks is very close to the actual number. Our model also has high prediction accuracy for other time-parameter settings. @InProceedings{EAST14p40, author = {Fangwei Chen and Lei Li and Jing Jiang and Li Zhang}, title = {Predicting the Number of Forks for Open Source Software Project}, booktitle = {Proc.\ EAST}, publisher = {ACM}, pages = {40--47}, doi = {}, year = {2014}, }
Li, Bin
EAST '14: "Top-Down Program Comprehension ..."
Top-Down Program Comprehension with Multi-layer Clustering Based on LDA
Xiangyue Liu, Xiaobing Sun, and Bin Li (Yangzhou University, China; Nanjing University, China) Software change is a fundamental ingredient of software maintenance and evolution, during which developers usually need to understand the system quickly and accurately. With the increasing size and complexity of evolving systems, program comprehension becomes an increasingly difficult activity. In this paper, we propose a novel top-down program comprehension approach which utilizes the Latent Dirichlet Allocation (LDA) model to cluster the whole system from the coarse class level down to the finer method level. Our approach provides a multi-layer view of the whole system at different granularity levels and supports a stepwise comprehension activity, which can effectively guide developers in quickly understanding the whole system. This paper outlines the details of how to cluster the system in multiple layers based on LDA and describes the evaluation plans. @InProceedings{EAST14p56, author = {Xiangyue Liu and Xiaobing Sun and Bin Li}, title = {Top-Down Program Comprehension with Multi-layer Clustering Based on LDA}, booktitle = {Proc.\ EAST}, publisher = {ACM}, pages = {56--59}, doi = {}, year = {2014}, }
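A compact sketch of the coarse layer of this idea: treat each class's identifiers and comments as a document, fit LDA, and group classes by dominant topic; the method-level layer would repeat the procedure inside each class cluster. The class documents below are invented, and scikit-learn's LDA stands in for whatever implementation the authors use.

```python
# Hedged sketch: class-level LDA clustering, the first layer of the approach.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

class_docs = {                      # class name -> extracted identifier/comment words
    "CartController": "cart item add remove checkout total price",
    "OrderService":   "order checkout payment invoice total price",
    "UserDao":        "user query insert update database connection",
    "SessionStore":   "user session token database cache connection",
}
X = CountVectorizer().fit_transform(class_docs.values())
lda = LatentDirichletAllocation(n_components=2, random_state=0)
cluster = np.argmax(lda.fit_transform(X), axis=1)   # dominant topic per class
for name, c in zip(class_docs, cluster):
    print(c, name)
# The finer method-level layer would re-run the same procedure on the method
# documents within each class cluster, yielding the multi-layer view.
```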
Li, Lei
EAST '14: "Predicting the Number of Forks ..."
Predicting the Number of Forks for Open Source Software Project
Fangwei Chen, Lei Li, Jing Jiang, and Li Zhang (Beihang University, China) GitHub is a successful open source software platform which attracts many developers. On GitHub, developers are allowed to fork and copy repositories without asking for permission, which makes contributing to projects much easier than it has ever been. It is therefore valuable to predict the number of forks of open source software projects: the prediction can help GitHub recommend popular projects and guide developers to projects which are likely to succeed and are worthy of their contribution. In this paper, we use stepwise regression to design a model that predicts the number of forks of open source software projects. We collect a dataset of 1,000 repositories through GitHub's APIs, use 700 repositories to compute the attribute weights and build the model, and use the remaining 300 repositories to verify its prediction accuracy. Advantages of our model include: (1) Some attributes used in our model are new, because GitHub differs from traditional open source software platforms and offers new features, which we use to build the model. (2) Our model uses project information from within t months after a project's creation and predicts the number of forks in month T (t < T), so users can set the combination of time parameters that satisfies their needs. (3) Our model predicts the exact number of forks, rather than a range. (4) Experiments show that our model has high prediction accuracy. For example, using project information from within 3 months to predict the number of forks in month 6 after creation, the correlation coefficient is as high as 0.992, and the median absolute difference between predicted and actual values is only 1.8, i.e., the predicted number of forks is very close to the actual number. Our model also has high prediction accuracy for other time-parameter settings. @InProceedings{EAST14p40, author = {Fangwei Chen and Lei Li and Jing Jiang and Li Zhang}, title = {Predicting the Number of Forks for Open Source Software Project}, booktitle = {Proc.\ EAST}, publisher = {ACM}, pages = {40--47}, doi = {}, year = {2014}, }
Liu, Xiangyue
EAST '14: "Top-Down Program Comprehension ..."
Top-Down Program Comprehension with Multi-layer Clustering Based on LDA
Xiangyue Liu, Xiaobing Sun, and Bin Li (Yangzhou University, China; Nanjing University, China) Software change is a fundamental ingredient of software maintenance and evolution, during which developers usually need to understand the system quickly and accurately. With the increasing size and complexity of evolving systems, program comprehension becomes an increasingly difficult activity. In this paper, we propose a novel top-down program comprehension approach which utilizes the Latent Dirichlet Allocation (LDA) model to cluster the whole system from the coarse class level down to the finer method level. Our approach provides a multi-layer view of the whole system at different granularity levels and supports a stepwise comprehension activity, which can effectively guide developers in quickly understanding the whole system. This paper outlines the details of how to cluster the system in multiple layers based on LDA and describes the evaluation plans. @InProceedings{EAST14p56, author = {Xiangyue Liu and Xiaobing Sun and Bin Li}, title = {Top-Down Program Comprehension with Multi-layer Clustering Based on LDA}, booktitle = {Proc.\ EAST}, publisher = {ACM}, pages = {56--59}, doi = {}, year = {2014}, }
EAST '14: "Empirical Studies on the NLP ..."
Empirical Studies on the NLP Techniques for Source Code Data Preprocessing
Xiaobing Sun, Xiangyue Liu, Jiajun Hu, and Junwu Zhu (Yangzhou University, China; Nanjing University, China) Program comprehension usually focuses on the significance of textual information for capturing the programmers' intent and knowledge in the software, in particular the source code. In source code, most of the data is unstructured, such as the natural language text in comments and identifier names. Researchers in the software engineering community have developed many techniques for handling such unstructured data, such as natural language processing (NLP) and information retrieval (IR). Before applying IR techniques to unstructured source code, we must preprocess the identifiers and comments, since this text differs from that used in daily life. During this process, several operations, e.g., tokenization, splitting, and stemming, are usually applied to the unstructured source code. These preprocessing operations affect the quality of the data used in the IR process, but how they affect the results of IR is still an open problem. To the best of our knowledge, there are no studies focusing on this problem. This paper attempts to fill this gap and conducts empirical studies to show the differences before and after these preprocessing operations. The results show some interesting phenomena depending on whether these preprocessing operations are used. @InProceedings{EAST14p32, author = {Xiaobing Sun and Xiangyue Liu and Jiajun Hu and Junwu Zhu}, title = {Empirical Studies on the NLP Techniques for Source Code Data Preprocessing}, booktitle = {Proc.\ EAST}, publisher = {ACM}, pages = {32--39}, doi = {}, year = {2014}, }
Li, Xuejun
EAST '14: "A Quantitative Analysis of ..."
A Quantitative Analysis of Survey Data for Software Design Patterns
Cheng Zhang, Futian Wang, Rongbin Xu, Xuejun Li, and Yun Yang (Anhui University, China) Software design patterns are largely concerned with improving the practices and products of software development. However, there has been no systematic analysis of how users' profiles influence the effectiveness of design patterns. The aim of this paper is to investigate the links between design patterns and the respondents' demographic data from our previous online survey. We employ a statistical approach to analyse the quantitative data collected from the survey respondents. By analysing the demographic data from the 206 questionnaire responses, we find that the percentage of positive assessments of pattern use increases with greater experience with design patterns. The results show that the functions of design patterns are influenced by users' experience rather than by their roles. @InProceedings{EAST14p48, author = {Cheng Zhang and Futian Wang and Rongbin Xu and Xuejun Li and Yun Yang}, title = {A Quantitative Analysis of Survey Data for Software Design Patterns}, booktitle = {Proc.\ EAST}, publisher = {ACM}, pages = {48--55}, doi = {}, year = {2014}, }
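One plausible form of such a statistical analysis is a chi-square test of independence between experience level and assessment of pattern use. The contingency table below is invented (though it sums to the survey's 206 responses), so this only illustrates the shape of the analysis, not the paper's actual data or method.

```python
# Hedged sketch: chi-square test of experience level vs. assessment of patterns.
import numpy as np
from scipy.stats import chi2_contingency

#                 positive  neutral  negative   (invented counts, total = 206)
table = np.array([[20,       15,      10],      # < 1 year of pattern experience
                  [45,       20,       8],      # 1-3 years
                  [60,       18,      10]])     # > 3 years
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")
# A small p would support the claim that assessment of pattern use varies
# with experience rather than being uniform across the groups.
```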
Li, Zhi
EAST '14: "System-Level Testing of Cyber-Physical ..."
System-Level Testing of Cyber-Physical Systems Based on Problem Concerns
Zhi Li and Lu Chen (Guangxi Normal University, China) In this paper we propose a problem-oriented approach to system-level testing of cyber-physical systems based on Jackson's notion of problem concerns. We draw close associations between problem concerns and potential faults in the problem space, which motivates system-level testing. Finally, we put forward a research agenda with the goal of building a repository of system faults and mining particular problem concerns for system-level testing. @InProceedings{EAST14p60, author = {Zhi Li and Lu Chen}, title = {System-Level Testing of Cyber-Physical Systems Based on Problem Concerns}, booktitle = {Proc.\ EAST}, publisher = {ACM}, pages = {60--62}, doi = {}, year = {2014}, }
Rong, Guoping
EAST '14: "Does Pareto's Law Apply ..."
Does Pareto's Law Apply to Evidence Distribution in Software Engineering? An Initial Report
Hao Tang, You Zhou, Xin Huang, and Guoping Rong (Nanjing University, China) Data is the source as well as the raw format of evidence. As an important research methodology in evidence-based software engineering, systematic literature reviews (SLRs) are used for identifying and critically appraising the evidence, i.e., empirical studies that report data about specific research questions. The 80/20 Rule (or Pareto's Law) describes a 'vital few' phenomenon widely observed in many disciplines over the last century. However, the applicability of Pareto's Law to evidence distribution in software engineering (SE) has never been tested. The objective of this paper is to investigate the applicability of Pareto's Law to the evidence distribution in specific topic areas of software engineering (in the form of systematic reviews), which may help us better understand the possible distribution of evidence in software engineering and further improve the effectiveness and efficiency of literature search. We performed a tertiary study of SLRs in software engineering dated between 2004 and 2012, and tested Pareto's Law by collecting, analyzing, and interpreting the distribution (over publication venues) of the primary studies reported in the existing SLRs. Our search identified 255 SLRs, 107 of which were included according to the selection criteria. The analysis of the data extracted from these SLRs presents a preliminary view of the evidence (study) distribution in software engineering. The nonuniform distribution of evidence is supported by the data from the existing SLRs in SE; however, the observed 'vital few' relation between studies and venues is weaker than the 80/20 Rule states. We suggest top referenced venues for researchers searching for studies in software engineering. It is also worth noting that the primary studies are improperly or incompletely reported in many SLRs. @InProceedings{EAST14p9, author = {Hao Tang and You Zhou and Xin Huang and Guoping Rong}, title = {Does Pareto's Law Apply to Evidence Distribution in Software Engineering? An Initial Report}, booktitle = {Proc.\ EAST}, publisher = {ACM}, pages = {9--16}, doi = {}, year = {2014}, }
Shen, Beijun
EAST '14: "Impact of Consecutive Changes ..."
Impact of Consecutive Changes on Later File Versions
Meixi Dai, Beijun Shen, Tao Zhang, and Min Zhao (Shanghai Jiao Tong University, China; PLA University of Science and Technology, China) By analyzing histories of program versions, many studies have shown that software quality is associated with history-related metrics, such as code-related, commit-related, developer-related, process-related, and organizational metrics. It has also been revealed that consecutive changes at the commit level are strongly associated with software defects. In this paper, we introduce two novel concepts of consecutive changes: the CFC (chain of consecutive bug-fixing file versions) and the CAC (chain of consecutive file versions in which adjacent versions are submitted by different developers). We then conduct several experiments on three open-source projects from GitHub to explore the correlation between consecutive changes and software quality. Our main findings are: 1) CFCs and CACs widely exist in file version histories; 2) consecutive changes have a strong negative impact on later file versions in the short term, especially when the length of the consecutive change chain is 4 or 5. @InProceedings{EAST14p17, author = {Meixi Dai and Beijun Shen and Tao Zhang and Min Zhao}, title = {Impact of Consecutive Changes on Later File Versions}, booktitle = {Proc.\ EAST}, publisher = {ACM}, pages = {17--24}, doi = {}, year = {2014}, }
Sun, Xiaobing
EAST '14: "Top-Down Program Comprehension ..."
Top-Down Program Comprehension with Multi-layer Clustering Based on LDA
Xiangyue Liu, Xiaobing Sun, and Bin Li (Yangzhou University, China; Nanjing University, China) Software change is a fundamental ingredient of software maintenance and evolution, during which developers usually need to understand the system quickly and accurately. With the increasing size and complexity of evolving systems, program comprehension becomes an increasingly difficult activity. In this paper, we propose a novel top-down program comprehension approach which utilizes the Latent Dirichlet Allocation (LDA) model to cluster the whole system from the coarse class level down to the finer method level. Our approach provides a multi-layer view of the whole system at different granularity levels and supports a stepwise comprehension activity, which can effectively guide developers in quickly understanding the whole system. This paper outlines the details of how to cluster the system in multiple layers based on LDA and describes the evaluation plans. @InProceedings{EAST14p56, author = {Xiangyue Liu and Xiaobing Sun and Bin Li}, title = {Top-Down Program Comprehension with Multi-layer Clustering Based on LDA}, booktitle = {Proc.\ EAST}, publisher = {ACM}, pages = {56--59}, doi = {}, year = {2014}, }
EAST '14: "Empirical Studies on the NLP ..."
Empirical Studies on the NLP Techniques for Source Code Data Preprocessing
Xiaobing Sun, Xiangyue Liu, Jiajun Hu, and Junwu Zhu (Yangzhou University, China; Nanjing University, China) Program comprehension usually focuses on the significance of textual information for capturing the programmers' intent and knowledge in the software, in particular the source code. In source code, most of the data is unstructured, such as the natural language text in comments and identifier names. Researchers in the software engineering community have developed many techniques for handling such unstructured data, such as natural language processing (NLP) and information retrieval (IR). Before applying IR techniques to unstructured source code, we must preprocess the identifiers and comments, since this text differs from that used in daily life. During this process, several operations, e.g., tokenization, splitting, and stemming, are usually applied to the unstructured source code. These preprocessing operations affect the quality of the data used in the IR process, but how they affect the results of IR is still an open problem. To the best of our knowledge, there are no studies focusing on this problem. This paper attempts to fill this gap and conducts empirical studies to show the differences before and after these preprocessing operations. The results show some interesting phenomena depending on whether these preprocessing operations are used. @InProceedings{EAST14p32, author = {Xiaobing Sun and Xiangyue Liu and Jiajun Hu and Junwu Zhu}, title = {Empirical Studies on the NLP Techniques for Source Code Data Preprocessing}, booktitle = {Proc.\ EAST}, publisher = {ACM}, pages = {32--39}, doi = {}, year = {2014}, }
Tang, Hao
EAST '14: "Does Pareto's Law Apply ..."
Does Pareto's Law Apply to Evidence Distribution in Software Engineering? An Initial Report
Hao Tang, You Zhou, Xin Huang, and Guoping Rong (Nanjing University, China) Data is the source as well as the raw format of evidence. As an important research methodology in evidence-based software engineering, systematic literature reviews (SLRs) are used for identifying and critically appraising the evidence, i.e., empirical studies that report data about specific research questions. The 80/20 Rule (or Pareto's Law) describes a 'vital few' phenomenon widely observed in many disciplines over the last century. However, the applicability of Pareto's Law to evidence distribution in software engineering (SE) has never been tested. The objective of this paper is to investigate the applicability of Pareto's Law to the evidence distribution in specific topic areas of software engineering (in the form of systematic reviews), which may help us better understand the possible distribution of evidence in software engineering and further improve the effectiveness and efficiency of literature search. We performed a tertiary study of SLRs in software engineering dated between 2004 and 2012, and tested Pareto's Law by collecting, analyzing, and interpreting the distribution (over publication venues) of the primary studies reported in the existing SLRs. Our search identified 255 SLRs, 107 of which were included according to the selection criteria. The analysis of the data extracted from these SLRs presents a preliminary view of the evidence (study) distribution in software engineering. The nonuniform distribution of evidence is supported by the data from the existing SLRs in SE; however, the observed 'vital few' relation between studies and venues is weaker than the 80/20 Rule states. We suggest top referenced venues for researchers searching for studies in software engineering. It is also worth noting that the primary studies are improperly or incompletely reported in many SLRs. @InProceedings{EAST14p9, author = {Hao Tang and You Zhou and Xin Huang and Guoping Rong}, title = {Does Pareto's Law Apply to Evidence Distribution in Software Engineering? An Initial Report}, booktitle = {Proc.\ EAST}, publisher = {ACM}, pages = {9--16}, doi = {}, year = {2014}, }
Wang, Futian
EAST '14: "A Quantitative Analysis of ..."
A Quantitative Analysis of Survey Data for Software Design Patterns
Cheng Zhang, Futian Wang, Rongbin Xu, Xuejun Li, and Yun Yang (Anhui University, China) Software design patterns are largely concerned with improving the practices and products of software development. However, there has been no systematic analysis of how users' profiles influence the effectiveness of design patterns. The aim of this paper is to investigate the links between design patterns and the respondents' demographic data from our previous online survey. We employ a statistical approach to analyse the quantitative data collected from the survey respondents. By analysing the demographic data from the 206 questionnaire responses, we find that the percentage of positive assessments of pattern use increases with greater experience with design patterns. The results show that the functions of design patterns are influenced by users' experience rather than by their roles. @InProceedings{EAST14p48, author = {Cheng Zhang and Futian Wang and Rongbin Xu and Xuejun Li and Yun Yang}, title = {A Quantitative Analysis of Survey Data for Software Design Patterns}, booktitle = {Proc.\ EAST}, publisher = {ACM}, pages = {48--55}, doi = {}, year = {2014}, }
Xu, Rongbin
EAST '14: "A Quantitative Analysis of ..."
A Quantitative Analysis of Survey Data for Software Design Patterns
Cheng Zhang, Futian Wang, Rongbin Xu, Xuejun Li, and Yun Yang (Anhui University, China) Software design patterns are largely concerned with improving the practices and products of software development. However, there has been no systematic analysis of how users' profiles influence the effectiveness of design patterns. The aim of this paper is to investigate the links between design patterns and the respondents' demographic data from our previous online survey. We employ a statistical approach to analyse the quantitative data collected from the survey respondents. By analysing the demographic data from the 206 questionnaire responses, we find that the percentage of positive assessments of pattern use increases with greater experience with design patterns. The results show that the functions of design patterns are influenced by users' experience rather than by their roles. @InProceedings{EAST14p48, author = {Cheng Zhang and Futian Wang and Rongbin Xu and Xuejun Li and Yun Yang}, title = {A Quantitative Analysis of Survey Data for Software Design Patterns}, booktitle = {Proc.\ EAST}, publisher = {ACM}, pages = {48--55}, doi = {}, year = {2014}, }
Xu, Ting
EAST '14: "Online Reliability Prediction ..."
Online Reliability Prediction of Service Composition
Zuohua Ding, Ting Xu, and Mei-Hwa Chen (Zhejiang Sci-Tech University, China; SUNY Albany, USA) Reliability is an important quality attribute for service-oriented software. Existing approaches use static data collected during testing to predict software reliability, and so do not address the dynamism of service behavior after deployment. In this paper, we propose a method to predict, from any time moment, the reliability of a service composition in the near future. We first collect service runtime data and predict future failure data using the ARIMA model. We then predict the reliability of each port based on the Nelson model, and finally we compute the reliability of the composite services. An Online Shop example is used to demonstrate the effectiveness of our method. @InProceedings{EAST14p1, author = {Zuohua Ding and Ting Xu and Mei-Hwa Chen}, title = {Online Reliability Prediction of Service Composition}, booktitle = {Proc.\ EAST}, publisher = {ACM}, pages = {1--8}, doi = {}, year = {2014}, }
Yang, Yun
EAST '14: "A Quantitative Analysis of ..."
A Quantitative Analysis of Survey Data for Software Design Patterns
Cheng Zhang, Futian Wang, Rongbin Xu, Xuejun Li, and Yun Yang (Anhui University, China) Software design patterns are largely concerned with improving the practices and products of software development. However, there has been no systematic analysis of how users' profiles influence the effectiveness of design patterns. The aim of this paper is to investigate the links between design patterns and the respondents' demographic data from our previous online survey. We employ a statistical approach to analyse the quantitative data collected from the survey respondents. By analysing the demographic data from the 206 questionnaire responses, we find that the percentage of positive assessments of pattern use increases with greater experience with design patterns. The results show that the functions of design patterns are influenced by users' experience rather than by their roles. @InProceedings{EAST14p48, author = {Cheng Zhang and Futian Wang and Rongbin Xu and Xuejun Li and Yun Yang}, title = {A Quantitative Analysis of Survey Data for Software Design Patterns}, booktitle = {Proc.\ EAST}, publisher = {ACM}, pages = {48--55}, doi = {}, year = {2014}, }
Yu, Juan
EAST '14: "Estimation of Distribution ..."
Estimation of Distribution Algorithm using Variety of Information
Juan Yu and Yuyao He (Northwestern Polytechnical University, China) In research on estimation of distribution algorithms, earlier probability-model information and inferior individuals are usually discarded, yet they may contain useful information. In this paper, the earlier probability information is introduced to avoid the premature convergence caused by continually selecting the superior individuals of the current population to build the probability model, and the individuals sampled from the superior probability model are filtered through an inferior probability model to avoid generating inferior individuals. The algorithm is evaluated on widely used knapsack examples; the results verify the validity of the proposed method, and simulation and analysis suggest how to choose its parameter. @InProceedings{EAST14p25, author = {Juan Yu and Yuyao He}, title = {Estimation of Distribution Algorithm using Variety of Information}, booktitle = {Proc.\ EAST}, publisher = {ACM}, pages = {25--31}, doi = {}, year = {2014}, }
Zhang, Cheng
EAST '14: "A Quantitative Analysis of ..."
A Quantitative Analysis of Survey Data for Software Design Patterns
Cheng Zhang, Futian Wang, Rongbin Xu, Xuejun Li, and Yun Yang (Anhui University, China) Software design patterns are largely concerned with improving the practices and products of software development. However, there has been no systematic analysis of how users' profiles influence the effectiveness of design patterns. The aim of this paper is to investigate the links between design patterns and the respondents' demographic data from our previous online survey. We employ a statistical approach to analyse the quantitative data collected from the survey respondents. By analysing the demographic data from the 206 questionnaire responses, we find that the percentage of positive assessments of pattern use increases with greater experience with design patterns. The results show that the functions of design patterns are influenced by users' experience rather than by their roles. @InProceedings{EAST14p48, author = {Cheng Zhang and Futian Wang and Rongbin Xu and Xuejun Li and Yun Yang}, title = {A Quantitative Analysis of Survey Data for Software Design Patterns}, booktitle = {Proc.\ EAST}, publisher = {ACM}, pages = {48--55}, doi = {}, year = {2014}, }
Zhang, Li
EAST '14: "Predicting the Number of Forks ..."
Predicting the Number of Forks for Open Source Software Project
Fangwei Chen, Lei Li, Jing Jiang, and Li Zhang (Beihang University, China) GitHub is a successful open source software platform which attracts many developers. On GitHub, developers are allowed to fork and copy repositories without asking for permission, which makes contributing to projects much easier than it has ever been. It is therefore valuable to predict the number of forks of open source software projects: the prediction can help GitHub recommend popular projects and guide developers to projects which are likely to succeed and are worthy of their contribution. In this paper, we use stepwise regression to design a model that predicts the number of forks of open source software projects. We collect a dataset of 1,000 repositories through GitHub's APIs, use 700 repositories to compute the attribute weights and build the model, and use the remaining 300 repositories to verify its prediction accuracy. Advantages of our model include: (1) Some attributes used in our model are new, because GitHub differs from traditional open source software platforms and offers new features, which we use to build the model. (2) Our model uses project information from within t months after a project's creation and predicts the number of forks in month T (t < T), so users can set the combination of time parameters that satisfies their needs. (3) Our model predicts the exact number of forks, rather than a range. (4) Experiments show that our model has high prediction accuracy. For example, using project information from within 3 months to predict the number of forks in month 6 after creation, the correlation coefficient is as high as 0.992, and the median absolute difference between predicted and actual values is only 1.8, i.e., the predicted number of forks is very close to the actual number. Our model also has high prediction accuracy for other time-parameter settings. @InProceedings{EAST14p40, author = {Fangwei Chen and Lei Li and Jing Jiang and Li Zhang}, title = {Predicting the Number of Forks for Open Source Software Project}, booktitle = {Proc.\ EAST}, publisher = {ACM}, pages = {40--47}, doi = {}, year = {2014}, }
Zhang, Tao
EAST '14: "Impact of Consecutive Changes ..."
Impact of Consecutive Changes on Later File Versions
Meixi Dai, Beijun Shen, Tao Zhang, and Min Zhao (Shanghai Jiao Tong University, China; PLA University of Science and Technology, China) By analyzing histories of program versions, many studies have shown that software quality is associated with history-related metrics, such as code-related, commit-related, developer-related, process-related, and organizational metrics. It has also been revealed that consecutive changes at the commit level are strongly associated with software defects. In this paper, we introduce two novel concepts of consecutive changes: the CFC (chain of consecutive bug-fixing file versions) and the CAC (chain of consecutive file versions in which adjacent versions are submitted by different developers). We then conduct several experiments on three open-source projects from GitHub to explore the correlation between consecutive changes and software quality. Our main findings are: 1) CFCs and CACs widely exist in file version histories; 2) consecutive changes have a strong negative impact on later file versions in the short term, especially when the length of the consecutive change chain is 4 or 5. @InProceedings{EAST14p17, author = {Meixi Dai and Beijun Shen and Tao Zhang and Min Zhao}, title = {Impact of Consecutive Changes on Later File Versions}, booktitle = {Proc.\ EAST}, publisher = {ACM}, pages = {17--24}, doi = {}, year = {2014}, }
Zhao, Min
EAST '14: "Impact of Consecutive Changes ..."
Impact of Consecutive Changes on Later File Versions
Meixi Dai, Beijun Shen, Tao Zhang, and Min Zhao (Shanghai Jiao Tong University, China; PLA University of Science and Technology, China) By analyzing histories of program versions, many studies have shown that software quality is associated with history-related metrics, such as code-related, commit-related, developer-related, process-related, and organizational metrics. It has also been revealed that consecutive changes at the commit level are strongly associated with software defects. In this paper, we introduce two novel concepts of consecutive changes: the CFC (chain of consecutive bug-fixing file versions) and the CAC (chain of consecutive file versions in which adjacent versions are submitted by different developers). We then conduct several experiments on three open-source projects from GitHub to explore the correlation between consecutive changes and software quality. Our main findings are: 1) CFCs and CACs widely exist in file version histories; 2) consecutive changes have a strong negative impact on later file versions in the short term, especially when the length of the consecutive change chain is 4 or 5. @InProceedings{EAST14p17, author = {Meixi Dai and Beijun Shen and Tao Zhang and Min Zhao}, title = {Impact of Consecutive Changes on Later File Versions}, booktitle = {Proc.\ EAST}, publisher = {ACM}, pages = {17--24}, doi = {}, year = {2014}, }
Zhou, You
EAST '14: "Does Pareto's Law Apply ..."
Does Pareto's Law Apply to Evidence Distribution in Software Engineering? An Initial Report
Hao Tang, You Zhou, Xin Huang, and Guoping Rong (Nanjing University, China) Data is the source as well as the raw format of evidence. As an important research methodology in evidence-based software engineering, systematic literature reviews (SLRs) are used for identifying and critically appraising the evidence, i.e., empirical studies that report data about specific research questions. The 80/20 Rule (or Pareto's Law) describes a 'vital few' phenomenon widely observed in many disciplines over the last century. However, the applicability of Pareto's Law to evidence distribution in software engineering (SE) has never been tested. The objective of this paper is to investigate the applicability of Pareto's Law to the evidence distribution in specific topic areas of software engineering (in the form of systematic reviews), which may help us better understand the possible distribution of evidence in software engineering and further improve the effectiveness and efficiency of literature search. We performed a tertiary study of SLRs in software engineering dated between 2004 and 2012, and tested Pareto's Law by collecting, analyzing, and interpreting the distribution (over publication venues) of the primary studies reported in the existing SLRs. Our search identified 255 SLRs, 107 of which were included according to the selection criteria. The analysis of the data extracted from these SLRs presents a preliminary view of the evidence (study) distribution in software engineering. The nonuniform distribution of evidence is supported by the data from the existing SLRs in SE; however, the observed 'vital few' relation between studies and venues is weaker than the 80/20 Rule states. We suggest top referenced venues for researchers searching for studies in software engineering. It is also worth noting that the primary studies are improperly or incompletely reported in many SLRs. @InProceedings{EAST14p9, author = {Hao Tang and You Zhou and Xin Huang and Guoping Rong}, title = {Does Pareto's Law Apply to Evidence Distribution in Software Engineering? An Initial Report}, booktitle = {Proc.\ EAST}, publisher = {ACM}, pages = {9--16}, doi = {}, year = {2014}, }
Zhu, Junwu
EAST '14: "Empirical Studies on the NLP ..."
Empirical Studies on the NLP Techniques for Source Code Data Preprocessing
Xiaobing Sun, Xiangyue Liu, Jiajun Hu, and Junwu Zhu (Yangzhou University, China; Nanjing University, China) Program comprehension usually focuses on the significance of textual information for capturing the programmers' intent and knowledge in the software, in particular the source code. In source code, most of the data is unstructured, such as the natural language text in comments and identifier names. Researchers in the software engineering community have developed many techniques for handling such unstructured data, such as natural language processing (NLP) and information retrieval (IR). Before applying IR techniques to unstructured source code, we must preprocess the identifiers and comments, since this text differs from that used in daily life. During this process, several operations, e.g., tokenization, splitting, and stemming, are usually applied to the unstructured source code. These preprocessing operations affect the quality of the data used in the IR process, but how they affect the results of IR is still an open problem. To the best of our knowledge, there are no studies focusing on this problem. This paper attempts to fill this gap and conducts empirical studies to show the differences before and after these preprocessing operations. The results show some interesting phenomena depending on whether these preprocessing operations are used. @InProceedings{EAST14p32, author = {Xiaobing Sun and Xiangyue Liu and Jiajun Hu and Junwu Zhu}, title = {Empirical Studies on the NLP Techniques for Source Code Data Preprocessing}, booktitle = {Proc.\ EAST}, publisher = {ACM}, pages = {32--39}, doi = {}, year = {2014}, }
29 authors