Yuming Zhou

Email: zhouyuming(at)nju.edu.cn

Department of Computer Science and Technology
Nanjing University
163 Xianlin Avenue, Qixia District
Nanjing, Jiangsu Province, China, 210023


I am currently a professor in the Department of Computer Science and Technology at Nanjing University. I received my Ph.D. degree in computer science from Southeast University in 2003. From January 2003 to December 2004, I was a researcher at Tsinghua University. From February 2005 to February 2008, I was a researcher at Hong Kong Polytechnic University.


Open positions (New!)

Current interests
My research interests focus on software quality assurance in software engineering, especially on software testing, defect prediction/detection, and program analysis.

Our objective is to provide strong (i.e., simple yet effective) baseline approaches for important problems in software quality assurance (see examples). A baseline approach defines a meaningful point of reference and hence allows a meaningful evaluation of any new approach against previous approaches. The ongoing use of a strong baseline approach would help advance the state-of-the-art more reliably and quickly. If you are interested in our “SEE” (Simple yEt Effective) group, please contact me.

Teaching

Awards/honors

Postdocs

Students

Alumni

Selected papers

  1. Yulou Cao, Lin Chen, Wanwangying Ma, Yanhui Li, Yuming Zhou, Linzhang Wang. Towards better dependency management: A first look at dependency smells in Python projects. IEEE Transactions on Software Engineering, accepted, 2022.
  2. Shiran Liu, Zhaoqiang Guo, Yanhui Li, Chuanqi Wang, Lin Chen, Zhongbin Sun, Yuming Zhou, Baowen Xu. Inconsistent defect labels: essence, causes, and influence. IEEE Transactions on Software Engineering, accepted, 2022. [Data&Code] [Supplemental material]
  3. Peng Zhang, Yang Wang, Xutong Liu, Yanhui Li, Yibiao Yang, Ziyuan Wang, Xiaoyu Zhou, Lin Chen, Yuming Zhou. Mutant reduction evaluation: what is there and what is missing? ACM Transactions on Software Engineering and Methodology, 31(4), article 69, 2022: 1-46. [Data&Code] [Supplemental material]
  4. Peng Zhang, Yanhui Li, Wanwangying Ma, Yibiao Yang, Lin Chen, Hongmin Lu, Yuming Zhou, Baowen Xu. CBUA: A probabilistic, predictive, and practical approach for evaluating test suite effectiveness. IEEE Transactions on Software Engineering, 48(3), 2022: 1067-1096. [Data&Code]
  5. Zhaoqiang Guo, Shiran Liu, Jinping Liu, Yanhui Li, Lin Chen, Hongmin Lu, Yuming Zhou. How far have we progressed in identifying self-admitted technical debts? A comprehensive empirical study. ACM Transactions on Software Engineering and Methodology, 30(4), article 45, 2021: 1-56. [Data&Code]
  6. Lin Chen, Di Wu, Wanwangying Ma, Yuming Zhou, Baowen Xu, Hareton Leung. How C++ templates are used for generic programming – an empirical study on 50 open-source systems. ACM Transactions on Software Engineering and Methodology, 29(1), article 3, 2020: 1-49.
  7. Yuming Zhou, Yibiao Yang, Hongmin Lu, Lin Chen, Yanhui Li, Yangyang Zhao, Junyan Qian, Baowen Xu. How far we have progressed in the journey? An examination of cross-project defect prediction. ACM Transactions on Software Engineering and Methodology, 27(1), article 1, 2018: 1-51. [Data&Code] [Supplemental material]
  8. Yibiao Yang, Yuming Zhou, Hongmin Lu, Lin Chen, Zhenyu Chen, Baowen Xu, Hareton Leung, Zhenyu Zhang. Are slice-based cohesion metrics actually useful in effort-aware post-release fault-proneness prediction? An empirical study. IEEE Transactions on Software Engineering, 41(4), 2015: 331-357.
  9. Yuming Zhou, Baowen Xu, Hareton Leung, Lin Chen. An in-depth study of the potentially confounding effect of class size in fault prediction. ACM Transactions on Software Engineering and Methodology, 23(1), article 10, 2014: 1-51.
  10. Yuming Zhou, Hareton Leung, Baowen Xu. Examining the potentially confounding effect of class size on the associations between object-oriented metrics and change-proneness. IEEE Transactions on Software Engineering, 35(5), 2009: 607-623.
  11. Yuming Zhou, Hareton Leung, Pinata Winoto. MNav: A Markov model based web site navigability measure. IEEE Transactions on Software Engineering, 33(12), 2007: 869-890.
  12. Yuming Zhou, Hareton Leung. Empirical analysis of object-oriented design metrics for predicting high and low severity faults. IEEE Transactions on Software Engineering, 32(10), 2006: 771-789.
  13. Zhichao Zhou, Yuming Zhou, Chunrong Fang, Zhenyu Chen, Yutian Tang. Selectively combining multiple coverage goals in search-based unit test generation. ASE 2022, accepted.
  14. Yanhui Li, Linghan Meng, Lin Chen, Li Yu, Di Wu, Yuming Zhou, Baowen Xu. Training data debugging for the fairness of machine learning software. ICSE 2022: 2215-2227.
  15. Linghan Meng, Yanhui Li, Lin Chen, Zhi Wang, Di Wu, Yuming Zhou, Baowen Xu. Measuring discrimination to boost comparative testing for multiple deep learning models. ICSE 2021: 385-396.
  16. Wanwangying Ma, Lin Chen, Xiangyu Zhang, Yang Feng, Zhaogui Xu, Zhifei Chen, Yuming Zhou, Baowen Xu. Impact analysis of cross-project bugs on software ecosystems. ICSE 2020: 100-111.
  17. Weijun Shen, Yanhui Li, Lin Chen, Yuanlei Han, Yuming Zhou, Baowen Xu. Multiple-boundary clustering and prioritization to promote neural network retraining. ASE 2020: 410-422.
  18. Yibiao Yang, Yuming Zhou, Hao Sun, Zhendong Su, Zhiqiang Zuo, Lei Xu, Baowen Xu. Hunting for bugs in code coverage tools via randomized differential testing. ICSE 2019: 488-498.
  19. Yibiao Yang, Yanyan Jiang, Zhiqiang Zuo, Yang Wang, Hao Sun, Hongmin Lu, Yuming Zhou, Baowen Xu. Automatic self-validation for code coverage profilers. ASE 2019: 79-90.
  20. Wanwangying Ma, Lin Chen, Xiangyu Zhang, Yuming Zhou, Baowen Xu. How do developers fix cross-project correlated bugs?: a case study on the GitHub scientific python ecosystem. ICSE 2017: 381-392.
  21. Yangyang Zhao, Alexander Serebrenik, Yuming Zhou, Vladimir Filkov, Bogdan Vasilescu. The impact of continuous integration on other software development practices: a large-scale empirical study. ASE 2017: 60-71.
  22. Yibiao Yang, Yuming Zhou, Jinping Liu, Yangyang Zhao, Hongmin Lu, Lei Xu, Baowen Xu, Hareton Leung. Effort-aware just-in-time defect prediction: simple unsupervised models could be better than supervised models. FSE 2016: 157-168.
  23. Yibiao Yang, Mark Harman, Jens Krinke, Syed S. Islam, David W. Binkley, Yuming Zhou, Baowen Xu. An empirical study on dependence clusters for effort-aware fault-proneness prediction. ASE 2016: 296-307.

Other links


Examples: Simple yet effective approaches

2022: Existing label collection approaches are vulnerable to inconsistent defect labels, resulting in a negative influence on defect prediction
           Suggestion: Use TSILI to detect and exclude inconsistent defect labels before building and evaluating defect prediction models

2021: Measuring the order-preserving ability is important but missing in mutation reduction evaluation
           Suggestion: Use OP/EROP to evaluate the effectiveness of a mutation reduction strategy

2021: Matching task annotation tags is competitive or even superior to the state-of-the-art approaches for identifying self-admitted technical debts
           Suggestion: Use MAT as a baseline in SATD identification
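The MAT baseline suggested above simply checks whether a code comment contains a task annotation tag. A minimal sketch in Python (the tag set and whole-word matching follow the common description of the approach; treat the exact details as assumptions rather than a faithful reimplementation):

```python
import re

# Task annotation tags matched by MAT (assumed tag set).
TAGS = ("todo", "fixme", "hack", "xxx")

def is_satd(comment: str) -> bool:
    """Flag a comment as self-admitted technical debt if it contains
    any task tag as a whole word, case-insensitively."""
    words = re.findall(r"[a-zA-Z]+", comment.lower())
    return any(tag in words for tag in TAGS)
```

Despite requiring no training data, this kind of tag matching is the point of reference that new SATD identification approaches should be compared against.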

2020: An unsupervised model dramatically reduces the cost of mutation testing while maintaining accuracy
           Suggestion: Use CBUA as a baseline in predictive mutation testing

2019: Simple multi-source information fusion can find dozens of bugs in mature code coverage tools
           Suggestion: Use C2V as a baseline in testing code coverage tools

2018: Very simple size models can outperform complex learners in defect prediction
           Suggestion: Use ManualDown/ManualUp on the test set as the baselines in defect prediction
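ManualDown and ManualUp rank modules purely by size: ManualDown inspects larger modules first (bigger modules tend to contain more defects), while ManualUp inspects smaller modules first (smaller modules tend to have higher defect density, which matters under effort-aware evaluation). A minimal sketch, assuming each module is a dict with a hypothetical "loc" field:

```python
def manual_down(modules):
    # Larger modules first: more defects expected in bigger modules.
    return sorted(modules, key=lambda m: m["loc"], reverse=True)

def manual_up(modules):
    # Smaller modules first: higher defect density expected,
    # useful under effort-aware evaluation.
    return sorted(modules, key=lambda m: m["loc"])

mods = [{"name": "a", "loc": 120},
        {"name": "b", "loc": 30},
        {"name": "c", "loc": 540}]
```

No model is trained at all; the ranking on the test set itself is the baseline that supervised defect predictors must beat.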

      

We hope to see real advances in software quality assurance
We hope to see you in SEE in NJU (Now Join Us)
Last updated: July, 2022