A New Approach Based on Software Metrics to Improve the Effectiveness of Regression Testing

Article Type: Research Article

Authors

  • M. Vahidi-Asl 1
  • M. R. Dehghani-Tafti 1
  • A. Khalilian 2

1 Faculty of Computer Science and Engineering, Shahid Beheshti University, Tehran, Iran
2 Software Engineering Department, Faculty of Computer Engineering, University of Isfahan, Isfahan, Iran

Abstract

Test case prioritization is a technique that has often been used to reduce the cost of software regression testing. Existing techniques attempt to estimate the fault-exposing potential of each test case from various kinds of code coverage information and then rank the test cases using a heuristic. However, studies have shown that coverage is not strongly correlated with test-case effectiveness and fault-exposing potential. Building on studies that have demonstrated the effectiveness of code metrics for fault prediction, we hypothesized that the information provided by code metrics can be exploited to design an effective test case prioritization technique. Based on this hypothesis, this paper proposes a new prioritization technique that applies data fusion to code complexity metrics. The novelty of this work lies in the new perspective from which it estimates the fault-exposing potential of test cases during prioritization. To evaluate the proposed technique, we conducted experiments on faulty versions of seven Java benchmark programs. In most of the experiments, prioritization performance of at least 70% was observed in terms of the average percentage of faults detected (APFD), which supports our hypothesis.
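
To make the idea concrete, the minimal Python sketch below illustrates one way a metric-based prioritization of this kind could work, and how the average percentage of faults detected (APFD) is computed for the resulting test order. The metric values, coverage data, fault data, and the CombSUM-style fusion rule are hypothetical illustrations only, not the exact technique or data reported in this paper.

# Minimal sketch: prioritize test cases by fusing normalized code-complexity
# metrics of the code units they cover, then measure the order with APFD.
# All names and numbers below are hypothetical; the CombSUM-style fusion rule
# is an assumption for illustration, not necessarily the paper's exact method.

def normalize(values):
    """Scale one metric's values to [0, 1] so different metrics are comparable."""
    lo, hi = min(values.values()), max(values.values())
    span = (hi - lo) or 1.0
    return {unit: (v - lo) / span for unit, v in values.items()}

def fuse_metrics(metrics):
    """CombSUM-style data fusion: sum the normalized metric scores per code unit."""
    fused = {}
    for metric_values in metrics.values():
        for unit, score in normalize(metric_values).items():
            fused[unit] = fused.get(unit, 0.0) + score
    return fused

def prioritize(coverage, fused):
    """Rank test cases by the total fused score of the code units they cover."""
    return sorted(coverage,
                  key=lambda t: sum(fused.get(u, 0.0) for u in coverage[t]),
                  reverse=True)

def apfd(order, detects, num_faults):
    """Average percentage of faults detected for a given test-case order."""
    n = len(order)
    first = [next(i + 1 for i, t in enumerate(order) if f in detects[t])
             for f in range(num_faults)]
    return 1 - sum(first) / (n * num_faults) + 1 / (2 * n)

if __name__ == "__main__":
    metrics = {                       # hypothetical complexity metrics per code unit
        "LOC": {"A": 120, "B": 40, "C": 300},
        "CC":  {"A": 15,  "B": 4,  "C": 22},
    }
    coverage = {"t1": {"A"}, "t2": {"B", "C"}, "t3": {"C"}}   # units covered per test
    detects  = {"t1": {0}, "t2": {0, 1}, "t3": set()}         # faults detected per test

    order = prioritize(coverage, fuse_metrics(metrics))
    print("order:", order, "APFD:", round(apfd(order, detects, 2), 3))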

Keywords

  • Software testing
  • regression testing
  • test case prioritization
  • software metrics