Towards Better Software Systems

Reverse Engineering

By: M. Zakeri, S. Parsa, A. Kalaee, S. Amiri, and M. Amirian

Fall 2018

Welcome to the IUST Reverse Engineering Research Laboratory (formerly the Parallel and Concurrent Processing Laboratory). Our laboratory's research interests lie in the areas of compilers and software engineering; Dr. Parsa, the lab director, has related these two subjects through reverse engineering techniques. This post briefly introduces the most important research interests and areas of our laboratory in recent years. You can either download a concise PDF version of this post in a poster theme or continue reading the online version; we try to keep the online content up-to-date.

Keywords: Iran University of Science and Technology, Reverse Engineering Research Laboratory, Research Area, Compiler, Software Engineering, Software Testing, Software Debugging.

  • Download the poster [PDF] [PPTx]
  • Download references [PDF] [BibTex] [View]

IUST Reverse Engineering Research Laboratory: Towards Better Software Systems

Software Engineering

A missing part in the field of system analysis and design is the modeling of management requirements and of the system's goals and objectives; this area has received little emphasis in system analysis and design [1]. In this respect, we are trying to use techniques applied in management, such as strategic planning and roadmaps, to improve goal modeling techniques. Performance evaluation and key performance indicators [2] can be naturally extracted from goal models. Business intelligence also provides a suitable basis for analyzing management requirements in order to evaluate the performance of a system. Business dashboard design and implementation [3] is a neglected area of software engineering; in our laboratory, we are trying to provide methodologies for this area of research and add it as a new field to software engineering. Some research areas in this scope are:

  • Dashboard system design [3],
  • Determining the impacts of key performance indicators (KPIs) on each other and on goal achievement,
  • Geometric modeling of KPIs to identify the minimum variations in the parameters affecting a KPI that achieve the desired value (see the sketch after this list),
  • Graph analysis to model various aspects of the system based on its data.
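
As a minimal sketch of the geometric-modeling item above, the following Python snippet finds the smallest Euclidean shift of the controllable parameters that raises a KPI to a desired target; the quadratic KPI function and all numbers here are hypothetical stand-ins for a fitted model, not the lab's actual formulation.

    import numpy as np
    from scipy.optimize import minimize

    def kpi(p):
        # Hypothetical KPI as a function of two controllable parameters.
        return 3.0 * p[0] + 2.0 * p[1] - 0.5 * p[0] * p[1]

    current = np.array([1.0, 1.0])  # current parameter setting
    target = 8.0                    # desired KPI value

    # Find the point reaching the target that is geometrically closest
    # to the current setting (SciPy picks SLSQP for the constraint).
    res = minimize(
        fun=lambda p: np.sum((p - current) ** 2),
        x0=current,
        constraints=[{"type": "ineq", "fun": lambda p: kpi(p) - target}],
    )
    print("minimal shift:", res.x - current, "new KPI:", kpi(res.x))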

For more information, refer to our research groups' websites:

 

Software Testing

Software testing is an activity that checks whether the actual results match the expected results and ensures that the software system is fault-free. It can be done either manually or with automated tools [4, 5]. Our effort in this lab is devoted to automated software testing. Our three main areas of research are as follows:

  • Test data generation: Statement coverage, branch coverage, and, more importantly, path coverage are the three criteria applied to evaluate an input data set. We have formally introduced a new criterion called domain coverage. Domain coverage was previously referred to implicitly as the input space partitioning (ISP) technique, but it had not been used as a criterion for evaluating test data. For the time being, we are working on polyhedral algebra and on techniques for solving systems of inequalities to find the input sub-space satisfying a given path constraint [4, 6] (a solver-based sketch follows this list).
  • Fuzz testing: Fuzz testing, or fuzzing, is a dynamic software testing technique in which malformed test data are repeatedly generated and injected into the software under test (SUT) in search of possible errors and vulnerabilities. To this end, fuzz testing requires a wide variety of test data. Fuzzing is the art of automatic bug finding: its role is to find software implementation faults and, where possible, identify them. Fuzz testing was developed at the University of Wisconsin-Madison in 1989 by Professor Barton Miller and his students. A fuzzer is a program that automatically injects semi-random data into a program/stack and detects bugs [7] (a minimal fuzzer sketch follows this list).
  • Performance testing: Performance testing is mainly concerned with stress and load testing of websites. The main difficulty is interpreting and analyzing the reports produced by the listeners. This analysis demands broad knowledge of web servers, communication networks, database systems, and the impact of operating system services and programming languages on website performance. In this respect, we are trying to build a recommender system that suggests solutions for the inefficiencies observed in the listeners' reports. Performance testing is also a significant concern in systems modeling and evaluation; as a practical laboratory, we are trying to give real meaning to the abstract models built in the systems modeling field [8] (a small load-test sketch follows this list).
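
For the test data generation item, a minimal solver-based sketch: it asks the Z3 SMT solver (used here for illustration instead of the polyhedral techniques named above) for a concrete input satisfying a path constraint. The constraint itself is a hypothetical path condition over two integer inputs.

    from z3 import And, Ints, Solver, sat

    x, y = Ints("x y")
    # Hypothetical path condition collected along one execution path.
    path_constraint = And(x + y > 10, x - y < 3, x >= 0, y >= 0)

    s = Solver()
    s.add(path_constraint)
    if s.check() == sat:
        m = s.model()
        print("one covering input:", m[x], m[y])  # a point inside the sub-domain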
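
For the fuzz testing item, a minimal mutation-fuzzer sketch: it repeatedly mutates a seed input and feeds it to a hypothetical in-process toy parser standing in for the SUT; a real fuzzer would launch and monitor the target program instead.

    import random

    def mutate(data: bytes) -> bytes:
        out = bytearray(data)
        for _ in range(random.randint(1, 8)):  # overwrite a few random bytes
            out[random.randrange(len(out))] = random.randrange(256)
        return bytes(out)

    def sut(data: bytes) -> None:
        # Hypothetical parser under test: a NUL byte makes it crash.
        if 0 in data:
            raise ValueError("toy parser cannot handle NUL bytes")

    seed = b"hello world, a well-formed input"
    crashes = []
    for _ in range(10_000):
        candidate = mutate(seed)
        try:
            sut(candidate)
        except Exception:
            crashes.append(candidate)  # malformed input exposed a fault
    print(len(crashes), "crashing inputs found")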
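
And for the performance testing item, a small load-test sketch: it issues concurrent requests against a placeholder URL and reports latency percentiles, the kind of raw numbers that the listener reports discussed above summarize.

    import statistics
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:8080/"  # placeholder system under test

    def one_request(_):
        start = time.perf_counter()
        urllib.request.urlopen(URL, timeout=5).read()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=20) as pool:  # 20 simulated users
        latencies = sorted(pool.map(one_request, range(200)))

    print("median:", statistics.median(latencies))
    print("p95:", latencies[int(0.95 * len(latencies))])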

Some other software testing research areas are listed here:

  • Test oracle generation [9],
  • Test case prioritization [10],
  • Model-based testing [11].

 

Software Debugging

We are looking for a debugger that, while a program is being debugged, builds the chain of instructions affecting a faulty result. This technique is called backward slicing. Using statistical methods, we can restrict this chain to those segments that are observed more frequently in faulty executions than in successful ones (a suspiciousness-ranking sketch follows the list below). Here the main difficulty is raised by coincidentally correct (accidentally passing) executions. Another obstacle is the collective impact of statements on each other and on the program's execution results. In this respect, we have proposed solutions based on information theory and game theory. For the time being, we are working on word embedding techniques to determine the impact of program statements on each other and on faulty results. Our three main areas of research in software debugging are as follows:

  • Automatic fault localization: To eliminate a bug, programmers employ all means to identify the location of the bug and figure out its cause. This process is referred to as software fault localization, and it is one of the most expensive activities of debugging. Due to the intricacy and inaccuracy of manual fault localization, an enormous amount of research has been carried out to develop automated techniques and tools that assist developers in finding bugs [12, 13]. In our laboratory, we are working on improving these techniques with methods such as statistical models, machine learning, and information theory (see the suspiciousness-ranking sketch after this list).
  • Defect prediction: Defect prediction methods use software metrics to determine fault-prone software modules so that testing activities can focus on them. Aiming to alleviate the present challenges in this context, we are focusing on metaheuristic approaches and classification methods [14] (see the classifier sketch after this list).
  • Automatic software repair: Automatic software repair consists of automatically finding a solution to software bugs, without human intervention. The key idea of these techniques is to repair software systems automatically by producing an actual fix that can be validated by testers before it is finally accepted, or that can be adapted to fit the system properly [15].
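
The statistical restriction of the slice described above can be sketched with spectrum-based fault localization: given which statements each passing or failing run executed, statements are ranked by the Ochiai suspiciousness score. The coverage matrix here is hypothetical toy data, not output of the lab's tools.

    import math

    # coverage[test] = ids of executed statements; passed[test] = test outcome
    coverage = {"t1": {1, 2, 3}, "t2": {1, 3, 4}, "t3": {1, 2, 4}, "t4": {1, 4}}
    passed = {"t1": True, "t2": False, "t3": True, "t4": False}

    total_failed = sum(not ok for ok in passed.values())
    statements = set().union(*coverage.values())

    def ochiai(stmt):
        ef = sum(stmt in coverage[t] and not passed[t] for t in coverage)  # covered by failing runs
        ep = sum(stmt in coverage[t] and passed[t] for t in coverage)      # covered by passing runs
        return ef / math.sqrt(total_failed * (ef + ep)) if ef else 0.0

    # Most suspicious statements first.
    for stmt in sorted(statements, key=ochiai, reverse=True):
        print(stmt, round(ochiai(stmt), 3))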
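
For the defect prediction item, a minimal classifier sketch: it trains a random forest on per-module software metrics to flag fault-prone modules. The metrics, values, and labels are toy data, and a plain random forest stands in for the metaheuristic and one-class approaches the lab actually studies [14].

    from sklearn.ensemble import RandomForestClassifier

    # Each row: [lines of code, cyclomatic complexity, number of recent changes]
    X_train = [[120, 4, 2], [800, 25, 14], [60, 2, 1],
               [450, 18, 9], [90, 3, 0], [700, 30, 20]]
    y_train = [0, 1, 0, 1, 0, 1]  # 1 = module turned out to be defective

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    new_modules = [[500, 22, 11], [100, 3, 1]]
    print(model.predict_proba(new_modules)[:, 1])  # estimated fault-proneness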

 

References

  • [1] M. Aboutalebi and S. Parsa, “QABPEM: Quality-Aware Business Process Engineering Method,” Int. J. Coop. Inf. Syst., vol. 26, no. 01, p. 1650011, 2017.
  • [2] A. Maté, K. Zoumpatianos, T. Palpanas, J. Trujillo, J. Mylopoulos, and E. Koci, “A Systematic Approach for Dynamic Targeted Monitoring of KPIs,” in Proceedings of 24th Annual International Conference on Computer Science and Software Engineering, 2014, pp. 192–206.
  • [3] N. H. Rasmussen, M. Bansal, and C. Y. Chen, Business Dashboards: A Visual Catalog for Design and Deployment. Wiley Publishing, 2009.
  • [4] P. Ammann and J. Offutt, Introduction to Software Testing. Cambridge: Cambridge University Press, 2016.
  • [5] P. C. Jorgensen, Software Testing: A Craftsman’s Approach, 4th ed. CRC Press, Taylor & Francis Group, 2014.
  • [6] E. Nikravan and S. Parsa, “A reasoning-based approach to dynamic domain reduction in test data generation,” Int. J. Softw. Tools Technol. Transf., May 2018.
  • [7] C. Chen, B. Cui, J. Ma, R. Wu, J. Guo, and W. Liu, “A systematic review of fuzzing techniques,” Comput. Secur., vol. 75, pp. 118–137, 2018.
  • [8] I. Molyneaux, The Art of Application Performance Testing: From Strategy to Tools, 2nd ed. O’Reilly Media, Inc., 2015.
  • [9]
  • [10] Y. Fazlalizadeh, A. Khalilian, M. A. Azgomi, and S. Parsa, “Prioritizing test cases for resource constraint environments using historical test case performance data,” in 2009 2nd IEEE International Conference on Computer Science and Information Technology, 2009, pp. 190–195.
  • [11]
  • [12] F. Feyzi and S. Parsa, “Inforence: Effective Fault Localization Based on Information-Theoretic Analysis and Statistical Causal Inference,” CoRR, vol. abs/1712.03361, 2017.
  • [13] F. Feyzi and S. Parsa, “FPA-FL: Incorporating Static Fault-proneness Analysis into Statistical Fault Localization,” CoRR, vol. abs/1712.03359, 2017.
  • [14] Y. Abdi, S. Parsa, and Y. Seyfari, “A hybrid one-class rule learning approach based on swarm intelligence for software fault prediction,” Innov. Syst. Softw. Eng., vol. 11, no. 4, pp. 289–301, Dec. 2015.
  • [15] L. Gazzola, D. Micucci, and L. Mariani, “Automatic Software Repair: A Survey,” IEEE Trans. Softw. Eng., 2017.