Supporting page for

Preference Based Multi-Objective Algorithms Applied to the Variability Testing of Software Product Lines

Helson Luiz Jakubovski Filho, Thiago Nascimento Ferreira and Silvia Regina Vergilio
DInf - Federal University of Paraná, CP: 19097, CEP: 81.531-980, Curitiba, Brazil


Abstract: Evolutionary Multi-Objective Algorithms (EMOAs) have been applied to derive products for the variability testing of Software Product Lines (SPLs), which is a complex task impacted by many factors, such as the number of products to be tested, coverage criteria, and efficacy to reveal faults. But such algorithms generally produce a lot of solutions that are uninteresting to the tester. This happens because traditional search algorithms do not take into consideration the user preferences. To ease the selection of the best solutions and avoid effort generating uninteresting solutions, this work introduces an approach that applies Preference-Based Evolutionary Multi-objective Algorithms (PEMOAs) to solve the problem. The approach is multi-objective, working with the number of products to be tested, pairwise coverage and mutation score. It incorporates the preferences before the evolution process and uses the Reference Point (RP) method. Two PEMOAs are evaluated: R-NSGA-II and r-NSGA-II, using two different formulations of objectives, and three kinds of RPs. PEMOAs outperform the traditional NSGA-II by generating a greater number of solutions in the Region of Interest (ROI) associated to the RPs. The use of PEMOAs can reduce the tester's burden in the task of selecting a better and reduced set of products for SPL testing.

Instances

Six Feature Models (FMs) were used:

  • James: SPL for collaborative web systems; [1]
  • CAS (Car Audio System): SPL to manage automotive sound systems; [2]
  • WS (Weather Station): SPL for weather forecast systems; [3]
  • E-Shop: an E-commerce SPL; [4]
  • Drupal: a modular open source web content management framework; [5]
  • SmartHome v2.2: SPL for a smart residential solution. [6]

The following table shows information about each FM: the total number of products (nt), the number of used products (n), active mutants (AM), valid pairs (VP), and number of features (Features).

FM nt n AM VP Features
James 68 68 106 75 14
CAS 450 450 227 183 21
WS 504 504 357 195 22
E-Shop 1152 1152 94 202 22
Drupal ≈2.09E9 11k 2194 1081 48
SmartHome ≈3.87E9 11k 2948 1710 60

Click on an instance name to download it.

Reference Points

The following table shows the RPs used in both experiments.

Formulation FM Feasible Infeasible True
2-Objectives James (6; 95%) (2; 98%) (5; 97%)
CAS (8; 96%) (3; 97%) (7; 97%)
WS (11; 96%) (3; 97%) (10; 98%)
E-Shop (11; 95%) (3; 97%) (9; 98%)
Drupal (25; 97%) (8; 99%) (16; 98%)
SmartHome (25; 96%) (7; 99%) (16; 98%)
3-Objectives James (6; 98%; 98%) (1; 98%; 98%) (3; 98%; 98%)
CAS (8; 96%; 96%) (3; 98%; 98%) (5; 98%; 99%)
WS (12; 98%; 98%) (3; 97%; 97%) (8; 98%; 99%)
E-Shop (12; 98%; 98%) (3; 99%; 99%) (5; 97%; 98%)
Drupal (27; 97%; 97%) (10; 99%; 99%) (18; 98%; 99%)
SmartHome (28; 96%; 97%) (8; 99%; 99%) (18; 99%; 99%)

Click here to download all reference points.

Quality Indicators

The analysis was conducted using sets and quality indicators from the multi-objective optimization area [7] that are relevant to the scope of this work:

  • Hypervolume in conjunction with R-Metric (R-HV);
  • Euclidean Distance (ED);
  • Average number of solutions and percentage of solutions in the ROI.
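The Euclidean Distance indicator measures how close each obtained solution lies to the Reference Point in objective space. A minimal sketch, assuming 2-objective vectors that have already been mapped to minimization form (the example values below are hypothetical, not taken from the experiments):

```python
import math

def euclidean_distance(solution, reference_point):
    """Euclidean distance between a solution and a reference point
    in objective space (objectives assumed already normalized/minimized)."""
    return math.sqrt(sum((s - r) ** 2
                         for s, r in zip(solution, reference_point)))

# Hypothetical 2-objective example: (number of products, 1 - pairwise coverage).
sol = (6, 0.05)   # 6 products, 95% pairwise coverage
rp = (5, 0.03)    # reference point (5; 97%)
print(round(euclidean_distance(sol, rp), 4))  # → 1.0002
```

In practice the objectives are on very different scales, so they would be normalized before computing the distance; the sketch omits that step.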

To obtain such indicators, three sets of solutions were generated.

  • PFapprox: set of non-dominated solutions obtained by a single algorithm execution;
  • PFknown: set of non-dominated solutions of an algorithm, obtained as the union of the PFapprox sets from all executions, removing dominated and repeated solutions;
  • PFtrue: represents the Pareto-optimal front of the problem. In our case this set is unknown, so it was formed from all PFknown sets obtained from the different algorithms by removing dominated and repeated solutions. PFtrue is therefore an approximation of the real front.
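The filtering step shared by PFknown and PFtrue — removing dominated and repeated solutions from a union of fronts — can be sketched as follows (minimization assumed for all objectives; the solution values are hypothetical):

```python
def dominates(a, b):
    """True if a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(solutions):
    """Keep only non-dominated, non-repeated solutions, as done when
    forming PFknown from the union of PFapprox sets (or PFtrue from
    the union of PFknown sets)."""
    unique = list(dict.fromkeys(solutions))  # drop repeats, keep order
    return [s for s in unique
            if not any(dominates(o, s) for o in unique if o != s)]

# Hypothetical union of fronts for a 2-objective minimization problem:
union = [(2, 0.05), (3, 0.02), (2, 0.05), (4, 0.04), (3, 0.02)]
print(nondominated(union))  # → [(2, 0.05), (3, 0.02)]
```

Here (4, 0.04) is dropped because (3, 0.02) is better on both objectives, and the duplicate entries are collapsed before the dominance check.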

Raw Files

Click here to download a zip file with the raw results found by the algorithms. You can also use the following tools to evaluate them.

References

  • [1] Automated analysis of feature models 20 years later: A literature review
    Benavides, D. and Segura, S. and Ruiz-Cortés, A. Elsevier. p. 615--636. 2010
  • [2] Reusing State Machines for Automatic Test Generation in Product Lines
    Weißleder, Stephan and Sokenou, Dehla and Schlingloff, Holger Proceedings of the 1st Workshop on Model-based Testing in Practice (MoTiP'08) 2008
  • [3] Software Product Line Engineering with Feature Models
    D. Beuche and M. Dalgarno p. 5--8. 2012
  • [4] Automated Test Data Generation on the Analyses of Feature Models: A Metamorphic Testing Approach
    Segura, S. and Hierons, R. M. and Benavides, D. and Ruiz-Cortés, A. Proceedings of the 3rd International Conference on Software Testing, Verification and Validation (ICST'10) p. 35--44. 2010
  • [5] Multi-objective test case prioritization in highly configurable systems: A case study
    J.A. Parejo and A.B. Sánchez and S. Segura and A. Ruiz-Cortés and R. Lopez-Herrejon and A. Egyed Elsevier. p. 287--310. 2016
  • [6] Multi-objective Test Generation for Software Product Lines
    Henard, C. and Papadakis, M. and Perrouin, G. and Klein, J. and Le Traon, Y. Proceedings of the 17th International Software Product Line Conference (SPLC'13) p. 62--71. 2013
  • [7] Evolutionary Algorithms for Multiobjective Optimization: Methods and Applications
    Zitzler, E. Citeseer. 1999