
Beginner's guide to working with Evosuite plus plus


Introduction

This page will aim to cover:

  1. A brief overview of how Evosuite++ works.
  2. A brief overview of how the Evosuite++ project is structured.
  3. How to look into the workings of Evosuite++ using the SF100 benchmark dataset.
  4. How to instrument custom methods for use with Evosuite++.

A brief overview of how Evosuite++ works

Evosuite++ aims to generate unit test cases that maximise branch coverage of a target method. It does this with search algorithms that iteratively evolve a population of test cases, keeping and refining the candidates that get closer to covering the remaining branches.
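
The sketch below is a toy, self-contained illustration of this kind of coverage-driven search loop. It is not the actual Evosuite++ implementation: the "tests" are just integers, "coverage" is a made-up bit-counting fitness function, and all names are hypothetical.

import java.util.*;
import java.util.stream.*;

// Toy sketch of a coverage-driven search loop; purely illustrative, not Evosuite++ code.
public class SearchSketch {
	static final Random RNG = new Random(42);

	// Toy stand-in for branch coverage: how many of the 8 low bits are set.
	static int coverage(int candidate) {
		return Integer.bitCount(candidate & 0xFF);
	}

	public static void main(String[] args) {
		// Start from a random population of candidate "tests"
		List<Integer> population = RNG.ints(10, 0, 256).boxed().collect(Collectors.toList());
		for (int iteration = 0; iteration < 100; iteration++) { // search budget
			// Mutate each candidate (flip one random bit)
			List<Integer> offspring = population.stream()
					.map(c -> c ^ (1 << RNG.nextInt(8)))
					.collect(Collectors.toList());
			// Keep the fittest individuals for the next iteration
			population = Stream.concat(population.stream(), offspring.stream())
					.sorted(Comparator.comparingInt(SearchSketch::coverage).reversed())
					.limit(10)
					.collect(Collectors.toList());
		}
		System.out.println("best coverage: " + coverage(population.get(0)) + "/8");
	}
}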

How the Evosuite++ project is structured

<TBD>

How to look into the workings of Evosuite++

One way to go about it is by looking at feature.objectconstruction.testgeneration.testcase.SF100OverallTest under evosuite-shell (for a closer look at how EvoObj generates seed test cases, see feature.objectconstruction.testgeneration.testcase.ProjectGapGraphBasedCodeGenerationTest). This class provides an example of how to run Evosuite++ in the EvoObj configuration on a target method in some project in the SF100 benchmark dataset. Below is an example of the relevant code:

@Before
public void beforeTest() {
	// This MUST be set to allow breakpoints while debugging.
	Properties.CLIENT_ON_THREAD = true;
	Properties.STATISTICS_BACKEND = StatisticsBackend.DEBUG;
	...
}

...

@Test
public void testBugExample() {
	// The target project
	String projectId = SF100Project.P1;	
	String[] targetMethods = new String[]{
		...
		// The target method 
		"com.ib.client.Contract#equals(Ljava/lang/Object;)Z"
		...
	};

	// Evosuite++ configuration
	int repeatTime = 5;
	int budget = 100;
	Long seed = null;
	String fitnessApproach = "branch";
	boolean aor = false;
	List<EvoTestResult> results = CommonTestUtil.evoTestSingleMethod(projectId, targetMethods, fitnessApproach, repeatTime, budget, true, seed, aor, "generateSuite", "Evosuite", "DynaMOSA");

	// Outcome reporting
	double coverage = 0;
	double initCoverage = 0;
	double time = 0;
	double iteration  = 0;
	for(EvoTestResult res: results) {
		coverage += res.getCoverage();
		initCoverage += res.getInitialCoverage();
		time += res.getTime();
		iteration += res.getAge();
	}
			
	System.out.println("applied object rule: " + aor);
	System.out.println("overall legitimization budget: " + Properties.TOTAL_LEGITIMIZATION_BUDGET);
	System.out.println("coverage: " + coverage/repeatTime);
	System.out.println("initCoverage: " + initCoverage/repeatTime);
	System.out.println("time: " + time/repeatTime);
	System.out.println("iteration: " + iteration/repeatTime);
}

Running this as a JUnit test (in Eclipse, this can be done via Run > Run As > JUnit Test) will run Evosuite++ in the EvoObj configuration on the target method (in this case, com.ib.client.Contract#equals). The output can be viewed in your shell, or in the Console window if you are using Eclipse.

Sometimes you might want to view the test cases as they evolve. This can be done by setting breakpoints in the appropriate class; in our case, this is org.evosuite.ga.metaheuristics.mosa.DynaMOSA. Evosuite++ evolves the test cases iteratively, and in our example this happens in org.evosuite.ga.metaheuristics.mosa.DynaMOSA#evolve. We can therefore see how the population changes with each iteration by setting a breakpoint at the printBestFitness() call on line 187 of the evolve method. When the breakpoint is reached, the current population can be inspected via this.population.
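
If you prefer log output over stepping through with the debugger, a temporary snippet like the one below can be dropped into the evolve method. This is only a sketch: it assumes the individuals in the population are TestChromosome instances (which is the case when DynaMOSA is used for test generation) and uses TestChromosome#getTestCase and TestCase#toCode to print each test as Java code.

// Temporary debugging aid (sketch): dump the current population each iteration.
// Assumes the individuals are org.evosuite.testcase.TestChromosome instances.
for (T individual : this.population) {
	TestChromosome test = (TestChromosome) individual;
	System.out.println("fitness: " + test.getFitness());
	System.out.println(test.getTestCase().toCode());
}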

Custom methods

One drawback of using the SF100 benchmark is that the methods inside the projects cannot be altered. However, we can pass a custom class (and by extension a custom method) to Evosuite for processing. This allows us to run tests in which we modify the target method and see how Evosuite responds. This functionality is showcased in feature.objectconstruction.testgeneration.testcase.ProjectOverallTest, with the relevant code shown below:

@Test
public void testCascadeCall() {
	Class<?> clazz = ObjectExample.class;
	String methodName = "test";
	int parameterNum = 1;
	
	String targetClass = clazz.getCanonicalName();
	Method method = TestUtility.getTargetMethod(methodName, clazz, parameterNum);

	String targetMethod = method.getName() + MethodUtil.getSignature(method);
	String cp = "target/test-classes";

	Properties.CLIENT_ON_THREAD = true;
	Properties.STATISTICS_BACKEND = StatisticsBackend.DEBUG;

	Properties.TIMEOUT = 1000;
	String fitnessApproach = "branch";
	int timeBudget = 10000;
	
	boolean aor = true;
	TestUtility.evoTestSingleMethod(targetClass,  
			targetMethod, timeBudget, true, aor, cp, fitnessApproach, 
			"generateMOSuite", "MOSUITE", "DynaMOSA");
}

Here, the custom class is ObjectExample, which can be found in feature.objectconstruction.testgeneration.example. The target method is ObjectExample#test, as specified by the methodName variable. The same debugging procedure as in the previous section applies.
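
If you want to experiment with your own code rather than the bundled example, a custom class can be as simple as the sketch below. CustomExample is a hypothetical name: place it on the test classpath (the test above uses cp = "target/test-classes"), then point clazz, methodName and parameterNum at it and modify the branches freely to see how Evosuite++ responds.

package feature.objectconstruction.testgeneration.example;

// Hypothetical custom class under test; the real ObjectExample in the repository
// looks different. The point is simply a method with branches to cover.
public class CustomExample {

	private int state;

	// One int parameter, so methodName = "test" and parameterNum = 1.
	public void test(int input) {
		if (input > 10) {
			state = input * 2;
		} else if (input < 0) {
			state = -input;
		} else {
			state = 0;
		}
	}

	public int getState() {
		return state;
	}
}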

Adding data to Evosuite++ output

Adding your data collection code

In some cases you will want to alter the output of an Evosuite++ execution to track metrics, collect data, and so on. The relevant call chain is as follows:

  • ... (EvoSuite invocation, irrelevant)
    • org.evosuite.rmi.service.ClientNodeImpl#startNewSearch
      • org.evosuite.TestSuiteGenerator#generateTestSuite
        • org.evosuite.TestSuiteGenerator#generateTests
          • org.evosuite.strategy.MOSuiteStrategy#generateTests
            • org.evosuite.ga.metaheuristics.GeneticAlgorithm#generateSolution
            • org.evosuite.ga.metaheuristics.GeneticAlgorithm#getBestIndividual

Hence, to add new logic (and store data), insert your code into the generateSolution and getBestIndividual methods of your chosen genetic algorithm extending GeneticAlgorithm. Note that Evosuite++ uses a master-client architecture: the client process performs the test generation and sends the results back to the master, so any new data structures you introduce should be serialisable.
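
As an illustration, a custom metrics container could look like the sketch below. The class name and the idea of recording the best fitness per iteration are hypothetical; the important part is that it implements Serializable so it can cross the client/master boundary.

import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

// Hypothetical container for data collected during the search.
public class SearchTrace implements Serializable {

	private static final long serialVersionUID = 1L;

	private final List<Double> bestFitnessPerIteration = new ArrayList<>();

	// Call this once per iteration from generateSolution() in your GA subclass.
	public void record(double bestFitness) {
		bestFitnessPerIteration.add(bestFitness);
	}

	public List<Double> getBestFitnessPerIteration() {
		return bestFitnessPerIteration;
	}
}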

Working with the data collected

There are too many methods that deal with Evosuite++ execution to trace each one, but most of them call org.evosuite.EvoSuite#parseCommandLine; we shall assume that your chosen method does so too. This method (typically) returns a List<List<TestGenerationResult>>, so your new data structure should be embedded in an implementation of TestGenerationResult. By retrieving your data collection object from the returned results, you can then work with the data collected.
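
As a sketch of what that retrieval might look like (MyTestGenerationResult and getSearchTrace() are hypothetical names for your own TestGenerationResult implementation and its accessor for the SearchTrace container above):

// Sketch only: MyTestGenerationResult and getSearchTrace() are hypothetical.
// args are the command-line arguments you would otherwise pass to EvoSuite.
Object rawResult = new EvoSuite().parseCommandLine(args);

@SuppressWarnings("unchecked")
List<List<TestGenerationResult>> runs = (List<List<TestGenerationResult>>) rawResult;

for (List<TestGenerationResult> run : runs) {
	for (TestGenerationResult result : run) {
		if (result instanceof MyTestGenerationResult) {
			SearchTrace trace = ((MyTestGenerationResult) result).getSearchTrace();
			System.out.println("fitness trace: " + trace.getBestFitnessPerIteration());
		}
	}
}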

EvoObj-specific architecture

For a closer look at the EvoObj-specific architecture, see org.evosuite.testcase.synthesizer.ConstructionPathSynthesizer and its extensions. This class handles generating the seed test case for object-oriented code. org.evosuite.testcase.factories.RandomLengthTestFactory also provides test cases generated in this manner (see org.evosuite.testcase.factories.RandomLengthTestFactory.getRandomTestCase(int)).