Multi-Objective Optimization Framework for Genetic Programming in Java

License: MIT
Categories: Java
GroupId: com.github.chen0040
ArtifactId: java-mogp
Last Version: 1.0.3
Type: jar
Description: Multi-Objective Optimization Framework for Genetic Programming in Java
Project URL: https://github.com/chen0040/java-mogp
Source Code Management: https://github.com/chen0040/java-mogp

How to add to project

Maven:

<dependency>
    <groupId>com.github.chen0040</groupId>
    <artifactId>java-mogp</artifactId>
    <version>1.0.3</version>
</dependency>

Gradle (Groovy DSL):

implementation 'com.github.chen0040:java-mogp:1.0.3'

Gradle (Kotlin DSL):

implementation("com.github.chen0040:java-mogp:1.0.3")

Buildr:

'com.github.chen0040:java-mogp:jar:1.0.3'

Ivy:

<dependency org="com.github.chen0040" name="java-mogp" rev="1.0.3">
  <artifact name="java-mogp" type="jar" />
</dependency>

Grape:

@Grapes(
  @Grab(group='com.github.chen0040', module='java-mogp', version='1.0.3')
)

SBT:

libraryDependencies += "com.github.chen0040" % "java-mogp" % "1.0.3"

Leiningen:

[com.github.chen0040/java-mogp "1.0.3"]

Dependencies

compile (4)

Group / Artifact Type Version
org.slf4j : slf4j-api jar 1.7.20
org.slf4j : slf4j-log4j12 jar 1.7.20
com.github.chen0040 : java-moea jar 1.0.4
com.github.chen0040 : java-genetic-programming jar 1.0.13

provided (1)

Group / Artifact Type Version
org.projectlombok : lombok jar 1.16.6

test (10)

Group / Artifact Type Version
org.testng : testng jar 6.9.10
org.hamcrest : hamcrest-core jar 1.3
org.hamcrest : hamcrest-library jar 1.3
org.assertj : assertj-core jar 3.5.2
org.powermock : powermock-core jar 1.6.5
org.powermock : powermock-api-mockito jar 1.6.5
org.powermock : powermock-module-junit4 jar 1.6.5
org.powermock : powermock-module-testng jar 1.6.5
org.mockito : mockito-core jar 2.0.2-beta
org.mockito : mockito-all jar 2.0.2-beta

Project Modules

There are no modules declared in this project.

java-mogp

Genetic Programming Framework that supports Multi-Objective Optimization

Install

Add the following dependency to your POM file:

<dependency>
  <groupId>com.github.chen0040</groupId>
  <artifactId>java-mogp</artifactId>
  <version>1.0.3</version>
</dependency>

Usage

The sample code below shows tree-gp based multi-objective optimization, which minimizes the following two objectives:

  1. the mean squared error in predicting the "Mexican Hat" symbolic regression problem
  2. the average tree depth of the generated tree-gp program.
List<Observation> data = Tutorials.mexican_hat();
CollectionUtils.shuffle(data);
TupleTwo<List<Observation>, List<Observation>> split_data = CollectionUtils.split(data, 0.9);
List<Observation> trainingData = split_data._1();
List<Observation> testingData = split_data._2();

NSGPII tgp = NSGPII.defaultConfig();
tgp.setVariableCount(2); // the number of variables is equal to the input dimension of an observation in the "data" list
tgp.setCostFunction((CostFunction) (solution, mogpConfig) -> {
 List<Observation> observations = mogpConfig.getObservations();
 double error = 0;
 for(Observation observation : observations){
    solution.execute(observation);
    error += Math.pow(observation.getOutput(0) - observation.getPredictedOutput(0), 2.0);
 }

 double cost1 = error;
 double cost2 = solution.averageTreeDepth();

 return Arrays.asList(cost1, cost2);
});
tgp.setMaxGenerations(50);
tgp.setPopulationSize(500);

tgp.setDisplayEvery(2); // display the iteration result every 2 iterations
NondominatedPopulation pareto_front = tgp.fit(trainingData);
System.out.println("pareto_front: " + pareto_front.size());

The number of variables of a tree-gp program is set by calling NSGPII.setVariableCount(...); the number of variables is equal to the input dimension of the problem to be solved. In the case of the "Mexican Hat" symbolic regression problem, the input is (x, y); therefore, the number of variables is 2.
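
For illustration, here is a minimal sketch of how such a two-variable observation could be constructed by hand. It assumes the BasicObservation class from the com.github.chen0040:java-genetic-programming dependency (the exact package path may differ), and the sample point is hypothetical:

// A sketch; assumes com.github.chen0040.gp.commons.BasicObservation
// from the java-genetic-programming dependency listed above.
double x = 1.5, y = -0.5; // arbitrary, hypothetical sample point

BasicObservation observation = new BasicObservation(2, 1); // 2 inputs, 1 output
observation.setInput(0, x);
observation.setInput(1, y);

// Mexican Hat target commonly used in symbolic-regression benchmarks:
// f(x, y) = (1 - x^2/4 - y^2/4) * exp(-(x^2 + y^2) / 8)
observation.setOutput(0, (1 - x * x / 4 - y * y / 4) * Math.exp(-(x * x + y * y) / 8));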

The cost evaluator computes two objectives. The first objective is the training cost of a tree-gp 'program' on the 'observations' (that is, the symbolic regression trainingData); the second objective is the average tree depth of the tree-gp 'program'.
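
If a scale-independent first objective is preferred, the accumulated squared error can be divided by the number of observations. Below is a minimal variant of the evaluator above, built only from the calls already shown (the MSE normalization is the only change):

tgp.setCostFunction((CostFunction) (solution, mogpConfig) -> {
 List<Observation> observations = mogpConfig.getObservations();
 double sse = 0;
 for(Observation observation : observations){
    solution.execute(observation);
    double diff = observation.getOutput(0) - observation.getPredictedOutput(0);
    sse += diff * diff;
 }

 double cost1 = sse / observations.size(); // mean squared error instead of raw SSE
 double cost2 = solution.averageTreeDepth();

 return Arrays.asList(cost1, cost2);
});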

Test the programs in the pareto front obtained from the TreeGP evolution

Once the pareto front for the MOGP is obtained, we can access each solution in the pareto front, as in the code below:

MOOGPSolution solution = (MOOGPSolution)pareto_front.get(0);
Solution program = solution.getGp();

These two lines return the tree-gp program associated with the first solution on the pareto front.

Calling program.mathExpression() returns the math expression representing the gp program, a sample of which is shown below:

Trees[0]: 1.0 - (if(1.0 < if(1.0 < 1.0, if(1.0 < v0, 1.0, 1.0), if(1.0 < (v1 * v0) + (1.0 / 1.0), 1.0 + 1.0, 1.0)), 1.0, v0 ^ 1.0))
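
To inspect every solution on the pareto front rather than just the first, the same accessors can be applied in a loop (a sketch assuming mathExpression() returns the printable expression shown above):

for(int i = 0; i < pareto_front.size(); ++i) {
 MOOGPSolution s = (MOOGPSolution) pareto_front.get(i);
 Solution p = s.getGp();
 System.out.println("solution " + i + ": " + p.mathExpression());
}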

The program retrieved from the pareto front in the step above can then be used for prediction on the testing data, as shown in the sample code below:

for(Observation observation : testingData) {
 program.execute(observation);
 double predicted = observation.getPredictedOutput(0);
 double actual = observation.getOutput(0);

 logger.info("predicted: {}\tactual: {}", predicted, actual);
}
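
To summarize generalization in a single number, the per-observation errors from the loop above can be accumulated into a test-set mean squared error (a sketch using only the accessors already shown):

double sse = 0;
for(Observation observation : testingData) {
 program.execute(observation);
 double diff = observation.getOutput(0) - observation.getPredictedOutput(0);
 sse += diff * diff;
}
logger.info("test MSE: {}", sse / testingData.size());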

Display the Pareto Front

The following code shows how to display the pareto front generated from MOGP:

List<TupleTwo<Double, Double>> pareto_front_data = pareto_front.front2D();

ParetoFront chart = new ParetoFront(pareto_front_data, "Pareto Front for MO-GP");
chart.showIt(true);
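
The same front2D() data can also be logged directly. A sketch using the TupleTwo accessors _1() and _2() seen in the data-splitting code above, assuming each tuple holds the first and second objective values in that order:

for(TupleTwo<Double, Double> point : pareto_front_data) {
 System.out.println("cost1: " + point._1() + "\tcost2: " + point._2());
}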

Versions

1.0.3
1.0.2
1.0.1