Local Outlier Factor (LOF)

Java implementation of a number of Local Outlier Factor algorithms

License: MIT
Categories: Java Languages
GroupId: com.github.chen0040
ArtifactId: java-local-outlier-factor
Last Version: 1.0.4
Type: jar
Description: Java implementation of a number of Local Outlier Factor algorithms
Project URL: https://github.com/chen0040/java-local-outlier-factor
Source Code Management: https://github.com/chen0040/java-local-outlier-factor

How to add to project

Maven:

<!-- https://jarcasting.com/artifacts/com.github.chen0040/java-local-outlier-factor/ -->
<dependency>
    <groupId>com.github.chen0040</groupId>
    <artifactId>java-local-outlier-factor</artifactId>
    <version>1.0.4</version>
</dependency>

Gradle (Groovy DSL):

// https://jarcasting.com/artifacts/com.github.chen0040/java-local-outlier-factor/
implementation 'com.github.chen0040:java-local-outlier-factor:1.0.4'

Gradle (Kotlin DSL):

// https://jarcasting.com/artifacts/com.github.chen0040/java-local-outlier-factor/
implementation("com.github.chen0040:java-local-outlier-factor:1.0.4")

Buildr:

'com.github.chen0040:java-local-outlier-factor:jar:1.0.4'

Ivy:

<dependency org="com.github.chen0040" name="java-local-outlier-factor" rev="1.0.4">
  <artifact name="java-local-outlier-factor" type="jar" />
</dependency>

Groovy Grape:

@Grapes(
  @Grab(group='com.github.chen0040', module='java-local-outlier-factor', version='1.0.4')
)

SBT:

libraryDependencies += "com.github.chen0040" % "java-local-outlier-factor" % "1.0.4"

Leiningen:

[com.github.chen0040/java-local-outlier-factor "1.0.4"]

Dependencies

compile (4)

Group / Artifact Type Version
org.slf4j : slf4j-api jar 1.7.20
org.slf4j : slf4j-log4j12 jar 1.7.20
org.apache.commons : commons-math3 jar 3.2
com.github.chen0040 : java-data-frame jar 1.0.9

provided (1)

Group / Artifact Type Version
org.projectlombok : lombok jar 1.16.6

test (10)

Group / Artifact Type Version
org.testng : testng jar 6.9.10
org.hamcrest : hamcrest-core jar 1.3
org.hamcrest : hamcrest-library jar 1.3
org.assertj : assertj-core jar 3.5.2
org.powermock : powermock-core jar 1.6.5
org.powermock : powermock-api-mockito jar 1.6.5
org.powermock : powermock-module-junit4 jar 1.6.5
org.powermock : powermock-module-testng jar 1.6.5
org.mockito : mockito-core jar 2.0.2-beta
org.mockito : mockito-all jar 2.0.2-beta

Project Modules

There are no modules declared in this project.

java-local-outlier-factor

This package implements a number of local outlier factor algorithms for outlier detection and for finding anomalous data.


Features

  • LOF (Local Outlier Factor; its score is sketched just below)
  • LDOF (Local Distance-based Outlier Factor)
  • LOCI (Local Correlation Integral)
  • CBLOF (Cluster-Based Local Outlier Factor)
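
For reference, the LOF score itself (the textbook definition from Breunig et al., 2000, sketched here for orientation rather than quoted from this package's source) compares the local reachability density of a point p with that of its k nearest neighbours:

\mathrm{LOF}_k(p) = \frac{1}{|N_k(p)|} \sum_{o \in N_k(p)} \frac{\mathrm{lrd}_k(o)}{\mathrm{lrd}_k(p)}

A score near 1 means p is roughly as dense as its neighbourhood, while a score well above 1 marks p as an outlier. The minPtsLB/minPtsUB setters in the examples below presumably bound the range of neighbourhood sizes k that the implementation evaluates, following the MinPts lower/upper bounds in the original paper.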

Install

Add the following dependency to your POM file:

<dependency>
  <groupId>com.github.chen0040</groupId>
  <artifactId>java-local-outlier-factor</artifactId>
  <version>1.0.4</version>
</dependency>

Usage

The anomaly detection algorithms take data that has been prepared and stored in a data frame (please refer to this link on how to create a data frame from a file or from scratch).
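
For example, here is a minimal sketch (column names are illustrative) of declaring a schema and creating an empty data frame from scratch, using the same DataQuery builder that appears in the demos further down:

DataQuery.DataFrameQueryBuilder schema = DataQuery.blank()
      .newInput("c1")        // numeric input column
      .newInput("c2")        // numeric input column
      .newOutput("anomaly")  // label column, used in the demos only for evaluation
      .end();

DataFrame dataFrame = schema.build(); // empty data frame with the declared columns
// rows can then be populated, e.g. with the Sampler builders shown in the demos below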

All LOF algorithm variants are trained by unsupervised learning.

The following method trains an algorithm:

lof.fitAndTransform(dataFrame);

The following method returns true if the given dataRow (a row in a data frame) is an outlier:

boolean isOutlier = lof.isAnomaly(dataRow);

Local Outlier Factor (LOF)

To create and train the LOF, run the following code:

LOF method = new LOF();
method.setMinPtsLB(3);    // lower bound of the neighbourhood-size (MinPts) range
method.setMinPtsUB(15);   // upper bound of the neighbourhood-size range
method.setThreshold(0.2); // threshold used to decide whether a row counts as an outlier
DataFrame resultantTrainedData = method.fitAndTransform(trainingData);
System.out.println(resultantTrainedData.head(10));

To test the trained method on new data, run:

boolean outlier = method.isAnomaly(dataRow);

Cluster-Based Local Outlier Factor (CBLOF)
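
As background (the standard formulation from He, Xu and Deng, 2003; the package's exact weighting may differ), CBLOF first clusters the data, separates the clusters into large (LC) and small (SC) ones, and then scores each point by its distance to a large cluster, scaled by cluster size:

\mathrm{CBLOF}(p) =
\begin{cases}
|C_i| \cdot d(p, C_i) & \text{if } p \in C_i \text{ and } C_i \in \mathrm{LC} \\
|C_i| \cdot \min_{C_j \in \mathrm{LC}} d(p, C_j) & \text{if } p \in C_i \text{ and } C_i \in \mathrm{SC}
\end{cases}

Points that fall in small clusters far from any large cluster therefore receive the highest scores.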

To create and train the CBLOF, run the following code:

CBLOF method = new CBLOF();
DataFrame resultantTrainedData = method.fitAndTransform(trainingData);
System.out.println(resultantTrainedData.head(10));

To test the trained method on new data, run:

boolean outlier = method.isAnomaly(dataRow);

The problem that we will use for the demos below is the following anomaly detection problem:

(Figure: scikit-learn example for one-class classification)

LOF

Below is the sample code which illustrates how to use LOF to detect outliers in the above problem:

DataQuery.DataFrameQueryBuilder schema = DataQuery.blank()
      .newInput("c1")
      .newInput("c2")
      .newOutput("anomaly")
      .end();

Sampler.DataSampleBuilder negativeSampler = new Sampler()
      .forColumn("c1").generate((name, index) -> randn() * 0.3 + (index % 2 == 0 ? -2 : 2))
      .forColumn("c2").generate((name, index) -> randn() * 0.3 + (index % 2 == 0 ? -2 : 2))
      .forColumn("anomaly").generate((name, index) -> 0.0)
      .end();

Sampler.DataSampleBuilder positiveSampler = new Sampler()
      .forColumn("c1").generate((name, index) -> rand(-4, 4))
      .forColumn("c2").generate((name, index) -> rand(-4, 4))
      .forColumn("anomaly").generate((name, index) -> 1.0)
      .end();

DataFrame data = schema.build();

data = negativeSampler.sample(data, 20);
data = positiveSampler.sample(data, 20);

System.out.println(data.head(10));

LOF method = new LOF();
method.setParallel(true); // enable parallel computation
method.setMinPtsLB(3);
method.setMinPtsUB(10);
method.setThreshold(0.5);
DataFrame learnedData = method.fitAndTransform(data);

BinaryClassifierEvaluator evaluator = new BinaryClassifierEvaluator();

for(int i = 0; i < learnedData.rowCount(); ++i){
   // compare the learned anomaly label against the ground-truth label
   boolean predicted = learnedData.row(i).categoricalTarget().equals("1");
   boolean actual = data.row(i).target() == 1.0;
   evaluator.evaluate(actual, predicted);
   logger.info("predicted: {}\texpected: {}", predicted, actual);
}

evaluator.report();

Cluster-Based LOF

Below is the sample code which illustrates how to use CBLOF to detect outliers in the above problem:

DataQuery.DataFrameQueryBuilder schema = DataQuery.blank()
      .newInput("c1")
      .newInput("c2")
      .newOutput("anomaly")
      .end();

Sampler.DataSampleBuilder negativeSampler = new Sampler()
      .forColumn("c1").generate((name, index) -> randn() * 0.3 + (index % 2 == 0 ? -2 : 2))
      .forColumn("c2").generate((name, index) -> randn() * 0.3 + (index % 2 == 0 ? -2 : 2))
      .forColumn("anomaly").generate((name, index) -> 0.0)
      .end();

Sampler.DataSampleBuilder positiveSampler = new Sampler()
      .forColumn("c1").generate((name, index) -> rand(-4, 4))
      .forColumn("c2").generate((name, index) -> rand(-4, 4))
      .forColumn("anomaly").generate((name, index) -> 1.0)
      .end();

DataFrame data = schema.build();

data = negativeSampler.sample(data, 200);
data = positiveSampler.sample(data, 200);

System.out.println(data.head(10));


CBLOF method = new CBLOF();
method.setParallel(false);
DataFrame learnedData = method.fitAndTransform(data);

BinaryClassifierEvaluator evaluator = new BinaryClassifierEvaluator();

for(int i = 0; i < learnedData.rowCount(); ++i){
   boolean predicted = learnedData.row(i).categoricalTarget().equals("1");
   boolean actual = data.row(i).target() == 1.0;
   evaluator.evaluate(actual, predicted);
   logger.info("predicted: {}\texpected: {}", predicted, actual);
}

evaluator.report();

LDOF
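
As background (from Zhang, Hutter and Jin, 2009, quoted here for orientation), the LDOF score of a point p is the ratio of its average distance to its k nearest neighbours over the average pairwise distance among those neighbours:

\mathrm{LDOF}_k(p) = \frac{\bar{d}_k(p)}{\bar{D}_k(p)}

where \bar{d}_k(p) is the mean distance from p to its k nearest neighbours and \bar{D}_k(p) is the mean pairwise distance within that neighbour set; values well above 1 indicate an outlier.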

Below is the sample code which illustrates how to use LDOF to detect outliers in the above problem:

DataQuery.DataFrameQueryBuilder schema = DataQuery.blank()
      .newInput("c1")
      .newInput("c2")
      .newOutput("anomaly")
      .end();

Sampler.DataSampleBuilder negativeSampler = new Sampler()
      .forColumn("c1").generate((name, index) -> randn() * 0.3 + (index % 2 == 0 ? -2 : 2))
      .forColumn("c2").generate((name, index) -> randn() * 0.3 + (index % 2 == 0 ? -2 : 2))
      .forColumn("anomaly").generate((name, index) -> 0.0)
      .end();

Sampler.DataSampleBuilder positiveSampler = new Sampler()
      .forColumn("c1").generate((name, index) -> rand(-4, 4))
      .forColumn("c2").generate((name, index) -> rand(-4, 4))
      .forColumn("anomaly").generate((name, index) -> 1.0)
      .end();

DataFrame data = schema.build();

data = negativeSampler.sample(data, 20);
data = positiveSampler.sample(data, 20);

System.out.println(data.head(10));

LDOF method = new LDOF();
DataFrame learnedData = method.fitAndTransform(data);

BinaryClassifierEvaluator evaluator = new BinaryClassifierEvaluator();
for(int i = 0; i < learnedData.rowCount(); ++i) {
   boolean predicted = learnedData.row(i).categoricalTarget().equals("1");
   boolean actual = data.row(i).target() == 1.0;

   evaluator.evaluate(actual, predicted);
   logger.info("predicted: {}\texpected: {}", predicted, actual);
}

evaluator.report();

LOCI
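
As background (from Papadimitriou et al., 2003, quoted here for orientation), LOCI flags a point by its multi-granularity deviation factor (MDEF):

\mathrm{MDEF}(p, r, \alpha) = 1 - \frac{n(p, \alpha r)}{\hat{n}(p, r, \alpha)}

where n(p, \alpha r) counts the neighbours of p within radius \alpha r, and \hat{n}(p, r, \alpha) is the average of that count over all points in the r-neighbourhood of p. A point is reported as an outlier when its MDEF exceeds k_\sigma times the standard deviation of MDEF in its neighbourhood; this is presumably what the setAlpha and setKSigma parameters in the sample below control.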

Below is the sample code which illustrates how to use LOCI to detect outliers in the above problem:

DataQuery.DataFrameQueryBuilder schema = DataQuery.blank()
      .newInput("c1")
      .newInput("c2")
      .newOutput("anomaly")
      .end();

Sampler.DataSampleBuilder negativeSampler = new Sampler()
      .forColumn("c1").generate((name, index) -> randn() * 0.3 + (index % 2 == 0 ? -2 : 2))
      .forColumn("c2").generate((name, index) -> randn() * 0.3 + (index % 2 == 0 ? -2 : 2))
      .forColumn("anomaly").generate((name, index) -> 0.0)
      .end();

Sampler.DataSampleBuilder positiveSampler = new Sampler()
      .forColumn("c1").generate((name, index) -> rand(-4, 4))
      .forColumn("c2").generate((name, index) -> rand(-4, 4))
      .forColumn("anomaly").generate((name, index) -> 1.0)
      .end();

DataFrame data = schema.build();

data = negativeSampler.sample(data, 20);
data = positiveSampler.sample(data, 20);

System.out.println(data.head(10));

LOCI method = new LOCI();
method.setAlpha(0.5);
method.setKSigma(3);
DataFrame learnedData = method.fitAndTransform(data);

BinaryClassifierEvaluator evaluator = new BinaryClassifierEvaluator();

for(int i = 0; i < learnedData.rowCount(); ++i){
   boolean predicted = learnedData.row(i).categoricalTarget().equals("1");
   boolean actual = data.row(i).target() == 1.0;
   evaluator.evaluate(actual, predicted);
   logger.info("predicted: {}\texpected: {}", predicted, actual);
}

evaluator.report();

Versions

Version
1.0.4
1.0.3
1.0.2
1.0.1