com.hurence.logisland:logisland-redis_4-client-service

LogIsland is an event mining platform based on Kafka, designed to handle huge amounts of data in real time.

License

Categories

Redis Data Databases CLI User Interface
GroupId

com.hurence.logisland
ArtifactId

logisland-redis_4-client-service
Last Version

0.14.0
Release Date

Type

jar
Description

LogIsland is an event mining platform based on Kafka, designed to handle huge amounts of data in real time.
Project Organization

Hurence - Big Data Experts.

Download logisland-redis_4-client-service

How to add to project

Maven

<!-- https://jarcasting.com/artifacts/com.hurence.logisland/logisland-redis_4-client-service/ -->
<dependency>
    <groupId>com.hurence.logisland</groupId>
    <artifactId>logisland-redis_4-client-service</artifactId>
    <version>0.14.0</version>
</dependency>

Gradle (Groovy DSL)

// https://jarcasting.com/artifacts/com.hurence.logisland/logisland-redis_4-client-service/
implementation 'com.hurence.logisland:logisland-redis_4-client-service:0.14.0'

Gradle (Kotlin DSL)

// https://jarcasting.com/artifacts/com.hurence.logisland/logisland-redis_4-client-service/
implementation("com.hurence.logisland:logisland-redis_4-client-service:0.14.0")

Buildr

'com.hurence.logisland:logisland-redis_4-client-service:jar:0.14.0'

Ivy

<dependency org="com.hurence.logisland" name="logisland-redis_4-client-service" rev="0.14.0">
  <artifact name="logisland-redis_4-client-service" type="jar" />
</dependency>

Grape

@Grapes(
  @Grab(group='com.hurence.logisland', module='logisland-redis_4-client-service', version='0.14.0')
)

SBT

libraryDependencies += "com.hurence.logisland" % "logisland-redis_4-client-service" % "0.14.0"

Leiningen

[com.hurence.logisland/logisland-redis_4-client-service "0.14.0"]

Dependencies

compile (36)

Group / Artifact Type Version
org.slf4j : slf4j-api jar 1.7.16
com.hurence.logisland : logisland-api jar 0.14.0
commons-collections : commons-collections jar 3.2.1
com.hurence.logisland : logisland-cache_key_value-service-api jar 0.14.0
org.apache.commons : commons-csv jar 1.5
com.hurence.logisland : logisland-utils jar 0.14.0
org.apache.avro : avro jar 1.7.7
org.codehaus.jackson : jackson-core-asl jar 1.9.13
org.codehaus.jackson : jackson-mapper-asl jar 1.9.13
com.thoughtworks.paranamer : paranamer jar 2.3
org.xerial.snappy : snappy-java jar 1.0.5
org.apache.commons : commons-compress jar 1.4.1
org.tukaani : xz jar 1.0
com.fasterxml.jackson.dataformat : jackson-dataformat-yaml jar 2.6.6
com.fasterxml.jackson.core : jackson-core jar 2.6.6
org.yaml : snakeyaml jar 1.15
commons-cli : commons-cli jar 1.2
commons-codec : commons-codec jar 1.10
joda-time : joda-time jar 2.8.2
com.fasterxml.jackson.core : jackson-databind jar 2.6.6
com.fasterxml.jackson.core : jackson-annotations jar 2.6.6
com.googlecode.json-simple : json-simple jar 1.1
org.apache.commons : commons-lang3 jar 3.4
com.google.protobuf : protobuf-java jar 2.5.0
org.apache.curator : curator-test jar 2.11.0
org.apache.zookeeper : zookeeper jar 3.4.6
jline : jline jar 0.9.94
io.netty : netty jar 3.7.0.Final
org.javassist : javassist jar 3.18.1-GA
org.apache.commons : commons-math jar 2.2
org.json : json jar 20090211
com.101tec : zkclient jar 0.8
commons-io : commons-io jar 2.4
commons-logging : commons-logging jar 1.2
org.apache.commons : commons-pool2 jar 2.4.2
com.google.guava : guava jar 18.0

runtime (1)

Group / Artifact Type Version
org.slf4j : jcl-over-slf4j jar 1.7.25

test (2)

Group / Artifact Type Version
junit : junit jar 4.12
org.mockito : mockito-core jar 1.10.19

Project Modules

There are no modules declared in this project.

Logisland

Download the latest release build and chat with us on gitter

LogIsland is a scalable event mining platform designed to handle a high throughput of events.

It is heavily inspired by dataflow programming tools such as Apache NiFi, but with a highly scalable architecture.

LogIsland is completely open source and free even for commercial use. Hurence provides support if required.

Event mining Workflow

Here is an example of a typical event mining pipeline.

  1. Raw events (sensor data, logs, user click streams, ...) are sent to Kafka topics by an agent such as NiFi, Logstash, *Beats, Flume or Collectd
  2. Raw events are structured into Logisland Records (see the sketch after this list), then processed and eventually pushed back to another Kafka topic by a Logisland streaming job
  3. Records are sent to external short-lived storage (Elasticsearch, Solr, Couchbase, ...) for online analytics
  4. Records are sent to external long-lived storage (HBase, HDFS, ...) for offline analytics (aggregated reports or ML models)
  5. Logisland Processors handle Records to produce Alerts and Information from ML models
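
To make step 2 concrete, here is a rough sketch of what a structured Record could look like once a raw Apache access log line has been parsed, shown as simple key/value pairs. The field names follow the apache_parser configuration later in this README; the record_id metadata field and the sample values are purely illustrative, not an authoritative schema.

# Illustrative sketch only: a parsed Apache access log as a Logisland Record,
# using the value.fields declared by the apache_parser processor shown below.
record_type: apache_log
record_id: "a1b2c3d4"                        # assumed metadata field, for illustration
record_time: "27/Dec/2015:16:43:21 +0100"
src_ip: "123.45.67.89"
identd: "-"
user: "-"
http_method: "GET"
http_query: "/api/products"
http_version: "HTTP/1.1"
http_status: "200"
bytes_out: "1024"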

Online documentation

You can find the latest Logisland documentation, including a programming guide, on the project web page. This README file only contains basic setup instructions.

Browse the Java API documentation for more information.

You can follow a getting-started guide through the Apache log indexing tutorial.

Building Logisland

To build from source, just clone the repository and package it with Maven (Logisland requires Maven 3.5.2 or later):

git clone https://github.com/Hurence/logisland.git
cd logisland
mvn clean package

The final package is available at logisland-assembly/target/logisland-1.3.0-bin.tar.gz.

You can also download the latest release build

If you want to build with OpenCV support, install OpenCV first and then run:

mvn clean package -Dopencv

Quick start

Local Setup

Alternatively, you can deploy Logisland on any Linux server from which Kafka and Spark are available.

Replace all versions in the code below with the required versions (Spark version, Logisland version for your specific HDP version, Kafka Scala version, Kafka version, etc.).

The Kafka distributions are available at this address: <https://kafka.apache.org/downloads>

The last tested Scala version for Kafka is 2.11, and the preferred Kafka release is 0.10.2.2.

The last tested Spark version is 2.3.1, on Hadoop 2.7.

You should choose the Spark version that is compatible with your environment and Hadoop installation if you have one (for example Spark 2.1.0 on Hadoop 2.7). Note that Hadoop 2.7 can run Spark 2.4.x, 2.3.x, 2.2.x and 2.1.x. Check what is available at this URL: http://d3kbcqa49mib13.cloudfront.net/

# install Kafka & start a zookeeper node + a broker
curl -s https://www-us.apache.org/dist/kafka/<kafka_version>/kafka_<scala_version>-<kafka_version>.tgz | tar -xz -C /usr/local/
cd /usr/local/kafka_<scala_version>-<kafka_version>
nohup bin/zookeeper-server-start.sh config/zookeeper.properties > zookeeper.log 2>&1 &
JMX_PORT=10101 nohup bin/kafka-server-start.sh config/server.properties > kafka.log 2>&1 &

# install Spark (choose the spark version compatible with your hadoop distrib if you have one)
curl -s http://d3kbcqa49mib13.cloudfront.net/spark-<spark-version>-bin-hadoop<hadoop-version>.tgz | tar -xz -C /usr/local/
export SPARK_HOME=/usr/local/spark-<spark-version>-bin-hadoop<hadoop-version>

# install Logisland 1.3.0
curl -s https://github.com/Hurence/logisland/releases/download/v1.3.0/logisland-1.3.0-bin.tar.gz | tar -xz -C /usr/local/
cd /usr/local/logisland-1.3.0

# launch a logisland job
bin/logisland.sh --conf conf/index-apache-logs.yml

You can find some Logisland job configuration samples under the $LOGISLAND_HOME/conf folder.

Docker setup

The easiest way to start is to launch a Docker Compose stack:

# launch logisland environment
cd /tmp
curl -s https://raw.githubusercontent.com/Hurence/logisland/master/logisland-framework/logisland-resources/src/main/resources/conf/docker-compose.yml > docker-compose.yml
docker-compose up

# sample execution of a logisland job
docker exec -i -t logisland bin/logisland.sh --conf conf/index-apache-logs.yml

Hadoop distribution setup

Launching Logisland streaming apps is as easy as unarchiving the Logisland distribution on an edge node, editing a config with YARN parameters and submitting the job.

# install Logisland 1.3.0
curl -s https://github.com/Hurence/logisland/releases/download/v1.3.0/logisland-1.3.0-bin-hdp2.5.tar.gz | tar -xz -C /usr/local/
cd /usr/local/logisland-1.3.0
bin/logisland.sh --conf conf/index-apache-logs.yml

Start a stream processing job

A Logisland stream processing job is made of several components: at least one streaming engine and one or more stream processors. You set them up through a YAML configuration file.
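
As a rough outline of how the fragments shown in the rest of this section fit together, such a file looks roughly like this (the nesting shown here is indicative; refer to the sample files under conf/ for the authoritative layout):

# Skeleton of a Logisland job configuration file; keys are taken from the
# examples below, values and comments are placeholders.
version: 1.3.0
documentation: LogIsland job config file
engine:
  component: com.hurence.logisland.engine.spark.KafkaStreamProcessingEngine
  type: engine
  configuration:
    # Spark / YARN settings go here
  controllerServiceConfigurations:
    # shared services (Elasticsearch, Redis, HBase, ...) go here
  streamConfigurations:
    - stream: my_stream
      component: com.hurence.logisland.stream.spark.KafkaRecordStreamParallelProcessing
      type: stream
      configuration:
        # Kafka topics and serializers go here
      processorConfigurations:
        # the pipeline of Processors applied to each Record goes here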

Please note that events are serialized against an Avro schema while transiting through any Kafka topic. Every spark.streaming.batchDuration (time window), each processor handles its batch of Records and eventually generates some new Records to the output topic.

The following configuration.yml file contains a sample job that parses raw Apache logs and sends them to Elasticsearch.

The first part is the ProcessingEngine configuration (here, a Spark Streaming engine):

version: 1.3.0
documentation: LogIsland job config file
engine:
  component: com.hurence.logisland.engine.spark.KafkaStreamProcessingEngine
  type: engine
  documentation: Index some apache logs with logisland
  configuration:
    spark.app.name: IndexApacheLogsDemo
    spark.master: yarn-cluster
    spark.driver.memory: 1G
    spark.driver.cores: 1
    spark.executor.memory: 2G
    spark.executor.instances: 4
    spark.executor.cores: 2
    spark.yarn.queue: default
    spark.yarn.maxAppAttempts: 4
    spark.yarn.am.attemptFailuresValidityInterval: 1h
    spark.yarn.max.executor.failures: 20
    spark.yarn.executor.failuresValidityInterval: 1h
    spark.task.maxFailures: 8
    spark.serializer: org.apache.spark.serializer.KryoSerializer
    spark.streaming.batchDuration: 4000
    spark.streaming.backpressure.enabled: false
    spark.streaming.unpersist: false
    spark.streaming.blockInterval: 500
    spark.streaming.kafka.maxRatePerPartition: 3000
    spark.streaming.timeout: -1
    spark.streaming.kafka.maxRetries: 3
    spark.streaming.ui.retainedBatches: 200
    spark.streaming.receiver.writeAheadLog.enable: false
    spark.ui.port: 4050
  controllerServiceConfigurations:

Then comes a list of ControllerServices, which are the shared components that interact with the outside world (Elasticsearch, HBase, ...):

- controllerService: datastore_service
  component: com.hurence.logisland.service.elasticsearch.Elasticsearch_6_6_2_ClientService
  type: service
  documentation: elasticsearch service
  configuration:
    hosts: sandbox:9200
    batch.size: 5000
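
Since this page documents the logisland-redis_4-client-service artifact, note that a Redis-backed service can be declared in exactly the same way. The snippet below is only an illustrative sketch: the component class name and the property keys (connection.string, redis.mode, database.index) are assumptions used to show the pattern, so check the component documentation for the exact names.

# Illustrative sketch only: a Redis cache service declared alongside the
# Elasticsearch service above. Class name and property keys are assumptions.
- controllerService: redis_cache_service
  component: com.hurence.logisland.redis.service.RedisKeyValueCacheService
  type: service
  documentation: redis key/value cache service
  configuration:
    connection.string: sandbox:6379
    redis.mode: standalone
    database.index: 0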

Then comes a list of RecordStreams, each of which routes the input batch of Records through a pipeline of Processors to the output topic:

streamConfigurations:
  - stream: parsing_stream
    component: com.hurence.logisland.stream.spark.KafkaRecordStreamParallelProcessing
    type: stream
    documentation: a processor that converts raw apache logs into structured log records
    configuration:
      kafka.input.topics: logisland_raw
      kafka.output.topics: logisland_events
      kafka.error.topics: logisland_errors
      kafka.input.topics.serializer: none
      kafka.output.topics.serializer: com.hurence.logisland.serializer.KryoSerializer
      kafka.error.topics.serializer: com.hurence.logisland.serializer.JsonSerializer
      kafka.metadata.broker.list: sandbox:9092
      kafka.zookeeper.quorum: sandbox:2181
      kafka.topic.autoCreate: true
      kafka.topic.default.partitions: 4
      kafka.topic.default.replicationFactor: 1

Then comes the configuration of the Processor pipeline. Each Record will go through these components. Here we first parse the raw Apache logs and then add the resulting records to Elasticsearch. Please note that the datastore processor makes use of the previously defined ControllerService.

processorConfigurations:

  - processor: apache_parser
    component: com.hurence.logisland.processor.SplitText
    type: parser
    documentation: a parser that produces records from an Apache log regex
    configuration:
      record.type: apache_log
      value.regex: (\S+)\s+(\S+)\s+(\S+)\s+\[([\w:\/]+\s[+\-]\d{4})\]\s+"(\S+)\s+(\S+)\s*(\S*)"\s+(\S+)\s+(\S+)
      value.fields: src_ip,identd,user,record_time,http_method,http_query,http_version,http_status,bytes_out

  - processor: es_publisher
    component: com.hurence.logisland.processor.datastore.BulkPut
    type: processor
    documentation: a processor that indexes processed events in elasticsearch
    configuration:
      datastore.client.service: datastore_service
      default.collection: logisland
      default.type: event
      timebased.collection: yesterday
      collection.field: search_index
      type.field: record_type

Once you've edited your configuration file, you can submit it to the execution engine with the following command:

bin/logisland.sh --conf conf/job-configuration.yml

You should jump to the tutorials section of the documentation, and then continue with the components documentation.

Contributing

Please review the Contribution to Logisland guide for information on how to get started contributing to the project.

Versions

0.14.0