elasticsearch-lambda

Framework For Lambda Architecture on Elasticsearch

Categories: Search, Business Logic Libraries, Elasticsearch

GroupId: com.inin.analytics

ArtifactId: elasticsearch-lambda

Last Version: 1.2.1

Type: jar

Description: Framework For Lambda Architecture on Elasticsearch

Project URL: https://github.com/drewdahlke/elasticsearch-lambda

Source Code Management: https://github.com/drewdahlke/elasticsearch-lambda.git

Download elasticsearch-lambda

How to add to project

Maven

<!-- https://jarcasting.com/artifacts/com.inin.analytics/elasticsearch-lambda/ -->
<dependency>
    <groupId>com.inin.analytics</groupId>
    <artifactId>elasticsearch-lambda</artifactId>
    <version>1.2.1</version>
</dependency>

Gradle (Groovy DSL)

// https://jarcasting.com/artifacts/com.inin.analytics/elasticsearch-lambda/
implementation 'com.inin.analytics:elasticsearch-lambda:1.2.1'

Gradle (Kotlin DSL)

// https://jarcasting.com/artifacts/com.inin.analytics/elasticsearch-lambda/
implementation("com.inin.analytics:elasticsearch-lambda:1.2.1")

Buildr

'com.inin.analytics:elasticsearch-lambda:jar:1.2.1'

Ivy

<dependency org="com.inin.analytics" name="elasticsearch-lambda" rev="1.2.1">
  <artifact name="elasticsearch-lambda" type="jar" />
</dependency>

Grape

@Grapes(
  @Grab(group='com.inin.analytics', module='elasticsearch-lambda', version='1.2.1')
)

SBT

libraryDependencies += "com.inin.analytics" % "elasticsearch-lambda" % "1.2.1"

Leiningen

[com.inin.analytics/elasticsearch-lambda "1.2.1"]

Dependencies

compile (18)

Group / Artifact Type Version
commons-codec : commons-codec jar 1.4
org.codehaus.jackson : jackson-mapper-lgpl jar 1.9.2
org.codehaus.jackson : jackson-core-lgpl jar 1.9.2
org.codehaus.jackson : jackson-mapper-asl jar 1.9.2
org.codehaus.jackson : jackson-core-asl jar 1.9.2
us.monoid.web : resty jar 0.3.2
joda-time : joda-time jar 2.3
org.apache.curator : curator-recipes jar 2.4.0
org.apache.curator : curator-framework jar 2.3.0
com.amazonaws : aws-java-sdk jar 1.9.38
org.elasticsearch : elasticsearch jar 1.6.0
org.slf4j : slf4j-api jar 1.7.5
org.slf4j : jcl-over-slf4j jar 1.7.5
org.slf4j : log4j-over-slf4j jar 1.7.5
org.slf4j : jul-to-slf4j jar 1.7.5
ch.qos.logback : logback-classic jar 1.0.13
ch.qos.logback : logback-core jar 1.0.13
org.javassist : javassist jar 3.18.1-GA

provided (11)

Group / Artifact Type Version
org.apache.avro : avro jar 1.7.5
org.apache.mrunit : mrunit jar 1.0.0
com.google.code.gson : gson jar 2.3.1
commons-lang : commons-lang jar 2.5
commons-io : commons-io jar 2.4
org.apache.commons : commons-compress jar 1.5
commons-httpclient : commons-httpclient jar 3.1
org.apache.curator : curator-test jar 2.4.0
org.apache.hadoop : hadoop-mapreduce-client-app jar 2.2.0
org.apache.hadoop : hadoop-common jar 2.2.0
org.apache.hadoop : hadoop-hdfs jar 2.2.0

test (2)

Group / Artifact Type Version
junit : junit jar 4.11
org.mockito : mockito-all jar 1.9.5

Project Modules

There are no modules declared in this project.

Elasticsearch-Lambda

What is Lambda

For a primer on Lambda Architecture see http://lambda-architecture.net/. At Interactive Intelligence we're applying Lambda Architecture to Elasticsearch. In our case that means we stream data generated by Storm into Elasticsearch in real time, and then re-build the whole thing every night in Hadoop. This allows some architectural niceties such as:

  • Changes to analyzers & tokenizers are rolled out for all historical data every night, automatically
  • New features & bug fixes that affect the data being indexed are rolled out every night, automatically. No data repair/backpopulation scripts are ever required.
  • Tune the number of shards and shard routing strategies on data already written. Changes roll out every night, automatically.
  • With the button that rebuilds the cluster getting hit nightly, it is a well-oiled button.
  • If data gets corrupt, no heroics are required. Hit the rebuild button and grab a beer.
  • Backups? Why bother when you can just hit the rebuild button?

Obviously there's a decent bit of work up front to get all this working. Since it's a fairly generic problem, we decided to open source our infrastructure.

Features

A new way to bulk load elasticsearch from hadoop

  • Build indexes offline, without touching your production cluster
  • Run Elasticsearch unmodified, entirely within YARN
  • Build snapshots of indexes without requiring enough disk space on task trackers to hold an entire index
  • Load those indexes into your cluster using the snapshot restore functionality built into Elasticsearch
  • .. and more to come. We're in the process of pulling as much into this repo as we can.

How it works

The meat is in BaseEsReducer, where individual reducer tasks receive all of the data for a single shard of a single index. Each task creates an embedded Elasticsearch instance, bulk loads it locally in-JVM, and then creates a snapshot. Discovery is disabled and the Elasticsearch instances do not form a cluster with each other. Once bulk loading a shard is complete, it is flushed, optimized, snapshotted, and then transferred to a snapshot repository (S3, HDFS, or local FS). After the job is complete, placeholder shards are generated for any shards that received no data, so that the index is complete.
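
To make that concrete, here is a minimal sketch of the per-shard flow using an embedded Elasticsearch 1.x node. This is illustrative only, not the actual BaseEsReducer code; the index name, type, local paths, and repository settings are placeholder assumptions.

import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.node.Node;
import org.elasticsearch.node.NodeBuilder;

public class EmbeddedShardLoaderSketch {
    public static void main(String[] args) {
        // Standalone, non-clustered node: local discovery only, data on the task tracker's local disk.
        Node node = NodeBuilder.nodeBuilder()
                .local(true)                                  // never joins a real cluster
                .settings(ImmutableSettings.settingsBuilder()
                        .put("path.data", "/tmp/esrawdata")   // placeholder local data dir
                        .put("http.enabled", false))
                .node();
        Client client = node.client();

        // One index with a single shard; the real job writes exactly one shard's worth of data here.
        client.admin().indices().prepareCreate("my_index_shard0")
                .setSettings(ImmutableSettings.settingsBuilder()
                        .put("index.number_of_shards", 1)
                        .put("index.number_of_replicas", 0))
                .get();

        // Bulk load the documents routed to this shard (normally the reducer's input values).
        client.prepareBulk()
                .add(client.prepareIndex("my_index_shard0", "doc").setSource("{\"field\":\"value\"}"))
                .get();

        // Flush and optimize before snapshotting, as described above.
        client.admin().indices().prepareFlush("my_index_shard0").get();
        client.admin().indices().prepareOptimize("my_index_shard0").setMaxNumSegments(1).get();

        // Register a filesystem snapshot repository and snapshot the freshly built shard.
        client.admin().cluster().preparePutRepository("my_backup")
                .setType("fs")
                .setSettings(ImmutableSettings.settingsBuilder()
                        .put("location", "/tmp/bulkload"))    // placeholder snapshot staging dir
                .get();
        client.admin().cluster().prepareCreateSnapshot("my_backup", "my_index_shard0_snapshot")
                .setIndices("my_index_shard0")
                .setWaitForCompletion(true)
                .get();

        node.close();
    }
}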

By making reducers responsible for only a single shard's worth of data at a time, the total disk space required on a task tracker is roughly

((shard data) + (shard snapshot)) * (num reducers per task tracker)

For example, a 10 GB shard plus its 10 GB snapshot, with 4 reducers running per task tracker, needs roughly 80 GB of local disk.

After indexes have been generated, they can be loaded in using the snapshot restore functionality built into Elasticsearch. The index promotion process maintains state in ZooKeeper. This is in the process of being open sourced.
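
As a rough illustration of that load step, the snippet below registers a snapshot repository on the production cluster and restores an offline-built index from it. The repository, snapshot, and index names are placeholder assumptions; only the snapshot restore API itself comes from Elasticsearch.

import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.ImmutableSettings;

public class SnapshotRestoreSketch {
    // `client` is assumed to be a Client connected to the production cluster.
    public static void restore(Client client) {
        // The production cluster must know about the repository the Hadoop job snapshotted into.
        client.admin().cluster().preparePutRepository("my_backup")
                .setType("fs")
                .setSettings(ImmutableSettings.settingsBuilder()
                        .put("location", "/mnt/snapshotrepo"))   // placeholder repository location
                .get();

        // Pull the offline-built index in via the built-in snapshot restore functionality.
        client.admin().cluster().prepareRestoreSnapshot("my_backup", "my_index_shard0_snapshot")
                .setIndices("my_index_shard0")
                .setWaitForCompletion(true)
                .get();
    }
}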

Maven

<repository>
    <id>oss-sonatype</id>
    <name>oss-sonatype</name>
    <url>https://oss.sonatype.org/content/groups/public/</url>
</repository>

<dependency>
    <artifactId>elasticsearch-lambda</artifactId>
    <groupId>com.inin.analytics</groupId>
    <version>1.0.25</version>
</dependency>

Shard routing

In order to index 1 shard per reducer at a time, elasticsearch-lambda relies on manual shard routing. If you've got big indexes (probably why you're here), then you'll almost certainly want a custom routing strategy so that searches can hit a subset of shards.

To create your own, implement the ElasticsearchRoutingStrategy interface and make use of it in the setup method of the ExampleJobPrep job. The default works as follows:

ElasticsearchRoutingStrategyV1 takes two parameters: numShards and numShardsPerOrg. A naive approach would be routing all data for one customer to one shard. To avoid hotspotting shards with large customers, this strategy lets you spread the load across multiple shards. For example, with 10 shards and 3 per customer, customer A might sit on shards 1, 3, 5 while customer B sits on shards 2, 3, 8. Setting the inputs to 10 & 10 would spread all customers evenly across all 10 shards.
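
For a feel of what such a strategy does, here is a simplified standalone sketch of the idea: each customer (org) is mapped onto numShardsPerOrg of the numShards shards, and individual documents are spread across that subset. The class, method names, and hashing scheme are assumptions for illustration; they do not mirror the real ElasticsearchRoutingStrategyV1 implementation.

import java.util.ArrayList;
import java.util.List;

public class SimpleOrgRoutingSketch {
    private final int numShards;
    private final int numShardsPerOrg;

    public SimpleOrgRoutingSketch(int numShards, int numShardsPerOrg) {
        this.numShards = numShards;
        this.numShardsPerOrg = numShardsPerOrg;
    }

    // The subset of shards a given org's documents may land on.
    public List<Integer> shardsForOrg(String orgId) {
        int start = (orgId.hashCode() & Integer.MAX_VALUE) % numShards;
        List<Integer> shards = new ArrayList<Integer>();
        for (int i = 0; i < numShardsPerOrg; i++) {
            shards.add((start + i) % numShards);
        }
        return shards;
    }

    // Pick one of the org's shards for a specific document.
    public int shardForDocument(String orgId, String docId) {
        List<Integer> shards = shardsForOrg(orgId);
        return shards.get((docId.hashCode() & Integer.MAX_VALUE) % shards.size());
    }

    public static void main(String[] args) {
        // 10 shards total, each customer confined to 3 of them, so per-customer
        // searches only need to hit 3 shards instead of all 10.
        SimpleOrgRoutingSketch routing = new SimpleOrgRoutingSketch(10, 3);
        System.out.println("Org A shards: " + routing.shardsForOrg("orgA"));
        System.out.println("Doc shard:    " + routing.shardForDocument("orgA", "doc-42"));
    }
}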

EMR Example Steps

  • generateExampleData 1000 hdfs:///tmp/test/data
  • examplePrep hdfs:///tmp/test/data/ hdfs:///tmp/test/json/ _rebuild_20141030012508 5 2
  • esIndexRebuildExample hdfs:///tmp/test/json/ /media/ephemeral0/tmp/bulkload/ hdfs:///tmp/snapshotrepo/ my_backup /media/ephemeral0/tmp/esrawdata/ 1 5 100 hdfs:///tmp/manifest/

Running Configs (for Eclipse/IDE)

You can experiment via these run configs, run in series:

  • com.inin.analytics.elasticsearch.driver.Driver

Let's build some dummy data

  • generateExampleData 1000 file:///tmp/data/part2

Prepare some data for the indexing job

  • examplePrep /tmp/data/part2 /tmp/datajson/ _rebuild_20141030012508 5 2

Build Elasticsearch indexes, snapshot them, and transport them to a snapshot repository on HDFS (S3 paths also allowed)

  • esIndexRebuildExample /tmp/datajson/ /tmp/bulkload110/ hdfs:///tmp/snapshotrepo110/ my_backup /tmp/esrawdata1010/ 1 5 2 /tmp/manifest110/

Can I use HDFS or NFS for Elasticsearch data?

Elasticsearch does not currently support backing its data with HDFS, so this project makes use of local disks on the task trackers. Given that Solr Cloud already supports HDFS-backed data, it's conceivable that one day Elasticsearch might.

When considering NFS, you must first consider how different Hadoop distributions have implemented it. The Apache version of Hadoop implements NFS with large local disk buffers, so it may or may not save you any trouble. The MapR NFS implementation is more native and performant. In our tests, running Elasticsearch on YARN with the data directories on NFS mounts backed by MapR-FS ran roughly half as fast. While impressive, it's up to you to weigh the cost of using MapR for its NFS capability to run Elasticsearch. Note that this requires substituting Elasticsearch's locking mechanism for a non-filesystem based implementation.

Versions

Version
1.2.1
1.2
1.1
1.0.27
1.0.26
1.0.25
1.0.24
1.0.23
1.0.22
1.0.21
1.0.20
1.0.19
1.0.18
1.0.17
1.0.16
1.0.15
1.0.14
1.0.13
1.0.12
1.0.11
1.0.10
1.0.9
1.0.8
1.0.7
1.0.6
1.0.5
1.0.4
1.0.3
1.0.2
1.0.1