elx-node

Admin/Bulk/Search API extensions for Elasticsearch clients (node, transport, http)

License: Apache 2.0
GroupId: org.xbib
ArtifactId: elx-node
Last Version: 7.10.2.0
Type: pom.sha512
Description: Admin/Bulk/Search API extensions for Elasticsearch clients (node, transport, http)
Project URL: https://github.com/jprante/elx
Project Organization: xbib
Source Code Management: https://github.com/jprante/elx

Dependencies

compile (4)

Group / Artifact Type Version
org.xbib : elx-common jar 7.10.2.0
org.elasticsearch.plugin : transport-netty4-client jar 7.10.2
io.netty : netty-codec-http jar 4.1.58.Final
io.netty : netty-transport jar 4.1.58.Final

Project Modules

There are no modules declared in this project.

Elasticsearch Clients


This Java library extends the Elasticsearch Java Client classes for better convenience.

It is not a plugin for Elasticsearch. Use it by importing the jar from Maven Central into your project.

The Elasticsearch node client and transport client APIs are unified in a ClientMethods interface. This interface uses bulk services and index management under the hood, such as index creation, alias management, and retention policies.

Two classes BulkNodeClient and BulkTransportClient combine the client methods with the BulkProcessor, provide some logging convenience, and still offer the Client interface of Elasticsearch by using the client() method.

A MockTransportClient implements the BulkTransportClient API but does not need a running Elasticsearch node to connect to. This is useful for unit testing.

The client classes are enriched by metrics that can measure document count, size, and speed.

A ClientBuilder helps to build client instances. For example

    ClientBuilder clientBuilder = ClientBuilder.builder()
            .put(elasticsearchSettings)
            .put("client.transport.ping_timeout", settings.get("timeout", "30s"))
            .put(ClientBuilder.MAX_ACTIONS_PER_REQUEST, settings.getAsInt("maxbulkactions", 1000))
            .put(ClientBuilder.MAX_CONCURRENT_REQUESTS, settings.getAsInt("maxconcurrentbulkrequests",
                    Runtime.getRuntime().availableProcessors()))
            .setMetric(new SimpleBulkMetric())
            .setControl(new SimpleBulkControl());
    BulkTransportClient client = clientBuilder.toBulkTransportClient();

For more examples, consult the integration tests at src/integration-test/java.

A re-implemented BulkProcessor allows flushing of documents before closing.
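The behavior can be sketched with a minimal, self-contained accumulator. This is illustrative only and is not the library's BulkProcessor; the class name MiniBulk and its methods are invented for the sketch:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Illustrative sketch: a bulk accumulator that flushes when a size
// threshold is reached and, crucially, also flushes on close(), so no
// buffered documents are lost when the client shuts down.
final class MiniBulk implements AutoCloseable {
    private final int maxActions;
    private final Consumer<List<String>> flusher;
    private final List<String> actions = new ArrayList<>();
    private int flushes;

    MiniBulk(int maxActions, Consumer<List<String>> flusher) {
        this.maxActions = maxActions;
        this.flusher = flusher;
    }

    void add(String action) {
        actions.add(action);
        if (actions.size() >= maxActions) {
            flush();
        }
    }

    void flush() {
        if (!actions.isEmpty()) {
            flusher.accept(new ArrayList<>(actions));
            actions.clear();
            flushes++;
        }
    }

    @Override
    public void close() {
        // flush remaining documents before closing
        flush();
    }

    int flushCount() {
        return flushes;
    }
}
```

With a threshold of 2, adding three documents and closing produces two flushes: one for the full batch, one for the remainder at close time.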

Also, a lightweight re-implementation of the TransportClient class is provided, with the following differences from the original TransportClient:

  • no retry mechanism and no exponential back-off: if an error or exception is encountered, the client fails fast

  • no sniffing, meaning no additional nodes are detected at runtime

  • the methods of the TransportClient, TransportClientNodesService, and TransportClientProxy classes are merged into one class

  • configurable ping timeout

Some interesting methods

Here is a selection of methods from the ClientMethods API. It is not exhaustive, but it demonstrates the convenience the interface offers.

Create a new index, using settings and mappings from input streams.

ClientMethods newIndex(String index, String type, InputStream settings, InputStream mappings) throws IOException

Switch an index to bulk mode - disable replicas, set refresh interval.

ClientMethods startBulk(String index, long startRefreshIntervalSeconds, long stopRefreshIntervalSeconds) throws IOException
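Conceptually, bulk mode trades durability and search freshness for ingest speed by adjusting two index settings, which are restored afterwards. The sketch below only builds the settings maps involved; the class BulkModeSettings and its methods are invented for illustration and are not part of the library:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the index settings involved in bulk mode: replicas are
// disabled and the refresh interval is relaxed on entry; both are
// restored on exit. A negative interval maps to "-1" (refresh disabled).
final class BulkModeSettings {

    static Map<String, String> enterBulkMode(long startRefreshIntervalSeconds) {
        Map<String, String> settings = new LinkedHashMap<>();
        settings.put("index.number_of_replicas", "0");
        settings.put("index.refresh_interval", interval(startRefreshIntervalSeconds));
        return settings;
    }

    static Map<String, String> leaveBulkMode(long stopRefreshIntervalSeconds, int replicaLevel) {
        Map<String, String> settings = new LinkedHashMap<>();
        settings.put("index.number_of_replicas", Integer.toString(replicaLevel));
        settings.put("index.refresh_interval", interval(stopRefreshIntervalSeconds));
        return settings;
    }

    private static String interval(long seconds) {
        return seconds < 0 ? "-1" : seconds + "s";
    }
}
```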

Index a document, using bulk mode automatically.

ClientMethods index(String index, String type, String id, String source);

Wait for outstanding bulk responses from the cluster.

ClientMethods waitForResponses(TimeValue maxWait) throws InterruptedException, ExecutionException;

Update replica level on an index.

int updateReplicaLevel(String index, int level) throws IOException;

Switch aliases from a previously created, timestamped index to the current index under the common base name index.

void switchAliases(String index, String concreteIndex, List<String> extraAliases, IndexAliasAdder adder);
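The alias switch amounts to computing a set of remove/add actions: each alias (the base name plus any extras) is detached from the old concrete index and attached to the new one. The sketch below is illustrative only; AliasSwitch is an invented name and the real implementation issues these actions through the Elasticsearch aliases API:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: plan the alias actions for switching the base name and any
// extra aliases from an old timestamped index to a new one.
final class AliasSwitch {
    static List<String> plan(String index, String oldConcreteIndex,
                             String newConcreteIndex, List<String> extraAliases) {
        List<String> aliases = new ArrayList<>();
        aliases.add(index);
        aliases.addAll(extraAliases);
        List<String> actions = new ArrayList<>();
        for (String alias : aliases) {
            actions.add("remove alias " + alias + " from " + oldConcreteIndex);
            actions.add("add alias " + alias + " to " + newConcreteIndex);
        }
        return actions;
    }
}
```

For one extra alias this yields four actions: two removals from the old index and two additions to the new one.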

Retention policy for an index. All indices older than timestampdiff should be deleted, but at least mintokeep indices must be kept.

void performRetentionPolicy(String index, String concreteIndex, int timestampdiff, int mintokeep);
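The selection logic can be sketched in plain Java. Assumptions for the sketch: indices are named base name plus a numeric timestamp suffix, timestamps are compared by simple integer difference (a simplification of real date arithmetic), and the class and method names are invented, not the library's:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Illustrative retention selection: walk the indices newest-first,
// keep anything recent enough or needed to satisfy minToKeep,
// and mark the rest for deletion.
final class Retention {
    static List<String> toDelete(List<String> indices, String base,
                                 int currentTimestamp, int timestampDiff, int minToKeep) {
        List<String> sorted = new ArrayList<>(indices);
        Collections.sort(sorted, Collections.reverseOrder()); // newest first
        List<String> deletions = new ArrayList<>();
        int kept = 0;
        for (String index : sorted) {
            int timestamp = Integer.parseInt(index.substring(base.length()));
            if (kept < minToKeep || currentTimestamp - timestamp <= timestampDiff) {
                kept++; // retained: recent enough, or needed for minToKeep
            } else {
                deletions.add(index);
            }
        }
        return deletions;
    }
}
```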

Prerequisites

You will need Java 8. Although Elasticsearch 2.x only requires Java 7, Java 7 is not supported by this project.

Dependencies

This project depends only on https://github.com/xbib/metrics (a slim version of Coda Hale’s metrics library), Elasticsearch, and the Log4j2 API.

How to decode the Elasticsearch version

This project uses semantic versioning to determine the Elasticsearch upstream version it is built against.

The first three version numbers are the corresponding Elasticsearch version. The fourth number is an incrementing release number of this project.
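The decoding can be shown with a few lines of plain Java; the class name VersionDecoder is invented for the example:

```java
// Sketch: split a project version such as "7.10.2.0" into the
// Elasticsearch version it is built against and the project increment.
final class VersionDecoder {
    static String elasticsearchVersion(String projectVersion) {
        int lastDot = projectVersion.lastIndexOf('.');
        return projectVersion.substring(0, lastDot); // e.g. "7.10.2"
    }

    static int projectIncrement(String projectVersion) {
        int lastDot = projectVersion.lastIndexOf('.');
        return Integer.parseInt(projectVersion.substring(lastDot + 1)); // e.g. 0
    }
}
```

So 7.10.2.0 is the first release of this project built against Elasticsearch 7.10.2, and 6.3.2.8 is the ninth (increment 8) release built against Elasticsearch 6.3.2.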

Please use exactly the Elasticsearch version that is declared in the project’s version. Other Elasticsearch versions do not work and will never work; it is not worth trying. This is by design of the Elasticsearch project, because the internal node communication protocol depends on the exact same API implementation. Also, the exact same version of the Java virtual machine is recommended on the server and client side.

Versions

Version
7.10.2.0
7.6.1.1
6.3.2.8
6.3.2.7
6.3.2.6
6.3.2.1
2.2.1.24
2.2.1.21
2.2.1.19
2.2.1.17
2.2.1.13
2.2.1.10
2.2.1.8
2.2.1.7