netty-vbuf

Variable length encoding ByteBuf implementation

License

Apache License, Version 2.0

Categories

Net, Netty, Networking

GroupId

de.knutwalker

ArtifactId

netty-vbuf

Last Version

0.1.4

Release Date

Type

jar

Description

netty-vbuf
Variable length encoding ByteBuf implementation

Project URL

https://github.com/knutwalker/netty-vbuf

Source Code Management

http://github.com/knutwalker/netty-vbuf

Download netty-vbuf

How to add to project

Maven

<!-- https://jarcasting.com/artifacts/de.knutwalker/netty-vbuf/ -->
<dependency>
    <groupId>de.knutwalker</groupId>
    <artifactId>netty-vbuf</artifactId>
    <version>0.1.4</version>
</dependency>

Gradle (Groovy DSL)

// https://jarcasting.com/artifacts/de.knutwalker/netty-vbuf/
implementation 'de.knutwalker:netty-vbuf:0.1.4'

Gradle (Kotlin DSL)

// https://jarcasting.com/artifacts/de.knutwalker/netty-vbuf/
implementation("de.knutwalker:netty-vbuf:0.1.4")

Buildr

'de.knutwalker:netty-vbuf:jar:0.1.4'

Ivy

<dependency org="de.knutwalker" name="netty-vbuf" rev="0.1.4">
  <artifact name="netty-vbuf" type="jar" />
</dependency>

Grape

@Grapes(
  @Grab(group='de.knutwalker', module='netty-vbuf', version='0.1.4')
)

sbt

libraryDependencies += "de.knutwalker" % "netty-vbuf" % "0.1.4"

Leiningen

[de.knutwalker/netty-vbuf "0.1.4"]

Dependencies

provided (1)

Group / Artifact Type Version
io.netty : netty-buffer jar 4.0.23.Final

test (3)

Group / Artifact Type Version
org.perf4j : perf4j jar 0.9.16
junit : junit jar 4.11
com.carrotsearch.randomizedtesting : randomizedtesting-runner jar 2.1.6

Project Modules

There are no modules declared in this project.

netty-vbuf

Variable length encoding ByteBuf implementation.


Overview

This is a ByteBuf implementation that uses variable length encoding for ints and longs to reduce memory consumption and, possibly, CPU time.

Get it

from Maven Central

<dependency>
    <groupId>de.knutwalker</groupId>
    <artifactId>netty-vbuf</artifactId>
    <version>0.1.3</version>
</dependency>

from release binary

Download netty-vbuf-0.1.3.jar from the latest release and put it on your classpath.

from source

Clone this repo, run mvn package -DskipTests, and add the resulting target/netty-vbuf-0.1.4-SNAPSHOT.jar to your classpath.

Use it

import io.netty.buffer.ByteBuf;
import io.netty.buffer.VByteBuf;
...
ByteBuf vBuf = VByteBuf.wrap(existingByteBuf);

The set, get, read, and write methods for both Int and Long are reimplemented; all other methods delegate to the wrapped buffer.
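
A minimal round trip could look like the following sketch. Only VByteBuf.wrap is taken from the snippet above; the Unpooled allocator and the accessor calls are standard Netty ByteBuf API, and the surrounding class exists only for the example.

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.buffer.VByteBuf;

public class VByteBufExample {
    public static void main(String[] args) {
        ByteBuf backing = Unpooled.buffer();     // regular Netty buffer
        ByteBuf vBuf = VByteBuf.wrap(backing);   // variable length view on top of it

        vBuf.writeInt(42);                       // small int, stored compactly
        vBuf.writeLong(1337L);                   // small longs are stored compactly as well

        System.out.println(vBuf.readInt());      // 42
        System.out.println(vBuf.readLong());     // 1337

        backing.release();                       // release the buffer we allocated
    }
}

How many bytes each value actually occupies follows the encoding described in the next section.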

Profit

Variable length encoding uses the highest bit of each byte to signal whether the next byte still belongs to the current number. This allows an int to be encoded in 1 to 5 bytes. If you mostly encode small numbers, variable length encoding can drastically reduce memory consumption.

These are some examples of the different encodings:

int        | regular encoding                    | variable length encoding                     | memory savings
42         | 00000000 00000000 00000000 00101010 | 00101010                                     | 75%
1337       | 00000000 00000000 00000101 00111001 | 10001010 00111001                            | 50%
134217728  | 00001000 00000000 00000000 00000000 | 11000000 10000000 10000000 00000000          | 0%
2147483647 | 01111111 11111111 11111111 11111111 | 10000111 11111111 11111111 11111111 01111111 | -25% (loss)
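
For illustration only, the scheme behind the table can be sketched in a few lines of plain Java. This is not the library's source; it just reproduces the encoding shown above, with the most significant 7-bit group written first and the high bit marking continuation.

public final class VarIntSketch {

    // Encode an int into 1 to 5 bytes, most significant 7-bit group first.
    // A set high bit means "another byte follows", as in the table above.
    static byte[] encode(int value) {
        long v = value & 0xFFFFFFFFL;          // treat the int as unsigned
        int length = 1;
        for (long rest = v >>> 7; rest != 0; rest >>>= 7) {
            length++;                          // count the 7-bit groups needed
        }
        byte[] out = new byte[length];
        for (int i = length - 1; i >= 0; i--) {
            out[i] = (byte) (v & 0x7F);        // fill groups from the least significant end
            v >>>= 7;
        }
        for (int i = 0; i < length - 1; i++) {
            out[i] |= (byte) 0x80;             // continuation bit on every byte but the last
        }
        return out;
    }

    // Reassemble the int from the 7-bit groups.
    static int decode(byte[] bytes) {
        int result = 0;
        for (byte b : bytes) {
            result = (result << 7) | (b & 0x7F);
        }
        return result;
    }

    public static void main(String[] args) {
        for (int n : new int[] {42, 1337, 134_217_728, Integer.MAX_VALUE}) {
            StringBuilder bits = new StringBuilder();
            for (byte b : encode(n)) {
                bits.append(String.format("%8s", Integer.toBinaryString(b & 0xFF))
                        .replace(' ', '0')).append(' ');
            }
            System.out.println(n + " -> " + bits + "(decodes to " + decode(encode(n)) + ")");
        }
    }
}

Running the main method prints the same bit patterns as the rows in the table.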

If you write many small numbers to a ByteBuf that is not oversized and has to resize every now and then, variable length encoding can reduce the number of costly memory copy operations and therefore the CPU time spent.

Otherwise, the encoding adds runtime overhead, mostly due to the repeated bounds checking for each byte, whereas the regular encoding checks just once for all bytes.

These are some possible savings when writing 100M (continuously increasing) longs:

     writing 7-bit long with resizing used 87.50% less memory and 95.14% less time
   writing growing long with resizing used 50.26% less memory and 69.69% less time
  writing 7-bit long without resizing used 87.50% less memory and 31.83% less time
writing growing long without resizing used 50.26% less memory and 242.82% more time

These are just nanoTime measurements, not real, rigorous benchmarks, but they give a hint in the right direction.

Versions

0.1.4
0.1.3