dalloc

A distributed resource allocator

License

GroupId

io.paradoxical

ArtifactId

dalloc

Last Version

1.0

Release Date

Type

jar

Description

dalloc
A distributed resource allocator

Project URL

http://maven.apache.org

Source Code Management

http://github.com/paradoxical-io/dalloc

Download dalloc

How to add to project

Maven

<!-- https://jarcasting.com/artifacts/io.paradoxical/dalloc/ -->
<dependency>
    <groupId>io.paradoxical</groupId>
    <artifactId>dalloc</artifactId>
    <version>1.0</version>
</dependency>

Gradle Groovy DSL

// https://jarcasting.com/artifacts/io.paradoxical/dalloc/
implementation 'io.paradoxical:dalloc:1.0'

Gradle Kotlin DSL

// https://jarcasting.com/artifacts/io.paradoxical/dalloc/
implementation("io.paradoxical:dalloc:1.0")

Buildr

'io.paradoxical:dalloc:jar:1.0'

Ivy

<dependency org="io.paradoxical" name="dalloc" rev="1.0">
  <artifact name="dalloc" type="jar" />
</dependency>

Grape

@Grapes(
  @Grab(group='io.paradoxical', module='dalloc', version='1.0')
)

SBT

libraryDependencies += "io.paradoxical" % "dalloc" % "1.0"

Leiningen

[io.paradoxical/dalloc "1.0"]

Dependencies

compile (8)

Group / Artifact Type Version
org.apache.commons : commons-collections4 jar 4.0
com.google.inject.extensions : guice-assistedinject jar 4.0
io.paradoxical : common jar 1.1
com.hazelcast : hazelcast jar 3.5.4
com.godaddy : logging jar 1.0
org.slf4j : slf4j-api jar 1.7.10
com.google.guava : guava jar 18.0
com.esotericsoftware : reflectasm jar 1.10.0

provided (1)

Group / Artifact Type Version
org.projectlombok : lombok jar 1.16.2

test (6)

Group / Artifact Type Version
uk.co.jemos.podam : podam jar 4.7.2.RELEASE
org.assertj : assertj-core jar 3.0.0
org.slf4j : slf4j-log4j12 jar 1.7.10
ch.qos.logback : logback-classic jar 1.0.13
org.mockito : mockito-all jar 1.10.19
junit : junit jar 4.12

Project Modules

There are no modules declared in this project.

paradoxical.dalloc

A distributed resource allocator. It supports

  • Hazelcast distribution
  • Manual distribution

What does it solve?

The problem we are solving: many distributed machines each need to act on resources from a shared set, but no resource should be handled by more than one machine. For example, you have 10,000 items that each need monitoring, and you don't want more than one machine monitoring any element in the set. You need to distribute this work across a cluster of boxes such that each box monitors 10,000/clusterSize elements.

Installation

To install

<dependency>
    <groupId>io.paradoxical</groupId>
    <artifactId>dalloc</artifactId>
    <version>1.0</version>
</dependency>

Hazelcast Distribution

Distribution in Hazelcast consists of mapping your input set of resources to ResourceIdentifiers and grouping them as part of a ResourceGroup. Given the set of members in the Hazelcast instance, the resources are distributed evenly to each member. As members join the cluster they may rebalance resources (i.e. give up some resources they held so that new members can claim them). As members leave the cluster, the remaining members rebalance the required load.
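As a rough sketch of the even split (this is not dalloc's API; the class and method names here are invented for illustration), each member ends up holding roughly total/clusterSize resources:

import java.util.Collection;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class DistributionSketch {
    // Illustration only: spread a resource set evenly across cluster members,
    // one bucket per member, so each holds roughly total/clusterSize resources.
    static Map<String, Set<String>> distribute(Collection<String> resources,
                                               List<String> memberIds) {
        Map<String, Set<String>> buckets = new HashMap<>();
        for (String member : memberIds) {
            buckets.put(member, new HashSet<>());
        }
        int i = 0;
        for (String resource : resources) {
            // Round-robin assignment keeps the split even as resources are added.
            buckets.get(memberIds.get(i++ % memberIds.size())).add(resource);
        }
        return buckets;
    }
}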

Claiming resources

A claim event can occur through any of the following:

  • A member joining
  • A member leaving
  • Manual invocation

The claim function is dispatched on member join/leave events and executes on the same thread. If you need to do long-running work during a claim event, it is best to capture the Set of claimed resources and dispatch your work on a thread pool.
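For instance, a claim handler might snapshot the set and hand it to an executor. The callback shape below is hypothetical (dalloc's actual claim signature may differ); the pattern itself is plain Java concurrency:

import com.google.common.collect.ImmutableSet;

import java.util.Set;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ClaimHandler {
    private final ExecutorService workers = Executors.newFixedThreadPool(4);

    // Hypothetical callback shape; it runs on the member join/leave thread,
    // so it must return quickly.
    public void onClaim(Set<String> claimedResources) {
        // Snapshot the claimed set before leaving the callback thread.
        Set<String> snapshot = ImmutableSet.copyOf(claimedResources);
        workers.submit(() -> monitor(snapshot));
    }

    private void monitor(Set<String> resources) {
        // Long-running monitoring work happens off the membership thread.
    }
}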

Split brain

No split-brain merge is exposed here.

How it works

There is a single Map<ClusterMember, Set<ResourceIdentifier>> stored on a distributed node (to minimize cluster IO) that each member tries to lock and acquire. Whoever acquires the lock compares the total resource set against what is already allocated (owned by some member) and divvies up the remainder that still needs allocation:

Needing acquisition = Total available ∖ Already allocated

When a member leaves, all members will try to acquire the lock, and whichever wins prunes that member's resources.
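A simplified sketch of that computation, using Guava's Sets.difference (Guava is already a compile dependency); plain strings stand in for dalloc's ClusterMember and ResourceIdentifier types:

import com.google.common.collect.Sets;

import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class RebalanceSketch {
    // Plain strings stand in for dalloc's ClusterMember and ResourceIdentifier.
    static Set<String> needingAcquisition(Set<String> totalAvailable,
                                          Map<String, Set<String>> allocations) {
        Set<String> alreadyAllocated = new HashSet<>();
        allocations.values().forEach(alreadyAllocated::addAll);
        // Resources still needing acquisition: total minus already allocated.
        return Sets.difference(totalAvailable, alreadyAllocated).immutableCopy();
    }

    // Whoever wins the lock after a member departs prunes that member's entry,
    // returning its resources to the unallocated pool.
    static void pruneDeparted(String departedMember,
                              Map<String, Set<String>> allocations) {
        allocations.remove(departedMember);
    }
}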

Manual distribution

Manual distribution relies on the user to define the total expected number of workers and to manually label each worker. Each worker will then grab

Total Set / Workers

resources. There is no enforced overlap protection, but since resource identifiers are ordered, each worker deterministically computes the same slice boundaries.
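A minimal sketch of the deterministic slicing, assuming each worker is labeled with an index from 0 to totalWorkers - 1 (the class and method names are hypothetical):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ManualSliceSketch {
    // Each worker computes its own slice; because the identifiers are sorted,
    // every worker derives the same slice boundaries without coordination.
    static List<String> sliceFor(List<String> allResources,
                                 int workerIndex, int totalWorkers) {
        List<String> sorted = new ArrayList<>(allResources);
        Collections.sort(sorted);
        int size = (int) Math.ceil((double) sorted.size() / totalWorkers);
        int from = Math.min(workerIndex * size, sorted.size());
        int to = Math.min(from + size, sorted.size());
        return sorted.subList(from, to);
    }
}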

io.paradoxical

Paradoxical Devs

Libraries and dockerized applications. Pull requests welcome!

Versions

1.0