de.tum.ei.lkn.eces:dnm

Library implementing deterministic network models (DNM) for the routing library of the ECES framework

License

GroupId

de.tum.ei.lkn.eces

ArtifactId

dnm

Last Version

2.0.2

Release Date

Type

jar

Description

Library implementing deterministic network models (DNM) for the routing library of the ECES framework

Project URL

https://github.com/AmoVanB/eces-dnm

Source Code Management

http://github.com/amovanb/eces-dnm/tree/master

Download dnm

How to add to project

Maven:

<!-- https://jarcasting.com/artifacts/de.tum.ei.lkn.eces/dnm/ -->
<dependency>
    <groupId>de.tum.ei.lkn.eces</groupId>
    <artifactId>dnm</artifactId>
    <version>2.0.2</version>
</dependency>

Gradle (Groovy DSL):

// https://jarcasting.com/artifacts/de.tum.ei.lkn.eces/dnm/
implementation 'de.tum.ei.lkn.eces:dnm:2.0.2'

Gradle (Kotlin DSL):

// https://jarcasting.com/artifacts/de.tum.ei.lkn.eces/dnm/
implementation("de.tum.ei.lkn.eces:dnm:2.0.2")

Buildr:

'de.tum.ei.lkn.eces:dnm:jar:2.0.2'

Ivy:

<dependency org="de.tum.ei.lkn.eces" name="dnm" rev="2.0.2">
  <artifact name="dnm" type="jar" />
</dependency>

Grape:

@Grapes(
    @Grab(group='de.tum.ei.lkn.eces', module='dnm', version='2.0.2')
)

SBT:

libraryDependencies += "de.tum.ei.lkn.eces" % "dnm" % "2.0.2"

Leiningen:

[de.tum.ei.lkn.eces/dnm "2.0.2"]

Dependencies

compile (11)

Group / Artifact Type Version
org.json : json jar 20150729
de.tum.ei.lkn.eces : routing jar 2.0.4
de.tum.ei.lkn.eces : network jar 2.0.1
de.tum.ei.lkn.eces : discodnc jar 2.4.3-lkn
de.tum.ei.lkn.eces : core jar 2.0.3
de.tum.ei.lkn.eces : graph jar 2.0.2
de.erichseifert.gral : gral-core jar 0.11
org.javatuples : javatuples jar 1.2
de.tum.ei.lkn.eces : master-pom-commons jar 1.0.21
org.aeonbits.owner : owner jar 1.0.10
log4j : log4j jar 1.2.17

test (3)

Group / Artifact Type Version
de.tum.ei.lkn.eces : topologies jar 2.0.5
de.tum.ei.lkn.eces : master-pom-commons test-jar 1.0.21
junit : junit jar 4.13.1

Project Modules

There are no modules declared in this project.

DNM

The deterministic network modeling (DNM) module implements different network models for access control and delay guarantees to be used by the routing module of the ECES framework. This allows the routing module to be used to find paths with strict delay guarantees in communication networks.

The logic of the models and the implementation rely on deterministic network calculus concepts. See our technical report about the topic and the main reference defining and describing network calculus concepts.

This repository corresponds to the reference implementation for the Chameleon and DetServ models described in:

and also implements the state-of-the-art QJump and Silo models described in:

This mostly comes in the form of different Proxy subclasses (see the routing module for a description of what a proxy is) which implement different access control strategies.

The proxies require the existence of a NCRequestData instance attached to the same entity as the Request object.

Usage

The project can be downloaded from Maven Central using:

<dependency>
  <groupId>de.tum.ei.lkn.eces</groupId>
  <artifactId>dnm</artifactId>
  <version>X.Y.Z</version>
</dependency>

Implemented Models

We currently have two models, i.e., two proxies, implemented.

QJump Proxy

The QJumpProxy implements the access control of the QJump system.

The configuration of QJump (number of hosts, link rate, packet size and cumulative processing time) is assumed to be stored in a QJumpConfig object attached to the same entity as the Graph object on which routing is to be performed.
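For intuition, these four parameters are exactly what QJump's end-to-end delay bound is computed from. The following standalone sketch assumes, for illustration only, a bound of the form 2·n·P/R + ε (n hosts, packet size P, link rate R, cumulative processing time ε); it is not the library's QJumpConfig API, and the exact expression should be taken from the QJump paper.

```java
public class QJumpSketch {
    // ASSUMED form of QJump's delay bound, for intuition only:
    // epoch = 2 * n * P / R + epsilon, with n hosts, packet size P (bits),
    // link rate R (bits/s) and cumulative processing time epsilon (s).
    // See the QJump paper for the exact expression.
    public static double networkEpoch(int hosts, double packetBits,
                                      double rateBps, double processingSec) {
        return 2.0 * hosts * (packetBits / rateBps) + processingSec;
    }

    public static void main(String[] args) {
        // 144 hosts, 1530-byte packets, 10 Gbps links, 85 us processing
        System.out.println(networkEpoch(144, 1530 * 8, 10e9, 85e-6));
    }
}
```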

DetServ Proxy

The DetServProxy implements the access control of the Chameleon and DetServ models.

It is based on a configuration assumed to be stored as an instance of the DetServConfig object attached to the same entity as the Graph object on which routing is to be performed.

The configuration object consists of the following elements:

  • Access control model: this is either the multi-hop model (MHM) or the threshold-based model (TBM) - see the DetServ paper. Chameleon corresponds to the TBM. In a nutshell, the multi-hop model assigns a maximum burst and rate to each queue while the threshold-based model assigns a maximum delay to each queue.
  • Cost model: cost function for a given queue. This can be defined using the classes deriving from CostModel. For example,
CostModel costFunction = new LowerLimit(new UpperLimit(new Division(new Constant(), new Summation(new Constant(), new QueuePriority())), 1), 0);

defines a cost function 1/(1+p), bounded between 0 and 1, where p is the priority of the queue.
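For illustration, the value computed by this composition can be written in plain Java as follows (a standalone sketch, not using the CostModel classes):

```java
public class CostSketch {
    // Plain-Java rendering of the composed cost function above:
    // cost(p) = min(1, max(0, 1 / (1 + p))), where p is the queue priority.
    public static double cost(int priority) {
        // Division(Constant, Summation(Constant, QueuePriority))
        double raw = 1.0 / (1.0 + priority);
        // LowerLimit(UpperLimit(..., 1), 0)
        return Math.max(0.0, Math.min(1.0, raw));
    }

    public static void main(String[] args) {
        for (int p = 0; p <= 3; p++) {
            System.out.println("priority " + p + " -> cost " + cost(p));
        }
    }
}
```

Lower-priority queues (higher p) thus get a lower cost, steering flows with loose deadlines away from the high-priority queues.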

  • Burst increase model: along its path, a flow sees its burst increase. There are different ways of taking this into account: neglecting it, taking the worst-case delay (the request deadline) as the worst-case burst increase, or taking the real burst increase (but routing then becomes sub-optimal, see our ICC paper about that). See the DetServ paper for more information on this.
  • Input link shaping (ILS): whether or not ILS is used. See the DetServ paper for more information on this. It is purely a modeling change that makes the admission control less conservative. It however increases runtime and makes the routing problem an M1 problem (see our ICC paper about that).
  • Residual mode: under the assumption of sub-additive arrival curves and super-additive service curves, there are different ways of computing the residual rate-latency service curve from an arrival curve and a service curve. Since both curves can have several knee points, the residual service curve can also have multiple knee points; because the arrival curve (resp. service curve) is assumed to be sub-additive (resp. super-additive), the residual service curve is super-additive. There are hence different ways of transforming this super-additive service curve into a rate-latency curve, depending on which slope is used for the rate-latency one: the highest slope, the lowest slope (which then also yields the least latency), or the real curve (which is then not a rate-latency curve). Note that, for networks with uniform link rates, this choice has no influence.
  • Maximum packet size: max packet size in the network (this defaults to 1530 bytes).
  • Resource allocation: the MHM and TBM need resources (either rate/burst or delay) to be allocated to each queue in the network. A SelectResourceAllocation object defines a given resource allocation algorithm (subclass of ResourceAllocation) per scheduler, i.e., per physical unidirectional link.
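To make the residual-mode discussion concrete, here is a standalone sketch of the residual computation in the simplest possible case: a rate-latency service curve and a single token-bucket arrival curve, each with a single knee point (the library works with general multi-segment curves, so this is for intuition only):

```java
public class ResidualSketch {
    /**
     * Residual rate-latency service curve in the single-segment case:
     * service  beta(t)  = R * max(0, t - T)
     * arrival  alpha(t) = b + r * t
     * The left-over curve [beta - alpha]^+ is
     *   (R - r) * max(0, t - (R*T + b) / (R - r)),
     * i.e., a rate-latency curve with rate R - r and latency (R*T + b)/(R - r).
     * Returns {rate, latency}.
     */
    public static double[] residualRateLatency(double R, double T,
                                               double b, double r) {
        if (r >= R) {
            throw new IllegalArgumentException("arrival rate must be below service rate");
        }
        double rate = R - r;
        double latency = (R * T + b) / (R - r);
        return new double[] {rate, latency};
    }

    public static void main(String[] args) {
        // 10 Mbps service with 1 ms latency; flow of 2 Mbps with 10 kbit burst
        double[] res = residualRateLatency(10e6, 1e-3, 10e3, 2e6);
        System.out.println("residual rate=" + res[0] + " bps, latency=" + res[1] + " s");
    }
}
```

With multi-segment curves, the residual curve has several slopes, and the residual mode selects which of them parameterizes the rate-latency approximation.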

The proxy also implements the Silo model. Indeed, Silo is a particular instance of the TBM model with real burst increase computation, no input link shaping, a shortest path cost function and using the TBMSiloDefaultAllocation default resource allocation for each scheduler.

Components used by the DetServ Proxy

For its implementation, the DetServ proxy attaches, to each queue, a service model (QueueModel) and an input model (ResourceUtilization). The former models the service offered by a queue (simply, its service curve) and the latter the traffic entering the queue (simply, its arrival curve).

For the MHM, the QueueModel is extended with the maximum token bucket that can be accepted at this queue (MHMQueueModel).
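As an illustration of what the maximum token bucket enables, an MHM-style admission check can be sketched as follows (names and signature are hypothetical, not the library API):

```java
public class MhmAdmissionSketch {
    // Illustrative MHM-style check: each queue is assigned a maximum token
    // bucket (maxBurst, maxRate); a new flow (flowBurst, flowRate) is admitted
    // only if the aggregate of already-admitted flows plus the new flow stays
    // below that maximum.
    public static boolean admit(double usedBurst, double usedRate,
                                double maxBurst, double maxRate,
                                double flowBurst, double flowRate) {
        return usedBurst + flowBurst <= maxBurst
            && usedRate + flowRate <= maxRate;
    }

    public static void main(String[] args) {
        // Queue allows 10 kbit burst / 5 Mbps; 5 kbit / 4 Mbps already in use.
        System.out.println(admit(5e3, 4e6, 10e3, 5e6, 4e3, 0.5e6)); // fits
        System.out.println(admit(5e3, 4e6, 10e3, 5e6, 8e3, 0.5e6)); // burst too large
    }
}
```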

Both MHM and TBM use a simple TokenBucketUtilization component to model the arrival curve at a given queue. When ILS is enabled, both models then use a PerInEdgeTokenBucketUtilization component, which simply keeps track of the token-bucket arrival curves per incoming edge (if a flow starts at the given edge, this edge itself is used as the "incoming edge" label).
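The bookkeeping performed by these components can be sketched in plain Java as follows (illustrative only; a map stands in for the actual TokenBucketUtilization and PerInEdgeTokenBucketUtilization components):

```java
import java.util.HashMap;
import java.util.Map;

public class UtilizationSketch {
    // A token-bucket arrival curve alpha(t) = burst + rate * t.
    public static class TokenBucket {
        public double burst; // bits
        public double rate;  // bits per second
        public TokenBucket(double burst, double rate) {
            this.burst = burst;
            this.rate = rate;
        }
    }

    // One token bucket per incoming edge, as kept when ILS is enabled.
    private final Map<String, TokenBucket> perInEdge = new HashMap<>();

    // Register a new flow entering through inEdge: token buckets add up.
    public void addFlow(String inEdge, double burst, double rate) {
        perInEdge.merge(inEdge, new TokenBucket(burst, rate),
                (a, b) -> new TokenBucket(a.burst + b.burst, a.rate + b.rate));
    }

    // Aggregate arrival curve at the queue (the single bucket that
    // TokenBucketUtilization tracks directly when ILS is disabled).
    public TokenBucket aggregate() {
        TokenBucket total = new TokenBucket(0, 0);
        for (TokenBucket tb : perInEdge.values()) {
            total.burst += tb.burst;
            total.rate += tb.rate;
        }
        return total;
    }

    public static void main(String[] args) {
        UtilizationSketch queue = new UtilizationSketch();
        queue.addFlow("edge1", 1000, 1e6);
        queue.addFlow("edge2", 500, 2e6);
        queue.addFlow("edge1", 1000, 1e6); // second flow on edge1: buckets sum
        TokenBucket agg = queue.aggregate();
        System.out.println("burst=" + agg.burst + " rate=" + agg.rate);
    }
}
```

Keeping the buckets per incoming edge (rather than only their sum) is what allows the ILS variant to shape the aggregate at each input and be less conservative.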

DNM System

The DNM system is used by some specific model configurations to automate actions and update state information. For example, it automatically allocates resources when a new scheduler is created and automatically updates the service curves when a new flow is added.

Examples

The Silo model can be configured in the following way:

DetServConfig modelConfig = new DetServConfig(
                ACModel.ThresholdBasedModel,
                ResidualMode.LEAST_LATENCY,
                BurstIncreaseModel.NO,
                false,
                new Constant(),
                (controller, scheduler) -> new TBMSiloDefaultAllocation(controller));

That config (or any other) must then be attached to the subject graph and initialized with the used controller:

modelingConfigMapper.attachComponent(myNetwork.getQueueGraph(), modelConfig);
modelConfig.initCostModel(controller);

The routing algorithms in use must then be configured with a single proxy instance:

DetServProxy proxy = new DetServProxy(controller);
algorithm1.setProxy(proxy);
algorithm2.setProxy(proxy);
...
algorithmN.setProxy(proxy);

and then a traditional routing request with the additional NCRequestData object will trigger a routing + admission control + registration run:

Entity entity = controller.createEntity();
try (MapperSpace mapperSpace = controller.startMapperSpace()) {
        requestMapper.attachComponent(entity, new UnicastRequest(h1.getQueueNode(), h3.getQueueNode()));
        ncRequestDataMapper.attachComponent(entity, new NCRequestData(
                CurvePwAffine.getFactory().createTokenBucket(flowRate, flowBurst),
                Num.getFactory().create(deadline)));
        selectedRoutingAlgorithmMapper.attachComponent(entity, new SelectedRoutingAlgorithm(aStarPruneAlgorithm));
}

See tests for other simple examples.

See other ECES repositories using this library (e.g., the tenant manager) for more detailed/advanced examples.

Versions

Version
2.0.2
2.0.1