Dropwizard Prometheus

Dropwizard metrics exporter in Prometheus format

Categories: DropWizard, Container, Microservices, Prometheus, Application Testing & Monitoring, Monitoring
GroupId: me.andrusha
ArtifactId: dropwizard-prometheus_2.11
Last Version: 0.2.0
Type: jar
Description: Dropwizard metrics exporter in Prometheus format
Project URL: https://github.com/andrusha/dropwizard-prometheus
Project Organization: me.andrusha
Source Code Management: https://github.com/andrusha/dropwizard-prometheus

How to add to project

Maven:

<!-- https://jarcasting.com/artifacts/me.andrusha/dropwizard-prometheus_2.11/ -->
<dependency>
    <groupId>me.andrusha</groupId>
    <artifactId>dropwizard-prometheus_2.11</artifactId>
    <version>0.2.0</version>
</dependency>

Gradle (Groovy DSL):

// https://jarcasting.com/artifacts/me.andrusha/dropwizard-prometheus_2.11/
implementation 'me.andrusha:dropwizard-prometheus_2.11:0.2.0'

Gradle (Kotlin DSL):

// https://jarcasting.com/artifacts/me.andrusha/dropwizard-prometheus_2.11/
implementation("me.andrusha:dropwizard-prometheus_2.11:0.2.0")

Buildr:

'me.andrusha:dropwizard-prometheus_2.11:jar:0.2.0'

Ivy:

<dependency org="me.andrusha" name="dropwizard-prometheus_2.11" rev="0.2.0">
  <artifact name="dropwizard-prometheus_2.11" type="jar" />
</dependency>

Groovy Grape:

@Grapes(
  @Grab(group='me.andrusha', module='dropwizard-prometheus_2.11', version='0.2.0')
)

sbt:

libraryDependencies += "me.andrusha" % "dropwizard-prometheus_2.11" % "0.2.0"

Leiningen:

[me.andrusha/dropwizard-prometheus_2.11 "0.2.0"]

Dependencies

compile (4)

Group / Artifact Type Version
org.scala-lang : scala-library jar 2.11.12
io.dropwizard.metrics : metrics-core jar 3.1.5
org.nanohttpd : nanohttpd jar 2.3.1
org.slf4j : slf4j-api jar 1.7.25

test (1)

Group / Artifact Type Version
org.scalatest : scalatest_2.11 jar 3.0.5

Project Modules

There are no modules declared in this project.

Dropwizard -> Prometheus

Dropwizard to Prometheus exporter.

  • Runs an HTTP server to serve metrics (scraped by Prometheus; see the example config below).
  • Converts Dropwizard metric names to Prometheus format.
  • Supports multiple metric registries.
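
On the Prometheus side the exporter is scraped like any other HTTP target. A minimal scrape config, assuming the server listens on port 9095 as in the usage section below:

scrape_configs:
  - job_name: 'dropwizard'
    static_configs:
      - targets: ['localhost:9095']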

Usage

Add the sbt dependency:

"me.andrusha" %% "dropwizard-prometheus" % "0.2.0"

Then:

  1. Register a metric registry: MetricsCollector.register(registry)
  2. Start the server: MetricsCollector.start("0.0.0.0", 9095)
  3. Gracefully stop the server: MetricsCollector.stop()
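
Put together, a minimal end-to-end sketch (the MetricsCollector import path and the example meter are illustrative assumptions; MetricRegistry comes from Dropwizard's metrics-core, which this library depends on):

import com.codahale.metrics.MetricRegistry
// import MetricsCollector from this library's package

object Example extends App {
  val registry = new MetricRegistry
  val requests = registry.meter("requests") // hypothetical example metric

  MetricsCollector.register(registry)     // 1. register the registry
  MetricsCollector.start("0.0.0.0", 9095) // 2. serve metrics over HTTP on :9095

  requests.mark() // metrics recorded from here on are exported

  sys.addShutdownHook(MetricsCollector.stop()) // 3. graceful shutdown
}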

Spark integration

At the moment custom Spark metric sinks are not supported; however, since Spark's Sink trait is private to its namespace, it's possible to work around this by defining the sink inside the org.apache.spark package:

package org.apache.spark.metrics.sink

import java.util.Properties

import com.codahale.metrics.MetricRegistry
import org.apache.spark.SecurityManager
// plus the MetricsCollector import from this library

private[spark] class PrometheusSink(
    val property: Properties,
    val registry: MetricRegistry,
    securityMgr: SecurityManager) extends Sink {

  override def start(): Unit = {
    MetricsCollector.register(registry)
    MetricsCollector.start("0.0.0.0", 9095)
  }

  override def stop(): Unit = {
    MetricsCollector.stop()
  }

  // Prometheus pulls metrics over HTTP, so there is nothing to push here.
  override def report(): Unit = ()
}
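
The sink is then enabled through Spark's regular metrics configuration (conf/metrics.properties); for example, to attach it to all instances:

*.sink.prometheus.class=org.apache.spark.metrics.sink.PrometheusSink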

Spark metric name converter

It's possible to transform metrics on the fly to better fit your naming scheme. In the case of Spark you would want to rewrite metric names to share a common prefix and extract the application and executor identifiers into dimensions, e.g.:

  override def start(): Unit = {
    MetricsCollector.register(registry, sparkMetricsTransformer)
  }

  def sparkMetricsTransformer(m: Metric): Metric = m match {
    case ValueMetric(name, tpe, desc, v, d) =>
      ValueMetric(sparkName(name), tpe, desc, v, d.merge(extractDimensions(name)))
    case SampledMetric(name, tpe, desc, samples, cnt, d) =>
      SampledMetric(sparkName(name), tpe, desc, samples, cnt, d.merge(extractDimensions(name)))
  }

  /**
    * E.g. spark_application_1523628279755:208:executor:shuffle_total_bytes_read
    * becomes spark:executor:shuffle_total_bytes_read
    */
  def sparkName(name: String): String = name.split(':').drop(2).+:("spark").mkString(":")

  /**
    * Two common naming patterns are:
    *   spark_application_1523628279755:driver:dag_scheduler:message_processing_time
    *   spark_application_1523628279755:208:executor:shuffle_total_bytes_read
    */
  def extractDimensions(name: String): Map[String, String] = name.split(':').toList match {
    case appId :: "driver" :: _ =>
      Map("app_id" -> appId, "app_type" -> "driver")
    case appId :: executorId :: _ =>
      Map("app_id" -> appId, "app_type" -> "executor", "executor_id" -> executorId)
    case _ => Map.empty
  }
} // closes the surrounding PrometheusSink class
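
A quick sanity check of the two helpers, with results derived from the naming patterns in the doc comments above:

val n = "spark_application_1523628279755:208:executor:shuffle_total_bytes_read"

sparkName(n)
// "spark:executor:shuffle_total_bytes_read"

extractDimensions(n)
// Map("app_id" -> "spark_application_1523628279755",
//     "app_type" -> "executor", "executor_id" -> "208")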

Versions

0.2.0
0.1.1