spec-driven

Main Spec-Driven library, to be used for implementing specifications.

GroupId: net.jackadull
ArtifactId: spec-driven_2.12
Last Version: 1.0.1
Categories: Net
Type: jar

How to add to project

Maven:

<!-- https://jarcasting.com/artifacts/net.jackadull/spec-driven_2.12/ -->
<dependency>
    <groupId>net.jackadull</groupId>
    <artifactId>spec-driven_2.12</artifactId>
    <version>1.0.1</version>
</dependency>

Gradle (Groovy DSL):

// https://jarcasting.com/artifacts/net.jackadull/spec-driven_2.12/
implementation 'net.jackadull:spec-driven_2.12:1.0.1'

Gradle (Kotlin DSL):

// https://jarcasting.com/artifacts/net.jackadull/spec-driven_2.12/
implementation("net.jackadull:spec-driven_2.12:1.0.1")

Buildr:

'net.jackadull:spec-driven_2.12:jar:1.0.1'

Ivy:

<dependency org="net.jackadull" name="spec-driven_2.12" rev="1.0.1">
  <artifact name="spec-driven_2.12" type="jar" />
</dependency>

Groovy Grape:

@Grapes(
@Grab(group='net.jackadull', module='spec-driven_2.12', version='1.0.1')
)

SBT:

libraryDependencies += "net.jackadull" % "spec-driven_2.12" % "1.0.1"

Leiningen:

[net.jackadull/spec-driven_2.12 "1.0.1"]

Dependencies

compile (1)

Group / Artifact Type Version
org.scala-lang : scala-library jar 2.12.6

test (1)

Group / Artifact Type Version
org.scalatest : scalatest_2.12 jar 3.0.5

Project Modules

There are no modules declared in this project.

Spec-Driven


Specification-driven development, purely in Scala.

Utility library for writing implementation-independent feature specifications in Scala, in "Given-When-Then" style. Also offers support for executing the specification as a test, against an implementation.

1. Dependency management and compatibility

Spec-Driven is compatible with Scala 2.13. Best effort is made to always keep it up to date with the latest Scala version.

Cross-versioning will not be supported. When a new Scala version is released, your code should be updated to it as soon as possible anyway.

1.1. Main library

For declaring specifications.

1.1.1. SBT

libraryDependencies += "net.jackadull" %% "spec-driven" % "1.1.1-SNAPSHOT"

1.1.2. Maven

<dependency>
  <groupId>net.jackadull</groupId>
  <artifactId>spec-driven_2.13</artifactId>
  <version>1.1.1-SNAPSHOT</version>
</dependency>

1.2. ScalaTest adapter

To be imported into test scope, in order to run a specification as a unit test based on ScalaTest.

2. Basic idea

One important observation of Test-Driven Development (TDD) is that a good test specification is very close to a formal requirement specification.

Spec-Driven suggests taking the logical consequence and writing your specification entirely independently of your implementation. These specifications are similar to BDD tests, with the following differences:

  • 100% Scala: Spec-Driven does not use any custom programming language for defining requirements, as some BDD frameworks do. Everything is written purely in Scala. This enables the developer to use the full set of Scala functionality to cover common programming problems, such as repetition, code duplication, abstraction etc.

    Indeed, the main Spec-Driven library is rather small and does not require a lot of code. The concept is simple enough. No special IDE plugins or other tools are needed, your preferred regular Scala development toolkit suffices.

  • Implementation independence: BDD test specifications have a dependency on the implementation code, because they need to invoke its methods for executing the tests.

    Spec-Driven specifications are not supposed to have a dependency on the implementation code, because they are not tests . There can be multiple implementations that fulfill the same specification, so a specification is not bound to any particular implementation.

  • Not directly executable: BDD specifications are executable as unit tests or integration tests. In contrast, Spec-Driven specifications cannot be executed as such; they represent a specification , not an algorithm.

    Of course, Spec-Driven specifications can be used to verify that a certain implementation fulfills them. The result of this would be a unit test suite. In order to achieve this, there are three requirements:

    • An adapter library from Spec-Driven to a certain testing framework. At the time of writing this, only one bridge exists, namely the one for the ScalaTest framework. But if required, it would be very easy to write one's own.

    • Spec-Driven model implementations for the concrete implementation. This tells Spec-Driven exactly what methods to call in the implementation in order to achieve a certain effect that is mentioned in the specification.

      This is far easier than it might sound here.

    • A way to interpret the outcomes of the specification's requirements in the shape of test assertions. Also a very simple thing.

  • Rather free-form: Like BDD, Spec-Driven advertises the "Given-When-Then" composition of requirements. But this is just a recommendation; if you see fit, you can easily choose any other form you like.

3. Advantages of Spec-Driven

  • Writing specifications independent from the implementation code makes them express your actual requirements, as opposed to following the technical details of a particular implementation.

  • Spec-Driven has no opinion about what makes a valid or invalid outcome of a requirement.

    Unit test frameworks ultimately need a boolean true or false, for example in the shape of assert statements. In contrast, the outcome of a Spec-Driven requirement is entirely open. There is a type parameter for it, and it can be any type.

    This is because not every implementation necessarily boils down to a strict Boolean value. Asynchronous implementations might have some sort of Future or Fiber object that better represents the result. Functional implementations might use some kind of IO monad. Others might leave it more open and have something like T[+_] , which can be customized depending on the implementation.

    Some implementations may transport more information than just true or false , such as the exact nature of an error. This may be an exception, or a simple string message. Spec-Driven leaves this entirely open.

    This is possible because, as mentioned before, Spec-Driven specifications are not directly executable as unit tests. They have no opinion on how to interpret the outcomes, they only explain how to arrive at a certain outcome. The act of interpreting the outcome is part of the bridge implementation, which results in actual unit tests.

  • When you are used to it, the process of writing the Spec-Driven specification becomes part of your normal analysis process. Instead of writing down your requirements specification in terms of natural-language issue descriptions in some tool, you just write them down in Scala, using Spec-Driven.

  • If you need to use natural language requirements descriptions though, you can take sentences from those and incorporate them directly in Spec-Driven requirements. Using this process frequently and reflecting on the results can improve the quality of your natural-language specifications and analysis process.

  • If you follow the TDD virtue to write the tests first and implement after, you might end up with compile errors in your test scope. This is because Scala is a strongly typed language, and you cannot refer to the methods of a type before it exists. This problem does not occur with Spec-Driven, as the specification is independent of the implementation code.

  • For the same reason, it is easier to separate the jobs of writing the specification and writing the implementation between two different developers.

  • When the implementation changes, but the core functionality remains the same, there is no need to change the specification.

    Should implementation interfaces change in this process, only the test models need to be changed. This is much easier and clearer than changing test cases with compile errors:

    When using a regular unit test framework and an implementation changes, some tests might no longer compile or succeed. The developer requires knowledge about the tests to fix this. For various reasons, this might not be feasible at the time. This creates an incentive for the developer to disable or delete some "misbehaving" tests, which is frequently seen in some codebases.

    When using Spec-Driven on the other hand, the only place where compile errors might occur after an implementation change is the test model implementations. As those are just models of the implementation and mostly just forward calls to an implementation-specific class, no special knowledge about test cases is required for maintaining them. Whoever understands the implementation code also understands the test models.

    Therefore, Spec-Driven does not create incentives to delete or deactivate such test cases. Indeed, an implementation developer does not even need to come in touch with the specification directly. There is no way to deactivate certain requirements. If it is an official requirement, it is required, no matter what.

  • The specification can be extended, changed and versioned independently from the implementation code.

  • When basing your unit tests on a Spec-Driven specification, test reports look really nice, clear and well-structured.

  • Separating specification and implementation makes it much easier to find unnecessary code lines, i.e. parts of the implementation that are not required by the specification.
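To make the "open outcome type" point above concrete, here is an illustrative sketch (not the actual bridge API, which may look different) of how a bridge might interpret an Either[String,Unit] outcome as a pass/fail test result:

```scala
// Illustrative only: mapping an Either[String,Unit] outcome to a test result.
// A bridge for another outcome type (Future[Unit], IO[Unit], ...) would only
// need a different version of this one interpretation step.
object OutcomeInterpretation {
  def interpret(outcome: Either[String, Unit]): Unit = outcome match {
    case Right(())   => () // requirement fulfilled: nothing to do
    case Left(error) => throw new AssertionError(s"requirement violated: $error")
  }
}
```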

4. Specification structure

Describes the structure of a specification in terms of Scala types. Use the codebase on GitHub for reference, in the src/main/scala subdirectory.

4.1. Specification

The root type of a specification is Specification[+O] . The type parameter O is the type of the expectations' outcomes, described below.

A specification only has two properties: Its string title, and the sequence of requirements.

4.2. Requirement

This is analogous to the BDD "Given-When-Then" steps. All of those steps for one case taken together form one requirement. The Scala type is Requirement[+O] .

Its properties:

  • Title: a string.

  • Given-Descriptions: a sequence of strings describing the elements of the "Given" part.

  • When-Descriptions: a sequence of strings describing the elements of the "When" parts.

  • Expectations: a sequence of expectations, which form the "Then" part.

  • perform() and cleanUp() : can be overridden for preparation and clean-up stages (such as creating and deleting a temporary directory, for example).

  • Some utility methods that give good natural-language descriptions of the various parts. Useful when creating text output related to the specification, directed towards human readers.

4.3. Expectation

One element of the requirement's "Then" part. The Scala type is Expectation[+O] . Contains:

  • Description: a string.

  • outcome() : the method that produces the outcome of the expectation, as an instance of O .
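The structure described in this chapter can be summarized in a compact sketch. The exact signatures are assumptions based on the text; see src/main/scala in the repository for the real definitions:

```scala
// Sketch of the three core types as described above; details are assumed.
trait Expectation[+O] {
  def description: String
  def outcome(): O // produces the outcome of the expectation
}
trait Requirement[+O] {
  def requirementTitle: String
  def givenDescriptions: Seq[String]    // the "Given" part
  def whenDescriptions: Seq[String]     // the "When" part
  def expectations: Seq[Expectation[O]] // the "Then" part
  def perform(): Unit = ()              // preparation hook
  def cleanUp(): Unit = ()              // clean-up hook
}
trait Specification[+O] {
  def specificationTitle: String
  def requirements: Seq[Requirement[O]]
}
```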

5. Guide

Quick hands-on guide for starting to use Spec-Driven.

A short note in the beginning: Be aware that these are all just recommendations. There is no need to stick to them if you like to do things differently. Spec-Driven is just another Scala library , there is no special magic or secret lore involved.

5.1. Creating the spec module

Your new specification should reside in its own module. Maybe you would like to use an SBT sub-project, or a Maven submodule, or whichever way you organize your code.

Just make sure that the specification module has no dependencies whatsoever on your implementation modules. It should depend only on the main Spec-Driven artifact. For its coordinates, see above, under the heading "Dependency management and compatibility" .
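With SBT, for instance, this module separation might be sketched as follows (project names are hypothetical; the spec-driven coordinates are those from section 1.1.1):

```scala
// build.sbt sketch: the spec sub-project depends only on spec-driven, and the
// implementation depends on the spec only in Test scope.
lazy val tomatoShopSpec = project.settings(
  libraryDependencies += "net.jackadull" %% "spec-driven" % "1.1.1-SNAPSHOT"
)
lazy val tomatoShopImpl = project.dependsOn(tomatoShopSpec % Test)
```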

5.1.1. Top-level vs. sub-module

One fundamental question is whether your new specification module should be its own top-level project, or a sub-module or sub-project of another project.

Both ways are feasible. Often, you already have some root module, and you are pretty sure that your specification will be used for just one single implementation sub-module under this parent module. In such cases, it can be convenient to create the specification in a new, dedicated sub-module, as you can easily share the CI infrastructure with the other modules, and no new code repository needs to be created.

Plus, if you go this way, other implementors can still refer to your specification, as it will be just another artifact that they can depend on.

In other cases, a stronger separation might be preferable though. The decision is entirely up to you.

If you want to give another party or team access to a code repository with only the specification and nothing else, then the specification should obviously reside in its own repository. And therefore, in its own module.

This may be useful if several implementations need to conform to the same specification, and the specification does not belong to any of them. For example, if the specification relates to an abstract functionality, such as a certain general way of dealing with configuration files, there might not be a single module to which the specification belongs.

In rarer cases, you want to distribute the same specification to several teams. Those teams are each supposed to create an implementation that conforms to the specification. This may be the case for competitions, pitches, challenges or even redundant implementations for very security-sensitive features.

5.2. Basic structure

Two early decisions to take:

  • The name of your use case, which will be the prefix of many of your base types. For the sake of example, this documentation will use TomatoShop as its use case.

  • The outcome type O of your requirements' expectations. With regards to the TomatoShop example, we will use Either[String,Unit] : The left case represents an error message, and the right case means success.

    Note that this means that implementations should resolve in a synchronous way. If you want, you could also use something like Future[Unit] , for example.

5.2.1. Packages

It is suggested that you divide the specification into three packages:

  • [your-prefix-package] : Whatever you choose as prefix package. Your main specification goes in here.

  • [your-prefix-package].model : Model traits go in here.

  • [your-prefix-package].requirement : Your specific requirement implementations.

5.2.2. Requirement types

In your requirement package, create types similar to these:

trait TomatoShopRequirementResources extends RequirementResources {
}

This is the base trait for the requirements and requirement steps. You will put common things here that are not uniquely a "Given", "When" or "Then". This is for cross-cutting stuff.

trait TomatoShopRequirementStep extends RequirementStep[Either[String,Unit]] {
}

Base type for the various "Given", "When" and "Then" steps. As you can see, it already nails down the outcome type. If you wanted, you could still leave it open here, adding it as a type parameter to TomatoShopRequirement .

trait TomatoShopGiven extends TomatoShopRequirementResources {
}

This will contain your "Given" steps, which will be traits inside this trait.

In the same way as TomatoShopGiven , also create TomatoShopWhen and TomatoShopThen .

5.2.3. Model types

You can do this now, or later. When you already have a good concept of the programming models that your specification will deal with, you can already create them. Otherwise, wait until the need arises, and create the models when needed.

The model traits will go into your model package. They extend Model , optionally.

Models only reflect the entities and operations that the specification needs to know. If the specification is about making requests and responses, the models will only reflect those requests and responses. And only in so far as relevant for the specification.

The specification models also do not need to be arranged in such a way that they are easy or meaningful to implement. In an implementation, a certain concern might be broken down into several traits. In the specification, all of those can be just one trait. Don't think about how this should be implemented. After all, the models will not be implemented themselves: Their implementations will only forward the calls to an actual implementation.

You should not create any models for internals of the potential implementations.

Here is a tomato shop example:

trait TomatoShopModel {
  def tomatoesInStock():Int
  def stockUp(nTomatoes:Int):Unit
}

The implementation that gets specified by the model might be much, much more complex than that. It might contain a TomatoShopStockManager , a TomatoRequest type, and other more complex things. For the specification however, this complexity is not needed. We only cover what the specification needs to refer to, in the most simple and intuitive way imaginable.
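To illustrate the forwarding nature of model implementations, here is a self-contained sketch. The backend class and its method names are hypothetical stand-ins for a real, much more complex implementation:

```scala
// The specification-facing model (from the example above):
trait TomatoShopModel {
  def tomatoesInStock(): Int
  def stockUp(nTomatoes: Int): Unit
}

// Hypothetical implementation-specific class, standing in for a much more
// complex real implementation:
final class TomatoShopStockManager {
  private var stock = 0
  def currentStockLevel: Int = stock
  def order(n: Int): Unit = stock += n
}

// The model implementation contains no logic of its own; it only forwards
// calls to the implementation-specific class:
final class TomatoShopModelImpl(mgr: TomatoShopStockManager) extends TomatoShopModel {
  def tomatoesInStock(): Int = mgr.currentStockLevel
  def stockUp(nTomatoes: Int): Unit = mgr.order(nTomatoes)
}
```

Whoever understands the implementation code can write and maintain such a forwarding class without knowing anything about the test cases.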

5.2.4. Main specification

Put your main specification type in your prefix package. For example:

trait TomatoShopSpecification extends Specification[Either[String,Unit]]
with TomatoShopGiven with TomatoShopWhen with TomatoShopThen {
  def requirements:Seq[Requirement[Either[String,Unit]]] = Seq()
  def specificationTitle:String = "Tomato Shop"
}

Now you have everything ready. This is your skeletal specification. During the next steps, the first requirements will be added.

5.3. The shish-pike

This is about how the various "Given", "When" and "Then" steps are put together.

There is no runtime structure like a collection to which the steps are being added. Instead, they are put on top of one another using inheritance: One particular requirement extends all of its steps.

The result is a rather long inheritance declaration that can be visualized like a shish-pike: All steps are put on top of each other, mangled together into one single thing.

In this way, resources can be shared between steps: When one "When" step requires an instance of TomatoShopModel , for example, it can declare:

def shop:TomatoShopModel

This method must be implemented, otherwise compilation will fail. It would typically be implemented by a "Given" step.

Some steps can also perform modifications of the resource, by overriding the method:

override def invoiceCustomerName:String = super.invoiceCustomerName + " ** INVALID"

For this last example however, it is required that the super type defines the invoiceCustomerName method. If a supertype method is abstract, it cannot be called using super .

Therefore, this is the way that such resources should be shared: Within the ...Resources type, another inner type should be declared for sharing the resource:

trait TomatoShopRequirementResources extends RequirementResources {
  trait ContainsShop {def shop:TomatoShopModel = unassigned("shop")}
}

The unassigned method is predefined. It throws a descriptive exception when called.
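A minimal stand-in for unassigned could look like this. The actual exception type and message used by Spec-Driven are assumptions here; in the real library the method is available on the resources trait:

```scala
// Hypothetical minimal version of the predefined `unassigned` helper: it
// fails with a descriptive error until some step overrides the resource.
object Resources {
  def unassigned(name: String): Nothing =
    throw new IllegalStateException(s"resource '$name' has not been assigned by any step")
}
```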

In this way, sub-types can override this method. When multiple traits in one level of the inheritance hierarchy override the same method, they are stacked on top of each other in the same order as the with and extends clauses:

trait A {def foo:String = "(a) "}
trait B extends A {override def foo:String = super.foo + "(b) "}
trait C extends A {override def foo:String = super.foo + "(c) "}
// ...
println((new A with B with C {}).foo)

The example would print (a) (b) (c) . Note that trait C extends A , not B . But because the implementation declares new A with B with C , the super of this particular C refers to B .

This is the core mechanism used in Spec-Driven to stack things on top of each other. It makes the various steps work together in a modular way without them explicitly knowing each other. In this way, creating the specification results in a lot of reusable parts that can be mixed and matched for putting together new requirements from old parts.

This kind of reuse is very different from regular unit test frameworks.

It is also used to collect all the parts of the "Given", "When" and "Then" descriptions. For example, every "Given" step overrides the super method givenDescriptions , which returns a Seq[String] . It calls super.givenDescriptions and adds the descriptions for its own local "Given" elements. And of course, the super trait declares givenDescriptions to return an empty sequence.
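The collection of description parts can be demonstrated with a self-contained miniature (trait names invented for illustration):

```scala
// Each "Given" step appends its own description to the inherited sequence.
trait RequirementBase { def givenDescriptions: Seq[String] = Seq.empty }
trait Given_a_shop extends RequirementBase {
  override def givenDescriptions: Seq[String] = super.givenDescriptions :+ "a shop"
}
trait Given_full_stock extends RequirementBase {
  override def givenDescriptions: Seq[String] = super.givenDescriptions :+ "a full stock"
}
object StackingDemo {
  // Mixing both in collects both descriptions, in linearization order:
  val requirement = new RequirementBase with Given_a_shop with Given_full_stock {}
}
```

Here, StackingDemo.requirement.givenDescriptions yields Seq("a shop", "a full stock").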

5.4. Adding a requirement

Here is an example for adding two steps necessary for the next requirement. The code excerpts are to be put into their respective files.

trait TomatoShopRequirementResources extends RequirementResources {
  def newTomatoShop():TomatoShopModel
  trait ContainsShop {def shop:TomatoShopModel = unassigned("shop")}
}

This adds a new abstract method for creating a fresh, empty tomato shop model, and a trait ContainsShop that is to be extended by all steps that share one shop instance.

trait TomatoShopGiven extends TomatoShopRequirementResources {
  trait Given_a_tomato_shop extends TomatoShopRequirementStep with ContainsShop {
    override def givenDescriptions:Seq[String] =
      super.givenDescriptions :+ "a new tomato shop"
    private lazy val newShop = newTomatoShop()
    override def shop:TomatoShopModel = newShop
  }
}

The trait Given_a_tomato_shop is a "Given" step. It mixes in a description that would result in "GIVEN a new tomato shop" in the final requirement's description string.

It overrides the shop method so that a new shop is returned. Note the use of the private lazy val , which prevents a new instance from being created every time the method is called.

trait TomatoShopThen extends TomatoShopRequirementResources {
  trait Then_the_stock_level_is extends TomatoShopRequirementStep with ContainsShop {
    def expectedStockLevel:Int
    override def expectations:Seq[Expectation[Either[String,Unit]]] = super.expectations :+
      Expectation(s"the stock level is $expectedStockLevel") {shop.tomatoesInStock() match {
        case expected if expectedStockLevel == expected => Right(())
        case unexpected => Left(s"expected stock level $expectedStockLevel instead of $unexpected")
      }}
  }
}

This trait would add an expectation that verifies that the stock level of the shop has the expected level. Note that the expected level is configurable by leaving expectedStockLevel abstract.

Now those can be put together into a new requirement:

trait TomatoShopSpecification extends Specification[Either[String,Unit]]
with TomatoShopGiven with TomatoShopWhen with TomatoShopThen {
  def requirements:Seq[Requirement[Either[String,Unit]]] = Seq(
    new Requirement[Either[String,Unit]]
    with Given_a_tomato_shop
    with Then_the_stock_level_is {
      def requirementTitle:String = "Fresh shop instances have zero stock"
      def expectedStockLevel:Int = 0
    }
  )
  def specificationTitle:String = "Tomato shop"
}

And in this way, we have declared a requirement. It says that new shops must have zero stock initially.

5.4.1. Another requirement

Now, let's reuse those steps and add another one:

trait TomatoShopWhen extends TomatoShopRequirementResources {
  trait When_adding_one_to_stock extends TomatoShopRequirementStep with ContainsShop {
    override def whenDescriptions:Seq[String] = super.whenDescriptions :+ "adding one tomato to the stock"
    override def perform():Unit = {
      super.perform()
      shop.stockUp(1)
    }
  }
}

This step would add one tomato to the shop. Note the use of the perform() method for performing a side effect. When run, it is guaranteed that the perform() method will be called once.

Of course, it is vital to remember to call super.perform() when overriding the perform method. Otherwise, the super implementations will not be called.

We can reuse the previous two steps and add another requirement. You can now add this new requirement to the Seq of requirements in the specification:

new Requirement[Either[String,Unit]]
with Given_a_tomato_shop
with When_adding_one_to_stock
with Then_the_stock_level_is {
  def requirementTitle:String = "When adding a tomato to the stock, the stock level increases to 1"
  def expectedStockLevel:Int = 1
}

Because of the way inheritance works, you cannot mix in When_adding_one_to_stock more than once, for adding more stock. Such cases have to be handled with custom or configurable "When" steps.
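Such a configurable "When" step could be sketched like this (self-contained; the base traits are simplified stand-ins for the real Spec-Driven types):

```scala
// Simplified stand-ins for the Spec-Driven base types:
trait RequirementStepBase {
  def whenDescriptions: Seq[String] = Seq.empty
  def perform(): Unit = ()
}
trait TomatoShopModel { def stockUp(nTomatoes: Int): Unit; def tomatoesInStock(): Int }
trait ContainsShop { def shop: TomatoShopModel }

// The amount is left abstract, so each requirement configures its own value
// instead of mixing in the same trait several times:
trait When_adding_to_stock extends RequirementStepBase with ContainsShop {
  def nTomatoesToAdd: Int
  override def whenDescriptions: Seq[String] =
    super.whenDescriptions :+ s"adding $nTomatoesToAdd tomatoes to the stock"
  override def perform(): Unit = { super.perform(); shop.stockUp(nTomatoesToAdd) }
}
```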

5.5. Testing conformance

In order to test if a certain implementation conforms to the specification, add the conformance test to the implementation module in test scope. It will then run as a regular unit test.

One thing you need for that is an adapter from Spec-Driven to the test framework that you are using. At the time of writing this documentation, the only bridge provided is the one for ScalaTest. However, it is not very difficult to write a bridge for any testing framework.

This would be the unit test for the tomato shop specification:

class TomatoShopSpecificationTest extends SpecificationForScalaTest {
  def specifications:Seq[Specification[()=>Unit]] = Seq(
    new TomatoShopSpecification {
      def newTomatoShop():TomatoShopModel = ??? // use the model implementation here
    }
  )
}

This would be enough to run the specification as a series of structured unit tests. Note that you need to insert your TomatoShopModel implementation though. This implementation extends TomatoShopModel in an implementation-specific way.

Also, notice that the outcome of the test is specified as ()=>Unit here. However, the tomato shop specification declares its outcome as Either[String,Unit] . Those two do not match. It still works because an implicit conversion is supplied in this case.

For converting the specification automatically to an outcome of ()=>Unit , an implicit instance of ScalaTestAssertOutcome[O] needs to be in scope. The job of this type is to call the right ScalaTest assertions for a given outcome of type O . For Either[Any,Unit] , a default implicit ScalaTestAssertOutcome is provided. For other kinds of outcomes, you can easily write your own.

5.6. Use unit tests as well

Using a specification for testing does not mean that you should stop using unit tests. See the specification test as just one set of test suites within your unit tests.

However, whenever you see the need to add a unit test, ask yourself first if that would be better represented as a new requirement in the specification.

There are some test-worthy concerns that do not belong into a specification. The chapter "The full coverage goal" talks about this in more detail. Mostly, these are technical concerns like handling configuration, database connectivity and the like, which sometimes find no place in a specification. Those should be represented as unit tests.

5.7. Requirement change management

Spec-Driven advertises a "specify first, implement after" approach. New features should be specified first, before they get implemented, and the implementation should only cover features that are covered as requirements, nothing else. This is also true for changing existing requirements, or even removing requirements.

Ultimately, this means that your specification is part of your release cycle. When it gets modified, this often implies changes in the model traits.

Changing a model trait however would cause the existing model implementation to fail compiling. This is a systemic problem when combining TDD with strongly-typed languages. (And in this context, Spec-Driven Development is seen as a form of TDD, even though it differs in the details.)

There are four ways of dealing with this. Only the latter two are recommended by this documentation:

  1. Change the model implementation(s) at the same time: not recommended. Of course, one way to deal with the problem is to fix the compile errors when they occur. This is not a good general solution however.

    This approach would produce a strong coupling between two different tasks:

    • Changing the specification, and

    • Implementing the models demanded by the specification.

    Whoever modifies the specification should be able to do so without knowing any details about the implementation. Modifying the specification should not depend on one's ability to also implement the new requirements.

    In theory, it should be possible that two independent developers or teams are responsible for the specification and the implementation, respectively. This separation should also be held up even when only one developer is responsible for both. Even with only one developer, the specification and the implementation should be decoupled.

    (The only acceptable strong dependency goes from the implementation to the specification , and it only involves the test adapter and model implementations. Its sole purpose is to tell the computer how to interpret the specification for one particular implementation, without relying on referring to specific requirements directly.)

  2. Accepting the compile errors in the code base temporarily: not recommended. Knowing that the compile errors must be fixed eventually, one could decide to just accept them. One could even hold the opinion that the compile errors are an advantage, as the development team is strictly forced by them to provide the implementation.

    This is not a feasible solution however, as it would divide the compile errors in the code base into two categories:

    • "Justified" errors that are temporarily accepted.

    • "Real" errors that, like pretty much all other compile errors, must be fixed and are not tolerated.

    Both kinds of compile errors would surface in the same way, and there is no reliable way to tell them apart. Moreover, an artifact cannot be generated from code with compile errors, so the new version of the specification could not be shared via an artifact repository, for example.

    Moreover, CI tools such as a build server should follow certain rules for rejecting certain code commits. And a build with a compile error should generally be rejected.

    Other parts of the development infrastructure could also rely on the code compiling successfully. For example, it might be reasonable in certain situations to create a Git commit hook that rejects commits leading to compile errors. A certain way of dealing with specifications should not take that decision away from the responsible stakeholders. Indeed, however you decide to deal with your software specifications should have no impact on your coding infrastructure, at least not on that level.

  3. Use artifact versioning for the specification: recommended solution. Jackadull uses SBT, others may use Maven, Gradle or other artifact-generating build tools. They all have in common that artifacts and dependencies are versioned .

    For example, let's say your specification has version 1.0.0 . The test scope of your implementation has a dependency on exactly that version number.

    While changing the specification, you change the version to 1.0.1-SNAPSHOT . You add some requirements, maybe modify some. Finally, you decide to release the changes of the specification as version 1.1.0 , because it contains backward-incompatible changes.

    At that point, the implementation test scope still depends on the specification version 1.0.0 . The task of integrating the new requirements is effectively the task of upgrading that dependency to version 1.1.0 .

    This can be done independently of the release of the newer specification version. The separation of concerns holds up.

  4. Use FeatureNotImplemented : also recommended, when necessary. Another way to make the code compile despite missing model methods is to implement them such that they throw new FeatureNotImplemented . The exception type FeatureNotImplemented is provided by Spec-Driven, and it should be thrown from within a model implementation to signal that the feature has not yet been implemented.

    Whenever possible, the previous approach (artifact versioning) should be used instead, because it is cleaner. But sometimes you will find yourself in a situation where using FeatureNotImplemented is necessary.

    For example, the task of implementing the new requirements may be divided among several developers. Those developers want to share code that already incorporates the new version of the specification, but the code must still compile.

    In such and similar situations, this exception type is the right tool.
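    A stub implementation might look like the following sketch. For illustration, a stand-in exception class is defined here; in an actual project you would throw the FeatureNotImplemented type provided by Spec-Driven. The trait and class names are hypothetical:

```scala
// Stand-in for Spec-Driven's FeatureNotImplemented exception (illustration only).
final class FeatureNotImplemented extends RuntimeException("feature not implemented")

// Hypothetical model interface derived from the new specification version.
trait StockRepository {
  def stockLevel(sku: String): Int
}

// The code compiles, but the feature is visibly marked as not yet implemented.
final class InMemoryStockRepository extends StockRepository {
  def stockLevel(sku: String): Int = throw new FeatureNotImplemented
}
```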

    The ScalaTest bridge for Spec-Driven executes a non-implemented feature as a ScalaTest Assertions.cancel(). This means that the test procedure will not fail, but the test results will list the corresponding test cases as "cancelled".

    This is great during development, but bad for releases. When you run the release (or deploy) procedure, you want the whole process to fail if the codebase contains non-implemented features.

    In order to fix this, the Spec-Driven ScalaTest bridge can process the ConfigMap parameter acceptFeatureNotImplemented=false. When set to this value, a non-implemented feature will instead result in a ScalaTest Assertions.fail(). You should configure your build environment to add this flag when making a release build. This way, you ensure that no code containing non-implemented features can slip into a release.

    As an example, this is the configuration used in the Spec-Driven Maven POM to achieve this effect. (Spec-Driven uses the Maven profile jackadull-release when creating a new release.)

    <?xml version="1.0" encoding="UTF-8"?>
    <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
      <!-- ... -->
      <profiles>
        <!-- ... -->
        <profile>
          <id>jackadull-release</id>
          <build>
            <plugins>
              <plugin>
                <groupId>org.scalatest</groupId>
                <artifactId>scalatest-maven-plugin</artifactId>
                <configuration>
                  <config>acceptFeatureNotImplemented=false</config>
                </configuration>
              </plugin>
            </plugins>
          </build>
        </profile>
      </profiles>
    </project>

    This uses the config parameter of the ScalaTest Maven plugin to pass the flag through. You can use this example as inspiration for your own setup.
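    If you build with SBT instead of Maven, the same config map entry can be passed through ScalaTest's runner arguments, which accept -D<key>=<value> entries for the ConfigMap. A sketch:

```scala
// build.sbt: pass the config map entry to ScalaTest for release builds.
Test / testOptions += Tests.Argument(TestFrameworks.ScalaTest,
  "-DacceptFeatureNotImplemented=false")
```

    In practice you would add this setting only in a release profile or behind a system property, so that development builds keep reporting such features as "cancelled".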

6. The full coverage goal

TDD proponents sometimes propose to increase code coverage gradually in a project, seeing 100% coverage as a distant ideal. Spec-Driven proposes starting at 100% coverage and always staying at that level.

The reasons become clear when asking: what causes coverage gaps, and what do they mean?

6.1. Reasons for coverage gaps

  • Technicalities: Specifications focus on a certain concern and abstract everything else away. For example, a specification might deal with requirements related to counting stock amounts of wares. The specification might imply that the various stock levels get stored somewhere, but it does not say in what kind of storage implementation, or whether it should be an SQL or NoSQL database. From the point of view of the specification, this is a technicality.

    There are two ways of dealing with this:

    1. Create another specification that deals with storing data in a database. All modules that use that form of storage could independently conform to this specification. This would obviously fill that coverage gap.

    2. Create a suite of unit tests that produce evidence that the module's database access layer works correctly.

    You can decide either way. There is no easy rule for this decision, so no definite advice can be given. Consider both alternatives and gather pros and cons for your specific case.

    However, accepting the coverage gap is not an option.

    If the code is important for your application, why not produce some evidence of it working correctly? Maybe you are sure you made no mistakes writing it. But you or someone else might modify it later, so it pays to reduce the chance of introducing bugs then, ahead of time.

    If the code is not important for the application, why not remove it? Removing unnecessary lines of code is the best way to improve code quality and maintainability. A line of code that is never written is guaranteed to contain no bugs and carries a technical debt of zero. In contrast, every line that exists has a non-zero chance of containing a bug, and a non-zero technical debt.
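    As an illustration of option 2 above, evidence for a storage layer can come from a small, focused unit test. The following sketch uses a hypothetical in-memory store with plain assertions; in a real project this would be a ScalaTest suite against the actual database access layer:

```scala
// Hypothetical in-memory storage layer for stock levels.
final class InMemoryStockStore {
  private var levels = Map.empty[String, Int]
  def put(sku: String, amount: Int): Unit = levels += (sku -> amount)
  def get(sku: String): Int = levels.getOrElse(sku, 0)
}

// A focused check producing evidence that storing and reading back works.
val store = new InMemoryStockStore
store.put("widget", 5)
assert(store.get("widget") == 5)
assert(store.get("missing") == 0)
```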

  • Problems with the coverage framework: Coverage frameworks can have trouble recognizing code coverage in some cases.

    For example, at the time of writing, JaCoCo does not recognize invocations of Scala macros. This also affects ScalaTest methods like fail and assert.

    Here is a good strategy for dealing with such cases:

    1. Isolate such invocations into a helper class that contains nothing but those invocations. Keep those helper methods as short as possible; they should contain exactly the coverage gap, and nothing more.

    2. Configure the coverage tool to ignore that particular helper class.
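    With sbt-scoverage, for example, step 2 can be sketched like this (the package name is hypothetical):

```scala
// build.sbt: exclude the dedicated helper package from coverage measurement.
// The setting takes a semicolon-separated list of regular expressions.
coverageExcludedPackages := "com\\.example\\.coveragehelpers\\..*"
```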

    The downside of this method is that the code gets changed because of a tool's shortcoming. This can be an antipattern, so consider carefully when to do it. Also, it can be difficult to remember this decision when the coverage tool gets an update that fixes the problem.

    In this case, however, reaching 100% coverage is such a worthy goal that the workaround is justified. At least it is better than the alternative: leaving an unexplained coverage hole that cannot be filled.

  • Unreachable code: Only in very rare cases is it necessary to introduce code that can never be called.

    Such cases can point to an insufficiency in the programming language, or its compiler. Therefore, such cases should be handled exactly like the previous one, "Problems with the coverage framework".

  • Unspecified application behavior: If your application does something that is not specified, that should definitely go into the specification.

    The only exception is optional features. But features that have never been requested should never be implemented.

    Should a developer for any reason insist on implementing unspecified behavior, she can still write her own mini-specification for it. Then it is specified, and nothing stops anyone from writing small specifications for optional features.
