This is an experimental feature for Spock, which is based on the experimental implementation of parallel execution in the JUnit Platform.

Parallel execution has the potential to reduce the overall test execution time. The actual achievable reduction will heavily depend on the respective codebase and can vary wildly.

By default, Spock runs tests sequentially with a single thread. As of version 2.0, Spock supports parallel execution based on the JUnit Platform. To enable parallel execution, set the runner.parallel.enabled configuration flag to true. See Spock Configuration File for general information about this file.

runner {
  parallel {
    enabled true
  }
}

JUnit Jupiter also supports parallel execution; both rely on the JUnit Platform implementation but function independently of each other. If you enable parallel execution in Spock it won’t affect Jupiter, and vice versa. The JUnit Platform executes the test engines (Spock, Jupiter) sequentially, so there should not be any interference between engines.

Execution modes

Spock supports two execution modes: SAME_THREAD and CONCURRENT. You can define the execution mode explicitly for a specification or feature via the @Execution annotation. Otherwise, Spock will use the values of defaultSpecificationExecutionMode and defaultExecutionMode respectively; both default to CONCURRENT. Certain extensions also set execution modes when they are applied.

  • defaultSpecificationExecutionMode controls what execution mode a specification will use by default.

  • defaultExecutionMode controls what execution mode a feature and its iterations (if data driven) will use by default.
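Both defaults can be set in the Spock configuration file. A minimal sketch, assuming Spock 2.x (the ExecutionMode import path is from Spock's runtime model):

```groovy
import org.spockframework.runtime.model.parallel.ExecutionMode

runner {
  parallel {
    enabled true
    // Run specifications concurrently, but run the features of each
    // specification sequentially on the specification's thread.
    defaultSpecificationExecutionMode ExecutionMode.CONCURRENT
    defaultExecutionMode ExecutionMode.SAME_THREAD
  }
}
```

This particular combination corresponds to Figure 3 below; swap the two values for the combination shown in Figure 4.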

Figure 1. Sequential Execution, either runner.parallel.enabled=false or SAME_THREAD Specifications, SAME_THREAD Features
Figure 2. CONCURRENT Specifications, CONCURRENT Features
Figure 3. CONCURRENT Specifications, SAME_THREAD Features
Figure 4. SAME_THREAD Specifications, CONCURRENT Features

Execution Hierarchy

Figure 5. Legend for the following figures
  • The node Same Thread will run in the same thread as its parent.

  • The node Concurrent will be executed in another thread; all concurrent nodes can execute in different threads from each other.

  • The node ResourceLock will be executed in another thread, but will also acquire a lock.

  • The node Same Thread with Lock will run in the same thread as its parent, thus inheriting the lock.

  • The node Data Driven Feature represents a data driven feature with Data Driven Feature[1] and Data Driven Feature[2] being the iterations.

  • The node Isolated will run exclusively, no other specifications or features will run at the same time.

Figure 6. Single threaded execution

Figure 6 shows the default case when parallel execution is disabled (runner.parallel.enabled=false) or when both specifications (defaultSpecificationExecutionMode) and features (defaultExecutionMode) are set to SAME_THREAD.

Figure 7. Execution with CONCURRENT Specifications, SAME_THREAD Features

Figure 7 shows the result of setting defaultSpecificationExecutionMode=CONCURRENT and defaultExecutionMode=SAME_THREAD: the specifications run concurrently, but all features run in the same thread as their specification.

Figure 8. Execution with SAME_THREAD Specifications, CONCURRENT Features

Figure 8 shows the result of setting defaultSpecificationExecutionMode=SAME_THREAD and defaultExecutionMode=CONCURRENT: the specifications run in the same thread, one after the other, while the features inside each specification run concurrently.

Figure 9. Execution with CONCURRENT Specifications, CONCURRENT Features

Figure 9 shows the result of setting defaultSpecificationExecutionMode=CONCURRENT and defaultExecutionMode=CONCURRENT: both specifications and features run concurrently.

Execution Mode Inheritance

If nothing else is explicitly configured, specifications use defaultSpecificationExecutionMode and features use defaultExecutionMode. However, this changes when you set the execution mode explicitly via @Execution. Each node (specification, feature) first checks whether it has an explicit execution mode set; otherwise it checks its parents for an explicit setting, and falls back to the respective default.

The following examples have defaultSpecificationExecutionMode=SAME_THREAD and defaultExecutionMode=SAME_THREAD. If you invert the values SAME_THREAD and CONCURRENT in these examples you will get the inverse result.

Figure 10. Execution with SAME_THREAD Specifications, SAME_THREAD Features and explicit @Execution on Features

In Figure 10 @Execution is applied on the features and those features and iterations will execute concurrently while the rest will execute in the same thread.
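The situation in Figure 10 might be expressed as follows; the class and feature names are made up for illustration, and the import paths assume Spock 2.x:

```groovy
import spock.lang.Execution
import spock.lang.Specification

import static org.spockframework.runtime.model.parallel.ExecutionMode.CONCURRENT

// The specification itself uses the SAME_THREAD default ...
class HypotheticalSpec extends Specification {

  @Execution(CONCURRENT)  // ... but this feature runs concurrently
  def "explicitly concurrent feature"() {
    expect: true
  }

  def "same-thread feature"() {  // inherits the SAME_THREAD default
    expect: true
  }
}
```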

Figure 11. Execution with SAME_THREAD Specifications, SAME_THREAD Features and explicit @Execution on a Specification

In Figure 11 @Execution is applied on one specification causing the specification and all its features to run concurrently. The features execute concurrently since they inherit the explicit execution mode from the specification.

Figure 12. Execution with SAME_THREAD Specifications, SAME_THREAD Features and explicit @Execution on Features and Specifications

Figure 12 showcases the combined application of @Execution on a specification and some of its features. As in the previous example, the specification and its features will execute concurrently, except for TestB1, which has its own explicit execution mode set.
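A sketch of this combination, with illustrative names and Spock 2.x import paths assumed — the explicit feature-level setting wins over the one inherited from the specification:

```groovy
import spock.lang.Execution
import spock.lang.Specification

import static org.spockframework.runtime.model.parallel.ExecutionMode.CONCURRENT
import static org.spockframework.runtime.model.parallel.ExecutionMode.SAME_THREAD

@Execution(CONCURRENT)  // inherited by all features without their own setting
class HypotheticalSpec extends Specification {

  def "inherits CONCURRENT from the specification"() {
    expect: true
  }

  @Execution(SAME_THREAD)  // explicit setting overrides the inherited one
  def "runs in the specification's thread"() {
    expect: true
  }
}
```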

Resource Locks

With parallel execution comes a new set of challenges for testing, as shared state can be modified and consumed by multiple tests at the same time.

A simple example would be two features that test the use of a system property, both setting it to a specific value in their respective given block and then executing the code under test to verify the expected behavior. If they run sequentially, both complete without issue. However, if they run at the same time, both given blocks will run before the when blocks, and one feature will fail since the system property did not contain the expected value.

The above example could be fixed easily if both features were part of the same specification, by forcing them to run in the same thread with @Execution(SAME_THREAD). However, this is not really practical when the features are in separate specifications. To solve this issue, Spock supports coordinating access to shared resources via @ResourceLock and @ResourceLockChildren.
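Applied to the system-property example, the two features could declare a lock on the same key so they never run at the same time. A sketch, assuming Spock 2.x; the key string and class names are illustrative:

```groovy
import spock.lang.ResourceLock
import spock.lang.Specification

class PropertyOnSpec extends Specification {

  @ResourceLock("my.hypothetical.property")  // READ_WRITE by default: exclusive access
  def "behaves one way when the property is on"() {
    given:
    System.setProperty("my.hypothetical.property", "on")

    expect:
    System.getProperty("my.hypothetical.property") == "on"
  }
}

class PropertyOffSpec extends Specification {

  @ResourceLock("my.hypothetical.property")  // same key: never runs concurrently with the above
  def "behaves another way when the property is off"() {
    given:
    System.setProperty("my.hypothetical.property", "off")

    expect:
    System.getProperty("my.hypothetical.property") == "off"
  }
}
```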

With @ResourceLock you can define both a key and a mode. By default, @ResourceLock assumes ResourceAccessMode.READ_WRITE, but you can weaken it to ResourceAccessMode.READ.

  • ResourceAccessMode.READ_WRITE will enforce exclusive access to the resource.

  • ResourceAccessMode.READ will prevent any READ_WRITE locks, but will allow other READ locks.

READ-only locks will isolate tests from others that modify the shared resource, while at the same time allowing tests that also only read the resource to execute. You should not modify the shared resource when you only hold a READ lock; otherwise the guarantees don’t hold.
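A READ-only lock on the same key might look like this (a sketch, assuming Spock 2.x; the key and class name are illustrative):

```groovy
import spock.lang.ResourceLock
import spock.lang.Specification

import static org.spockframework.runtime.model.parallel.ResourceAccessMode.READ

class ReadingSpec extends Specification {

  // READ locks may run concurrently with other READ locks on the same key,
  // but never concurrently with a READ_WRITE lock on that key.
  @ResourceLock(value = "my.hypothetical.property", mode = READ)
  def "only reads the property"() {
    expect:
    System.getProperty("my.hypothetical.property", "default") != null
  }
}
```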

Certain extensions also set implicit locks when they are applied.

Figure 13. Two features with @ResourceLock

Lock inheritance

If a parent node has a lock, it forces its children to run in the same thread. As READ_WRITE locks cause serialized execution anyway, this is effectively no different from applying the lock to every child directly. However, if only READ locks are used, it makes a difference whether the lock is acquired on the parent or on each child.

Figure 14. Locks are passed on from the spec to the features
Figure 15. Locks on data driven features

If you want a lock on all features, it might seem intuitive to apply @ResourceLock on the specification. However, this actually applies the lock at the specification level and forces every feature into SAME_THREAD execution. To avoid this, you can use @ResourceLockChildren on the specification; it behaves the same as applying @ResourceLock to every feature. As mentioned before, you will only notice a real difference when using @ResourceLockChildren with READ-only locks, as they allow concurrent execution.
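A sketch of @ResourceLockChildren with a READ-only lock, assuming Spock 2.x (key and class names are illustrative):

```groovy
import spock.lang.ResourceLockChildren
import spock.lang.Specification

import static org.spockframework.runtime.model.parallel.ResourceAccessMode.READ

// Equivalent to putting @ResourceLock(value = "config.file", mode = READ)
// on every feature; with READ-only locks the features may still run concurrently.
@ResourceLockChildren(value = "config.file", mode = READ)
class HypotheticalConfigSpec extends Specification {

  def "reads one setting"() {
    expect: true
  }

  def "reads another setting"() {
    expect: true
  }
}
```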

Figure 16. Locks on children

Lock coarsening

To avoid deadlocks, Spock pulls locks up to the specification when locks are defined on both the specification and its features. The specification will then hold all defined locks. If the features had both READ_WRITE and READ locks for the same resource, the READ lock is merged into the READ_WRITE lock.

Figure 17. Lock coarsening - before
Figure 18. Lock coarsening - after

Isolated Execution

Sometimes you want to modify and test something that affects every other feature. You could put a READ @ResourceLock on every feature, but that would be impractical. The @Isolated extension enforces that the annotated specification runs exclusively, without any other specifications or features running at the same time. You can think of this as an implicit global lock.

As with other locks, the features in an @Isolated specification will run in SAME_THREAD mode. @Isolated can only be applied at the specification level, so if you have a large specification and only need isolation for a few features, you might want to consider splitting the spec into an @Isolated and a non-isolated part.
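A minimal sketch of @Isolated, assuming Spock 2.x; the class name and property key are made up for illustration:

```groovy
import spock.lang.Isolated
import spock.lang.Specification

@Isolated  // nothing else runs while this specification executes
class HypotheticalGlobalStateSpec extends Specification {

  def "changes state that would be visible to every other feature"() {
    given:
    System.setProperty("hypothetical.global.flag", "true")

    expect:
    System.getProperty("hypothetical.global.flag") == "true"

    cleanup:
    System.clearProperty("hypothetical.global.flag")
  }
}
```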

Figure 19. @Isolated execution

Parallel Thread Pool

With parallel execution enabled, specifications and features can execute concurrently. You can control the size of the thread pool that executes the features. Spock uses Runtime.getRuntime().availableProcessors() to determine the available processors.

  • dynamic(BigDecimal factor) - Computes the desired parallelism by multiplying the number of available processors by the factor and rounding down to the nearest integer. For example, a factor of 0.5 will use half of your processors.

  • dynamicWithReservedProcessors(BigDecimal factor, int reservedProcessors) - Same as dynamic, but ensures that the given number of reservedProcessors is not used. The reservedProcessors are counted against the available cores, not against the result of the factor.

  • fixed(int parallelism) - Uses the given number of threads.

  • custom(int parallelism, int minimumRunnable, int maxPoolSize, int corePoolSize, int keepAliveSeconds) - Allows complete control over the thread pool. It should only be used when the other options are insufficient and you need that extra bit of control. Check the Javadoc of spock.config.ParallelConfiguration for a detailed description of the parameters.

By default, Spock uses dynamicWithReservedProcessors(1.0, 2), that is, all your logical processors minus 2.

If the calculated parallelism is less than 2, then Spock will execute single threaded, basically the same as runner.parallel.enabled=false.
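The pool settings go into the parallel block of the configuration file. A sketch using the dynamic factor (the value is illustrative):

```groovy
runner {
  parallel {
    enabled true
    // Use half of the available processors,
    // e.g. a parallelism of 4 on an 8-core machine.
    dynamic(0.5)
  }
}
```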

Example SpockConfig.groovy with fixed setting
runner {
  parallel {
    enabled true
    fixed(4)
  }
}