|This is an experimental feature for Spock, which is based on the experimental implementation of parallel execution in the JUnit Platform.
Parallel execution has the potential to reduce the overall test execution time. The actual achievable reduction will heavily depend on the respective codebase and can vary wildly.
By default, Spock runs tests sequentially with a single thread.
As of version 2.0, Spock supports parallel execution based on the JUnit Platform.
To enable parallel execution, set the runner.parallel.enabled configuration to true.
See Spock Configuration File for general information about this file.
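As a minimal sketch, the configuration file could look like this (the conventional file name SpockConfig.groovy is assumed here; see the Spock Configuration File documentation for where the file may live):

```groovy
// SpockConfig.groovy -- enable parallel execution for the whole test run
runner {
  parallel {
    enabled true
  }
}
```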
|JUnit Jupiter also supports parallel execution; both rely on the JUnit Platform implementation, but function independently of each other. If you enable parallel execution in Spock it won’t affect Jupiter and vice versa. The JUnit Platform executes the test engines (Spock, Jupiter) sequentially, so there should not be any interference between engines.
Spock supports two execution modes: SAME_THREAD and CONCURRENT.
You can define the execution mode explicitly for a specification or feature via the @Execution annotation.
Otherwise, Spock will use the value of defaultSpecificationExecutionMode or
defaultExecutionMode respectively; both have
CONCURRENT as their default value.
Certain extensions also set execution modes when they are applied.
defaultSpecificationExecutionMode controls what execution mode a specification will use by default.
defaultExecutionMode controls what execution mode a feature and its iterations (if data driven) will use by default.
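Both defaults can be set in the configuration file; a sketch (the particular combination of modes below is just an example):

```groovy
import org.spockframework.runtime.model.parallel.ExecutionMode

runner {
  parallel {
    enabled true
    // run specifications concurrently, but the features inside each
    // specification sequentially in their specification's thread
    defaultSpecificationExecutionMode ExecutionMode.CONCURRENT
    defaultExecutionMode ExecutionMode.SAME_THREAD
  }
}
```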
Same Thread will run in the same thread as its parent.
Concurrent will be executed in another thread; all concurrent nodes can execute in different threads from each other.
ResourceLock will be executed in another thread, but will also acquire a lock.
Same Thread with Lock will run in the same thread as its parent, thus inheriting the lock.
Data Driven Feature represents a data-driven feature, with
Data Driven Feature [1] and
Data Driven Feature [2] being the iterations.
Isolated will run exclusively; no other specifications or features will run at the same time.
Figure 6 shows the default case when parallel execution is disabled (
runner.parallel.enabled=false) or when both specifications (
defaultSpecificationExecutionMode) and features (
defaultExecutionMode) are set to SAME_THREAD.
Figure 7 shows the result of setting
defaultSpecificationExecutionMode=CONCURRENT and defaultExecutionMode=SAME_THREAD: the specifications will run concurrently, but all features will run in the same thread as their specification.
Figure 8 shows the result of setting
defaultSpecificationExecutionMode=SAME_THREAD and defaultExecutionMode=CONCURRENT: the specifications will run in the same thread, causing them to run one after the other.
The features inside a specification will run concurrently.
Figure 9 shows the result of setting
defaultSpecificationExecutionMode=CONCURRENT and defaultExecutionMode=CONCURRENT: both specifications and features will run concurrently.
If nothing else is explicitly configured, specifications will use
defaultSpecificationExecutionMode and features will use defaultExecutionMode.
However, this changes when you set the execution mode explicitly via @Execution.
Each node (specification, feature) first checks whether it has an explicit execution mode set;
otherwise it checks its parents for an explicit setting, falling back to the respective defaults otherwise.
The following examples have defaultSpecificationExecutionMode=SAME_THREAD and defaultExecutionMode=SAME_THREAD.
If you invert the values SAME_THREAD and
CONCURRENT in these examples you will get the inverse result.
SAME_THREAD Features and explicit @Execution on Features
In Figure 10, @Execution is applied to some features; those features and their iterations will execute concurrently, while the rest will execute in the same thread.
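A sketch of this setup, assuming the SAME_THREAD defaults stated above (the class and feature names are made up for illustration):

```groovy
import org.spockframework.runtime.model.parallel.ExecutionMode
import spock.lang.Execution
import spock.lang.Specification

class ASpec extends Specification {
  @Execution(ExecutionMode.CONCURRENT)
  def "testA1"() {
    // runs concurrently with other CONCURRENT nodes
    expect: true
  }

  def "testA2"() {
    // no explicit mode: inherits SAME_THREAD from the defaults
    expect: true
  }
}
```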
SAME_THREAD Features and explicit @Execution on a Specification
In Figure 11, @Execution is applied to one specification, causing the specification and all its features to run concurrently.
The features execute concurrently since they inherit the explicit execution mode from the specification.
SAME_THREAD Features and explicit @Execution on Features and Specifications
Figure 12 showcases the combined application of
@Execution on a specification and some of its features.
As in the previous example, the specification and its features will execute concurrently, except
testB1, since it has its own explicit execution mode set.
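The layout of Figure 12 could be sketched like this (the class name BSpec is made up; testB1 is the feature named in the text):

```groovy
import org.spockframework.runtime.model.parallel.ExecutionMode
import spock.lang.Execution
import spock.lang.Specification

@Execution(ExecutionMode.CONCURRENT)
class BSpec extends Specification {
  def "testB2"() {
    // inherits CONCURRENT from the specification
    expect: true
  }

  @Execution(ExecutionMode.SAME_THREAD)
  def "testB1"() {
    // its own explicit mode wins over the inherited one
    expect: true
  }
}
```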
With parallel execution comes a new set of challenges for testing, as shared state can be modified and consumed by multiple tests at the same time.
A simple example would be two features that test the use of a system property, both setting it to a specific value in the respective
given block and then executing the code under test to check the expected behavior.
If they run sequentially, then both complete without issue. However, if they run at the same time, both
given blocks will run before the
when blocks, and one feature will fail since the system property did not contain the expected value.
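The race described above could be sketched like this (spec names, feature names, and the property key are made up):

```groovy
import spock.lang.Specification

// Two hypothetical specs racing on the same system property
class FooSpec extends Specification {
  def "property is foo"() {
    given:
    System.setProperty("my.prop", "foo")

    expect:
    // may fail if BarSpec's given block ran in between
    System.getProperty("my.prop") == "foo"
  }
}

class BarSpec extends Specification {
  def "property is bar"() {
    given:
    System.setProperty("my.prop", "bar")

    expect:
    System.getProperty("my.prop") == "bar"
  }
}
```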
The above example could simply be fixed, if both features are part of the same specification, by setting them to run in the same thread with @Execution(ExecutionMode.SAME_THREAD).
However, this is not really practicable when the features are in separate specifications.
To solve this issue, Spock has support for coordinating access to shared resources via @ResourceLock.
With @ResourceLock you can define both a
key and a
mode. By default, @ResourceLock assumes
ResourceAccessMode.READ_WRITE, but you can weaken it to ResourceAccessMode.READ.
ResourceAccessMode.READ_WRITE will enforce exclusive access to the resource.
ResourceAccessMode.READ will prevent any
READ_WRITE locks, but will allow other
READ locks.
READ-only locks will isolate tests from others that modify the shared resource, while at the same time allowing tests that also only read the resource to execute.
You should not modify the shared resource when you only hold a
READ lock, otherwise the assurances don’t hold.
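A sketch of both modes on the system-property example (the spec name and property key are made up):

```groovy
import org.spockframework.runtime.model.parallel.ResourceAccessMode
import spock.lang.ResourceLock
import spock.lang.Specification

class PropertySpec extends Specification {
  @ResourceLock("my.prop")  // READ_WRITE is the default mode: exclusive access
  def "writes the property"() {
    given:
    System.setProperty("my.prop", "foo")

    expect:
    System.getProperty("my.prop") == "foo"
  }

  @ResourceLock(value = "my.prop", mode = ResourceAccessMode.READ)
  def "only reads the property"() {
    // READ lock: other READ-only features may run concurrently,
    // but no READ_WRITE lock can be held at the same time
    expect:
    System.getProperty("my.prop") != null
  }
}
```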
Certain extensions also set implicit locks when they are applied.
If a parent node has a lock, it forces its children to run in the same thread.
As READ_WRITE locks cause serialized execution anyway, this is effectively no different from what would happen if the lock were applied to every child directly.
However, if only
READ locks are used, then it makes a difference whether the lock is acquired on the parent or on each child.
If you want to have a lock on all features, it might be intuitive to apply
@ResourceLock on the specification.
However, with this you are actually applying the lock at the specification level, forcing every feature to run in the same thread.
To avoid this, you can use
@ResourceLockChildren on the specification; this behaves the same as applying
@ResourceLock on every feature.
As mentioned before, you will only notice a real difference if you are using
READ-only locks, as they allow concurrent execution.
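A sketch of a READ-only lock applied via @ResourceLockChildren (the spec name and property key are made up):

```groovy
import org.spockframework.runtime.model.parallel.ResourceAccessMode
import spock.lang.ResourceLockChildren
import spock.lang.Specification

// Behaves like putting @ResourceLock(value = "my.prop", mode = READ)
// on each feature, so the features can still run concurrently with
// each other while being isolated from READ_WRITE lock holders.
@ResourceLockChildren(value = "my.prop", mode = ResourceAccessMode.READ)
class ReaderSpec extends Specification {
  def "first reader"() {
    expect:
    System.getProperty("my.prop") == System.getProperty("my.prop")
  }

  def "second reader"() {
    expect:
    System.getProperty("my.prop") == System.getProperty("my.prop")
  }
}
```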
To avoid deadlocks, Spock pulls up locks to the specification when locks are defined on both the specification and features.
The specification will then contain all defined locks.
If the features both had
READ locks for the same resource, then the
READ locks will be merged into the READ_WRITE lock.
Sometimes, you want to modify and test something that affects every other feature; you could put a
@ResourceLock on every feature, but that is impractical.
The @Isolated extension enforces that only this feature runs, without any other features running at the same time.
You can think of this as an implicit global lock.
As with other locks, the features in an
@Isolated specification will run in SAME_THREAD mode.
@Isolated can only be applied at the specification level, so if you have a large specification and only need it for a few features,
you might want to consider splitting the spec into an
@Isolated one and a non-isolated one.
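A minimal sketch of an isolated specification (the spec name and property key are made up):

```groovy
import spock.lang.Isolated
import spock.lang.Specification

@Isolated  // no other specifications or features run while this spec executes
class GlobalStateSpec extends Specification {
  def "changes globally visible state"() {
    given:
    System.setProperty("my.prop", "isolated")

    expect:
    System.getProperty("my.prop") == "isolated"
  }
}
```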
With parallel execution enabled, specifications and features can execute concurrently.
You can control the size of the thread pool that executes the features.
Spock uses Runtime.getRuntime().availableProcessors() to determine the available processors.
dynamic(BigDecimal factor) - Computes the desired parallelism based on the number of available processors multiplied by the
factor and rounded down to the nearest integer. For example, a factor of
0.5 will use half of your processors.
dynamicWithReservedProcessors(BigDecimal factor, int reservedProcessors) - Same as
dynamic, but ensures that the given number of
reservedProcessors is not used. The
reservedProcessors are counted against the available cores, not against the result of the factor.
fixed(int parallelism) - Uses the given number of threads.
custom(int parallelism, int minimumRunnable, int maxPoolSize, int corePoolSize, int keepAliveSeconds) - Allows complete control over the thread pool. However, it should only be used when the other options are insufficient, and you need that extra bit of control. Check the Javadoc of
spock.config.ParallelConfiguration for a detailed description of the parameters.
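As a sketch, the strategies above are invoked inside the parallel block of the configuration file; pick exactly one (the chosen values here are just examples):

```groovy
runner {
  parallel {
    enabled true
    fixed(4)                                  // always use 4 threads
    // dynamic(0.5)                           // half of the available processors
    // dynamicWithReservedProcessors(1.0, 2)  // all processors except 2
  }
}
```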
By default, Spock uses
dynamicWithReservedProcessors(1.0, 2), that is, all your logical processors minus 2.
If the calculated parallelism is less than
2, then Spock will execute single-threaded, basically the same as runner.parallel.enabled=false.