Time is a crucial constraint in software development, so software teams often need to focus their test efforts on the most important application paths.
⚠ UNDER CONSTRUCTION ⚠
In software testing, test coverage is a metric that helps you understand which parts of your application (or code) are exercised by your tests. The number of feasible paths through an application grows exponentially with its size and can even be infinite when the application has unbounded loop iterations - a problem known as path explosion. Concordia Compiler deals with it by providing sets of combination and selection strategies and by trying to achieve full path coverage over time.
All the Features, Scenarios, and Variants are covered by default.
The CLI parameter --files can filter the .feature files to be considered.
The CLI parameter --ignore can indicate the .feature files to be ignored, when a directory is given.
The tag @ignore can be used to mark a Feature or Variant to be ignored by the test generator. However, it can still be used by other Features or Variants.
The tag @importance (e.g., @importance( 8 )) can be used to denote the importance of a Feature. The CLI parameters --sel-min-feature and --sel-max-feature can then be used to filter the features to be considered by the test generator. Example: concordia --sel-min-feature 7 makes the compiler consider the features with an importance value of 7 or above. By default, all the features receive an importance value of 5.
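For instance, a Feature could be tagged like this (a sketch; the Feature name is just illustrative):

```concordia
# Sketch - marks this Feature with importance 8
@importance( 8 )
Feature: Payment
```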
Variants are selected and combined using State-based Strategies (see below).
All the UI Elements constraints are covered by default, using a set of Testing Techniques (see below).
In Concordia, you can declare a State in a Variant sentence by writing text between tildes (~), like this:
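The following fragment is a sketch (step texts are illustrative) of a Variant whose last sentence declares a State named "user is logged in":

```concordia
Variant: Successful Login
  Given that I am on the login screen
  When I enter valid credentials
    and I click on Enter
  # The text between tildes declares the State
  Then I have ~user is logged in~
```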
There are three types of State:
Precondition: when declared in a Given sentence;
State Call: when declared in a When sentence;
Postcondition: when declared in a Then sentence.
Both Preconditions and State Calls are considered required States. That is, they denote a dependency on a certain State of the system, which must be produced by executing some other Variant: a Precondition must be satisfied before the Variant starts, and a State Call must be satisfied during the Variant's execution.
A Postcondition is a produced State, that is, a State of the system produced by a successful execution of a Variant. Therefore, if something goes wrong during a Variant's execution, it will not produce the declared State.
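As a sketch, assuming a file login.feature declares the Variant above (which produces ~user is logged in~ as a Postcondition), another Feature could require that State as a Precondition. File name, Feature name, and step texts below are illustrative:

```concordia
# Sketch - login.feature is assumed to produce ~user is logged in~
import "login.feature"

Feature: Add Product to Cart

Scenario: Successful addition

Variant: Add a product successfully
  Given that I have ~user is logged in~
  When I choose a product
    and I click on Add to Cart
  Then I see "Product added"
    and I have ~product in cart~
```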
When the current Variant requires a State, Concordia Compiler will look for imported Features' Variants able to produce it. To generate complete Test Cases for the current Variant, it will:
1. Select the Variants to combine;
2. Generate successful test scenarios for the selected Variants;
3. Select the successful test scenarios to combine;
4. Generate (successful and unsuccessful) test scenarios for the current Variant;
5. Combine the selected successful test scenarios with the test scenarios of the current Variant;
6. Transform all the test scenarios into test cases (i.e., valued test scenarios).
Steps 1 and 3 can adopt different strategies. Concordia Compiler lets you:
Parameterize how the Variants will be selected, using --comb-variant; and
Parameterize how the successful test scenarios will be combined, using --comb-state.
Available strategies for --comb-variant:
random: Selects a random Variant that produces the required State. That's the default behavior;
first: Selects the first Variant that produces the required State;
fmi: Selects the first most important Variant (since two Variants can have the same importance value) that produces the required State;
all: Selects all the Variants that produce the required State.
Example:
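One way to invoke it, using the fmi strategy listed above, would be:

```bash
# Select the first most important Variant for each required State
concordia --comb-variant fmi
```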
Available strategies for --comb-state:
sre: Single random of each - that is, randomly selects a single, successful test scenario of each selected Variant. That's the default behavior;
sow: Shuffled one-wise - that is, shuffles the successful test scenarios, then uses one-wise combination;
ow: One-wise selection;
all: Selects all the successful test scenarios to combine.
Example:
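One way to invoke it, using the ow strategy listed above, would be:

```bash
# Combine successful test scenarios with one-wise selection
concordia --comb-state ow
```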
Strategies that use random selection can take different paths every time they are used. Furthermore, they considerably reduce the number of generated paths - i.e., they avoid "path explosion" - and thus the number of produced test cases.
Full-selection strategies can be used to increase path coverage. However, they also increase the time needed to check all the paths, which may be undesirable for frequent tests.
By default, Concordia Compiler uses random selection strategies.
Concordia Compiler can infer input test data from Variants, UI Elements, Constants, Tables, and Databases. The more constraints you declare, the more test cases it generates.
The techniques adopted to generate input test data are well-known, effective black-box testing techniques for discovering relevant defects in applications.
We call Data Test Cases those test cases used to generate input test data. They are classified into the following groups: RANGE, LENGTH, FORMAT, SET, REQUIRED, and COMPUTED. The group COMPUTED is not available on purpose, since a user-defined algorithm to produce test data can itself have bugs. Thus, one should provide the expected input and output values in order to check whether the application is able to correctly compute the output value based on the received input value.
Every group has a set of related data test cases, applied according to the declared constraints and selected algorithms (see the table at the end of this section).
By default, the maximum length for randomly-generated string values is 500. This value is used to reduce the time to run test scripts, since long strings take time to be entered.
You can set the maximum length using the CLI parameter --random-max-string-size. Example:
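For instance, limiting randomly-generated strings to 100 characters (the value is just illustrative):

```bash
# Limit randomly-generated strings to 100 characters
concordia --random-max-string-size 100
```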
You can also set it in the configuration file (.concordiarc) by adding the property randomMaxStringSize. Example:
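A minimal sketch of .concordiarc containing only this property (the value is illustrative; other properties are omitted):

```json
{
  "randomMaxStringSize": 100
}
```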
Data Test Cases (DTC) are selected for every declared UI Element and its properties. The more properties you declare for a UI Element, the more data you provide for Concordia Compiler to generate DTC.
Example of some evaluations:
When no properties are declared, FILLED and NOT_FILLED are both applied and considered as valid values;
When the property required is declared, NOT_FILLED (empty) is considered an invalid value;
When the property value is declared:
if it comes from a set of values (including a query result), all the DTC of the group SET are applied;
otherwise, ...
When the property minimum value is declared, ...
There is more logic involved in generating these values. ...
Let's describe a user interface element named Salary, defining no property other than its data type:
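A declaration along these lines would do (a sketch of Concordia syntax):

```concordia
UI Element: Salary
  - data type is double
```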
We defined the property data type as double, since the default data type is string.
Since few restrictions were made, Salary will be tested with the test cases of the group REQUIRED:
FILLED: a pseudo-random double value is generated;
NOT_FILLED: an empty value will be used.
Now let's add a minimum value restriction.
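A sketch of the new declaration (the minimum value 1000.00 matches the discussion below; the Otherwise message is just illustrative):

```concordia
UI Element: Salary
  - data type is double
  - minimum value is 1000.00
    Otherwise I must see "Invalid salary"
```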
Some tests of the group RANGE are now applicable:
1. LOWEST_VALUE: the lowest possible double is used
2. RANDOM_BELOW_MIN_VALUE: a random double below the minimum value is generated
3. JUST_BELOW_MIN_VALUE: a double just below the minimum value is used (e.g., 999.99)
4. MIN_VALUE: the minimum value is used
5. JUST_ABOVE_MIN_VALUE: a double just above the minimum value is used (e.g., 1000.01)
6. ZERO_VALUE: zero (0) is used
Since 1000.00 is the minimum value, the data produced by tests 1, 2, 3, and 6 of the group RANGE are considered invalid, while the data produced by tests 4 and 5 are not. For the tests considered invalid, the behavior defined in Otherwise is expected to happen. In other words, this behavior serves as a test oracle and must occur only when the produced value is invalid.
Unlike this example, when the expected system behavior for invalid values is not specified and the test data is considered invalid, Concordia expects the test to fail. In this case, it generates the Test Case with the tag @fail.
Now let's add a maximum value restriction:
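A sketch of the extended declaration (the maximum value and the Otherwise messages are just illustrative):

```concordia
UI Element: Salary
  - data type is double
  - minimum value is 1000.00
    Otherwise I must see "Invalid salary"
  - maximum value is 30000.00
    Otherwise I must see "Invalid salary"
```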
All the tests of the group RANGE are now applicable. That is, the following tests will be included:
1. MEDIAN_VALUE: the median between the minimum and the maximum values
2. RANDOM_BETWEEN_MIN_MAX_VALUES: a pseudo-random double value between the minimum and the maximum values
3. JUST_BELOW_MAX_VALUE: the value just below the maximum value
4. MAX_VALUE: the maximum value
5. JUST_ABOVE_MAX_VALUE: the value just above the maximum value
6. RANDOM_ABOVE_MAX_VALUE: a pseudo-random double above the maximum value
7. GREATEST_VALUE: the greatest possible double
The tests from 5 to 7 will produce values considered invalid.
Let's define a user interface element named Profession and a table named Professions from which its values come:
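A sketch of both declarations (the professions listed and the exact query wording are illustrative):

```concordia
UI Element: Profession
  - value comes from "SELECT name FROM [Professions]"
  - required is true

Table: Professions
  | name       |
  | Accountant |
  | Dentist    |
  | Mechanic   |
```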
The applicable tests are:
FILLED
NOT_FILLED
FIRST_ELEMENT
RANDOM_ELEMENT
LAST_ELEMENT
NOT_IN_SET
The first two tests are in the group REQUIRED. Since we declared Profession as having a required value, the test FILLED is considered valid but NOT_FILLED is not. Therefore, it is important to remember to declare required inputs accordingly.
The last four tests are in the group SET. Only the last one, NOT_IN_SET, will produce a value considered invalid.
Now let's adjust the two past examples to make the Salary rules dynamic, changing according to the selected Profession.
First, we add two columns to the Professions table:
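A sketch of the extended table (the salary ranges are illustrative; the column names follow the discussion below):

```concordia
Table: Professions
  | name       | min_salary | max_salary |
  | Accountant | 3000.00    | 30000.00   |
  | Dentist    | 3000.00    | 40000.00   |
  | Mechanic   | 1000.00    | 20000.00   |
```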
Then, we change the rules to retrieve the values from the table:
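A sketch of the new rules, in which the queries reference the UI Element {Profession} (exact query wording and Otherwise messages are illustrative):

```concordia
UI Element: Salary
  - data type is double
  - minimum value comes from the query "SELECT min_salary FROM [Professions] WHERE name = {Profession}"
    Otherwise I must see "Invalid salary"
  - maximum value comes from the query "SELECT max_salary FROM [Professions] WHERE name = {Profession}"
    Otherwise I must see "Invalid salary"
```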
The reference to the UI Element {Profession} inside the query makes the rules of Salary depend on Profession. Every time a Profession is selected, the minimum value and the maximum value of Salary change according to the columns min_salary and max_salary of the table Professions.
The following table summarizes the available Data Test Cases per group:

| Group | Data Test Case | Description |
|-------|----------------|-------------|
| RANGE | LOWEST_VALUE | The lowest value for the data type, e.g., lowest integer |
| RANGE | RANDOM_BELOW_MIN_VALUE | A random value below the minimum value |
| RANGE | JUST_BELOW_MIN_VALUE | The value just below the minimum value, considering the data type and decimal places if applicable |
| RANGE | MIN_VALUE | Exactly the minimum value |
| RANGE | JUST_ABOVE_MIN_VALUE | The value just above the minimum value, considering the data type and decimal places if applicable |
| RANGE | ZERO_VALUE | Zero (0) |
| RANGE | MEDIAN_VALUE | The median between the minimum value and the maximum value |
| RANGE | RANDOM_BETWEEN_MIN_MAX_VALUES | A random value between the minimum value and the maximum value |
| RANGE | JUST_BELOW_MAX_VALUE | The value just below the maximum value, considering the data type and decimal places if applicable |
| RANGE | MAX_VALUE | Exactly the maximum value |
| RANGE | JUST_ABOVE_MAX_VALUE | The value just above the maximum value, considering the data type and decimal places if applicable |
| RANGE | RANDOM_ABOVE_MAX_VALUE | A random value above the maximum value |
| RANGE | GREATEST_VALUE | The greatest value for the data type, e.g., greatest integer |
| LENGTH | LOWEST_LENGTH | An empty string |
| LENGTH | RANDOM_BELOW_MIN_LENGTH | A string with random characters and random length, less than the minimum length |
| LENGTH | JUST_BELOW_MIN_LENGTH | A string with random characters and length exactly below the minimum length |
| LENGTH | MIN_LENGTH | A string with random characters and length exactly equal to the minimum length |
| LENGTH | JUST_ABOVE_MIN_LENGTH | A string with random characters and length exactly above the minimum length |
| LENGTH | MEDIAN_LENGTH | A string with random characters and length equal to the median between the minimum length and the maximum length |
| LENGTH | RANDOM_BETWEEN_MIN_MAX_LENGTHS | A string with random characters and random length, between the minimum length and the maximum length |
| LENGTH | JUST_BELOW_MAX_LENGTH | A string with random characters and length exactly below the maximum length |
| LENGTH | MAX_LENGTH | A string with random characters and length exactly equal to the maximum length |
| LENGTH | JUST_ABOVE_MAX_LENGTH | A string with random characters and length exactly above the maximum length |
| LENGTH | RANDOM_ABOVE_MAX_LENGTH | A string with random characters and random length, greater than the maximum length |
| LENGTH | GREATEST_LENGTH | The greatest length supported for a string (see Notes) |
| FORMAT | VALID_FORMAT | A value that matches the defined regular expression |
| FORMAT | INVALID_FORMAT | A value that does not match the defined regular expression |
| SET | FIRST_ELEMENT | The first element in the defined set or query result |
| SET | RANDOM_ELEMENT | A random element in the defined set or query result |
| SET | LAST_ELEMENT | The last element in the defined set or query result |
| SET | NOT_IN_SET | A value that does not belong to the defined set or query result |
| REQUIRED | FILLED | A random value |
| REQUIRED | NOT_FILLED | Empty value |
| COMPUTED | RIGHT_COMPUTATION | A value generated by the defined algorithm |
| COMPUTED | WRONG_COMPUTATION | A value different from that generated by the defined algorithm |