Cycle 2.5 introduces enhanced Scenario Outline functionality, which provides several options for managing test parameters across multiple permutations of a Scenario Outline.
An overview of enhanced Scenario Outlines can be found here.
Datasets are one of several available sources for managing Scenario Outline test parameters. The dataset is created and maintained within the Cycle Data Store. Instructions on configuring your project to use the Data Store can be found in the user manual here.
The Scenario Outline utilizes the Data Store through the use of the "Datastore Examples" keyword. More information regarding Scenario Outline Datastore Examples can be found here.
Two bundle features have been created to help manage the creation and maintenance of Data Store test parameter data through the use of CSV files. Cycle can use the CSV files to add, remove, and update test parameters within the Data Store. This article describes the usage of these features.
Using these bundle features may be a good option for maintaining the Data Store for your project if you cannot use other database management tools or methods, such as those described in the article here.
Selecting the Correct Feature for your Data Store
The first step in using the Bundle Features to maintain your Data Store is to select the correct Test Parameter Feature for your Data Store type. Cycle is able to use Data Stores with either MOCA or JDBC connection types. Each connection type is slightly different, so it is important to select the Feature supporting the connection type that matches the Data Store configured for your project.
The Data Store test parameter features can be found in the Features/Utilities/Datastore_Utilities directory within your Cycle Project folder.
If your Data Store is a MOCA connection type, you will use MOCA_Datastore_Test_Parameter_Management.feature.
If your Data Store is a JDBC connection type, you will use JDBC_Datastore_Test_Parameter_Management.feature.
Before using either feature, you will need to update your project Connections and Credentials with a Data Store connection, and then update the feature with the Connections and Credentials specific to your Cycle project.
Updating the Cycle Project and Feature
Data Store Connection and Credentials
If you want to use the Data Store as a Scenario Outline data source, you need to update the Data Store connection in your Cycle project. Information on setting up a Cycle Data Store in your project can be found here.
The Test Parameter Management Bundle features also require a separate saved connection and set of credentials in order to add, remove, and update Data Store records directly within Cycle. Information on setting up Connections and Credentials in Cycle can be found here.
Updating the Feature
Once you have completed the Data Store connection setup and added connection and credentials for the Data Store, you will need to update the Test Parameter Management feature that corresponds to your Data Store connection type. This will require editing the feature within Cycle to use the connection and credentials you created for the Data Store.
Updating the MOCA_Datastore_Test_Parameter_Management.feature
Update the MOCA connection step by replacing "MOCA_Datastore" with the name of the saved Data Store Connection and Credentials.
Then I connect to MOCA "MOCA_Datastore" logged in as "MOCA_Datastore"
Update all Scenario Outline Connection and Credentials keywords to use the name of the saved Data Store Connection and Credentials in your Project. An example Scenario Outline is shown below. Note that there are seven instances of these MOCA Examples in the feature, and each Scenario Outline must be updated for the feature to function properly.
Scenario Outline: Populate TEST_PLAN CSV File
MOCA Examples: Features\Utilities\Datastore_Utilities\get_UC_TEST_PLAN_data_for_csv.msql connected to MOCA_Datastore logged in as MOCA_Datastore
Updating the JDBC_Datastore_Test_Parameter_Management.feature
Update the database connection step by replacing "Datastore_Test_Parameters" with the name of the saved Data Store Connection and Credentials.
Then I connect to database "Datastore_Test_Parameters" logged in as "Datastore_Test_Parameters"
Update all Scenario Outline Connection and Credentials keywords to use the name of the saved Data Store Connection and Credentials in your Project. An example Scenario Outline is shown below. Note that there are seven instances of these SQL Examples in the feature, and each Scenario Outline must be updated for the feature to function properly.
Scenario Outline: Populate TEST_PLAN CSV File
SQL Examples: Features\Utilities\Datastore_Utilities\get_TEST_PLAN_data_for_csv.sql connected to Datastore_Test_Parameters logged in as Datastore_Test_Parameters
Using the Feature
Regardless of Data Store connection type, both the MOCA and JDBC test parameter management features function identically. There are two different modes of execution. The first mode extracts the Data Store test parameters into CSV files, and the second mode uses the CSV files to add, remove, and update Data Store test parameter data.
Creating Test Parameter CSVs
Running the feature displays a prompt asking the user to either create CSVs or upload CSVs. Type "C" in the text box and click "OK" to create CSVs. Click "Cancel" to stop feature execution.
After selecting "C", another prompt appears warning the user that any existing CSV files will be overwritten. The CSV files are written to the Features/Utilities/Datastore_Utilities folder.
If there are existing CSV files that you do not want overwritten, you should create backups of those CSV files in another folder. Click "OK" to proceed with creating the CSV files. Click "Cancel" to stop feature execution and prevent the CSV files from being created.
The feature uses Scenario Outlines with .msql and .sql files (for MOCA and JDBC connections, respectively) in the Features/Utilities/Datastore_Utilities folder to generate the CSV files. The files below must be present in the folder in order for the feature to create the CSV files.
MOCA .msql files used to create the CSVs:
JDBC .sql files used to create the CSVs:
The CSV files are populated with data extracted from each Data Store test parameter table. One CSV file per table is generated in the Features/Utilities/Datastore_Utilities folder.
You will see the list of files below in the folder after creating the CSVs:
The first field in each CSV file is an "Action" field that the feature uses during CSV upload to determine which action to complete against the Data Store database (Add, Remove, or Update). The remaining fields in the CSV map directly to the columns of the matching Data Store table.
The output below is an example of a TEST_DATA_RECORD extract from a JDBC Data Store. Note that the "Action" field is not initially populated with any value. This is important to remember for uploading CSVs. Any record with a NULL Action field will be ignored during CSV upload.
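As a minimal sketch of such an extract, a freshly created TEST_DATA_RECORD.csv might look like the following. The column names after "Action" are hypothetical placeholders for illustration only; the actual columns are defined by your Data Store schema. Note the leading empty (NULL) Action field in each row:

```
Action,TEST_DATA_RECORD_ID,TEST_DATA_SET_ID,RECORD_VALUE
,RECORD_ID_001,SET_ID_001,Value 1
,RECORD_ID_002,SET_ID_001,Value 2
,RECORD_ID_003,SET_ID_001,Value 3
,RECORD_ID_004,SET_ID_001,Value 4
```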
Uploading Test Parameter CSVs
Foreign Key Constraints
The Data Store tables enforce referential integrity between the various tables. For instance, you cannot have a record in the TEST_DATA_RECORD table with a foreign key TEST_DATA_SET_ID that does not exist in the TEST_DATA_SET table.
Foreign key constraints need to be taken into consideration when adding, removing, and updating test parameter data in the Data Store.
If you are uploading a CSV to add a new row into TEST_DATA_RECORD, you will need to make sure you are either referencing an existing TEST_DATA_SET.TEST_DATA_SET_ID, or you are also adding a new row into TEST_DATA_SET that defines your new TEST_DATA_SET_ID.
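To sketch the second case, adding a child record together with its new parent row means both CSVs carry an "A" action in the same upload. The column names here are hypothetical placeholders; your Data Store schema defines the actual fields.

TEST_DATA_SET.csv (adds the new parent data set):

```
Action,TEST_DATA_SET_ID,TEST_DATA_SET_NAME
A,SET_ID_002,New Data Set
```

TEST_DATA_RECORD.csv (adds the child record referencing the new set):

```
Action,TEST_DATA_RECORD_ID,TEST_DATA_SET_ID,RECORD_VALUE
A,RECORD_ID_005,SET_ID_002,Value 5
```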
The features are designed to add, remove, and update records in the correct order to minimize the possibility of foreign key constraints causing issues during upload. However, the feature cannot control the data that you are entering into the CSV files.
Errors during CSV data entry/modification can cause issues during CSV file uploads. It is important to review the results in your Data Store after each upload to ensure the data was properly added, removed, and/or updated.
Modify the CSV Files for Upload
Before uploading the test parameter data to the Data Store, you will need to update the CSV files. There are three available Actions that can be executed against each row in the CSV file: Add, Remove, and Update. Any row with a NULL Action value will be ignored during upload.
Use the letter "A" in the Action column to add a new record into the table.
The below example TEST_DATA_RECORD.csv would Add a new TEST_DATA_RECORD with ID = RECORD_ID_005 into the Data Store table during upload.
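An illustrative sketch of such a row is shown below (column names other than "Action" are hypothetical; your schema defines the actual fields):

```
Action,TEST_DATA_RECORD_ID,TEST_DATA_SET_ID,RECORD_VALUE
A,RECORD_ID_005,SET_ID_001,Value 5
```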
Use the letter "R" in the Action column to remove a record from the table.
The below example TEST_DATA_RECORD.csv would Remove the TEST_DATA_RECORD with ID = RECORD_ID_004 from the Data Store table during upload.
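An illustrative sketch of such a row is shown below (column names other than "Action" are hypothetical):

```
Action,TEST_DATA_RECORD_ID,TEST_DATA_SET_ID,RECORD_VALUE
R,RECORD_ID_004,SET_ID_001,Value 4
```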
Use the letter "U" in the Action column to update a record in the table.
The below example TEST_DATA_RECORD.csv would Update the existing TEST_DATA_RECORD with ID = RECORD_ID_003 in the Data Store table during upload.
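An illustrative sketch of such a row is shown below (column names other than "Action" are hypothetical). The non-key fields carry the new values to be written to the existing record:

```
Action,TEST_DATA_RECORD_ID,TEST_DATA_SET_ID,RECORD_VALUE
U,RECORD_ID_003,SET_ID_001,Updated Value 3
```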
A single CSV file can include a combination of Add, Remove, and Update records. The three actions from the examples above have been combined into a single TEST_DATA_RECORD.csv below.
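A combined file of that kind might look like the following sketch (column names other than "Action" are hypothetical). The first row has a NULL Action and would therefore be ignored during upload:

```
Action,TEST_DATA_RECORD_ID,TEST_DATA_SET_ID,RECORD_VALUE
,RECORD_ID_001,SET_ID_001,Value 1
U,RECORD_ID_003,SET_ID_001,Updated Value 3
R,RECORD_ID_004,SET_ID_001,Value 4
A,RECORD_ID_005,SET_ID_001,Value 5
```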
Update the Data Store with the CSV Data
Running the feature displays a prompt asking the user to either create CSVs or upload CSVs. Type "U" in the text box and click "OK" to upload CSVs. Click "Cancel" to stop feature execution.
Selecting "U" to upload the CSVs to the Data Store will execute Scenario Outlines that perform the Add, Remove, and Update actions against each Data Store table in a specific order. This order is optimized to reduce the risk of foreign key violations. However, you may still encounter foreign key constraint violations if there were any data entry errors while creating the CSV files, or if you have not taken the order of execution into consideration when modifying the CSV files.
The chart below represents the progression of adds, removes, and updates as the Scenario Outlines execute them against each table.
It is highly recommended to review the results of the feature after running the Upload CSV mode to ensure all of the appropriate actions were successful. You can easily modify the CSVs and upload again to correct any issues you may encounter.