How to: Test services (Testing Center)
In the Testing Center, a model can be applied against a dataset to determine the calculated outputs for many records at the same time. This is beneficial for use cases such as regression testing, model simulation, and portfolio projection. The Testing Center takes a "testbed first" approach to empower users to better create, manage, and run testbeds.
To access this feature, after choosing a specific service from the Folder overview, select the Testing Center icon in the service sidebar.
Terminology
The following terminology is used in the Testing Center:
A testbed contains test cases.
Each test case contains the inputs the Spark service will receive during testing.
A testbed can be "versioned up" by adding test cases to an existing testbed.
A test result contains the testbed and the calculated results for each test case.
A testbed can be run against a Spark service version. The Source of this test result is Spark.
If a test result is uploaded to Spark, the Source of this test result is User Upload.
Create a new testbed

Enter the Testing Center screen for the target service.
Create a new testbed by clicking New Testbed.
On the resulting modal, there is the option to upload a testbed or generate test cases.
Uploading a testbed is more useful when actual data needs to be used in the Testing Center.
The Test Case Generator automatically generates relevant test cases within the selected bounds for each parameter.
Generation can be random, which is primarily useful for regression testing, or systematic, which exhaustively generates all combinations for comprehensive testing.
Import a custom testbed
This option allows users to generate a blank Excel template with the input headers of the service. The data entered here will form the test cases that will be part of the uploaded testbed.
The testbed template is laid out to make it easy to delete input fields that are not required. Please review the README tab.

Click Download an Excel template to get started.
Another modal follows with the ability to select particular subservices (How to: Work with subservices) to test.

Click Download. The resulting file will have the inputs/outputs from the selected subservices.

Open the template in Excel.
Add test cases to the Excel file, using the column headers for guidance:
Each row represents one test case.
If certain inputs are not required, the columns related to those inputs can be deleted.
For any blank fields, Spark will use the default value to process the calculation.
Description is a free-text field and can be used to identify a particular test case.
Remember to save the Excel file before uploading!
Note that it is also possible to use the Excel output from the API Call History as a testbed!
Follow the steps from Upload testbed.


After importing the testbed, a summary screen will detail the testbed that was imported. On this screen, there is the option to provide a name and description.
If the testbed name and description are defined in the testbed template, they will be automatically populated on this screen!
If no name has been defined, a unique system-generated name will be assigned.
Upon selecting Upload, the testbed will be added to the top of the testbed dashboard.
Import using CSV

Testbeds can also be constructed without using Excel. Comma Separated Values (CSV) files may be preferable in certain use cases since data from other systems can be used directly in the Testing Center.
The CSV file must contain headers in the first row. These should correspond to the names of inputs in the Spark service.
Description is a free-text field that can be added to identify a particular test case.
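As an illustration, such a CSV testbed could be assembled programmatically with Python's standard csv module. The input names below (Age, Sex, Term_Period) are hypothetical stand-ins; replace them with the actual input headers of your Spark service.

```python
import csv

# Hypothetical input names -- replace with the actual input headers
# of your Spark service. "Description" is the optional free-text field.
headers = ["Description", "Age", "Sex", "Term_Period"]

test_cases = [
    {"Description": "Base case", "Age": 30, "Sex": "M", "Term_Period": 10},
    {"Description": "Older applicant", "Age": 65, "Sex": "F", "Term_Period": 20},
]

with open("testbed.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=headers)
    writer.writeheader()          # the first row must contain the input headers
    writer.writerows(test_cases)  # each subsequent row is one test case
```

The resulting file can then be uploaded directly to the Testing Center.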
Handle null values
To set an input as null:
For Excel testbed uploads, an empty cell will be considered as null.
For CSV testbed uploads, an entry of null (i.e. ,null,) will be considered as null.
To set a table to be empty, we recommend using the Expanded Tables functionality and populating empty rows.
Similar to the Execute API, the treatment of null is different for single value and table inputs.
For single value inputs, a null is equivalent to clearing the cell in Excel and is numerically equal to 0.
If a table is set to a singular null, then the default value will be used for that input.
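The null rules above can be summarized in a short sketch. This models the documented behavior for illustration only; it is not the actual Spark implementation.

```python
def interpret_cell(raw, source):
    """Assumed mapping of a testbed cell to null, per the rules above.

    source: "excel" or "csv"
    """
    if source == "excel" and raw == "":
        return None              # empty Excel cell -> null
    if source == "csv" and raw == "null":
        return None              # a literal ,null, entry in CSV -> null
    return raw

# For a single-value input, null behaves like a cleared cell
# (numerically equivalent to 0):
assert interpret_cell("", "excel") is None
assert interpret_cell("null", "csv") is None
assert interpret_cell("42", "csv") == "42"
```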
Generate random test cases
The Test Case Generator can generate a testbed of randomized test cases based on a subset of the inputs defined on a Spark service. This works best for services where:
The service consists of single-cell Xinputs.
Each input has a clearly defined, bounded Data Validation (see How to: Set data validations).

Click Randomized Test Case Generation and select the subservices (How to: Work with subservices) to test.
Define the initial parameters, including the number of test cases you'd like to generate.
Spark automatically identifies the range of values based on the Data Validation defined in Excel for the Xinput.
For categorical inputs where the input range is finite, a subset of the field values can be included in the test case generation.
For numerical inputs, there are options to define the minimum, maximum, and step size.
Fields that are dependent on other fields can be automatically generated with the correct dependency. For example, this enables the generation of city, state, and country combinations. Fields that are generated through dependent validation will be shown with the message Field values are generated based upon the value(s) of: <dependent fields>.
Dynamic validations will only be picked up if the Data Validation has been defined using absolute references in Excel, e.g. $A$1.
If an Xinput is larger than 1 cell, then the default values will be used for every test case.
Click Generate test cases.
When the process is complete, a modal will appear to show a summary of the resulting testbed.
Generate test cases systematically
Systematic test case generation is used when you want more control over how test cases are generated. This includes features to systematically cover the domain of inputs and/or to include random input values as well.


Click Systematic test case generation.
On the left-hand side, set the name of the testbed and the version of the service it should be run against. By default, the Testbed name is the name of the service followed by a timestamp. A Testbed description is optional.
On the right-hand side is a table of all the inputs defined in the service. This will show the Input name, the Input type, the Generation type, and the No. of variations for the input based on the Input type and parameters specified.
The total number of test cases generated will be the product of the number of variations for each input. For instance, if you have 10 variations for Age, 2 for Sex, and 5 for Term period, then the number of test cases will be 10 x 2 x 5 = 100.
Click on an input to open the Edit pane.
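The "product of variations" arithmetic can be checked in a couple of lines of Python (the input names here are taken from the example above):

```python
import math

# Number of variations configured per input (from the example above).
variations = {"Age": 10, "Sex": 2, "Term period": 5}

# Total test cases = product of the per-input variation counts.
total_test_cases = math.prod(variations.values())
print(total_test_cases)  # 10 x 2 x 5 = 100
```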

For single-cell inputs, 4 input types are available: List, Random list, Number range, and Random number range. Initially, all inputs are set to the input type detected from the service's default values. For each input choose the appropriate Generation type and parameters.
The available generation types are described below.
List: Generate test cases for all the listed options.

If there is a List type data validation for this input in Excel, those values will be shown as the starting list. Otherwise, the default value will be the only initial item in the list.
To remove a variation, click the x button.
To add new variations, click into the input box and begin typing. Hit Enter to create an option.
The number of variations is the same as the number of options.
Random list: Systematically generate test cases from a randomly selected number of items in the listed options.

If there is a List type data validation for this input in Excel, those values will be shown as the starting list. Otherwise, the default value will be the only initial item in the list.
The test cases will systematically step through a randomly predetermined list of the items until the desired number of variations has been reached.
If there are more options than the desired number of variations, not every item will be used.
If there are fewer options than the desired number of variations, some items will be repeated.
To remove a variation, click the x button.
To add new variations, click into the input box and begin typing. Hit Enter to create an option.
You can now select the number of variations.
Number range: Generate test cases to cover all the defined steps within a range.

The Number range input type works similarly to "for loops" in many programming languages. Conceptually, think of skip counting in grade school: start at the value for min and skip count by the step size until arriving at the max (or until the next number would be greater than the max).
Example: min: 0, max: 20, step size: 5.
The sequence would be 0, 5, 10, 15, 20, producing 5 variations.
If the max was 23, the sequence would still stop at 20, because the next number, 25, exceeds the max.
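The skip-counting behavior can be sketched as a small loop. This illustrates the sequence described above; it is not Spark's implementation.

```python
def number_range(minimum, maximum, step):
    """Generate the variation sequence for a Number range input:
    start at minimum and step until the next value would exceed maximum."""
    values = []
    current = minimum
    while current <= maximum:
        values.append(current)
        current += step
    return values

print(number_range(0, 20, 5))  # [0, 5, 10, 15, 20] -> 5 variations
print(number_range(0, 23, 5))  # still stops at 20, since 25 exceeds 23
```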
Random number range: Generate test cases selected randomly from all the defined steps within a range.

This functions the same as Number range but instead selects the desired number of items from the total number range list and iterates through it to create its variations.
As with Random list, the test cases will systematically step through the same list of randomly selected items until the desired number of variations has been reached. If there are more options than the desired number of variations, not every item will be used. If there are fewer options than the desired number of variations, some items will be repeated.
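A sketch of this cycling behavior (an assumed model of the described selection, for illustration only):

```python
import itertools
import random

def random_list_variations(options, n_variations, seed=None):
    """Assumed model of Random list / Random number range: pick a random
    order of the options, then cycle through it until n_variations
    values have been produced."""
    rng = random.Random(seed)
    shuffled = rng.sample(options, len(options))
    return list(itertools.islice(itertools.cycle(shuffled), n_variations))

# Fewer variations than options: not every item is used.
print(random_list_variations(["A", "B", "C", "D"], 2, seed=1))
# More variations than options: some items repeat.
print(random_list_variations(["A", "B"], 5, seed=1))
```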
Static table: For multi-cell range inputs, this is the only input type available. It works similarly to the List input type for single-cell inputs, except that each value is the JSON-formatted string representing the table.
This table will be repeated for every record. In the future, we may look to accommodate the systematic creation of table inputs.
Click Generate test cases.
When the process is complete, a modal will show a summary of the resulting testbed.
NOTE: Aside from the number of test cases, there are several other differences between systematic and randomized test case generation:
Data validation in the Excel file is ignored, except for populating the initial set of options for the List and Range validation types. This is to support the creation of test cases that are intended to fail validation.
Complex or dynamic dependencies, such as varying the Age range parameters based on the Sex input field, are only partially supported and will be randomly selected.
Manage testbeds

The Testing Center screen will list all of the testbeds that were created against the particular service. This screen includes the functionality to delete, download, favorite, run, and search testbeds.
Run testbed
Once a testbed has successfully been created it can be run against a particular service version.
Navigate to a testbed and select the ‘run test’ icon (see image, above) on the right-hand side.

A modal will appear to allow the selection of the service version and service type.

By default, the latest service version will be selected.
The default service type used for new Spark services is Neuron.
A unique, system-generated name for the test results will be assigned automatically. This name can be modified.
An optional description can be added to the test results.
Select Run testbed to initiate the run.
The modal can be closed if other tasks need to be performed in Spark.

Upon completion of the test run, the results summary will appear. The results summary is also displayed in Testbed details. Test results can be downloaded in Excel or CSV format. CSV export is much faster for large test results.
If you encounter an error relating to timeout and batch size, we suggest the following actions:
In the Service Documentation > Service details section, change Batch Size Override to a lower number. This reduces the number of testbed records that are submitted at a time, which reduces the likelihood of a timeout as fewer records will be processed together. The default batch size value is 100.
If you frequently encounter this across many Spark services, the batch size value can be adjusted for all services by contacting Support.
Review testbed details

The Testbed details page is a drill-down section of a testbed. It is a consolidated space that is divided into two sections:
The heading section, which contains additional details about the testbed, and
The test results section, which shows all of the test results associated with the testbed. This can include test results that are from a Spark testbed run or test results uploaded into Spark.
Add test cases
Testbeds can be "versioned up" by appending additional test cases to an existing testbed.


From the Testbed details page heading section, select Add test cases.
Similar to Import a custom testbed:
Download the testbed template.
Add any new test cases to this file.
Follow the remaining steps on the Add test cases screen.

If the test cases were successfully loaded, the summary screen will show the new version of the testbed and the number of test cases appended.
If the imported testbed contains test cases that already exist in the testbed, they will be removed and noted in the duplicated test case count.
If the uploaded additional test cases contain duplicates, those will not be removed.
If it is a requirement for the new test cases to contain duplicated test cases, modify the service to include a dummy input value that can serve as a unique index.
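The appending rules above can be sketched as follows (assumed behavior, with test cases modeled as dictionaries of input values; this is an illustration, not the actual Spark logic):

```python
def append_test_cases(existing, new_cases):
    """Assumed appending rules: new cases that duplicate an existing test
    case are dropped and counted, while duplicates *within* the new cases
    are kept as-is."""
    seen = {tuple(sorted(tc.items())) for tc in existing}
    appended, duplicate_count = [], 0
    for tc in new_cases:
        if tuple(sorted(tc.items())) in seen:
            duplicate_count += 1   # already in the testbed -> removed
        else:
            appended.append(tc)    # kept, even if repeated within new_cases
    return existing + appended, duplicate_count

existing = [{"Age": 30, "Sex": "M"}]
new = [{"Age": 30, "Sex": "M"},   # duplicate of an existing case -> dropped
       {"Age": 40, "Sex": "F"},
       {"Age": 40, "Sex": "F"}]   # duplicate within the upload -> kept
merged, dupes = append_test_cases(existing, new)
print(len(merged), dupes)  # 3 1
```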
Upload test results
The Testing Center also supports the ability to upload external test results for comparison. This is useful for comparing results from Spark with those from an externally generated process or system to identify intended and unintended mismatches.

From the Testbed details testbed results section, select Upload test results.
Similar to Import a custom testbed:
Download the test results template.
Unlike before, only add the testbed results into the output columns.
Do not alter the existing test cases, as this will lead to an error.
The outputs do not all have to be populated for every row and/or column.
Follow the instructions on the Upload test results screen.
The test results will appear in the testbed results with the source as User Upload.
Explore a test case in API tester
It is also possible to view the inputs for a particular test case in the API Tester. This can be used to verify the calculations for the particular use case and to enable downloading the test case into an Excel file.

Navigate to Testing Center and open an existing testbed.
Under the Testbed results section, click or tap the "three-dot" menu on the right-hand side and select Download as > Excel file. This file contains all of the test cases and their associated UUIDs – you’ll need this for the next step.
Click the three-dot menu again and select Explore test case in API tester.
When prompted, enter the UUID for the test case you are interested in examining and click Submit.

The API Tester should appear with the data from the test case. The API response can be downloaded for further investigation.
Compare test results
The Compare results feature allows any two sets of test results to be compared, whether they are created by Spark or User Uploaded. Comparing results will quickly identify mismatches between two test results.

Select two testbed results.
Once selected, the Compare results button will change color to indicate it is available.
Click Compare results. A modal will appear when the comparison is initiated. Once complete, users will be presented with a summary screen of the comparison, including the number of Test results compared, % Match, Mismatch errors and Run time.
Download the results to see the detailed breakdown of the comparison in Excel. The resulting file includes color coding to identify differences in test results, by test case.
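A per-test-case match percentage like the one shown in the comparison summary can be sketched as follows (illustrative only; Premium is a hypothetical output name):

```python
def compare_results(results_a, results_b, outputs):
    """Assumed comparison model: a test case matches only if every listed
    output value is equal across the two result sets."""
    matches = 0
    for a, b in zip(results_a, results_b):
        if all(a.get(o) == b.get(o) for o in outputs):
            matches += 1
    pct_match = 100.0 * matches / len(results_a)
    return matches, pct_match

spark_run = [{"id": 1, "Premium": 120.0}, {"id": 2, "Premium": 95.5}]
user_upload = [{"id": 1, "Premium": 120.0}, {"id": 2, "Premium": 96.0}]
print(compare_results(spark_run, user_upload, ["Premium"]))  # (1, 50.0)
```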
