Export
The export API packages all of the files necessary to create a service into a ZIP file. The ZIP file can then be used by the import API to deploy the Spark services to another tenant and/or environment. The API is designed not to proceed if a successful export cannot be completed.
Prerequisites to export Spark services
- Only 1 job can be executed at a time per tenant.
- All referenced objects must exist and be accessible with the provided authorization.
- Any referenced folder must contain a service.
- All referenced services must have a compiled Neuron WebAssembly module.
- The requested package size must not exceed the configured limit (200 MB by default).
Authorization
These APIs support authorization via:
- Bearer token: Bearer {token}, accessible from Authorization - Bearer token or programmatically via Client Credentials. The request headers should include a key for Authorization with the value Bearer {token}.
- API key: created from Authorization - API keys. The API key groups must contain user groups that are also assigned the Permissions - Features permissions Spark.Exports.json or Spark.AllEncompassingProxy.json. The request headers should include the keys x-synthetic-key and x-tenant-name with the values of the API key and tenant name, respectively.
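As a sketch, the two header styles described above can be assembled like this (the token, key, and tenant values are placeholders, not real credentials):

```python
def bearer_headers(token: str) -> dict:
    """Headers for Bearer-token authorization."""
    return {"Authorization": f"Bearer {token}"}


def api_key_headers(api_key: str, tenant: str) -> dict:
    """Headers for API-key authorization."""
    return {"x-synthetic-key": api_key, "x-tenant-name": tenant}


print(bearer_headers("eyJhbGci..."))
print(api_key_headers("my-api-key", "mytenant"))
```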
POST export job
Returns: Response from GET export status.
Path parameters
tenant *
Tenant is part of your Log in to Spark URL and also available in the User menu.
Request body
Content-Type: application/json
Note that the union of the input service identifiers will be included as part of the exported package.
inputs.folders
Array of folder names.
inputs.services
Array of service identifiers in C.SPARK_XCALL() format: {folder}/{service}, {folder}/{service}[{version}], or a service_id.
If a version is specified, the version convention applies: the latest version with a starts-with match is taken. e.g. 3.2 will grab the latest semantic version that starts with 3.2.
Example 1: myfolder/myservice
Example 2: myfolder/myservice[1.4.3]
Example 3: 5edf95a1-96f3-4a53-b9a4-9ff382bd9936
Example 4: ["myfolder/myservice1", "yourfolder/service2"]
inputs.version_ids
Array of version_ids.
file_filter
Filter the requested files.
The default value is migrate, which exports all of the files relevant to the service. onpremises includes only the files that are needed for the Hybrid Runner.
source_system
Tag for the API call.
Example: mycicd
correlation_id
Tag for the API call.
Example: 456
version_filter
Filter service versions.
The default value is all, which exports all of the service versions for the identified folders and services. latest provides only the latest service version for each service. If latest is used and there are references to a specific service version, the export will generate an error.
file_name
Name of the downloaded file. If not provided, Spark will use an appropriate name.
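The starts-with version matching described for inputs.services can be sketched as follows; the helper below is illustrative, not part of the API:

```python
def pick_version(requested: str, available: list[str]) -> str:
    """Return the latest semantic version whose string starts with the
    requested prefix, e.g. '3.2' selects '3.2.10' over '3.2.1'."""
    matches = [v for v in available if v.startswith(requested)]
    if not matches:
        raise ValueError(f"no version starts with {requested!r}")
    # Compare numerically on the dotted components, latest wins.
    return max(matches, key=lambda v: [int(p) for p in v.split(".")])


print(pick_version("3.2", ["3.1.0", "3.2.1", "3.2.10", "4.0.0"]))  # → 3.2.10
```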
Sample request
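The original sample payload is not reproduced in this copy; the sketch below builds an illustrative request body from the parameters above (the folder, service, and file names are hypothetical placeholders):

```python
import json

# Illustrative request body for POST export job.
body = {
    "inputs": {
        "folders": ["myfolder"],
        "services": ["myfolder/myservice[1.4.3]"],
    },
    "file_filter": "migrate",   # default; "onpremises" for Hybrid Runner files
    "version_filter": "all",    # "latest" would error with the [1.4.3] pin above
    "source_system": "mycicd",
    "correlation_id": "456",
    "file_name": "my-export.zip",
}
print(json.dumps(body, indent=2))
```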
Response
HTTP 200 OK Content-Type: application/json
Returns the response from GET export status.
GET export status
Returns: status response and a link to download the ZIP file.
Path parameters
tenant *
Tenant is part of your Log in to Spark URL and also available in the User menu.
jobId *
id from POST export job.
Response
Content-Type: application/json
object
export
id
id for the job.
response_timestamp
Response timestamp.
status
- created: Job registered on Spark.
- in_progress: Job in progress.
- closed: Job completed successfully.
- closed_by_timeout: Job was unable to complete within 15 minutes.
- failed: Job was not able to complete.
status_url
Link to the GET status API.
process_time
Time taken for the job.
outputs.files
Array of the output files:
- file: Link to the package. The filename of the exported object is package.zip; this can be overridden with the file_name parameter.
- file_hash: SHA-256 hash of the packaged file.
outputs.services
Array of the exported services sorted by folder, service:
- service_uri_source: in C.SPARK_XCALL() format of {folder}/{service}
- folder_source
- service_source
- service_id_source
outputs.service_versions
Array of the exported service versions sorted by folder_source, service_source, version_source:
- service_uri_source: in C.SPARK_XCALL() format of {folder}/{service}[{version}]
- folder_source
- service_source
- version_source (semantic version)
- service_id_source
- version_id_source
source_system
Value from POST export job.
correlation_id
Value from POST export job.
Sample response
HTTP 200 OK Content-Type: application/json
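In practice a client polls GET export status until the job reaches a terminal state. A minimal sketch, assuming status_url is the value returned by POST export job and headers are built as in the Authorization section:

```python
import json
import time
import urllib.request

# Terminal job states, per the status values listed above.
TERMINAL_STATES = {"closed", "closed_by_timeout", "failed"}


def is_terminal(status: str) -> bool:
    return status in TERMINAL_STATES


def wait_for_export(status_url: str, headers: dict, timeout_s: int = 900) -> dict:
    """Poll GET export status until the job reaches a terminal state."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        req = urllib.request.Request(status_url, headers=headers)
        with urllib.request.urlopen(req) as resp:
            data = json.load(resp)
        if is_terminal(data["export"]["status"]):
            return data
        time.sleep(5)  # back off between polls
    raise TimeoutError("export job did not reach a terminal state in time")
```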
PATCH export
Change the export job status. This can be used to cancel an errant job.
Returns: Response from GET export status.
Path parameters
environment *
Environment is part of your Log in to Spark URL.
tenant *
Tenant is part of your Log in to Spark URL and also available in the User menu.
jobId *
id from POST export job.
Request
Content-Type: application/json
Response
HTTP 200 OK Content-Type: application/json
Returns the response from GET export status.
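A cancellation call can be sketched as follows. The base URL, the URL pattern built from the path parameters, and the request-body shape ({"export": {"status": "cancelled"}}) are assumptions for illustration, not confirmed by this document:

```python
import json
import urllib.request


def build_cancel_request(base_url: str, tenant: str, job_id: str,
                         headers: dict) -> urllib.request.Request:
    # Path pattern assumed from the path parameters listed above.
    url = f"{base_url}/{tenant}/export/{job_id}"
    body = json.dumps({"export": {"status": "cancelled"}}).encode("utf-8")
    return urllib.request.Request(
        url, data=body, method="PATCH",
        headers={**headers, "Content-Type": "application/json"},
    )


req = build_cancel_request("https://spark.example.com/api/v4", "mytenant",
                           "job-123", {"x-synthetic-key": "key",
                                       "x-tenant-name": "mytenant"})
print(req.get_method(), req.full_url)
```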
GET export status across the tenant
Get information about export jobs that are in progress (in_progress_exports) or completed within the past 1 hour (recent_exports).
Returns: Export jobs.
- If you are a supervisor:pf user, you can see all exports run by users within your tenant.
- Otherwise, you will only see information about the exports that you initiated yourself.
Path parameters
environment *
Environment is part of your Log in to Spark URL.
tenant *
Tenant is part of your Log in to Spark URL and also available in the User menu.
Sample response
HTTP 200 OK Content-Type: application/json