Snowflake
This is a comprehensive guide for Snowflake developers who want to integrate Coherent Spark services into their Snowflake environment. It demonstrates how to bring your existing Excel logic into Snowflake using Coherent Spark.
You’ll learn how Coherent Spark (or simply Spark) lets you run complex Excel models as API services inside Snowflake, so you can tap directly into your existing data. This guide assumes some familiarity with Snowflake and walks you through prerequisites, setup, and common practices. It covers logging into both Coherent Spark and Snowflake, using the Snowflake CLI, and configuring network rules for external access. You’ll also find examples of calling Spark services with the requests library and the Coherent Spark Python SDK. The article wraps up with advanced use cases and links to additional resources.
Why would you want to do this? In an ideal world, your modeling logic should live where your data lives. But while Snowflake is powerful for data storage and processing, building complex functions directly in Snowflake can be challenging. On the other hand, most business users prefer Excel for modeling because it’s intuitive and flexible. Coherent Spark bridges that gap by transforming your Excel models into scalable API services that integrate seamlessly with Snowflake.
After reading this guide, you’ll be able to run your externalized Excel functions directly from within Snowflake.
Prerequisites
Before you begin, make sure you have:
access to a Snowflake account with ACCOUNTADMIN or similar privileges
access to a Coherent Spark account with READ and EXECUTE permissions
basic familiarity with Python and SQL
Snowflake CLI installed (optional but recommended)
Most actions can be performed in the Snowflake UI, but some are better suited for the Snowflake CLI.
Getting started
Let’s start by gathering the necessary credentials and information to work with both Snowflake and Coherent Spark.
We recommend working within a testing or development environment for both Snowflake (e.g., Snowflake warehouse, which should allow you to safely run queries and procedures without affecting your production environment) and Coherent Spark during this setup.
Log in to Coherent Spark
Coherent Spark is a platform that allows you to run Excel models in a cloud environment, exposing them as API services.
Log in to your Coherent Spark account to get the following:
Base URL (e.g., https://excel.uat.us.coherent.global/my-tenant)
User credentials
Service URI parameters for the Execute API (v4):
folder – the folder name containing the Spark model
service – the name of the Spark model
version – (optional) the semantic version (also known as the revision number)
Roughly speaking, a folder acts as a container that holds one or more services. Think of folders as a way to organize and group related services together. Each service represents an Excel model that has been converted into a Spark service. Services can exist in multiple versions, representing different iterations or updates of that service over time. When interacting with a Spark service, you are always working with a specific version – the latest one by default. You can explicitly specify an older version if you need to work with a previous iteration of the service.
Find more information about Spark services in the How to: Create a Spark service.
Log in to Snowflake
Using the right credentials, log into your Snowflake account, which will take you to the Snowflake Dashboard by default. If you are an ACCOUNTADMIN (or similar role), you will be able to see all the projects, including worksheets and notebooks, in the left sidebar.
Confirm in your Catalog that you are able to see the database, schema, and tables you intend to use. Visit the Snowflake documentation page to learn more.
Two different ways of accessing Coherent Spark within Snowflake
Within Snowflake, you will use a notebook to access Coherent Spark. There are two ways to achieve this: calling the Spark REST API directly with the requests library, or using the Coherent Spark Python SDK uploaded as a stage package.
To start, click the top-right button to create a new notebook. You can also use the sample Jupyter notebook provided at the bottom of this page for a quick start. Once you are in the notebook environment, ensure the requests library is installed (ideally, its latest version).

Once this is installed, your environment will restart, and you can proceed to the next phase.
In order for a Snowflake notebook to gain access to Coherent’s Python SDK, we first need to use the Snowflake CLI tool to grab the SDK zip file and upload it to the Snowflake database of your choice. Once the zip file is uploaded to the selected database, you can add the stage package to the current notebook environment.
Snowflake CLI
The Snowflake CLI is a command-line tool that allows you to interact with Snowflake from the terminal. It’s tailored for developers and can be used to perform most actions that are usually done in the Snowflake UI.
Follow the instructions on the Snowflake CLI GitHub repository to install it. Once installed, you can run snow to see the help message as shown below.

You will need to create a new connection by running snow connection add. It will prompt you for a series of inputs, and will ultimately create a config.toml file with the appropriate credentials.
In the example below, we are using a connection named coherent_dev to connect to the coherent_db database in the coherent_wh warehouse.
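For reference, a hypothetical config.toml for this setup might look like the following sketch; the account, user, and password values are placeholders you must replace with your own.

```toml
[connections.coherent_dev]
account = "<your-account-identifier>"
user = "<your-username>"
password = "<your-password>"
warehouse = "coherent_wh"
database = "coherent_db"
```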
The snowflake folder is typically located in ~/Library/Application Support/snowflake on macOS. However, you may choose any other location and set it up accordingly (e.g., snow --config-file="/path/to/config.toml").
You may confirm your connection by running snow connection test or snow connection list. See the example below:
Remember to set the default connection with snow connection set-default coherent_dev.
Let’s now use the CLI to push a version of the Coherent Spark Python SDK to Snowflake as a stage package. Make sure to create a stage first. In our example, we are using the stage coherent_packages.
Create a new Snowpark package from the Python SDK releases.
Upload the zip file to the stage (i.e., coherent_packages).
Add an environment.yml for additional settings along with the dependencies in your notebook. Please remember to add the dependency to Anaconda Packages in order for the cspark SDK to operate correctly.
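For reference, a minimal environment.yml might look like the sketch below; the exact dependency list depends on your notebook, and the snowflake channel points at the Snowflake Anaconda Channel.

```yaml
name: app_environment
channels:
  - snowflake
dependencies:
  - requests
```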
Snowflake + Coherent Spark
Snowflake needs to be able to communicate with Coherent Spark. This is done by setting up network rules and external access integrations. By default, outbound traffic is blocked.
Let’s start a notebook so we can leverage Snowpark to execute Python and SQL code alongside your data. Snowflake Labs has a great collection of notebooks that you can use to get started. The Access External Endpoints one is a good starting point.
Setting up external access integration
To enable communication between Snowflake and Coherent Spark, you need to set up network rules. Here’s an example of a network rule cs_network_rule for Coherent Spark:
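The exact statement varies with your tenant's region; a sketch of such a network rule, assuming the uat.us region, might look like:

```sql
-- Egress rule allowing Snowflake to reach Coherent Spark's
-- identity provider (Keycloak) and Excel engine in the uat.us region.
CREATE OR REPLACE NETWORK RULE cs_network_rule
  MODE = EGRESS
  TYPE = HOST_PORT
  VALUE_LIST = ('keycloak.uat.us.coherent.global', 'excel.uat.us.coherent.global');
```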
In the example above, the network rule allows the following services to be accessed:
https://keycloak.{region}.coherent.global for Keycloak (Coherent’s Identity Provider)
https://excel.{region}.coherent.global for Excel calculations (Coherent Spark’s main Excel engine)
The {region} is the environment where Coherent Spark services are hosted. In the SQL example above, the region is uat.us, meaning the UAT environment in the United States.
After creating the network rule, set up the external access integration:
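A minimal sketch of the integration is shown below; the name cs_access_integration is an assumption, so use whatever naming convention fits your environment.

```sql
-- Bind the network rule to an external access integration
-- that notebooks and procedures can be granted access to.
CREATE OR REPLACE EXTERNAL ACCESS INTEGRATION cs_access_integration
  ALLOWED_NETWORK_RULES = (cs_network_rule)
  ENABLED = TRUE;
```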
Finally, locate the notebook Settings and enable the external access integration for the rule you just created, as illustrated below.

Calling a Spark service
Depending on the method you have chosen above, there are two different code snippets:
Within a Python-enabled cell, let’s call a Spark service using the Python requests library.
requests is an elegant and simple HTTP library for Python, which is already installed in the Snowpark environment (see Snowflake Anaconda Channel for more details).
Here’s a simple example of calling a Coherent Spark service using the requests library. In the example, we are calling the volume-cylinder service with some inputs.
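A hedged sketch of such a call follows. The folder name my-folder, the bearer-token placeholder, and the radius/height inputs are assumptions, and the v4 endpoint path and payload shape should be verified against the Execute API (v4) reference for your tenant.

```python
import requests

# Placeholders -- replace with your own tenant and credentials.
BASE_URL = 'https://excel.uat.us.coherent.global/my-tenant'
TOKEN = '<your-bearer-token>'

def build_execute_request(folder: str, service: str, inputs: dict):
    """Assemble the URL, headers, and JSON body for an Execute API (v4) call.

    The payload shape here is an assumption; check your tenant's API reference.
    """
    url = f'{BASE_URL}/api/v4/execute'
    headers = {
        'Authorization': f'Bearer {TOKEN}',
        'Content-Type': 'application/json',
    }
    body = {
        'request_data': {'inputs': inputs},
        'request_meta': {'service_uri': f'{folder}/{service}'},
    }
    return url, headers, body

def execute(folder: str, service: str, inputs: dict) -> dict:
    """POST the request and return the parsed JSON response."""
    url, headers, body = build_execute_request(folder, service, inputs)
    resp = requests.post(url, headers=headers, json=body, timeout=30)
    resp.raise_for_status()
    return resp.json()

# Requires network access to your Spark tenant, e.g.:
# outputs = execute('my-folder', 'volume-cylinder', {'radius': 3, 'height': 5})
```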

cspark is Coherent Spark's Python SDK, which provides convenient access to its APIs. Since the SDK is not included in the Snowflake Anaconda Channel, we added it manually earlier as a stage package so that it can be imported like a pre-installed package.

Here’s the same example of calling the volume-cylinder service using the SDK this time.
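A hypothetical sketch is shown below. The folder name my-folder, the credential placeholders, and the radius/height input names are assumptions; the client-and-execute pattern follows the cspark SDK's documented usage, but verify it against the SDK reference for your installed version.

```python
def execute_volume_cylinder(base_url: str, api_key: str, radius: float, height: float):
    """Call the volume-cylinder Spark service via the cspark SDK (a sketch)."""
    # Imported inside the function so this cell can still be defined before
    # the cspark stage package is attached to the notebook environment.
    import cspark.sdk as Spark

    spark = Spark.Client(base_url=base_url, api_key=api_key)
    with spark.services as services:
        # 'my-folder' is a placeholder -- use the folder holding your service.
        response = services.execute('my-folder/volume-cylinder',
                                    inputs={'radius': radius, 'height': height})
    return response.data

# Requires the cspark stage package and network access, e.g.:
# outputs = execute_volume_cylinder(BASE_URL, API_KEY, radius=3, height=5)
```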

To make things even easier, we’ve also included a Snowpark-adapted Jupyter Notebook (see attached file below) that developers can use to accelerate the integration process and quickly experiment with running Excel-based models in Snowflake.
Always store your Coherent Spark credentials in a secure secrets manager, such as Snowflake Secrets Manager. The sample code above is for illustration purposes only. Avoid hardcoding any credentials in your code or configuration files; instead, retrieve them securely at runtime to protect sensitive data and maintain compliance with security best practices.
What’s next?
Now that you have a basic understanding of how to call Coherent Spark services from Snowflake, you can start introducing more Snowflake-native features (e.g., UDFs and stored procedures) into the picture. Remember, for your bulk processing needs, Spark supports both synchronous and asynchronous batch processing.
For more details on the advanced use cases, visit the SDK documentation. Some of the topics covered in the guide are:
How-to: Execute records sequentially (1 record at a time)
How-to: Execute batch of records synchronously (up to 100 records at a time)
Asynchronous batch processing (high-throughput processing)