Workflows lets you compose geospatial analyses using a rich set of objects and methods. It is fundamentally different from Raster and Metadata: no work occurs locally on your machine, and only the result is returned. Operations are serialized and executed on scalable, secure infrastructure, so you can focus on expressing your model and defining your outputs.

It’s available at:

>>> import descarteslabs.workflows as wf

Request Access

Workflows is currently in limited release. To request access, please contact


The following example loads a single Image with red, green, and blue bands and computes it over a given GeoContext.

>>> import descarteslabs.workflows as wf
>>> img = wf.Image.from_id("landsat:LC08:PRE:TOAR:meta_LC80270312016188_v1")
>>> rgb = img.pick_bands(["red", "green", "blue"])
>>> geocontext = wf.GeoContext(
...     bounds=(258292.5, 4503907.5, 493732.5, 4743307.5),
...     resolution=15.0,
...     crs="EPSG:32615",
... )
>>> result = rgb.compute(geocontext)
>>> type(result.ndarray)
<class 'numpy.ndarray'>
>>> result.ndarray.shape
(3, 15960, 15696)
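The returned shape is worth a quick sanity check: each spatial dimension is the bounds extent divided by the pixel resolution (assumed here to be 15 m, which is consistent with the reported shape), and the leading 3 is the number of bands:

```python
# Sanity-check the array shape against the GeoContext bounds,
# assuming a 15 m pixel resolution: (bands, rows, cols).
minx, miny, maxx, maxy = 258292.5, 4503907.5, 493732.5, 4743307.5
resolution = 15.0

cols = int((maxx - minx) / resolution)  # x extent → columns
rows = int((maxy - miny) / resolution)  # y extent → rows
print((3, rows, cols))  # → (3, 15960, 15696)
```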

Proxy Objects

All objects in the Workflows client are lazy proxy objects: every time you call a function or access an attribute on a Workflows object, it just returns another proxy object representing what the result would be, and keeps track of that operation for later. When you call .compute() on a proxy object, a graph of all those operations is sent to the backend, which executes it and sends the result back to you.
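The lazy-proxy pattern itself is easy to sketch in plain Python. This toy class (not the real Workflows implementation) records operations instead of performing them, and only evaluates the recorded expression tree when `compute()` is called:

```python
class LazyInt:
    """Toy lazy proxy: records an expression tree, evaluates on compute()."""

    def __init__(self, op, *args):
        self.op = op      # "literal" or "add"
        self.args = args  # a literal value, or child LazyInt/int operands

    def __add__(self, other):
        # Don't add now -- return a new proxy remembering the operation.
        return LazyInt("add", self, other)

    __radd__ = __add__

    def compute(self):
        # Walk the recorded tree and finally perform the work.
        if self.op == "literal":
            return self.args[0]
        left, right = (
            a.compute() if isinstance(a, LazyInt) else a for a in self.args
        )
        return left + right

expr = LazyInt("literal", 1) + 1   # no addition happens yet
print(type(expr).__name__)         # → LazyInt
print(expr.compute())              # → 2
```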

[Figure: Workflows architecture overview]

For example, in normal Python, adding two numbers happens right away:

>>> 1 + 1
2

But if we use a Workflows Int, we just get a proxy object:

>>> from descarteslabs import workflows as wf
>>> wf.Int(1) + 1
<descarteslabs.workflows.types.primitives.number.Int at 0x7f656b0cb0d0>

This proxy object is actually just storing a dependency graph representing the operation 1 + 1, in a syntax called “graft”:

>>> result = wf.Int(1) + 1
>>> result.graft
{'1': 1, '2': 1, '3': ['add', '1', '2'], 'returns': '3'}

You don’t ever need to worry about the graft or understand the syntax, but knowing that it’s happening might make the system a little less mysterious.
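To see why such a graph is straightforward to execute, here is a minimal interpreter for this particular graft (a sketch that handles only literals and "add", not the full graft specification):

```python
def evaluate_graft(graft):
    """Minimal graft interpreter: resolves keys to values, memoizing results."""
    cache = {}

    def resolve(key):
        if key in cache:
            return cache[key]
        node = graft[key]
        if isinstance(node, list):  # an operation: [op, *argument_keys]
            op, *arg_keys = node
            args = [resolve(k) for k in arg_keys]
            if op == "add":
                value = args[0] + args[1]
            else:
                raise NotImplementedError(op)
        else:                       # a literal value
            value = node
        cache[key] = value
        return value

    return resolve(graft["returns"])

graft = {"1": 1, "2": 1, "3": ["add", "1", "2"], "returns": "3"}
print(evaluate_graft(graft))  # → 2
```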

So when you call .compute() on a proxy object, that dependency graph gets sent to the backend and executed there, and the result is sent back to you:

>>> result.compute()
[###############] | Steps: 1/1 | Stage: STAGE_DONE | Status: STATUS_SUCCESS
2


Sharing Workflows

A Workflow is a persisted proxy object, plus metadata like a name, a description, and eventually access controls.

When a workflow is saved on the backend, you or others can link to it in other workflows—much like importing a package in other programming languages:

>>> # continuing from previous example, where `result` is an `Int` proxy object
>>> workflow = result.publish(name="one-plus-one", description="The result of 1 plus 1")
>>> workflow.type
<class 'descarteslabs.workflows.types.primitives.number.Int'>

wf.retrieve loads a saved Workflow by ID:

>>> same_workflow = wf.retrieve('f8be90ba80990f081cc8460d984ffcbcb1709222c99db052')
>>> same_workflow.description
"The result of 1 plus 1"
>>> same_workflow.type
<class 'descarteslabs.workflows.types.primitives.number.Int'>

Workflow.object contains the actual proxy object, which you can use in your code:

>>> same_workflow.object
<descarteslabs.workflows.types.primitives.number.Int at 0x1152ed110>
>>> (same_workflow.object + 2).compute()
[###############] | Steps: 0/0 | Stage: STAGE_DONE | Status: STATUS_SUCCESS
4

wf.use is a shorthand for wf.retrieve(...).object, and you can use it like an import statement:

>>> two = wf.use('f8be90ba80990f081cc8460d984ffcbcb1709222c99db052')
# `two` is equivalent to `same_workflow.object` from above


Jobs

All computations execute asynchronously on the backend; by default .compute() blocks until the result is ready, but passing block=False returns immediately. The execution is represented by a Job object, which can stream updates from the running computation, including its current status, stage, and progress.

>>> job = result.compute(block=False)

The Job also allows blocking until the result is available.

>>> wf.Job.get("626e3036857d492fbc11e7fa09b25f16").result()
[###############] | Steps: 1/1 | Stage: STAGE_DONE | Status: STATUS_SUCCESS
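The block/no-block split mirrors the standard futures pattern. A rough analogy using Python's concurrent.futures (illustrative only, not the Workflows Job API):

```python
from concurrent.futures import ThreadPoolExecutor

_executor = ThreadPoolExecutor(max_workers=2)

def compute(fn, block=True):
    """Toy analogue of .compute(): run fn "on the backend", optionally wait."""
    job = _executor.submit(fn)
    if block:
        return job.result()  # wait for the result, like .compute()
    return job               # a Future, like .compute(block=False)

print(compute(lambda: 1 + 1))              # blocking → 2
job = compute(lambda: 2 + 3, block=False)  # returns immediately
print(job.result())                        # block later, like Job.result() → 5
```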

Jobs execute in a queue and may be subject to a delay as the queue size grows. If you desire greater resources or a prioritized queue, please contact

Interactive Maps in Jupyter

Some Workflows objects can be viewed on an interactive map in a Jupyter Notebook. Rather than calling .compute() with an explicit GeoContext, the workflow is computed on-the-fly for the area you’re viewing in the map.


Using the map requires ipyleaflet, which is included when running pip install --upgrade descarteslabs[complete], or by manually running pip install ipyleaflet.

Currently, ipyleaflet requires some additional installation steps to make widgets show up in Jupyter. For JupyterLab:

jupyter labextension install jupyter-leaflet @jupyter-widgets/jupyterlab-manager

If you’re using plain Jupyter notebook and maps don’t show up, try:

jupyter nbextension enable --py --sys-prefix ipyleaflet

Usage centers on wf.map, a single MapApp object to which all layers are added by default.

When using JupyterLab (recommended), one of your first cells should be:

>>> wf.map

which will display the map below it. Right-click on the output and select ‘New View for Output’, which lets you rearrange the map as its own tab.

Then, calling Image.visualize() will add a new layer to the map. Note that Image.visualize() just adds the layer; nothing will show up directly underneath the cell that calls it.

Using the layer controls, you can adjust scaling, set colormaps for single-band images, perform autoscaling to the current viewport with the enhance button, and rearrange layers.

Currently, only Image objects can be displayed. To visualize an ImageCollection, first composite it (mean, min, etc.). To visualize vector data, you can rasterize it into an Image.


Troubleshooting

  • When you display the map, nothing shows up, you see “A Jupyter widget could not be displayed because the widget state could not be found.”, or you just see MapApp(children=(Map(basemap={....

    The ipyleaflet Jupyter plugins aren’t installed correctly. Make sure the ipyleaflet Python package is installed in your environment, and follow the installation steps above.

  • You call Image.visualize(), but nothing shows up.

    Sometimes it just takes a while. But currently, errors are not passed back, so if something is wrong it will fail silently.

    1. Make sure you’re logged in to your Descartes Labs account: visit the sign-in page, log in, then refresh the Jupyter page.

    2. Try clicking the Autoscale button to run a compute job. If there’s an error with your workflow, it will be displayed.

Continue to Workflows API Reference.