Datetimes

Classes:

Datetime(year[, month, day, hour, minute, ...])
    Proxy Datetime object, similar to Python's datetime.
Timedelta([days, seconds, microseconds, ...])
    Proxy Timedelta object, similar to Python's timedelta.
class Datetime(year, month=1, day=1, hour=0, minute=0, second=0, microsecond=0)

    Proxy Datetime object, similar to Python's datetime.

    Note: Datetimes are always in UTC.

    Construct a Datetime from components. All parts are optional besides year.

    Examples

    >>> from descarteslabs.workflows import Datetime
    >>> my_datetime = Datetime(year=2019, month=1, day=1)
    >>> my_datetime
    <descarteslabs.workflows.types.datetimes.datetime_.Datetime object at 0x...>
    >>> my_datetime.compute()
    datetime.datetime(2019, 1, 1, 0, 0, tzinfo=datetime.timezone.utc)
    >>> my_datetime.year.compute()
    2019

    Methods:
    compute([format, destination, file, ...])
        Compute a proxy object and wait for its result.
    from_string(string)
        Construct a Workflows Datetime from an ISO 8601-formatted string.
    from_timestamp(seconds)
        Construct a Workflows Datetime from a number of seconds since the Unix
        epoch (January 1, 1970, 00:00:00 UTC).
    inspect([format, file, cache, _ruster, ...])
        Quickly compute a proxy object using a low-latency, lower-reliability
        backend.
    is_between(start, end[, inclusive])
        Whether the datetime is between these start and end dates.
    publish(version[, title, description, ...])
        Publish a proxy object as a Workflow with the given version.

    Attributes:
    day
        1 <= day <= number of days in the given month and year
    hour
        0 <= hour < 24
    microsecond
        0 <= microsecond < 1000000
    minute
        0 <= minute < 60
    month
        1 <= month <= 12
    second
        0 <= second < 60
    year
        1 <= year <= 9999
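For comparison, the ranges above match Python's own datetime fields. A minimal stdlib sketch (plain datetime, not the Workflows proxy) showing the same always-UTC components:

```python
# Stdlib sketch: the proxy's component attributes mirror Python's
# datetime fields and obey the same ranges, with UTC always attached.
from datetime import datetime, timezone

dt = datetime(2019, 1, 1, tzinfo=timezone.utc)
print(dt.year, dt.month, dt.day)                      # 2019 1 1
print(dt.hour, dt.minute, dt.second, dt.microsecond)  # 0 0 0 0
```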
    compute(format='pyarrow', destination='download', file=None, timeout=None, block=True, progress_bar=None, cache=True, _ruster=None, _trace=False, client=None, num_retries=None, **arguments)

        Compute a proxy object and wait for its result.

        If the caller has too many outstanding compute jobs, this will raise a
        ResourceExhausted exception.

        Parameters:
            - geoctx (GeoContext or None) – The GeoContext parameter under which
              to run the computation. Almost all computations will require a
              GeoContext, but for operations that only involve non-geospatial
              types, this parameter is optional.
            - format (Str or Dict, default "pyarrow") – The serialization format
              for the result. See the formats documentation for more information.
              If "pyarrow" (the default), returns an appropriate Python object;
              otherwise returns raw bytes or None.
            - destination (str or dict, default "download") – The destination for
              the result. See the destinations documentation for more information.
            - file (path or file-like object, optional) – If specified, writes
              results to the path or file instead of returning them.
            - timeout (Int, optional) – The number of seconds to wait for the
              result, if block is True. Raises JobTimeoutError if the timeout
              passes.
            - block (Bool, default True) – If True (default), block until the job
              is completed or timeout has passed. If False, immediately returns a
              Job (which has already had execute called).
            - progress_bar (Bool, default None) – Whether to draw the progress
              bar. If None (default), will display a progress bar in Jupyter
              Notebooks, but not elsewhere. Ignored if block==False.
            - client (workflows.client.Client, optional) – Allows you to use a
              specific client instance with non-default auth and parameters.
            - num_retries (Int, optional) – The number of retries to make in the
              event of a request failure. If you are making numerous long-running
              asynchronous requests, you can use this parameter to indicate that
              you are comfortable waiting and retrying in response to RESOURCE
              EXHAUSTED errors. By default, most failures will trigger a small
              number of retries, but if you have reached your outstanding job
              limit, the client will not retry. This parameter is unnecessary when
              making synchronous compute requests (i.e. block=True, the default).
              See the compute section of the Workflows Guide for more information.
            - **arguments (Any) – Values for all parameters that obj depends on
              (or arguments that obj takes, if it's a Function). Can be given as
              Proxytypes, or as Python objects like numbers, lists, and dicts that
              can be promoted to them. These arguments cannot depend on any
              parameters.

        Returns:
            result – When format="pyarrow" (the default), returns an appropriate
            Python object representing the result, either as a plain Python type
            or an object from descarteslabs.workflows.result_types. For other
            formats, returns raw bytes; consider using file in that case to save
            the results to a file. If the destination doesn't support retrieving
            results (like "email"), returns None.

        Return type: Python object, bytes, or None

        Raises:
            RetryError – Raised if there are too many failed retries. Inspect
            RetryError.exceptions to determine the ultimate cause of the error. If
            you reach your maximum number of outstanding compute jobs, there will
            be one or more ResourceExhausted exceptions.
    classmethod from_string(string)

        Construct a Workflows Datetime from an ISO 8601-formatted string.

        If there's no timezone offset information in the string, it's assumed to
        be UTC. If there is, it's converted to UTC.

        Parameters: string (Str) – An ISO 8601-formatted datetime string, such as
        2018-03-22 or 2020-03-22T16:37:00Z.

        Return type: Datetime

        Example

        >>> from descarteslabs.workflows import Datetime
        >>> my_datetime = Datetime.from_string("2017-12-31")
        >>> my_datetime.compute()
        datetime.datetime(2017, 12, 31, 0, 0, tzinfo=datetime.timezone.utc)
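The offset-conversion rule can be illustrated with the standard library (plain datetime, not the Workflows proxy); the +05:00 input string here is a hypothetical example:

```python
# Stdlib sketch of the same timezone rule: an explicit offset is
# converted to UTC. (A string with no offset would need UTC attached
# manually, since fromisoformat returns a naive datetime for it.)
from datetime import datetime, timezone

parsed = datetime.fromisoformat("2020-03-22T16:37:00+05:00")
as_utc = parsed.astimezone(timezone.utc)
print(as_utc)  # 2020-03-22 11:37:00+00:00
```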
    classmethod from_timestamp(seconds)

        Construct a Workflows Datetime from a number of seconds since the Unix
        epoch (January 1, 1970, 00:00:00 UTC).

        Parameters: seconds (Int or Float)

        Return type: Datetime

        Example

        >>> from descarteslabs.workflows import Datetime
        >>> my_datetime = Datetime.from_timestamp(1000)
        >>> my_datetime.compute()
        datetime.datetime(1970, 1, 1, 0, 16, 40)
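The same arithmetic with the standard library (not the Workflows proxy): 1000 seconds past the epoch is 16 minutes 40 seconds into January 1, 1970 (UTC).

```python
# Stdlib sketch of the epoch arithmetic shown above.
from datetime import datetime, timezone

dt = datetime.fromtimestamp(1000, tz=timezone.utc)
print(dt)  # 1970-01-01 00:16:40+00:00
```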
    inspect(format='pyarrow', file=None, cache=True, _ruster=None, timeout=60, client=None, **arguments)

        Quickly compute a proxy object using a low-latency, lower-reliability
        backend.

        Inspect is meant for getting simple computations out of Workflows,
        primarily for interactive use. It's quicker but less resilient, won't be
        retried if it fails, and has no progress updates.

        If you have a larger computation (longer than ~30sec), or you want to be
        sure the computation will succeed, use compute instead. compute creates a
        Job, which runs asynchronously, will be retried if it fails, and stores
        its results for later retrieval.

        Parameters:
            - geoctx (common.geo.geocontext.GeoContext, GeoContext, or None) – The
              GeoContext parameter under which to run the computation. Almost all
              computations will require a GeoContext, but for operations that only
              involve non-geospatial types, this parameter is optional.
            - format (str or dict, default "pyarrow") – The serialization format
              for the result. See the formats documentation for more information.
              If "pyarrow" (the default), returns an appropriate Python object;
              otherwise returns raw bytes.
            - file (path or file-like object, optional) – If specified, writes
              results to the path or file instead of returning them.
            - cache (bool, default True) – Whether to use the cache for this job.
            - timeout (int, optional, default 60) – The number of seconds to wait
              for the result. Raises JobTimeoutError if the timeout passes.
            - client (workflows.inspect.InspectClient, optional) – Allows you to
              use a specific InspectClient instance with non-default auth and
              parameters.
            - **arguments (Any) – Values for all parameters that obj depends on
              (or arguments that obj takes, if it's a Function). Can be given as
              Proxytypes, or as Python objects like numbers, lists, and dicts that
              can be promoted to them. These arguments cannot depend on any
              parameters.

        Returns:
            result – When format="pyarrow" (the default), returns an appropriate
            Python object representing the result, either as a plain Python type
            or an object from descarteslabs.workflows.result_types. For other
            formats, returns raw bytes; consider using file in that case to save
            the results to a file.

        Return type: Python object or bytes
    is_between(start, end, inclusive=True)

        Whether the datetime is between these start and end dates.

        Example

        >>> import descarteslabs.workflows as wf
        >>> dt = wf.Datetime(2019, 6, 1)
        >>> dt.is_between("2019-01-01", "2020-01-01").compute()
        True
        >>> dt.is_between("2019-06-01", "2020-07-01").compute()
        True
        >>> dt.is_between("2019-06-01", "2020-07-01", inclusive=False).compute()
        False
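The inclusive/exclusive semantics shown in the example can be sketched with plain datetime comparisons. This `is_between` helper is hypothetical, written only to illustrate the behavior, not the Workflows implementation:

```python
# Hypothetical stdlib helper illustrating inclusive vs. exclusive
# endpoint handling in a between-check.
from datetime import datetime

def is_between(dt, start, end, inclusive=True):
    # inclusive=True keeps datetimes equal to either endpoint
    if inclusive:
        return start <= dt <= end
    return start < dt < end

dt = datetime(2019, 6, 1)
print(is_between(dt, datetime(2019, 6, 1), datetime(2020, 7, 1)))                   # True
print(is_between(dt, datetime(2019, 6, 1), datetime(2020, 7, 1), inclusive=False))  # False
```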
    publish(version, title='', description='', labels=None, tags=None, docstring='', version_labels=None, viz_options=None, client=None)

        Publish a proxy object as a Workflow with the given version.

        If the proxy object depends on any parameters (obj.params is not empty),
        it's first internally converted to a Function that takes those parameters
        (using Function.from_object).

        Parameters:
            - id (Str) – ID for the new Workflow object. This should be of the
              form email:workflow_name and should be globally unique. If this ID
              is not of the proper format, you will not be able to save the
              Workflow.
            - version (Str) – The version to be set, tied to the given obj. This
              should adhere to the semantic versioning schema.
            - title (Str, default "") – User-friendly title for the Workflow.
            - description (str, default "") – Long-form description of this
              Workflow. Markdown is supported.
            - labels (Dict, optional) – Key-value pair labels to add to the
              Workflow.
            - tags (list, optional) – A list of strings to add as tags to the
              Workflow.
            - docstring (Str, default "") – The docstring for this version.
            - version_labels (Dict, optional) – Key-value pair labels to add to
              the version.
            - client (workflows.client.Client, optional) – Allows you to use a
              specific client instance with non-default auth and parameters.

        Returns:
            workflow – The saved Workflow object. workflow.id contains the ID of
            the new Workflow.

        Return type: Workflow
class Timedelta(days=0, seconds=0, microseconds=0, milliseconds=0, minutes=0, hours=0, weeks=0)

    Proxy Timedelta object, similar to Python's timedelta.

    Examples

    >>> from descarteslabs.workflows import Timedelta
    >>> my_timedelta = Timedelta(days=10, minutes=100)
    >>> my_timedelta
    <descarteslabs.workflows.types.datetimes.timedelta.Timedelta object at 0x...>
    >>> my_timedelta.compute()
    datetime.timedelta(days=10, seconds=6000)
    >>> my_timedelta.total_seconds().compute()
    870000.0
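Since the proxy mirrors Python's timedelta, the same construction with the standard library shows where those numbers come from: 10 days is 864000 seconds, 100 minutes is 6000 seconds.

```python
# Stdlib sketch: same construction, same normalization and total.
from datetime import timedelta

td = timedelta(days=10, minutes=100)
print(td)                  # 10 days, 1:40:00
print(td.total_seconds())  # 870000.0
```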
    Methods:

    compute([format, destination, file, ...])
        Compute a proxy object and wait for its result.
    inspect([format, file, cache, _ruster, ...])
        Quickly compute a proxy object using a low-latency, lower-reliability
        backend.
    publish(version[, title, description, ...])
        Publish a proxy object as a Workflow with the given version.
    total_seconds()
        The total number of seconds contained in the duration.

    Attributes:

    days
        -999999999 <= days <= 999999999
    microseconds
        0 <= microseconds < 1000000
    seconds
        0 <= seconds < 3600*24 (the number of seconds in one day)
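The attribute ranges above follow Python's timedelta normalization, which can be seen with the standard library (not the Workflows proxy): only days may be negative, while seconds and microseconds stay within their ranges.

```python
# Stdlib sketch: -1 second normalizes to -1 day plus 86399 seconds.
from datetime import timedelta

td = timedelta(seconds=-1)
print(td.days, td.seconds, td.microseconds)  # -1 86399 0
```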
    compute(format='pyarrow', destination='download', file=None, timeout=None, block=True, progress_bar=None, cache=True, _ruster=None, _trace=False, client=None, num_retries=None, **arguments)

        Compute a proxy object and wait for its result.

        If the caller has too many outstanding compute jobs, this will raise a
        ResourceExhausted exception.

        Parameters:
            - geoctx (GeoContext or None) – The GeoContext parameter under which
              to run the computation. Almost all computations will require a
              GeoContext, but for operations that only involve non-geospatial
              types, this parameter is optional.
            - format (Str or Dict, default "pyarrow") – The serialization format
              for the result. See the formats documentation for more information.
              If "pyarrow" (the default), returns an appropriate Python object;
              otherwise returns raw bytes or None.
            - destination (str or dict, default "download") – The destination for
              the result. See the destinations documentation for more information.
            - file (path or file-like object, optional) – If specified, writes
              results to the path or file instead of returning them.
            - timeout (Int, optional) – The number of seconds to wait for the
              result, if block is True. Raises JobTimeoutError if the timeout
              passes.
            - block (Bool, default True) – If True (default), block until the job
              is completed or timeout has passed. If False, immediately returns a
              Job (which has already had execute called).
            - progress_bar (Bool, default None) – Whether to draw the progress
              bar. If None (default), will display a progress bar in Jupyter
              Notebooks, but not elsewhere. Ignored if block==False.
            - client (workflows.client.Client, optional) – Allows you to use a
              specific client instance with non-default auth and parameters.
            - num_retries (Int, optional) – The number of retries to make in the
              event of a request failure. If you are making numerous long-running
              asynchronous requests, you can use this parameter to indicate that
              you are comfortable waiting and retrying in response to RESOURCE
              EXHAUSTED errors. By default, most failures will trigger a small
              number of retries, but if you have reached your outstanding job
              limit, the client will not retry. This parameter is unnecessary when
              making synchronous compute requests (i.e. block=True, the default).
              See the compute section of the Workflows Guide for more information.
            - **arguments (Any) – Values for all parameters that obj depends on
              (or arguments that obj takes, if it's a Function). Can be given as
              Proxytypes, or as Python objects like numbers, lists, and dicts that
              can be promoted to them. These arguments cannot depend on any
              parameters.

        Returns:
            result – When format="pyarrow" (the default), returns an appropriate
            Python object representing the result, either as a plain Python type
            or an object from descarteslabs.workflows.result_types. For other
            formats, returns raw bytes; consider using file in that case to save
            the results to a file. If the destination doesn't support retrieving
            results (like "email"), returns None.

        Return type: Python object, bytes, or None

        Raises:
            RetryError – Raised if there are too many failed retries. Inspect
            RetryError.exceptions to determine the ultimate cause of the error. If
            you reach your maximum number of outstanding compute jobs, there will
            be one or more ResourceExhausted exceptions.
    inspect(format='pyarrow', file=None, cache=True, _ruster=None, timeout=60, client=None, **arguments)

        Quickly compute a proxy object using a low-latency, lower-reliability
        backend.

        Inspect is meant for getting simple computations out of Workflows,
        primarily for interactive use. It's quicker but less resilient, won't be
        retried if it fails, and has no progress updates.

        If you have a larger computation (longer than ~30sec), or you want to be
        sure the computation will succeed, use compute instead. compute creates a
        Job, which runs asynchronously, will be retried if it fails, and stores
        its results for later retrieval.

        Parameters:
            - geoctx (common.geo.geocontext.GeoContext, GeoContext, or None) – The
              GeoContext parameter under which to run the computation. Almost all
              computations will require a GeoContext, but for operations that only
              involve non-geospatial types, this parameter is optional.
            - format (str or dict, default "pyarrow") – The serialization format
              for the result. See the formats documentation for more information.
              If "pyarrow" (the default), returns an appropriate Python object;
              otherwise returns raw bytes.
            - file (path or file-like object, optional) – If specified, writes
              results to the path or file instead of returning them.
            - cache (bool, default True) – Whether to use the cache for this job.
            - timeout (int, optional, default 60) – The number of seconds to wait
              for the result. Raises JobTimeoutError if the timeout passes.
            - client (workflows.inspect.InspectClient, optional) – Allows you to
              use a specific InspectClient instance with non-default auth and
              parameters.
            - **arguments (Any) – Values for all parameters that obj depends on
              (or arguments that obj takes, if it's a Function). Can be given as
              Proxytypes, or as Python objects like numbers, lists, and dicts that
              can be promoted to them. These arguments cannot depend on any
              parameters.

        Returns:
            result – When format="pyarrow" (the default), returns an appropriate
            Python object representing the result, either as a plain Python type
            or an object from descarteslabs.workflows.result_types. For other
            formats, returns raw bytes; consider using file in that case to save
            the results to a file.

        Return type: Python object or bytes
    publish(version, title='', description='', labels=None, tags=None, docstring='', version_labels=None, viz_options=None, client=None)

        Publish a proxy object as a Workflow with the given version.

        If the proxy object depends on any parameters (obj.params is not empty),
        it's first internally converted to a Function that takes those parameters
        (using Function.from_object).

        Parameters:
            - id (Str) – ID for the new Workflow object. This should be of the
              form email:workflow_name and should be globally unique. If this ID
              is not of the proper format, you will not be able to save the
              Workflow.
            - version (Str) – The version to be set, tied to the given obj. This
              should adhere to the semantic versioning schema.
            - title (Str, default "") – User-friendly title for the Workflow.
            - description (str, default "") – Long-form description of this
              Workflow. Markdown is supported.
            - labels (Dict, optional) – Key-value pair labels to add to the
              Workflow.
            - tags (list, optional) – A list of strings to add as tags to the
              Workflow.
            - docstring (Str, default "") – The docstring for this version.
            - version_labels (Dict, optional) – Key-value pair labels to add to
              the version.
            - client (workflows.client.Client, optional) – Allows you to use a
              specific client instance with non-default auth and parameters.

        Returns:
            workflow – The saved Workflow object. workflow.id contains the ID of
            the new Workflow.

        Return type: Workflow