Releases: Netflix/metaflow
2.12.1
Features
Configurable default decorators
This release adds the ability to configure default decorators that are applied to all steps. This is done by setting the decospecs (space-separated) as the value of METAFLOW_DECOSPECS, either as an environment variable or in a config.json.
The following example would add retry and kubernetes decorators with a custom memory value to all steps:
export METAFLOW_DECOSPECS="kubernetes:memory=4096 retry"
Defining a decorator with the --with option will override the configured defaults. The same applies to decorators added explicitly in the flow file.
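For example, with the default above in place, launching a run with an explicit decospec replaces the configured value for that decorator (flow.py is a hypothetical flow file):
python flow.py run --with kubernetes:memory=8192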
Improvements
Correctly clean up Argo Workflow sensors when using @project
This release fixes an issue where argo-workflows delete did not correctly remove the sensors associated with a workflow when the workflow used the @project decorator.
What's Changed
- Add the possibility of defining default decorators for steps by @romain-intel in #1837
- bugfix: properly deletes Argo Events trigger sensors when @project is used by @gabriel-rp in #1871
- S3PutObject was not used properly after #1807 by @romain-intel in #1872
- bump version to 2.12.1 by @saikonen in #1874
New Contributors
- @gabriel-rp made their first contribution in #1871
Full Changelog: 2.12.0...2.12.1
2.12.0
Features
Support running flows in notebooks and through Python scripts
This release introduces a new Runner API that makes it simple to run flows inside notebooks or as part of Python code.
Read the blog post on the feature, or dive straight into the documentation to start using it.
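A minimal sketch of the blocking API (helloflow.py is a hypothetical flow file; see the documentation for the full set of methods and attributes):
from metaflow import Runner

# Execute the flow synchronously; 'running' wraps the resulting Run object
with Runner("helloflow.py").run() as running:
    print(running.run, running.status)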
What's Changed
- Remove dead code by @savingoyal in #1853
- initial runner api by @madhur-ob in #1732
- synchronous run and resume functionality + nbrun for runner API by @madhur-ob in #1845
- Tests for runner by @savingoyal in #1859
- minor nbrun fixes by @madhur-ob in #1860
- fix leaked message by @madhur-ob in #1861
- raise exception instead by @madhur-ob in #1862
- allow output to be hidden in nbrun() by @tuulos in #1864
- show_output as True by default by @madhur-ob in #1865
- add explicit cleanup() methods in Runners by @tuulos in #1863
- Runner docstring fixes by @tuulos in #1866
- release 2.12.0 by @savingoyal in #1867
Full Changelog: 2.11.16...2.12.0
2.11.16
Features
Support GCP Secret Manager
This release adds support for using GCP Secret Manager to supply secret values to a step's environment.
To enable the secrets backend, set METAFLOW_DEFAULT_SECRETS_BACKEND_TYPE to gcp-secret-manager, or specify the type directly in the decorator:
@secrets(sources=[{"type": "gcp-secret-manager", "id": "some-secret-key"}])
METAFLOW_GCP_SECRET_MANAGER_PREFIX can be set so that full secret locations do not have to be written out.
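Within a step, the fetched values are exposed as environment variables. One possible sketch, assuming the secret payload contains a key named MY_API_KEY (SecretFlow and MY_API_KEY are illustrative; see the secrets documentation for exactly how payloads map to environment variables):
import os

from metaflow import FlowSpec, secrets, step

class SecretFlow(FlowSpec):

    @secrets(sources=[{"type": "gcp-secret-manager", "id": "some-secret-key"}])
    @step
    def start(self):
        # the secret's values are injected into the step's environment
        print(os.environ["MY_API_KEY"])
        self.next(self.end)

    @step
    def end(self):
        pass

if __name__ == "__main__":
    SecretFlow()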
Support Azure Key Vault
This release also adds support for Azure Key Vault as a secrets backend. Specify az-key-vault as the secrets backend type to use it.
As with the other secrets backends, a prefix config avoids repeating the common parts of secret keys. Configure it by setting METAFLOW_AZURE_KEY_VAULT_PREFIX.
Note: currently only Secret object types are supported when using Azure Key Vault.
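Usage mirrors the GCP example above; a sketch, assuming a vault secret named some-secret-key:
@secrets(sources=[{"type": "az-key-vault", "id": "some-secret-key"}])
@step
def start(self):
    ...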
@parallel for Kubernetes
This release adds support for @parallel when flows are run --with kubernetes.
Example:
@step
def start(self):
    self.next(self.parallel_step, num_parallel=3)

@kubernetes(cpu=1, memory=512)
@parallel
@step
def parallel_step(self):
    ...
Configurable runtime limits
It is now possible to configure the default timeout for the @timeout decorator. This can be done by setting METAFLOW_DEFAULT_RUNTIME_LIMIT in the environment, or in a config.json.
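For example (a sketch; the value is assumed to be in seconds, matching the @timeout decorator):
export METAFLOW_DEFAULT_RUNTIME_LIMIT=3600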
Improvements
Resumed flows now record task completions correctly
Fixes an issue where tasks cloned from a previous run by resume would not show up as completed in the Metaflow UI due to missing metadata.
Fix accessing task index of a foreach task
There was an issue accessing the index of a foreach task via the client. With this release, the following works as expected:
from metaflow import Task
task = Task("ForeachFlow/123/foreach_step/task-00000000")
task.index
What's Changed
- [runtime limits] make runtime limits configurable by @valayDave in #1834
- [Ready for Review] fix bug where client can not access foreach stack by @darinyu in #1766
- [Ready for Review] add attempt_ok flag so that UI will not show up unknown node by @darinyu in #1830
- pluggable azure credentials provider by @oavdeev in #1756
- Secret Backend Support for Azure Key Vault by @iamsgarg-ob in #1839
- reducing the dep version to 4.7.0 (#47) by @iamsgarg-ob in #1840
- [@parallel on Kubernetes] support for Jobsets by @valayDave in #1804
- Pluggable GCP auth by @oavdeev in #1841
- GCP secret manager support by @oavdeev in #1842
- [jobsets] py3.5 compatibility fixes. by @valayDave in #1844
- Support Python 3.5 for tests by @savingoyal in #1843
- py3.5 compatibility fixes [azure/gcp/jobsets] by @valayDave in #1848
- [version bump] for release by @valayDave in #1847
- [OB-625] adding metaflow/cron annotation to argo workflows by @iamsgarg-ob in #1852
New Contributors
- @iamsgarg-ob made their first contribution in #1839
Full Changelog: 2.11.15...2.11.16
2.11.15
Features
Displaying task attempt logs
Previously, when running a task with the @retry decorator, only the logs of the latest attempt could be viewed. With this release it is possible to target a specific attempt with the --attempt option of the logs command:
python example.py logs 123/retry_step/456 --attempt 2
Scrubbing log contents
This release introduces a new command for scrubbing log contents of tasks in case they contain sensitive information that needs to be redacted.
The simplest use case is scrubbing the latest task logs. By default both stdout and stderr are scrubbed:
python example.py logs scrub 123/example/456
There are also options to target only a specific log stream:
python example.py logs scrub 123/example/456 --stderr
python example.py logs scrub 123/example/456 --stdout
When using the @retry decorator, tasks can have multiple attempts with separate logs that require scrubbing. By default only the latest attempt is scrubbed. There are options to make scrubbing multiple attempts easier:
# scrub specific attempt
python example.py logs scrub 123/retry_step/456 --attempt 1
# scrub all attempts
python example.py logs scrub 123/retry_step/456 --all
# scrub specified attempt and all prior to it (this would scrub attempts 0,1,2,3)
python example.py logs scrub 123/retry_step/456 --all --attempt 3
The command also accepts only specifying a step for scrubbing. This is useful for steps with multiple tasks, like a foreach split.
python example.py logs scrub 123/foreach_step
All of the above options also apply when targeting a step for scrubbing.
Note: scrubbing logs of running tasks is not recommended and is actively protected against. There can be occasions where a task has failed in such a way that it still counts as not completed. For such cases you can supply the --include-not-done option to try to scrub it as well.
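Following the same pattern as the examples above:
python example.py logs scrub 123/example/456 --include-not-done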
What's Changed
- feature: scrub logs by @saikonen in #1802
- Fix issue #1805 by @narayanacharya6 in #1807
- bug fix: unused arg for batch step --help by @mae5357 in #1817
- bump version to 2.11.15 by @saikonen in #1829
New Contributors
- @narayanacharya6 made their first contribution in #1807
- @mae5357 made their first contribution in #1817
Full Changelog: 2.11.14...2.11.15
2.11.14
What's Changed
- Increase Azure Blobstore connection_timeout. by @shrinandj in #1827
- bump version to 2.11.14 by @shrinandj in #1828
Full Changelog: 2.11.13...2.11.14
2.11.13
Features
Configurable default Kubernetes resources
This release introduces configuration options for setting default values for cpu / memory / disk when running on Kubernetes. These can be set either with environment variables
METAFLOW_KUBERNETES_CPU=
METAFLOW_KUBERNETES_MEMORY=
METAFLOW_KUBERNETES_DISK=
or in a Metaflow profile
{
    "KUBERNETES_CPU": "",
    "KUBERNETES_MEMORY": "",
    "KUBERNETES_DISK": ""
}
These values are overridden by specifying a value through the @kubernetes or @resources decorators.
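For example, with the defaults above configured, a step like the following still uses its own values (a sketch; the step name is illustrative):
@kubernetes(cpu=2, memory=8192)
@step
def train(self):
    ...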
Improvements
Support for wider foreach flows with Argo Workflows
This release changes the way task ids are generated on Argo Workflows in order to solve an issue where extremely wide foreach splits could not execute correctly due to hard limits on input parameter size in Argo Workflows.
What's Changed
- fix: Deterministic foreach task id's for Argo Workflows by @saikonen in #1704
- Make Kubernetes default resources configurable by @martinhausio in #1800
- Add METAFLOW_ESCAPE_HATCH_WARNING for s3op by default. by @romain-intel in #1825
- bump version to 2.11.13 by @saikonen in #1826
New Contributors
- @martinhausio made their first contribution in #1800
Full Changelog: 2.11.12...2.11.13
2.11.12
What's Changed
- Fix: JSON Reference Path Error in AWS Step Functions Distributed Map by @nidhinnru in #1822
- fix import of the new escape hatch flag by @wangchy27 in #1823
New Contributors
- @nidhinnru made their first contribution in #1822
Full Changelog: 2.11.11...2.11.12
2.11.11
What's Changed
- Add support for getting EC2 metadata through IMDSv2. by @trhodeos in #1808
- Initialize token for IMDSv1 ec2-metadata by @trhodeos in #1809
- allow @conda to accept nvidia as a compute target by @savingoyal in #1811
- fix: github tests by @saikonen in #1813
- fix: R tests by @saikonen in #1814
- Add a flag to control escape hatch log by @wangchy27 in #1819
- upgrade package versions by @wangchy27 in #1821
Full Changelog: 2.11.10...2.11.11
2.11.10
Improvements
Argo Events trigger improvements for parameters with default values
This release fixes an issue where partial or empty Argo Events payloads would incorrectly overwrite the default values of the parameters of a triggered flow.
For example, a flow with
@trigger(events=["params_event"])
class DefaultParamEventFlow(FlowSpec):

    param_a = Parameter(
        name="param_a",
        default="default value A",
    )

    param_b = Parameter(
        name="param_b",
        default="default value B",
    )
will now correctly have the default values for its parameters when triggered by
from metaflow.integrations import ArgoEvent
ArgoEvent('params_event').publish()
or the default value for param_b and the supplied value for param_a when triggered by
ArgoEvent('params_event').publish({"param_a": "custom-value"})
What's Changed
- [Ready for review] replace pull_request_target by @darinyu in #1790
- [bug fix] Flow decorator click option names fix by @valayDave in #1775
- [@kubernetes port] Allow configurable port number by @valayDave in #1793
- Bump vite from 5.0.12 to 5.0.13 in /metaflow/plugins/cards/ui by @dependabot in #1791
- feature: add metadata for argo workflows template owner by @saikonen in #1798
- fix: support default parameter values with argo events by @saikonen in #1797
- release 2.11.10 by @saikonen in #1799
Full Changelog: 2.11.9...2.11.10
2.11.9
What's Changed
- check for falsy values instead of just None by @madhur-ob in #1786
- Bump version to 2.11.9 by @madhur-ob in #1787
Full Changelog: 2.11.8...2.11.9