metaflow
:rocket: Build and manage real-life ML, AI, and data science projects with ease!
Large ML projects spanning teams reuse pipelines and models (e.g., ensembles, feature engineering). There are two aspects of reuse: 1. Reuse a whole Flow, to be able to compose...
Team - I have created an AWS step function with the following command: `insert_name_here.py --production step-functions create`. Is there a way to trigger this production step function manually without using...
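One way to kick off the deployed state machine by hand is to call AWS Step Functions directly with boto3; a minimal sketch, assuming a placeholder state-machine ARN and an empty parameter payload:

```
import boto3

# Placeholder ARN of the state machine produced by `step-functions create`.
STATE_MACHINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:insert_name_here"

sfn = boto3.client("stepfunctions")

# Start a one-off execution. The real input payload Metaflow expects
# (run id, parameters) is omitted here for brevity.
response = sfn.start_execution(stateMachineArn=STATE_MACHINE_ARN, input="{}")
print(response["executionArn"])
```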
From an architectural standpoint, I have an "orchestration" flow that is calling multiple other flows and then combining their results for external use. The orchestration flow is calling previously deployed...
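A rough sketch of how such an orchestration step can pull artifacts from previously deployed flows with the Metaflow Client API; the flow names and the `model` artifact below are hypothetical:

```
from metaflow import Flow

def combine_child_results():
    # Fetch the latest successful run of each previously deployed child flow
    # (flow and artifact names here are placeholders).
    run_a = Flow("ChildFlowA").latest_successful_run
    run_b = Flow("ChildFlowB").latest_successful_run

    # Combine the artifacts the children published for external use.
    return {"model_a": run_a.data.model, "model_b": run_b.data.model}
```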
> I'd like to customise the metaflow UI so that it looks different in each environment (dev, staging, prod) - is there an easy way to do that? I don't...
Currently, sensor names are created by replacing `.` with `-` in the argo-workflow names.

```
# Register sensor. Unfortunately, Argo Events Sensor names don't allow for
# dots (sensors run...
```
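In essence the renaming amounts to the following; a simplified sketch, not the plugin's actual code:

```
def sensor_name_for(workflow_name: str) -> str:
    # Argo Events Sensor names cannot contain dots, so the Argo Workflow
    # name is sanitized by swapping them for dashes.
    return workflow_name.replace(".", "-")

assert sensor_name_for("project.branch.myflow") == "project-branch-myflow"
```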
I was looking for a way to stop a running step function and ran across this TODO in metaflow/plugins/aws/step_functions/step_functions_client.py:

```
def terminate_execution(self, state_machine_arn, execution_arn):
    # TODO
    pass
```

Any chance...
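For what it's worth, a hedged sketch of how that TODO could be filled in with boto3's `stop_execution` API; the surrounding class is simplified and the error/cause strings are made up:

```
import boto3

class StepFunctionsClient:
    def __init__(self):
        self._client = boto3.client("stepfunctions")

    def terminate_execution(self, state_machine_arn, execution_arn):
        # state_machine_arn is kept only to mirror the original signature.
        # StopExecution halts a running execution identified by its ARN.
        return self._client.stop_execution(
            executionArn=execution_arn,
            error="Terminated",
            cause="Stopped via terminate_execution()",
        )
```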
Fixes #1572. Passes the instantiated boto client to the subprocesses instead of instantiating a new boto3 client object within each subprocess. Details for why this is needed are in the...
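The underlying pattern, sketched loosely below, is to build the client once and hand it to the workers rather than constructing one per worker; this is only an illustration of the idea, not the actual diff:

```
import boto3

def make_worker(s3_client):
    # Each worker closes over the already-instantiated client instead of
    # calling boto3.client("s3") itself, so credentials are resolved once.
    def download(bucket, key, dest):
        s3_client.download_file(bucket, key, dest)
    return download

# Illustration only: one client, shared by all workers.
shared_client = boto3.client("s3")
download = make_worker(shared_client)
```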
The currently prominent way of running the checks for a flow (including linting) for a platform-specific CLI is to call `obj.check` in the entrypoint command, for example `argo_workflows()`. This...
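Roughly, the pattern being described looks like the following click entrypoint; the group name matches the example above, but the body and `obj.check()`'s argument-free call are hypothetical simplifications:

```
import click

@click.group()
@click.pass_obj
def argo_workflows(obj):
    # Validate the flow (graph checks, linting) once at the CLI group
    # entrypoint so every subcommand runs against a checked flow.
    obj.check()
```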
Kubernetes provides support for both resource [requests & limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/). At present, Metaflow supports requests only. It would be good if Metaflow could also provide support for limits, as these can...
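For reference, here is what specifying both looks like in a pod's container spec with the official Kubernetes Python client; names and values are illustrative only:

```
from kubernetes import client

# Requests are what the scheduler reserves; limits cap what the container
# may actually consume. Metaflow currently only emits the former.
resources = client.V1ResourceRequirements(
    requests={"cpu": "2", "memory": "4Gi"},
    limits={"cpu": "2", "memory": "8Gi"},
)

container = client.V1Container(
    name="metaflow-task",   # placeholder container name
    image="python:3.11",    # placeholder image
    resources=resources,
)
```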
Botocore will raise `botocore.exceptions.NoCredentialsError` if too many S3 objects are requested simultaneously on an AWS EC2 instance (including Kubernetes). In my use case this was because we...
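One mitigation (a sketch, not a fix in Metaflow itself) is to give botocore more headroom when it resolves credentials from the instance metadata service, and to resolve them once through a shared session rather than per worker:

```
import os
import boto3

# The EC2 instance metadata service gets throttled under heavy parallel S3
# access; these botocore settings add retries/timeout when fetching
# credentials (values are illustrative).
os.environ.setdefault("AWS_METADATA_SERVICE_NUM_ATTEMPTS", "5")
os.environ.setdefault("AWS_METADATA_SERVICE_TIMEOUT", "10")

# A single shared session resolves credentials once for all clients built from it.
session = boto3.session.Session()
s3 = session.client("s3")
```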