
[META] Flow Framework Development Plan / Milestones

Open dbwiddis opened this issue 1 year ago • 2 comments

Flow Framework Objective:

We want to introduce our customers to a new no-code/low-code builder experience (Backend RFC and Frontend RFC) that empowers users to compose AI-augmented query and ingestion flows, integrate ML models supported by ML-Commons, and streamline the OpenSearch app development experience through a drag-and-drop designer.

Builders will continue to gain the benefits of OpenSearch Machine Learning (ML) offerings with out-of-the-box AI integrations that eliminate the need for custom middleware. They will further benefit from support for unbounded AI use cases and their limitless variations through this new builder paradigm, and will be empowered to innovate faster through automation and a low-to-no-code experience. While the initial focus is on ML offerings, the framework is intended to be generic enough to support non-ML workflows as well.

Key to the coordination between frontend and backend are use case templates. Frontend users will use a no-code/low-code builder to generate these, but they are also accessible to backend users to automate API calls in complex workflows.
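For a concrete sense of what these templates look like, below is a minimal sketch of a JSON use case template that creates a connector and then registers a remote model. The overall shape follows the template format, but the node IDs, step parameters, and endpoint values here are illustrative placeholders rather than a canonical example.

```json
{
  "name": "deploy-remote-model",
  "description": "Sketch: create a connector, then register a remote model",
  "use_case": "REMOTE_MODEL_DEPLOYMENT",
  "version": {
    "template": "1.0.0",
    "compatibility": ["2.12.0", "3.0.0"]
  },
  "workflows": {
    "provision": {
      "nodes": [
        {
          "id": "create_connector_1",
          "type": "create_connector",
          "user_inputs": {
            "name": "example-connector",
            "protocol": "http",
            "parameters": { "endpoint": "api.example.com" }
          }
        },
        {
          "id": "register_model_1",
          "type": "register_remote_model",
          "previous_node_inputs": { "create_connector_1": "connector_id" },
          "user_inputs": { "name": "example-remote-model" }
        }
      ],
      "edges": [
        { "source": "create_connector_1", "dest": "register_model_1" }
      ]
    }
  }
}
```

The `nodes` are the Workflow Steps, and the `edges` (together with `previous_node_inputs`) define the DAG ordering used during provisioning.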

Incremental Development Plan:

With the above objective in mind, we are taking an incremental approach to delivery. In the first phase we are providing automated templates that let users create a connector, register a model, deploy it, and register agents, tools, etc. through one API call, rather than performing the complex setup of calling multiple APIs and waiting for their responses.
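As a rough sketch of what that single call looks like: the plugin exposes a create-workflow endpoint under `_plugins/_flow_framework`, and a `provision` query parameter lets the template be stored and executed in one request. The exact parameter availability may differ by version, so treat this as illustrative rather than authoritative.

```
POST /_plugins/_flow_framework/workflow?provision=true
{
  ... use case template body, as sketched above ...
}
```

The response contains a `workflow_id` that can then be used to poll provisioning status.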

This issue documents current and future development plans for Flow Framework. Note that features, priorities, and milestones do frequently change, and this issue will be kept updated. We welcome community input to prioritize backlog features and participate in all phases of development.

2.12.0

  • Initial design of Workflow Use Case Templates
  • Implementation of basic CRUD APIs for templates and a status API (see the endpoint sketch after this list)
  • Implementation of DAG-based sequencing of building blocks called Workflow Steps
  • Execution of the workflow steps via provision and deprovision API
  • Implementation of WorkflowSteps supporting the use case of setting up a conversational assistant / query generator integrating with ML Commons Agent Framework using a single API call
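To make that API surface concrete, the sketch below lists the request lifecycle; `<workflow_id>` is a placeholder for the ID returned when the template is created, and the exact paths should be confirmed against the plugin documentation.

```
# Create (store) a workflow from a use case template
POST /_plugins/_flow_framework/workflow

# Read, update, or delete the stored template
GET    /_plugins/_flow_framework/workflow/<workflow_id>
PUT    /_plugins/_flow_framework/workflow/<workflow_id>
DELETE /_plugins/_flow_framework/workflow/<workflow_id>

# Execute the DAG of Workflow Steps, then poll for progress
POST /_plugins/_flow_framework/workflow/<workflow_id>/_provision
GET  /_plugins/_flow_framework/workflow/<workflow_id>/_status

# Tear down the resources created during provisioning
POST /_plugins/_flow_framework/workflow/<workflow_id>/_deprovision
```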

2.13.0

  • [x] Pre-defined templates and defaults https://github.com/opensearch-project/flow-framework/issues/496
  • [x] Implement CreateSearchPipeline Workflow Step integrating with Search Pipelines https://github.com/opensearch-project/flow-framework/issues/545
  • [x] https://github.com/opensearch-project/flow-framework/issues/104

Active development priorities

  • [ ] Implement steps for external REST APIs https://github.com/opensearch-project/flow-framework/issues/522
  • [ ] Continue integration with front-end UI https://github.com/opensearch-project/dashboards-flow-framework
  • [ ] Continue to improve CreateSearchPipeline Workflow Step integration with Search Pipelines (a hypothetical node sketch follows this list)
    • Conceptually this will be similar to the Agent / Tool implementation
    • Implementation will start with existing Processors and other processors in development for the 2.13.0 release
    • This will require steps corresponding to the Processor interfaces for the Search Pipeline processor types (search request, search response, and search phase results)
    • This may involve developing new Processor types as needed. In particular, there are some processors used in the Ingest Pipeline (such as conditional execution) for which we want to add search-side equivalents.
    • We may add additional "basic logic" processor types for common/simple workflows that do not require full DAG complexity
  • [ ] Create a new Async processor type that can wrap an entire DAG-based workflow (Proposed #367, proof of concept complete)
  • [ ] Implement CreateIngestPipeline Workflow Step
    • This will involve similar Processor interface implementations
  • [ ] Implement search pipeline processor (existing or new) for data retrieval from OpenSearch
  • [ ] Implement search pipeline processor (existing or new) for data transformation (JSON-to-JSON)
  • [ ] Implement search pipeline processor (existing or new) for data insertion into OpenSearch
  • [ ] Integrate search pipeline processors developed in other repos
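To illustrate the direction of the pipeline-related steps above, a `create_search_pipeline` node in a template might look roughly like the sketch below. The processor shown (`neural_query_enricher`) and the `${{...}}` substitution of a previous step's `model_id` are examples of the intended wiring; the exact step and field names are still subject to change as the linked issues land.

```json
{
  "id": "create_search_pipeline_1",
  "type": "create_search_pipeline",
  "previous_node_inputs": { "deploy_model_1": "model_id" },
  "user_inputs": {
    "pipeline_id": "example-search-pipeline",
    "configurations": {
      "request_processors": [
        {
          "neural_query_enricher": {
            "default_model_id": "${{deploy_model_1.model_id}}"
          }
        }
      ]
    }
  }
}
```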

Backlog

  • [ ] Implement nested workflows / sub workflows to simplify templates
  • [ ] Improve provisioning / deprovisioning flexibility (fine-grained provisioning)
  • [ ] Improve customization / settings-based workflow configuration
  • [ ] Improve saved workflow grouping/tagging/searching capabilities
  • [ ] Implement steps related to OpenSearch index lifecycle

dbwiddis avatar Jan 30 '24 18:01 dbwiddis

This is a great start @dbwiddis. Let's keep iterating on this one and add more features/functionality as we enhance the framework. Thanks

minalsha avatar Jan 30 '24 19:01 minalsha

> Implement steps for external REST APIs (similar to ML Connectors but more general/non-ML)

Moving this step from the backlog to 2.13.0 to support https://github.com/opensearch-project/observability/issues/1805

dbwiddis avatar Feb 12 '24 21:02 dbwiddis