
RAGFlow v0.22.1: API-Created Agent DSL Causes Unresponsive Save and Run Buttons in Web Interface

Open caobb1990 opened this issue 1 month ago • 8 comments

When calling the Create Agent endpoint of the HTTP API in RAGFlow v0.22.1 and passing a pre-configured DSL (JSON downloaded from the RAGFlow web interface), the new agent and its DSL appear in the web interface. However, the Save and Run buttons for this agent remain unresponsive.

caobb1990 avatar Dec 04 '25 06:12 caobb1990

Hi @caobb1990! I'm Dosu and I’m helping the ragflow team.

This is a known issue in RAGFlow v0.22.1. The Save and Run buttons can be unresponsive when an agent DSL is created via the API if the payload is missing required fields—most notably, the permission field (should be "me" or "team"). If this field is absent, the backend ignores save requests, and the UI buttons appear stuck. This behavior is documented for Knowledge Base configs but applies to agent DSLs as well [source].

Another common cause is a mismatch between the DSL structure returned by the API and what the frontend expects. The agent editor relies on a valid graph structure (nodes, edges, components) loaded into its internal store. If the DSL is malformed or missing these, the buttons won't work [source].

There are also critical regressions in v0.22.1 that can freeze the UI due to backend deadlocks, especially on Windows/Docker. These can affect all UI elements, including Save and Run [source].

How to troubleshoot:

  • Make sure your API payload includes permission: "me" (or "team").
  • Double-check that your DSL JSON has a valid graph object with nodes and edges, and a components array.
  • If the UI is still unresponsive, try a hard browser refresh. If the issue persists, consider reverting to v0.22.0 until a fix is released.
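The checklist above can be sketched as a small pre-flight validator. The required-field list mirrors the troubleshooting notes (permission, graph nodes/edges, components) and is an assumption, not the official schema:

```python
# Pre-flight check for an agent-creation payload before calling the RAGFlow
# HTTP API. The required-field list mirrors the troubleshooting notes above
# and is an assumption, not the official schema.

def validate_agent_payload(payload: dict) -> list:
    """Return a list of problems found; an empty list means the basics look OK."""
    problems = []
    if payload.get("permission") not in ("me", "team"):
        problems.append('missing or invalid "permission" (expected "me" or "team")')
    dsl = payload.get("dsl") or {}
    graph = dsl.get("graph") or {}
    if not graph.get("nodes"):
        problems.append('"dsl.graph.nodes" is missing or empty')
    if not isinstance(graph.get("edges"), list):
        problems.append('"dsl.graph.edges" is missing')
    if not dsl.get("components"):
        problems.append('"dsl.components" is missing or empty')
    return problems
```

Running a check like this before the API call surfaces the same omissions that otherwise only show up as dead Save/Run buttons.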

If you need a reference DSL structure or want to validate your payload, let me know!

To reply, just mention @dosu.



dosubot[bot] avatar Dec 04 '25 06:12 dosubot[bot]

I need a reference DSL structure as returned by the API, for example a common data ingestion pipeline JSON. Thank you!


caobb1990 avatar Dec 04 '25 06:12 caobb1990

Here’s a reference DSL JSON for a common data ingestion pipeline as returned by the RAGFlow API. This structure is compatible with the agent editor in v0.22.1 and includes the required fields for Save/Run button functionality. The example below models a pipeline that ingests PDF files, parses them, chunks the content, and tokenizes the result:

{
  "components": {
    "File": {
      "obj": {
        "component_name": "File",
        "params": {}
      },
      "downstream": ["Parser:0"],
      "upstream": []
    },
    "Parser:0": {
      "obj": {
        "component_name": "Parser",
        "params": {
          "setups": {
            "pdf": {
              "parse_method": "deepdoc",
              "vlm_name": "",
              "lang": "Chinese",
              "suffix": ["pdf"],
              "output_format": "json"
            }
          }
        }
      },
      "downstream": ["Splitter:0"],
      "upstream": ["File"]
    },
    "Splitter:0": {
      "obj": {
        "component_name": "Splitter",
        "params": {
          "chunk_token_size": 512,
          "delimiters": ["\n"],
          "overlapped_percent": 0
        }
      },
      "downstream": ["Tokenizer:0"],
      "upstream": ["Parser:0"]
    },
    "Tokenizer:0": {
      "obj": {
        "component_name": "Tokenizer",
        "params": {}
      },
      "downstream": [],
      "upstream": ["Splitter:0"]
    }
  },
  "graph": {
    "nodes": [
      {
        "id": "File",
        "data": { "label": "File", "name": "File" },
        "type": "fileNode",
        "position": { "x": 50, "y": 100 }
      },
      {
        "id": "Parser:0",
        "data": { "label": "Parser", "name": "Parser:0" },
        "type": "parserNode",
        "position": { "x": 250, "y": 100 }
      },
      {
        "id": "Splitter:0",
        "data": { "label": "Splitter", "name": "Splitter:0" },
        "type": "splitterNode",
        "position": { "x": 450, "y": 100 }
      },
      {
        "id": "Tokenizer:0",
        "data": { "label": "Tokenizer", "name": "Tokenizer:0" },
        "type": "tokenizerNode",
        "position": { "x": 650, "y": 100 }
      }
    ],
    "edges": [
      { "id": "e1", "source": "File", "target": "Parser:0" },
      { "id": "e2", "source": "Parser:0", "target": "Splitter:0" },
      { "id": "e3", "source": "Splitter:0", "target": "Tokenizer:0" }
    ]
  },
  "path": [],
  "history": [],
  "retrieval": [],
  "memory": [],
  "globals": {
    "sys.query": "",
    "sys.user_id": "",
    "sys.conversation_turns": 0,
    "sys.files": []
  }
}
  • The components section defines each step in the pipeline and their connections.
  • The graph section (with nodes and edges) is required for the web UI to recognize and render the workflow.
  • The path, history, retrieval, memory, and globals fields are included for compatibility with the API and frontend expectations.
  • If you are creating a dataset or knowledge base, include the "permission": "me" (or "team") field at the top level of your API payload, as this is required for some endpoints but not always for agent DSLs specifically.

You can adapt this structure for other file types or pipeline steps by modifying the components and graph sections as needed. For more details, see the example in the codebase and the API docs.
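For completeness, a minimal sketch of posting such a DSL through the HTTP API. The `POST /api/v1/agents` path and bearer-token header are assumptions modeled on RAGFlow's API conventions; check your deployment's API reference before relying on them:

```python
import json
import urllib.request


def build_create_agent_payload(title: str, dsl: dict) -> dict:
    # "permission" is included defensively per the discussion above; whether
    # the agent endpoint strictly requires it is not confirmed.
    return {"title": title, "dsl": dsl, "permission": "me"}


def create_agent(base_url: str, api_key: str, payload: dict) -> dict:
    # Endpoint path and auth header are assumptions, not confirmed API facts.
    req = urllib.request.Request(
        f"{base_url}/api/v1/agents",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Separating payload construction from the HTTP call makes it easy to inspect or validate the payload before sending it.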

To reply, just mention @dosu.



dosubot[bot] avatar Dec 04 '25 06:12 dosubot[bot]

Where did you get the original DSL from?

KevinHuSh avatar Dec 04 '25 07:12 KevinHuSh

The original DSL JSON was downloaded from an agent in the RAGFlow web interface.


caobb1990 avatar Dec 04 '25 08:12 caobb1990


After calling the API to create a pipeline using this reference DSL JSON, I see an Ingestion pipeline appear in the web interface. However, there are two issues:

  1. When I click to run this pipeline, it opens a chat interface, and after uploading a document and entering a question, errors occur. This is not the functionality I need. In fact, I only want it to have the capability to parse documents, consistent with the functionality of the Ingestion pipeline created directly in the web interface.
  2. When creating a knowledge base and selecting a pipeline, the Ingestion pipeline just created via the API does not appear in the options. @KevinHuSh

caobb1990 avatar Dec 04 '25 09:12 caobb1990


The root cause is that pipelines created via the API default to chat agents unless you explicitly set the "canvas_category" field to "dataflow_canvas" in your payload. Without this, the pipeline opens a chat interface and won’t show up in the knowledge base pipeline picker—only pipelines with "canvas_category": "dataflow_canvas" are treated as ingestion/dataflow pipelines and selectable for document parsing workflows [source].

How to fix: When creating an ingestion pipeline via the API, include these top-level fields in your request:

{
  "title": "My Ingestion Pipeline",
  "dsl": { ... },  // your pipeline DSL as before
  "canvas_category": "dataflow_canvas",
  "permission": "me"  // or "team"
}

This ensures the backend and UI register it as an ingestion pipeline. After saving, it will appear in the knowledge base pipeline selection and run as a document parser, not a chat agent.

Troubleshooting:
If the pipeline still doesn’t appear, double-check that "canvas_category": "dataflow_canvas" is present and spelled exactly, and that your DSL is valid. You can verify by listing agents/pipelines via the API or UI and checking the category field.
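One way to run that check, assuming the listing endpoint returns each agent/pipeline as a dict carrying a "canvas_category" key as described above, is to filter the listing client-side:

```python
def find_dataflow_pipelines(agents: list) -> list:
    """Keep only entries registered as ingestion/dataflow pipelines.

    Assumes each listed agent dict carries a "canvas_category" key, per the
    explanation above; any entry without it is treated as a chat agent.
    """
    return [a for a in agents if a.get("canvas_category") == "dataflow_canvas"]
```

If your newly created pipeline is missing from the filtered list, the `canvas_category` field was most likely dropped or misspelled in the create request.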

For dataset creation, use the pipeline’s ID and specify both "parse_type" (number of parsers in your Parser component) and "pipeline_id" (the pipeline’s ID), without "chunk_method" or "parser_config" [source].
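A sketch of that dataset-creation payload, with field semantics as described above; the name and ID values are placeholders, and the exact endpoint schema should be confirmed against the API docs:

```python
def build_dataset_payload(name: str, pipeline_id: str, parse_type: int) -> dict:
    # Per the note above: bind the pipeline via "pipeline_id" and "parse_type",
    # and do NOT send "chunk_method" or "parser_config" alongside them.
    return {
        "name": name,
        "pipeline_id": pipeline_id,
        "parse_type": parse_type,
        "permission": "me",  # defensive, see the earlier discussion
    }
```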

Let me know if you need a full example payload or want to validate your DSL!

To reply, just mention @dosu.



dosubot[bot] avatar Dec 05 '25 03:12 dosubot[bot]