dosubot[bot]
Your 64 GB JVM heap is already more than sufficient, so it is not the root cause of the slow queries on 1.5.0. Slow queries on HugeGraph 1.5.0 are mainly a matter of the hstore/cluster-mode distributed scheduling, locking, backend communication, index coverage, cache parameters, and RocksDB configuration. Key things to check:

1. Verify that every where/has/filter condition is covered by a secondary or range index to avoid full-table scans; the schema design should cover the high-frequency query paths ([reference](https://github.com/apache/incubator-hugegraph/issues/2836)).
2. Minimize multi-level out()/in() chained traversals; prefer has() to filter the starting vertices precisely, and add limit and pagination so a single query doesn't pull so much data that it causes an OOM or a timeout ([reference](https://github.com/apache/incubator-hugegraph/issues/2836)); see the sketch after this list.
3. cache...
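To illustrate point 2, here is a minimal Gremlin sketch of that pattern; the labels and property names (user, knows, country_code, status) are placeholders for whatever your schema actually uses:

```groovy
// Anti-pattern: unbounded multi-hop traversal that filters late,
// pulling every neighbor into memory before discarding most of them
g.V().hasLabel("user").out("knows").out("knows").has("country_code", "US").toList()

// Preferred: narrow the starting set with an indexed has() filter first,
// then bound every hop and page the results instead of pulling everything
g.V().hasLabel("user").has("country_code", "US")  // served by a secondary index, if one exists
     .out("knows").limit(1000)                    // cap the fan-out per hop
     .has("status", "active")
     .range(0, 100)                               // page the final results
     .toList()
```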
Your schema has no secondary indexes (indexlabels) at all, so every where/has/filter query does a full-table scan. That is the core reason queries are extremely slow on 1.5.0, especially at the scale of 5 billion vertices/edges: chained outE/inE queries pull the full data set into memory at every hop and filter it there, so performance degrades exponentially. In HugeGraph 1.5.0, traversal steps such as HugeVertexStep, TraversalFilterStep, and AndStep first build a backend query and then run a second filtering pass in JVM memory through FilterIterator; without indexes this slows queries down dramatically, and the distributed locks and session management amplify the latency further ([code analysis](https://github.com/apache/incubator-hugegraph/blob/a53af864b31f803780a1db31cb20d3d94d5d34fc/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/traversal/algorithm/HugeTraverser.java)).

**The schema must be optimized:**

- Add secondary indexes for all frequently queried properties (such as country_code, status, multi_scene_mark), e.g.:

```groovy
schema.indexLabel("userByCountryCode").onV("user").by("country_code").secondary().create()...
```
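A fuller sketch of the kind of index definitions this refers to; the index names and properties are illustrative, so adapt them to your own schema (and note that creating an index over existing data kicks off a rebuild task):

```groovy
// Secondary (exact-match) indexes for the high-frequency filter properties
schema.indexLabel("userByCountryCode").onV("user").by("country_code").secondary().ifNotExist().create()
schema.indexLabel("userByStatus").onV("user").by("status").secondary().ifNotExist().create()
schema.indexLabel("userByMultiSceneMark").onV("user").by("multi_scene_mark").secondary().ifNotExist().create()

// Range index for any property queried with gt/lt/between rather than equality
// (created_at is a hypothetical example property)
schema.indexLabel("userByCreatedAt").onV("user").by("created_at").range().ifNotExist().create()
```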
You are indeed starting from a primary-key id lookup, and the GraphStep(vertex,[id]) step needs no secondary index; it will not be the bottleneck. But your Gremlin query goes on to do multiple levels of outE/inE/bothE traversal with complex where/filter conditions (country_code, status, and so on), and on HugeGraph 1.5.0 (the hstore architecture) those steps still cause full-table scans and heavy in-memory filtering in the JVM when no secondary index exists, which slows the query down dramatically.

In the implementation, traversal steps such as HugeVertexStep, TraversalFilterStep, and AndStep first build a backend query (GraphTransaction.constructEdgesQuery) and then filter properties in memory with FilterIterator. Without an index, every hop has to fetch a large number of edges/vertices; at the scale of 5 billion vertices/edges, performance degrades exponentially, and no amount of hardware can make up for the full-table scans and memory pressure caused by the missing indexes (see the [code analysis](https://github.com/apache/incubator-hugegraph/blob/a53af864b31f803780a1db31cb20d3d94d5d34fc/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/traversal/algorithm/HugeTraverser.java)).

The community and the official issues also state clearly that every where/has/filter condition must be covered by a secondary or range index to avoid full-table scans and keep multi-level traversals and complex filtering fast, which matters even more on a distributed backend such as 1.5.0 ([reference](https://github.com/apache/incubator-hugegraph/issues/2836))...
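If the filters in those traversals apply to edge properties (for example a status checked after outE), the same indexing principle applies on the edge side. A hedged sketch with a hypothetical edge label and properties:

```groovy
// Secondary index on an edge property, so that filters like
// outE(...).has("status", ...) can be pushed down to the backend
// where possible instead of being applied by FilterIterator in the JVM
schema.indexLabel("transferByStatus").onE("transfer").by("status").secondary().ifNotExist().create()

// The traversal then filters edges at the storage layer and bounds the fan-out
g.V(startId).outE("transfer").has("status", "settled").limit(500).inV().toList()
```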
This happens because Plate only renders videos if the isUpload prop is true, which is meant for uploaded files. When you use "Insert Via URL", isUpload is falsy, so the...
The "LLM type error" AssertionError in RAGFlow v0.20.0 usually means the agent's LLM is not fully registered or configured for agent workflows—even if it works in chat mode. You do...
Hi @bmironov! I'm [Dosu](https://go.dosu.dev/dosubot) and I’m helping the cloudnative-pg team. This is a known effect of switching from operator-managed PodMonitor to a user-managed PodMonitor: the operator previously handled relabeling so...
Great suggestion—this is a common pain point and your feedback matches what many in the community have raised. The current documentation for [monitoring with the Prometheus Operator](https://cloudnative-pg.io/documentation/1.27/monitoring/#monitoring-with-the-prometheus-operator) does not include...
Thanks for catching that—you're absolutely right. In many environments, especially with recent Prometheus Operator versions or certain RBAC setups, metrics scraping only works if you include an explicit (even empty)...
You’re exactly right about the root cause: CloudNativePG v1.27.1 has reconciliation logic that deletes any PodMonitor with the same name as the Cluster when `.spec.monitoring.enablePodMonitor` is absent or false—even if...
That's correct—the need for bearerTokenSecret in PodMonitor manifests isn't documented in CloudNativePG or Prometheus Operator docs, but it's a practical workaround confirmed by community experience. In some environments (especially with...
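Putting the last few points together, here is a sketch of what such a user-managed PodMonitor can look like; every name in it is a placeholder, and whether the empty bearerTokenSecret stanza is required depends on your Prometheus Operator version and RBAC setup:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  # Use a name different from the Cluster's, otherwise the v1.27.1
  # reconciliation described above may delete this PodMonitor
  name: pg-cluster-metrics
  namespace: database
spec:
  selector:
    matchLabels:
      cnpg.io/cluster: pg-cluster   # CloudNativePG labels its pods with the cluster name
  podMetricsEndpoints:
    - port: metrics
      # Community workaround discussed above: some environments only
      # scrape when this field is explicitly present, even if empty
      bearerTokenSecret:
        name: ""
        key: ""
```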