incubator-hugegraph
[Feature] Support record slow-query-log in HugeServer
Feature Description
As you know, a slow query log is important for a database, as in MySQL.
We should record the query time when the API is used to query graph info, because I think the total API time is the real query time for the user.
Some questions may need discussion @javeme @JackyYangPassion @imbajin @liuxiaocs7 @Radeity @simon824 @z7658329 @VGalaxies
- Do we need to store the slow log to a file, or just print it? Just use the slf4j Logger?
- HugeGraph supports the Gremlin API and param APIs such as EdgeAPI/VertexAPI, which are different. Do we need to transform them when we record them (the Gremlin query language and params are different)?
- I think we need to set a time threshold, an enabled switch, and a file path, like Neo4j:

threshold:
enabled:
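As a rough sketch, the three options could look like the following properties fragment. These key names are purely illustrative, not existing HugeGraph config keys:

```properties
# Hypothetical hugegraph.properties entries -- names are illustrative only
log.slow_query_log_enabled=true
# Threshold in milliseconds; requests slower than this get logged
log.slow_query_threshold=1000
# Optional dedicated file; empty means reuse the normal slf4j appender
log.slow_query_log_path=logs/slow-query.log
```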
@javeme replied:

> - Do we need to store the slow log to a file, or just print it? Just use the slf4j Logger?
> - HugeGraph supports the Gremlin API and param APIs such as EdgeAPI/VertexAPI, which are different. Do we need to transform them when we record them (the Gremlin query language and params are different)?
> - I think we need to set a time threshold, an enabled switch, and a file path, like Neo4j.

- I think we could try logging to a specified slow-log-file first, then consider storing it in a DB/file next - TBD
- Fine to set them
- We need more reference info from distributed databases like TiDB/HBase/NB to compare the pros & cons
TiDB:
1. TiDB writes statements whose execution time exceeds tidb_slow_log_threshold (default: 300 ms) to the slow query file (default: "tidb-slow.log").
2. TiDB enables the slow query log by default. You can enable or disable it by modifying the system variable tidb_enable_slow_log.

Recorded items (for an introduction see https://docs.pingcap.com/tidb/stable/identify-slow-queries):
HBase: (http://svn.apache.org/repos/asf/hbase/hbase.apache.org/trunk/0.94/book/ops.monitoring.html) The HBase slow query log consists of parseable JSON structures describing the properties of client operations (gets, puts, deletes, etc.) that either ran too long or produced too much output. The thresholds for "ran too long" and "too much output" are configurable:
1. hbase.ipc.warn.response.time: the maximum number of milliseconds a query may run without being logged. Defaults to 10000, i.e. 10 seconds. Can be set to -1 to disable logging by time.
2. hbase.ipc.warn.response.size: the maximum byte size of a response a query may return without being logged. Defaults to 100 MB. Can be set to -1 to disable logging by size.
示例: { "tables":{ "riley2":{ "puts":[ { "totalColumns":11, "families":{ "actions":[ { "timestamp":1315501284459, "qualifier":"0", "vlen":9667580 }, { "timestamp":1315501284459, "qualifier":"1", "vlen":10122412 }, { "timestamp":1315501284459, "qualifier":"2", "vlen":11104617 }, { "timestamp":1315501284459, "qualifier":"3", "vlen":13430635 } ] }, "row":"cfcd208495d565ef66e7dff9f98764da:0" } ], "families":[ "actions" ] } }, "processingtimems":956, "client":"10.47.34.63:33623", "starttimems":1315501284456, "queuetimems":0, "totalPuts":1, "class":"HRegionServer", "responsesize":0, "method":"multiPut" }
And similarly, I use a Java POJO transformed to JSON, like HBase does:
2023-10-17 01:45:49 [grizzly-http-server-20] [INFO] o.a.h.a.f.AccessLogFilter - slow query log: {"executeTime":12,"rawQuery":"{"gremlin":"g.V()","bindings":{},"language":"gremlin-groovy","aliases":{"g":"__g_hugegraph"}}","method":"POST","threshold":0,"path":"gremlin"}
2023-10-17 01:45:49 [grizzly-http-server-21] [INFO] o.a.h.a.f.AccessLogFilter - slow query log: {"executeTime":1,"rawQuery":"{}","method":"GET","threshold":0,"path":"graphs/hugegraph/graph/edges"}
2023-10-17 01:45:49 [grizzly-http-server-22] [INFO] o.a.h.a.f.AccessLogFilter - slow query log: {"executeTime":1,"rawQuery":"{}","method":"GET","threshold":0,"path":"graphs/hugegraph/graph/vertices"}
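The POJO-to-JSON idea above could be sketched like this. The class and field names mirror the JSON keys in the example log lines, but this is an illustrative sketch, not the actual HugeGraph AccessLogFilter implementation:

```java
// Hypothetical slow-query log entry POJO; field names mirror the JSON keys
// in the example log lines above. This is a sketch, not HugeGraph code.
public class SlowQueryLog {

    private final long executeTime;   // elapsed request time in ms
    private final String rawQuery;    // request body or query string
    private final String method;      // HTTP method
    private final long threshold;     // configured slow-query threshold in ms
    private final String path;        // request path

    public SlowQueryLog(long executeTime, String rawQuery, String method,
                        long threshold, String path) {
        this.executeTime = executeTime;
        this.rawQuery = rawQuery;
        this.method = method;
        this.threshold = threshold;
        this.path = path;
    }

    // Hand-rolled JSON to keep the sketch dependency-free; a real
    // implementation would use a JSON library such as Jackson.
    public String toJson() {
        return String.format(
            "{\"executeTime\":%d,\"rawQuery\":\"%s\",\"method\":\"%s\"," +
            "\"threshold\":%d,\"path\":\"%s\"}",
            this.executeTime, this.rawQuery, this.method,
            this.threshold, this.path);
    }

    public static void main(String[] args) {
        SlowQueryLog log = new SlowQueryLog(1, "{}", "GET", 0,
                                            "graphs/hugegraph/graph/edges");
        // In a real filter this would be: LOG.info("slow query log: {}", ...)
        System.out.println("slow query log: " + log.toJson());
    }
}
```

A filter would build such an object only when `executeTime` exceeds the configured threshold, then hand the JSON string to the slf4j Logger.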