> Please provide the task execution log, thanks.

The task execution log is as below:

```java
[INFO] 2022-08-08 14:51:08.638 +0800 [taskAppId=TASK-20220808-6277368089120_4-775-1896] TaskLogLogger-class org.apache.dolphinscheduler.plugin.task.dq.DataQualityTask:[83] - data quality task params {"localParams":[],"resourceList":[],"ruleId":10,"ruleInputParameter":{"check_type":"1","comparison_type":1,"comparison_name":"0","failure_strategy":"0","operator":"3","src_connector_type":5,"src_datasource_id":11,"src_field":null,"src_table":"BW_BI0_TSTOR_LOC","threshold":"0"},"sparkParameters":{"deployMode":"cluster","driverCores":1,"driverMemory":"512M","executorCores":2,"executorMemory":"2G","numExecutors":2,"others":"--conf spark.yarn.maxAppAttempts=1"}}
[INFO]...
```
> You are running in yarn-cluster mode, so you need to go to the Spark task tracking URL to see the log, or you can switch to yarn-client mode to run it...
> > > It seems that it is designed like this: `table_count_check` doesn't output error data. Comparing it with `null_check`, I found that this is because the value of `errorOutputSql`...
@SbloodyS Should we keep the `Error Output Path` empty when the task is `table_count_check`, etc., to eliminate the misunderstanding?
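If that suggestion is adopted, the guard could look like the minimal sketch below. The rule names `table_count_check` and `null_check` come from this thread; the class name, method signature, and path layout are hypothetical illustrations, not DolphinScheduler's actual code:

```java
import java.util.Set;

public class ErrorOutputPathGuard {

    // Hypothetical set of rules that never produce error records,
    // based on the observation in this thread about table_count_check.
    private static final Set<String> NO_ERROR_OUTPUT_RULES = Set.of("table_count_check");

    // Return the error output path for a rule, or an empty string for
    // rules that don't emit error data, so the UI shows nothing misleading.
    public static String errorOutputPath(String ruleName, String basePath) {
        if (NO_ERROR_OUTPUT_RULES.contains(ruleName)) {
            return "";
        }
        return basePath + "/" + ruleName;
    }

    public static void main(String[] args) {
        System.out.println(errorOutputPath("table_count_check", "/dq/error")); // empty string
        System.out.println(errorOutputPath("null_check", "/dq/error"));        // /dq/error/null_check
    }
}
```

This only hides the path for count-style rules; rules that do write error rows, such as `null_check`, keep their output location.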
> Supported; it will be available once the new version is released.

Thanks, looking forward to the new release.
> I didn't understand what you meant.

Haha, search for "ty" in the issues and you'll get it.
> Have you tried competing products? The result should be the same, right?

I use snipaste; it looks much better after zooming in. Screenshot for reference below. SC's screenshot is fine this time.
> Take screenshots of the same size and compare them; they should be the same.

SC doesn't display the zoom factor, so it can't be controlled strictly, but judging from the images, snipaste seems to have zoomed in by a larger factor.
I ran into this problem too: it disconnected on its own after crawling about 90 entries, when there should be more than 500 in total.