Keith Whitley
> Hi! I'm using this space to write about my current progress, updates, and plans. The biggest piece right now is the addition of "collectors" to the benchmark wrapper: ...
Yeah, metadata is a bit weird. I'm fine if some of the fields I mentioned (start_time, end_time, run_id, etc.) live in that metadata class in the actual code as long...
Elasticsearch is built on top of [Lucene, which stores data in an inverted index](https://dzone.com/articles/apache-lucene-a-high-performance-and-full-featured). Simply put, Lucene maps terms -> documents instead of documents -> terms. Lucene doesn't really support...
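As a minimal sketch of the idea (not Lucene's actual implementation, which adds compression, segment files, and scoring), an inverted index maps each term to the set of documents that contain it, so term lookup is a direct dictionary access instead of a scan over every document:

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

docs = {
    1: "the quick brown fox",
    2: "the lazy dog",
    3: "quick dogs and lazy foxes",
}
index = build_inverted_index(docs)
index["quick"]  # documents containing the term "quick": {1, 3}
index["lazy"]   # documents containing the term "lazy": {2, 3}
```

Queries over multiple terms then become set operations (intersection for AND, union for OR) over the posting sets.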
@BillSullivan2020 @dstandish We were facing this after (finally) upgrading from 1.10.15 to 2.0.2. We ended up finding out that the root cause was duplicate environment variables in the worker pod...
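One quick way to check for that class of problem is to scan the rendered pod spec's `env` list for repeated names. The helper below is a hypothetical sketch (not part of Airflow), assuming the `env` section has been extracted from something like `kubectl get pod -o json`:

```python
def find_duplicate_env_vars(env):
    """Return env var names that appear more than once in a pod spec's env list."""
    seen, dupes = set(), set()
    for entry in env:
        name = entry["name"]
        if name in seen:
            dupes.add(name)
        seen.add(name)
    return dupes

# Example env section; the duplicated key shadows the earlier value.
env = [
    {"name": "AIRFLOW__CORE__EXECUTOR", "value": "CeleryExecutor"},
    {"name": "AIRFLOW__CORE__PARALLELISM", "value": "32"},
    {"name": "AIRFLOW__CORE__EXECUTOR", "value": "KubernetesExecutor"},
]
find_duplicate_env_vars(env)  # {'AIRFLOW__CORE__EXECUTOR'}
```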
FWIW, I've seen a similar issue with other connectors as well, specifically Trino and VARBINARY. I don't have steps to reproduce, but I agree with @dvdotsenko's theory on it happening...
I think I found some previous issues that point to a potential fix: https://github.com/apache/superset/issues/8084, which mentions that https://github.com/apache/superset/pull/5121/files fixed only sync queries and not async ones. The link is broken, but...
Is there a better way to find the stacktrace? This is all I can find so far:
```
superset 2024-04-25 13:19:44,743:INFO:superset.sql_lab:Query 129: Storing results in results backend, key: 6a059c1d-5242-48f1-b602-2a33d66fcdce
superset...
```
@Dlougach If you wish to use the operator, there is a way. First, you need to make a wrapper class around the SparkConnectServer. For example:
```scala
package my.connect.server

import...
```
use `hive --service metastore` instead, e.g. `hive --service metastore -p 9083`
@vladimir-avinkin this doesn't work if the entire file is getting deleted/added.