fix(drilling): drill by pagination works with MSSQL data source
SUMMARY
Fixes #24072, in which Microsoft SQL Server requires an ORDER BY column for pagination to succeed. As a result, the drilled data is now always sorted by the first column in ascending order; previously it retained the dataset's sort order.
Better fixes, beyond my current ability, might be:
- Implement this on a per-database level
- Sort by row number or the equivalent to functionally do nothing and preserve the underlying data order
- Improve user controls for how drilled data displays, allowing the user to control which columns are shown and what the sort order is
But I think the benefit of fixing this feature for MSSQL outweighs the harm introduced (re-sorting the drilled data).
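For context, SQL Server's `OFFSET ... FETCH` pagination syntax is only legal after an ORDER BY clause, which is why a fallback sort column is needed at all. The sketch below illustrates the fallback logic under stated assumptions: `QueryObject` and `ensure_orderby_for_pagination` are hypothetical stand-ins for illustration, not Superset's actual classes or function names.

```python
from dataclasses import dataclass, field


@dataclass
class QueryObject:
    """Minimal stand-in for Superset's query object (illustrative only)."""

    columns: list  # selected column names
    orderby: list = field(default_factory=list)  # (column, ascending) pairs


def ensure_orderby_for_pagination(
    query_obj: QueryObject, engine_requires_orderby: bool
) -> None:
    """Guarantee an ORDER BY when the engine demands one for pagination.

    MSSQL's `OFFSET n ROWS FETCH NEXT m ROWS ONLY` is only valid after an
    ORDER BY, so fall back to sorting ascending by the first selected column.
    """
    if query_obj.orderby or not engine_requires_orderby:
        return  # an existing sort order (or a tolerant engine) is enough
    query_obj.orderby = [(query_obj.columns[0], True)]


qo = QueryObject(columns=["name", "num"])
ensure_orderby_for_pagination(qo, engine_requires_orderby=True)
print(qo.orderby)  # prints [('name', True)]
```

Note the trade-off baked into the fallback: it silently changes the displayed sort order, which is exactly the harm discussed above.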
TESTING INSTRUCTIONS
- CI
- Sam deployed and manually tested the docker image
BEFORE (4.0.0rc1)
https://github.com/apache/superset/assets/7569808/18506aea-ea2b-4207-b97a-c1c9b7debf3b
AFTER
https://github.com/apache/superset/assets/7569808/0fad13b6-bb8f-4e35-8ae5-971d7fda6462
ADDITIONAL INFORMATION
- [x] Has associated issue: #24072
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 69.69%. Comparing base (ad9024b) to head (2e580d6). Report is 1 commit behind head on master.
Additional details and impacted files
@@ Coverage Diff @@
## master #27442 +/- ##
==========================================
+ Coverage 67.39% 69.69% +2.30%
==========================================
Files 1909 1909
Lines 74744 74744
Branches 8327 8327
==========================================
+ Hits 50371 52095 +1724
+ Misses 22323 20599 -1724
Partials 2050 2050
| Flag | Coverage Δ | |
|---|---|---|
| javascript | 57.22% <ø> (ø) | |
| mysql | 78.03% <100.00%> (ø) | |
| postgres | 78.14% <100.00%> (ø) | |
| presto | 53.69% <50.00%> (?) | |
| python | 83.06% <100.00%> (+4.77%) | :arrow_up: |
| sqlite | 77.57% <100.00%> (ø) | |
| unit | 56.68% <50.00%> (?) | |
Flags with carried forward coverage won't be shown. Click here to find out more.
:umbrella: View full report in Codecov by Sentry.
@sfirke once this is ready for review would you mind adding the relevant reviewers?
@sfirke the CI error you're getting
superset/common/query_actions.py:156: error: List item 0 has incompatible type "Tuple[Union[AdhocColumn, str], bool]"; expected "Tuple[Union[AdhocMetric, str], bool]" [list-item]
is because you're passing a Column here
query_obj.orderby = [(query_obj.columns[0], True)]
instead of a Metric, which is what OrderBy expects:
Column = Union[AdhocColumn, str]
Metric = Union[AdhocMetric, str]
OrderBy = tuple[Metric, bool]
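To make the mypy complaint concrete: a plain `str` satisfies both the Column and Metric aliases, so a string column can pass straight through, while an AdhocColumn dict has no overlap with AdhocMetric and needs to be reduced to a string (or cast). The sketch below is a hedged illustration, not Superset's actual fix; the `AdhocColumn`/`AdhocMetric` definitions are simplified stand-ins for Superset's TypedDicts, and `first_column_orderby` is a hypothetical helper.

```python
from typing import TypedDict, Union


class AdhocColumn(TypedDict, total=False):
    """Simplified stand-in for Superset's AdhocColumn TypedDict."""

    label: str
    sqlExpression: str


class AdhocMetric(TypedDict, total=False):
    """Simplified stand-in for Superset's AdhocMetric TypedDict."""

    label: str


Column = Union[AdhocColumn, str]
Metric = Union[AdhocMetric, str]
OrderBy = tuple[Metric, bool]


def first_column_orderby(columns: list[Column]) -> list[OrderBy]:
    """Build an ascending ORDER BY from the first selected column.

    A string column already satisfies Metric, so it passes through.
    An AdhocColumn dict does not, so fall back to its label.
    """
    col = columns[0]
    if isinstance(col, str):
        return [(col, True)]
    # Adhoc column: use its label as the sort expression.
    return [(col.get("label", ""), True)]


print(first_column_orderby(["name", "num"]))  # prints [('name', True)]
```

This is why `query_obj.orderby = [(query_obj.columns[0], True)]` type-checks only when the column happens to be a string.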
I see that the Docker / docker-build (ci) step is succeeding, but I don't see a corresponding image on Docker Hub: https://hub.docker.com/r/apache/superset/tags Am I missing it when I look and search, or is it not being pushed to Docker Hub? Other PR images seem to be there. Having the image would let me deploy it and test thoroughly.
I think that's correct: PRs build the docker image (as a bit of a smoke test) but don't push it to Docker Hub (to reduce clutter). I think @mistercrunch set up a command in his recent work to manually publish a branch's docker image, but I can't recall what it is :)
Ah indeed, the PR whose image I see on Docker Hub is one from @mistercrunch: https://github.com/apache/superset/pull/27465 Max, is there a command I can use to push a PR branch image to Docker Hub?
@sfirke as far as I know there isn't a way to do this in this context (PR from a fork) because of potential security issues. Docker creds just aren't available in this context, if they were, an attacker could push arbitrary code to the latest image for instance.
You should be able to run docker build locally at will, though. Is there any reason you want CI to build it for you?
That makes sense, thanks for explaining how it works & why those PR images don't get pushed.
Re: motive, at one point my local setup was broken and docker build was failing, so I deployed the image from Docker Hub and got into that habit. But I have since repaired my local setup and will do as you suggest.
I wonder, would it work on your fork if you added your own Docker Hub secrets/credentials to your own GitHub fork? Most of the actions seem to run (or not) based on a check for whether a secret is available, so I can't help but wonder what happens if a matching secret IS available.
supersetbot acts as a neat wrapper around docker build and could be expanded to support other hub repos. The main win from using the wrapper is that it sets the read cache to the location used by master builds, meaning you potentially have a lot less to build locally. It also has a notion of build "presets" that point to combinations of layers and settings.
$ nvm use 20
$ npm i -g supersetbot
$ supersetbot docker --help
Usage: supersetbot docker [options]
Generates/run docker build commands use in CI
Options:
-t, --preset Build preset
-c, --context <context> Build context (default: "local")
-r, --context-ref <ref> Reference to the PR, release, or branch
-p, --platform <platform...> Platforms (multiple values allowed)
-f, --force-latest Force the "latest" tag on the release
-v, --verbose Print more info
-h, --help display help for command
$ supersetbot docker --dry-run
docker buildx build \
--load \
-t apache/superset:8ed368c1fe624e6288d9270997098732d2289f7c-arm \
-t apache/superset:8ed368c-arm \
--cache-from=type=registry,ref=apache/superset-cache:3.9-slim-bookworm \
\
--target lean \
--build-arg PY_VER=3.9-slim-bookworm \
--platform linux/arm64 \
--label sha=8ed368c1fe624e6288d9270997098732d2289f7c \
--label target=lean \
--label build_trigger= \
--label base=3.9-slim-bookworm \
--label build_actor=undefined \
.
notice the --cache-from
I'm stuck; it's unclear to me why this test is failing:
=================================== FAILURES ===================================
___________ TestPostChartDataApi.test_chart_data_applied_time_extras ___________
self = <tests.integration_tests.charts.data.api_tests.TestPostChartDataApi testMethod=test_chart_data_applied_time_extras>
@pytest.mark.usefixtures("load_birth_names_dashboard_with_slices")
def test_chart_data_applied_time_extras(self):
"""
Chart data API: Test chart data query with applied time extras
"""
self.query_context_payload["queries"][0]["applied_time_extras"] = {
"__time_range": "100 years ago : now",
"__time_origin": "now",
}
rv = self.post_assert_metric(CHART_DATA_URI, self.query_context_payload, "data")
> self.assertEqual(rv.status_code, 200)
E AssertionError: 400 != 200
And why is test-postgres-presto passing but test-postgres-hive not? Update: is that test just broken on the master branch right now? I see it failed on #27520 😭