MODIS L2 available datasets
Adds the `available_datasets` method to the MODIS Level 2 reader so that datasets of arbitrary MODIS Level 2 product files which are not specified in the reader's YAML file can be discovered, as suggested in https://github.com/pytroll/satpy/pull/812#pullrequestreview-248164014.
Currently only datasets with a 2D shape are added, and bit-encoded fields are not decoded.
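For context, satpy file handlers expose an `available_datasets(configured_datasets=None)` hook that yields `(is_available, dataset_info)` tuples. The sketch below is a simplified standalone version of the idea described above, not the PR's actual code: file access is faked with a plain dict of variable shapes, and names such as `mod35_l2` are illustrative.

```python
def available_datasets(file_variables, configured_datasets=None):
    """Yield (is_available, info) for configured and dynamically found datasets.

    ``file_variables`` maps variable names to array shapes, standing in for
    the HDF-EOS file contents in this sketch.
    """
    handled = set()
    # Pass through datasets already configured in the YAML, marking
    # whether this particular file actually contains them.
    for is_avail, info in (configured_datasets or []):
        handled.add(info["name"])
        yield (is_avail or info["name"] in file_variables), info
    # Add any 2D variables found in the file that the YAML did not
    # mention (bit-encoded fields are left undecoded).
    for name, shape in file_variables.items():
        if name in handled or len(shape) != 2:
            continue
        yield True, {"name": name, "file_type": "mod35_l2"}
```

A 3D field like a quality-assurance array is skipped by the 2D check, and a variable already known from the YAML is not yielded a second time.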
There are lots of Level 2 products out there, so this might not yet work in every case. I tested a couple of products, which worked fine. I am not sure how to implement mock tests covering all these different files because I am not very experienced with writing tests using mock. If someone can help me out or give me some hints, I can implement those so that this can eventually be merged.
Also, there is a problem with the output of `available_dataset_ids` of `Scene` when it is used with the `cloud_mask` dataset of the MOD35_L2 files, which are already specified in the YAML file. While the dataset can be loaded, it is not shown as available when `available_dataset_ids` is invoked.
The reason for this, as far as I can see, is that this method returns the `available_ids` keys in the `FileYAMLReader` here https://github.com/pytroll/satpy/blob/35739e4b56546a9366bf544153e435a42fc1f5b4/satpy/readers/yaml_reader.py#L282
and not the `all_ids` attribute, where all of them are included. My guess is that `available_ids` should also be updated with the contents of `all_ids` in https://github.com/pytroll/satpy/blob/35739e4b56546a9366bf544153e435a42fc1f5b4/satpy/readers/yaml_reader.py#L540
Not directly related to the main purpose of this PR, I also added `add_offset` to the `scale_factor` conversion in `hdfeos_base`, which was missing. Currently data is read with pyhdf; maybe the reading could be changed to xarray, which would do this conversion automatically and also read the other attributes, which might be desirable?
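For reference, a minimal sketch of the kind of unscaling being discussed, assuming the CF convention (`physical = raw * scale_factor + add_offset`); note that some MODIS products instead document `scale * (counts - offset)`, so which form applies depends on the product. The function name and fill-value handling here are illustrative, not `hdfeos_base`'s actual code:

```python
import numpy as np

def unscale(raw, scale_factor=1.0, add_offset=0.0, fill_value=None):
    """Convert stored integer counts to physical values (CF convention)."""
    data = raw.astype(np.float64)
    if fill_value is not None:
        # Mask fill values before scaling so they stay invalid.
        data[raw == fill_value] = np.nan
    # CF convention: physical = raw * scale_factor + add_offset
    return data * scale_factor + add_offset
```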
- [ ] Tests added and test suite added to parent suite
- [ ] Tests passed
- [ ] Passes `flake8 satpy`
- [ ] Fully documented
- [ ] Add your name to `AUTHORS.md` if not there already
> Currently data is read with pyhdf; maybe the reading could be changed to xarray, which would do this conversion automatically and also read the other attributes, which might be desirable?

Can xarray use pyhdf?
As for the all versus available distinction: the `available_dataset_ids` are only the datasets that are available in the files provided, so they should not be merged with the "all" datasets (which are all of the possible datasets we know about, regardless of the files provided).
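In other words, `available_ids` is a filtered view of `all_ids`, not something to merge back into it. A minimal sketch of that relationship (the function and argument names are hypothetical, not satpy's actual internals):

```python
def filter_available(all_ids, names_in_files):
    """Keep only the YAML-known datasets actually present in the files.

    ``all_ids`` maps dataset names to their YAML metadata;
    ``names_in_files`` is the set of names the file handlers report.
    """
    return {name: meta for name, meta in all_ids.items()
            if name in names_in_files}
```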
Let me check your implementation.
> > Currently data is read with pyhdf; maybe the reading could be changed to xarray, which would do this conversion automatically and also read the other attributes, which might be desirable?
>
> Can xarray use pyhdf?

I guess it uses pyhdf in the background, because I can open MODIS HDF files with it.
I wouldn't be surprised if it is actually using NetCDF to read the HDF4 file, via the NetCDF C library compiled with the HDF4 C library.
DeepCode's analysis on #c6332c found:
- :warning: 1 warning, :information_source: 38 minor issues. :point_down:
- :heavy_check_mark: 31 issues were fixed.
Top issues
| Description | Example fixes |
|---|---|
| Defining only `__eq__` but not `__ne__` will result in a Python 2 error if objects are compared with inequality. | :wrench: Example fixes |
| Unused `CRS` imported from pyproj. | :wrench: Example fixes |
| Statement seems to have no effect. | :wrench: Example fixes |
It looks like git got the best of this pull request, with 2558 commits being shown. I can't even get GitHub to load all the file changes to show me what changed in the `hdfeos_base.py` file. @BENR0 How do you feel about this PR? Do you want me to try to extract the old history from before the merge with main corrupted things?
@djhoese Yeah, I could not remember what I did that messed things up (I didn't even notice until your comment).
I reset to the appropriate commit, merged main, and force pushed, which seems to have fixed the weird number of commits.
Nice job. I'm still a little concerned by the file pattern that was added, but otherwise, quickly glancing at the `available_datasets` method, it seems pretty good. However, I think it needs to be updated to make sure the variable that it found in the file isn't one that was already configured in the YAML, right?
> However, I think it needs to be updated to make sure the variable that it found in the file isn't one that was already configured in the YAML, right?

That rings a bell, but I can't remember. To be honest, I just fixed the commit issue. I have to look at this a little closer again.
I don't know why the NetCDF writer tests are failing; I didn't touch anything related.
Merge with main and they'll work again; there was another xarray release without the compression kwarg fixes. See https://github.com/pytroll/satpy/pull/2674
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Comparison is base (4c30838) 95.39% compared to head (ee32f84) 95.39%.
Additional details and impacted files
```
@@           Coverage Diff           @@
##             main     #913   +/-   ##
=======================================
  Coverage   95.39%   95.39%
=======================================
  Files         371      371
  Lines       52683    52728      +45
=======================================
+ Hits        50256    50301      +45
  Misses       2427     2427
```
| Flag | Coverage Δ | |
|---|---|---|
| behaviourtests | 4.15% <0.00%> (-0.01%) | :arrow_down: |
| unittests | 96.01% <100.00%> (+<0.01%) | :arrow_up: |
Flags with carried forward coverage won't be shown. Click here to find out more.
:umbrella: View full report in Codecov by Sentry.
Pull Request Test Coverage Report for Build 7248787685
- 48 of 48 (100.0%) changed or added relevant lines in 3 files are covered.
- No unchanged relevant lines lost coverage.
- Overall coverage increased (+0.003%) to 95.958%
| Totals | |
|---|---|
| Change from base Build 7246279156: | 0.003% |
| Covered Lines: | 50427 |
| Relevant Lines: | 52551 |
💛 - Coveralls
Darn, I am pretty sure I did write tests at some point, but I can't find/recover them. I guess they got lost in the git mix-up when I re-merged main and force pushed :-/. I will add some tests again.