Validation: hydrostatic water column
Supersedes #441.
I will recreate the JSON files with a lower number of points.
This looks very promising! Is there a way to augment these results with reference data from the literature within the same plots?
AFAIK, this is from Sun 2019 and also O'Connor 2021. The problem is that EDAC or a simple DDT can't deal with steady states: over time, the particles will diffuse. That was my observation when I was setting up this example.
As mentioned in the setup, I have used the setup from O'Connor 2021. The error magnitude data is extracted from Fig. 8.
> AFAIK, this is from Sun 2019 and also O'Connor 2021. The problem is that EDAC or a simple DDT can't deal with steady states: over time, the particles will diffuse. That was my observation when I was setting up this example.
Not within the timeframe from 0.0 to 1.0. So this works fine with the same setup that O'Connor shows.
> As mentioned in the setup, I have used the setup from O'Connor 2021. The error magnitude data is extracted from Fig. 8.
Ah, the "reference" is already literature data? Nice! Would be awesome to get a second data point that can be compared directly with the literature.
> As mentioned in the setup, I have used the setup from O'Connor 2021. The error magnitude data is extracted from Fig. 8.
>
> Ah, the "reference" is already literature data? Nice! Would be awesome to get a second data point that can be compared directly with the literature.
Yes, it is. It just takes a long time. The next closest data point takes 6-8 hours per model.
This might be a really nice validation for a multi-resolution FSI example. As @svchb mentioned, it takes a long time. This is because the tall water column, sampled directly with particles, is essentially in a steady state.
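As background on what "steady state" means for this test (a minimal sketch with assumed values, not this PR's setup files): the reference solution for a hydrostatic column is just the analytic pressure profile p(y) = ρ g (H − y). The density, gravity, and column height below are generic assumptions for illustration only.

```python
# Minimal sketch of the analytic hydrostatic reference (assumed values,
# not the actual parameters of this validation case):
# rho = 1000 kg/m^3, g = 9.81 m/s^2, column height H = 1.0 m.

def hydrostatic_pressure(y, H=1.0, rho=1000.0, g=9.81):
    """Analytic steady-state pressure at height y above the bottom."""
    return rho * g * (H - y)

p_bottom = hydrostatic_pressure(0.0)   # ~9810 Pa at the bottom
p_surface = hydrostatic_pressure(1.0)  # 0 Pa at the free surface
```

Since this profile is time-independent, any particle drift away from it over the simulated interval is purely numerical, which is what makes the case a useful (if slow) validation.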
This is not working with even particle resolutions, and I'm not sure why. I fixed the problem with determining the midpoint, but something else is still not correct...
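One plausible pitfall with even resolutions (a hypothetical sketch of the kind of issue described above, not the actual bug): with an even particle count, no particle center coincides with the domain midpoint, so any logic that expects an exact midpoint particle silently picks an off-center one.

```python
# Hypothetical illustration: particle centers on a 1D column of given width.
# With an odd count, one center lands exactly on the midpoint; with an even
# count, the nearest centers sit half a spacing away on either side.

def particle_centers(n, width=1.0):
    """Centers of n equally spaced particles spanning [0, width]."""
    dx = width / n
    return [(i + 0.5) * dx for i in range(n)]

mid = 0.5
odd = particle_centers(5)   # contains a center at exactly 0.5
even = particle_centers(4)  # nearest centers are 0.375 and 0.625
```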
Codecov Report
:x: Patch coverage is 85.71429% with 1 line in your changes missing coverage. Please review.
:white_check_mark: Project coverage is 70.49%. Comparing base (ff9b637) to head (8564459).
:warning: Report is 12 commits behind head on main.
| Files with missing lines | Patch % | Lines |
|---|---|---|
| ...rc/schemes/fluid/weakly_compressible_sph/system.jl | 0.00% | 1 Missing :warning: |
Additional details and impacted files
```
@@           Coverage Diff           @@
##             main     #724   +/-   ##
=======================================
  Coverage   70.49%   70.49%
=======================================
  Files         106      106
  Lines        7033     7033
=======================================
  Hits         4958     4958
  Misses       2075     2075
```
| Flag | Coverage Δ | |
|---|---|---|
| unit | 70.49% <85.71%> (ø) | |
Flags with carried forward coverage won't be shown. Click here to find out more.
:umbrella: View full report in Codecov by Sentry.
Why are the errors larger than in the reference paper? Shouldn't they be the same? Are we not using the same methods?
Also, is there any point in having this relative error plot? I don't think it adds any information here.
Well, it does in the sense that the absolute error is quite small, but the relative error shows that it is not actually that small compared to the solution.
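To illustrate the point being made (a generic sketch with made-up numbers, not the plotting code or data from this PR): the absolute error can look tiny on a pressure scale of thousands of pascals while still being a noticeable fraction of the local solution.

```python
# Hypothetical simulated vs. reference pressure values (made-up numbers,
# purely to illustrate the two error measures discussed above).
p_sim = [9805.0, 7350.0, 4910.0, 2470.0, 5.0]
p_ref = [9810.0, 7357.5, 4905.0, 2452.5, 0.0]

abs_error = [abs(s - r) for s, r in zip(p_sim, p_ref)]

# Relative error: normalize by the reference magnitude. Near the free
# surface the reference pressure tends to zero, so the relative error is
# only meaningful where the reference is not (close to) zero.
rel_error = [e / abs(r) for e, r in zip(abs_error, p_ref) if abs(r) > 1e-12]

print(max(abs_error))  # small on the absolute pressure scale...
print(max(rel_error))  # ...but a measurable fraction of the solution
```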
> Why are the errors larger than in the reference paper? Shouldn't they be the same? Are we not using the same methods?
@efaulhaber No, we are not using the same method. I think they use PST+TIC. So I guess we can try this now?
I see. I think they're using a different kind of shifting, and I don't think they're using TIC in DualSPHysics, but we'll have to check the paper and probably some others and their code as well to know for sure.
> I see. I think they're using a different kind of shifting, and I don't think they're using TIC in DualSPHysics, but we'll have to check the paper and probably some others and their code as well to know for sure.
@efaulhaber I will try with PST and TIC. I don't think we have to match this so closely if we don't want to validate their exact method.
If we use corrections like they do, we are more in line with the reference paper:
Current version:
> Also, is there any point in having this relative error plot? I don't think it adds any information here.