[WIP] Feature multilayer perceptron
Proposed Changes
Addition of a multi-layer perceptron class that can be used to evaluate trained multi-layer perceptrons in processes such as thermodynamic state evaluation in data-driven fluid models.
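For context, evaluating a trained MLP amounts to a sequence of dense-layer forward passes. A minimal sketch, assuming a simple row-major weight matrix and a tanh activation (the names here are illustrative, not the actual SU2 interface):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// Evaluate one dense layer: out_i = tanh(sum_j W[i][j] * in[j] + b[i]).
std::vector<double> EvalLayer(const Matrix& W, const std::vector<double>& b,
                              const std::vector<double>& in) {
  std::vector<double> out(W.size());
  for (size_t i = 0; i < W.size(); ++i) {
    double sum = b[i];
    for (size_t j = 0; j < in.size(); ++j) sum += W[i][j] * in[j];
    out[i] = std::tanh(sum);  // tanh chosen as an example activation
  }
  return out;
}
```

A full network evaluation would chain `EvalLayer` calls, feeding each layer's output into the next.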
Related Work
PR Checklist
Put an X by all that apply. You can fill this out after submitting the PR. If you have any questions, don't hesitate to ask! We want to help. These are a guide for you to know what the reviewers will be looking for in your contribution.
- [X] I am submitting my contribution to the develop branch.
- [X] My contribution generates no new compiler warnings (try with --warnlevel=3 when using meson).
- [ ] My contribution is commented and consistent with SU2 style (https://su2code.github.io/docs_v7/Style-Guide/).
- [ ] I have added a test case that demonstrates my contribution, if necessary.
- [ ] I have updated appropriate documentation (Tutorials, Docs Page, config_template.cpp), if necessary.
This pull request introduces 6 alerts when merging 937053653902cac3508fddbe1859128e6caffb82 into 45214cddb5a5819f0acef68e6316c4dca54ea5b3 - view on LGTM.com
new alerts:
- 4 for Resource not released in destructor
- 1 for Non-virtual destructor in base class
- 1 for Constant return type on member
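For reference, the three alert categories above have standard C++ remedies: let containers own their storage, give base classes a virtual destructor, and drop meaningless `const` on by-value returns. A hedged sketch with hypothetical class names (not taken from this PR):

```cpp
#include <cassert>
#include <memory>
#include <vector>

class CLayerBase {
 public:
  virtual ~CLayerBase() = default;  // virtual destructor in a polymorphic base

  // Plain 'double' return: a const value return has no effect,
  // which is what the "Constant return type on member" alert flags.
  double Activation(double x) const { return x > 0.0 ? x : 0.0; }
};

class CDenseLayer : public CLayerBase {
  // std::vector releases its storage automatically, so nothing can
  // leak from the destructor ("Resource not released in destructor").
  std::vector<double> weights_;

 public:
  explicit CDenseLayer(size_t n) : weights_(n, 0.0) {}
};
```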
Do you want to introduce this first as a standalone library, and then start using it for fluid models?
@pcarruscag Yes, I want to introduce the multilayer perceptron as an option for users and developers, for applications such as data-driven fluid models. Do you think it would be good to add a template CFluidModel child class demonstrating how the MLP class can be used to create data-driven fluid models (apart from writing a tutorial, of course)?
Sounds good. Initially it would be enough to have some unit tests that already show how to set up the network, together with documentation / an example of the file format. Applications can come after.
Some initial comments: Please move the files to toolboxes/ and ideally use a namespace for the new classes. Start the class names with C as we do, e.g. CIOMap.
Then, how large are the models you've used so far? And how important is performance to this feature? (Just so I know how much to comment on that.)
Of course, I'll provide an example of an MLP input file, as well as a Python script I wrote that translates an MLP trained through TensorFlow into such an input file.
Very well, I'll move the files from the numerics folder to the common folder and change names accordingly.
The models I used so far had between 5 and 50 perceptrons and up to 15 layers. Performance is quite important, as evaluating MLPs is generally more computationally expensive than, for example, look-up tables. The larger the MLP architecture, the more costly evaluation becomes, of course. Any improvement to the computation speed is therefore welcome. In terms of memory, the MLPs don't seem to be an issue so far.
This pull request introduces 3 alerts when merging 47b456cb3d65ce3f29a6521e947caec819eb8335 into 88c8392ff1028af640a8b0bc90304367ea45ded3 - view on LGTM.com
new alerts:
- 3 for Resource not released in destructor
This pull request introduces 3 alerts when merging 88f0012afd0ac8f2b10778faadd0c0af4e8507d9 into d32ccec357c855fd8843b32cf52381631e62515d - view on LGTM.com
new alerts:
- 3 for Resource not released in destructor
This pull request introduces 3 alerts when merging 04813b107392ff542fc4fb7577e30bd2a54bf7e5 into 124795b612bbce076850722720ba28cfc041e436 - view on LGTM.com
new alerts:
- 3 for Resource not released in destructor
This pull request introduces 3 alerts when merging 112876da64657676a5ed1c2a2f3a60e934f59009 into 7132d99e9e794094b5c2b924605f0b1878ac21e2 - view on LGTM.com
new alerts:
- 3 for Resource not released in destructor
Heads-up: LGTM.com's PR analysis will be disabled on the 5th of December, and LGTM.com will be shut down completely on the 16th of December 2022. Please enable GitHub code scanning, which uses the same CodeQL engine that powers LGTM.com. For more information, please check out our post on the GitHub blog.
This pull request introduces 3 alerts when merging e8a3a93e987c38e54e98b86c9f35d4830d3c90c9 into 7132d99e9e794094b5c2b924605f0b1878ac21e2 - view on LGTM.com
new alerts:
- 3 for Resource not released in destructor
This pull request introduces 4 alerts when merging 0a565343307aa6a5a7f012f2779cf5c02ad4e8b7 into 7132d99e9e794094b5c2b924605f0b1878ac21e2 - view on LGTM.com
new alerts:
- 2 for Resource not released in destructor
- 2 for 'new' object freed with 'delete[]'
This pull request introduces 3 alerts when merging 76194ecc4154ee6569a3420178c5af801478bfc9 into 9f082de9b9090188a820688cd30458da0fce5661 - view on LGTM.com
new alerts:
- 3 for Resource not released in destructor
This pull request introduces 3 alerts when merging 68d02ae7036ef8fa5d93cfb6a6bc47aa9a280d4d into 9f082de9b9090188a820688cd30458da0fce5661 - view on LGTM.com
new alerts:
- 3 for Resource not released in destructor
This pull request introduces 1 alert when merging 1a40d51afd58dd01d28189e06c9337010d850af9 into 9f082de9b9090188a820688cd30458da0fce5661 - view on LGTM.com
new alerts:
- 1 for Resource not released in destructor
This pull request introduces 2 alerts when merging 054a8ff9443814b08fdae1d342537f549b341adf into 9f082de9b9090188a820688cd30458da0fce5661 - view on LGTM.com
new alerts:
- 1 for Resource not released in destructor
- 1 for 'new[]' array freed with 'delete'
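As an aside, the `'new[]'`/`'delete'` mismatch the alert refers to is undefined behavior in C++: storage from `new[]` must be released with `delete[]`. A hedged sketch of the fix, plus the container-based alternative that makes the pairing impossible to get wrong:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Raw-pointer version: the pairing must be maintained by hand.
double SumRaw(size_t n) {
  double* buf = new double[n];  // allocated with new[]
  for (size_t i = 0; i < n; ++i) buf[i] = 1.0;
  double s = 0.0;
  for (size_t i = 0; i < n; ++i) s += buf[i];
  delete[] buf;  // must be delete[], not delete
  return s;
}

// Preferred: let std::vector manage the lifetime.
double SumVector(size_t n) {
  std::vector<double> buf(n, 1.0);
  double s = 0.0;
  for (double v : buf) s += v;
  return s;
}
```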
Hi Pedro, thanks for your reply. I don't know how to safely typedef the mlpdouble type the way you suggested. I thought you meant that I just had to copy the way su2double was defined in code_config.hpp, but I guess that's not right. Do you know of an example showing how this works?
In your type-config file do:

```cpp
#ifdef MLP_CUSTOM_TYPE
using mlpdouble = MLP_CUSTOM_TYPE;
#else
using mlpdouble = double;
#endif
```

In SU2, before the first MLP include, do:

```cpp
#define MLP_CUSTOM_TYPE su2double
#include ...
```
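To see the override pattern in action, here is a self-contained sketch where `float` stands in for `su2double` (in the real layout the `#ifdef` block lives in the MLP type-config header and the `#define` in SU2, before the first MLP include):

```cpp
#include <type_traits>

// Including code injects its type before the type-config header is seen.
#define MLP_CUSTOM_TYPE float  // stand-in for su2double

// --- contents of the MLP type-config header ---
#ifdef MLP_CUSTOM_TYPE
using mlpdouble = MLP_CUSTOM_TYPE;
#else
using mlpdouble = double;  // default when no custom type is injected
#endif
// ----------------------------------------------

static_assert(std::is_same<mlpdouble, float>::value,
              "mlpdouble follows the type injected by the including code");
```

Without the `#define`, `mlpdouble` falls back to plain `double`, so the MLP code compiles standalone as well.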
Thanks Pedro!
@EvertBunschoten I suggest to first finish and close this one before moving on to the hydrogen flamelets PR
I'll ignore the request for review the same way you ignored my questions *shrug*
Could you clarify your questions? I sent you a message on Slack.