Updating to newer python
Pull Request Description
What issue does this change request address?
#1748
What are the significant changes in functionality due to this change request?
Updating dependencies to support a newer Python version.
For Change Control Board: Change Request Review
The following review must be completed by an authorized member of the Change Control Board.
- [ ] 1. Review all computer code.
- [ ] 2. If any changes occur to the input syntax, there must be an accompanying change to the user manual and xsd schema. If the input syntax change deprecates existing input files, a conversion script needs to be added (see Conversion Scripts).
- [ ] 3. Make sure the Python code and commenting standards are respected (camelBack, etc.). See the wiki for details.
- [ ] 4. Automated Tests should pass, including run_tests, pylint, manual building and xsd tests. If there are changes to Simulation.py or JobHandler.py, the qsub tests must pass.
- [ ] 5. If significant functionality is added, there must be tests added to check this. Tests should cover all possible options. Multiple short tests are preferred over one large test. If new development on the internal JobHandler parallel system is performed, a cluster test must be added setting, in the <RunInfo> XML block, the node <internalParallel> to True.
- [ ] 6. If the change modifies or adds a requirement or a requirement-based test case, the Change Control Board's Chair or designee also needs to approve the change. The requirements and the requirements tests shall be in sync.
- [ ] 7. The merge request must reference an issue. If the issue is closed, the issue close checklist shall be done.
- [ ] 8. If an analytic test is changed/added, is the analytic documentation updated/added?
- [ ] 9. If any test used as a basis for documentation examples (currently found in raven/tests/framework/user_guide and raven/docs/workshop) has been changed, the associated documentation must be reviewed to ensure the text matches the example.
Hm, the windows tests are failing because cbcpy from LOGOS is not available for python > 3.7.
The other typical failures are:
FAILED:
Diff tests/framework/Samplers/AdaptiveBatch/ThickenedBand
Failed tests/framework/unit_tests/utils/Debugging
Diff tests/framework/ROM/tensorflow_keras/tf_cnn1d
Diff tests/framework/ROM/tensorflow_keras/tf_lstm
Diff tests/framework/ROM/tensorflow_keras/tf_mlpr
Diff tests/framework/ROM/tensorflow_keras/LSTMFromFile
Basically, these need to be regolded.
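For context, regolding here just means replacing the stored gold output with the newly generated result once the diff has been judged acceptable. A minimal sketch of that idea is below; the helper name and paths are illustrative assumptions, not RAVEN's actual regold tooling.

```python
# Hypothetical regold helper (names and paths are illustrative, not RAVEN's
# actual tooling): overwrite the gold copy of a test output with the freshly
# generated one, after the diff has been reviewed by hand.
import shutil
from pathlib import Path

def regoldOutput(testDir, outputName):
  """Copy the newly generated output file over its gold counterpart."""
  newFile = Path(testDir) / outputName
  goldFile = Path(testDir) / 'gold' / outputName
  shutil.copyfile(newFile, goldFile)

# Example usage (the test directory and file name are assumptions):
# regoldOutput('tests/framework/ROM/tensorflow_keras/tf_lstm', 'outLSTM.csv')
```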
Hm, the windows tests are failing because cbcpy from LOGOS is not available for python > 3.7.
I couldn't find anywhere that cbcpy is actually used. I think it was intended as a workaround for installing the CBC solver on Windows to work with pyomo, since conda doesn't support a Windows installation. Either the dependency should be removed (easier) and the use of CBC on Windows abandoned (unless the user goes through the process of installing CBC from the binaries like me, which is not recommended), or additional work should be done to integrate cbcpy with pyomo (more work).
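If the cbcpy dependency is dropped, pyomo can still use a CBC executable when one happens to be installed. A minimal sketch of guarding on that, using pyomo's standard SolverFactory interface (this is not LOGOS code, and the fallback behavior is just an assumption):

```python
# Sketch (not LOGOS code): only route the model through CBC if pyomo can find
# a working CBC executable; otherwise fall back or skip on this platform.
from pyomo.environ import SolverFactory

cbc = SolverFactory('cbc')
if cbc.available(exception_flag=False):
  # e.g. results = cbc.solve(model, load_solutions=False, tee=True)
  print('CBC executable found; pyomo can use it directly.')
else:
  print('CBC not available; skip it or use another installed solver.')
```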
Considering that the result of running the LOGOS windows tests with cbcpy under Python 3.7 is:
08-Aug-22 09:57:38 pyomo.opt ERROR Solver (cbc) returned non-zero return code (3221225781)
Traceback (most recent call last):
  File "C:\Git\opt\build_root\raven\plugins\LOGOS\src\logos_main.py", line 72, in <module>
    modelInstance.run()
  File "C:\Git\opt\build_root\raven\plugins\LOGOS\src\CapitalInvestments\PyomoModels\ModelBase.py", line 294, in run
    results = opt.solve(model, load_solutions=False, tee=self.tee, **{'use_signal_handling':False})
  File "C:\Users\cogljj\.conda\envs\raven_libraries\lib\site-packages\pyomo\opt\base\solvers.py", line 597, in solve
    "Solver (%s) did not exit normally" % self.name)
pyomo.common.errors.ApplicationError: Solver (cbc) did not exit normally
Dropping it is definitely worth trying.
This pull request depends on https://github.com/idaholab/LOGOS/pull/42
idaholab/LOGOS#42 has now been merged, so this is ready for review.
Job Test qsubs sawtooth on 3b8e016: invalidated by @joshua-cogliati-inl
FAILED: Diff tests/cluster_tests/AdaptiveSobol/test_parallel_adaptive_sobol
The tensorflow data still follows the general trend:
I agree that it would be nice if it were more stable.
I am not sure why the Adaptive batch result is different, but the sampler returned different values, which resulted in a somewhat different limit surface. (If the limit surface were perfect, both would be circles.)

The result for tensorflow looks good to me. For the other test, which is the original result for the limit surface test, the orange one? I'm curious why the samples changed. Do you see any difference in library versions?
Yes, the orange was the original. I am not sure why the samples changed. The adaptive batch test doesn't seem to be using tensorflow, and the other libraries (including scipy 1.5.3 and scikit-learn 0.24.2) are generally the same.
Both results look reasonable to me, but if you have some extra time I would recommend taking a deeper look at the code to figure out the cause of the difference.
I haven't figured out the exact cause yet, but it is in Samplers/LimitSurfaceSearch.py
More details: in Samplers/LimitSurfaceSearch.py it is finding different values for self.surfPoint[maxGridId][maxId,:] in this block:
if self.surfPoint is not None and len(self.surfPoint) > 0:
  if self.batchStrategy == 'none':
    self.__scoreCandidates()
    maxDistance, maxGridId, maxId = 0.0, "", 0
    for key, value in sorted(self.invPointPersistence.items()):
      if key != self.exceptionGrid and self.surfPoint[key] is not None:
        localMax = np.max(self.scores[key])
        if localMax > maxDistance:
          maxDistance, maxGridId, maxId = localMax, key, np.argmax(self.scores[key])
    if maxDistance > 0.0:
      for varIndex, _ in enumerate([key.replace('<distribution>','') for key in self.axisName]):
        self.values[self.axisName[varIndex]] = copy.copy(float(self.surfPoint[maxGridId][maxId,varIndex]))
        self.inputInfo['SampledVarsPb'][self.axisName[varIndex]] = self.distDict[self.axisName[varIndex]].pdf(self.values[self.axisName[varIndex]])
        self.inputInfo['ProbabilityWeight-'+self.axisName[varIndex]] = self.distDict[self.axisName[varIndex]].pdf(self.values[self.axisName[varIndex]])
      varSet=True
    else:
      self.raiseADebug('Maximum score is 0.0')
For the places where there are differences, self.scores has multiple values tied for the maximum, so which one is found can change.
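As a standalone illustration (plain numpy, not RAVEN code): np.argmax returns the first index holding the maximum, so when several candidates tie for the top score, any change in their ordering, or a tiny floating-point change in the scores themselves, changes which point is picked.

```python
# Standalone illustration of the tie behavior, not RAVEN code.
import numpy as np

scores = np.array([0.3, 0.7, 0.7, 0.5])  # two candidates tie for the maximum
print(np.argmax(scores))        # 1: the first of the tied entries is returned
print(np.argmax(scores[::-1]))  # 1 again, but in the reversed ordering that is
                                # original index 2, i.e. a different candidate
```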
Checklist is good, tests are green.