
Updating to newer Python

Open · joshua-cogliati-inl opened this pull request 3 years ago • 8 comments


Pull Request Description

What issue does this change request address?

#1748

What are the significant changes in functionality due to this change request?

Updating dependencies to update the Python version.


For Change Control Board: Change Request Review

The following review must be completed by an authorized member of the Change Control Board.

  • [ ] 1. Review all computer code.
  • [ ] 2. If any changes occur to the input syntax, there must be an accompanying change to the user manual and xsd schema. If the input syntax change deprecates existing input files, a conversion script needs to be added (see Conversion Scripts).
  • [ ] 3. Make sure the Python code and commenting standards are respected (camelBack, etc.) - See the wiki for details.
  • [ ] 4. Automated Tests should pass, including run_tests, pylint, manual building and xsd tests. If there are changes to Simulation.py or JobHandler.py the qsub tests must pass.
  • [ ] 5. If significant functionality is added, there must be tests added to check this. Tests should cover all possible options. Multiple short tests are preferred over one large test. If new development on the internal JobHandler parallel system is performed, a cluster test must be added setting, in <RunInfo> XML block, the node <internalParallel> to True.
  • [ ] 6. If the change modifies or adds a requirement or a requirement based test case, the Change Control Board's Chair or designee also needs to approve the change. The requirements and the requirements test shall be in sync.
  • [ ] 7. The merge request must reference an issue. If the issue is closed, the issue close checklist shall be done.
  • [ ] 8. If an analytic test is changed/added, is the analytic documentation updated/added?
  • [ ] 9. If any test used as a basis for documentation examples (currently found in raven/tests/framework/user_guide and raven/docs/workshop) have been changed, the associated documentation must be reviewed and assured the text matches the example.

joshua-cogliati-inl avatar Aug 08 '22 16:08 joshua-cogliati-inl

Hm, the Windows tests are failing because cbcpy from LOGOS is not available for Python > 3.7.

joshua-cogliati-inl avatar Aug 08 '22 18:08 joshua-cogliati-inl

The other typical failures are:

FAILED:
Diff tests/framework/Samplers/AdaptiveBatch/ThickenedBand
Failed tests/framework/unit_tests/utils/Debugging
Diff tests/framework/ROM/tensorflow_keras/tf_cnn1d
Diff tests/framework/ROM/tensorflow_keras/tf_lstm
Diff tests/framework/ROM/tensorflow_keras/tf_mlpr
Diff tests/framework/ROM/tensorflow_keras/LSTMFromFile

Basically, these need to be regolded.
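
For reference, regolding here just means updating the gold files that these Diff tests compare against. A minimal sketch of the idea is below (it assumes the usual gold/ directory layout next to the test outputs; the directory and file names in the example are placeholders, not the actual test outputs):

    # Hypothetical regold helper: after verifying that the diffed output is acceptable,
    # copy it over the corresponding gold file so the Diff test passes again.
    import shutil
    from pathlib import Path

    def regold(test_dir, output_name):
        """Copy test_dir/output_name over test_dir/gold/output_name."""
        src = Path(test_dir) / output_name
        dst = Path(test_dir) / "gold" / output_name
        shutil.copy2(src, dst)
        print(f"regolded {dst}")

    # Placeholder usage (not a real output file name):
    # regold("tests/framework/ROM/tensorflow_keras", "some_output.csv")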

joshua-cogliati-inl avatar Aug 08 '22 18:08 joshua-cogliati-inl

> Hm, the Windows tests are failing because cbcpy from LOGOS is not available for Python > 3.7.

I couldn't find anywhere that cbcpy is actually used. I think it was intended as a workaround for installing the CBC solver on Windows to work with pyomo, since conda doesn't support a Windows installation of CBC. I think either the dependency should be removed (easier) and the use of CBC on Windows abandoned (unless the user goes through the process of installing CBC from the binaries, as I did, which is not recommended), or additional work should be done to integrate cbcpy with pyomo (more work).
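
For reference, a minimal sketch of how pyomo picks up the CBC solver (this is a standalone illustration, not code from this PR or from LOGOS): SolverFactory('cbc') looks for a cbc executable it can run, so if the cbcpy dependency is dropped, Windows users would need to supply a CBC binary themselves.

    # Tiny pyomo model solved with CBC; checks whether pyomo can find a cbc
    # executable before trying to solve.
    from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                               SolverFactory, NonNegativeReals)

    model = ConcreteModel()
    model.x = Var(domain=NonNegativeReals)
    model.obj = Objective(expr=model.x)        # minimize x
    model.con = Constraint(expr=model.x >= 1)

    opt = SolverFactory('cbc')                 # resolves to a cbc executable on PATH
    if opt.available(exception_flag=False):
        results = opt.solve(model)
        print(results.solver.status)
    else:
        print("cbc executable not found; on Windows, CBC must be installed separately")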

dgarrett622 avatar Aug 08 '22 18:08 dgarrett622

Job Precheck on 1147bea : invalidated by @joshua-cogliati-inl

Could not resolve host: github.com

moosebuild avatar Aug 08 '22 18:08 moosebuild

Job Precheck on 1147bea : invalidated by @joshua-cogliati-inl

Could not resolve host: github.com

moosebuild avatar Aug 08 '22 19:08 moosebuild

Job Test CentOS 7 on 26af206 : invalidated by @joshua-cogliati-inl

failed in set python environment

moosebuild avatar Aug 08 '22 19:08 moosebuild

Job Mingw Test on 26af206 : invalidated by @joshua-cogliati-inl

failed in fetch

moosebuild avatar Aug 08 '22 19:08 moosebuild

> Hm, the Windows tests are failing because cbcpy from LOGOS is not available for Python > 3.7.

> I couldn't find anywhere that cbcpy is actually used. I think it was intended as a workaround for installing the CBC solver on Windows to work with pyomo, since conda doesn't support a Windows installation of CBC. I think either the dependency should be removed (easier) and the use of CBC on Windows abandoned (unless the user goes through the process of installing CBC from the binaries, as I did, which is not recommended), or additional work should be done to integrate cbcpy with pyomo (more work).

Considering that the result of running the LOGOS Windows tests with cbcpy under Python 3.7 is:

08-Aug-22 09:57:38 pyomo.opt            ERROR    Solver (cbc) returned non-zero return code (3221225781)
Traceback (most recent call last):
  File "C:\Git\opt\build_root\raven\plugins\LOGOS\src\logos_main.py", line 72, in <module>
    modelInstance.run()
  File "C:\Git\opt\build_root\raven\plugins\LOGOS\src\CapitalInvestments\PyomoModels\ModelBase.py", line 294, in run
    results = opt.solve(model, load_solutions=False, tee=self.tee, **{'use_signal_handling':False})
  File "C:\Users\cogljj\.conda\envs\raven_libraries\lib\site-packages\pyomo\opt\base\solvers.py", line 597, in solve
    "Solver (%s) did not exit normally" % self.name)
pyomo.common.errors.ApplicationError: Solver (cbc) did not exit normally

Dropping it is definitely worth trying.

joshua-cogliati-inl avatar Aug 08 '22 20:08 joshua-cogliati-inl

Job Mingw Test on fa9eb41 : invalidated by @joshua-cogliati-inl

failed in fetch

moosebuild avatar Aug 16 '22 18:08 moosebuild

This pull request depends on https://github.com/idaholab/LOGOS/pull/42

joshua-cogliati-inl avatar Aug 23 '22 13:08 joshua-cogliati-inl

Job Mingw Test on d2fdece : invalidated by @joshua-cogliati-inl

computer rebooted

moosebuild avatar Aug 24 '22 14:08 moosebuild

> This pull request depends on idaholab/LOGOS#42

Which has now been merged, so this is ready to review.

joshua-cogliati-inl avatar Aug 24 '22 18:08 joshua-cogliati-inl

Job Test Ubuntu 18 PIP on 1e0d4d1 : invalidated by @joshua-cogliati-inl

restarted civet

moosebuild avatar Aug 29 '22 17:08 moosebuild

Job Test qsubs sawtooth on 3b8e016 : invalidated by @joshua-cogliati-inl

FAILED: Diff tests/cluster_tests/AdaptiveSobol/test_parallel_adaptive_sobol

moosebuild avatar Aug 29 '22 19:08 moosebuild

The tensorflow data still follows the general trend (see the tensor_flow_comparison plot). I agree that it would be nice if it were more stable.

I am not sure why the AdaptiveBatch result is different, but the sampler returned different values, which resulted in the limit surface being somewhat different. (If the limit surface were perfect, both would be circles; see the limit_surface_comparison plot.)

joshua-cogliati-inl avatar Aug 29 '22 20:08 joshua-cogliati-inl

The result for tensorflow looks good to me. For the other test, which one shows the original results for the limit surface test, the orange one? I'm curious why the samples changed. Do you see any difference in library versions?

wangcj05 avatar Aug 29 '22 21:08 wangcj05

> The result for tensorflow looks good to me. For the other test, which one shows the original results for the limit surface test, the orange one? I'm curious why the samples changed. Do you see any difference in library versions?

Yes, the orange points were the original. I am not sure why the samples changed. The adaptive batch test doesn't seem to be using tensorflow, and the other libraries (including scipy 1.5.3 and scikit-learn 0.24.2) are generally the same.
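
(A quick, generic way to dump the library versions in each environment for comparison; this is just a diagnostic sketch, not part of the PR:)

    # Print the versions of the libraries most likely to affect the sampler results.
    import numpy, scipy, sklearn

    for mod in (numpy, scipy, sklearn):
        print(mod.__name__, mod.__version__)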

joshua-cogliati-inl avatar Aug 29 '22 22:08 joshua-cogliati-inl

Both results look reasonable to me. But if you have some extra time, I would recommend taking a deeper look at the code to figure out the cause of the difference.

wangcj05 avatar Aug 29 '22 22:08 wangcj05

> Both results look reasonable to me. But if you have some extra time, I would recommend taking a deeper look at the code to figure out the cause of the difference.

I haven't figured out the exact cause yet, but it is in Samplers/LimitSurfaceSearch.py.

joshua-cogliati-inl avatar Sep 01 '22 18:09 joshua-cogliati-inl

More details: in Samplers/LimitSurfaceSearch.py it is finding different values for self.surfPoint[maxGridId][maxId,:]:

    if self.surfPoint is not None and len(self.surfPoint) > 0:
      if self.batchStrategy == 'none':
        self.__scoreCandidates()
        maxDistance, maxGridId, maxId =  0.0, "", 0
        for key, value in sorted(self.invPointPersistence.items()):
          if key != self.exceptionGrid and self.surfPoint[key] is not None:
            localMax = np.max(self.scores[key])
            if localMax > maxDistance:
              maxDistance, maxGridId, maxId  = localMax, key,  np.argmax(self.scores[key])
        if maxDistance > 0.0:
          for varIndex, _ in enumerate([key.replace('<distribution>','') for key in self.axisName]):
            self.values[self.axisName[varIndex]] = copy.copy(float(self.surfPoint[maxGridId][maxId,varIndex]))
            self.inputInfo['SampledVarsPb'][self.axisName[varIndex]] = self.distDict[self.axisName[varIndex]].pdf(self.values[self.axisName[varIndex]])
            self.inputInfo['ProbabilityWeight-'+self.axisName[varIndex]] = self.distDict[self.axisName[varIndex]].pdf(self.values[self.axisName[varIndex]])
          varSet=True
        else:
          self.raiseADebug('Maximum score is 0.0')

In the places where there are differences, self.scores has multiple values that tie for the maximum, so which one gets picked changes.
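
To illustrate the tie behavior (a standalone numpy example, not RAVEN code): np.argmax returns the index of the first maximum, so when several candidate points share the top score, any change in the order in which the points are generated changes which one is selected.

    import numpy as np

    scoresA = np.array([0.2, 0.9, 0.9, 0.1])
    scoresB = np.array([0.9, 0.1, 0.2, 0.9])  # same score values, different ordering

    print(np.argmax(scoresA))  # 1 -> the second point wins the tie
    print(np.argmax(scoresB))  # 0 -> the first point wins the tie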

joshua-cogliati-inl avatar Sep 01 '22 20:09 joshua-cogliati-inl

Checklist is good, tests are green.

wangcj05 avatar Sep 01 '22 22:09 wangcj05