
Create benchmarks

jaantollander opened this issue • 1 comment

Benchmarks for decision programming on different kinds of influence diagrams. Here are some ideas on what to measure:

  • Hard lower bound versus soft lower bound with the positive path utility for path probability variables.
  • The effect of lazy cuts on performance.
  • The effect of limited memory influence diagrams on performance (compared to no-forgetting).
  • Performance comparison between the expected value and conditional value at risk.
  • Different Gurobi settings.
  • Memory usage might also be interesting.

Measuring performance requires randomly sampling influence diagrams with different attributes, such as the number of nodes, limited memory, and inactive chance nodes. The random.jl module is suitable for this purpose. We also need to agree on good metrics for the benchmarks.
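A benchmark loop over randomly sampled diagrams might look like the sketch below. Note that `sample_random_diagram` and `solve_diagram` are hypothetical placeholders for whatever random.jl and the solver interface actually provide; only the timing and allocation measurement is concrete.

```julia
using Statistics

# Hedged sketch: run the solver on `trials` randomly sampled diagrams
# and report median wall time and allocated bytes. The two callables
# are placeholders, not part of the DecisionProgramming.jl API.
function benchmark_solve(sample_random_diagram, solve_diagram;
                         n_nodes=5, trials=10)
    times = Float64[]
    bytes = Int[]
    for _ in 1:trials
        diagram = sample_random_diagram(n_nodes)
        t = @elapsed solve_diagram(diagram)   # wall-clock seconds
        b = @allocated solve_diagram(diagram) # heap bytes allocated
        push!(times, t)
        push!(bytes, b)
    end
    (median_time = median(times), median_bytes = median(bytes))
end
```

For publishable numbers, BenchmarkTools.jl would give more robust statistics than a single `@elapsed` per trial, and memory usage (the last bullet above) falls out of `@allocated` almost for free.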

@jandelmi mentioned analyzing the model generated by Gurobi, which might be useful here as well.

using JuMP, Gurobi

# Reach through the JuMP backend to the inner Gurobi model and dump it;
# this internal path is specific to the Gurobi.jl version in use.
backend = JuMP.backend(model)
gmodel = backend.optimizer.model.inner
Gurobi.write_model(gmodel, "gurobi_model.lp")
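As an aside, newer JuMP versions expose this directly via `write_to_file`, without reaching into solver internals, and it works even on a model with no optimizer attached (the example model here is a made-up placeholder):

```julia
using JuMP

# A trivial placeholder model, just to have something to write out.
model = Model()
@variable(model, x >= 0)
@objective(model, Min, x)

# JuMP picks the format from the file extension (.lp here).
write_to_file(model, "gurobi_model.lp")
```

This avoids the fragile `backend.optimizer.model.inner` chain, which can break across Gurobi.jl releases.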

jaantollander — Sep 04 '20 17:09