
How to extract Pareto optimal parameters in constrained MOO?

IgorKuszczak opened this issue 3 years ago • 8 comments

Hello, I am using Ax's Service API for constrained multi-objective optimisation. I have recently run into a problem where calling get_pareto_optimal_parameters() returns an error stating that this functionality is still under development for constrained optimisation. Are there any workarounds I could use to at least approximate the optimal point? I was considering applying the simple TOPSIS algorithm to the points generated with compute_posterior_pareto_frontier (excluding the out-of-design points) to find the best point across the two objectives, and then extracting the corresponding parameterisation. Is this a feasible approach? If so, how could I associate a point on the frontier with a parameterisation?
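
For concreteness, here is a rough sketch of the TOPSIS idea (equal weights, and assuming the frontier object exposes per-metric means alongside a matching param_dicts list):

  import numpy as np

  def topsis_best_index(scores, minimize_flags):
      # scores: (n_points, n_objectives); minimize_flags: one bool per objective
      norm = scores / np.linalg.norm(scores, axis=0)  # vector-normalise columns
      norm[:, minimize_flags] *= -1  # flip minimised objectives so larger is better
      ideal, anti_ideal = norm.max(axis=0), norm.min(axis=0)
      d_best = np.linalg.norm(norm - ideal, axis=1)
      d_worst = np.linalg.norm(norm - anti_ideal, axis=1)
      return int(np.argmax(d_worst / (d_best + d_worst)))

  # frontier = compute_posterior_pareto_frontier(...)
  scores = np.column_stack([frontier.means['displacement'],
                            frontier.means['weight_savings']])
  best = topsis_best_index(scores, np.array([True, False]))
  best_params = frontier.param_dicts[best]  # the corresponding parameterisation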

IgorKuszczak avatar May 03 '22 21:05 IgorKuszczak

@IgorKuszczak, could you grab the contents of this function: https://github.com/facebook/Ax/blob/main/ax/service/utils/best_point.py#L466, remove the check on outcome constraints: https://github.com/facebook/Ax/blob/main/ax/service/utils/best_point.py#L522, and see what happens? For inputs to that function, you can just do: get_pareto_optimal_parameters(ax_client.experiment, ax_client.generation_strategy).
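
Something along these lines should work once the check is gone (a sketch; the return format noted below is from memory, so double-check it):

  # Using your local copy of get_pareto_optimal_parameters, with the
  # outcome-constraint check at best_point.py#L522 removed:
  pareto = get_pareto_optimal_parameters(ax_client.experiment,
                                         ax_client.generation_strategy)
  # Assumed return format: {trial_index: (parameterization, model predictions)}
  for trial_index, (parameterization, predictions) in pareto.items():
      print(trial_index, parameterization)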

Let me know whether that works for your use case and feel free to put up a PR removing that error if things do look good!

lena-kashtelyan avatar May 03 '22 21:05 lena-kashtelyan

Also cc @sdaulton on this

lena-kashtelyan avatar May 03 '22 21:05 lena-kashtelyan

@lena-kashtelyan, I removed the outcome-constraints check and it returned an empty dictionary, with no further errors. Is this the expected behaviour here?

IgorKuszczak avatar May 03 '22 22:05 IgorKuszczak

@IgorKuszczak are you using objective thresholds in your multi-objective optimization config? If so, do any of your experiment's arms exceed the thresholds for all objectives? If no arms do, there may be zero points on the Pareto frontier. Please let me know if this looks like it may be the case.

bernardbeckerman avatar May 04 '22 16:05 bernardbeckerman

Hi @bernardbeckerman, I am using thresholds on both objectives: 0.9 for 'displacement' (minimized) and 30 for 'weight_savings' (maximized). As you can see in the plot below, there are multiple arms that exceed both objective thresholds.

[image: observed arms plotted over the two objectives, with several arms beyond both thresholds]
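
One way to double-check this directly from the client (a quick sketch; I am assuming the objectives appear as columns of get_trials_data_frame() under their metric names):

  df = ax_client.get_trials_data_frame()
  ok = df[(df['displacement'] <= 0.9) & (df['weight_savings'] >= 30)]
  print(f'{len(ok)} of {len(df)} arms satisfy both objective thresholds')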

IgorKuszczak avatar May 04 '22 18:05 IgorKuszczak

Interestingly, when I set use_model_predictions to False, the call returns all arms except those violating the outcome constraint, and they do not appear to be ordered in any particular way. For completeness, I attach a reproduction of my use case below (cross-posting from #942). In a nutshell, I generate trials sequentially and run the simulations in parallel using concurrent.futures, with a single constraint metric. The experiment has 100 trials, of which 10 are Sobol trials, and a nominal batch size of 3. Some of the entries below come from a YAML configuration file (opt_config) I made - let me know if you need any of the details.

  ## Bayesian Optimization in Service API

  import concurrent.futures

  import torch
  from ax.modelbridge.generation_strategy import GenerationStep, GenerationStrategy
  from ax.modelbridge.registry import Models
  from ax.service.ax_client import AxClient
  from ax.service.utils.instantiation import ObjectiveProperties

  NUM_OF_ITERS = opt_config['num_of_iters']
  NUM_SOBOL_STEPS = opt_config['num_sobol_steps']
  BATCH_SIZE = opt_config['batch_size']

  # Generation strategy: Sobol initialisation, then the configured model
  gs = GenerationStrategy(steps=[
      GenerationStep(model=Models.SOBOL, num_trials=NUM_SOBOL_STEPS),
      GenerationStep(model=Models[opt_config['model']], num_trials=-1,
                     max_parallelism=3),
  ])

  # Initialize the ax client
  ax_client = AxClient(generation_strategy=gs, random_seed=12345,
                       torch_device=torch.device("cuda"),
                       enforce_sequential_optimization=False,
                       verbose_logging=True)

  # Define parameters
  params = opt_config['parameters']

  # Create the experiment
  ax_client.create_experiment(
      name=opt_config['experiment_name'],
      parameters=params,
      objectives={i['name']: ObjectiveProperties(minimize=i['minimize'],
                                                 threshold=i['threshold'])
                  for i in opt_config['objective_metrics']},
      outcome_constraints=opt_config['outcome_constraints'],
      parameter_constraints=opt_config['parameter_constraints'])

  # Initialize variables used in the iteration loop
  abandoned_trials_count = 0  # counter for abandoned trials

  # Build the list of per-batch sizes: full batches plus a (possibly empty)
  # remainder batch for the Sobol phase, then the same for the remaining iters.
  SoB = divmod(NUM_SOBOL_STEPS, BATCH_SIZE)                 # Sobol steps / batch size
  ToB = divmod(NUM_OF_ITERS - NUM_SOBOL_STEPS, BATCH_SIZE)  # remaining iters / batch size

  batch_size_list = SoB[0] * [BATCH_SIZE] + [SoB[1]] + ToB[0] * [BATCH_SIZE] + [ToB[1]]

  # remove zero-sized remainder batches
  batch_size_list = [i for i in batch_size_list if i != 0]

  NUM_OF_BATCHES = len(batch_size_list)

  for i in range(NUM_OF_BATCHES):
      try:
          results = {}
          trials_to_evaluate = {}

          # Sequentially generate the batch
          for j in range(batch_size_list[i]):
              parameterization, trial_index = ax_client.get_next_trial()
              trials_to_evaluate[trial_index] = parameterization

          # Evaluate the trials in parallel; `sim` is my simulation module
          with concurrent.futures.ProcessPoolExecutor(max_workers=None) as executor:
              futures_to_idx = {
                  executor.submit(sim.get_results, parameterization, trial_index): trial_index
                  for trial_index, parameterization in trials_to_evaluate.items()
              }

              # go through completed trials and update the results dictionary
              for future in concurrent.futures.as_completed(futures_to_idx):
                  trial_index = futures_to_idx[future]
                  try:
                      results[trial_index] = future.result()
                  except Exception as e:
                      ax_client.abandon_trial(trial_index=trial_index)
                      abandoned_trials_count += 1
                      print(f'[WARNING] Abandoning trial {trial_index} due to processing errors.')
                      print(e)
                      if abandoned_trials_count > 0.1 * NUM_OF_ITERS:
                          print(f'[WARNING] Abandoned {abandoned_trials_count} trials. '
                                f'Consider improving the parametrization.')

          # update the ax_client with results
          for trial_index, raw_data in results.items():
              ax_client.complete_trial(trial_index=trial_index, raw_data=raw_data)

      except KeyboardInterrupt:
          print('Program interrupted by user')
          break
IgorKuszczak avatar May 04 '22 18:05 IgorKuszczak

It's interesting that the code returns all arms when use_model_predictions=False! Can you include the cross-validation plot for each metric? (See example 4 here.) The interact_fitted plot (example 6 from that tutorial) might be useful here as well.
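
In case it helps, roughly along these lines (a sketch based on the tutorial; it assumes the current model on the generation strategy is the fitted model):

  from ax.modelbridge.cross_validation import cross_validate
  from ax.plot.diagnostic import interact_cross_validation
  from ax.plot.scatter import interact_fitted
  from ax.utils.notebook.plotting import render

  model = ax_client.generation_strategy.model  # current fitted model
  render(interact_cross_validation(cross_validate(model)))  # example 4
  render(interact_fitted(model, rel=False))                 # example 6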

bernardbeckerman avatar May 05 '22 00:05 bernardbeckerman

I believe this is waiting on a repro, @IgorKuszczak. Is this still an active issue or can we close it?

lena-kashtelyan avatar Sep 13 '22 16:09 lena-kashtelyan

Gonna close this one for now as inactive; please reopen if you follow up, @IgorKuszczak!

lena-kashtelyan avatar Sep 20 '22 15:09 lena-kashtelyan