
[Enhancement] Callback for PegasosQSVC

Open tjdurant opened this issue 2 years ago • 4 comments

What should we add?

Hello, I'm new to the Qiskit community. I was wondering if it would be possible to add a callback function that allows users to monitor the objective function during training of PegasosQSVC - similar to what is available with VQC.

Happy to try and work on that if given some direction.

Thanks, T

tjdurant avatar Apr 07 '23 00:04 tjdurant

Hello @tjdurant, sorry for the delay and thanks for your interest in Qiskit. Yes, it is possible, but there is not much that can be exposed in such a callback. What is available:

  • iteration number
  • weighted sum over support vectors
  • and a dict of alphas

Do you know what you would like to see in the callback?
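For illustration, a user-side callback consuming those three values might look like the sketch below. The (iter_num, weighted_sum, alphas) signature is an assumption about a possible future API, not something PegasosQSVC exposes today.

```python
from typing import Dict, List, Tuple

# Collected (iteration, weighted sum, number of support vectors) triples.
history: List[Tuple[int, float, int]] = []


def log_callback(iter_num: int, weighted_sum: float,
                 alphas: Dict[int, int]) -> None:
    # Record the iteration, the kernel-weighted sum, and the number of
    # support vectors (data points with a nonzero alpha).
    history.append((iter_num, weighted_sum, len(alphas)))


# Simulate one invocation with made-up values.
log_callback(1, 0.5, {0: 1, 3: 2})
```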

In general, if we were to add a callback then we would need:

  • design the callback interface, e.g. a function like def pegasos_callback(iter_num: int, weighted_sum: float, alphas: Dict[int, int])
  • extend the constructor with a new parameter called callback that is a callable as suggested above
  • call the callback in fit
  • add unit tests
  • add documentation
  • maybe it is worth extending the Pegasos tutorial, but that can be done separately
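The first three steps above could be sketched roughly as follows. PegasosCallback, PegasosTrainerSketch, and the toy update rule are illustrative assumptions only; they do not mirror the actual PegasosQSVC implementation.

```python
from typing import Callable, Dict, Optional

# Hypothetical callback type matching the suggested signature.
PegasosCallback = Callable[[int, float, Dict[int, int]], None]


class PegasosTrainerSketch:
    """Toy stand-in for a Pegasos-style trainer with an optional callback."""

    def __init__(self, num_steps: int = 100,
                 callback: Optional[PegasosCallback] = None) -> None:
        self.num_steps = num_steps
        self.callback = callback

    def fit(self) -> Dict[int, int]:
        alphas: Dict[int, int] = {}
        for step in range(1, self.num_steps + 1):
            # ... real Pegasos update: sample one data point, compute the
            # kernel-weighted sum over support vectors, update its alpha ...
            index = step % 10          # stand-in for the sampled point
            weighted_sum = 1.0 / step  # stand-in for the kernel sum
            alphas[index] = alphas.get(index, 0) + 1
            if self.callback is not None:
                self.callback(step, weighted_sum, dict(alphas))
        return alphas


# Usage: record the per-iteration values for later inspection.
trace = []
trainer = PegasosTrainerSketch(num_steps=5,
                               callback=lambda i, w, a: trace.append((i, w)))
trainer.fit()
```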

adekusar-drl avatar Apr 10 '23 22:04 adekusar-drl

@adekusar-drl , no worries!

I think that the main thing I would want to see is the objective function value. Similar to train_loss and val_loss in traditional ML libraries.

Sounds like that might be a reach at this point, though. I'd be happy to close this and wait until we're further down the road, but I'll defer to your thoughts on it.

tjdurant avatar Apr 19 '23 14:04 tjdurant

As far as I can see from the code, the objective function is not evaluated directly. But I'm not very familiar with the algorithm, so if you feel confident, feel free to extend the implementation.

adekusar-drl avatar Apr 21 '23 10:04 adekusar-drl

@tjdurant The main advantage of PegasosQSVC is that in every iteration only one data point is "classified", as the algorithm is based on stochastic gradient descent. In contrast to classical ML methods, evaluating the objective function on the whole training/validation set is quite expensive here. Hence, calculating the train/validation loss after every iteration would slow down training drastically. Of course this is still something that would be good to have for testing/creating plots. However, if we implement a callback that provides the current loss values, these calculations should be optional and only performed if they are indeed needed for the callback.
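To make the trade-off concrete, here is a rough sketch of how a loss-reporting callback could gate the expensive computation. Evaluating the hinge loss of the current iterate needs one kernel row per support vector, so the callback only computes it on demand, e.g. every eval_every steps. The names (hinge_loss, kernel, eval_every) are assumptions, not existing PegasosQSVC API.

```python
import numpy as np


def hinge_loss(kernel: np.ndarray, labels: np.ndarray,
               alphas: dict, step: int) -> float:
    """Average hinge loss max(0, 1 - y * f(x)) of the current iterate."""
    # Pegasos decision values f(x_j) ~ (1 / t) * sum_i alpha_i y_i K(x_i, x_j)
    # (the regularization constant lambda is folded into 1 for brevity).
    decision = np.zeros(len(labels))
    for i, a in alphas.items():
        decision += a * labels[i] * kernel[i] / step
    return float(np.mean(np.maximum(0.0, 1.0 - labels * decision)))


def loss_logging_callback(step, weighted_sum, alphas,
                          kernel=None, labels=None, eval_every=50):
    # Pay the full-training-set cost only when the user has opted in,
    # and only every `eval_every` iterations.
    if kernel is not None and step % eval_every == 0:
        loss = hinge_loss(kernel, labels, alphas, step)
        print(f"step {step}: train loss = {loss:.4f}")
```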

gentinettagian avatar May 08 '23 09:05 gentinettagian