
Improve handling of invalid models in ApplySequentialClassifier

Open tcNickolas opened this issue 5 years ago • 5 comments

Describe the bug Passing an invalid model to ApplySequentialClassifier can cause unexpected and inconsistent behaviors:

  • If the index of the target qubit or one of the control qubits used in the ControlledRotation is outside of the qubits array, it will cause a runtime error (since this case is not handled explicitly).
  • If the ParameterIndex of the parameter used in the ControlledRotation is outside of the parameters array, the gate will be quietly ignored (this case is handled explicitly).

This makes errors fairly hard to find.
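
For concreteness, here is a minimal sketch of a model exhibiting both problems. The shapes of the ControlledRotation and SequentialModel constructors below are written from memory and may need adjusting against the current UDT definitions:

```qsharp
namespace Demo {
    open Microsoft.Quantum.Intrinsic;
    open Microsoft.Quantum.MachineLearning;

    operation ReproInvalidModel() : Unit {
        // Target index 5 on a 2-qubit register: fails at runtime with an
        // array index error rather than a model-validation error.
        let badTarget = ControlledRotation((5, new Int[0]), PauliY, 0);
        // Parameter index 7 with only one parameter: the gate is silently
        // skipped. (To observe this, remove badTarget, which aborts first.)
        let badParameter = ControlledRotation((1, [0]), PauliY, 7);
        let model = SequentialModel([badTarget, badParameter], [0.123], 0.0);
        using (qubits = Qubit[2]) {
            ApplySequentialClassifier(model, qubits);
            ResetAll(qubits);
        }
    }
}
```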

Expected behavior I would expect both of these cases to be handled the same way, ideally by throwing an error "The model is invalid for reason X". For better performance, we might want to validate the model closer to the entry points of the library rather than on each call to ApplySequentialClassifier.

tcNickolas avatar Jun 13 '20 18:06 tcNickolas

@tcNickolas I would love to have a go at this! I've faced the first issue myself heaps of times when I first worked with the QML library and it took me a long time to figure out the cause of it, so your suggestion here to do an early validation is great :)

Pallavi-mp avatar Oct 11 '20 04:10 Pallavi-mp

I would love to have a go at this!

Thanks for taking a look, @Pallavi-mp!

I've faced the first issue myself heaps of times when I first worked with the QML library and it took me a long time to figure out the cause of it, so your suggestion here to do an early validation is great :)

You may find the various fact functions in the Microsoft.Quantum.Diagnostics namespace helpful for checking conditions such as the ones described here. Please let me know if there's anything I can do to help!
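
For example, the checks could look something like the sketch below. This is only an illustration; the field names TargetIndex, ControlIndexes, and ParameterIndex are assumptions about the ControlledRotation UDT and may not match the actual definitions:

```qsharp
namespace Demo {
    open Microsoft.Quantum.Diagnostics;
    open Microsoft.Quantum.MachineLearning;

    // Early validation of a model's gate structure via Fact, which fails
    // with the given message whenever its condition is false.
    function ValidateModelStructure(model : SequentialModel, nQubits : Int) : Unit {
        for (gate in model::Structure) {
            Fact(gate::TargetIndex >= 0 && gate::TargetIndex < nQubits,
                $"Invalid model: target index {gate::TargetIndex} is outside the register of {nQubits} qubits.");
            for (control in gate::ControlIndexes) {
                Fact(control >= 0 && control < nQubits,
                    $"Invalid model: control index {control} is outside the register of {nQubits} qubits.");
            }
            Fact(gate::ParameterIndex >= 0 && gate::ParameterIndex < Length(model::Parameters),
                $"Invalid model: parameter index {gate::ParameterIndex} is outside the {Length(model::Parameters)} model parameters.");
        }
    }
}
```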

cgranade avatar Oct 12 '20 15:10 cgranade

@cgranade I wanted to check if my approach to solving this is okay before working on the changes. ApplySequentialClassifier is a public method exposed as part of the ML library, and there are also multiple entry points into it from other public methods, which call it multiple times for the same model. Because of this, validating the model on every call inside ApplySequentialClassifier itself would hurt performance during training. And the validation cannot be made optional, since we don't have the concept of optional parameters yet.

Would it be acceptable to introduce a new public method that validates a given model? Users could then call it explicitly when they call ApplySequentialClassifier directly, and the other entry points would call this validation method only once per model, since the model is known to be the same across their repeated calls to ApplySequentialClassifier. There are currently two public entry points, TrainSequentialClassifierAtModel and EstimateClassificationProbability, that call ApplySequentialClassifier.
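
To make the shape of the proposal concrete, the public surface could be as small as a single function that validates once and hands the model back (the names here are placeholders for discussion, not a spec):

```qsharp
namespace Demo {
    open Microsoft.Quantum.MachineLearning;

    /// Hypothetical public entry point: validate a model once, then reuse it
    /// across many calls to ApplySequentialClassifier.
    function ValidatedModel(model : SequentialModel, nQubits : Int) : SequentialModel {
        ValidateModelStructure(model, nQubits); // the Fact-based checks sketched above
        return model;
    }
}
```

TrainSequentialClassifierAtModel and EstimateClassificationProbability would then each call this once before their inner loops, so the per-gate checks wouldn't run on every classification.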

Pallavi-mp avatar Oct 15 '20 16:10 Pallavi-mp

@Pallavi-mp thank you for offering to take this on! Apologies for the late reply on our end. Do you still want to make this contribution? Regarding your suggested approach, adding a new public method to validate the model works for us. However, since we'd be adding new functionality to the API, we need to review it within our team, so we'd need some kind of spec that we can discuss.

guenp avatar Dec 07 '20 17:12 guenp

@guenp Sorry for the delayed reply, I was caught up in work. I may not be able to do justice to this in terms of spending time on a good fix, so please feel free to have anyone else pick it up if they want to. If it's still open and I find some time in the coming weeks, I'll surely work on it then!

Pallavi-mp avatar Jan 04 '21 01:01 Pallavi-mp