Allow techniques to query whether a configuration has previously been measured (and what its fitness was)
My mutators sometimes produce configurations that have already been measured. My technique would like to recognize this and try mutating again, or perhaps allow the duplicate to enter the population anyway if its fitness is good enough. It does not want to re-measure the configuration, so the technique needs to know before yielding the configuration.
This should be possible already.
search_driver.get_configuration(cfg_data) will give you a configuration object (if you call it from multiple places with the same data, you will get the same object).
Then you can do search_driver.results_query(config=...).all() to get the already-generated results.
I believe there may already be de-duplication in some places, where it won't measure the same config twice -- but I am not 100% sure about that.
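The check-before-yield pattern described above can be sketched as follows. This is a standalone illustration, not OpenTuner code: `MiniDriver` is a hypothetical stand-in for the real `SearchDriver` (whose actual calls are `search_driver.get_configuration(cfg_data)` and `search_driver.results_query(config=...).all()`), so the example runs without OpenTuner installed.

```python
class MiniDriver:
    """Stand-in for OpenTuner's SearchDriver: interns configurations
    and records measurement results."""
    def __init__(self):
        self._configs = {}   # frozen cfg_data -> interned config object
        self._results = {}   # id(interned config) -> list of fitness values

    def get_configuration(self, cfg_data):
        # Same data always yields the same interned object, as in OpenTuner.
        key = frozenset(cfg_data.items())
        return self._configs.setdefault(key, dict(cfg_data))

    def results_query(self, config):
        # Real code would be: search_driver.results_query(config=cfg).all()
        return self._results.get(id(config), [])

    def record_result(self, config, fitness):
        self._results.setdefault(id(config), []).append(fitness)


def next_unmeasured(driver, candidates, max_retries=10):
    """Mutate-and-retry loop: skip configurations that already have results."""
    for _, cfg_data in zip(range(max_retries), candidates):
        cfg = driver.get_configuration(cfg_data)
        if not driver.results_query(cfg):
            return cfg          # never measured: safe to yield
    return None                 # gave up after max_retries duplicates


driver = MiniDriver()
seen = driver.get_configuration({'x': 1})
driver.record_result(seen, fitness=0.5)

# {'x': 1} is a duplicate of `seen`, so it is skipped; {'x': 2} is fresh.
cfg = next_unmeasured(driver, [{'x': 1}, {'x': 2}])
print(cfg)  # {'x': 2}
```

Because `get_configuration` interns, duplicate detection is just an identity/query check on the returned object; the technique can decide to retry, or yield the duplicate anyway if its known fitness is good enough.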
Some existing de-duplication code is here: https://github.com/jansel/opentuner/blob/50e5eec9cf934b9117611280ef94a443eaab1f35/opentuner/search/technique.py#L256
My technique derives from AsyncProceduralSearchTechnique (under the assumption that more parallelism is available), which does not include that check. My assumption is supported by the OpenTuner technique creation tutorial: "the SequentialSearchTechnique model (note there are some other technique models which support more parallelism in running tests)".
Why is this in techniques? (I'd expect the measurement code to do this check and return the old measurement if there is one.) Is there any reason we'd deliberately want to re-measure a configuration?
You could subclass either the base class directly or AsyncProceduralSearchTechnique.
I would expect it not to matter. Usually training is dominated by testing configurations, not the search technique.
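A minimal skeleton of the subclassing route might look like the following. All class and method names here are illustrative stand-ins, not the real opentuner.search.technique API: `StubAsyncTechnique` only mimics the generator-driving behavior of AsyncProceduralSearchTechnique so the sketch is self-contained.

```python
class StubAsyncTechnique:
    """Illustrative stand-in for AsyncProceduralSearchTechnique: pulls
    configurations from main_generator() on demand."""
    def __init__(self, driver):
        self.driver = driver

    def desired_configurations(self, n):
        return [cfg for _, cfg in zip(range(n), self.main_generator())]


class MiniDriver:
    """Stand-in driver that knows which configurations were measured."""
    def __init__(self, measured):
        self._measured = {frozenset(m.items()) for m in measured}

    def get_configuration(self, cfg_data):
        return cfg_data

    def results_query(self, config):
        # Truthy iff the configuration already has results.
        return frozenset(config.items()) in self._measured


class DedupMutationTechnique(StubAsyncTechnique):
    """Yields only configurations the driver has not measured yet."""
    def __init__(self, driver, mutations):
        super().__init__(driver)
        self._mutations = iter(mutations)

    def main_generator(self):
        for cfg_data in self._mutations:
            cfg = self.driver.get_configuration(cfg_data)
            if self.driver.results_query(cfg):
                continue        # duplicate: try the next mutation instead
            yield cfg


driver = MiniDriver(measured=[{'x': 1}])
tech = DedupMutationTechnique(driver, [{'x': 1}, {'x': 2}, {'x': 3}])
print(tech.desired_configurations(2))  # [{'x': 2}, {'x': 3}]
```

In a real subclass, the de-duplication check would live inside main_generator exactly as above, with `self.driver` being the actual SearchDriver.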