
Automate running multiple models

Open JackKelly opened this issue 9 years ago • 0 comments

Each directory would be like this:

  • e92
    • e92.py (define experiment)
    • e92.h5 (costs, metrics, network weights etc)
    • e92_costs.png (multiple subplots: cross entropy, MSE, NILM metrics)
    • e92_estimates_1250epochs_3.png
  • Read .py scripts (one for each experiment) from a directory
  • Run each script in sequence.
    • Catch exceptions and log them
  • Need a better way to set max_appliance_power, on_duration, off_duration etc. Maybe these should go into the NILMTK metadata; to start with, though, just use a function in the experiment script to set the metadata manually. I'm also starting to think that we should use real aggregate data now: our synthetic data doesn't capture, for example, that the fridge turns on multiple times.
  • Output results from each experiment to an HDF5 file:
    • training costs
    • validation costs (need a standardised validation timeseries; make sure it has an 'easy' section, i.e. just the appliances on their own)
    • NILM validation costs
    • network weights, both for later analysis / visualisation and so that training can be restarted (e.g. after a power cut) and the trained net can be used.
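The "read scripts from a directory, run in sequence, catch and log exceptions" part could be sketched roughly as below. The directory layout, file names, and logging setup here are assumptions for illustration, not the final design:

```python
# Hypothetical sketch of the proposed experiment runner.
import glob
import logging
import runpy
import traceback

logging.basicConfig(filename='experiments.log', level=logging.INFO)


def run_all_experiments(directory='experiments'):
    """Run every e*/e*.py script under `directory` in sequence, logging failures."""
    for script in sorted(glob.glob(directory + '/e*/e*.py')):
        logging.info("Starting %s", script)
        try:
            runpy.run_path(script, run_name='__main__')
        except Exception:
            # Catch and log so one failed experiment doesn't stop the batch.
            logging.error("Experiment %s failed:\n%s",
                          script, traceback.format_exc())
        else:
            logging.info("Finished %s", script)
```

Running each script in-process with `runpy` keeps things simple; spawning a subprocess per experiment would isolate crashes better at the cost of more plumbing.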
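The stop-gap of setting max_appliance_power, on_duration etc. from the experiment script might look something like this. The keys and values below are made-up illustrations; longer term this information would live in the NILMTK metadata itself:

```python
# Illustrative hand-written metadata (power in watts, durations in seconds).
# These numbers are placeholders, not measured values.
APPLIANCE_METADATA = {
    'fridge': {'max_appliance_power': 300,
               'on_duration': 1800, 'off_duration': 1800},
    'kettle': {'max_appliance_power': 3100,
               'on_duration': 180, 'off_duration': 0},
}


def set_metadata(experiment, appliance):
    """Copy the hand-written metadata for `appliance` onto an experiment object."""
    for key, value in APPLIANCE_METADATA[appliance].items():
        setattr(experiment, key, value)
```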
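The per-experiment HDF5 output (the `e92.h5` file above) could be written with h5py along these lines. The group and dataset names are illustrative assumptions, not a fixed schema:

```python
# Hedged sketch of per-experiment HDF5 output using h5py.
import numpy as np
import h5py


def save_results(filename, train_costs, validation_costs, weights):
    """Store costs and per-layer network weights so training can be resumed."""
    with h5py.File(filename, 'w') as f:
        f.create_dataset('costs/train', data=np.asarray(train_costs))
        f.create_dataset('costs/validation', data=np.asarray(validation_costs))
        g = f.create_group('weights')
        for i, w in enumerate(weights):
            g.create_dataset('layer{:d}'.format(i), data=w)


def load_weights(filename):
    """Reload weights, e.g. to restart training after a power cut."""
    with h5py.File(filename, 'r') as f:
        g = f['weights']
        return [g[name][...] for name in sorted(g)]
```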

JackKelly · Feb 20 '15 21:02