neuralnilm_prototype
Or maybe output a CSV file with train cost, validation cost, NILM metrics, secs per epoch, trained weights, etc. Start the log with the config of the network, then append a row for each training loop.
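A minimal sketch of that logging scheme, assuming a JSON-serialisable config dict and placeholder metric values (none of these names come from the repo):

```python
import csv
import io
import json
import time

def start_log(f, config):
    """Write the network config as a comment line, then the CSV header."""
    f.write("# config: " + json.dumps(config) + "\n")
    writer = csv.writer(f)
    writer.writerow(["epoch", "train_cost", "validation_cost",
                     "nilm_f1", "secs_per_epoch"])
    return writer

# Usage: append one row per training loop.
buf = io.StringIO()
writer = start_log(buf, {"layers": [16, 8], "learning_rate": 0.01})
for epoch in range(3):
    start = time.time()
    # ... train one epoch here; 0.5 etc. are placeholder values ...
    writer.writerow([epoch, 0.5, 0.6, 0.7, round(time.time() - start, 2)])
```

Writing the config as a leading `#` comment keeps the file loadable with any CSV reader that skips comment lines.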
- protects again `graphic` machine being rebooted - related: #15
Also:
- [ ] Have both inputs in the range [-1, 1] (try different ranges)
- [ ] Centre mains power
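Both tasks above are simple linear transforms; a sketch with hypothetical helper names (not from the repo):

```python
import numpy as np

def rescale(x, target_range=(-1, 1)):
    """Linearly rescale x so its min/max hit the target range."""
    lo, hi = target_range
    x_min, x_max = x.min(), x.max()
    return lo + (x - x_min) * (hi - lo) / (x_max - x_min)

def centre(x):
    """Subtract the mean so mains power is centred on zero."""
    return x - x.mean()

mains = np.array([100.0, 300.0, 500.0])
scaled = rescale(mains)   # spans exactly [-1, 1]
centred = centre(mains)   # zero mean
```

For "try different ranges", pass e.g. `target_range=(0, 1)` and compare training curves.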
Graves does this. See Graves 2014. I think skaae has implemented this in his code.
Each directory would be laid out like this:
- e92
  - e92.py (defines the experiment)
  - e92.h5 (costs, metrics, network weights, etc.)
  - e92_costs.png (multiple subplots: cross entropy, MSE, NILM metrics)
  - e92_estimates_1250epochs_3.png...
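The fixed part of that layout can be generated from the experiment number; a sketch (the helper name is made up, and the per-run estimate plots are omitted since their names vary):

```python
from pathlib import Path

def experiment_paths(root, n):
    """Build the fixed per-experiment file paths described above."""
    d = Path(root) / f"e{n}"
    return {
        "dir": d,
        "definition": d / f"e{n}.py",   # defines the experiment
        "results": d / f"e{n}.h5",      # costs, metrics, weights
        "costs_plot": d / f"e{n}_costs.png",
    }

paths = experiment_paths("experiments", 92)
```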
As per the Deep Mind tutorial on 18th Feb. Produces probabilistic output. Handles missing data very well. Both things we need! http://arxiv.org/abs/1401.4082
Ideas:
- [x] Find a minimal example which still fails (something that trains fast)
- Is it:
  - [x] The new nntools code (check by re-running e82)
  - [ ] ...
- Use more appliances from house 1
- Use appliances from other houses
- Train on one set of houses, test on another
Vary one parameter (keeping the others constant), pick the best value, vary the next parameter, pick the best, and go round in a loop.
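That procedure is a coordinate search over hyperparameters; a generic sketch, not code from the repo (`evaluate` stands in for a full train-and-validate run, and the toy cost below is just for illustration):

```python
def coordinate_search(param_grid, evaluate, n_rounds=2):
    """Vary one parameter at a time, keep the best value, loop.

    `param_grid` maps parameter name -> list of candidate values.
    `evaluate` maps a settings dict -> cost (lower is better).
    """
    # Start from the first candidate for each parameter.
    best = {name: values[0] for name, values in param_grid.items()}
    for _ in range(n_rounds):
        for name, values in param_grid.items():
            # Score each candidate with the other parameters held fixed.
            scored = [(evaluate({**best, name: v}), v) for v in values]
            best[name] = min(scored)[1]
    return best

# Toy cost, minimised at learning_rate=0.01, hidden=64.
cost = lambda p: (p["learning_rate"] - 0.01) ** 2 + (p["hidden"] - 64) ** 2
grid = {"learning_rate": [0.1, 0.01, 0.001], "hidden": [32, 64, 128]}
best = coordinate_search(grid, cost)
```

Note this greedy sweep can miss interacting parameters; running more than one round (as above) partly mitigates that.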