supereeg
Optimize code for reconstructing time samples
I did some simulations, and it seems like the answer that eq. 10 in the paper returns does not depend on the number of time samples passed in: since the reconstruction is a fixed linear map applied independently to each time sample (column), we have the option to pass as many time samples as we'd like at once. This may be relevant for the TensorFlow implementation.
Here's the code:
import numpy as np
# sns.plt only worked by accident in old seaborn versions; use matplotlib.pyplot directly
import matplotlib.pyplot as plt

# covariance between the unknown (to-be-reconstructed) locations and the known locations
Kba = np.random.rand(100, 10)
# covariance among the known locations
Kaa = np.random.rand(10, 10)
# observed data: 10 known locations x 5000 time samples
Y = np.random.rand(10, 5000)

# reconstruct the first 500 time samples
ts1 = np.dot(np.dot(Kba, np.linalg.pinv(Kaa)), Y[:, :500]).T
# reconstruct the first 2500 time samples
ts2 = np.dot(np.dot(Kba, np.linalg.pinv(Kaa)), Y[:, :2500]).T

print("TS 1 (500 time samples) shape: " + str(ts1.shape))
print("TS 2 (2500 time samples) shape: " + str(ts2.shape))

# are the first 500 samples the same?
plt.plot(ts1[:500, :])
plt.title('500 time samples')
plt.show()

plt.plot(ts2[:500, :])
plt.title('2500 time samples')
plt.show()

plt.scatter(ts1[:500, :].ravel(), ts2[:500, :].ravel())
plt.title('First 500 time samples')
plt.show()
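Rather than just eyeballing the plots, a quick numerical check (using the arrays defined above) confirms that the first 500 reconstructed samples come out identical regardless of how many time samples were passed in:

# the per-sample reconstruction should not depend on how many samples were passed
print("first 500 samples match: " + str(np.allclose(ts1, ts2[:500, :])))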



Another implementation (I think):

Re: the other implementation: it looks like that version does some regularization, which could help us. Regularization usually helps avoid overfitting and improves generalizability. We may want to (eventually) implement a few different versions of the predict function and compare their performance.
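For concreteness, here's a minimal sketch of what a regularized predict might look like, assuming simple Tikhonov (ridge) regularization of Kaa; that's just one common choice, and I'm not certain it's what the other implementation actually does. The reconstruct helper and lam parameter here are made up for illustration:

import numpy as np

def reconstruct(Kba, Kaa, Y, lam=0.0):
    # hypothetical helper, for illustration only:
    # lam > 0 adds Tikhonov (ridge) regularization, which replaces the
    # pseudoinverse of Kaa with the better-conditioned (Kaa + lam*I)^-1;
    # lam = 0 falls back to the pseudoinverse solution from eq. 10
    if lam > 0:
        Kaa_inv = np.linalg.inv(Kaa + lam * np.eye(Kaa.shape[0]))
    else:
        Kaa_inv = np.linalg.pinv(Kaa)
    return np.dot(np.dot(Kba, Kaa_inv), Y).T

For small lam, reconstruct(Kba, Kaa, Y[:, :500], lam=0.1) should stay close to the unregularized ts1 above; as lam grows, the solution is increasingly shrunk, which is where any overfitting benefit would come from.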
👍