Add a table to database to save matrices
Reusing matrices is key to fully taking advantage of multi-phase daylight simulation. Until now I have focused on capturing the results of each study, but keeping track of the matrices is another topic that needs to be implemented one way or another.
So I tried this out yesterday and the result wasn't encouraging at all. To be quite honest it was depressing! :) Loading the matrices into the database worked well and was quite fast, but matrix multiplication inside an SQLite database is ~20 times slower than doing it in Radiance. So even if we push the matrices to the database, we still have to write them back out as files and use Radiance's dctimestep and rmtxop for the multiplication. For local cases this doesn't make sense, since the files are already there, so why bother at all? I'll leave this open to discuss later for the server-side implementation.
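For reference, a minimal sketch of that round trip, assuming hypothetical file and table names (dc.mtx, sky_matrix). Real Radiance matrices carry a header describing their dimensions, so treat this as an outline rather than a working pipeline:

import subprocess
import pandas as pd
from sqlalchemy import create_engine

conn = create_engine('sqlite:///studies.db')  # hypothetical matrix store

# pull the matrix out of the database and write it back to disk
mtx = pd.read_sql('select * from sky_matrix', conn)
mtx.to_csv('sky.smx', sep=' ', header=False, index=False)

# let Radiance do the multiplication: daylight coefficients x sky matrix
with open('results.mtx', 'w') as f:
    subprocess.run(['rmtxop', 'dc.mtx', 'sky.smx'], stdout=f, check=True)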
cc: @AntoineDao
There is also one case where this might still be faster: the sun matrix, which is mostly zeros. Since we know the exact indices of the hours with sun, it might be faster to write our own multiplication routine. This came up when I talked to German, who has it implemented for Emp. He is not using an SQL database, but the same idea might work in our case too.
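A minimal sketch of what such a routine could look like, assuming we already know the indices of the sunny hours up front (the function and argument names are hypothetical, not from Emp):

import numpy as np

def multiply_sun_matrix(dc, sun_cols, sun_hours, n_hours=8760):
    """Multiply daylight coefficients by a sparse annual sun matrix.

    dc: (n_sensors, n_suns) daylight coefficient matrix
    sun_cols: (n_suns, len(sun_hours)) only the nonzero columns
    sun_hours: indices of the hours that actually have sun
    """
    results = np.zeros((dc.shape[0], n_hours))
    # only the sunny hours contribute; every other column stays zero
    results[:, sun_hours] = dc @ sun_cols
    return results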
👍 for maybe writing our own multiplication routine.
Regardless, a database could be helpful for performing all the tasks in memory to speed up I/O. That way one could query from SQL and run everything in memory without writing to disk. For example:
import io
import subprocess

import pandas as pd
from sqlalchemy import create_engine

conn = create_engine('postgresql+psycopg2://user:password@host:port/dbname')
qry = "select * from table where ..."

# stream the matrix out of the database in chunks
for df in pd.read_sql(qry, con=conn, chunksize=8760):
    # pipe each chunk to dctimestep as text; 'dc.mtx' is a placeholder
    proc = subprocess.run(['dctimestep', 'dc.mtx'],
                          input=df.to_csv(sep=' ', header=False, index=False),
                          capture_output=True, text=True, check=True)
    res_df = pd.read_csv(io.StringIO(proc.stdout), sep=r'\s+', header=None)
    res_df.to_sql('res_table', conn, if_exists='append')
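If the goal is to avoid disk entirely, SQLAlchemy can also keep the store itself in RAM. A minimal sketch with an in-memory SQLite engine (the table name is made up):

import numpy as np
import pandas as pd
from sqlalchemy import create_engine

mem = create_engine('sqlite://')  # no path = in-memory SQLite

# round-trip a small random matrix without ever touching the disk
pd.DataFrame(np.random.rand(8, 3)).to_sql('mtx', mem, index=False)
df = pd.read_sql('select * from mtx', mem)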