Refactoring & optimizing large BinaryPolynomial construction
Continuing the explorations from #988. This produces a >8x speedup in the construction of large BinaryPolynomial instances; see the notebook.
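For context, a minimal sketch of the kind of construction being timed, assuming a dense dictionary of pairwise terms; the problem size and timing harness here are illustrative and not taken from the notebook:

```python
import itertools
import time

import dimod

# Build a large dict of pairwise terms to feed the constructor.
# The problem size is arbitrary, chosen only to make the timing visible.
variables = range(1000)
terms = {frozenset(pair): 1.0 for pair in itertools.combinations(variables, 2)}

start = time.perf_counter()
poly = dimod.BinaryPolynomial(terms, 'BINARY')
elapsed = time.perf_counter() - start
print(f"constructed a polynomial with {len(poly)} terms in {elapsed:.3f}s")
```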
I am thinking about making all the operations keep the polynomials zero-free.
That would be a backwards compatibility break. I am not opposed philosophically, but it would need to wait until dimod 0.11.
Fair enough. I think that could significantly improve scalability, both in memory and in time, though.
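To make the proposal concrete, here is a sketch of what keeping a polynomial "zero free" could mean, written as a standalone helper (`strip_zeros` is a hypothetical name, not dimod API); under the proposal, the polynomial's own operations would do this pruning themselves:

```python
import dimod

def strip_zeros(poly):
    """Hypothetical helper: drop explicitly stored 0-bias terms in place.

    BinaryPolynomial is a MutableMapping, so terms can be deleted directly.
    """
    zero_terms = [term for term, bias in poly.items() if bias == 0]
    for term in zero_terms:
        del poly[term]
    return poly

p = dimod.BinaryPolynomial({'ab': 1.5}, 'BINARY')
p['abc'] = 0    # currently stored explicitly
strip_zeros(p)
print(p)        # BinaryPolynomial({frozenset({'a', 'b'}): 1.5}, 'BINARY')
```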
Does anything really rely on the 0-weighted terms being stored explicitly?
Strictly speaking, they are stored explicitly and show up during iteration:
```python
In [3]: p = dimod.BinaryPolynomial({}, 'BINARY')

In [4]: p['abc'] = 0

In [5]: p
Out[5]: BinaryPolynomial({frozenset({'b', 'c', 'a'}): 0}, 'BINARY')

In [6]: list(p)
Out[6]: [frozenset({'a', 'b', 'c'})]
```
Whether that "break" actually matters to anyone is another question, though. :shrug:
FWIW, it is consistent with the BQM/QM behavior, where we do explicitly keep 0-bias variables/interactions.
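For comparison, a quick illustration of that existing BQM behavior (standard dimod API, nothing new in this PR):

```python
import dimod

bqm = dimod.BinaryQuadraticModel('BINARY')
bqm.add_linear('a', 0)          # a 0-bias variable is kept explicitly
bqm.add_quadratic('a', 'b', 0)  # likewise for a 0-bias interaction

print(bqm.num_variables, bqm.num_interactions)  # 2 1
```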
Should we make a backlog entry for 0.11?
@arcondello The last set of commits is the make_quadratic refactoring.
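For anyone landing here later, a minimal usage sketch of the function being refactored; the polynomial and penalty strength below are arbitrary examples:

```python
import dimod

# A higher-order (cubic) polynomial over binary variables.
poly = {('a',): -1.0, ('a', 'b', 'c'): 2.0}

# make_quadratic reduces it to a BinaryQuadraticModel by introducing
# auxiliary product variables; `strength` penalizes assignments where
# an auxiliary variable disagrees with the product it represents.
bqm = dimod.make_quadratic(poly, strength=5.0, vartype='BINARY')
print(bqm.num_variables)  # original variables plus auxiliaries
```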