
Remove duplicate inner-graph printing from `debugprint`

Open · brandonwillard opened this issue 2 years ago · 0 comments

When `debugprint` prints a node that has an inner graph and multiple outputs, it prints a copy of the same inner graph for each output. This makes `debugprint`'s output obnoxious when the inner graphs are large and/or the number of inner-graph outputs is large.

Here's an example:

import aesara
import aesara.tensor as at


k = at.iscalar("k")
A = at.vector("A")

# A Scan whose inner function returns two outputs: a recurrent output
# (seeded by outputs_info) and a non-recurrent one.
result, _ = aesara.scan(
    fn=lambda prior_result, A: (prior_result, prior_result * A),
    outputs_info=[at.ones_like(A), None],
    non_sequences=A,
    n_steps=k,
)


aesara.dprint(result)
# Subtensor{int64::} [id A]
#  |for{cpu,scan_fn}.0 [id B]
#  | |k [id C]
#  | |IncSubtensor{Set;:int64:} [id D]
#  | | |AllocEmpty{dtype='float64'} [id E]
#  | | | |Elemwise{add,no_inplace} [id F]
#  | | | | |k [id C]
#  | | | | |Subtensor{int64} [id G]
#  | | | |   |Shape [id H]
#  | | | |   | |Rebroadcast{(0, False)} [id I]
#  | | | |   |   |InplaceDimShuffle{x,0} [id J]
#  | | | |   |     |Elemwise{second,no_inplace} [id K]
#  | | | |   |       |A [id L]
#  | | | |   |       |InplaceDimShuffle{x} [id M]
#  | | | |   |         |TensorConstant{1.0} [id N]
#  | | | |   |ScalarConstant{0} [id O]
#  | | | |Subtensor{int64} [id P]
#  | | |   |Shape [id Q]
#  | | |   | |Rebroadcast{(0, False)} [id I]
#  | | |   |ScalarConstant{1} [id R]
#  | | |Rebroadcast{(0, False)} [id I]
#  | | |ScalarFromTensor [id S]
#  | |   |Subtensor{int64} [id G]
#  | |k [id C]
#  | |A [id L]
#  |ScalarConstant{1} [id T]
# for{cpu,scan_fn}.1 [id B]
#
# Inner graphs:
#
# for{cpu,scan_fn}.0 [id B]
#  >*0-<TensorType(float64, (None,))> [id U] -> [id D]
#  >Elemwise{mul,no_inplace} [id V]
#  > |*0-<TensorType(float64, (None,))> [id U] -> [id D]
#  > |*1-<TensorType(float64, (None,))> [id W] -> [id L]
#
# for{cpu,scan_fn}.1 [id B]
#  >*0-<TensorType(float64, (None,))> [id U] -> [id D]
#  >Elemwise{mul,no_inplace} [id V]

Notice how the two inner graphs printed at the end just represent the two different outputs of the same node. The only differences between them are that the second one doesn't re-print the inputs of a node that has already been printed (which is by design, so it's not a meaningful difference), and that their first lines carry `.0` and `.1` suffixes, respectively, indicating which output is being printed.
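
For what it's worth, the fix probably amounts to keying the "already printed" bookkeeping on the owning `Apply` node instead of on each output variable. Here's a minimal sketch of the idea, not `debugprint`'s actual internals: `unique_inner_graph_nodes` is a hypothetical helper, and it assumes `HasInnerGraph` is importable from `aesara.graph.op`, which may vary by version.

from aesara.graph.basic import graph_inputs, io_toposort
from aesara.graph.op import HasInnerGraph  # assumed location; may vary by version


def unique_inner_graph_nodes(outputs):
    """Yield each Apply node that owns an inner graph exactly once, even
    when several of its outputs appear in the outer graph."""
    seen = set()
    for node in io_toposort(list(graph_inputs(outputs)), outputs):
        if isinstance(node.op, HasInnerGraph) and node not in seen:
            seen.add(node)
            yield node


# Keyed this way, the Scan node [id B] above is visited once, so its
# inner graph would be printed once, with both output suffixes noted.
assert len(list(unique_inner_graph_nodes([result]))) == 1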

brandonwillard · Jul 04 '22