
imputation_Y_normalize and motif_score errors

sidvmahesh opened this issue 4 months ago · 1 comment

Good Afternoon scBasset team,

Thank you for developing and maintaining this package; I'm really excited to use scBasset for my single-cell multiome data! However, I run into errors when calling imputation_Y_normalize and motif_score exactly as shown in the tutorials.

>>> motif_score("AR", model, "../scbasset_motifs")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/ec2-user/sn_multiome/jls/scBasset/scbasset/utils.py", line 477, in motif_score
    pred_motif = pred_on_fasta(fasta_motif, model, bc=bc, scale_method='sigmoid')
  File "/home/ec2-user/sn_multiome/jls/scBasset/scbasset/utils.py", line 453, in pred_on_fasta
    pred = imputation_Y_normalize(seqs_1hot, model, bc_model=bc, scale_method=scale_method)
  File "/home/ec2-user/sn_multiome/jls/scBasset/scbasset/utils.py", line 418, in imputation_Y_normalize
    new_model = tf.keras.Model(
  File "/opt/conda/envs/scbasset/lib/python3.10/site-packages/keras/src/utils/tracking.py", line 26, in wrapper
    return fn(*args, **kwargs)
  File "/opt/conda/envs/scbasset/lib/python3.10/site-packages/keras/src/models/functional.py", line 136, in __init__
    Function.__init__(self, inputs, outputs, name=name)
  File "/opt/conda/envs/scbasset/lib/python3.10/site-packages/keras/src/ops/function.py", line 63, in __init__
    raise ValueError(
ValueError: `inputs` argument cannot be empty. Received:
inputs=[]
outputs=<KerasTensor shape=(None, 1, 32), dtype=float32, sparse=False, ragged=False, name=keras_tensor_39>
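
If I read the traceback right, the failing construction inside imputation_Y_normalize reduces to something like the snippet below (the output layer index is my guess from the error message, not the exact source; model is the trained model loaded as in the tutorial):

import tensorflow as tf

# On this Keras version an InputLayer's .input attribute is an empty list,
# so passing it as `inputs` trips the empty-inputs check in keras.Model.
inputs = model.layers[0].input                # -> []
outputs = model.layers[-4].output             # my guess at the (None, 1, 32) bottleneck tensor from the error
new_model = tf.keras.Model(inputs, outputs)   # raises ValueError: `inputs` argument cannot be empty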

Some more details from further inspection:

>>> model.layers[0].input
[]
>>> model.layers
[<InputLayer name=sequence, built=True>, <StochasticReverseComplement name=stochastic_reverse_complement, built=True>, <StochasticShift name=stochastic_shift, built=True>, <GELU name=gelu, built=True>, <Conv1D name=conv1d, built=True>, <BatchNormalization name=batch_normalization, built=True>, <MaxPooling1D name=max_pooling1d, built=True>, <GELU name=gelu_1, built=True>, <Conv1D name=conv1d_1, built=True>, <BatchNormalization name=batch_normalization_1, built=True>, <MaxPooling1D name=max_pooling1d_1, built=True>, <GELU name=gelu_2, built=True>, <Conv1D name=conv1d_2, built=True>, <BatchNormalization name=batch_normalization_2, built=True>, <MaxPooling1D name=max_pooling1d_2, built=True>, <GELU name=gelu_3, built=True>, <Conv1D name=conv1d_3, built=True>, <BatchNormalization name=batch_normalization_3, built=True>, <MaxPooling1D name=max_pooling1d_3, built=True>, <GELU name=gelu_4, built=True>, <Conv1D name=conv1d_4, built=True>, <BatchNormalization name=batch_normalization_4, built=True>, <MaxPooling1D name=max_pooling1d_4, built=True>, <GELU name=gelu_5, built=True>, <Conv1D name=conv1d_5, built=True>, <BatchNormalization name=batch_normalization_5, built=True>, <MaxPooling1D name=max_pooling1d_5, built=True>, <GELU name=gelu_6, built=True>, <Conv1D name=conv1d_6, built=True>, <BatchNormalization name=batch_normalization_6, built=True>, <MaxPooling1D name=max_pooling1d_6, built=True>, <GELU name=gelu_7, built=True>, <Conv1D name=conv1d_7, built=True>, <BatchNormalization name=batch_normalization_7, built=True>, <GELU name=gelu_8, built=True>, <Reshape name=reshape, built=True>, <Dense name=dense, built=True>, <BatchNormalization name=batch_normalization_8, built=True>, <Dropout name=dropout, built=True>, <GELU name=gelu_9, built=True>, <Dense name=dense_1, built=True>, <SwitchReverse name=switch_reverse, built=True>, <Flatten name=flatten, built=True>]
>>> model.layers[0]
<InputLayer name=sequence, built=True>
>>> model.layers[-8]
<Reshape name=reshape, built=True>
>>> model.layers[-8].output
<KerasTensor shape=(None, 1, 1792), dtype=float32, sparse=False, ragged=False, name=keras_tensor_35>
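
Would rebuilding the sub-model from model.inputs (or model.input) instead of model.layers[0].input be the right fix here? A sketch of what I mean, assuming utils.py builds the sub-model roughly as in the traceback (I haven't verified that the downstream scaling in imputation_Y_normalize still behaves as intended):

# utils.py, around line 418 (possible fix, not tested end to end)
new_model = tf.keras.Model(
    model.inputs,                # the functional model still exposes its sequence input here
    model.layers[-4].output,     # same (None, 1, 32) bottleneck tensor, if I picked the right layer
)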

Is this a documented issue, and is there a known fix for it?

Thanks,

Sid

sidvmahesh · Aug 21 '25 19:08