dynamo-release
Error when running dynamics on dynast-preprocessed dataset
Hello,
Thanks for sharing your great work! I would love to use dynamo to analyze some of our data but I'm running into trouble.
I ran dynast (dynast-release 1.0.1) to align reads and estimate labeling rates in single-cell metabolic labeling data. I can load this dataset into dynamo and preprocess it using
import dynamo as dyn
adata = dyn.read_h5ad('P1407_NM5_S10_estimate_adata.h5ad')
dyn.pp.recipe_monocle(adata)
The adata object has a lot of layers:
layers: 'X_l_TC', 'X_l_TC_est', 'X_n_TC', 'X_n_TC_est', 'ambiguous', 'labeled_TC', 'labeled_TC_est', 'sl_TC', 'sl_TC_est', 'sn_TC', 'sn_TC_est', 'spliced', 'total', 'ul_TC', 'ul_TC_est', 'un_TC', 'un_TC_est', 'unlabeled_TC', 'unlabeled_TC_est', 'unspliced', 'X_unspliced', 'X_spliced'
However, when I next run
dyn.tl.dynamics(adata, model='stochastic', cores=3, experiment_type='one-shot',
                assumption_mRNA='ss', est_method='negbin')
(these are the parameters that made the most sense to me; I have tried some other combinations, but everything gave an error) I get the following error:
|-----> dynamics_del_2nd_moments_key is None. Using default value from DynamoAdataConfig: dynamics_del_2nd_moments_key=False
|-----------> removing existing M layers:[]...
|-----------> making adata smooth...
|-----> calculating first/second moments...
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
Input In [6], in <cell line: 1>()
----> 1 dyn.tl.dynamics(adata, model='stochastic', cores=3, experiment_type = 'one-shot',
2 assumption_mRNA = 'ss', est_method = 'negbin')
File ~/anaconda3/envs/dynamo/lib/python3.10/site-packages/dynamo/tools/dynamics.py:358, in dynamics(adata, filter_gene_mode, use_smoothed, assumption_mRNA, assumption_protein, model, est_method, NTR_vel, group, protein_names, concat_data, log_unnormalized, one_shot_method, fraction_for_deg, re_smooth, sanity_check, del_2nd_moments, cores, tkey, **est_kwargs)
356 moments(adata, genes=valid_bools, group=group)
357 else:
--> 358 moments(adata, genes=valid_bools, group=tkey)
359 elif tkey is not None:
360 main_warning(
361 f"You used tkey {tkey} (or group {group}), but you have calculated local smoothing (1st moment) "
362 f"for your data before. Please ensure you used the desired tkey or group when the smoothing was "
363 f"performed. Try setting re_smooth = True if not sure."
364 )
File ~/anaconda3/envs/dynamo/lib/python3.10/site-packages/dynamo/tools/moments.py:205, in moments(adata, X_data, genes, group, conn, use_gaussian_kernel, normalize, use_mnn, layers, n_pca_components, n_neighbors, copy)
202 for layer2 in layers[i:]:
203 layer_y = adata.layers[layer2].copy()
--> 205 layer_y_group = np.where([layer2 in x for x in [only_splicing, only_labeling, splicing_and_labeling]])[0][0]
206 # don't calculate 2 moments among uu, ul, su, sl -
207 # they should be time-dependent moments and
208 # those calculations are model specific
209 if (layer_x_group != layer_y_group) or layer_x_group == 2:
IndexError: index 0 is out of bounds for axis 0 with size 0
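For context, the failing line in moments.py checks which layer group (splicing-only, labeling-only, or both) each layer belongs to. When a layer name matches none of the groups, np.where returns an empty index array, and the trailing [0][0] indexing raises exactly this IndexError. A minimal reproduction (the group lists below are illustrative stand-ins, not dynamo's actual contents):

```python
import numpy as np

# Hypothetical group lists, mimicking only_splicing / only_labeling /
# splicing_and_labeling in dynamo's moments.py
only_splicing = ["X_spliced", "X_unspliced"]
only_labeling = ["X_new", "X_total"]
splicing_and_labeling = ["X_uu", "X_ul", "X_su", "X_sl"]

layer = "X_l_TC_est"  # a dynast layer name the grouping does not recognize

# np.where over an all-False mask returns an empty index array
matches = np.where(
    [layer in group for group in (only_splicing, only_labeling, splicing_and_labeling)]
)[0]
print(matches.size)  # 0: no group contains this layer

try:
    group_idx = matches[0]  # the [0] indexing that fails inside moments.py
except IndexError as err:
    print("IndexError:", err)
```

This is why renaming the dynast layers to names dynamo recognizes (as suggested below) resolves the error.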
I would very much appreciate any help you can give here.
Thanks in advance, Bastiaan
Thanks for using dynast and dynamo. Could you please use just the following layers:
# keep the raw total counts in X before the 'total' layer is overwritten below
adata_ntr.X = adata_ntr.layers['total']
adata_ntr.layers['new'] = adata_ntr.layers['labeled_TC'].copy()
adata_ntr.layers['total'] = adata_ntr.layers['labeled_TC'] + adata_ntr.layers['unlabeled_TC']
# drop everything except the layers dynamo's labeling models expect
for layer in list(adata_ntr.layers.keys()):
    if layer not in ['new', 'total', 'spliced', 'unspliced']:
        del adata_ntr.layers[layer]
adata_ntr
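The layer bookkeeping in the snippet above can be sketched with a plain dict standing in for adata.layers (the toy numpy arrays and shapes here are made up for illustration; the layer names are the ones from the snippet):

```python
import numpy as np

# Toy count matrices standing in for the dynast layers (2 cells x 3 genes)
layers = {
    "labeled_TC": np.array([[1, 0, 2], [0, 1, 1]]),
    "unlabeled_TC": np.array([[3, 2, 0], [1, 0, 4]]),
    "spliced": np.array([[2, 1, 1], [1, 1, 3]]),
    "unspliced": np.array([[2, 1, 1], [0, 0, 2]]),
    "ambiguous": np.array([[0, 0, 0], [0, 0, 0]]),  # will be dropped
}

# Map dynast's names onto the 'new'/'total' names dynamo recognizes
layers["new"] = layers["labeled_TC"].copy()
layers["total"] = layers["labeled_TC"] + layers["unlabeled_TC"]

# Keep only the layers dynamo needs; delete the rest
for name in list(layers.keys()):
    if name not in ["new", "total", "spliced", "unspliced"]:
        del layers[name]

print(sorted(layers))  # ['new', 'spliced', 'total', 'unspliced']
```

Note that in the real snippet the old 'total' layer is copied into adata_ntr.X first, so the order of the assignments matters: 'total' is redefined as labeled + unlabeled only after the raw values are saved.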
Please also check out dyn.tl.recipe_one_shot_data(adata).
Thanks, this got me a lot further!
I suppose that if I wanted to use dynast's estimation of the actual number of new transcripts (i.e. corrected for the labeling rate), I would have to use the layers 'labeled_TC_est' and 'unlabeled_TC_est' instead of 'labeled_TC' and 'unlabeled_TC', right?
That is right. Please also make sure you use the latest dynast to analyze the data.