synbiohub
Constructs unable to be retrieved or written in reasonable time
Two of the SD2 strains cannot be retrieved or written on SynBioHub in any reasonable amount of time, though their non-recursive retrieval still appears to work: https://hub.sd2e.org/user/sd2e/design/Strain_2_MG1655_Genomic_PhlF_Gate/1 https://hub.sd2e.org/user/sd2e/design/Strain_3_MG1655_Genomic_IcaR_Gate/1
This is causing problems for the dictionary, which seems to be timing out on its attempts to make updates to the strains. (cc: @dsumorok-raytheon)
Have you tried these on staging? They come up for me there. These are examples of ones that need to render non-recursively.
They're extremely slow on staging too, so the timeout issue will likely persist.
Not sure about extremely slow. They each take about 45 to 50 seconds to render, and about 40 seconds to do a non-recursive fetch. The non-recursive fetch produces a 7MB file that includes over 4000 SequenceAnnotations. These are pretty big records. You might consider increasing the timeout to a minute.
Granted. We are still interested in implementing a lazier fetch, but that will take some time to work out. The only thing I think we can do with such a large monolithic record is to avoid fetching child objects right away. That means having documents with dangling URIs pointing at child objects, and supporting it will take serious work in the libraries that assume children are present.
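For concreteness, a lazy-fetch design along these lines could look roughly like the sketch below. All names here are hypothetical and not part of any existing SBOL library; the point is just that children stay as dangling URIs until first access, which is exactly the assumption current libraries don't support.

```python
# Sketch only: lazy resolution of child objects referenced by URI.
# Nothing here reflects an actual SBOL library API.

class LazyDocument:
    def __init__(self, fetch_fn):
        # fetch_fn maps a URI to its (non-recursive) record
        self._fetch = fetch_fn
        self._cache = {}

    def resolve(self, uri):
        # Fetch a child object only on first access, then cache it.
        if uri not in self._cache:
            self._cache[uri] = self._fetch(uri)
        return self._cache[uri]


# Usage: a stand-in fetcher that records which URIs were actually loaded.
loaded = []

def fake_fetch(uri):
    loaded.append(uri)
    return {"uri": uri}

doc = LazyDocument(fake_fetch)
child = doc.resolve("https://hub.sd2e.org/user/sd2e/design/example/1")
doc.resolve("https://hub.sd2e.org/user/sd2e/design/example/1")  # served from cache
```

The hard part, as noted above, is not this caching wrapper but teaching every consumer of the document to tolerate a URI where it currently expects a materialized child object.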
On a related note, I fixed the VisBOL error that you were seeing on the other record with missing child references, and it then ran into another place where we assumed the existence of child objects. The assumption that child objects exist permeates all our libraries and likely all our SBOL software. It will be a difficult assumption to eliminate, and I would argue removing it is a fundamental change to SBOL (or at least to people's assumptions about it).
I think this is another case strongly in favor of granular reading and writing of object properties separately. In this case, we're really only trying to modify the SD2 lab ID annotations, so uploading the whole object is not just wasteful but also highly fragile, since we could easily mess up the SequenceAnnotations when we really don't want to be touching them at all.
Maybe you could have an access mode where one sends a set of keys to fetch, and receives only that information from the object? Or, alternately, of keys to exclude?
Actually, this is already supported via the SPARQL API. Rather than fetching the entire object for the dictionary, you could just fetch the elements you need to populate the dictionary. For now, you would still need to fetch the entire non-recursive object for update, but you would not need to do that for objects that are not changing. This should speed things up.
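As a rough illustration of the read side, a query that pulls only the fields the dictionary needs could be assembled like this. This is a sketch: the `labId` predicate IRI below is a placeholder, not the actual SD2 annotation predicate, and the query would be sent to SynBioHub's SPARQL endpoint rather than executed locally.

```python
# Sketch: build a SPARQL SELECT that fetches only the properties the
# dictionary needs, instead of the whole multi-megabyte record.

def build_property_query(subject_uri, predicates):
    """Return a SPARQL query selecting only the given predicates."""
    clauses = "\n".join(
        f"  OPTIONAL {{ <{subject_uri}> <{p}> ?v{i} . }}"
        for i, p in enumerate(predicates)
    )
    vars_ = " ".join(f"?v{i}" for i in range(len(predicates)))
    return f"SELECT {vars_} WHERE {{\n{clauses}\n}}"

query = build_property_query(
    "https://hub.sd2e.org/user/sd2e/design/Strain_3_MG1655_Genomic_IcaR_Gate/1",
    [
        "http://purl.org/dc/terms/title",
        "http://example.org/sd2#labId",  # placeholder predicate, not the real one
    ],
)
```

Fetching a handful of triples this way should be dramatically cheaper than pulling a record with thousands of SequenceAnnotations.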
We cannot write this way, however.
Agreed, but I would assume that most dictionary entries are not written on an update. Those that are not written do not need to be fetched in full.
As we discussed in Austin, we will need a means to update partial records to support edits in the browser, and I expect that, once ready, this could also be exploited for the writes from the dictionary.
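When partial writes do become available, one plausible shape for them is a targeted SPARQL DELETE/INSERT that rewrites a single annotation on one object, leaving the SequenceAnnotations untouched. This is a sketch only: the predicate IRI and value are placeholders, and SynBioHub does not currently expose an update endpoint of this kind.

```python
# Sketch: a DELETE/INSERT update that replaces one annotation value
# on one object without touching anything else in the record.

def build_annotation_update(subject_uri, predicate, new_value):
    """Return a SPARQL 1.1 Update that swaps a single property value."""
    return (
        f"DELETE {{ <{subject_uri}> <{predicate}> ?old . }}\n"
        f'INSERT {{ <{subject_uri}> <{predicate}> "{new_value}" . }}\n'
        f"WHERE  {{ OPTIONAL {{ <{subject_uri}> <{predicate}> ?old . }} }}"
    )

update = build_annotation_update(
    "https://hub.sd2e.org/user/sd2e/design/Strain_2_MG1655_Genomic_PhlF_Gate/1",
    "http://example.org/sd2#labId",  # placeholder predicate
    "new-lab-id-value",              # dummy value for illustration
)
```

An update of this shape is the granular write counterpart to the granular SPARQL read: it can only disturb the one triple it names, which addresses the fragility concern about round-tripping whole objects.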