sparql query with BIND giving "SR319: Max row length is exceeded when trying to store a string of 52 chars into a temp col"
Dear Hugh, Dear all,
I observe an SR319 with the following query:
PREFIX up: &lt;http://purl.uniprot.org/core/&gt;
PREFIX keywords: &lt;http://purl.uniprot.org/keywords/&gt;
PREFIX uniprotkb: &lt;http://purl.uniprot.org/uniprot/&gt;
PREFIX taxon: &lt;http://purl.uniprot.org/taxonomy/&gt;
PREFIX ec: &lt;http://purl.uniprot.org/enzyme/&gt;
PREFIX rdf: &lt;http://www.w3.org/1999/02/22-rdf-syntax-ns#&gt;
PREFIX rdfs: &lt;http://www.w3.org/2000/01/rdf-schema#&gt;
PREFIX skos: &lt;http://www.w3.org/2004/02/skos/core#&gt;
PREFIX owl: &lt;http://www.w3.org/2002/07/owl#&gt;
PREFIX bibo: &lt;http://purl.org/ontology/bibo/&gt;
PREFIX dc: &lt;http://purl.org/dc/terms/&gt;
PREFIX xsd: &lt;http://www.w3.org/2001/XMLSchema#&gt;
PREFIX faldo: &lt;http://biohackathon.org/resource/faldo#&gt;
SELECT ?protein ?proteome
WHERE {
  ?protein a up:Sequence .
  ?protein up:sequenceFor ?member .
  ?member up:proteome ?proteomeComponent .
  BIND(IRI(STRBEFORE(STR(?proteomeComponent), "#")) AS ?proteome)
}
GROUP BY ?protein ?proteome
HAVING (COUNT(?proteome) > 1)
The dataset is UniProt. This happens on both the 2015_03 and 2015_04 versions. Virtuoso is virtuoso-opensource, tags/v7.2.0.1.
Here are the details for the 2015_04 dataset. The SR319 error appears after ~3 minutes:
[user@testmachine1 ~]$ ~/bin/isql -H localhost:1112
[...]
SQL> trace_on();
Done. -- 0 msec.
SQL> sparql PREFIX up: &lt;http://purl.uniprot.org/core/&gt; PREFIX keywords: &lt;http://purl.uniprot.org/keywords/&gt; PREFIX uniprotkb: &lt;http://purl.uniprot.org/uniprot/&gt; PREFIX taxon: &lt;http://purl.uniprot.org/taxonomy/&gt; PREFIX ec: &lt;http://purl.uniprot.org/enzyme/&gt; PREFIX rdf: &lt;http://www.w3.org/1999/02/22-rdf-syntax-ns#&gt; PREFIX rdfs: &lt;http://www.w3.org/2000/01/rdf-schema#&gt; PREFIX skos: &lt;http://www.w3.org/2004/02/skos/core#&gt; PREFIX owl: &lt;http://www.w3.org/2002/07/owl#&gt; PREFIX bibo: &lt;http://purl.org/ontology/bibo/&gt; PREFIX dc: &lt;http://purl.org/dc/terms/&gt; PREFIX xsd: &lt;http://www.w3.org/2001/XMLSchema#&gt; PREFIX faldo: &lt;http://biohackathon.org/resource/faldo#&gt; SELECT ?protein ?proteome WHERE { ?protein a up:Sequence . ?protein up:sequenceFor ?member . ?member up:proteome ?proteomeComponent . BIND(IRI(STRBEFORE(STR(?proteomeComponent), "#")) AS ?proteome) } GROUP BY ?protein ?proteome HAVING (COUNT(?proteome) > 1);
*** Error 22026: [Virtuoso Driver][Virtuoso Server]SR319: Max row length is exceeded when trying to store a string of 52 chars into a temp col
at line 6 of Top-Level:
sparql PREFIX up: &lt;http://purl.uniprot.org/core/&gt; PREFIX keywords: &lt;http://purl.uniprot.org/keywords/&gt; PREFIX uniprotkb: &lt;http://purl.uniprot.org/uniprot/&gt; PREFIX taxon: &lt;http://purl.uniprot.org/taxonomy/&gt; PREFIX ec: &lt;http://purl.uniprot.org/enzyme/&gt; PREFIX rdf: &lt;http://www.w3.org/1999/02/22-rdf-syntax-ns#&gt; PREFIX rdfs: &lt;http://www.w3.org/2000/01/rdf-schema#&gt; PREFIX skos: &lt;http://www.w3.org/2004/02/skos/core#&gt; PREFIX owl: &lt;http://www.w3.org/2002/07/owl#&gt; PREFIX bibo: &lt;http://purl.org/ontology/bibo/&gt; PREFIX dc: &lt;http://purl.org/dc/terms/&gt; PREFIX xsd: &lt;http://www.w3.org/2001/XMLSchema#&gt; PREFIX faldo: &lt;http://biohackathon.org/resource/faldo#&gt; SELECT ?protein ?proteome WHERE { ?protein a up:Sequence . ?protein up:sequenceFor ?member . ?member up:proteome ?proteomeComponent . BIND(IRI(STRBEFORE(STR(?proteomeComponent), "#")) AS ?proteome) } GROUP BY ?protein ?proteome HAVING (COUNT(?proteome) > 1)
SQL>
Logs with trace_on():
[user@testmachine1 ~]$ tail -f /[logspath]/2015_04.1/run.log
[...]
10:55:43 LTRS_1 dba 127.0.0.1 1112:57 Commit transact 0x7f73084b57b0 0
10:55:43 LTRS_2 dba 127.0.0.1 1112:57 Restart transact 0x7f73084b57b0
10:55:52 CSLQ_0 dba 127.0.0.1 1112:57 s1112_57_0 sparql PREFIX up:http://purl.uniprot.org/core/ PREFIX keywords:http://purl.uniprot.org/keywords/ PREFIX uniprotkb:http://purl.uniprot.org/uniprot/ PREFIX taxon:http://purl.uniprot.org/taxonomy/ PREFIX ec:http://purl.uniprot.org/enzyme/ PREFIX rdf:http://www.w3.org/1999/02/22-rdf-syntax-ns# PREFIX rdfs:http://www.w3.org/2000/01/rdf-schema# PREFIX skos:http://www.w3.org/2004/02/skos/core# PREFIX owl:http://www.w3.org/2002/07/owl# PREFIX bibo:http://purl.org/ontology/bibo/ PREF
10:55:52 EXEC_1 dba 127.0.0.1 1112:57 s1112_57_0 Exec 1 time(s) sparql PREFIX up:http://purl.uniprot.org/core/ PREFIX keywords:http://purl.uniprot.org/keywords/ PREFIX uniprotkb:http://purl.uniprot.org/uniprot/ PREFIX taxon:http://purl.uniprot.org/taxonomy/ PREFIX ec:http://purl.uniprot.org/enzyme/ PREFIX rdf:http://www.w3.org/1999/02/22-rdf-syntax-ns# PREFIX rdfs:http://www.w3.org/2000/01/rdf-schema# PREFIX skos:http://www.w3.org/2004/02/skos/core# PREFIX owl:http://www.w3.org/2002/07/owl# PREFIX bibo:http://purl.org/ontology/bibo/ PREFI
10:55:58 LTRS_0 dba Internal Internal Begin transact 0x7f7300003490
10:55:58 LTRS_1 dba Internal Internal Commit transact 0x7f7300003490 120259084288
10:55:58 LTRS_2 dba Internal Internal Restart transact 0x7f7300003490
10:55:58 LTRS_1 dba Internal Internal Commit transact 0x7f7300003490 64424509440
10:55:58 LTRS_2 dba Internal Internal Restart transact 0x7f7300003490
10:55:58 LTRS_1 dba Internal Internal Commit transact 0x7f7300003490 120259084288
10:55:58 LTRS_2 dba Internal Internal Restart transact 0x7f7300003490
10:55:58 LTRS_1 dba Internal Internal Commit transact 0x7f7300003490 0
10:55:58 LTRS_2 dba Internal Internal Restart transact 0x7f7300003490
10:55:58 LTRS_1 dba Internal Internal Commit transact 0x7f7300003490 0
10:55:58 LTRS_2 dba Internal Internal Restart transact 0x7f7300003490
10:58:30 ERRS_0 22026 SR319 Max row length is exceeded when trying to store a string of 52 chars into a temp col
10:58:30 LTRS_1 dba 127.0.0.1 1112:57 Commit transact 0x7f73084b57b0 140131897966592
10:58:30 LTRS_2 dba 127.0.0.1 1112:57 Restart transact 0x7f73084b57b0
Similar issues appear to have been reported in:
- https://github.com/openlink/virtuoso-opensource/issues/118
- https://github.com/openlink/virtuoso-opensource/issues/93
Thanks for your help!
(int. ref. UPS-96)
I have been able to reproduce this against the UniProt live SPARQL endpoint http://beta.sparql.uniprot.org/sparql/ and have reported it to development. In the short term, the query will probably need to be restructured to avoid this limit on the temp table column size...
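One possible restructuring, an untested sketch assuming the limit is triggered by materializing the BIND-produced IRI as a grouping key: group on the plain string returned by STRBEFORE and only convert it back to an IRI in the projection, so the computed IRI never becomes a temp-table column. Only the up: prefix is shown; the remaining prefixes are unchanged from the original query.

```sparql
PREFIX up: <http://purl.uniprot.org/core/>

SELECT ?protein (IRI(?proteomeStr) AS ?proteome)
WHERE {
  ?protein a up:Sequence .
  ?protein up:sequenceFor ?member .
  ?member up:proteome ?proteomeComponent .
  # Bind the truncated string, not the IRI, so grouping happens on the string
  BIND(STRBEFORE(STR(?proteomeComponent), "#") AS ?proteomeStr)
}
GROUP BY ?protein ?proteomeStr
HAVING (COUNT(?proteomeStr) > 1)
```

Whether this actually sidesteps SR319 depends on how Virtuoso stores the grouping columns internally, so it is offered only as a workaround to try, not a confirmed fix.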
Is there any progress on this issue? We are hit by it quite hard now.
This problem has not been resolved yet...
Any progress on this? I am affected by this issue as well.