Benjamin Geer
Bulk import is being reimplemented in [knora-py](https://github.com/dhlab-basel/knora-py), which could also handle this use case.
This might be your Internet Service Provider or your antivirus software blocking the SSL connection:

- https://github.com/sbt/sbt/issues/2365
- https://stackoverflow.com/questions/7709540/how-to-solve-sun-security-provider-certpath-suncertpathbuilderexception
- https://stackoverflow.com/questions/13626965/how-to-ignore-pkix-path-building-failed-sun-security-provider-certpath-suncertp
- https://youtrack.jetbrains.com/issue/SCL-9856
- https://intellij-support.jetbrains.com/hc/en-us/community/posts/115000094584-IDEA-Ultimate-2016-3-4-throwing-unable-to-find-valid-certification-path-to-requested-target-when-trying-to-refresh-gradle?page=1#community_comment_115000405564
You might try with the Unibas VPN and see if that helps.
An online upgrade would have to be done in a transaction, but [GraphDB only allows one update transaction at a time](http://graphdb.ontotext.com/documentation/standard/storage.html#transaction-control). So, while an update is occurring, you are effectively...
Find paragraphs containing the word "paphlagonian" marked as a noun:

```xquery
xquery version "3.1";

for $par in collection("/db/books")//p[.//noun[starts-with(lower-case(.), "paphlagonian")]]
return {$par}
```

But this returns two `` elements for the...
Thanks, `group by` does it!

```xquery
xquery version "3.1";

for $par in collection("/db/books")//p[.//noun[starts-with(lower-case(.), "paphlagonian")]]
group by $doc := util:document-name(root($par))
return {$par}
```
Search for `paphlagonian` as an adjective and `soul` as a noun, in the same paragraph (https://github.com/dhlab-basel/knora-large-texts/issues/2#issuecomment-541031095):

```xquery
xquery version "3.1";

for $par in collection("/db/books")//p[.//adj[starts-with(lower-case(.), "paphlagonian")] and .//noun[starts-with(lower-case(.), "soul")]]
group by...
```
Uploading books to eXist-db:

```
[info] Uploaded wealth-of-nations (6135800 bytes, 7885 ms)
[info] Uploaded the-count-of-monte-cristo (8071112 bytes, 10667 ms)
[info] Uploaded sherlock-holmes (1793512 bytes, 2358 ms)
[info] Uploaded federalist-papers (3189160...
```
With all the books uploaded, the query in https://github.com/dasch-swiss/knora-api/issues/1570#issuecomment-571480042 takes 8 seconds. Knora did it in 1 second, using Lucene to optimise the query. I'm going to see if...
It looks like eXist-db can use Lucene to optimise the query, but there's a limitation: you have to configure the Lucene index (in a configuration file), **specifying the names of...
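To illustrate what that configuration looks like: in eXist-db, the Lucene index is set up in a `collection.xconf` file stored under `/db/system/config` plus the collection's path. A minimal sketch for the books collection above, assuming we want full-text indexes on the `noun` and `adj` elements (the element names and paths here are assumptions based on the queries in this thread, not a tested configuration):

```xml
<!-- Stored as /db/system/config/db/books/collection.xconf (assumed path) -->
<collection xmlns="http://exist-db.org/collection-config/1.0">
    <index>
        <lucene>
            <!-- StandardAnalyzer lower-cases and tokenises terms -->
            <analyzer class="org.apache.lucene.analysis.standard.StandardAnalyzer"/>
            <!-- One <text> entry per element to index; names must be listed explicitly -->
            <text qname="noun"/>
            <text qname="adj"/>
        </lucene>
    </index>
</collection>
```

With such an index in place, a query would use `ft:query()` instead of `starts-with(lower-case(.), ...)` so that eXist-db can hit the Lucene index, e.g. `//p[ft:query(.//noun, "paphlagonian")]`. Existing documents have to be reindexed after the configuration is added.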