
Open work items for 5.12.5

Open linas opened this issue 2 years ago • 12 comments

See comment in https://github.com/opencog/link-grammar/pull/1446#issuecomment-1441397457 for pending work items for 5.12.1

I think it makes sense to also start a 5.13.0 branch that will include proposals #1450, #1453, and #1452, and maybe #1449, depending on how that goes. And if #1449 can happen easily, then it would be version 6.0.

linas avatar Feb 24 '23 05:02 linas

The emscripten issues are in #1361, #1374, and #1377.

linas avatar Feb 24 '23 05:02 linas

For 6.0, I have many PRs, and I would like to include at least some of them:

  1. Dict token insertion (need to find the issue number).
  2. Tokenization drastic speed improvements.
  3. Generator drastic speedup.
  4. Generator API.
  5. Cross-links implementation (I need your answers to my old questions + more discussion, in order to complete it).
  6. Implement power-prune for expressions in order to make power_prune() much faster.
  7. Simplify expressions before converting them to disjuncts (this speeds up building the disjuncts). (The code was ready for a PR, but then I changed Exp_struct before I sent it, and its conversion to the new struct turned out to be buggy, so I need to work on it some more...)
  8. More power-pruning! It removed an additional ~5% of the disjuncts. (This new power pruning worked, but then I introduced a bug without committing the working code..., so again I need to continue debugging...)
  9. Rewritten post-processing, for drastic postprocessing speedup and drastically increasing the number of good linkages per linkage_limit.
  10. Tests for link-parser.
  11. Graphical link-parser (Python).
  12. Local hard costs (we need to discuss this).
  13. Segmentation according to the dict.
  14. Partial parsing infrastructure.
  15. Phantom word handling.
  16. Capitalization handling by dict definitions.

ampli avatar Feb 24 '23 21:02 ampli
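Item 7 in the list above (simplifying expressions before building disjuncts) can be illustrated in the abstract. The sketch below uses hypothetical types, not the real Exp_struct; it shows just one simplification from that family: flattening nested AND/OR nodes so the tree is shallower before disjuncts are built from it.

```cpp
#include <memory>
#include <string>
#include <vector>

// Hypothetical expression node, loosely modeled on the idea of LG's
// Exp_struct: an expression is a connector leaf or an AND/OR node
// holding a list of child expressions.
struct Exp {
    enum Kind { CONNECTOR, AND, OR } kind;
    std::string name;                        // for CONNECTOR leaves
    std::vector<std::unique_ptr<Exp>> kids;  // for AND/OR nodes
};

// Splice an AND child's children directly into an AND parent (and
// likewise for OR), recursively. The flattened tree denotes the same
// set of disjuncts but has fewer nodes to walk when building them.
void flatten(Exp *e) {
    if (e->kind == Exp::CONNECTOR) return;
    std::vector<std::unique_ptr<Exp>> out;
    for (auto &k : e->kids) {
        flatten(k.get());
        if (k->kind == e->kind) {
            // Same operator as parent: absorb the grandchildren.
            for (auto &g : k->kids) out.push_back(std::move(g));
        } else {
            out.push_back(std::move(k));
        }
    }
    e->kids = std::move(out);
}
```

For example, AND(AND(A, B), C) becomes AND(A, B, C); the real simplification pass would combine several such rewrites.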

Re tokenization speed: in one of my atomese use-cases, on an older, slower machine, I see the following performance:

  • 500 millisecs tokenization
  • 42 millisecs prepare-to-parse
  • 400 millisecs count
  • 1200 millisecs extract linkages

The above was obtained using sentences that are all exactly 12 words long. Dictionary lookup times are not included in the tokenization figure. Linkage limit = 15K.

linas avatar Feb 28 '23 05:02 linas
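Per-phase timings like those above are typically collected by wrapping each phase in a monotonic-clock stopwatch. A minimal sketch follows; the dummy workload stands in for the real library calls (in an actual harness, the callable would wrap the tokenization, counting, or extraction step):

```cpp
#include <chrono>
#include <cstdio>

// Measure one parsing phase in milliseconds using a monotonic clock
// (steady_clock, so wall-clock adjustments cannot skew the result).
// `phase` is any callable; in a real harness it would wrap a phase
// such as tokenization or counting.
template <typename F>
double phase_ms(F phase) {
    auto t0 = std::chrono::steady_clock::now();
    phase();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}

// Example: time a dummy workload standing in for one phase.
double demo_tokenize_ms() {
    volatile long sink = 0;
    return phase_ms([&] {
        for (long i = 0; i < 1000000; i++) sink += i;
    });
}
```

Printing one line per phase (`std::printf("tokenize: %8.1f ms\n", ms);`) yields a breakdown in the shape shown above.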

More about tokenization. With the atomese dicts, the dict can grow after every sentence. Thus, I call condesc_setup(dict); after tokenization, before parsing. It took me two days to discover that it runs in about 1 sec at first, growing to 10 sec after a while. Thus, it accounts for 1/3 of the grand-total sentence time at first, rising to 80% after a while.

I need to find some way of doing what it does incrementally. Possibly by telling it exactly what expressions were added. -- fixed in #1459

linas avatar Mar 02 '23 04:03 linas
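The actual fix in #1459 is not reproduced here. As a general illustration of the incremental idea ("telling it exactly what expressions were added"), here is a self-contained sketch with hypothetical types: the table sorts only its newly added tail and merges it into the already-sorted prefix, instead of re-sorting everything on each call.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical connector-descriptor table. The real condesc_setup()
// does more than sort, but the incremental pattern is the same:
// track how much is already done and process only the new tail.
struct CondescTable {
    std::vector<std::string> names;  // connector names, sorted up to `done`
    size_t done = 0;                 // entries already set up

    void add(const std::string &n) { names.push_back(n); }

    // Full setup: O(n log n) on every call, even if one entry was added.
    void setup_full() {
        std::sort(names.begin(), names.end());
        done = names.size();
    }

    // Incremental setup: sort only the entries added since the last
    // call, then merge them into the sorted prefix in linear time.
    void setup_incremental() {
        std::sort(names.begin() + done, names.end());
        std::inplace_merge(names.begin(), names.begin() + done, names.end());
        done = names.size();
    }
};
```

When the dict grows by a handful of expressions per sentence, the incremental variant's cost is dominated by the merge rather than by a full re-sort of an ever-growing table.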

I published version 5.12.1 -- I couldn't wait, certain automation scripts depend on the published tarballs.

linas avatar Mar 05 '23 18:03 linas

Hi @linas, I tried updating to 5.12.2 in Gentoo but am getting build failures:

In file included from /var/tmp/portage/dev-libs/link-grammar-5.12.2/work/link-grammar-5.12.2/link-grammar/sat-solver/word-tag.cpp:1:
/var/tmp/portage/dev-libs/link-grammar-5.12.2/work/link-grammar-5.12.2/link-grammar/sat-solver/word-tag.hpp:23:83: error: 'X_node' does not name a type
   23 |                     const std::vector<int>& er, const std::vector<int>& el, const X_node *w_xnode, Parse_Options opts)
      |                                                                                   ^~~~~~
In file included from /var/tmp/portage/dev-libs/link-grammar-5.12.2/work/link-grammar-5.12.2/link-grammar/sat-solver/word-tag.cpp:1:
/var/tmp/portage/dev-libs/link-grammar-5.12.2/work/link-grammar-5.12.2/link-grammar/sat-solver/word-tag.hpp:82:9: error: 'X_node' does not name a type
   82 |   const X_node *word_xnode;
      |         ^~~~~~

which we haven't seen in 5.12.0

SoapGentoo avatar Mar 11 '23 18:03 SoapGentoo

build failures:

I'm looking. The recommended fix is to disable the build of the sat-solver code. Since it's disabled by default, your build scripts must have turned it on. (Just run ../configure without any options.)

The recommendation is to disable it, because the SAT parser is slower, in all situations, than the regular parser; in some cases, it is 10x or 20x slower. I've been considering deleting it permanently, although Amir convinced me that it can be fixed up. And so ... it's in limbo ...

@SoapGentoo If you are willing to carry patches, I just pushed a fix here: ffdf5d8da583b3158656dfe46ed6f8bd12b3bc25

Otherwise, wait for 5.12.3 ... which might appear in a few weeks(? I have plans for "urgent" Atomese fixes which necessitate an LG release.)

linas avatar Mar 11 '23 21:03 linas
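The commit referenced above is not reproduced here, but errors of this shape ("'X_node' does not name a type" inside a header) usually mean the header uses a type it never declares, often after an include reshuffle. Since word-tag.hpp only stores a pointer to X_node, a forward declaration suffices. A hypothetical reduction, not the actual fix:

```cpp
// Hypothetical reduction of the word-tag.hpp failure: the header
// mentions X_node without ever declaring it. Because only pointers
// to X_node appear, a forward declaration is enough; the full
// definition (and its #include) is not needed to compile this TU.
struct X_node;  // forward declaration; cures "does not name a type"

struct WordTag {
    const X_node *word_xnode;  // pointer to an incomplete type: OK
};
```

A pointer or reference to an incomplete type compiles fine; only dereferencing it, or needing its size, would force the full definition into view.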

@SoapGentoo Version 5.12.3 is now out, with the fix you reported above.

linas avatar Mar 24 '23 16:03 linas

@linas after confirming that 5.12.3 works indeed, I proceeded to pass --disable-sat-solver to ./configure to disable the SAT solver as per your recommendations. Thanks :+1:

SoapGentoo avatar Mar 25 '23 11:03 SoapGentoo

Cool. OK. FWIW, the SAT solver is already disabled by default (configure.ac lines 365ff), so if it was on for you, then somehow you were carrying a config setting from long ago? Keep in mind that ./configure does not start from a clean state; it remembers flags from prior invocations. (This also reveals that my testing is incomplete.)

linas avatar Mar 25 '23 19:03 linas

In general, we like to specify all options to ./configure, since it makes our configuration more robust to changes in default settings. In this case, --enable-sat-solver=bundled was added due to a conflict with the system minisat: https://bugs.gentoo.org/593662

SoapGentoo avatar Mar 26 '23 10:03 SoapGentoo

Hm. OK. SAT was disabled to discourage its use. It is always slower, sometimes by factors of 10x or 100x. Amir says that, in fact, it can be fixed up and repaired, which might make SAT faster than the regular parser, maybe.

Whether this is worth the effort depends mostly on future applications, rather than on the current situation. For the present English, Russian, Thai, etc. dictionaries, reviving SAT seems pointless: the current parser is good enough. However, I'm working with brand-new dicts that have a radically different structure and different performance profiles, and that make different demands on the parser. For those, maybe the SAT parser could be faster or more space-efficient. Maybe, or maybe not. Unexplored.

linas avatar Mar 26 '23 20:03 linas