explicit-semantic-analysis

whose UTF8 encoding is longer than the max length 32766

Open MarceloBaeza opened this issue 6 years ago • 2 comments

Dear all, I appreciate your contribution to the community. I have started working with ESA, but when I run it I get the following error: "Document contains at least one immense term in field="text" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped. Please correct the analyzer to not produce such terms. The prefix of the first immense term is: '[6c 61 6c 61 6c 64 6b 6a 66 76 6e 74 75 69 76 62 79 6e 65 72 75 72 72 72 72 72 72 72 72 72]...'" I tried both changing the .bz2 file and leaving it as it was (just to see whether the file was the problem). I hope you can help me.

MarceloBaeza avatar Jun 19 '18 06:06 MarceloBaeza

Hi @MarceloBaeza, sorry for my late response and thank you for your interest in ESA.

A few answers to the following questions could enable me to reproduce the problem, or help you or someone else solve the issue without my assistance:

  • In which stage exactly did you encounter problems? Did you follow the readme, or did you find an alternative way to work with ESA (no problem with that, btw)? EDIT: Did you change the analyzer?
  • What is the exact wikipedia dump that you used? Can you give the URL?
  • Is there only one document that causes trouble or is the complete Lucene index unusable?
  • What is the document that causes problems exactly? Can you find its title? Can you retrieve it from Wikipedia? (The sketch after this list decodes the term prefix into searchable text.)
  • Are non-breaking spaces used in the document instead of regular white space? Is there anything else weird about the document?
  • What does the document look like in the dump file? Could the dump file format have changed since I first wrote ESA?
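
To make the offending term searchable, here is a minimal sketch (class and variable names are made up for illustration) that decodes the hex byte prefix from the Lucene error message into readable UTF-8 text:

```java
import java.nio.charset.StandardCharsets;

// Decodes the hex prefix reported in the Lucene error message into readable text.
public class DecodePrefix {
    public static void main(String[] args) {
        String hex = "6c 61 6c 61 6c 64 6b 6a 66 76 6e 74 75 69 76"
                   + " 62 79 6e 65 72 75 72 72 72 72 72 72 72 72 72";
        String[] parts = hex.trim().split("\\s+");
        byte[] bytes = new byte[parts.length];
        for (int i = 0; i < parts.length; i++) {
            bytes[i] = (byte) Integer.parseInt(parts[i], 16);
        }
        // Prints: lalaldkjfvntuivbynerurrrrrrrrr
        System.out.println(new String(bytes, StandardCharsets.UTF_8));
    }
}
```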

pvoosten avatar Jun 26 '18 10:06 pvoosten

Hi @MarceloBaeza,

the prefix you mention decodes to lalaldkjfvntuivbynerurrrrrrrrr... in UTF-8. That doesn't look like meaningful text to me...
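
If you can't find that text via Wikipedia's search, a sketch like the following could locate it in the dump (assuming Apache Commons Compress is on the classpath and the dump is the usual pages-articles XML; the class name and argument handling are illustrative). It scans the .bz2 file line by line for the decoded prefix and reports the title of the enclosing page:

```java
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import org.apache.commons.compress.compressors.bzip2.BZip2CompressorInputStream;

public class FindOffendingPage {
    public static void main(String[] args) throws Exception {
        String dumpPath = args[0];               // path to the *.xml.bz2 dump
        String needle = "lalaldkjfvntuivbyneru"; // decoded prefix of the immense term

        String lastTitle = null;
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(
                new BZip2CompressorInputStream(new FileInputStream(dumpPath)),
                StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                // Track the most recent <title> so a hit can be attributed to a page.
                int start = line.indexOf("<title>");
                if (start >= 0) {
                    int end = line.indexOf("</title>", start);
                    lastTitle = end >= 0 ? line.substring(start + 7, end)
                                         : line.substring(start + 7);
                }
                if (line.contains(needle)) {
                    System.out.println("Found in page: " + lastTitle);
                    return;
                }
            }
        }
        System.out.println("Prefix not found in the dump.");
    }
}
```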

EDIT The exception is thrown by Lucene because a single term exceeded the hard-coded term length limit of 32766 bytes. More details here on StackOverflow. The answers there suggest that the analyzer does produce such very long terms; you should find out whether that is really the case.
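
A minimal way to check (a sketch, assuming the problem document's text is available as a String; StandardAnalyzer stands in here only as an example, substitute whatever analyzer you actually configured for ESA) is to run the text through the analyzer and measure each token's UTF-8 length:

```java
import java.nio.charset.StandardCharsets;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class CheckTermLengths {
    // Lucene's hard-coded per-term limit in bytes (IndexWriter.MAX_TERM_LENGTH).
    private static final int MAX_TERM_BYTES = 32766;

    public static void main(String[] args) throws Exception {
        String text = "...";                       // the document that triggers the error
        Analyzer analyzer = new StandardAnalyzer(); // replace with the analyzer ESA uses
        try (TokenStream ts = analyzer.tokenStream("text", text)) {
            CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
            ts.reset();
            while (ts.incrementToken()) {
                int byteLen = term.toString().getBytes(StandardCharsets.UTF_8).length;
                if (byteLen > MAX_TERM_BYTES) {
                    System.out.println("Immense term (" + byteLen + " bytes), prefix: "
                            + term.toString().substring(0, 30) + "...");
                }
            }
            ts.end();
        }
    }
}
```

If this prints anything, the analyzer really does emit overlong tokens for that document. The usual workaround in those StackOverflow answers is to drop or truncate such tokens (e.g. with Lucene's LengthFilter), but it is worth first understanding why the document produces them.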

pvoosten avatar Jun 26 '18 10:06 pvoosten