
Propagate parse-confidence across relations

Open amebel opened this issue 11 years ago • 3 comments

Presently the confidence is only set for the ParseNodes and not for the relations. This was pointed out @ https://groups.google.com/d/msg/opencog/8I4oBE2dOJc/GG0lqRmmKrwJ and https://github.com/opencog/relex/pull/154#issuecomment-52879391

How should we set the stv for unary & binary relations? @bgoertzel @williampma @ruiting @rodsol @linas

amebel avatar Aug 21 '14 05:08 amebel

I dunno. We are back to the question of "context". For a given parse, we are 100% sure/confident that those relations are correct. So the ideal pipeline would take these with a confidence of 100%, perform some reasoning based on previous sentences and on common sense, and determine whether the sentence is 'consistent' with the known facts. If it is, then the parse confidence of that parse can be strengthened. If it is not, then the parse-confidence should be lowered. The process is continued for each parse, until only one (or maybe two) parses have almost all of the confidence, and the remaining ones seem very unlikely. The unlikely ones are then deleted, and the dominant one is then folded into the main knowledgebase.

However, this kind of process requires that each parse be 'isolated' into its own hypothetical universe that does not 'leak' information into the main knowledgebase. I don't think we have any wiki page that demonstrates how to do such isolated reasoning. It's also not clear how, after doing such reasoning, we should determine a single final score to indicate whether that parse is crazy or not.
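The pipeline described above can be sketched roughly as follows (hypothetical Python, not any existing RelEx/OpenCog API; `consistency_score` is a made-up stand-in for the entire per-parse reasoning step):

```python
# Sketch: relations within each parse are held at confidence 1.0 in their own
# isolated context; only the parse-level confidence is adjusted, round by
# round, until one parse dominates and the rest are deleted.

def rerank_parses(parses, consistency_score, rounds=10, threshold=0.05):
    """parses: dict mapping parse_id -> initial parse confidence (sums to ~1).
    consistency_score(parse_id): multiplier > 1 if the parse is consistent
    with the known facts, < 1 if not (stands in for the reasoning step)."""
    conf = dict(parses)
    for _ in range(rounds):
        # Strengthen or weaken each parse based on the consistency check.
        conf = {p: c * consistency_score(p) for p, c in conf.items()}
        total = sum(conf.values())
        conf = {p: c / total for p, c in conf.items()}  # renormalize
        # Delete parses that have become very unlikely.
        conf = {p: c for p, c in conf.items() if c >= threshold}
        total = sum(conf.values())
        conf = {p: c / total for p, c in conf.items()}
        if len(conf) <= 1:
            break  # the dominant parse gets folded into the knowledge base
    return conf

# Three competing parses; only the first is consistent with prior knowledge.
scores = {"p1": 1.3, "p2": 0.7, "p3": 0.5}
result = rerank_parses({"p1": 0.34, "p2": 0.33, "p3": 0.33},
                       lambda p: scores[p])
# → {'p1': 1.0}: the consistent parse ends up with all of the confidence.
```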


linas avatar Aug 23 '14 03:08 linas

In case my answer was too long: we should NOT propagate the confidence into the relations. The relations need to stay at 100% in that context.
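One way to picture that split (a hypothetical Atomese fragment, not the output of any existing RelEx code; all node names here are made up): the parse-level confidence sits on the ParseNode, while the relations extracted from that parse keep full confidence, but only inside the context of that parse.

```
; parse-level confidence lives on the ParseNode
ParseNode "sentence_1_parse_0" <0.6>

; relations stay at 100% -- but only within this parse's context
ContextLink
    ParseNode "sentence_1_parse_0"
    EvaluationLink <1>
        PredicateNode "_subj"
        ListLink
            WordInstanceNode "eat"
            WordInstanceNode "cat"
```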


linas avatar Aug 23 '14 03:08 linas

In theory, PLN should be able to do reasoning on Atoms embedded in a ContextLink, with the "leakage" into the rest of the Atomspace determined by the node probability of the node serving as context...

E.g.

ContextLink <1>
    A
    Inheritance B C

ContextLink <1>
    A
    Inheritance C D

should yield

ContextLink <1>
    A
    Inheritance B C <1>

...

regardless of whether in the overall Atomspace we have

InheritanceLink A B <.001>

InheritanceLink B C <.02>

etc.

If we have

A <.0001>

then the degree of "leakage" of

ContextLink <1>
    A
    Inheritance B C <1>

into the overall Atomspace should be very little...

If we have

A <.5>

then the leakage would be a lot, because the node probability of A implicitly indicates that 50% of the observations recorded in the Atomspace are observations of instances of A ...
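The proportionality idea above can be illustrated with a toy blend (my own simplification for illustration, not PLN's actual revision rule; `leak_context_tv` is a made-up helper):

```python
# Toy sketch of "leakage": the in-context truth value bleeds into the overall
# Atomspace in proportion to the node probability of the context node itself.

def leak_context_tv(global_tv, context_tv, context_node_prob):
    """Blend the in-context strength into the global strength, weighted by
    how probable the context node is overall."""
    return (1 - context_node_prob) * global_tv + context_node_prob * context_tv

# With A <.0001>, ContextLink A (Inheritance B C) <1> barely moves a
# global InheritanceLink B C <.02>:
leak_context_tv(0.02, 1.0, 0.0001)   # ~0.0201

# With A <.5>, half the recorded observations are instances of A,
# so the pull toward the in-context value is large:
leak_context_tv(0.02, 1.0, 0.5)      # 0.51
```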

-- Ben



bgoertzel avatar Aug 23 '14 04:08 bgoertzel